The Myth of Experience
Why We Learn the Wrong Lessons, and Ways to Correct Them
By Emre Soyer and Robin M. Hogarth
“If history repeats itself, and the unexpected always happens, how incapable must Man be of learning from experience!”
—GEORGE BERNARD SHAW
Man and Superman
STORIES THAT LIE
When Experience Becomes an All-Too-Simple Narrative
CONSIDER THE FOLLOWING SHORT FILM.
On the screen is a circle, motionless and slightly to the right of center. A triangle then enters from the left and slides toward the circle. When the two shapes meet in the middle of the screen, the triangle stops moving, and the circle starts sliding toward the right. It eventually falls outside the screen. The triangle remains on the screen, motionless. The end.
Please think about what just happened. What could this sequence of events represent?
During some of our talks and workshops, we show this film and ask the audience members this question. They usually react, surprisingly quickly, with a wide range of responses. A few see and interpret the film rather literally: “A triangle pushes a circle out of the picture.”
Other responses are more colorful and metaphorical:
“Change is inevitable.”
“Solutions evolve to fit problems better.”
“Order wins over chaos.”
“Those who have a clear set of principles trump those who don’t.”
“Reason beats emotion.”
After collecting numerous responses, we ask a follow-up question: What will happen next? Once again, we get many answers, but now most are based on the previous ones:
“A square will come and push away the triangle.”
“The return of the circle… It will come back and take revenge.”
“A better solution will replace this one… perhaps it will have more colors.”
“Emotion will eventually prevail.”
This exercise is based on the work of psychologists Fritz Heider and Marianne Simmel, themselves inspired by the psychologist Albert Michotte. They would show audiences animations of simple shapes moving around and then study people’s perceptions.1
The story of the triangle and the circle reveals important clues about the way we learn from experience.
First, experiences quickly become stories. People are able to effortlessly generate narratives based on their observations, often linking their interpretations to their previous experiences, beliefs, and knowledge.
Second, the chronology of events often leads people to perceive cause-and-effect relationships. The film simply shows two objects moving in various ways, but based on the sequence of events, viewers quickly conclude that one thing causes the other to move and ultimately to fall.
Third, people can easily use a perceived story to generate a prediction of what will happen next. Because one object pushed the other, now it’s time for the latter to take revenge on the former. Or because change is inevitable, the newcomer will be replaced in turn by something else. The content of the initial story paves a path for guesses and expectations regarding what lies ahead.
Hence, this little exercise suggests that we humans are able to quickly and proficiently generate stories based on our experience and then use them in our future judgments about the situation. This is a rather complex task, yet we excel at it. Perhaps we’ve become so good at storytelling, in part, because stories provide us with such powerful, valuable tools for dealing with experience.
Stories help us understand our experience. They provide a way to attach meaning to complicated yet important events that affect our lives. They allow us to create order out of chaos.
Stories help us remember our experience. Memorizing a list of a dozen words or concepts and remembering their order sometime later would be difficult. But if we connect them through a unifying story, they are easier to recall when needed.
Stories help us communicate our experience. We can convey them easily to others, making sure that the learning becomes collective. We can also learn from others’ experience through their stories.
Stories help us predict the future based on our experience. We can use them to educate our guesses about what will take place at a later time. Stories about the past and the present shape the ones about the future.2
In Sapiens, historian Yuval Noah Harari emphasizes the importance of the human ability to create, believe, and spread stories for our dominance as a species on earth. Stories helped us collaborate, defeat our adversaries, survive deadly dangers, build massive cities, maintain complex systems, and invent new things. Countries, for example, are built on and supported by stories that vividly encapsulate the shared experiences of their fellow citizens. By contrast, failing to see a story can deprive us of valuable lessons, collaborations, and opportunities.3
Given that stories helped us evolve to become what we are today, one can argue that we are programmed to see stories automatically in what we experience.
Great. What can go wrong?
Unfortunately, our storytelling proclivity can also create serious problems. If our perception of events is shaped by filters, distortions, missing details, and irrelevant information, then the stories we generate would be too simplistic and unrealistic to capture the nuances of the actual situation—or to prepare us adequately for the future. Such misleading stories, however, may still be influential and durable. In Human, All Too Human, philosopher Friedrich Nietzsche argues that “partial knowledge is more often victorious than full knowledge: it conceives things as simpler than they are and therefore makes its opinion easier to grasp and more persuasive.”4
The nature of history itself makes partial knowledge inevitable. When we learn from history, we get to observe the unfolding of just one of many possible outcomes. And what occurred might not even be the most probable version. When it comes to learning from history, the learning environment is, by definition, wicked. In Everything Is Obvious: Once You Know the Answer, sociologist Duncan Watts warns that “when we look to the past, we see only the things that happened—not all the things that might have happened but didn’t—and as a result, our commonsense explanations often mistake for cause and effect what is really just a sequence of events.”5
What we see isn’t necessarily all there is.
In the story of the triangle and the circle, for example, there may be more than what meets the eye. It may be that the departure of the circle is what prompts the triangle to arrive, so the latter event is actually causing the former. The story could also be probabilistic: one event leads to the other only some of the time, and we simply observed a particular sequence where it happened. Or maybe there is no causation but only correlation: one event happens after the other but not because of it.
To make things even more complicated, what if there’s no meaningful story? No pushing, no pulling, no pattern, no lesson, no cause, no effect, nothing to predict. What if events from our experience, or from the history we learn, have a large element of randomness? We humans run the real risk of seeing stories when none really exist.
When we interpret random events as meaningful, psychologists say that we are under the spell of a clustering illusion. Author and skeptic Michael Shermer refers to “our tendency of seeing meaningful patterns in meaningless noise” as “patternicity,” while psychiatrist Klaus Conrad named it “apophenia.” Generating elaborate stories based on randomness gets a name too; applied statistician and author Nassim Taleb calls it the “narrative fallacy.”6
It’s much easier for us to write a story based on our experience than to ignore it. And when operating under complexity and uncertainty, it’s awfully easy to write the wrong story. We can thus inadvertently compose, learn from, believe in, act on, and tell others stories that either don’t exist or that are severely inaccurate. And once we’re hooked on a particular story, it can be hard to change our minds. The lessons learned can stick and determine what we do next.
In the case of bloodletting, for instance, a faulty belief about the cause of illnesses led to the specific treatment. Our subsequent experience, which seemingly reflected an illusory cause-and-effect relationship, reinforced and propagated that wrong story for a long time. On occasion, it drove us to take things too far, harming patients in need and people we loved.
If not handled with care, our experience can make us believe in the wrong causes, expect unrealistic consequences, evaluate performances inadequately, make bad investments, reward or punish the wrong people, and fail to prepare us for future risks. Worse, we may not even notice that we are acting upon faulty stories and fail to revise them in a timely and appropriate way. As a result, we may end up solving the wrong problems, using inadequate methods, and failing to achieve our objectives.7
Only by acknowledging these potential weaknesses and going beyond the available experience can we identify mechanisms to help us develop more accurate representations of many complicated situations we face. We can even use our storywriting prowess to our advantage by considering these as theories to be questioned and improved, rather than actionable truths, no matter how compelling they may sound. A timely and healthy skepticism regarding our experience-based stories can help us judge which causal links are stronger than others and which may be absent altogether.
Throughout this book, we will feature a wide variety of stories that lie, leading to an illusion of learning as we tackle important decisions in different domains of life. Let us start in this chapter with a few specific examples that will lay the foundation for the chapters to come.
Stories That Discount Randomness
The following events all happened in 2015.8
Serena Williams, one of the all-time tennis greats and the world’s number-one women’s player, was expected to win what’s called a “calendar Grand Slam.” This happens when a player wins all four of the Grand Slam tournaments in the same year: the Australian Open, the French Open, Wimbledon, and the US Open. It’s an exceptional accomplishment, and Williams was dominating the tour at the time. So, she was prominently featured on the cover of Sports Illustrated’s August 31 issue, under the headline “THE SLAM: All Eyes on Serena.”
Then the unexpected happened. In a thrilling semifinal match at the US Open tournament in September, heavy favorite Williams lost to unseeded Roberta Vinci.
Baseball star Daniel Murphy was having a great season for the New York Mets, where he “set a major league record with homers in six straight postseason games and was batting .421 with seven home runs and 11 RBI through 9 games headed into the World Series.” So, Murphy was promptly featured on the cover of Sports Illustrated’s November 2 issue. The headline dubbed him “The Amazin’ Murph.”
Then the unexpected happened. Murphy’s batting performance declined, and he missed a groundball in the eighth inning of the pivotal fourth game of the World Series, which was reportedly one reason that the Mets lost the series to the Kansas City Royals.
Actor Will Smith was nominated for a Golden Globe for his portrayal of forensic pathologist Dr. Bennet Omalu, who had discovered a link between football and brain damage. As this is an important development for the sports world, Sports Illustrated featured Smith and the film Concussion on its cover in the December 28 issue, bearing the headline, “Will Smith shines a light on football’s darkest corner and the future of America’s game.”
Then the unexpected happened. Despite the popularity of the film and the praise Smith received for his performance, he was not nominated for an Oscar. In fact, the film was completely excluded from the nominations list for the prestigious event.
All of these incidents happened within a few months and featured a similar sequence of events: Person produces a great performance. Person appears on the cover of Sports Illustrated. Person’s performance suffers. What emerges is a simple story: the person was jinxed by the cover. The magazine caused the failure.
The cover of Sports Illustrated is thus cursed, and something bad will happen soon to any athlete or team featured on it. Belief in this so-called Sports Illustrated cover jinx is based on this recurring sequence of events, which sports fans have, over the years, repeatedly observed. The most up-to-date resource for the extent of the jinx is its Wikipedia entry, which chronicles hundreds of cases. The magazine itself also explored the phenomenon in its January 2002 issue, which revealed that 37 percent of the covers up to that point (913 out of 2,456 since the first issue in August 1954) had indeed been followed by a substantial decline in performance.9
This is a case where it’s particularly easy to generate a simple narrative based on experience. When an athlete or team fails to meet expectations, fans are desperate to understand exactly what happened. After a while, they notice that prior to many disappointments, a magazine cover had featured the athlete or the team in question. There has to be a connection. Perhaps the athlete or the team could not cope with the pressure of being publicly named great. Maybe they were distracted by the heightened media scrutiny and the increased adulation from fans. Maybe they got complacent and stopped working hard.
All these scenarios are possible. But let’s give those on the cover some credit. Pro athletes work hard to achieve success. Certainly, not many of them would bend that much under pressure and get spoiled that easily. The question, Why does going on the cover of Sports Illustrated lead to a decline in performance? may not be the correct one to ask. Instead, it would be wiser to begin the investigation into the phenomenon by asking: When does an athlete or a team go on the cover of that magazine? The answer: When they are at the very top of their game!
If the magazine does its job right, many of those on the cover would be the best of the best at that moment. The headlines they’ve been earning by their remarkable achievements have propelled them to that status. There would be little room to improve beyond that extreme point. And if their remarkable achievement is a combination of skill and some events that are outside their control (as it always is in sports and many other walks of life), then there is a good chance that performance will decline toward a relatively more “normal” level the next time around. And although this decline might happen soon after their appearance on the cover, it wouldn’t be caused by it—it would merely be part of the natural course of events.
In fact, it would have been a bad sign for Sports Illustrated if many of those featured on the cover did not subsequently do worse. That would mean that the editors were not really doing a good job in identifying and singling out the best performers at their own best. Here, we are dealing not with a jinx but mainly with the statistical phenomenon of regression to the mean, or, as Sports Illustrated itself described it, “water seeking its own level.”10
Of course, one may argue that no harm is done by the existence of such an urban legend. After all, this is a peculiar faulty story about certain rare and extreme events limited to the world of sports.11
But this example actually represents a much bigger and widely prevalent phenomenon. Regression to the mean exists in all domains and situations where an outcome is partially determined by luck or random events. The greater the role luck plays in an extreme outcome, the greater the likelihood that events will soon revert to more normal levels. This is, of course, true for both positive and negative extremes. And we care deeply about extreme events across many domains of life—which means that our tendency to overlook the phenomenon of regression to the mean often leads us to craft flawed stories that misidentify the causes and overstate their effects on the outcomes we experience.
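The selection effect behind the “jinx” is easy to see in a simulation. The sketch below, with made-up numbers (a stable skill component plus fresh luck each season; none of these figures come from the book), picks the top 1 percent of performers in one season and checks how that same group fares the next season. Their average drops with no jinx involved, simply because the luck that helped put them on top is redrawn.

```python
# Minimal regression-to-the-mean sketch. All numbers are illustrative
# assumptions: observed performance = stable skill + season-specific luck.
import random

random.seed(42)

N = 10_000
skills = [random.gauss(50, 10) for _ in range(N)]  # stable ability per athlete

def season(skills):
    # Luck is redrawn every season, independently of skill.
    return [s + random.gauss(0, 10) for s in skills]

year1 = season(skills)
year2 = season(skills)

# "Cover athletes": the top 1% of performers in year 1.
cover = sorted(range(N), key=lambda i: year1[i], reverse=True)[: N // 100]

avg_year1 = sum(year1[i] for i in cover) / len(cover)
avg_year2 = sum(year2[i] for i in cover) / len(cover)

print(f"cover athletes, year 1 average: {avg_year1:.1f}")
print(f"cover athletes, year 2 average: {avg_year2:.1f}")  # lower, without any jinx
```

The year-two average sits well above the population mean (the cover athletes really are skilled) but well below their year-one peak: exactly the “water seeking its own level” pattern the magazine described.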
In medicine, for example, if a drug or treatment is mostly administered in extreme cases, its curative effects could be overestimated. Bloodletting, in fact, may have benefited from regression effects if some patients opted for it when they felt really bad. Their conditions would have improved after a while anyway, but bleeding got most of the credit along the way. The same is also true for many alternative health therapies, sometimes dubbed snake oil remedies. And, as in the case of the Sports Illustrated cover jinx, the right question to ask wouldn’t be Why does snake oil heal? but When would one use snake oil?12
Similarly, regression to the mean may cause consultants, in any context, to get more credit than they deserve, especially if they are consulted when the performance of their clients is uncharacteristically poor. Part of the subsequent improvement would be thanks to them and part due to luck, but they would often receive the whole glory.
Any type of evaluation would be incomplete without a consideration of possible regression effects. Suppose a company gives bonuses to its best performers and penalizes its worst. Makes sense… but regression to the mean would ensure that some of the best will do worse and some of the worst will get better next time, independent of rewards and penalties. Taken at face value, that experience would falsely reinforce the belief that penalties work much better to motivate people, while bonuses are detrimental.13
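The bonus-and-penalty illusion above can be reproduced the same way. In this sketch (again with assumed numbers, not data from the book), an evaluator rewards the top decile and penalizes the bottom decile after one quarter; in the next quarter, the penalized group improves and the rewarded group declines on average, even though neither intervention has any effect in the model.

```python
# Sketch of the reward/penalty illusion under regression to the mean.
# Performance = skill + luck; the "interventions" do nothing, yet penalties
# appear to work and bonuses appear to backfire. All figures are assumptions.
import random

random.seed(7)

N = 10_000
skills = [random.gauss(100, 15) for _ in range(N)]

def quarter(skills):
    # Fresh luck each quarter, independent of any bonus or penalty.
    return [s + random.gauss(0, 15) for s in skills]

q1, q2 = quarter(skills), quarter(skills)

order = sorted(range(N), key=lambda i: q1[i])
penalized = order[: N // 10]        # worst 10% in quarter 1
rewarded = order[-(N // 10):]       # best 10% in quarter 1

def avg_change(group):
    return sum(q2[i] - q1[i] for i in group) / len(group)

print(f"penalized (worst 10%) average change: {avg_change(penalized):+.1f}")
print(f"rewarded  (best 10%)  average change: {avg_change(rewarded):+.1f}")
```

Taken at face value, the output looks like evidence that penalties motivate and bonuses spoil; in fact it is pure selection on luck.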
The so-called Peter principle is partly due to regression to the mean as well. It asserts that people “are promoted to their level of incompetence.” If people get promoted when their job performance is at or near its best, their subsequent performance is likely to decline due to regression effects.14
An analogous situation occurs in leadership changes, too. Managers or administrators who experience a streak of bad performance may be replaced—and the organization’s performance may subsequently improve. Yet was this improvement purely because of that change? What if the decline was partly due to particularly unlucky circumstances? Because we never see what would have happened had the change not occurred, this alternative explanation is rarely considered.15
In The Drunkard’s Walk, physicist Leonard Mlodinow offers examples from the movie industry, where producers have been fired because they successively selected several films that did not perform well. Upon the arrival of the new executive, the studio’s improvement in performance is considered proof that indeed it was previous management that caused the slump and that the new one was a good choice. Ironically, however, this perception occurs even when the subsequent successes were actually selected by the fired executive and already in the pipeline.16
Speaking of the film industry, movie sequels are also cursed by regression effects. It shouldn’t really be a surprise if the second installments that are made after an original blockbuster get relatively worse ratings. The bigger the first hit, the more “cursed” will be the follow-up. This doesn’t mean that a sequel would be objectively bad, of course, and data suggest that movie sequels tend to generate substantial returns. Hence, there’s no risk that we’ll have a shortage of them anytime soon.17
Ultimately, a faulty understanding of regression to the mean based on experience prompts us to generate the wrong story, which leads us to misplace blame and praise. Why, then, don’t we hear much about such possible regression effects around us? Despite its prevalence, the concept is missing from most classroom discussions and journalistic analyses. We never see a headline in the sports or finance section exclaim, “Performance Is Down: Regression to the Mean Strikes Again!”
This is partly because accepting regression effects means assigning a crucial role to chance in important outcomes we experience. We humans aren’t comfortable with randomness. We are reluctant to admit that we don’t have complete control over the results we work hard or pay good money to obtain. Instead, we make up stories to explain both good performance and bad, hoping that this will enable us to consistently obtain the first and avoid the second.
Discounting the role of chance and reading too much into random fluctuations is only one way stories may mislead. There are more ways in which our experience-based narratives can be at odds with the complex underlying causality. And these don’t need to involve extreme events at all.
Stories That Warp Time
When we plant a seed, we don’t expect to harvest the fruit immediately. We have to plan things ahead, make the investment, stay the course, and wait for the outcome. We know it’ll take time.
When we go to school, we don’t get the returns immediately. We have to think about what we’d like to achieve, use our knowledge, make the investment, and build toward an outcome. Education and its benefits, too, take time.
If we wish to have a healthy body, we don’t get to achieve it immediately. We have to eat well, commit to regular exercise, make the investment, and gradually reach a certain level of desired fitness. Good health also takes time.
Almost everything worthwhile in life takes will and effort—as well as time. And when a treatment or intervention requires a considerable and uncertain time to show its effects, it becomes easier to get confused about what really caused what.
The economy, for instance, is a complex system where processes are imperfectly understood and often take longer than we’d like to show their effects. If measures are taken by the government and other agencies to improve economic conditions, their consequences won’t likely present themselves immediately.
What’s more, many policies may need a costly upfront investment to yield desired results later, leading to “worse-before-better” dynamics. For example, if the government seeks to reduce unemployment by adjusting its education policies, any possible consequences of today’s costly actions would not be seen for some time. Policy makers and managers plant the seeds, and the fruits take time to grow. Yet, we’re often inclined to simplify the story, skipping impatiently over the time required for real change to happen and drawing from experience a story that is drastically shortened and therefore oversimplified.18
As a result, we risk reaching false conclusions. If an action does not promptly produce a certain projected consequence, it may be deemed ineffective. When an outcome does emerge, we tend to attribute it to actions taken recently. A newly elected politician can thus easily claim credit for positive changes in economic or social conditions that were in fact initiated by actions of previous administrations. The same goes for a newly hired executive in any type of organization. In fact, the case of movie executives fired (or rewarded) primarily based on the current situation is also an apt example of stories that warp time. We end up making faulty choices if we don’t recognize the time it takes for water to find its own level and for extreme situations to regress to the mean.
Generating stories with this sort of flawed sense of time raises a barrier against a better tomorrow. People in positions of authority are usually aware of the human tendency to learn from immediate experience—many of them possibly share it and may act accordingly. To secure their positions and statuses, they are often incentivized to opt for quick fixes that produce fast and predictable results, even though longer-term solutions may be more desirable.
Unless we learn to have our stories accurately reflect the element of time necessary to grow things, we shouldn’t really be surprised if we reward and then get stuck with inadequate strategies, time and time again. This may even lead those with executive power to gradually become more shortsighted, as they learn to take advantage of our fallible story-generating prowess.
Like randomness, the time delay between a cause and its effect leads us to embrace false stories that prove to be unhelpful guides to future decisions. But the story doesn’t end there…
Stories That Overgeneralize
What images, characteristics, and emotions immediately emerge in your mind as you read these words?
People from [insert a country name here].
People who are [insert a profession here].
Chances are, you didn’t have to spend much time and effort to generate a short story for each.
Our intuition likes to save time and energy, so it tends to categorize things and to construct simple stories about each of these categories. The resulting stereotypes often feature a set of images, a list of characteristics, and a mix of emotions attached to them.19
Many of the stereotypes our brains contain have been imported from the broader culture—from stories, beliefs, assumptions, and attitudes we encounter in our families, in our communities, in our schooling, and in the media. But personal experience is also a stereotype-generating machine. It only takes us one or a few encounters with a particular category of people or a particular type of situation to reach conclusions, develop overarching stories, and incorporate them into our view of reality.
Stereotypes can indeed prove useful. As social psychologist Lee Jussim reports in Social Perception and Social Reality, many generalizations can be statistically accurate and guide predictions appropriately when time and information are scarce.20
Problems arise, however, when stereotypes based on limited personal experience gloss over relevant nuances and lead to absolute conclusions. Experience also doesn’t effectively warn us when stereotypes are unreliable, only partially true, purely subjective, or obsolete. Worse, once stereotypes based on faulty stories take hold, more experience can trap us in them, sometimes against our own best interests.21
For example, author Malcolm Gladwell discusses in Blink the career problems long faced by female classical musicians. Historical practices, unchallenged traditions, and the personal views of decision makers created powerful stereotypes defining the characteristics of orchestral instrumentalists—many orchestra leaders believed that only men could play well enough. As a result, orchestras rarely hired women performers. As economists Claudia Goldin and Cecilia Rouse observe: “Not only were their numbers extremely low until the 1970’s, but many music directors, ultimately in charge of hiring new musicians, publicly disclosed their belief that female players had lower musical talent.”22
For centuries, as a result, many women were denied the opportunity to perform music in public, and many more were likely discouraged from becoming musicians in the first place. And by excluding a specific talent pool, orchestras ended up reducing their ability to achieve their own objective, which was to assemble the talent needed to perform music of the highest possible quality.