Actually, those experts are a big reason we’re in this mess. And, according to acclaimed business and science writer David H. Freedman, such expert counsel usually turns out to be wrong — often wildly so. Wrong reveals the dangerously distorted ways experts come up with their advice, and why the most heavily flawed conclusions end up getting the most attention, all the more so in the online era.
But there’s hope: Wrong spells out the means by which every individual and organization can do a better job of unearthing the crucial bits of right within a vast avalanche of misleading pronouncements.
Also by David H. Freedman
A Perfect Mess: The Hidden Benefits of Disorder—How Crammed
Closets, Cluttered Offices, and On-the-Fly Planning Make the World
a Better Place (with Eric Abrahamson, 2007)
Corps Business: The 30 Management Principles of the
U.S. Marines (2000)
At Large: The Strange Case of the World’s Biggest Internet Invasion
(with Charles C. Mann, 1998)
Brainmakers: How Scientists Are Moving Beyond Computers to
Create a Rival to the Human Brain (1995)
Copyright © 2010 by David H. Freedman
All rights reserved. Except as permitted under the U.S. Copyright Act of 1976,
no part of this publication may be reproduced, distributed, or transmitted in any
form or by any means, or stored in a database or retrieval system, without the
prior written permission of the publisher.
Little, Brown and Company
Hachette Book Group
237 Park Avenue, New York, NY 10017
First Edition: June 2010
Little, Brown and Company is a division of Hachette Book Group, Inc.
The Little, Brown name and logo are trademarks of Hachette Book Group, Inc.
If he is weak in the knees, let him not call the hill steep.
—HENRY DAVID THOREAU
Success consists of going from failure to failure without
loss of enthusiasm.
I’m sitting in a coffee shop in a pediatric hospital in Boston, hard by a nine-foot-tall bronze teddy bear, with a man who is going to perform a surprising trick. I’m thinking of an article recently published in a prestigious medical journal, an article that reports the results of a research study, and he will tell me whether the study is likely to turn out to be right or wrong. It’s the sort of study that your doctor might read about, and that you might learn about from a newspaper, website, or morning TV news show. It may well be that the results of this study will change your life—they might convince you to start eating or avoiding certain foods to lower your risk of heart disease, or to take a certain drug to help you beat cancer, or to learn whether or not you are carrying a gene linked to vulnerability to a mental illness. But this man won’t need to hear any of the particulars of the study to perform his feat. All he needs to know is that it was a study published in a top journal.
His prediction: it’s wrong. It’s a prediction that strikes at the foundation of expertise and our trust in it.
The man is John Ioannidis, a doctor and researcher whose specialty is calculating the chances that studies’ results are false. For someone dedicated to spotlighting the inadequacies of his colleagues’ lifework, Ioannidis is pleasant, polite, and soft-spoken, even if he discreetly radiates the fidgety energy of someone who habitually packs too much into his day. He looks young for a man heading into his midforties, with a slight build, a wavy mop of fine, dark hair, and a thin mustache. Also a bit surprising about Ioannidis is that he is highly regarded by his peers. Communities usually find ways to marginalize those who expose their flaws, but the world of medical research, in which extraordinary talent and effort are prerequisites for attaining even the lowest rungs of recognition, has kept Ioannidis in demand via the field’s standard trappings of success: prestigious appointments, including one at the world-class Tufts–New England Medical Center and another at the University of Ioannina Medical School in his native Greece; frequent citations by colleagues of his work, some of which has been published in the field’s top journals; and a stream of invitations to speak at conferences, where he is generally a big draw.
There’s no standard career path to becoming a deconstructor of wrongness, and Ioannidis took a roundabout route to it. Born in 1965 in the United States to parents who were both physicians, he was raised in Athens, where he showed unusual aptitude in mathematics and snagged Greece’s top student math prize. By the end of college, he seemed on track for a career as a mathematician. But he had come to feel the family pull of medicine and, not wanting to turn his back on math, decided to combine the two and become a medical mathematician. “I didn’t know exactly what such a thing might be,” he says, “but I felt sure there was some important component of medicine that was mathematical.” He graduated first in his class at the University of Athens Medical School, then shipped off to Harvard for his residency in internal medicine, followed by a research and clinical appointment at Tufts in infectious diseases. The math had to this point remained in the background, but in 1993, while at Tufts, he saw his chance to even things up a bit. There was growing interest in the new field of “evidence-based medicine”—that is, trying to equip physicians to do not merely what they had been taught to assume would help patients but what had been rigorously proven in studies would help patients. “Amazingly, most medical treatment simply isn’t backed up by good, quantitative evidence,” says Ioannidis—news that would likely come as a surprise to most patients. Distilling this sort of knowledge out of a chaos of patient data often requires more statistical-analysis firepower than clinical researchers can bring to bear, providing an opening for Ioannidis to make a mark.
Carrying his new interest to joint appointments at the National Institutes of Health and Johns Hopkins in the mid-1990s, Ioannidis began to look for interesting patterns in those medical-journal studies that explore how patients fare with certain treatments. Such studies are essentially the coin of the realm when it comes to communicating solid evidence of treatment effectiveness to physicians. A good doctor, it is presumed, scans the journals for the results of these studies to see what works and what doesn’t on which patients, and how well and with what risks, modifying her practices accordingly. Does it make sense to prescribe an antibiotic to a child with an ear infection? Should middle-aged men with no signs of heart disease be told to take a small, daily dose of aspirin? Do the potential benefits of a particular surgical intervention outweigh the risks? Studies presumably provide the answers. In examining hundreds of these studies, Ioannidis did indeed spot a pattern—a disturbing one. When a study was published, often it was only a matter of months, and at most a few years, before other studies came out to either fully refute the findings or declare that the results were “exaggerated” in the sense that later papers revealed significantly lesser benefits to the treatment under study. Results that held up were outweighed two-to-one by results destined to be labeled “never mind.”1
What was going on here? The whole point of carrying out a study was to rigorously examine a question using tools and techniques that would yield solid data, allowing a careful and conclusive analysis that would replace the conjecture, assumptions, and sloppy assessments that had preceded it. The data were supposed to be the path to truth. And yet these studies, and most types of studies Ioannidis looked at, were far more often than not driving to wrong answers. They exhibited the sort of wrongness rate you would associate more with fad-diet tips, celebrity gossip, or political punditry than with state-of-the-art medical research.
The two-out-of-three wrongness rate Ioannidis found is worse than it sounds. He had been examining only the less than one-tenth of one percent of published medical research that makes it to the most prestigious medical journals.* In other words, in determining that two-thirds of published medical research is wrong, Ioannidis is offering what can easily be seen as an extremely optimistic assessment. Throw in the presumably less careful work from lesser journals, and take into account the way the results end up being spun and misinterpreted by university and industrial PR departments and by journalists, and it’s clear that whatever it was about expert wrongness that Ioannidis had stumbled on in these journals, the wrongness rate would only worsen from there.
Ioannidis felt he was confronting a mystery that spoke to the very foundation of medical wisdom. How can the research community claim to know what it’s doing, and to be making significant progress, if it can’t bring out studies in its top journals that correctly prove anything, or lead to better patient care? It was as if he had set out to improve the battle effectiveness of a navy and immediately discovered that most of its boats didn’t float. Nor did the problems appear to be unique to medicine: looking at other branches of science, including chemistry, physics, and psychology, he found much the same. “The facts suggest that for many, if not the majority, of fields, the majority of published studies are likely to be wrong,” he says. Probably, he adds, “the vast majority.”
Medical and other scientific expertise aren’t exactly the bottom of the barrel when it comes to expert wisdom. Yes, much-heralded drugs get yanked off the market, we get conflicting advice about what to eat, and toxic chemicals make their way into our homes. But you don’t have to dig far in pretty much any other field to see similar, or worse, arrays of screwups. I could fill this entire book, and several more, with examples of expertise gone wrong—not only in medicine but in physics, finance, child raising, the government, sports, entertainment, and on and on. (Just for fun, I’ve stuck a small sampling in Appendix 1.) The fact is, expert wisdom usually turns out to be at best highly contested and ephemeral, and at worst flat-out wrong.
Of course, compiling anecdotes and quoting experts about expertise doesn’t prove that experts usually mislead us.* Actually, proving expert wrongness isn’t really the point of this book. I’ve found that most people don’t need much convincing that experts are usually wrong. How could we not suspect that to be the case? We constantly hear experts contradict one another and even themselves on a vast range of issues, whether they’re spouting off on diets, hurricane preparedness, the secrets to being a great manager, the stock market, cholesterol-lowering drugs, getting kids to sleep through the night, the inevitability of presidential candidates, the direction of home values, the key to strong marriages, vitamins, the benefits of alcohol or aspirin or fish, the existence of weapons of mass destruction, and so on. As the world watched its financial institutions and economies teetering and in some cases collapsing in 2008 and 2009, many found it maddening that the great majority of financial experts, from those who advise heads of state to those who advise working stiffs, not only failed to foresee the trouble but in many cases specifically took to the airwaves to counsel that there wasn’t much to worry about, and in general failed to have anything consistent and helpful to say about the problems. We can all agree that there is a growing obesity epidemic, but it sometimes seems as if no two experts agree on what works when it comes to losing the excess weight. And those of us who hope to see our children’s schools improve can choose between experts who say that the curricula need to be less rigid and test-oriented, and experts who say precisely the opposite. If anything, we live in a time of acute frustration with experts, even as many of us remain dependent on them and continue to heed their advice.
Putting trust in experts who are probably wrong is only part of the problem. The other side of the coin is that many people have all but given up on getting good advice from experts. The total effect of all the contradicting and shifting pronouncements is to make expert conclusions at times sound like so much blather—a background noise of modern life. I think by now most of us have at some point caught ourselves thinking, or at least have heard from people around us, something along these lines: Experts! One day they say vitamin X / coffee / wine / drug Y / a big mortgage / baby learning videos / Six Sigma / multitasking / clean homes / arguing / investment Z is a good thing, and the next they say it’s a bad thing. Why bother paying attention? I might as well just do what I feel like doing. Do we really want to just give up on expertise in this way? Even if experts usually fail to give us the clear, reliable guidance we need, there are still situations, as we’ll see, where failing to follow their advice can be self-defeating and even deadly.
So I’m not going to spend much time trying to convince you that experts are often, and possibly usually, wrong. Instead, this book is about why expertise goes wrong and how we may be able to do a better job of seeking out more trustworthy expert advice. To that end, we’re going to look at how experts—including scientists, business gurus, and our other highly trusted sources of wisdom—fall prey to a range of measurement errors, how they come to have deep biases that lead them into gamesmanship and even outright dishonesty, and how interactions among them tend to worsen rather than correct for these problems. We’re also going to examine the ways in which the media sort through the flow of dubious expert pronouncements and further distort them, as well as how we ourselves are drawn to the worst of this shoddy output, and how we end up being even more misled on the Internet. Finally, we’ll try to extract from everything we’ve discovered a set of rough guidelines that can help to separate the most suspect expert advice from the stuff that has a better chance of holding up.
As I said, most people are quite comfortable with the notion that there’s a real problem with experts. But some—mostly experts—do object to that claim. Here are the three objections I encountered most often, along with quick responses.
(1) If experts are so wrong, why are we so much better off now than we were fifty or a hundred years ago? One distinguished professor put it to me this way in an e-mail note: “Our life expectancy has almost doubled in the past seventy-five years, and that’s because of experts.” Actually, the vast majority of that gain came earlier in the twentieth century from a very few sharp improvements, and especially from the antismoking movement. As for all of the drugs, diagnostic tools, surgical techniques, medical devices, lists of foods to eat and avoid, and impressive breakthrough procedures and technologies that fill medical journals and trickle down into media reports, consider this: between 1978 and 2001, according to one highly regarded study,2 U.S. life spans increased by fewer than three years on average—when the drop in smoking rates slowed around 1990, so did life-expectancy gains. It’s hard to claim we’re floating on an ocean of marvelously effective advice from a range of experts when we’ve been skirting the edges of a new depression, the divorce rate is around 50 percent, energy prices occasionally skyrocket, obesity rates are climbing, children’s test scores are declining, we’re forced to worry about terrorist and even nuclear attacks, 118 million prescriptions for antidepressants3 are written annually in the United States, chunks of our food supply periodically become tainted, and, well, you get the idea. Perhaps a reasonable model for expert advice is one I might call “punctuated wrongness”—that is, experts usually mislead us, but every once in a while they come up with truly helpful advice.
(2) Sure, experts have been mostly wrong in the past, but now they’re on top of things. In mid-2008 experts were standing in line to talk about the extensive, foolproof controls protecting our banks and other financial institutions that weren’t in place in the late 1920s—just before those institutions started collapsing. Cancer experts shake their heads today over the ways in which generations of predecessors wasted decades hunting down the mythical environmental or viral roots of most cancers, before pronouncing as a sure thing the more recent theory of how cancer is caused by mutations in a small number of genes—a theory that, as we’ll see, has yielded almost no benefits to patients after two decades. Most everyone missed what was happening to our climate, or even spoke of a global cooling crisis, until we came to today’s absolutely certain understanding of global warming and its man-made causes—well, we’ll see how that turns out. How could we have been so foolish before? And what sort of fool would question today’s experts’ beliefs? In any case, the claim that we’ve come from wrong ideas to right ideas suggests that there’s a consensus of experts today on what the right ideas are. But there is often nothing close to such a consensus. When experts’ beliefs clash, somebody has to be wrong—hardly a sign of an imminent convergence on truth.
And, finally, (3) So what if experts are usually wrong? That’s the nature of expert knowledge—it progresses slowly as it feels its way through difficult questions. Well, sure, we live in a complex world without easy answers, so we might well expect to see our experts make plenty of missteps as they steadily chip away at the truth. I’m not saying that experts don’t make any progress, or that they ought to have figured it all out long ago. I’m suggesting three things: we ought to be fully aware of how large a percentage of expert advice is flawed; we should find out if there are perhaps much more disconcerting reasons why experts so frequently get off track other than “that’s just the nature of the beast”; and we ought to take the trouble to see if we can come up with clues that will help distinguish better expert advice from fishier stuff. And, by the way, if experts are so comfortable with the notion that their efforts ought to be expected to spit out mostly wrong answers, why don’t they work a little harder to get this useful piece of information across to us when they’re interviewed on morning news shows or in newspaper articles, and not just when they’re confronted with their errors?
Given that I’ve already started throwing the term “expert” around left and right, I suppose I ought to make sure you know what I mean by the word. Academics study “expertise” in pianists, athletes, burglars, birds, infants, computers, trial witnesses, and captains of industry, to name just a few examples. But when I say “expert,” I’m mostly thinking of someone whom the mass media might quote as a credible authority on some topic—the sorts of people we’re usually referring to when we say things like “According to experts…” These are what I would call “mass” or “public” experts, people in a position to render opinions or findings that a large number of us might hear about and choose to take into account in making decisions that could affect our lives. Scientists are an especially important example, but I’m also interested in, for example, business, parenting, and sports experts who gain some public recognition for their experience and insight. I’ll also have some things to say about pop gurus, celebrity advice givers, and media pundits, as well as about what I call “local” experts—everyday practitioners such as non-research-oriented doctors, stockbrokers, and auto mechanics.*
I’ve heard it said, half kiddingly, that meteorologists are the only people who get paid to be wrong. I would argue that in that sense most of our experts are paid to be wrong, and are probably wrong a much higher percentage of the time than are meteorologists. I’m going to show that although the process of wringing useful insights and advice from complex subjects may indeed be an inherently slow and erratic one, there are many other, less benign reasons why experts go astray. In fact, we’ll see that expert pronouncements are pushed toward wrongness so strongly that in the end it’s harder, I think, to explain why they’re sometimes right. But that doesn’t mean we’re hopelessly mired in this swamp of bad advice. With a decent compass, we can find our way out. Let’s start by exploring some of the muck.
* Ioannidis did find one group of studies that more often than not remained unrefuted: randomized controlled studies (more on these later) that appeared in top journals and that were cited in other researchers’ papers an extraordinary one thousand times or more. Such studies are extremely rare and represent the absolute tip of the tip of the pyramid of medical research. Yet one-fourth of even these studies were later refuted, and that rate might have been much higher were it not for the fact that no one had ever tried to confirm or refute nearly half of the rest.
* Why wouldn’t John Ioannidis, and the many other experts on expertise I’ll be quoting in this book, be just as untrustworthy as other experts? Short answer: experts on expertise may know enough about the traps that experts fall into to avoid falling in as often or as far. But see Appendix 4 for my exploration of that important and interesting question, and of the ways this entire book might be wrong.
* I’m much less interested in decision makers and leaders—such as corporate executives and political officeholders—who are themselves highly dependent on expert advice. I’m also mostly ignoring engineers and designers, who tend to give us tangible items rather than advice.
Some Expert Observations
I got a lot of things wrong.
—INVESTMENT GURU JIM CRAMER
In early 2008 I happened to catch a television news story mentioning new guidelines for performing cardiopulmonary resuscitation, or CPR, aimed at saving some of the 325,000 lives lost to sudden cardiac arrest every year in the United States alone, not to mention those from trauma, drownings, and shocks. The new guidelines hold that you are no longer supposed to bother with the breathing part of CPR—just keep pumping the victim’s chest nonstop, and the oxygen will take care of itself. Having some years ago spent the better part of a day pounding on and blowing air into mannequins to get my CPR certification from the American Red Cross, I did a little digging and discovered that while the change was endorsed by the American Heart Association and the European Resuscitation Council, the Red Cross continues to train the public in the breathing-and-pumping technique. To further complicate the picture, there’s a growing call in some circles to switch from chest compressions to abdominal compressions, which may pump more blood and avoid rib damage. So I dropped in on Paul Schwerdt, an interventional cardiologist at Norwood Hospital in Norwood, Massachusetts, who restarts hearts all the time. He told me to forget about CPR, because even trained laypeople almost never do it well enough to make a difference. If you want to save someone with a stopped heart, he said, find an automated external defibrillator, or AED—a highly portable, easy-to-use device that is becoming available in more and more public places, offices, and even many homes. Sure enough, I turned up a 2008 article in the New York Times stating that the immediate availability of a defibrillator raises the cardiac arrest–survival rate for those outside hospitals from as low as 1 percent to as high as 80 percent1—an astounding difference. Case closed? Well, not quite.
Later I came across a study that found home AEDs didn’t increase cardiac-arrest survival a whit compared to homes where someone was capable of performing CPR.2 And the American Heart Association website states that victims whose hearts have gone into fibrillation are up to three times more likely to survive if they receive CPR from a bystander while awaiting defibrillation. I spoke with a second cardiac specialist, an emergency room nurse, and an emergency medical technician and got three additional takes on the issue, all somewhat different. Glad I was able to clear that up.
Expert confusion isn’t unique to medical matters. For example, economists weren’t exactly lining up in late 2007 and early 2008 to warn us all that national economies, global financial institutions, and real-estate markets were rapidly spiraling toward a black hole of potential collapse. And though plenty of experts did line up to offer advice, many of us ended up wishing they hadn’t. For example:
“I don’t anticipate any serious problems… among the large internationally active banks that make up a very substantial part of our banking system.”
—Ben Bernanke, Federal Reserve chairman, February 28, 2008
“Existing-Home Sales to Trend Up in 2008.”
—National Association of Realtors press release, December 9, 2007
“These errors make us look either incompetent at credit analysis or like we sold our soul to the devil for revenue, or a little bit of both.”
—A managing director at Moody’s, the most widely heeded rater of financial institutions and instruments
“It’s nineteen twenty-nine all over again.”
—Donald Trump, speaking in February 2009, almost a year after the start of the near collapse, as the economy was beginning to stabilize
Your personal broker or real-estate agent:
Well, I have no idea what yours told you, but if she steered you clear of the mess instead of straight into it, then you’re in a distinct minority.
In early 2009 I did a search of the past two months’ worth of articles in the New York Times, the Chicago Tribune, and USA Today, and turned up twenty-three stories roughly equally scattered among the three papers that included the word “expert” or “experts” in the headline. About half of the approximately fifty people unambiguously presented as experts in the bodies of these stories were scientists or other types of formal researchers. But the list also included consultants, law-enforcement and public-health officials, CEOs, authors, athletic coaches, financial analysts, and the directors of industry trade groups and nonprofit advocacy groups.