Rigor Mortis

How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions

By Richard Harris

Formats and Prices

  1. Trade Paperback $17.99 ($22.99 CAD)
  2. ebook $11.99 ($15.99 CAD)


An essential book for understanding whether the new miracle cure is good science or simply too good to be true

American taxpayers spend $30 billion annually funding biomedical research, but over half of these studies can’t be replicated due to poor experimental design, improper methods, and sloppy statistics. Bad science doesn’t just hold back medical progress; it can be the equivalent of a death sentence for terminal patients. In Rigor Mortis, Richard Harris explores these urgent issues with vivid anecdotes, personal stories, and interviews with top biomedical researchers. We need to fix our dysfunctional biomedical system — before it’s too late.

Excerpt

PREFACE

WHEN YOU READ about advances in medicine, it often seems like long-awaited breakthroughs are just around the corner for cancer, Alzheimer's, stroke, osteoarthritis, and countless less common diseases. But it turns out we live in a world with an awful lot of corners. Most of the time we round one only to discover another corner rather than a destination. I've reported countless medical stories since I arrived at National Public Radio in 1986, in retrospect with more hopefulness than they often deserved. And lately I've come to realize that the reason medical research is so painfully slow isn't simply because it's hard—which indeed it is. It also turns out that scientists have been taking shortcuts around the methods they are supposed to use to avoid fooling themselves. The consequences are now haunting biomedical research. Simply too much of what's published is wrong. It doesn't have to be that way.

These should be halcyon days for medical science. The human genome, our genetic blueprint, was deciphered and laid out for all to see in 2003. Technology for research labs has progressed at an astonishing pace. What used to take years of toil by a dedicated team can now be accomplished in an afternoon by a technician with the right instruments. Scientists can custom-design mice and engineer them to stand in for humans in laboratory experiments. Researchers can sift through terabytes of data to find clues for new diagnostic tests, treatments, and cures. To be sure, there have been great strides in medicine when viewed over the long haul—antibiotics, vaccines, and heart surgery, along with potent public health advice (especially, "Don't smoke!"). Life expectancy in the United States continues to creep upward, by and large. Underlying this ongoing effort is a generous pool of money. American taxpayers contribute more than $30 billion a year to fund the National Institutes of Health. Add in other sources, like the price of research that's baked into the pills we swallow and the medical treatments we receive, and the average American household spends $900 a year to support biomedical studies.

Yet metastatic cancer is nearly as unstoppable now as it was decades ago (with only a few exceptions). Alzheimer's disease remains untreatable, even as an avalanche of baby boomers ages and becomes more vulnerable to that grim and costly condition. Lou Gehrig's disease (amyotrophic lateral sclerosis, or ALS) is one of many devastating neurological conditions for which there is no effective remedy. In fact, of the 7,000 known diseases, only about 500 have treatments, many offering just marginal benefits. As Malcolm Macleod at the University of Edinburgh put it, medical science is in the doldrums.

Biomedical science hasn't ground to a halt. Far from it. But this wasted effort is slowing progress—and at a time we can least afford it. After long periods of growth, federal support for biomedical research is now shrinking, given the growing cost of doing science. So it's never been more important to make the most of these precious resources.

Despite the technology, the effort, the money, and, yes, even the passion on the part of many scientists who are determined to make a difference, medical research is plagued with unforced and unnecessary errors. Scientists often face a stark choice: they can do what's best for medical advancement by adhering to the rigorous standards of science, or they can do what they perceive is necessary to maintain a career in the hypercompetitive environment of academic research. It's a choice nobody should have to make.

In the following chapters, you will read about the many ways that research has gone astray, as perverse incentives discourage scientists from following the rigorous path of top-quality science. My use of the term "rigor mortis," or the stiffness that comes after death, is of course a bit of hyperbole in the service of wordplay. Rigor in biomedical science certainly isn't dead, but it does need a major jolt of energy.

The good news is that these problems are now being recognized. Some can be fixed without a lot of technical difficulty. For example, any scientist can send a sample of cells off to a lab that can authenticate its identity. Researchers working at the lab bench can make small adjustments to their experiments to reduce the risk that wishful thinking is tainting their results. And biostatisticians can help make sure an experiment is designed and analyzed properly. The challenge now isn't identifying the technical fixes. The much harder challenge is changing the culture and the structure of biomedicine so that scientists don't have to choose between doing it right and keeping their labs and careers afloat.

In researching this book, I expected scientists would be reluctant to talk about the troubles facing their enterprise. That was surprisingly not the case. In fact, most scientists I called or visited were eager to tell their stories and to share their suggestions for how to put things right. Leaders at the National Institutes of Health and elsewhere have also stepped up to acknowledge these problems and seek solutions. (I've noticed that people are generally more willing to admit a problem exists when there are some concrete solutions at hand.) Patient-advocacy groups are increasingly addressing these problems. And a few pioneers have taken on these issues as their personal crusade. They know that medical science has already done much to serve humanity, and it has the potential to do so much more.

Let me end this introduction with an important philosophical point, relevant both to science and to this book. Most of science is built on inference rather than direct observation. We can't see the atoms or molecules inside our bodies, and we can't truly explain the root cause of disease. Science progresses by testing ideas indirectly, throwing out the ones that seem wrong, and building on those best supported by the facts at hand. Gradually, scientists build stories that do a better job of approximating the truth. But at any given moment, there are parallel narratives, sometimes sharply at odds with one another. Scientists rely on their own individual judgments to decide which stories come closer to the truth (absolute Truth is forever out of reach). Some stories that seem on the fringe today will become the accepted narrative some years from now. Indeed, it's the unexpected ideas that often propel science forward. Writers often don't say so clearly, but we too are in the business of weighing evidence and making judgment calls, assembling observations that bring us closer to the truth as we perceive it. It's a necessary element of storytelling. No doubt there will be those who see the world differently, who weigh a somewhat different set of facts and come to different conclusions. Since I explore the contingent nature of science in this book, it seems only fair to acknowledge that I'm making judgments as well, not revealing the objective Truth.




Chapter One

BEGLEY'S BOMBSHELL

IT WAS ONE of those things that everybody knew but was too polite to say. Each year about a million biomedical studies are published in the scientific literature. And many of them are simply wrong. Set aside the voice-of-God prose, the fancy statistics, and the peer review process, which is supposed to weed out the weak and errant. Lots of this stuff just doesn't stand up to scrutiny. Sometimes it's because a scientist is exploring the precarious edge of knowledge. Sometimes the scientist has unconsciously willed the data to tell a story that's not in fact true. Occasionally there is outright fraud. But a large share of what gets published is wrong.

C. Glenn Begley decided to say what most other people dared not speak. An Australian-born scientist, he had left academia after twenty-five years in the lab to head up cancer research at the pioneering biotech company Amgen in Southern California. While working in academia, Begley had codiscovered a protein called human G-CSF, which is now used in cancer treatments to reconstitute a person's immune system after a potentially lethal dose of chemotherapy. G-CSF ultimately proved to be Amgen's first blockbuster drug, so it's not surprising that years later, when the company wanted to create an entire cancer research program, it hired Begley for the job.

Pharmaceutical companies rely heavily on published research from academic labs, which are largely funded by taxpayers, to get ideas for new drugs. Companies can then seize upon those ideas, develop them, and make them available as new treatments. Begley's staff scoured the biomedical literature for hot leads for potential new drugs. Every time something looked promising, they'd start a dossier on the project. Begley insisted that the first step of any research project would be to have company scientists repeat the experiment to see if they could come up with the same results. Most of the time, Amgen labs couldn't. That was duly noted on the dossier, the case was closed, and the scientists moved on to the next exciting idea reported in the scientific literature.

After ten years at Amgen, Begley was ready to move on. But before he went, he wanted to take stock of the studies that his team had filed away as not reproducible—focusing in particular on the ones that could have led to important drugs, if they had panned out. He chose fifty-three papers he considered potentially groundbreaking. And for this review, the company didn't simply try to repeat the experiments—Begley asked the scientists who originally published these exciting results to help.

"The vast majority of the time the scientists were willing to work with us. There were only a couple of occasions where truly the scientists hung up on us and refused to continue the conversation," he said. First, Begley asked the scientists to provide the exact materials that they had used in the original experiment. If Amgen again couldn't reproduce the result with this material, they kept trying. "On about twenty occasions we actually sent [company] scientists to the host laboratory and watched them perform experiments themselves," Begley told me. This time, however, the original researchers were kept in the dark about which part of the experiment was supposed to produce positive results and which would serve as a comparison group (the control). Most of the time, the experiments failed under these blinded conditions. "So it wasn't just that Amgen was unable to reproduce them," Begley said. "What was more shocking to me was the original investigators themselves were not able to." Of the fifty-three original exciting studies that Begley put to the test, he could reproduce just six. Six. That's barely one out of ten.

Begley went to the Amgen board of directors and asked what he should do with this information. They told him to publish it. The German drug maker Bayer had undertaken a similar project and gotten results nearly as dismal (it was able to replicate only 25 percent of the studies it reexamined). That study, published in a specialty journal in September 2011, hadn't sparked a lot of public discussion. Begley thought his study would gain more credibility if he recruited an academic scientist as a coauthor. Lee Ellis from the MD Anderson Cancer Center in Houston lent his name and analysis to the effort. He, too, had been outspoken about the need for more rigor in cancer research. When the journal Nature published their commentary in March 2012, people suddenly took notice. Begley and Ellis had put this issue squarely in front of their colleagues.

They were hardly treated as heroes. Robert Weinberg, a prominent cancer researcher at the Massachusetts Institute of Technology, told me, "To my mind that [paper] was a testimonial to the silliness of the people in industry—their naïveté and their lack of competence." When they spoke at conferences, Begley said, scientists would stand up and tell them "that we were doing the scientific community a disservice that would decrease research funding and so on." But he said the conversation was always different at the hotel bar, where scientists would quietly acknowledge that this was a corrosive issue for the field. "It was common knowledge; it just was unspoken. The shocking part was that we said it out loud."

The issue of reproducibility in biomedical science has been simmering for many years. As far back as the 1960s, scientists raised the alarm about well-known pitfalls—for instance, warning that human cells widely used in laboratory studies were often not at all what they purported to be. In 2005, John Ioannidis published a widely cited paper, titled "Why Most Published Research Findings Are False," that highlighted the considerable problems caused by flimsy study design and analysis. But with the papers from Bayer and then Begley, a problem that had been causing quiet consternation suddenly crossed a threshold. In a remarkably short time, the issue went from back to front burner.

Some people call it a "reproducibility crisis." At issue is not simply that scientists are wasting their time and our tax dollars; misleading results in laboratory research are actually slowing progress in the search for treatments and cures. This work is at the very heart of the advances in medicine. Basic research—using animals, cells, and the molecules of life such as DNA—reveals the underlying biology of health and disease. Much of this endeavor is called "preclinical research" with the hope that discoveries will lead to actual human studies (in the clinic). But if these preclinical discoveries are deeply flawed, scientists can spend years (not to mention untold millions of dollars) lost in dead ends. Those periodic promises that we're going to cure cancer or vanquish Alzheimer's rest on the belief that scientific discoveries are moving us in that direction. No doubt some of them are, but many published results are actually red herrings. And the shock from the Begley and Ellis and Bayer papers wasn't just that scientists make mistakes. These studies sent the message that errors like that are incredibly common.

At first blush, that seems implausible, which is perhaps one reason that it took so long for the idea to gain currency. After all, scientists on the whole are very smart people. Collectively they have a long record of success. Biomedical research is responsible for most of the pills in our medicine cabinets, not to mention Nobel Prize–winning insights about the very nature of our being. Many biomedical scientists are motivated to discover new secrets of life—and to make the world a better place for humanity. Some scientists studying disease have relatives or loved ones who have suffered from these maladies, and they want to find cures. Academics aren't generally in it for the money. There are more lucrative ways to make use of a PhD in these fields of science. Last but not least, scientists take pride in getting it right. Failure is an inevitable aspect of research—after all, scientists are groping around at the edges of knowledge—but avoidable mistakes are embarrassing and, worse, counterproductive.

The ecosystem in which academic scientists work has created conditions that actually set them up for failure. There's a constant scramble for research dollars. Promotions and tenure depend on their making splashy discoveries. There are big rewards for being first, even if the work ultimately fails the test of time. And there are few penalties for getting it wrong. In fact, given the scale of this problem, it's evident that many scientists don't even realize that they are making mistakes. Frequently scientists assume what they read in the literature is true and start research projects based on that assumption. Begley said one of the studies he couldn't reproduce has been cited more than 2,000 times by other researchers, who have been building on or at least referring to it, without actually validating the underlying result.

There's little funding and no glory involved in checking someone else's work. So errors often only become evident years later, when a popular idea that is poorly founded in fact is finally put to the test with a careful experiment and suddenly melts away. A false lead can fool whole fields into spending years of research and millions of dollars of research funding chasing after something that turns out not to be true.

Failures often surface when it's time to use an idea to develop a drug. That's why Glenn Begley's results were so jaw-dropping. That very high failure rate focused on studies that really mattered. Drug companies rely heavily on academic research for new insights into biology—and particularly for leads for new drugs to develop. If academia is pumping out dubious results, that means pharmaceutical companies will struggle to produce new drugs. Of course, Begley's test involved just fifty-three studies out of the millions in the scientific literature. And he chose those papers because they had surprising, potentially useful results. Perhaps a survey of more mundane studies would show a higher success rate—but, of course, those studies aren't likely to lead to big advances in medicine.

There has been no systematic attempt to measure the quality of biomedical science as a whole, but Leonard Freedman, who started a nonprofit called the Global Biological Standards Institute, teamed up with two economists to put a dollar figure on the problem in the United States. Extrapolating results from the few small studies that have attempted to quantify it, they estimated that 20 percent of studies have untrustworthy designs; about 25 percent use dubious ingredients, such as contaminated cells or antibodies that aren't nearly as selective and accurate as scientists assume them to be; 8 percent involve poor lab technique; and 18 percent of the time, scientists mishandle their data analysis. In sum, Freedman figured that about half of all preclinical research isn't trustworthy. He went on to calculate that untrustworthy papers are produced at the cost of $28 billion a year. This eye-popping estimate has raised more than a few skeptical eyebrows—and Freedman is the first to admit that the figure is soft, representing "a reasonable starting point for further debate."
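
Freedman's four categories sum to 71 percent, yet his bottom line is "about half": the categories overlap, since a single study can go wrong in more than one way. As a rough illustration only, the short Python sketch below treats the four failure modes as if they struck independently (an assumption made here for arithmetic convenience, not Freedman's actual method) and lands near his figure.

    # Hypothetical illustration: combine Freedman's category-level
    # failure rates under an independence assumption. The real
    # categories overlap, so this is arithmetic convenience only.
    failure_rates = {
        "untrustworthy design": 0.20,
        "dubious ingredients": 0.25,
        "poor lab technique": 0.08,
        "mishandled data analysis": 0.18,
    }

    p_clean = 1.0
    for mode, rate in failure_rates.items():
        p_clean *= 1.0 - rate  # chance of surviving this failure mode

    print(f"Combined untrustworthy share: {1.0 - p_clean:.0%}")
    # prints roughly 55%, close to Freedman's "about half"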

"To be clear, this does not imply that there was no return on that investment," Freedman and his colleagues wrote. A lot of what they define as "not reproducible" really means that scientists who pick up a scientific paper won't find enough information in it to run the experiment themselves. That's a problem, to be sure, but hardly a disaster. The bigger problem is that the errors and missteps that Freedman highlights are, as Begley found, exceptionally common. And while scientists readily acknowledge that failure is part of the fabric of science, they are less likely to recognize just how often preventable errors taint studies.

"I don't think anyone gets up in the morning and goes to work with the intention to do bad science or sloppy science," said Malcolm Macleod at the University of Edinburgh. He has been writing and thinking about this problem for more than a decade. He started off wondering why almost no treatment for stroke has succeeded (with the exception of the drug tPA, which dissolves blood clots but doesn't act on damaged nerve cells), despite many seemingly promising leads from animal studies. As he dug into this question, he came to a sobering conclusion. Unconscious bias among scientists arises every step of the way: in selecting the correct number of animals for a study, in deciding which results to include and which to simply toss aside, and in analyzing the final results. Each step of that process introduces considerable uncertainty. Macleod said that when you compound those sources of bias and error, only around 15 percent of published studies may be correct. In many cases, the reported effect may be real but considerably weaker than the study concludes.

Mostly these estimated failure rates are educated guesses. Only a few studies have tried to measure the magnitude of this problem directly. Scientists at the MD Anderson Cancer Center asked their colleagues whether they'd ever had trouble reproducing a study. Two-thirds of the senior investigators answered yes. Asked whether the differences were ever resolved, only about a third said they had been. "This finding is very alarming as scientific knowledge and advancement are based upon peer-reviewed publications, the cornerstone of access to 'presumed' knowledge," the authors wrote when they published the survey findings.

The American Society for Cell Biology (ASCB) surveyed its members in 2014 and found that 71 percent of those who responded had at some point been unable to replicate a published result. Again, 40 percent of the time, the conflict was never resolved. Two-thirds of the time, the scientists suspected that the original finding had been a false positive or had been tainted by "a lack of expertise or rigor." ASCB adds an important caveat: of the 8,000 members it surveyed, it heard back from 11 percent, so its numbers aren't conclusive. That said, Nature surveyed more than 1,500 scientists in the spring of 2016 and saw very similar results: more than 70 percent of those scientists had tried and failed to reproduce an experiment, and about half of those who responded agreed that there's a "significant crisis" of reproducibility.

These concerns are not being ignored. From the director's office in Building 1 at the National Institutes of Health (NIH), Francis Collins and his chief deputy, Lawrence Tabak, declared in a 2014 Nature comment, "We share this concern" over reproducibility. In the long run, science is a self-correcting system, but, they warn, "in the shorter term, the checks and balances that once ensured scientific fidelity have been hobbled." Janet Woodcock, a senior official at the Food and Drug Administration (FDA), was even more blunt. "I think it's a totally chaotic enterprise." She told me drug companies like Amgen usually discover problems early on in the process and bear the brunt of weeding out the poorly done science. But "sometimes we [FDA regulators] have to use experiments that have been done in the academic world," for example, by university scientists who are working on a drug for a rare disease. "And we just encounter horrendous problems all the time." When potential drugs make it into the more rigorous pharmaceutical testing regimes, nine out of ten fail. Woodcock said that's because the underlying science isn't rigorous. "It's like nine out of ten airplanes we designed fell out of the sky. Or nine out of ten bridges we built failed to stand up." She rocked back and laughed at the very absurdity of the idea. And then she got serious. "We need rigorous science we can rely on."

Arturo Casadevall at the Johns Hopkins Bloomberg School of Public Health shares that sense of alarm. "Humanity is about to go through a couple of really rough centuries. There is no way around this," he said, looking out on a future with a burgeoning population stressed for food, water, and other basic resources. Over the previous few centuries, we have managed a steadily improving trajectory, despite astounding population growth. "The scientific revolution has allowed humanity to avoid a Malthusian crisis over and over again," he said. To get through the next couple of centuries, "we need to have a scientific enterprise that is working as best as it can. And I fundamentally think that it isn't."

We're already experiencing a slowdown in progress, especially in biomedicine. By Casadevall's reckoning, medical researchers made much more progress between 1950 and 1980 than they did in the following three decades. Consider the development of blood-pressure drugs, chemotherapy, organ transplants, and other transformative technologies. Those all appeared in the decades before 1980. His ninety-two-year-old mother is a walking testament to steadily improving health in the developed world. She is taking six drugs, five of which "were being used when I was a resident at Bellevue Hospital in the early 1980s." The one new medication? For heartburn. "You would think that with all we know today we should be doing a lot better. Why aren't we there?"

The number of new drugs approved per dollar spent on research and development has been falling since the 1950s. In 2012, Jack Scannell and his colleagues coined the term "Eroom's law" to describe the steadily worsening state of drug development. "Eroom," they explained, is "Moore" spelled backward. Moore's law charts the exponential progress in the efficiency of computer chips; the pharmaceutical industry, however, is headed in the opposite direction. If you extrapolate the trend, starting in 1950, you'll find that drug development essentially comes to a halt in 2040. Beyond that point, developing any drug becomes infinitely expensive. (That forecast is undoubtedly too pessimistic, but it makes a dramatic point.) The only notable uptick occurred around the mid-1990s, when researchers made some remarkable progress in developing drugs for HIV/AIDS. (The situation improved modestly in the years after Scannell and colleagues' analysis ended in 2010.) These researchers blame Eroom's law on a combination of economic, historical, and scientific trends. Scannell told me that a lack of rigor in biomedical research is an important underlying cause.
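
The statistic behind that extrapolation, as Scannell and colleagues reported it, is that inflation-adjusted drug approvals per billion R&D dollars have halved roughly every nine years since 1950. Here is a brief sketch of the trend, assuming that nine-year halving time holds; the 1950 baseline value is an arbitrary normalization.

    # Eroom's law as exponential decay: R&D efficiency halves roughly
    # every nine years (Scannell et al., 2012). The baseline value is
    # arbitrary; only the decay rate shapes the trend.
    HALVING_TIME_YEARS = 9.0
    BASELINE_1950 = 100.0  # approvals per billion dollars, normalized

    def drugs_per_billion(year):
        """Extrapolated approvals per billion R&D dollars."""
        return BASELINE_1950 * 0.5 ** ((year - 1950) / HALVING_TIME_YEARS)

    for year in (1950, 1980, 2010, 2040):
        print(year, round(drugs_per_billion(year), 2))
    # By 2040 efficiency is down by a factor of 2**10 (about 1,000x)
    # from 1950 -- not literally zero, but effectively the halt the
    # extrapolation describes.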

For Sally Curtin, it's personal. Crisis struck on February 5, 2010. She came downstairs in her eastern Maryland home to find her fifty-eight-year-old husband, Lester "Randy" Curtin, lying unconscious on the floor. She and an emergency crew fought through a blizzard to get him to the hospital. It took doctors four days to reach a diagnosis, and the news could hardly have been worse. Randy had a brain tumor, glioblastoma multiforme.

Both Sally and Randy worked at the National Center for Health Statistics (part of the Centers for Disease Control and Prevention). He was the guy colleagues went to when they were having trouble working through a statistical problem. When it came to his own odds, the doctors told them not to look at the survival numbers—but "we're numbers people," Sally Curtin told me. "The first thing we did was go look at the numbers." Half of patients with this diagnosis live less than fifteen months, and 95 percent are dead within five years.

"I had never heard the term glioblastoma. It seemed unreal to me that there was a cancer this lethal that they had not made progress on in fifty years," Sally told me. This cancer strikes about 12,000 Americans per year. (Senator Ted Kennedy was one of the most notable victims. Vice President Joe Biden also lost his son Beau to glioblastoma.) Even so, the Curtins hoped they could beat the odds. They signed Randy up for three separate clinical trials at the National Institutes of Health—experimental treatments that they hoped would keep the spreading tumors in check. None of them worked. In fact, in one brief period during the treatment, the tumors grew by 40 percent.

The worst part was that the disease was attacking the brain of a man with a powerful intellect. "His oldest daughter put it best. She said it's like telling someone who's afraid of the water that you are going to have death by drowning." With treatment options exhausted, Randy returned home to Huntingtown, Maryland, and registered for hospice. As the disease progressed, Sally said her husband had hallucinations. He would smash furniture, and once he pulled down the TV. "He really scared the kids," nine-year-old Daniel and eleven-year-old Kevin. "It wasn't like he was abusive or angry at us. He was just out of his mind" as the tumor grew. He hung on for seven months, increasingly agitated and in constant pain. Near the end, he asked Sally to overdose him with morphine, but she could not take his life. Eventually he slipped into a coma. At one point a seizure jolted him out of it, and he was lucid enough to tell Sally, "I love you." That was the last thing he said to her. Five days later, he died, shortly after his sixtieth birthday.

Praise

  • Named by Amazon as one of their "Best Nonfiction Books of the Month"
  • Named one of PRI/SCIENCE FRIDAY's "Best Science Books of 2017"
  • "Rigor Mortis provides an excellent summation of the case for fixing science."—SLATE
  • "Harris makes a strong case that the biomedical research culture is seriously in need of repair."—Nature
  • "Rigor Mortis effectively illustrates what can happen when a convergence of social, cultural, and scientific forces...conspires to create a real crisis of confidence in the research process."—Science
  • "A rewarding read for anyone who wants to know the unvarnished truth about how science really gets done."—Financial Times
  • "Rigor Mortis effectively illustrates what can happen when a convergence of social, cultural, and scientific forces, as well as basic human motivation, conspires to create a real crisis of confidence in the research process."—SCIENCE
  • "Harris makes a strong case that the biomedical research culture is seriously in need of repair."—Nature
  • "Rigor Mortis is rife with examples of things that go awry in medical studies, how they happen, and how they can be avoided and fixed. For the most part, academic biomedical scientists are not evil, malicious, or liars at heart."—Ars Technica
  • "An alarming and highly readable summation of what has been called the 'reproducibility crisis' in science--the growing realization that large swathes of the biomedical literature may be wrong."—Spectrum Magazine
  • "This engaging book will inform and challenge readers who care about the public image of science, the state of peer review, and US funding for science."—Physics Today
  • "This behind-the-scenes look at biomedical research will appeal to students and academics. A larger audience of impacted patients and taxpayers will also find this critical review fascinating and alarming. Highly recommended for public and academic libraries."—Library Journal
  • "[An] easy-reading but hard-hitting exposé..."—Kirkus
  • "Just as 'post-truth' was selected as the word of the year in 2016 for its political connotations, Richard Harris masterfully shows how this pertains to science, too. Rigor Mortis is a compelling, sobering, and important account of bad biomedical research, and the pressing need to fix a broken culture."
    Eric Topol, Director of the Scripps Translational Science Institute and author of The Patient Will See You Now
  • "Science remains the best way to build knowledge and improve health, but as Richard Harris reminds us in Rigor Mortis, it is also carried out by humans subject to 'publish or perish' and other perverse incentives. Tapping into these tensions, Harris deftly weaves gripping tales of sleuthing with possible paths out of what some call a crisis. Read this book if you want to see how biomedical research is reviving itself."—Ivan Oransky, Co-Founder of Retraction Watch and Distinguished Writer In Residence at New York University
  • "Richard Harris's elegant and compelling dissection of scientific research is must-reading for anyone seeking to understand today's troubled research enterprise-and how to save it."
    Deborah Blum, Pulitzer Prize-winning journalist and Director of the Knight Science Journalism Program at MIT
  • "Richard Harris has written an essential guide to how scientific research may arrive at the wrong conclusions. From the 235 ways that scientists can fool themselves to the misuse of statistics and the persistence of unsound research methods, Harris outlines the problems underlying the so-called 'reproducibility crisis' in biomedical research and introduces readers to the people working on solutions."—Christie Aschwanden, lead science writer for FiveThirtyEight and health columnist for the Washington Post

On Sale
May 1, 2018
Page Count
288 pages
Publisher
Basic Books
ISBN-13
9781541644144

About the Author

Richard Harris is one of the nation’s most celebrated science journalists. He has covered science, medicine, and the environment for NPR for more than thirty years and is a three-time winner of the AAAS Science Journalism Award. He lives in Washington, DC.
