The Skeptics' Guide to the Future

What Yesterday's Science and Science Fiction Tell Us About the World of Tomorrow


By Dr. Steven Novella

With Bob Novella

With Jay Novella


From the bestselling authors and hosts of "The Skeptics' Guide to the Universe," a high-tech roadmap of the future in their beloved voice, cracking open the follies of futurists past and exploring how technology will profoundly change our world, redefining what it means to be human.

Our predictions of the future are a wild fantasy, inextricably linked to our present hopes and fears, biases and ignorance. Whether they be the outlandish leaps predicted in the 1920s, like multi-purpose utility belts with climate control capabilities and planes the size of luxury cruise ships, or the forecasts of the ‘60s, which didn’t anticipate the sexual revolution or women’s liberation, the path to the present is littered with failed predictions and incorrect estimations. The best we can do is try to absorb the lessons from futurism's checkered past, perhaps learning to do a little better.

In THE SKEPTICS' GUIDE TO THE FUTURE, Steven Novella and his co-authors build upon the work of futurists of the past by examining what they got right, what they got wrong, and how they came to those conclusions. By exploring the pitfalls of each era, they give their own speculations about the distant future, transformed by unbelievable technology ranging from genetic manipulation to artificial intelligence and quantum computing. Applying their trademark skepticism, they carefully extrapolate upon each scientific development, leaving no stone unturned as they lay out a vision for the future.




An Introduction to the Future

1. Futurism—Days of Future Passed

The future begins with the past.

The future is a wild fantasy. It’s feverishly concocted out of our hopes, fears, biases, ignorance, and imagination, saying far more about us than what is to come. Predictions of the future are really just reflections of the present. And that means we’re really bad at predicting what the future will bring. But that’s not going to stop us from trying to do just that—it’s simply too irresistible.

We can, however, try to learn from futurism’s checkered past, correct what errors we can find, and perhaps do a little better. Along the way, we can learn about the past and present of the technologies that dominate our world. We can follow the arc of the history of science and technology and perhaps extrapolate it a little bit into the future. My brothers and I have been doing this our whole lives.

As children growing up in the ’60s and ’70s, we were in love with science, technology, science fiction, and the incredible promise of the future. We were too young to have experienced the disappointment of future promises repeatedly broken, and so we naively believed in the advances set before us. Many of them are now solid clichés about the future, but back then we eagerly anticipated our flying cars, jetpacks, moon settlements, and intelligent robot servants.

Our fondness for science fiction didn’t help. The movies and television we watched depicted a near future with technology that now seems to be about a century premature. In The Six Million Dollar Man Steve Austin sported prosthetic robotic limbs that fifty years later are still not even close to achievable. In 2001 we were supposed to have space stations and sentient computers. And weren’t researchers working on beaming vivid experiences directly into our brains? Even dark futures, like the year 2019 depicted in Blade Runner, had flying cars and genetically modified androids indistinguishable from humans. No matter how socially and environmentally devastated the future was presented to be, what I marveled at was the technology. We could work the other stuff out—as long as there were flying cars.

Our techno-optimism was likely significantly influenced by growing up in the era of Apollo. We were landing people on the moon and using “advanced” computers, and despite a few hiccups, it all worked out well. Watching Gene Cernan step off the moon and back into the lunar lander during the Apollo 17 mission in 1972, my younger self couldn’t conceive that fifty years later we still would not have returned, let alone not have a settlement on the moon. Where is Moonbase Alpha?

The flip side of these disappointments and false promises is that some of the biggest technological advances of the last half century, the ones that have had the most significant impact on our lives, were not featured in future predictions or science fiction. As I write this, I carry in my pocket a supercomputer (by the standards of my youth) that allows me to communicate instantly through video, audio, or text with almost anyone, anywhere in the world. As a bonus, I have access to my entire personal music library; it serves as a digital camera that can take as many photos as I want without the need for film, and it can even give me directions to anywhere I want to go. It can access practically the sum of human knowledge in a searchable, interactive format. If I get bored, this device can play movies and contains countless video games that would have blown child-me away.

The smartphone and the World Wide Web that can be accessed through it, along with social media, online shopping, countless apps, and other features, are absolute miracles of future technology. They far exceed what I would have thought possible thirty to forty years ago. Past depictions of the future generally did not anticipate anything like them. Even Star Trek, a favorite techno-optimist utopian future, did not see our digital revolution coming.

So, we have made great technological strides in the last fifty years, just not in the ways that we thought. Why is it that people are consistently so bad at predicting the future? If we can understand this, perhaps we can do it a little better. Or perhaps the forces that shape the future are too chaotic to predict with any accuracy beyond a certain point, like trying to predict the weather.

But while we cannot predict specific weather with much accuracy, we can better predict overall changes in climate. It’s easier to predict that travel will get faster in the future, for example, rather than the specifics of automotive technology. One potential fix to bad futurism, therefore, is to focus on large trends, rather than trying to imagine tiny details. Even there, however, futurists can get tripped up.

In the movie Minority Report, for example, the filmmakers presented a thoughtful picture of the near future of 2054. I will not be able to say how accurate it was for another thirty years, but one choice stuck out to me: people were using teeny-tiny phones. The movie was made in 2002, prior to smartphones. The trend then for cell phones was to get smaller and smaller, so the writers extrapolated this out another fifty years.

Unfortunately for them, the iPhone was released in 2007, essentially reversing the trend by fundamentally changing how personal phones are used. Suddenly, screen real estate was at a premium, and inevitably phones got larger. The iPhone was a disruptive technology, changing the status quo of an entire industry and changing our lives forever. Perhaps we are now settling into a range of screen sizes that represents the optimal balance of portability and usability, depending on personal and situational variables, or maybe a new disruption will occur.

Companies are indeed looking for ways to disrupt the cell phone market again by creating foldable or expandable phones reminiscent of a time before iPhones. Will this technology take off, remain niche, or completely fail? If you could reliably predict even the next step in how a single technology will evolve, you would be a tech millionaire. Even scientists and tech leaders in the past famously made terrible predictions about the future, including about technology in which they were involved. Take Thomas Edison, who said, “The phonograph has no commercial value at all” in the 1880s. Or Ken Olsen who claimed in 1977 that “there is no reason anyone would want a computer in their home.” Even if you can foresee that first step, try predicting the next fifty steps for a thousand technologies. That’s the future.

Of course, “the future” spans everything from one second after you read these words to the ultimate heat death of the universe in 10^100 years (some trends are inevitable). Different factors apply depending on how far into the future you are trying to predict. The near future, say ten to twenty years, can benefit from high-probability extrapolation of existing trends, as well as coming technology that is already in the works. The medium future, twenty to a hundred years, gets a lot harder, but if you focus on the big picture and give yourself some wiggle room, you might get a glimpse of life in a century.

The far future, more than a hundred years, is where things really get interesting, and the technologies we are now just beginning to explore reach their full mature potential. While it may be easy to predict that some technologies will eventually be realized, the wiggle room for predictions here is in how long it will take. We may not be able to envisage when essentially complete brain-machine interfaces will exist, but when they do, we can imagine what it may be like. The far future is where we can enjoy speculating about entirely new technologies that are now only a footnote in a physics paper about some newly discovered aspect of nature.

Our guide through the future will cover all of this—tracking advancements in existing technologies, exploring emerging technologies, and speculating about fantastical tech from possible futures. As much as possible, science will be our guide.

Through it all, we will maintain a highly skeptical eye because that’s what we do. In addition to being science enthusiasts and technophiles, we are also scientific skeptics. For the last quarter century, we have been studying and promoting critical thinking and scientific literacy. We host the award-winning podcast The Skeptics’ Guide to the Universe, and our first book, of the same name, is a primer on science and critical thinking.

This means we always try to temper our enthusiasm for the future with sharp criticism. We let the failures and disappointments of the past inform our thoughts about the future. It is not enough, however, to simply be cynical. Being skeptical means separating the probable from the improbable, with solid evidence and logic.

Sometimes we let our enthusiasm get the better of us, but in the end, we always bring it back down to reality. This is, after all, a skeptic’s guide to the future.

The future, ironically, begins in the past. Our journey starts with the history of futurism to see what it can teach us.

2. A Brief History of the Future

The pitfalls of futurism.

Yogi Berra famously said, “The future ain’t what it used to be.” (You knew that quote would make an appearance, right? Although he wasn’t the first person to utter this phrase.) He had a quirky genius for making a concept that ultimately makes sense sound self-contradictory or nonsensical. What has changed is not necessarily the future itself, but our beliefs about the future. Futurism ain’t what it used to be.

By traveling back in time and into the future at once, we can examine some of yesterday’s visions of the future—what did they get right, and what did they get spectacularly wrong? When tracking the errors of past futurists, some common themes emerge—what I call “futurism fallacies”—and they will help us shape our own future vision.

Futurism Fallacy #1—Overestimating short-term progress while underestimating long-term progress.

One core challenge of predicting the future is trying to determine not only what technology will develop, but also how long it will take. It is almost a certainty that eventually we will develop fully sapient general artificial intelligence. What is extremely challenging is predicting when we will cross that finish line. When trying to glimpse the future, there is also a tendency to overestimate short-term advancement, while underestimating long-term advancement. We see this frequently in science fiction movies—whether they are serious, comedic, utopian, or dark, the technology in twenty to thirty years is typically portrayed as transformational, rather than more realistically incremental. So, from Back to the Future to Blade Runner, we have flying cars by 2015.

This overestimation of short-term progress comes partly from the tendency to think of “the future” as one homogenous time, much as we often think of “the past” as one indistinct era. My favorite example of this is that Cleopatra (69–30 BCE) lived closer in time to the Space Shuttle (first launch in 1981) than the building of the Pyramids of Giza (2550–2490 BCE).
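The arithmetic behind that Cleopatra comparison is easy to check. Here is a quick sketch using the dates given above, representing years as signed integers (negative for BCE) and ignoring the calendar's lack of a year zero:

```python
# Represent years as signed integers: negative = BCE, positive = CE.
pyramids = -2550    # construction of the Pyramids of Giza begins (~2550 BCE)
cleopatra = -30     # death of Cleopatra (30 BCE)
shuttle = 1981      # first Space Shuttle launch

back_to_pyramids = cleopatra - pyramids   # 2520 years
forward_to_shuttle = shuttle - cleopatra  # 2011 years

# Cleopatra really is closer in time to the Space Shuttle.
print(back_to_pyramids, forward_to_shuttle)  # 2520 2011
```

The gap forward to the shuttle is about five centuries shorter than the gap back to the pyramids, which is why the comparison feels so counterintuitive when we lump all of antiquity into one indistinct "past."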

But, getting back to the future (heh!), we often imagine that “the future” contains whatever the next big advance is expected to be in any given technology. So instead of using phones, we are using video phones in the future. Instead of driving regular cars, we are driving electric, self-driving, or flying cars. We fly to other continents in commercial jets now, so in the future we must be flying rockets instead. Even if “the future” is only twenty years from now, we imagine all these technological transformations have already taken place, therefore overestimating short-term progress.

Underestimating long-term progress is mostly a matter of simple math, because technological progress is often geometric rather than linear. Geometric progress means doubling (or multiplying by some other factor) every time interval, so progress looks like 2, 4, 8, 16, 32, while linear means simple addition every time interval: 1, 2, 3, 4, 5, and so on. You can see how geometric growth is much faster than linear, especially over the long term. The best example of this is computer technology, such as hard-drive capacity and processor speeds. Processor speeds have roughly doubled every eighteen months, so in the past forty-five years, processors have not become several times faster—they have become roughly a billion times faster (thirty doublings). Underestimation also occurs because game-changing new technologies are often missed.
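The gulf between the two growth patterns is easy to see with a quick sketch, using the roughly eighteen-month doubling period cited above for processor speeds:

```python
# Compare linear growth with geometric (doubling) growth over 45 years,
# using a ~18-month doubling period.
years = 45
doubling_time = 1.5                   # years per doubling
steps = round(years / doubling_time)  # 30 doubling intervals

linear = 1 + steps       # add one unit per interval
geometric = 2 ** steps   # double every interval

print(linear)     # 31
print(geometric)  # 1073741824 (about a billion)
```

Thirty doublings multiply the starting value by about a billion, which is why intuition built on linear trends so badly undersells long-term progress.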

However, while there is a general tendency to overestimate short-term and underestimate long-term progress, each technology follows its own pattern of progress. Therefore, it can be difficult to pick technological winners and losers. The problems we are trying to solve with technology may also be nonlinear, getting progressively harder to advance at the same rate. At first, we may pick the low-hanging fruit and progress may be rapid, but then further advances become increasingly difficult, with diminishing returns and perhaps even roadblocks. If we project early progress indefinitely, that will result in overestimating the strides we will make. But then geometric advances and game-changing innovations eventually catch up, causing us to underestimate long-term progress.

A humorous showcase of this fallacy is the short film produced by General Motors in 1956, which imagined the “modern driver” of 1976. They get everything wrong. The film was made to promote their gas turbine engine technology—remember those? Probably not, because they never came into wide use. Several car companies, most aggressively Chrysler, tried to develop a gas turbine engine to replace the internal combustion engine, and none succeeded.

The film also featured “auto control,” in which the car was able to take over steering from the driver—in 1976, remember, about a half century too early. But in order to do this, the driver had to first enter the “electronic control lane” and then synchronize the car’s velocity and direction with the external control, all coordinated by radio with people in control towers that lined the highway.

Science fiction depictions of the future are also rife with this fallacy. In 1968, the movie 2001: A Space Odyssey chronicled a mission to Jupiter (still beyond our current technology), with crew members in cryosleep (also not possible) and featuring a fully artificially intelligent computer, the HAL 9000. These technologies are all at least fifty to a hundred years premature.

Professional futurists, such as Isaac Asimov, frequently fell for this fallacy. In 1964, he made predictions for 2014, fifty years in the future, for the New York World’s Fair. His forecasts were published in the New York Times, although in fairness, he admitted these were only “guesses.” He predicted:

It will be such computers, much miniaturized, that will serve as the “brains” of robots. In fact, the I.B.M. building at the 2014 World’s Fair may have, as one of its prime exhibits, a robot housemaid—large, clumsy, slow-moving but capable of general picking-up, arranging, cleaning and manipulation of various appliances. It will undoubtedly amuse the fairgoers to scatter debris over the floor in order to see the robot lumberingly remove it and classify it into “throw away” and “set aside.” (Robots for gardening work will also have made their appearance.)

What about energy?

An experimental fusion-power plant or two will already exist in 2014. (Even today, a small but genuine fusion explosion is demonstrated at frequent intervals in the G.E. exhibit at the 1964 fair.) Large solar-power stations will also be in operation in a number of desert and semi-desert areas—Arizona, the Negev, Kazakhstan. In the more crowded, but cloudy and smoggy areas, solar power will be less practical. An exhibit at the 2014 fair will show models of power stations in space, collecting sunlight by means of huge parabolic focusing devices and radiating the energy thus collected down to earth.

Again, these predictions are at least a half-century premature. In general, futurists need to be much more conservative in their estimates of short-term advancement. It seems like doubling or even tripling the estimated timeline is a reasonable rule of thumb. Anticipate roadblocks, blind alleys, and troubling hurdles, and your estimates will likely be closer to the mark.

Futurism Fallacy #2—Underestimating the degree to which past and current technology persists into the future. Corollary—assuming we will do things differently just because we can.

One particularly ambitious look forward was the 1967 film by the Philco-Ford Corporation imagining the world of 1999, starring a young Wink Martindale. Those thirty-two years were full of advancements with the advent of personal computers, the internet, and essentially a transition to digital technology.

The authors of this film could not see through the thick veil of these technological revolutions, so they relied heavily on their own hidden assumptions, falling for many of the futurism fallacies. They assumed that many aspects of daily life would change simply because future technology would allow for such change. Everything thirty-two years in the future has to be different, right? History has shown, however, that past technology persists into the future to an incredible degree.

In their depiction of a typical day in 1999, even the simple act of drying one’s hands at home had to be done using the most advanced technology possible, such as infrared lights and air blowers. Drying your hands on a towel seems too old school for “the future.” Communicating from the next room was done through video. In this future, all food is stored frozen in individualized portions and heated up in minutes by microwave to serve, replacing all cooking, except perhaps for special occasions. The central computer monitors nutritional and caloric needs and suggests the appropriate menu.

Asimov made similar predictions about cooking in the future (again from his 1964 world’s fair predictions):

Gadgetry will continue to relieve mankind of tedious jobs. Kitchen units will be devised that will prepare “automeals,” heating water and converting it to coffee; toasting bread; frying, poaching or scrambling eggs, grilling bacon, and so on. Breakfasts will be “ordered” the night before to be ready by a specified hour the next morning. Complete lunches and dinners, with the food semiprepared, will be stored in the freezer until ready for processing. I suspect, though, that even in 2014 it will still be advisable to have a small corner in the kitchen unit where the more individual meals can be prepared by hand, especially when company is coming.

The idea that we will be utilizing essentially the same culinary techniques in the future, despite the fact that we’ve been cooking food over heat for millennia, just doesn’t fit a futurist lens. But the reality is we still buy raw vegetables and cut them with knives on a wooden cutting board, and then steam them or stew them in a pot. My modern kitchen and the process I use to cook would be fully recognizable to someone from fifty, even a hundred, years ago depending on the recipe. In fact, I recently purchased some hand-forged kitchen knives. Sure, the appliances are all more efficient, and may have had some incremental functional improvements, but mostly they are the same. The biggest innovation is the microwave oven, which I, like most, use for heating, not for cooking.

Sometimes we do things the old-fashioned way because we want to, or because the simple way is already pretty close to optimal. Sometimes convenience isn’t the most important factor (that is another lazy assumption about the future—that everything is about optimizing convenience). Recently, several people close to me purchased automatic coffee makers in which each cup is brewed from a prepackaged individual container of ground coffee. This method prioritizes ease, speed, and convenience, and these coffee makers became very popular. But after a couple of years there was a backlash, and all it took was being exposed again to a really well-brewed cup of coffee. Combined with environmental concerns over the waste of all the plastic used in those individualized packages, suddenly the swill they were drinking out of convenience simply wasn’t good enough.

Now many of them (I don’t drink coffee, so I watched this play out from the sidelines) have swung back to the other end of the spectrum, prioritizing quality. They grind their beans fresh and may go through an elaborate process such as slowly pouring boiling water over those grounds in search of the perfect cup of coffee. They enjoy the ritual, and it builds their anticipation of flavorful enjoyment.

Old technology can be remarkably persistent. We still burn coal for energy. Our world is still largely made out of wood, stone, steel, ceramic, and concrete—all materials that have been used for thousands of years. Plastic is probably the one new material that has shaped our modern world, but not everything is made from plastic just because it can be.

None of this is to downplay the truly transformational technologies that make up our modern world and have changed our lives. But the future always seems to be a complex blend of the new and the old. The trick is predicting which things will change, and which will substantially stay the same.

Futurism Fallacy #3—Assuming there is one pattern of technological change or adoption. Rather, the future will be multifaceted.

Sometimes new technologies completely fail and fade away (like the gas turbine engine), sometimes they are adopted but fill a smaller niche than initially assumed (like microwave ovens), and sometimes they completely replace (except for nostalgic or historical purposes) the previous technology (like cars did to the horse and buggy). There is no one pattern.

We must also recognize that there are many competing concerns, and this is often why it is extremely difficult to predict how a new technology will play out. Convenience is not everything, and we do not widely adopt new technologies just because they are new. In addition, there are considerations of cost, quality, durability, aesthetics, fashion, culture, safety, and environmental effects. Even the concept of convenience itself can be multifaceted.

What tends to happen is that many technologies exist in parallel, each finding its niche where these combinations of factors make the most sense. I am writing this book on my computer, but sometimes I take notes by writing them down on a piece of paper. It’s just more convenient for some applications. Sometimes I listen to books on audio, sometimes I read them as ebooks, and sometimes I like the feel of a physical book in my hands.

We still use natural wood in home construction because of its cost and how easy it is to work with, and for a desired aesthetic. In fact, antiques have a high value partly for their rustic or quaint appearance in home interiors. Conversely, I may spring for artificial wood for my deck because of its weather resistance and lower maintenance.

I drive to work in a car that would mostly seem ordinary to a driver from the 1950s, but they would likely be blown away by my GPS and entertainment system. And yet, I still usually just listen to news on the radio.

Futurism Fallacy #4—Anticipating the end of history.

In his book Predicting the Future, Nicholas Rescher points to a tendency to assume “the end of history”—that society reaches its equilibrium point, and once it is achieved we have endless peace and prosperity. In this utopian future, not just convenience but also leisure becomes everything. Well, history doesn’t stop; at least it hasn’t so far.

A good example of this fallacy is Looking Forward to the Future, a 1920s film about the twenty-first century set after the “war to end all wars,” imagining never-ending peace and prosperity with increasing leisure time. It predicted people wearing electric belts to control their climate. Men (not women) would be outfitted with a utility belt containing “telephone, radio, and containers for coins, keys, and candy for cuties.” Planes would be enormous, designed like luxury cruise ships, with lounges, dining areas, and activities. The farther back in time we go, the more outlandish our present “future” becomes.

This fallacy mostly stems from a lack of imagination—thinking that all of our current problems will be solved by technological advancements. Once that happens, we will have achieved a stable utopia. But what always has happened so far is that as we solve one problem, new problems emerge. Even the technology we develop to make our lives better can come with a suite of its own challenges—new resources become precious, power shifts, and new conflicts arise. When past futurists looked at the advent of the internal combustion engine, they did not imagine the challenges of global warming, or the rise of power centers in the deserts of the Middle East.

History does not end; it just keeps churning.

Futurism Fallacy #5—Extrapolating current motivations and priorities into the future. Corollary—it’s still not all about leisure and convenience.

This assumption of increased leisure time was not unreasonable a century ago, as the industrial revolution really did free developed societies from previous crushing drudgery. Machines taking over many of the worst repetitive, time-consuming, and dangerous tasks was a defining feature of that era. It stands to reason that they would extrapolate that trend into the future, so they did. This assumes that current trends will continue indefinitely into the future, but they rarely do.

In the United States, for example, the forty-hour workweek did not come about because of technological advancement. In fact, industrial factories made workers more productive, and therefore their hours of work more valuable. The forty-hour workweek was the result of a labor fight waged for over a century, finally achieved by federal law in 1940. Since then, the workweek has been stable, though it has been increasing recently with the rise of new forms of contract work that fall outside these regulations.



    “Steve Novella and crew are serious, knowledgeable nerds. They use their critical thinking skills to analyze what is and isn’t possible . . . or likely. Along with some remarkable insights, they’ve written a little science-based fiction of their own that is a delight to read. It’s a well-researched, well-reasoned reference book.”—Bill Nye, CEO, the Planetary Society
  • "[A]n entertaining evaluation of futurism. . . The result is pop science done right."—Publishers Weekly, starred review
  • "A gimlet-eyed look at the promises of technology and futurists past. . . An intriguing if bet-hedging work of futurology that calls into question the whole business of futurology itself."—Kirkus Reviews
  • "A fun overview of both the current state of modern science and a general survey of the history of futurism."—Booklist
  • "I think that anyone who has been fascinated with the future, as I assume most people have at one time or another, should check out The Skeptics' Guide to the Future. . . [I]t gives such an interesting take and view in which we should all look at the future. The realist perspective created more hope in me because it felt like the trajectory presented was achievable because of the building blocks already around."—Cosmic Circus

    "A lively, engaging, and very timely guide to navigating a world rife with misinformation and pseudoscience. This book will give you the tools to ferret out nonsense and confront your own biases, and hopefully change a few minds along the way."—Jennifer Ouellette, author of Me, Myself, and Why and The Calculus Diaries
  • "[Novella] pulls no punches in his attack on the misinformation, myths, and biases that surround us. Aided here by several writing associates, the author demonstrates his vast experience explaining the mechanisms of deception and the tactics used by pseudoscientists. Presented as 'one giant inoculation against bad science, deception, and faulty thinking,' the book succeeds superbly."—Kirkus (starred review)
  • "Empowering and illuminating, this thinker's paradise is an antidote to spreading anti-scientific sentiments. Readers will return to its ideas again and again."—Publishers Weekly (starred review)

On Sale
Sep 27, 2022
Page Count
432 pages

Dr. Steven Novella

About the Author

Dr. Steven Novella is an academic clinical neurologist at Yale University School of Medicine and is host and producer of "The Skeptics’ Guide to the Universe" (SGU). He also co-hosts "Alpha Quadrant 6," a science-fiction review show. He is the author of the bestselling book The Skeptics’ Guide to the Universe: How to Know What’s Really Real in a World Increasingly Full of Fake. Dr. Novella has made multiple appearances on NPR’s All Things Considered and is a frequent guest on radio talk shows and science podcasts. His television credits include The Dr. Oz Show, Penn & Teller: Bullshit!, 20/20, Inside Edition, The History Channel, The Unexplained on A&E, Ricki Lake, and Exploring the Unknown. When not podcasting, he authors the popular and award-winning NeuroLogica blog and is senior editor of Science-Based Medicine, an influential medical blog dedicated to issues of science and medicine. Dr. Novella is the founder and president of the New England Skeptical Society, a fellow of the Committee for Skeptical Inquiry (CSI), and founding chairman of the Institute for Science in Medicine.

Bob Novella is a co-host of SGU and co-author of The Skeptics’ Guide to the Universe: How to Know What’s Really Real in a World Increasingly Full of Fake. He also blogs for SGU’s Rogues Gallery. Bob is founder and vice president of the New England Skeptical Society. He has written numerous articles that are widely published in skeptical literature and is a frequent guest on science and technology podcasts.

Jay Novella is a co-host of the SGU podcast, Chief Operations Officer at SGU Productions, and co-author of the bestselling book The Skeptics’ Guide to the Universe: How to Know What’s Really Real in a World Increasingly Full of Fake. Jay serves on the board of directors of the Northeast Conference on Science and Skepticism (NECSS), a yearly conference now in its 14th year. He is also a producer and writer for the stage show A Skeptical Extravaganza of Special Significance. In his free time, Jay produces and hosts Alpha Quadrant 6, a science-fiction review show.

Learn more about this author