The Signals Are Talking

Why Today's Fringe Is Tomorrow's Mainstream


By Amy Webb

Formats and Prices






  1. Trade Paperback: $18.99 USD / $23.99 CAD
  2. ebook: $11.99 USD / $15.99 CAD
  3. Audiobook Download (Unabridged)


Amy Webb is a noted futurist who combines curiosity, skepticism, colorful storytelling, and deeply reported, real-world analysis in this essential book for understanding the future. The Signals Are Talking reveals a systematic way of evaluating new ideas bubbling up on the horizon, distinguishing real trends from the merely trendy. This book helps us hear which signals are talking sense and which are simply nonsense, so that we might know today which developments, especially the seemingly random ideas at the fringe that converge and begin to move toward the mainstream, will have long-term consequences for tomorrow.

With the methodology developed in The Signals Are Talking, we learn how to think like a futurist and answer vitally important questions: How will a technology, such as artificial intelligence, machine learning, self-driving cars, biohacking, bots, or the Internet of Things, affect us personally? How will it impact our businesses and workplaces? How will it eventually change the way we live, work, play, and think, and how should we prepare for it now?

Most importantly, Webb persuasively shows that the future isn’t something that happens to us passively. Instead, she allows us to see ahead so that we may forecast what’s to come, challenging us to create our own preferred futures.



The Instructions

A Futurist’s Playbook for Every Organization

WHAT WAS ONCE top-secret military technology has left the domain of government and is now sitting in my living room, with its batteries recharging. I’ve used it to take photos of my daughter’s kindergarten class field trips. It came in handy when I noticed a possible leak in our roof. I flew it after a big winter storm to survey whether my neighborhood streets had been cleared. Realizing they hadn’t, I streamed aerial footage to my neighborhood association and asked that they send a plow.

It’s a drone, and a rather unremarkable one at that. Just like many of the other consumer models available for purchase, it has four propellers and will fly autonomously along my preset waypoints.

In 2015, two drones operated by civilians trying to capture video inadvertently prevented firefighters from putting out a rapidly spreading California wildfire.1 As a result, the fire crossed over onto a freeway and destroyed a dozen vehicles. There were several incidents of drones flying around airports to shoot photos and video, too: in one case reported to the Federal Aviation Administration (FAA),2 a drone just missed the nose of a JetBlue flight. In another, a drone got in the way of a Delta flight trying to land.3

By the end of 2015, the FAA was estimating that a million drones would be sold and given as holiday presents that year4—but neither the FAA nor any other government agency had decided on regulations for how everyday Americans could use them. Close encounters with airplanes prompted conversations about whether or not the airspace should be regulated, which forced drone manufacturers and the aviation industry into uncomfortable conversations, since each has an economic stake in the future of unmanned vehicles (UMVs).

Drones were a fringe technology barreling toward the mainstream, and a lack of planning and foresight pitted dozens of organizations against each other. One proposal from Amazon called for a new kind of drone highway system in the sky, separating commercial UMVs from drones belonging to hobbyists, journalists, and the like. Hobbyists like me would be restricted to flying below an altitude of two hundred feet, while commercial delivery drones—including the fleet Amazon is planning to launch—would gain access to a zone between two hundred and four hundred feet overhead. The rest of the airspace would belong to planes.5

It certainly sounded like a reasonable plan, but it lacked context: namely, emerging trends from adjacent fields. No one involved in the proposals and debate considered how restricting the airspace might affect us in ways that have nothing to do with midair collisions. They dealt with an issue in the present day, but didn’t go through the process of forecasting the likely developments that would intersect with this plan in the future.

Let me walk you through how a futurist would address this problem. Since there are many issues involved, let’s analyze just one plausible scenario that connects a series of unrelated dots. It will, I believe, reveal why focusing on flying altitudes alone, rather than mapping out the full trajectory of drones as a trend, would result in unintended changes in geopolitics and widespread environmental damage in the future.

If commercial drone lanes operate in an altitude range of two hundred to four hundred feet, a new twenty-five-story apartment building might require a special right-of-way easement, which could be costly and tedious to pursue. So it might be easier for architects to start building laterally. But who wants to walk the length of a football field just to get to a morning meeting? As it happens, ThyssenKrupp, a German engineering firm, has invented self-propelled elevators that can travel both horizontally and vertically.6 Rather than taking an elevator up twenty floors, you could take it across the expanse. With these conditions in place, a new kind of building, which I’ll call a “landscraper,” will start to occupy all that empty land covering much of the United States. Environmentalists will protest, arguing that soil displacement will flood local rivers and streams with sediment, killing off the plants that feed the fish, which in turn feed terrestrial wildlife. But if the drone-lane proposal is accepted, we would wind up with busy overhead highways. The only open space would be horizontal.

The result: a necessary shift in how our cities are built and maintained. The change would be felt less in places like New York City, where there is scant open land available, and more in less populated areas between the East and West coasts. Landscrapers would be developed in smaller cities across the Plains and Midwest, helping to catalyze new centers of business and innovation (where Google has started to lay fiber networks). Our thriving urban centers of the future will be San Antonio, Kansas City, and Oklahoma City. Established tax bases, congressional districts, and educational resources would be disrupted. Without proper advance city planning, these new hubs will suffer from traffic jams and a lack of sustainable basic civic resources, such as housing—issues that have already become significant problems in communities like Austin, Texas, and San Jose, Sunnyvale, and Santa Clara, California.

American farmers will be happy to sell their land, destroying big agricultural corporations like Monsanto, DuPont, and Land O’Lakes. Without American farms, we’ll find ourselves less than self-sufficient in food resources and more reliant on agricultural imports, changing the geopolitical power dynamic between the United States and countries such as China, Mexico, India, and Canada, which would become our primary fruit and vegetable providers.

All because in 2015, we thought it would be cool to fly an unmanned vehicle up into the air to take some pictures for our blogs and social feeds.

This future scenario won’t simply arrive, fully formed, as I’ve just described and as a futurist would forecast. Rather, it will evolve slowly over a period of years, and as various pieces fall into place, we would continue to track the trends and recalibrate our strategy. At first, all these developments will seem novel, unconnected, and random, like odd experiments or impossible theories hatched on the fringe of society. Without context, those points can appear disparate, unrelated, and hard to connect meaningfully. (Invisible drone highways in the sky? Landscrapers?) But over time, they will fit into patterns and come into focus: a convergence of multiple points that reveal a direction or tendency, a force that combines some human need and new enabling technology that will shape the future.

Futurists are skilled at listening to and interpreting the signals talking. It’s a learnable skill, and a process anyone can master. Futurists look for early patterns—pre-trends, if you will—as the scattered points on the fringe converge and begin moving toward the mainstream. They know most patterns will come to nothing, and so they watch and wait and test the patterns to find those few that will evolve into genuine trends. Each trend is a looking glass into the future, a way to see over time’s horizon. The advantage of forecasting the future in this way is obvious. Organizations that can see trends early enough to take action have first-mover influence. But they can also help to inform and shape the broader context, conversing and collaborating with those in other fields to plan ahead.

No one should plan for a future she cannot see. Yet that is exactly what’s happening every day in our boardrooms and legislative office buildings. Too often, leaders ignore the signals, wait too long to take action, or plan for only one scenario. Not only will first-movers create new strategies, thought leadership, hacks, or exploits to align with the trend, they are likely developing third and fourth iterations already. As a trend develops and advances, a vast network is being formed, connecting researchers to manufacturers, venture capital money to startups, and consumers with strange new technologies—such as a drone with such a sophisticated onboard computer that it can be sent on reconnaissance missions well past our line of sight. As is often the case with new technologies, those in leadership positions wait until they must to confront the future, which by now has already passed them by.

The paradox of the present is to blame: we are too fearful about the intricacies of technology, safety, and the needs of the various government agencies and equipment manufacturers to think more broadly about how technology like drones might emerge from the fringe to become our future mainstream.

I may be wrong, but I suspect that few, if any, leaders in organizations working on the future of drones today are following a futurist’s playbook, giving thought to traffic congestion in San Antonio, farmers in the Midwest, or our potential dependence on Chinese corn in the world we are creating for tomorrow.


We must dedicate time and effort to planning for the future. However, our fear and rejection of the unknown have been an ongoing thread throughout human history. The fact that we continue to struggle with this problem, from generation to generation, suggests either that Friedrich Nietzsche was right, and that we’re living the exact same life now that we’ve lived an infinite number of times in the past,7 or that we’ve internalized a belief that the future is something that happens to us, rather than something that we, in fact, create.

Our resistance to change is hardwired in the oldest, reptilian portion of our brains, which is located down by the brainstem and cerebellum. It’s that section that’s responsible for our automated vital functions, such as our heart rate and body temperature. It also controls our “fight-or-flight” response, which has preserved and protected humans throughout our evolution. When that system gets overwhelmed with a complex new concept or is forced to make a decision about an unfamiliar topic, it protests by causing us psychological distress, fear, and anxiety. Adrenaline floods our bodies so that we’re physically ready to fight or flee if we need to. Like breathing, our resistance to new technology happens automatically, without thought.

In 1970, social thinker Alvin Toffler theorized about a “future shock” in his groundbreaking book of the same name,8 arguing that the emerging computers and the race to space would cause disorientation and fragmentation within our society. British physicist and Nobel Prize winner Sir George Thomson posited that the nearest parallel of technological changes taking place in the late 1960s to early 1970s wasn’t the Industrial Revolution, but instead the “invention of agriculture in the Neolithic age.”9 At that same time, John Diebold, the American automation pioneer, warned that “the effects of the technological revolution we are now living through will be deeper than any social change we have experienced before.”10

Adapting to big, sweeping disruption or taking risks on unproven technology causes that part of our lower brains to kick into gear. It’s more comfortable for us to make incremental changes—we trick ourselves into feeling as though we’ve challenged the status quo in preparation for the future, without all that reptilian distress.

Our reptilian brains sometimes tempt us into denying that change is afoot in any meaningful way. Many prominent thinkers would disagree that this is the first time in human history when real, fundamental change is taking place within a single generation, and the driving force is technology. For example, economist Robert Gordon argued in The Rise and Fall of American Growth that our greatest innovations occurred between 1870 and 1970, and that that era’s level of American ingenuity and productivity cannot be repeated.11 Those one hundred years ushered in life-altering change that was immediately observable and uncomplicated: the discovery of penicillin eradicated many bacterial infections; Henry Ford’s assembly-line production brought automobiles to the masses; submarines took warfare below the oceans; robotic equipment replaced humans in factories; radio delivered the news in every American’s living room.

And yet, compared to that time period, the advancements of today are orders of magnitude more nuanced and complex, and without intentional effort, they are difficult to see. Take, for example, the quantum computer. This is an entirely new kind of system capable of solving problems that are computationally too difficult for our existing machines. The computer at your home or office can only process binary information expressed as 1s and 0s. In a quantum computer, each unit of information, called a qubit, can exist as a 1 and a 0 at once, allowing computations to be made in parallel. If you build two qubits, they hold four values simultaneously: 00, 01, 10, and 11.12
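To make the two-qubit idea concrete, here is a minimal sketch of my own (an illustration, not anything from the book or a real quantum machine) in plain Python. It puts two simulated qubits into equal superposition and lists the amplitude attached to each of the four two-bit values:

```python
from math import sqrt

# Illustrative sketch: a qubit in equal superposition has amplitude
# 1/sqrt(2) for 0 and 1/sqrt(2) for 1 (the result of a Hadamard gate on |0>).
amp = 1 / sqrt(2)
one_qubit = [amp, amp]  # [amplitude of 0, amplitude of 1]

# Combining two qubits (a tensor product) yields one amplitude
# per two-bit value: 00, 01, 10, and 11, all held at once.
state = {f"{i}{j}": one_qubit[i] * one_qubit[j] for i in (0, 1) for j in (0, 1)}

for bits, a in state.items():
    # The squared amplitude is the probability of measuring that value.
    print(bits, round(a * a, 2))  # each value comes out at 0.25
```

The point of the sketch is simply that two qubits carry four amplitudes at once, where two classical bits carry a single value.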

When a programmer needs to debug a conventional system, she can write code that copies and extracts the values from the relevant 1s and 0s. It’s straightforward. In a quantum system, those 1s and 0s exist in combinations, and the very act of trying to observe that data as it is in transit changes its nature. Yes, quantum machines are computers—but they’re not like any computer you’ve seen before. Not in the way they look, or in how they operate, or in the functions they can perform.

You may never see a quantum computer, and even if you do, it will appear rather unremarkable—in the present day, it looks like a big enclosed server rack. The only remarkable aesthetic change in the farther future is that it will shrink in physical size. But you will benefit from the technology nonetheless: quantum computing will be used for encrypting your personal data and your credit card number when you’re shopping, in figuring out how to extract pollution from the air, and in designing new personalized drugs and predicting the spread of future public health epidemics.

A generation ago, a single computer took up an entire room—and Pluto was still a planet floating in a theoretical icy belt system beyond the orbit of Neptune. Today, you have access to more computing power in your little smartphone than all of the National Aeronautics and Space Administration (NASA) did when it sent Neil Armstrong, Buzz Aldrin, and Michael Collins to the moon. Your smartphone seems pedestrian because you are only exposed to the final product—you don’t see the underlying technology that powers it, and how that tech is evolving independent of the device itself. Yes, we send Tweets to reality TV shows. We Instagram no-makeup selfies. We allow our phones to track and monitor our levels of fitness, our whereabouts, our vital signs. And then we share our personal information with whoever’s interested, even with complete strangers whom we will never meet.

Just as many people discounted that early internet-connected phone I described in the Introduction, you may be tempted to argue that our smartphones are toys that cannot be compared to putting humans on the moon—not technological breakthroughs. However, the very technology that’s in your phone is being used to fundamentally alter the operations of most businesses, to perform life-saving medical tests in remote areas, and to change our political ideas and worldviews.

One of the reasons you don’t recognize this moment in time as an era of great transformation is because it’s hard to recognize change. Another reason: novelty has become the new normal. The pace of change has accelerated, as we are exposed to and adopt new technologies with greater enthusiasm and voracity each year. Consider the washing machine, a groundbreaking new technological innovation when it was introduced in the early 1900s. It took nearly three decades for more than 50 percent of Americans to buy them for their homes.13 In 1951, CBS broadcast the “Premiere,” the first show in color,14 and within fifteen years the majority of households had abandoned their black-and-white sets.15 Between 2007, when the first-generation iPhone was released, and 2015, more than 75 percent of Americans bought some kind of smartphone.16 In fact, 7 percent of us have now abandoned our landlines and traditional broadband services altogether.17

The year Toffler’s Future Shock was published, about 7,000 new products appeared on America’s supermarket shelves. Fifty-five percent of them hadn’t existed a decade earlier.18 In 2014, 22,252 projects were successfully funded on Kickstarter.19 One of them came from a guy with an idea for a computerized watch, the Pebble. He raised $10 million from 69,000 individual backers and forced big, established companies like Apple and Samsung to hurry up and get their own products to market.20

We’ve even had to invent a new term for all the tech startups crossing the billion-dollar valuation threshold: “unicorns,” because investments on that scale had previously been just a myth. By mid-2015 there were 123 unicorns, with a total cumulative valuation of $469 billion.21 To put that incomprehensible number into perspective, Uber’s $51 billion valuation was equal at that time to the gross domestic product (GDP) of Croatia.22

The gravitational pull toward what’s new, what’s now, and what’s next has left us in a constant state of fight-or-flight. Paradoxically, we both worry about and look forward to the latest gadgets and tools. Overwhelmed with the sheer amount of new shiny objects, we don’t take the necessary step back to connect all the dots and to ask: How does one technology influence the other? What’s really going on? Are we missing a bigger and more important trend? What trajectory are we on, and does it make sense? These are questions futurists think about all the time. But when it comes to organizations, it’s only after a fringe technology moves into the mainstream that we suddenly raise concerns, attempt to join in, or realize it’s too late—and that an industry has been upended.

Because we lack this necessary dialogue on future forecasting, when it comes to technology-driven change, organizations are philosophically schizophrenic, arguing for and against contradictory positions. We may have initially lambasted Edward Snowden, who in 2013 leaked classified documents about cybersecurity and digital surveillance through the press, but with some distance has come appreciation. Political leaders, news organizations, and everyday people at one point called for Snowden’s arrest (and worse). Then we changed our minds. In a January 2014 editorial, the New York Times editorial board wrote: “Considering the enormous value of the information he has revealed, and the abuses he has exposed, Mr. Snowden deserves better than a life of permanent exile, fear and flight. He may have committed a crime to do so, but he has done his country a great service. . . . In retrospect, Mr. Snowden was clearly justified in believing that the only way to blow the whistle on this kind of intelligence-gathering was to expose it to the public and let the resulting furor do the work his superiors would not.”23

We don’t suffer from the “future shock” that Toffler warned us about as much as we suffer from ongoing disorientation. We are bewildered at the implications of technology because technology is becoming more pervasive in our everyday lives. From biohacking our genomes to robots that can repair themselves, it’s becoming more and more difficult to make informed decisions about the future.

But decisions must be made, and either subconsciously or with dedicated effort, each one of us is making thousands of them every single day, including two hundred on food alone.24 Which app should you build? Which new innovation should you try? Which startup should you back? In which direction should you pivot? Those are in addition to the more quotidian decisions, like which movie to watch on Netflix, what song to stream on Spotify, what dinner entrée to order from Seamless, or which one of the 2,767 versions of the board game Monopoly to order from Amazon.25

We’ve made a devil’s pact, swapping convenience and efficiency for an ever-increasing tyranny of information and choice. Technology has forced us either to make poor decisions or to make none at all, and it is causing, or will eventually lead to, cataclysmic, unwelcome disruption. During this period of intense technological change, we focus too narrowly on value chains rather than thinking about how what we’re doing fits into the bigger ecosystem.


As we marvel at the prospects of genomic editing, self-driving cars, and humanoid companions, we have to keep in mind that our present-day reality binds us to a certain amount of perceptual bias. Fight-or-flight may have kept our prehistoric ancestors from getting eaten by a saber-toothed tiger, but over time it has stunted our unique ability to daydream about and plan for a better future.

Without a guided process, we fall victim to the paradox of the present. We have a hard time seeing the future because we lack a shared point of reference rooted in our present circumstances. How could you explain to a Sicilian living through the plague in the Middle Ages that in just a few hundred years, not only would we have invented a simple shot to cure us of many diseases, but robots and lasers would help doctors perform open heart surgery? How could you have explained to Henry Ford, as he sent his first Model T through an assembly line, that his grandchildren would see the advent of self-driving, computerized, battery-powered cars? Do you think that in 1986, as Toyota’s fifty-millionth car came off the line,26 company chairman Eiji Toyoda would have believed that within a few decades the four biggest car companies wouldn’t be Toyota, Honda, General Motors, and Mazda, but instead Tesla, Google, Apple, and Uber? How could you articulate the concept of quantum computing—that the same information could both exist and not exist within a computer simultaneously—to Ada Lovelace, when she wrote the first algorithm ever carried out by a machine?

Without instructions as a guide, we face the same perceptual bias as all of the generations who came before us; we have a difficult time seeing how not only the far future will unfold but the near future as well. Organizations, communities, and we as individuals must cope with hundreds of first-time situations driven by technology at a pace unmatched in any other time in history. We experience these micro-moments on a near-daily basis: new mobile apps, new wearable fitness devices, new hacks, new ways to harass others on social media, new directives in how to “binge watch” the latest show on Netflix.

Novelty is the new normal, making it difficult for us to understand the bigger picture. We now inhabit a world where most of the information that has ever existed is less than ten years old. From the beginnings of human civilization until 2003, five exabytes of data were created. We are now creating five exabytes of data every two days.27 In fact, in the minute it took you to read that last sentence, 2.8 million pieces of content were shared on Facebook alone.28 On Instagram, 250,000 new photos were posted.29

A lack of information isn’t what is preventing us from seeing the future. Searching for “drone” on the visible web (the searchable, indexed part) returns 142 million results.30 There are hundreds of thousands of forum posts, spreadsheets, and comments about it on the hidden web, too—the deeper layers of the internet that do not show up on searches for a variety of reasons (they require a password, they can only be accessed using special software, they’re peer-to-peer networks, or they lack the code necessary for a search engine crawler to discover them). The Washington Post published 717 stories about drones during 2015 alone.31 The Brookings Institution published 65 white papers, op-eds, and blog posts about drones during that same time period.32 Barraged with ever more information, we must now interpret all this new knowledge and data we’re being fed and figure out how to make all of it useful. Exposure to more information tends to confuse rather than inform us. Thousands of drones are being flown all around the country. Lawmakers have access to plenty of information, and yet they don’t have a plan for the future.

Information overload hampers our ability to understand novelty when we see it. This tendency is especially pronounced when it comes to technology, where exciting new products launch daily. Joost, a much-hyped video service called a “YouTube killer” by tech reporters, raised $45 million in venture capital before launch.33 Color, a photo-sharing app created by two charismatic, popular denizens of Silicon Valley, raised $41 million as a prelaunch tech startup.34 AdKeeper raised $43 million before launch, billing itself as a new kind of digital coupon clipping service.35

In all three cases, the founders promised something unique. But novelty is a distraction, not a clear trend worth tracking. Joost’s investors lost all their money—the timing for streaming video wasn’t right in 2006. Color was a confusing product that consumers didn’t understand and that tech bloggers hated. AdKeeper’s pitch sounded interesting, but in practice no one wanted to save the banner ads they saw online. That’s $129 million in investment that evaporated, and I’ve only given you three examples.

The paradox of the present impairs our judgment when we’re looking for far- and near-future technologies. If we’re not mistaking trendy apps for bona fide trends, then the paradox tricks us into mistaking a wave of disruption as a once-in-a-lifetime occurrence, so we dismiss that disruption as a novel circumstance—when it’s anything but.



  • "Webb teaches us to listen...[she] combines well-researched, reader-friendly insights on Google, drones and artificial intelligence with a system of questions you can bring to your next strategy meeting..."—Chicago Tribune
  • "Will undoubtedly help leaders contemplate what lies ahead. Webb provides a logical way to sift through today's onslaught of events and information to spot coming changes in your corner of the world."—Kirkus Reviews
  • "[The Signals Are Talking] provides several brain-bending future possibilities...Webb's stellar reputation in this red-hot field should generate demand."—Booklist
  • "At this moment, it seems obvious that we could all stand to brush up on our skills as prognosticators. And not just so we can avoid being blindsided by seismic elections, but because technology promises to continue its disruptive march through our societies and economies. What will cabbies do when cars are self-driving, and what will warehouse workers do when robots can pick, pack, and ship without lunch breaks and health care benefits? Forget NAFTA; the shift is toward Silicon Valley. But where to start? The Signals Are Talking: Why Today's Fringe Is Tomorrow's Mainstream is a good place. Sitting somewhere between Nate Silver and The Tipping Point, Amy Webb's book provides a practical guide for leaders - at any level - in the age of Big Data, offering tools for picking out the 'true signal, a pattern that will coalesce into a trend with the potential to change everything' - and land on the right side of disruption."—Jon Foro, The Amazon Book Review (An Amazon Best Book of December 2016)
  • "The clear, insightful, and humorous Amy Webb has crafted a rare treasure: a substantive guide written in a narrative that's a delight to read. While most futurologists want guru status through a few Nostradamus-like visions that never materialize, Webb modestly reports with depth and discipline, and creates a system and tools we can all use to better navigate the future. Through her deep research, specific anecdotes and brilliant insights, she has performed the selfless but hugely valuable act of teaching us all to fish at the fringe."—Christopher J. Graves, Global Chair, Ogilvy Public Relations
  • "Amy Webb, with insight and a big dose of pragmatism, shows how to clearly see the next big disruption and then take action before it strikes."—Ram Charan, advisor to CEOs and corporate boards, author of The Attacker's Advantage and co-author of Execution: The Discipline of Getting Things Done
  • "Forecasting the future is a challenging—and absolutely necessary—part of every leader's job. In this ambitious and timely book Amy Webb shows not only how to identify actual trends and surprises emerging from the fringes but—even more important—how to do something about them so you can thrive in the face of the unexpected."—Craig Newmark, founder of Craigslist
  • "The best leaders will know how to listen for the future. Amy Webb's book tells you the signals to listen for-as well as the noise you should ignore. The signals are talking and leaders should listen."—Bob Johansen, Distinguished Fellow, Institute for the Future, author of Leaders Make the Future
  • "The renowned futurist Amy Webb zeroes in—with clarity, specificity and verve—on the indispensable skill for people in every industry: how to recognize and interpret the clues that reveal the next big thing."—Vivian Schiller, former president and CEO, National Public Radio

On Sale
Mar 6, 2018
Page Count
336 pages

Amy Webb

About the Author

Amy Webb advises CEOs of the world’s most-admired companies, three-star admirals and generals, and the senior leadership of central banks and intergovernmental organizations on the future of technology and science. A quantitative futurist, Amy is the CEO of the Future Today Institute, a leading foresight and management consulting firm. She is a professor of strategic foresight at New York University’s Stern School of Business and a Visiting Fellow at Oxford University’s Saïd Business School. She was elected a life member of the Council on Foreign Relations, is a member of the Bretton Woods Committee, and serves as a Steward and Steering Committee member of the World Economic Forum. She was also a Delegate on the former U.S.-Russia Bilateral Presidential Commission, where she worked on the future of technology and international diplomacy. Amy was named by Forbes as “one of the five women changing the world,” honored as one of the BBC’s 100 Women of 2020, and is ranked by Thinkers50 as one of the most influential business minds in the world. Amy is the award-winning author of The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity and The Signals Are Talking: Why Today’s Fringe Is Tomorrow’s Mainstream.
