By Amy Webb
We like to think that we are in control of the future of “artificial” intelligence. The reality, though, is that we — the everyday people whose data powers AI — aren’t actually in control of anything. When, for example, we speak with Alexa, we contribute that data to a system we can’t see and have no input into — one largely free from regulation or oversight. The big nine corporations — Amazon, Google, Facebook, Tencent, Baidu, Alibaba, Microsoft, IBM, and Apple — are the new gods of AI and are short-changing our futures to reap immediate financial gain.
In this book, Amy Webb reveals the pervasive, invisible ways in which the foundations of AI — the people working on the system, their motivations, the technology itself — are broken. Within our lifetimes, AI will, by design, begin to behave unpredictably, thinking and acting in ways that defy human logic. The big nine corporations may be inadvertently building and enabling vast arrays of intelligent systems that don’t share our motivations, desires, or hopes for the future of humanity.
Much more than a passionate, human-centered call-to-arms, this book delivers a strategy for changing course, and provides a path for liberating us from algorithmic decision-makers and powerful corporations.
BEFORE IT’S TOO LATE
Artificial intelligence is already here, but it didn’t show up as we all expected. It is the quiet backbone of our financial systems, the power grid, and the retail supply chain. It is the invisible infrastructure that directs us through traffic, finds the right meaning in our mistyped words, and determines what we should buy, watch, listen to, and read. It is technology upon which our future is being built because it intersects with every aspect of our lives: health and medicine, housing, agriculture, transportation, sports, and even love, sex, and death.
AI isn’t a tech trend, a buzzword, or a temporary distraction—it is the third era of computing. We are in the midst of significant transformation, not unlike the generation who lived through the Industrial Revolution. At the beginning, no one recognized the transition they were in because the change happened gradually, relative to their lifespans. By the end, the world looked different: Great Britain and the United States had become the world’s two dominant powers, with enough industrial, military, and political capital to shape the course of the next century.
Everyone is debating AI and what it will mean for our futures ad nauseam. You’re already familiar with the usual arguments: the robots are coming to take our jobs, the robots will upend the economy, the robots will end up killing humans. Substitute “machine” for “robot,” and we’re cycling back to the same debates people had 200 years ago. It’s natural to think about the impact of new technology on our jobs and our ability to earn money, since we’ve seen disruption across so many industries. It’s understandable that when thinking about AI, our minds inevitably wander to HAL 9000 from 2001: A Space Odyssey, WOPR from WarGames, Skynet from The Terminator, Rosie from The Jetsons, Dolores from Westworld, or any of the other hundreds of anthropomorphized AIs from popular culture. If you’re not working directly inside of the AI ecosystem, the future seems either fantastical or frightening, and for all the wrong reasons.
Those who aren’t steeped in the day-to-day research and development of AI can’t see signals clearly, which is why public debate about AI references the robot overlords you’ve seen in recent movies. Or it reflects a kind of manic, unbridled optimism. The lack of nuance is one part of AI’s genesis problem: some dramatically overestimate the applicability of AI, while others argue it will become an unstoppable weapon.
I know this because I’ve spent much of the past decade researching AI and meeting with people and organizations both inside and outside of the AI ecosystem. I’ve advised a wide variety of companies at the epicenter of artificial intelligence, including Microsoft and IBM. I’ve met with and advised stakeholders on the outside: venture capitalists and private equity managers, leaders within the Department of Defense and State Department, and various lawmakers who think regulation is the only way forward. I’ve also had hundreds of meetings with academic researchers and technologists working directly in the trenches. Rarely do those working directly in AI share the extreme apocalyptic or utopian visions of the future we tend to hear about in the news.
That’s because, like researchers in other areas of science, those actually building the future of AI want to temper expectations. Achieving huge milestones takes patience, time, money, and resilience—this is something we repeatedly forget. They are slogging away, working bit by bit on wildly complicated problems, sometimes making very little progress. These people are smart, worldly, and, in my experience, compassionate and thoughtful.
Overwhelmingly, they work at nine tech giants—Google, Amazon, Apple, IBM, Microsoft, and Facebook in the United States and Baidu, Alibaba, and Tencent in China—that are building AI in order to usher in a better, brighter future for us all. I firmly believe that the leaders of these nine companies are driven by a profound sense of altruism and a desire to serve the greater good: they clearly see the potential of AI to improve health care and longevity, to solve our impending climate issues, and to lift millions of people out of poverty. We are already seeing the positive and tangible benefits of their work across all industries and everyday life.
The problem is that external forces pressuring the nine tech giants—and by extension, those working inside the ecosystem—are conspiring against their best intentions for our futures. There’s a lot of blame to go around.
In the US, relentless market demands and unrealistic expectations for new products and services have made long-term planning impossible. We expect Google, Amazon, Apple, Facebook, Microsoft, and IBM to make bold new AI product announcements at their annual conferences, as though R&D breakthroughs can be scheduled. If these companies don’t present us with shinier products than the previous year, we talk about them as if they’re failures. Or we question whether AI is over. Or we question their leadership. Not once have we given these companies a few years to hunker down and work without requiring them to dazzle us at regular intervals. God forbid one of these companies decides not to make any official announcements for a few months—we assume that their silence implies a skunkworks project that will invariably upset us.
The US government has no grand strategy for AI nor for our longer-term futures. So in place of coordinated national strategies to build organizational capacity inside the government, to build and strengthen our international alliances, and to prepare our military for the future of warfare, the United States has subjugated AI to the revolving door of politics. Instead of funding basic research into AI, the federal government has effectively outsourced R&D to the commercial sector and the whims of Wall Street. Rather than treating AI as an opportunity for new job creation and growth, American lawmakers see only widespread technological unemployment. In turn they blame US tech giants, when they could invite these companies to participate in the uppermost levels of strategic planning (such as it exists) within the government. Our AI pioneers have no choice but to constantly compete with each other for a trusted, direct connection with you, me, our schools, our hospitals, our cities, and our businesses.
In the United States, we suffer from a tragic lack of foresight. We operate with a “nowist” mindset, planning for the next few years of our lives more than any other timeframe. Nowist thinking champions short-term technological achievements, but it absolves us from taking responsibility for how technology might evolve and for the next-order implications and outcomes of our actions. We too easily forget that what we do in the present could have serious consequences in the future. Is it any wonder, therefore, that we’ve effectively outsourced the future development of AI to six publicly traded companies whose achievements are remarkable but whose financial interests do not always align with what’s best for our individual liberties, our communities, and our democratic ideals?
Meanwhile, in China, AI’s developmental track is tethered to the grand ambitions of government. China is quickly laying the groundwork to become the world’s unchallenged AI hegemon. In July 2017, the Chinese government unveiled its Next Generation Artificial Intelligence Development Plan to become the global leader in AI by the year 2030 with a domestic industry worth at least $150 billion,1 which involved devoting part of its sovereign wealth fund to new labs and startups, as well as new schools launching specifically to train China’s next generation of AI talent.2 In October of that same year, China’s President Xi Jinping explained his plans for AI and big data during a detailed speech to thousands of party officials. AI, he said, would help China transition into one of the most advanced economies in the world. Already, China’s economy is 30 times larger than it was just three decades ago. Baidu, Tencent, and Alibaba may be publicly traded giants, but typical of all large Chinese companies, they must bend to the will of Beijing.
China’s massive population of 1.4 billion citizens puts it in control of the largest, and possibly most important, natural resource in the era of AI: human data. Voluminous amounts of data are required to refine pattern recognition algorithms—which is why the face recognition systems built by Chinese companies like Megvii and SenseTime are so attractive to investors. All the data that China’s citizens are generating as they make phone calls, buy things online, and post photos to social networks are helping Baidu, Alibaba, and Tencent to create best-in-class AI systems. One big advantage for China: it doesn’t have the privacy and security restrictions that might hinder progress in the United States.
We must consider the developmental track of AI within the broader context of China’s grand plans for the future. In April 2018, Xi gave a major speech outlining his vision of China as the global cyber superpower. China’s state-run Xinhua news service published portions of the speech, in which he described a new cyberspace governance network and an internet that would “spread positive information, uphold the correct political direction, and guide public opinion and values towards the right direction.”3 The authoritarian rules China would have us all live by are a divergence from the free speech, market-driven economy, and distributed control that we cherish in the West.
AI is part of a series of national edicts and laws that aim to control all information generated within China and to monitor the data of its residents as well as the citizens of its various strategic partners. One of those edicts requires all foreign companies to store Chinese citizens’ data on servers within Chinese borders. This allows government security agencies to access personal data as they wish. Another initiative—China’s Police Cloud—was designed to monitor and track people with mental health problems, those who have publicly criticized the government, and a Muslim ethnic minority called the Uighurs. In August 2018, the United Nations said that it had credible reports that China had been holding millions of Uighurs in secret camps in the far western region of China.4 China’s Integrated Joint Operations Platform uses AI to detect pattern deviations—to learn whether someone has been late paying bills. An AI-powered Social Credit System, according to a slogan in official planning documents, was developed to engineer a problem-free society by “allow(ing) the trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step.”5 To promote “trustworthiness,” citizens are rated on a number of different data points, like heroic acts (points earned) or traffic tickets (points deducted). Those with lower scores face hurdles applying for jobs, buying a home, or getting kids into schools. In some cities, high-scoring residents have their pictures on display.6 In other cities, such as Shandong, citizens who jaywalk have their faces publicly shared on digital billboards and sent automatically to Weibo, a popular social network.7 If all this seems too fantastical to believe, keep in mind that China once successfully instituted a one-child policy to forcibly cull its population.
These policies and initiatives are the brainchild of President Xi Jinping’s inner circle, which for the past decade has been singularly focused on rebranding and rebuilding China into the world’s predominant superpower. China is more authoritarian today than under any leader since Chairman Mao Zedong, and advancing and leveraging AI are fundamental to the cause. The Belt and Road Initiative is a massive geoeconomic strategy masquerading as an infrastructure plan following the old Silk Road routes that connected China with Europe via the Middle East and Africa. China isn’t just building bridges and highways—it’s exporting surveillance technology and collecting data in the process as it increases the CCP’s influence around the world in opposition to our current liberal democratic order. The Global Energy Interconnection is yet another national strategy championed by Xi that aims to create the world’s first global electricity grid, which China would manage. China has already figured out how to scale a new kind of ultra-high-voltage cable technology that can deliver power from the far western regions to Shanghai—and it’s striking deals to become a power provider to neighboring countries.
These initiatives, along with many others, are clever ways to gain soft power over a long period of time. It’s a brilliant move by Xi, whose political party voted in March 2018 to abolish term limits, effectively allowing him to remain president for life. Xi’s endgame is abundantly clear: to create a new world order in which China is the de facto leader. And yet during this time of Chinese diplomatic expansion, the United States inexplicably turned its back on longstanding global alliances and agreements as President Trump erected a new bamboo curtain.
The future of AI is currently moving along two developmental tracks that are often at odds with what’s best for humanity. China’s AI push is part of a coordinated attempt to create a new world order led by President Xi, while market forces and consumerism are the primary drivers in America. This dichotomy is a serious blind spot for us all. Resolving it is the crux of our looming AI problem, and it is the purpose of this book. The Big Nine companies may be after the same noble goals—cracking the code of machine intelligence to build systems capable of humanlike thought—but the eventual outcome of that work could irrevocably harm humanity.
Fundamentally, I believe that AI is a positive force, one that will elevate the next generations of humankind and help us to achieve our most idealistic visions of the future.
But I’m a pragmatist. We all know that even the best-intentioned people can inadvertently cause great harm. Within technology, and especially when it comes to AI, we must continually remember to plan for both intended use and unintended misuse. This is especially important today and for the foreseeable future, as AI intersects with everything: the global economy, the workforce, agriculture, transportation, banking, environmental monitoring, education, the military, and national security. This is why, if AI stays on its current developmental tracks in the United States and China, the world of 2069 could look vastly different from the world of 2019. As the structures and systems that govern society come to rely on AI, we will find that decisions being made on our behalf make perfect sense to machines—just not to us.
We humans are rapidly losing our awareness just as machines are waking up. We’ve started to pass some major milestones in the technical and geopolitical development of AI, yet with every new advancement, AI becomes more invisible to us. The ways in which our data is mined and refined are less obvious, while the ways autonomous systems reach their decisions grow harder to understand. We have, therefore, a chasm in understanding of how AI is impacting daily life in the present, one growing exponentially as we move years and decades into the future. Shrinking that distance as much as possible through a critique of the developmental track that AI is currently on is my mission for this book. My goal is to democratize the conversations about artificial intelligence and make you smarter about what’s ahead—and to make the real-world future implications of AI tangible and relevant to you personally, before it’s too late.
Humanity is facing an existential crisis in a very literal sense, because no one is addressing a simple question that has been fundamental to AI since its very inception: What happens to society when we transfer power to a system built by a small group of people that is designed to make decisions for everyone? What happens when those decisions are biased toward market forces or an ambitious political party? The answer is reflected in the future opportunities we have, the ways in which we are denied access, the social conventions within our societies, the rules by which our economies operate, and even the way we relate to other people.
This is not a book about the usual AI debates. It is both a warning and a blueprint for a better future. It questions our aversion to long-term planning in the US and highlights the lack of AI preparedness within our businesses, schools, and government. It paints a stark picture of China’s interconnected geopolitical, economic, and diplomatic strategies as it marches on toward its grand vision for a new world order. And it asks for heroic leadership under extremely challenging circumstances. Because, as you’re about to find out, our futures need a hero.
What follows is a call to action written in three parts. In the first, you’ll learn what AI is and the role the Big Nine have played in developing it. We will also take a deep dive into the unique situations faced by America’s Big Nine members and by Baidu, Alibaba, and Tencent in China. In Part II, you’ll see detailed, plausible futures over the next 50 years as AI advances. The three scenarios you’ll read range from optimistic to pragmatic and catastrophic, and they will reveal both opportunity and risk as we advance from artificial narrow intelligence to artificial general intelligence to artificial superintelligence. These scenarios are intense—they are the result of data-driven models, and they will give you a visceral glimpse at how AI might evolve and how our lives will change as a result. In Part III, I will offer tactical and strategic solutions to all the problems identified in the scenarios along with a concrete plan to reboot the present. Part III is intended to jolt us into action, so there are specific recommendations for our governments, the leaders of the Big Nine, and even for you.
Every person alive today can play a critical role in the future of artificial intelligence. The decisions we make about AI now—even the seemingly small ones—will forever change the course of human history. As the machines awaken, we may realize that in spite of our hopes and altruistic ambitions, our AI systems turned out to be catastrophically bad for humanity.
But they don’t have to be.
The Big Nine aren’t the villains in this story. In fact, they are our best hope for the future.
Turn the page. We can’t sit around waiting for whatever might come next. AI is already here.
Ghosts in the Machine
MIND AND MACHINE: A VERY BRIEF HISTORY OF AI
The roots of modern artificial intelligence extend back hundreds of years, long before the Big Nine were building AI agents with names like Siri, Alexa, and their Chinese counterpart Tiān Māo. Throughout that time, there has been no single definition of AI, as there is for other technologies. Describing AI concretely isn’t easy, because AI represents many things, and the field is still growing. What passed as AI in the 1950s—a calculator capable of long division—hardly seems like an advanced piece of technology today. This is what’s known as the “odd paradox”—as soon as new techniques are invented and move into the mainstream, they become invisible to us. We no longer think of that technology as AI.
In its most basic form, artificial intelligence is a system that makes autonomous decisions. The tasks AI performs duplicate or mimic acts of human intelligence, like recognizing sounds and objects, solving problems, understanding language, and using strategy to meet goals. Some AI systems are enormous and perform millions of computations quickly—while others are narrow and intended for a single task, like catching foul language in emails.
We’ve always circled back to the same set of questions: Can machines think? What would it mean for a machine to think? What does it mean for us to think? What is thought? How could we know—definitively, and without question—that we are actually thinking original thoughts? These questions have been with us for centuries, and they are central to both AI’s history and future.
The problem with investigating how both machines and humans think is that the word “think” is inextricably connected to “mind.” The Merriam-Webster Dictionary defines “think” as “to form or have in the mind,” while the Oxford Dictionary explains that it means to “use one’s mind actively to form connected ideas.” If we look up “mind,” both Merriam-Webster and Oxford define it within the context of “consciousness.” But what is consciousness? According to both, it’s the quality or state of being aware and responsive. Various groups—psychologists, neuroscientists, philosophers, theologians, ethicists, and computer scientists—approach the concept of thinking in markedly different ways.
When you use Alexa to find a table at your favorite restaurant, you and she are both aware and responsive as you discuss eating, even though Alexa has never felt the texture of a crunchy apple against her teeth, the effervescent prickles of sparkling water against her tongue, or the gooey pull of peanut butter against the roof of her mouth. Ask Alexa to describe the qualities of these foods, and she’ll offer you details that mirror your own experiences. Alexa doesn’t have a mouth—so how could she perceive food the way that you do?
You are a biologically unique person whose salivary glands and taste buds aren’t arranged in exactly the same order as mine. Yet we’ve both learned what an apple is and the general characteristics of how an apple tastes, what its texture is, and how it smells. During our lifetimes, we’ve learned to recognize what an apple is through reinforcement learning—someone taught us what an apple looked like, its purpose, and what differentiates it from other fruit. Then, over time and without conscious awareness, our autonomous biological pattern recognition systems got really good at determining something was an apple, even if we only had a few of the necessary data points. If you see a black-and-white, two-dimensional outline of an apple, you know what it is—even though you’re missing the taste, smell, crunch, and all the other data that signals to your brain this is an apple. The way you and Alexa both learned about apples is more similar than you might realize.
Alexa is competent, but is she intelligent? Must her machine perception meet all the qualities of human perception for us to accept her way of “thinking” as an equal mirror to our own? Educational psychologist Dr. Benjamin Bloom spent the bulk of his academic career researching and classifying the states of thinking. In 1956, he published what became known as Bloom’s Taxonomy, which outlined learning objectives and levels of achievement observed in education. The foundational layer is remembering facts and basic concepts, followed in order by understanding ideas; applying knowledge in new situations; analyzing information by experimenting and making connections; evaluating, defending, and judging information; and finally, creating original work. As very young children, we are focused first on remembering and understanding. For example, we first need to learn that a bottle holds milk before we understand that that bottle has a front and back, even if we can’t see it.
This hierarchy is present in the way that computers learn, too. In 2017, an AI system called Amper composed and produced original music for an album called I AM AI. The chord structures, instrumentation, and percussion were developed by Amper, which used initial parameters like genre, mood, and length to generate a full-length song in just a few minutes. Taryn Southern, a human artist, collaborated with Amper to create the album—and the result included a moody, soulful ballad called “Break Free” that counted more than 1.6 million YouTube views and was a hit on traditional radio. Before Amper could create that song, it had to first learn the qualitative elements of a big ballad, along with quantitative data, like how to calculate the value of notes and beats and how to recognize thousands of patterns in music (e.g., chord progressions, harmonic sequences, and rhythmic accents).
Creativity, the kind demonstrated by Amper, is the pinnacle of Bloom’s Taxonomy, but was it merely a learned mechanical process? Was it an example of humanistic creativity? Or creativity of an entirely different kind? Did Amper think about music, the same way that a human composer might? It could be argued that Amper’s “brain”—a neural network using algorithms and data inside a container—is perhaps not that different from Beethoven’s brain, made up of organic neurons using data and recognizing patterns inside the container that is his head. Was Amper’s creative process truly different than Beethoven’s when he composed his Symphony no. 5, the one which famously begins da-da-da-DUM, da-da-da-DUM before switching from a major to a minor key? Beethoven didn’t invent the entire symphony—it wasn’t completely original. Those first four notes are followed by a harmonic sequence, parts of scales, arpeggios, and other common raw ingredients that make up any composition. Listen closely to the scherzo, before the finale, and you’ll hear obvious patterns borrowed from Mozart’s 40th Symphony, written 20 years earlier, in 1788. Mozart was influenced by his rival Antonio Salieri and friend Franz Joseph Haydn, who were themselves influenced by the work of earlier composers like Johann Sebastian Bach, Antonio Vivaldi, and Henry Purcell, who were writing music from the mid-17th to the mid-18th centuries. You can hear threads of even earlier composers from the 1400s to the 1600s, like Jacques Arcadelt, Jean Mouton, and Johannes Ockeghem, in their music. They were influenced by the earliest medieval composers—and we could continue the pattern of influence all the way back to the very first written composition, called the “Seikilos epitaph,” which was engraved on a marble column to mark a gravesite in what is now Turkey in the first century. And we could keep going even further back in time, to when the first primitive flutes made out of bone and ivory were likely carved 43,000 years ago.
Even before then, researchers believe that our earliest ancestors probably sang before they spoke.1
Our human wiring is the result of millions of years of evolution. The wiring of modern AI is similarly based on a long evolutionary trail extending back to ancient mathematicians, philosophers, and scientists. While it may seem as though humanity and machinery have been traveling along disparate paths, our evolution has always been intertwined. Homo sapiens
- "Rather than questioning the character of thinking machines, futurist Amy Webb turns a critical eye on the humans behind the computers. With AI's development overwhelmingly driven by nine tech powerhouses, she asks: Is it possible for the technology to serve the best interests of everyone?"—Wired
- "Webb's assessments are based on analyses of patent filings, policy briefings, interviews and other sources. She paints vivid pictures of how AI could benefit the average person, via precision medicine or smarter dating apps...Her forecasts are provocative and unsettlingly plausible."—Science News
- "Instead of predicting the future, Webb lays out scenarios for optimistic, pragmatic, and catastrophic outcomes—all extrapolated from current facts. However impractical you may find the idea of a common Apple-Amazon operating system named Applezon, considering potential scenarios is a fantastically healthy exercise, because anyone who tells you they know how AI is going to turn out is lying."—VentureBeat
- "We need to get Amy Webb to campus. This is one of those cases where organizing discussions of the book would be great - but not enough. We would be wise to engage with Webb directly... We need more books like The Big Nine that are critical of higher education for reasons beyond politics (liberal bias) or costs... The Big Nine is an essential book for anyone interested in a global perspective around the role of companies and governmental policies in determining technological change."—InsideHigherEd
- "Webb is a first-rate storyteller...she has poured a lifetime of researching, writing, and conversing—in a word, thinking—into her masterwork."—Law.com
- "The Big Nine is provocative, readable, and relatable. Amy Webb demonstrates her extensive knowledge of the science driving AI and the geopolitical tensions that could result between the US and China in particular. She offers deep insights into how AI could reshape our economies and the current world order, and she details a plan to help humanity chart a better course."—Anja Manuel, Stanford University, cofounder and partner, RiceHadleyGates
- "The Big Nine is an important and intellectually crisp work that illuminates the promise and peril of AI. Will AI serve its three current American masters in Washington, Silicon Valley and Wall Street or will it serve the interests of the broader public? Will it concentrate or disperse economic and geopolitical power? We can thank Amy Webb for helping us understand the questions and how to arrive at answers that will better serve humanity than our current path. The Big Nine should be discussed in classrooms and boardrooms around the world."—Alec Ross, author of The Industries of the Future
- "The Big Nine makes bold predictions regarding the future of AI. But unlike many other prognosticators, Webb sets sensationalism aside in favor of careful arguments, deep historical context, and a frightening degree of plausibility."—Jonathan Zittrain, George Bemis Professor of International Law and professor of Computer Science, Harvard University
- "The Big Nine is thoughtful and provocative, taking the long view and most of all raising the right issues around AI and providing a road map for an optimistic future with AI."—Peter Schwartz, senior vice president, Salesforce.com, and author of The Art of the Long View
- "Webb's potential scenarios for specific futures are superb, providing detailed visions for society to avoid as well as achieve."—John C. Havens, executive director, IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems
- "Her writing is very clear and accessible, and the interesting analogies she uses to illustrate what may occur when algorithms make decisions for us make for compelling reading. This fascinating look at how AI will continue to revolutionize human experiences in unimaginable ways will appeal to anyone interested in AI, human-computer interactions, and machine learning in the private and public sectors."—Booklist
PRAISE FOR THE SIGNALS ARE TALKING:
- "Webb teaches us to listen...[she] combines well-researched, reader-friendly insights on Google, drones and artificial intelligence with a system of questions you can bring to your next strategy meeting..."—Chicago Tribune
"Webb provides a logical way to sift through today's onslaught of events and information to spot coming changes in your corner of the world."
"Sitting somewhere between Nate Silver and The Tipping Point, Amy Webb's book provides a practical guide for leaders - at any level - in the age of Big Data, offering tools for picking out the 'true signal, a pattern that will coalesce into a trend with the potential to change everything' - and land on the right side of disruption."
—Jon Foro, The Amazon Book Review (An Amazon Best Book of December 2016)
"The clear, insightful, and humorous Amy Webb has crafted a rare treasure: a substantive guide written in a narrative that's a delight to read. While most futurologists want guru status through a few Nostradamus-like visions that never materialize, Webb modestly reports with depth and discipline, and creates a system and tools we can all use to better navigate the future. Through her deep research, specific anecdotes and brilliant insights, she has performed the selfless but hugely valuable act of teaching us all to fish at the fringe."
—Christopher J. Graves, Global Chair, Ogilvy Public Relations
"Amy Webb, with insight and a big dose of pragmatism, shows how to clearly see the next big disruption and then take action before it strikes."
—Ram Charan, advisor to CEOs and corporate Boards, author of The Attacker's Advantage and co-author of Execution: The Discipline of Getting Things Done
- On Sale
- Mar 5, 2019
- Page Count
- 336 pages