The Loop

How Technology Is Creating a World Without Choices and How to Fight Back


By Jacob Ward


This eye-opening narrative journey into the rapidly changing world of artificial intelligence reveals the dangerous ways AI is exploiting the unconscious habits of our minds, and the real threat it poses to humanity: "The best book I have ever read about AI" (New York Times bestselling author Roger McNamee).

Artificial intelligence is going to change the world as we know it. But the real danger isn't some robot that's going to enslave us: It's our own brain. Our brains are constantly making decisions using shortcuts, biases, and hidden processes—and we're using those same techniques to create technology that makes choices for us. In The Loop, award-winning science journalist Jacob Ward reveals how we are poised to build all of our worst instincts into our AIs, creating a narrow loop where each generation has fewer, predetermined, and even dangerous choices.

Taking us on a world tour of the ongoing, real-world experiment of artificial intelligence, The Loop illuminates the perils of writing dangerous human habits into our machines. From a biometric surveillance state in India that tracks the movements of over a billion people, to a social media control system in China that punishes deviant friendships, to the risky multiple-choice simplicity of automated military action, Ward travels the world speaking with top experts confronting the perils of their research. Each stop reveals how the most obvious patterns in our behavior—patterns an algorithm will use to make decisions about what's best for us—are not the ones we want to perpetuate.

Just as politics, marketing, and finance have all exploited the weaknesses of our human programming, artificial intelligence is poised to use the patterns of our lives to manipulate us. The Loop is a call to look at ourselves more clearly—our most creative ideas, our most destructive impulses, the ways we help and hurt one another—so we can put only the best parts of ourselves into the thinking machines we create.



IT’S BEEN HARD to watch it all happen so fast.

When I began writing this book, artificial intelligence was still a largely academic topic. Finding real-world instances in which AI had been commercially deployed at a scale large enough that it was playing on human instincts in invisible, dangerous ways (that’s my thesis, if you hadn’t caught it) required a lot of digging. Honestly, when I set out to write this book, AI’s negative effects on behavior were what a journalist would derisively call “speculative.”

A little less than a year after The Loop was originally published, OpenAI released ChatGPT, and the world responded exactly as the experts I’d interviewed had taught me to fear it would. Our instincts, from our tendency to assume that systems we don’t understand are smarter than they are to our drive to turn a profit before we really know what we have, were quickly on full display as soon as we had something like ChatGPT in front of us.

Just as OpenAI put ChatGPT into the public’s hands, people began using it to cheapen human creativity. Students used it to cheat. CNET used it to write news stories. I have a collection of TikTok and Instagram posts in which entrepreneurs speak about the power of using the technology to plagiarize transcripts of the most popular YouTube videos on a particular subject and regurgitate an alternate version, delivered by a new person in a new video, to make advertising money—a literal loop of plagiarism.

Money began to exert its unique gravity on the situation very quickly. A couple of months after ChatGPT hit the market, Microsoft, which had sunk a billion dollars into OpenAI, announced it was incorporating ChatGPT into its search engine, Bing.

And journalists began asking questions like mine very quickly, God bless them. Nilay Patel, editor-in-chief of The Verge, asked Microsoft CEO Satya Nadella whether building a product on a pattern-recognition system that only produces a “greatest-hits” version of the past would create, well, an inescapable loop.

“If more and more people are producing more and more AI content and that becomes the base layer that you’re training against… eventually, the amount of original content in the ecosystem begins to wither,” Patel said. “Is that a feedback loop that you’re worried about?”

“Absolutely,” Nadella answered. But he didn’t elaborate.

Instead, he told an anecdote about his daughter’s use of AI in school and how she was, in fact, learning by having it write things for her. “Let’s give ourselves a little permission to think about what is original content,” he said, before pointing out, in a way I assume he meant to be reassuring but I found chilling, that he “would be unemployable but for the red squiggly in Microsoft Word.”

Microsoft’s moves with ChatGPT caused Google, which had worried for years about the reputational and legal risks of incorporating AI into its core product, to go ahead and announce it would be updating its search engine with a new ChatGPT-like interface, called Bard, built on its own large language model. Faced with a direct threat to its search business, the main way the company makes money, Google abandoned its concerns and went for it.

Meanwhile, other forms of so-called generative AI—the kind that, like ChatGPT, reads huge amounts of data, develops useful rules from it, and then generates content on demand—blew up at the same time. In 2022 OpenAI released DALL-E 2, an image generator, and companies began to build on that engine. The appetite for on-demand art turned out to be vast, and expensive to satisfy.

David Holz, CEO of art-making service Midjourney, built his product in part on DALL-E’s model. He told me forty-seven GPUs—each one costing seven thousand dollars—are required for every request his service receives. I asked him, “How many daily users do you have?” He didn’t answer the question directly, but he told me the service averages forty requests for pictures of cats each minute. “And that’s just the cats,” he said.

Holz also shared a pair of interesting notions which turn out to sit in impossible opposition. The first was that he’d created Midjourney because, as he described it, “I think there’s a lot of problems in the world to solve right now, but in many ways the problems are too big for our tools.” He’d built Midjourney, he said, to help create the sort of solutions that access human creativity for big challenges like climate change and human conflict. “The goal of my journey was to build this AI lab that could focus on these three pillars of creation: reflection, imagination, and coordination. And if we could build up some powerful-enough infrastructure in each of the three areas, we could change how we make things and empower people to make things in such a great way that we could actually remake the world.”

As we talked about how people used his product—the things they asked it to make—he admitted, however, that a loop of visual predictability and unpleasant desires kept forming, and that he and his team had to intentionally nudge the AI away from it. “The purely capitalist way would be trying to give the people what they want,” he said. But he had put a prohibition on pornography and violence, and he had found he could entrust the AI with only about half of the decision-making power over the kind of art it created. “If we let [the AI] cater too much to the tastes of users, it gets into cliches,” he told me. “So, we let it go halfway to what the users want, and then we put in a little of our own biases of what we think looks good.” When humans are given open access to AI, it seems, their collective creativity may not be as deep as we’d hoped.

There are enormous threats taking form that weren’t yet a reality when I started writing this book. One study from the Epoch research group suggests that by 2026, ChatGPT and its ilk will have essentially “run out” of useful new data on which to train: they will have read all the high-quality writing on the Internet, and nothing novel enough is likely to be added past that point to change the rules they have already learned.

Now researchers inside the big companies are talking about creating “synthetic data”—representations of real humans which, through simulation, can train AI in new ways. This is a flavor of The Loop I never saw coming. Not only does it allow companies to skirt the burgeoning privacy laws in California and Europe that prohibit the collection of personal data without consent, but it also creates a closed ecosystem of training patterns that could accelerate the flattening (not to mention the mutating) of human behavior like nothing we’ve seen before.

It’s distressing to detail all these developments just a year after this book was first published. My hope was to help us all get in front of this, but that may not be my role after all. Perhaps the best I can hope for is that this book becomes a helpful primer on the psychological, social, political, and financial conditions that make us uniquely vulnerable to misusing AI on one another at a time when we’re sorting out all its implications on the fly.

I have also found common cause since this book was published: I have had the opportunity to speak with many people whose instincts around AI are strong and smart, and could lead the way to something better. I just wish I’d met them sooner.

I recently spoke by phone with a pair of lawyers about their new firm, which advises Fortune 500 companies on AI liability. I mentioned I had read a study concluding that most CTOs at large companies didn’t understand, and didn’t care to learn, how AI was integrated into their products. These two lawyers wanted to change that willful ignorance, and they were already succeeding by showing corporate executives the enormous problems they’d face in court if they blindly handed sensitive tasks like job recruitment or loan decisions to black-box AI systems. Judges have grown more attuned to the “explainability problem” you’ll read more about in this book. That’s good news.

State governments have started deploying AI for true life-improving purposes rather than for cost savings. Vermont has begun a program that uses simple pattern-recognition systems to pair immigrants and others seeking work with vacant jobs, from school bus driver to snowplow operator. That’s good news, too.

And in January 2023, almost exactly a year after this book hit the shelves, I traveled to Rogue River, Oregon, to speak with a high school class using ChatGPT to deconstruct literature.

Kelly Gibson, an AP English and drama teacher there, led a discussion with students about what ChatGPT was and what it represented. The students seemed to understand that ChatGPT simply predicts the next word in a sentence; that it’s a “greatest-hits” machine, a gifted imitator. Gibson’s students seemed to believe AI threatened conventional creativity, not to mention the authenticity of homework. “If I were a teacher I wouldn’t want it in my class,” one boy said.

But Gibson later mentioned that assuming ChatGPT is only for cheating is an older person’s instinct. “My students in general tend to be extremely focused on problem-solving,” she said. “They don’t want you to ask them, ‘What job do you want to have in the future?’ They want to know what problems they want to solve in the future, what they want to go out and conquer, what they want to fix in our world.” In our discussion about ChatGPT, the students had a touching faith in the value of authentic human creativity.

“The things that are handmade, handcrafted, are always going to be worth more,” said senior Victoria Rillo. She theorized AI would become the new normal, sure, but “the more we put up the normal, the more value goes up in those personalized items,” she said. If her generation hangs on to that idea, that’s good news as well.

The trouble is, of course, that while I’ve encountered thousands of people scrambling to make money off AI, I’ve encountered only a handful determined to resist it. If the market for human effort won’t support those who take the time to paint, or compose, or interview job applicants in person, then those holdouts will have to capitulate and hand the work over to the dominant tool set. There are already dozens of companies offering to replace this or that “menial job” with AI. That impulse has already created seemingly irresistible market forces. And those forces are growing stronger. I am as worried as I was when I wrote this book. Perhaps more so.

POOR PLANNING ONCE stranded me at a car testing facility in a rural corner of New England. There was no available taxi service to take me to the airport for my flight home. It was a crisp fall day and night was quickly approaching. A man working at the facility noticed my panic and volunteered to drive me to the airport.

He was very kind to offer to drive me but obviously and understandably irritated by it. We drove in silence for the first half hour. A few efforts at conversation faltered. And then, when I asked him about his family, he began to talk.

I won’t be specific about who he was or where we were since this was a private conversation, but its lessons have been with me for a long time, and I want to share them with you.

He described a daughter who, in his words, had “had a rough go of it.” It began, he said, when she was a middle-schooler. The two of them were watching television on the couch, him shelling peanuts, when she suddenly went into anaphylactic shock and couldn’t breathe. He raced her to the emergency room, and it turned out she had the rare type of peanut allergy where simply being in the same room as the allergen causes one’s airway to close.

But this was the 1980s, before medical practitioners were trained in patient-centered care or attuned to the variety of patients’ lived experiences, and the doctor who saw her said something thoughtless. He told her that food was now her enemy and that she had to be careful around it. By modern standards this was an appalling thing to tell a teenager, and, according to her father, it helped set off a lifetime of trouble around food.

But that wasn’t the worst of it.

He described visiting the principal of the high school she was about to attend, explaining that peanuts were deadly to his daughter. He suggested that other parents should not send their kids to school with peanuts so his daughter could stay safe.

The principal apologized and said banning peanuts was out of the question. Peanut butter and jelly, he pointed out, was a standard lunch. But he could offer another solution. “We’ll put her at a table in the lunchroom alone,” he suggested, “and surround that table with four empty tables so she’ll be protected.”

And that is how this young woman experienced high school: afraid of food, eating alone, shunned by design.

I mention this story because it is so clearly dated, hailing from a time when we hadn’t yet refined the social contract to accommodate the vulnerabilities of a handful of people so everyone can be part of the community. This young woman’s experience was so cartoonishly awful and so pointlessly cruel, especially when millions of students like my own children attend schools where nuts of all kinds are now prohibited. We solved the problem of students’ exposure to life-threatening allergens, and we can also solve the current problem of AI.

I recognize the engines of profit and innovation will always move faster than the engines of resistance and regulation. However, I have enormous faith in our society’s ability to dig through and sort out what we want and don’t want from the market. It’s just that it takes a while. You’ll meet many people in this book: people already feeling the ostracizing effects of being evaluated by an algorithm; people who recognize access to capital, justice, and housing shouldn’t be decided by the cheapest possible off-the-shelf technology; people who have played a part in automating human choice and regret it. For all these people, and for all the kids we’ve made the wrong choices for in the past, we need to recognize The Loop as it forms around us, and protect one another from it.

—Jacob Ward

March 2023



WHENEVER OUR FUTURE on this planet looks bleak, we can’t help but think about other planets. We’ve spilled over our place in nature, and we can’t seem to get along well enough to agree on our shared salvation. Let’s go somewhere else and start over. How far away is the next habitable planet, anyway?

Interplanetary scientists despise the notion of translating light-years into the kind of everyday numbers you and I can grok, the way we put a number to the speed of our own cars, so it always falls to journalists like me to thumbnail a best guess as to just how long our technology would take to carry us such a distance. Bear with me, please.

Our current rocket technology can propel a ship through space at speeds of roughly 20,000 miles per hour. By the standards you and I are used to, that’s incredibly fast. In our own atmosphere the friction against the surrounding air at that speed would melt through any material we’ve invented and would incinerate you and me before we could even strike up a conversation about where we were going and what we wanted to build when we got there. But let’s use that speed as a benchmark, because in space that speed is a terribly slow rate to cross the enormous distances between planets.

Mars, whose orbit is directly adjacent to our own, is the most survivable other planet in our solar system. But that’s not saying much. Sure, other planets are more horrible. Piercing the thirty-mile layer of clouds that surrounds Jupiter, our rocket’s engines would begin to either choke or over-fire in its flammable nightmare of hydrogen and helium, and then would die completely as they hit the liquid form of the same stuff roughly 13,000 miles past the cloud cover that keeps us from knowing anything about what’s beneath that poisonous ocean. That crew would drown (or maybe drown and burn, unheard of on Earth) without ever making it out of the ship.

Mars is comparatively pleasant. For one thing, there’s stable footing. And a nice day on Mars might actually feel nice. With the sun high in the sky, you’d enjoy temperatures as high as 68° F, a clear August afternoon in San Francisco or Johannesburg. But if you happened to exit the ship at one of the poles, at night, the temperature could be less than −200° F, cold enough to not only instantly kill you, but also almost instantly freeze your limbs into brittle branches that would shatter under a hammer blow. And let’s not forget that even in the balmiest regions of the planet, there’s nothing to breathe, so you’re not getting far on even the most pleasant day. You’d bound perhaps fifty yards in the low gravity before you could no longer hold your breath, then hypoxia would disorient you, and you couldn’t make it back to the oxygen of the ship. You’d wind up unconscious and twitching, your heart would stop, and the red dust would slowly settle around your corpse.

That’s why scientists and journalists alike are so excited about exoplanets, the term for planets beyond our solar system that seem to offer the possibility of a livable atmosphere and surface. Humanity has been treated in the last few years to a steady stream of optimistic fantasy destinations emanating from the now-defunct Kepler space telescope. Kepler essentially squinted out into deep space, watching for the tiny, periodic dimming of distant stars as planets too far away to image in any detail passed in front of them. Depending on how much of a star’s light a planet blocks, and how often, astrophysicists can calculate not only the size of that planet, but how far it is from its source of light, meaning we can determine whether the relation between the planet’s size and the distance to its star possibly indicates that planet might host some sort of atmosphere.

The Kepler mission measured the light from roughly 150,000 stars and found several hundred planets whose ratio of size and star-distance makes them candidates—just candidates, but a real possibility—for human respiration and occupancy. Land, walk around, build a cabin out of whatever materials exist, just imagine! And when we consider the vast distances of space, the closest exoplanets are, in fact, relatively close.

But before we pop the champagne and pour our savings into SpaceX, let’s think about what it takes to get to another planet. A trip to Mars, for instance, is comparatively brief. Depending on where it and Earth are in their orbits, the journey could be made in between three hundred and four hundred days. But humans have never traversed open space for that amount of time. The journey to the moon currently takes about seventy-two hours, and astrophysicists and medical experts quietly point out in private conversation that it’s a miracle none of the two dozen people who went to the moon died during the trip. A trip to Mars would involve exposing the crew to the dangers of deep space for roughly a full year. And those dangers go on and on. Deadly amounts of radiation permeate everything in the deep blackness between planets. Space is full of dirt and grit that could disable the ship. (A whole field of astrophysics studies the interstellar medium and has shown that if you held a gloved white hand out the window of the ship as one does on the highway, it would come back blackened by, well, whatever it is that’s out there.) Also consider that if a mishap killed the ship and the crew, the event would be torturously broadcast, on time delay, to the whole of Earth, presumably killing off our species’ desire to travel to Mars in the process.

And even if all goes well for the crew, that’s a long time confined together in a space no bigger than a vacation rental, as all of us who spent the pandemic year locked in with family, roommates, or alone know too well. In fact, before the coronavirus made test cases of us all, psychologists and logisticians who worried about a Martian crew driving each other nuts spent time observing actual astronauts confined in these sorts of tiny spaces for the duration of either a simulated trip to Mars or a stay on the planet. And it hasn’t gone well. In almost every sardine-style simulation, someone has suffered serious injury or illness. A simulated Martian habitat on a military base on the island of Hawaii has seen a half-dozen such missions over the years, including one where a crew member had to withdraw for medical reasons (the mission’s organizers haven’t publicly revealed what it was). Seeking to learn from the experience, the crew pretended to treat their missing member as dead and enacted setting a fake body out on the simulated Martian tundra, where it would be perfectly preserved for a journey back to Earth for burial. In the final Hawaiian mission, the simulation was compromised when one of the crew was shocked by live wiring, and earthly paramedics had to come inside and drive the crew member away in an ambulance. But putting aside the physical danger of living isolated on Mars, the missions have revealed that… people get weird. “You can select a crew all you want, get the right fit and mix, but there’s too many variables when it comes to human beings,” a psychologist for the mission told the Atlantic. “It’s just really hard to predict how we’re going to perform in all situations.”

That’s just Mars. It’s next door to us, cosmically speaking. Now imagine how weird we’d become trying to reach the nearest exoplanet.

Let’s imagine we’re standing together on the launch pad at NASA’s Cape Canaveral facility near Orlando, staring up at the stars. As I write this, the last constellation above the horizon is Centaurus. The centaur’s front hoof is a bright star. In fact, it’s three stars—a pair called Alpha Centauri A and B, and, dimmest of the trio, Proxima Centauri. Here, look through this telescope. See? You can tell them apart. But what we can’t see is that there is, in fact, a planet circling the faint light of Proxima Centauri. Man, I wish we could see it. Because that planet, Proxima Centauri b, is the nearest known exoplanet to Earth.

We have no idea what life would be like on Proxima Centauri b, or what the place even looks like. There may be many reasons that it just won’t work for human habitation. It’s possible that stellar winds may push so much radiation across its surface that we’d all be poisoned before we got the first shelter built, and those same winds may have stripped away any breathable atmosphere, meaning we’d have to live underground. It’s also possible that the planet orbits Proxima Centauri at such a cadence that one side of the planet permanently faces its star, meaning half of the planet is always daylit, and the other is always in darkness.

But let’s stay hopeful. Let’s imagine that it’s a perfectly habitable place, with warm winds and a liquid ocean and strange, vivid landscapes of rock and vegetation and alien snow. Let’s go there!

First, the good news. Proxima Centauri b is only 4.2 light-years away. That means that light, the fastest thing we know of, at roughly 186,000 miles per second, would take only 4.2 years to streak from our planet to Proxima Centauri b’s weird, wild shores. For photons, that’s a very short trip.

The bad news is that for humans, it’s a very long trip. We don’t travel at that speed. Not even close. We’ll need much more time to get there. In fact, it’s so much time that no one who has ever set foot on Earth will ever set foot on Proxima Centauri b.

If we were to board a spacecraft and ride it from the outer edge of our atmosphere all the way to Proxima Centauri b, you and I, who boarded the ship fit and trim, chosen as we were from billions of applicants, would die before the voyage reached even 1/100th of the intervening distance. It’s such an outrageously long journey that a human life span is just a tiny fraction of the time it will take.

Here’s the napkin math. At a speed of 20,000 miles per hour—the speed of our top-performing modern rockets—4.2 light-years translates to more than 130,000 years of space travel.

One hundred thirty thousand years. This means that the time involved to reach our closest exoplanet neighbor would crush us, and our children, and their children into dust a thousand times over before anyone had a chance to breathe alien air.
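The napkin math above is easy to verify. Here is a minimal sketch in Python, using the rough figures from the passage (4.2 light-years, a 20,000-mile-per-hour rocket, and the approximate length of a light-year in miles):

```python
# Back-of-the-napkin travel time to Proxima Centauri b at current rocket speeds.
LIGHT_YEAR_MILES = 5.88e12   # miles in one light-year (approximate)
DISTANCE_LIGHT_YEARS = 4.2   # Earth to Proxima Centauri b
ROCKET_MPH = 20_000          # rough top speed of modern rockets

distance_miles = DISTANCE_LIGHT_YEARS * LIGHT_YEAR_MILES
travel_hours = distance_miles / ROCKET_MPH
travel_years = travel_hours / (24 * 365.25)

print(f"{travel_years:,.0f} years")  # on the order of 140,000 years
```

The result lands a bit above 140,000 years, comfortably past the “more than 130,000 years” in the text; the exact figure shifts with how precisely you pin down the rocket’s speed, which is why napkin math is the honest framing.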

Could we put ourselves in some sort of coma for the journey, as the characters in 2001, Alien, and Interstellar do? Trauma surgeons, experimenting with the same concept that inspired Ted Williams’s family to freeze his head for possible transplant in the future, are currently experimenting with procedures that can revive a semi-frozen patient after two hours without a pulse. But we’re a long way from freezing ourselves for as long as this might take. Using current technology, anywhere from 900 to 1,300 human generations would pass on the way to Proxima Centauri b before the ship arrives. Generations. So how will we ever get there? A generation ship.

First proposed in varying forms by early rocket pioneers, science fiction writers, and astrophysicists with a few beers in them, the general notion is this: get enough human beings onto a ship, with adequate genetic diversity among us, that we and our fellow passengers cohabitate as a village, reproducing and raising families who go on to mourn you and me and raise new children of their own, until, thousands of years after our ship leaves Earth’s gravity, the distant descendants of the crew that left Earth finally break through the atmosphere of our new home.

I once had dinner with an evolutionary biologist, and I asked him what it is about Darwin’s theory that the human mind has most trouble seeing clearly. “It’s the outrageously long periods of time involved,” he said without hesitation. “We’re just not equipped to be able to imagine the amount of time evolution takes.”

That inability to see evolution’s progress on the vast plain of time also means that planning for ongoing communal life aboard a single spaceship is largely beyond our natural gifts. I pat myself on the back when I get out ahead of birthday presents for my children. The ability to plan an environment that will keep my great-grandchildren alive and happy enough to reproduce with the great-grandchildren of my colleagues is an entirely separate matter.


  • “A fascinating survey of the known spectrum of human biases…rebuts the Silicon Valley-esque assumption that A.I. will always do good.”—Cathy O’Neil, New York Times
  • “A fantastic, groundbreaking new book.”—Ali Velshi, MSNBC
  • “Scary stuff…this book has it all [and] the ‘how to fight back’ part is very important.”—Hoda Kotb, The Today Show
  • “A brilliant explanation of how artificial intelligence turned to the dark side. It’s that rare book that explains a complicated subject—AI—in language anyone can understand, while simultaneously providing the context that every policy maker and citizen will require to deal with it. If AI is to get back on track, then Ward will be the guide.”—Roger McNamee, bestselling author of Zucked
  • “Fascinating and unsettling. The future is here, and Jacob Ward’s lively narrative takes us headlong into the dangerous foray of artificial intelligence into our very core as humans.”—Cecilia Kang, bestselling author of An Ugly Truth
  • “The Loop is about the unconscious patterns of human behavior and the even less conscious patterns in the software of machine-learning algorithms. Jacob Ward argues, precisely and elegantly, that those magisteria overlap—digital algorithms that take advantage of wobbly human habits increasingly determine how society works, and that may well make us all less happy and less free. When a reporter with Ward’s brains and experience warns you about something this serious, you should listen.”—Adam Rogers, bestselling author of Proof and Full Spectrum
  • “Ward is a thoughtful reporter who has spent the past decade chronicling the rise of a new set of tools pioneered by social scientists, AI researchers, and technology companies. The Loop is simultaneously his powerful account of these forces and a wake-up call to remind us that we retain the capability to escape the silicon tendrils of those who seek to exploit our unconscious tendencies.”—John Markoff, Pulitzer Prize-winning author of Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots
  • “A salutary effect of growing disillusionment with tech has been a shelf of excellent critiques. But The Loop is a strong entry in the canon…AI represents perhaps the ultimate shiny object. But Ward penetrates to the dark vacancy at its core.”—San Francisco Chronicle
  • “[The Loop] combines a remarkable synthesis of a mountain of behavioral science research about the human mind, and a travelogue through the world of artificial intelligence history and current practice.”—Alexis Madrigal, KQED (NPR) Forum
    On Sale: January 25, 2022
    Page Count: 304 pages
    Publisher: Hachette Books

    About the Author

    Jacob Ward is technology correspondent for NBC News, and previously worked as a science and technology correspondent for CNN, Al Jazeera, and PBS. The former editor-in-chief of Popular Science magazine, Ward writes for The New Yorker, Wired, and Men's Health. His ten-episode Audible podcast series, Complicated, discusses humanity's most difficult problems, and he's the host of a four-hour PBS documentary series, "Hacking Your Mind," that introduces a television audience to the fundamental scientific discoveries in human decision making and irrationality. In 2018, he was a Berggruen Fellow at Stanford University’s Center for Advanced Study in the Behavioral Sciences. He lives in Oakland, California.
