The Reality Game

How the Next Wave of Technology Will Break the Truth

By Samuel Woolley

Formats and Prices

Price: $28.00 / $35.00 CAD


Fake news posts and Twitter trolls were just the beginning. What will happen when misinformation moves from our social media feeds into our everyday lives?

Online disinformation stormed our political process in 2016 and has only worsened since. Yet as Samuel Woolley shows in this urgent book, it may pale in comparison to what’s to come: humanlike automated voice systems, machine learning, “deepfake” AI-edited videos and images, interactive memes, virtual reality, and more. These technologies have the power not just to manipulate our politics, but to make us doubt our eyes and ears and even feelings.

Deeply researched and compellingly written, The Reality Game describes the profound impact these technologies will have on our lives. Each new invention built without regard for its consequences edges us further into this digital dystopia.

Yet Woolley does not despair. Instead, he argues pointedly for a new culture of innovation, one built around accountability and especially transparency. With social media dragging us into a never-ending culture war, we must learn to stop fighting and instead prevent future manipulation. This book shows how we can use our new tools not to control people but to empower them.

Excerpt




AUTHOR’S NOTE

The concept of fake news burst onto the global scene in 2016, following the rise of blatantly false news stories and the flow of digital garbage during the presidential election in the United States. The specter of “fake news” was raised again by suspicious rumors of smear campaigns against Russian athletes during the summer Olympics in Rio de Janeiro and by misinformation about the Zika virus, which continued to spread in Brazil and elsewhere. The term “fake news” was quickly co-opted, though, by the powers that be. The very people who produced the junk content known by this moniker reclaimed the phrase to undermine legitimate journalism, to attack inconvenient scientific findings, and to refute factual stories about their own misdeeds. The term “fake news” itself became a tool for spreading fake news.

With this in mind, I need to explain how I use a couple of terms that are important to the coming chapters and the arguments I make here. First, I try not to use the phrase “fake news.” Instead, I use the term “misinformation,” by which I mean the accidental spread of false content, or “disinformation,” by which I mean the purposeful spread of false content. I sometimes refer to “false news” or “junk news,” and when I do I mean articles constructed to look like news that are not actually true because they lack facts or verifiability. These types of articles, like the infamous pieces that came from the bogus Denver Guardian during the 2016 US election, are created with an intent to mislead, confuse, or, at times, make money. I do not use “fake news” because the phrase has been repurposed as a tool to target articles and reports by actual journalists who write things with which thin-skinned politicians, litigious business executives, or incensed regular folks do not agree.

I refer to “computational propaganda” often. My colleagues and I originally came up with the term to refer to the use of automated tools (like Twitter bots) and algorithms over social media in attempts to manipulate public opinion. In this book I use the term more broadly to refer to the use of digital tools—from Facebook to augmented reality (AR) devices—to spread politically motivated information. Computational propaganda includes using social media to anonymously attack journalists in order to stop them from reporting. It includes leveraging digital voice systems designed to sound like humans to call voters over the phone and tell them lies about the opposition. It also includes using artificial intelligence (AI) and social bots—automated programs built to mimic people online—to fake human communication in order to trick the online algorithms that curate and prioritize our news.

Finally, I often talk about democracy and human rights. When I talk about “democracy,” I am talking about democratic values: liberty, equality, justice, and so forth. I am not advocating for US-style democratic governance or for any other hybrid democratic-republican-parliamentary-presidential system. When I talk about “human rights,” I have in mind the United Nations definition:

the rights inherent to all human beings, regardless of race, sex, nationality, ethnicity, language, religion, or any other status. Human rights include the right to life and liberty, freedom from slavery and torture, freedom of opinion and expression, the right to work and education, and many more. Everyone is entitled to these rights, without discrimination.1

I argue that we should bake the values of democracy and human rights into our technology. We must prioritize equality and freedom in the tools we build so that the next wave of devices will not be used to further damage the truth.




1

TRUTH IS NOT TECHNICAL

Your Real, My Fake

“Oxford University? That’s a school for stupid people,” said Rodrigo Duterte, president of the Philippines. It was July 24, 2017, and Duterte had just given his State of the Nation Address. During a press conference following the event, a reporter had asked him about a recent research paper from Oxford University.1 The paper in question detailed the social media propaganda expenses of various governments around the globe and claimed that the Filipino president had spent approximately $200,000 on a social media army whose goal was to viciously defend him against critics.2 Duterte admitted to the assembled crowd that he had, in fact, spent more than this amount for such purposes during his presidential campaign. He denied, though, that he continued to do so. He made this argument despite evidence to the contrary, cited in the Oxford paper, from the award-winning Filipino news outlet Rappler.3 Maria Ressa, founder and editor of the publication, wrote that his regime continued to fund malicious digital propaganda and trolling campaigns against dissenters. Duterte, like many other world leaders, had turned social media into a tool for public manipulation.

I was the director of research for the Oxford team that drew Duterte’s ire. Our group, the Computational Propaganda Project, was based at the university’s Oxford Internet Institute. Our work was focused on explaining the use of social media as a tool for molding public opinion, hacking truth, and silencing protest. We detailed how automated Twitter “bot” profiles and trending algorithms were being used to influence people during pivotal political events. My colleagues and I wanted to uncover who was behind these underhanded campaigns and determine how they were spreading disinformation. More than anything, we wanted to know why they were doing what they were doing. What did they think they were achieving? It was not the first time we had struck a nerve with someone in a position of power through our research, but it was the first time a world leader had called us out specifically.

Soon after Duterte’s attack on Oxford, Rappler produced a short video that explained how a variety of powerful political groups around the world, like the Duterte regime, used sites like Twitter, YouTube, and Facebook to troll their opposition (post deliberately offensive or incendiary online comments) and amplify spin campaigns.4 The video said that these groups used bots and fake profiles “to create an alternative reality for people to believe in.” Duterte’s attack on Oxford, defaming the university and its research, was a parallel strategy for gaming the truth. He, like Narendra Modi in India, Donald Trump in the United States, and Jair Bolsonaro in Brazil, was combining ad hominem attacks, skewed logic, and social media tools to create a distorted version of what was real and what was fake.

The Next Wave of Technology and Disinformation

Though the past can tell us a great deal about what is to come, society must now pivot from concerns about digital “information operations” during past events and begin to look to the future. It is true that countries around the globe have experienced unprecedented levels of manipulation over social media in recent years. These changes to the way we communicate have weakened democracies and strengthened authoritarian regimes. Nevertheless, we need to take heed of something new on the horizon. The next wave of technology—from artificial intelligence (AI) to virtual reality (VR)—will bring about a slew of newer and even more potent challenges to reality and the truth.

Although advances in artificial intelligence have created more effective methods for parsing data and prioritizing content for users on social media, they have also, and perhaps more concerningly, fundamentally changed how we spread information and who does the spreading. They have opened up an online world where the distinction between human and machine is increasingly blurry.

Manipulative social media advertisements during elections are certainly concerning, but what about political indoctrination in a virtual social media world? We cannot look away from this development, because advances in our digital tools are bringing about big changes to communication technology and society writ large. The next wave of technology will enable more potent ways of attacking reality than ever. In the humble words of Bachman-Turner Overdrive, “You ain’t seen nothing yet.”

For the better part of the last decade, I have been researching the ways in which propagandists leverage our technology and media systems. I have seen a rapid shift in how we perceive social media: once seen as exciting tools for connecting, communicating, and organizing, they are now often thought of as malicious platforms for spreading false news, political misinformation, and targeted harassment. And I am still witnessing efforts by some groups to control the messages we receive online. But every day I also learn about new initiatives and new technologies for pushing back and for prioritizing quality journalism, fact, and science over informational garbage.

In this book, I am going to tell you what I know. I’m going to talk through the recent history of political manipulation using digital tools, discuss how things look right now, and make educated guesses about what will come next. I’m also going to outline how we can respond and how we can reclaim our digital spaces. It’s going to take work.

The “Assault” on Reality and the Truth

If you don’t fund the State Department fully, then I need to buy more ammunition ultimately.

—Former Secretary of Defense James Mattis

It took a few years of studying computational propaganda for me to come to a simple but important revelation: technology is what people make of it. In the spring of 2016, I was in Austin for the South by Southwest (SXSW) conference to give a talk on how social media can be used to game elections. After the presentation, I went out to a downtown bar near the conference center with some friends and colleagues. It was full of the odd mixture of people you get at an event like SXSW: techies, politicians, musicians, filmmakers, students, businesspeople, and so forth. At one point later in the night, and after several drinks, a man who had attended my talk came up to me. Sporting a pinstriped three-piece suit, very gelled hair, and lots of gold jewelry, he was dressed like a combination of a Wall Street banker and a member of the mob.

He told me that he was intrigued by my talk and had never heard of chatbots (automated profiles built to mimic real people) being used on social media to spread political content. He said that he worked in communications for “a government” and that he had just been tasked with taking over its social media operations. He was deliberately vague about all of this, and I never did find out where he was from, beyond somewhere in the “Indian Ocean region.” I did learn that he had a proposition for me: Would I be interested in helping him build an army of bots to boost his government’s image over social media? I laughed out loud. I had just given a talk about the perils of doing exactly this kind of thing, and this guy was almost guilelessly trying to get me to do precisely the opposite. Unsurprisingly, I emphatically told him no. We left it there and went our separate ways.

On another, very different, occasion, I was approached by a curator from the Victoria and Albert Museum, an art and design museum in London. He was putting together an exhibit on the future of design and wanted to know if I could build some kind of Twitter bot to go in it. The idea I landed on, along with another collaborator, was to build a socially oriented bot that would automatically share content on how its bot brethren could be used in politics and other social discussions online. It could also, to a degree, chat with people about politics and life. This bot, under the account name @futurepolitica1, would be transparent about its “botness” as it deliberately sought to educate people about the political misuse of technology.
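
Under the hood, a transparent bot like this can be remarkably simple: a rotation of messages, a posting schedule, and a disclosure baked into every post. What follows is a minimal illustrative sketch in Python, not the actual @futurepolitica1 code; the post() function here is a hypothetical stand-in for an authenticated platform client.

    import itertools
    import time

    # Hypothetical stand-in for a platform API call; a real bot would post
    # through an authenticated client for the platform in question.
    def post(text):
        print(f"[@futurepolitica1] {text}")

    # Every message discloses the account's "botness" up front.
    MESSAGES = [
        "I am a bot. Automated accounts like me can be used to inflate hashtag trends.",
        "I am a bot. Bot armies can make fringe ideas look mainstream.",
        "I am a bot. Ask me how political bots are used around the world.",
    ]

    def run(total_posts=3, interval_seconds=5):
        """Post a rotating educational message on a fixed schedule."""
        rotation = itertools.cycle(MESSAGES)
        for _ in range(total_posts):
            post(next(rotation))
            time.sleep(interval_seconds)  # hours apart in production, seconds here

    run()

The same skeleton, minus the disclosure, is what a covert political bot looks like; the difference between the two is a design choice, not a technical one.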

The takeaway from these two separate stories is that a bot—or a VR program, a human-sounding “digital assistant,” or a physical robot—can be built either to control channels of communication or to liberate those same channels. The tools that are already here, and those that are coming, can be harnessed for war or for peace, for propaganda or for art. How these tools are used depends on who is behind the digital wheel. Most democratic nations can agree on certain unalienable human rights, but when it comes to how technology is used to manipulate, consensus is more difficult to reach. That is because the problems we face are not simply technical but social.

When I first started looking into how social media bots were being used to, say, defame activists online in the Middle East, it was easy for me to get hung up on the idea that these seemingly smart machines were automatically sending out cascades of harassment and spin. When I dug deeper, though, I realized that the vast majority of these campaigns were technologically rudimentary. The bots being used were simple to build, simple to launch, and simple in their communication. They repeated the same attacks and used the same hashtags over and over. The real problem was the people who launched the bots, and the people who paid for them. They were the conniving ones who came up with the idea of using bots to create the illusion of large-scale public online campaigns. It was humans who figured out that they could generate false hashtag trends on Twitter—there for everyone to see and click on via the site’s sidebar—by using armies of bots to massively boost the number of times a hashtag was shared.
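
A toy simulation makes the point about how rudimentary these operations can be. The sketch below assumes invented stand-ins (a post() endpoint and a trending() function that simply counts hashtag mentions) rather than any real platform’s code, but it captures the basic trick: identical automated posts inflating a hashtag until a naive popularity ranking surfaces it.

    import random
    from collections import Counter

    feed = []  # (account, text) pairs standing in for a public timeline

    # Invented stand-in for a platform's posting endpoint.
    def post(account, text):
        feed.append((account, text))

    # A deliberately naive trending algorithm: rank hashtags by raw count.
    def trending(top_n=3):
        tags = [w for _, text in feed for w in text.split() if w.startswith("#")]
        return Counter(tags).most_common(top_n)

    # A few genuine users post organically...
    for user, tag in [("alice", "#election"), ("bob", "#election"), ("carol", "#weather")]:
        post(user, f"my honest take {tag}")

    # ...and an army of rudimentary bots repeats the same canned lines.
    CANNED = ["So true! {t}", "Everyone is talking about this {t}", "RT if you agree {t}"]
    for i in range(1000):
        post(f"bot_{i:04d}", random.choice(CANNED).format(t="#smear_target"))

    # The manufactured hashtag now dominates the "trends" list.
    print(trending())  # [('#smear_target', 1000), ('#election', 2), ('#weather', 1)]

Real platforms weight trends by far more than raw counts, but the arms race between bots mimicking organic volume and platforms refining their detection follows this same pattern.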

Shifts in Technology, Shifts in Society

In 1991 the company Virtuality Group released the first networked, multiplayer VR system for public use: the Virtuality 1000 series. Users experienced the platform through a bulky stereoscopic helmet and handheld joysticks, and a handful of arcades offered the public playtime with the new system. Systems for home use cost up to a whopping $73,000—just shy of $140,000 in today’s dollars.5 In the decades since, VR has become much more accessible. Today you can pick up a Samsung Gear VR headset, which pairs with Samsung smartphones, for around $50. Yesterday’s VR experiences offered blocky, low-resolution simulation games like Dactyl Nightmare—a multi-level game not unlike the original Donkey Kong. Today’s VR is plugged into budding social networks through apps like Facebook Spaces that are immersive and much more realistic. And VR is now being used for political ends, including indoctrination. Governments around the world are even beginning to use these systems to “train” ideal citizens.6

It is an understatement to say that things are changing, technologically and socially. The political bots and social media advertising campaigns that propagandists and political campaigns used to clobber reality during the 2016 US presidential campaign are becoming more sophisticated. They still require human guidance to be effective, but they are becoming steadily more automated—and more powerful. If we don’t adapt to these changes, we run the risk of the global public completely losing trust in any information they encounter online.

Some researchers and pundits have suggested that social media and the internet have become the latest tools of war, that Facebook, YouTube, and Twitter have been weaponized by the powerful.7 They argue that countries now use these digital weapons to attack one another in a battle of likes, retweets, and comments and that whoever wins on the virality front wins the day. It’s true that groups in positions of power—militaries and governments among them—now use online communication platforms to spread propaganda and attack their opposition. Examples of these tactics abound, including, of course, the Russian influence campaign in the 2016 US election. But this isn’t the whole story.

No media tool, from a book to a virtual simulation, is a weapon in and of itself. Social media are not actual weapons, and they aren’t just used in information warfare. Widespread social problems created by national and global spikes in polarization and nationalism are primarily that—social. Online efforts to dupe people into donating money to scammers or false news campaigns designed to make money through clicks and views are economically driven. Campaigns to sway people’s votes by using Twitter to falsely make a politician or idea seem more popular are political.

If we think of computational propaganda and other misuses of social media and technology simply as warfare, then we will fail to effectively address other underlying and complex issues. It is a combination of social, economic, and political problems that spurs manipulative uses of social media in the first place. There is more going on here than just the desire to do battle; this is more than simply a fight between those with access to troops and tanks. To solve the underlying issues we must not think in terms of defense and offense, but rather in terms of diplomacy and human rights. We must acknowledge that what we face is a broad and deep societal issue as well as one driven by new technology.

Reddit, Gab, Periscope, WhatsApp, WeChat, KakaoTalk, Instagram—all of these sites or applications, and hundreds of others like them, are social networking services or social media. Virtual reality and augmented reality are, similarly, immersive media tools. All of these function as communication technologies. They are vessels for spreading information. The idea that any of these technologies, or any of the artificial intelligence or machine-learning (ML) capabilities that underpin them, can be weaponized exaggerates fear about pieces of code while overlooking the human role in uses of technology for purposes neutral, good, or evil.

Tools are not sentient—they do not act on their own. There is always a person behind a Twitter bot, a designer behind a VR game. A bot is just a way of automating and scaling what a human does online. Social media websites were designed by the Mark Zuckerbergs and Jack Dorseys of the world in order to connect people and, in so doing, make money. Many people, and not just their creators, thought that these new platforms would be phenomenal tools for advancing democracy. They would allow activists in Egypt to communicate about a revolution against an authoritarian regime. They would facilitate coordination among journalists breaking a story on global rings of corruption. But—and here lies the failure of these platforms and those who are supposed to regulate them—they could also be used to control people, to harass them, and to silence them.

It is not just governments that have figured this out. Well-resourced actors, including politicos and corporations, special interest groups, intelligence agencies, and wealthy individuals, also use social media in attempts to manipulate not only what we read, see, hear, and watch online but also how we feel and what we believe. It is undoubtedly people with access to lots of money, time, and know-how who use social media most successfully to influence politics and social life. They’re also the ones who are best able to manipulate the variety of emergent technologies, from deepfake videos to deep learning (DL), for their own selfish means and ends. But regular people and small far-right and far-left political groups have also figured out how to game trends on Twitter and control conversations on Facebook to achieve their own goals. There has been an opening up of who can sway public opinion and how they can do it.

We need to act now to prevent the misuse of tomorrow’s technology. This book walks through the past, present, and future of how computer- and internet-based tools are used to undermine reality and the truth. There are lots of stories in here about how we got to where we are, but there are also many stories about things that aren’t yet in the news, that have not yet provoked a congressional hearing. There is also serious discussion about the potential problems posed by the use of new and future tools—alongside proposed solutions to these problems.

This book does not paint a doom-and-gloom picture of our technological world. It isn’t a treatise on how technology companies screwed up or on how the addiction to social media of one particularly egotistical politician changed history. I talk about these things, but I focus much more on a variety of new media technologies and what we can do to ensure that they are used to build up the tenets of democracy rather than undermine them. This book takes an informed and cautiously optimistic approach to addressing the problems at hand. The truth is not broken yet. But the next wave of technology will break the truth if we do not act.

We live in a time when the quest to control reality has become something of a game, one rooted in the ability to exploit the latest communication technologies in efforts to prioritize one notion of reality over another. That game is mostly played by the political elite and by disproportionately vocal extremist groups. We do not, however, have to play by their rules.

From Propaganda to Computational Propaganda

Jamal Khashoggi, the journalist murdered in late 2018 under extremely suspicious circumstances in the Saudi Arabian consulate in Istanbul, lived through the shift from the old world of propaganda to the new technological era of bending social reality. He, like other reporters around the world, saw Twitter and other social media networks become arenas for spreading the latest news and information. He and his colleagues also eventually realized that these tools were simultaneously being co-opted by governments—including Saudi Arabia—for their own Machiavellian purposes.

Khashoggi, publicly a cautious critic of Saudi policies, left his home country after experiencing a spate of harassment online and offline. Before leaving, he had been banned by the Saudi royal family from writing publicly or making media appearances.8 The government there, like many other governments around the globe, still worked to exert control over all forms of media, but the Saudi government had also broadened its propaganda horizons. Khashoggi was also told not to use Twitter. In exile, once he had taken up a position as a columnist for the Washington Post, he defied that directive. But his personal and professional life online, and consequently aspects of his offline life, became untenable. According to the New York Times, Khashoggi experienced an orchestrated and tireless social media trolling campaign in the months leading up to his murder.9 A team of Saudi “image makers” worked to defame and attack the journalist at every turn.

The trolls acted, according to the Times, at the behest of Saudi crown prince Mohammed bin Salman. Thousands of posts on Twitter targeted Khashoggi and his closest colleagues with vitriol and threats while simultaneously building up the Saudi government. By all accounts, in the period just before he was beaten and strangled to death, Khashoggi’s online life had become a living hell. He could not log onto Twitter without being barraged with disinformation, harassment, and hate. After the journalist’s death, a similarly planned propaganda campaign worked to contradict allegations that the crown prince had ordered the killing. Armies of both bot-driven automated Twitter profiles and human-led accounts were instrumental in defaming and tearing down someone who, according to his friends and colleagues, was a tireless and fair-minded journalist.

The rise of digital disinformation and online political harassment—what I call “computational propaganda,” Facebook calls “information operations,” and most people call “fake news”—is a new way to manipulate people by using automated online tools and tactics.10 It is used to target journalists, like Khashoggi, but it’s also used to target politicians, public figures, and the general public. During the 2016 US election, numerous such online attacks, originating from both Russia and inside the United States, were used in attempts to manipulate the American people. Similar campaigns have been conducted around the world, orchestrated by world leaders and fringe political groups, from Duterte’s troll machine in the Philippines to bin Salman’s image polishers in Saudi Arabia.

While powerful political groups, from governments to militaries, still run the best-resourced and most pervasive campaigns, others have begun to adopt computational propaganda in their own amplification and suppression efforts. Even that outspoken person we all know on platforms like Twitter, Instagram, or Facebook can pay an illicit bot builder on a website such as Fiverr to get 1,000 or 10,000 automated accounts to amplify their rants about current events. But even as the political noise on social media becomes unbearable, things are changing. The tactics of computational propaganda are progressing and new tools are emerging. Trolling campaigns and botnets (groups of bots) are becoming more subtle and harder to track. Politicos are now beginning to seize upon advances in artificial intelligence to leverage the already widening rifts in society for political gain. They deploy smart technology to do the dirty work of campaigns: AI-doctored videos, increasingly individualized online political advertising campaigns, and facial recognition technology are among the tools used for these ends.

Propaganda, in and of itself, is certainly nothing new. The idea of manipulating how people think—and what people think about—has been around since at least ancient Greece.11 The Greek origin myths and the legacy of the gods—of Zeus sitting atop Mount Olympus dictating weather patterns and striking down wayward mortals—were used to make grand political claims and lend legitimacy to dynasties.12 In more recent conflicts, and during many elections in contemporary history, propaganda has played a key role in molding behavior and belief. The Cold War spurred a unique and memorable barrage of both Soviet and US propaganda.13 Airborne leaflet propaganda—dropping purposefully crafted information on unsuspecting crowds from planes—is a form of psychological propaganda that originated as far back as World War I and continues to be employed today over war-torn regions (in Syria, for instance).14

In some ways we are experiencing Cold War propaganda strategies today, amplified by powerful technology. But it’s important to underscore the aspects of computational propaganda that are distinct from the propaganda of yesterday. What began as warfare tactics have become the political communication methods of the guy next door. Most obviously, this new version of manipulative information can be automated and is often completely anonymous.

Reviews

  • "What makes 'The Reality Game' worth a read is Woolley's focus on the upcoming wave of new technologies; arguable deep fakes; virtual reality, and machine learning."—Engineering and Technology
  • "A well-informed cautionary tale on alarming issues that show no signs of abating as disinformation continues to proliferate."
    Kirkus Reviews
  • "While the rest of us have been processing our shock at the fake news crisis, Sam Woolley has been anticipating what's coming next. This a mind-blowing and essential book for a future that's practically already here, whether we know it or not. This book scares the hell out of me, but if we listen to Woolley's wake-up call, then I also have hope."—Jane McGonigal, author of Reality is Broken: Why Games Make Us Better and How They Can Change the World
  • "Long before most of the world had ever heard the terms 'disinformation,' 'trolls,' 'bots,' or 'fake news,' Sam Woolley had begun to systematically study these new, destructive forces eroding democracy in the digital age. The Reality Game synthesizes his deep and original knowledge on this subject into a readable and compelling book. Scholars of computational propaganda, policymakers in Washington, Brussels, and Silicon Valley, as well as all citizens of the world concerned about truth, facts, and democracy must read this book."—Michael McFaul, professor of political science at Stanford University and former US ambassador to Russia
  • "This is a crucial book for understanding online misinformation, disinformation, and outright propaganda. Like a good doctor, Sam Woolley has given us an excellent diagnosis of this problem and laid out a treatment plan. Now it's up to politicians and the public to heed his timely advice."—Tim O'Reilly, founder and CEO of O'Reilly Media

On Sale: Jan 7, 2020
Page Count: 272 pages
Publisher: PublicAffairs
ISBN-13: 9781541768253

About the Author

Dr. Samuel Woolley is a writer and researcher specializing in the study of automation/AI, emergent technology, politics, persuasion, and social media. He is an assistant professor in the School of Journalism and program director for computational propaganda research at the Center for Media Engagement, both at the University of Texas at Austin. Prior to joining UT, Woolley founded and directed the Digital Intelligence Lab at the Institute for the Future, a 50-year-old think tank based in the heart of Silicon Valley. He also cofounded and directed the research team at the Computational Propaganda Project at the Oxford Internet Institute, University of Oxford. He has written on the political manipulation of technology for a variety of publications, including Wired, the Atlantic Monthly, Motherboard/VICE, TechCrunch, the Guardian, Quartz, and Slate. His research has been featured in publications such as the New York Times, the Washington Post, and the Wall Street Journal, and on The Today Show, 60 Minutes, and Frontline. His work has been presented to members of NATO, the US Congress, and the UK Parliament, and to numerous private entities and civil society organizations. His PhD is from the University of Washington. He tweets from @samuelwoolley.
