Dawn of the Code War

America's Battle Against Russia, China, and the Rising Global Cyber Threat

Contributors

By John P. Carlin

By Garrett M. Graff

Read by Kevin Stillwell

The inside story of how America’s enemies launched a cyber war against us, and how we’ve learned to fight back

With each passing year, the internet-linked attacks on America’s interests have grown in both frequency and severity. Overmatched by our military, countries like North Korea, China, Iran, and Russia have found us vulnerable in cyberspace. The “Code War” is upon us.

In this dramatic book, former Assistant Attorney General John P. Carlin takes readers to the front lines of a global but little-understood fight as the Justice Department and the FBI chase down hackers, online terrorist recruiters, and spies. Today, as our entire economy goes digital, from banking to manufacturing to transportation, the potential targets for our enemies multiply. This firsthand account is both a remarkable untold story and a warning of dangers yet to come.

Excerpt

INTRODUCTION

The Code War

MY FIRST INTERACTION with Barack Obama’s presidential campaign in 2008 was explaining to them that their computers had been hacked by the Chinese government. Eight years later, one of my final cases as assistant attorney general for national security was chasing Russia’s attempts to influence the presidential election through hacking the Democratic National Committee and Hillary Clinton’s campaign. In between, I was privileged to serve with those on the front lines of the international fight to secure the internet, helping to combat online not just China and Russia, but also Iran, North Korea, terrorists, organized crime groups, and even lone hackers. Yet even as I left office in 2016, it was clear that the nation’s efforts against hackers remained insufficient.

We thought the nation had awoken to the cyberthreat after North Korea’s attack on Sony; we thought it had happened after the hacking of 22 million of the federal government’s personnel records. But even in those final months of the administration in 2016, the national security apparatus debated what it should say publicly about the Russian hacks—and how soon it should speak. The answer—unfortunately—was too little, too late. And even now, after the damage and the effect are clear, there’s no sign that the hacks caused any policymakers in Washington to change course as radically as we need to ensure our security going forward. That practiced ignorance is hardly a new invention. Way back in 2012, one of my Justice Department colleagues, Christopher Painter, who had dedicated years of his life to fighting cyberthreats, grew frustrated with the number of “wake-up calls” he’d lived through. He said then that cybersecurity was infected by “a wake-up call with a snooze button.” As he explains it, “You would have, at least early on, a number of incidents which people would get very excited about. There would be a lot of publicity around them. They make an impact for a short period of time and then they would fade away.”1

That pattern continues. In the year following the attack on the 2016 election, we saw the hacking of Equifax—wherein the personal, intimate life details of effectively every adult American were stolen—and word came, too, of a new type of security vulnerability in the Intel chips that power today’s technology, a flaw that affects nearly every device manufactured since 1995. The scale of these problems should make it clear that ignoring or wishing away cybersecurity concerns cannot be the answer. This game is being played under the table every day by governments, criminals, and other online adversaries—yet it’s one that increasingly affects our daily lives and our personal security.

Cybersecurity isn’t just a wonky IT issue. Poor security online represents a genuine threat to the American way of life—one that will only accelerate as more of our day-to-day lives move online, into the cloud, and into the digital world. Cybersecurity, it turns out, is key to modern life. It’s essential to the way we bank, shop, learn. Increasingly, it’s a necessity for the way we drive, heat our homes, and even vote. There is no longer such a thing as e-commerce, only commerce. Protecting our digital lives is no longer just about ensuring we don’t lose our family pictures—it’s about protecting our values, our health, our culture, and our democracy. The attacks of the last decade by nation-states, organized crime groups, and even individual hackers threaten to undermine trust not just in our institutions but also in the very information that powers our society, from financial and medical records to the news that informs our society.

Cyberspace got its start on a street in Vancouver. In the early 1980s, writer William Gibson was walking down Vancouver’s Granville Street, the Canadian city’s neon-lit, undersized version of the Las Vegas Strip. Gibson had started writing science fiction just a few years earlier and had been trying to evolve his thinking and the genre past the outer space fascination of his youth.2 The spaceship didn’t capture his imagination. “I was painfully aware that I lacked an arena for my science fiction,” he recalled later. His early work to that point had focused on the interactions of humans and technology—so-called cybernetics, the science of how communications and automatic control systems work in both machines and living things.

As Gibson proceeded down the brightly lit but seedy Granville, past the fading theaters, the pizza stores, strip clubs, and pawnshops, he passed a video arcade—and inspiration arrived. Looking inside, he realized he was staring into another world; the kids were totally enveloped by the blinking lights and beeping of their primitive plywood arcade games. “I could see the physical intensity of their postures, how rapt the kids inside were,” he later recounted during an interview with Whole Earth Review. He felt he could see the “photons coming off the screens into the kids’ eyes, neurons moving through their bodies, and electrons moving through the video game.” Sure, it was only Pac-Man or Space Invaders, but these machines transported the players to another dimension. As Gibson said, “These kids clearly believed in the space games projected. Everyone I know who works with computers seems to develop a belief that there’s some kind of ‘actual space’ behind the screen, someplace you can’t see but you know is there.”

Yet the moment when computers would be a real part of daily life still seemed far away; the computers he knew at the time were the “size of the side of a barn.” Then, Gibson passed a bus stop with an advertisement for Apple Computers, the upstart technology firm led by wunderkind Steve Jobs. He stopped again and stared at the life-sized businessman in the ad. As Gibson recalled, the businessman’s neatly cuffed arm was holding an Apple II computer, which was vastly smaller than the side of a barn. “‘Everyone is going to have one of these,’ I thought, and ‘everyone is going to want to live inside them,’” he recalled. “Somehow I knew that the notional space behind all of the computer screens would be one single universe.”

But what to call this new thing, this new place where we would live our future lives? Gibson sat down with a Sharpie and a yellow legal pad to brainstorm. He hated his first two ideas: infospace and dataspace. Then on his third try, inspiration hit: cyberspace. It was perfect—it meant something and also nothing. “All I knew about the word ‘cyberspace’ when I coined it, was that it seemed like an effective buzzword. It seemed evocative and essentially meaningless. It was suggestive of something, but had no real semantic meaning, even for me, as I saw it emerge on the page,” he recalled. The term appeared in his short story “Burning Chrome” in 1982 and hit the mainstream when he used it in 1984 in his debut novel, Neuromancer, a book I devoured as a child.

In Neuromancer, Gibson introduced the term by writing, “Cyberspace. A consensual hallucination experienced daily by billions of legitimate operators, in every nation, by children being taught mathematical concepts…. A graphic representation of data abstracted from the banks of every computer in the human system. Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters and constellations of data.”

Over the coming years that was almost exactly what cyberspace came to define—a world where the virtual and the physical met and, increasingly, reshaped the other. The collective hallucination that would drive so much of society by the turn of the century could be glimpsed in children of that era. Many of us who ended up on the leading edge of cybersecurity and cybercrime issues were first hooked by science fiction and video games as kids. I, for one, was almost literally one of the kids in Gibson’s Vancouver arcade. Taken on a family ski trip to Canada around the same time, I ended up with $10 worth of quarters and lost the afternoon immersed in video games at an arcade rather than on the slopes. For Shawn Henry, who later led the FBI’s Cyber Division, it was the magic of computers on display on Star Trek. For Steven Chabinsky, a lawyer who later worked alongside me at the FBI, it was when a cousin got a Radio Shack TRS-80 in 1979 and let him start playing a then cutting-edge text-based game, Adventure, a game all but forgotten today that figures prominently in the memories of many early computer pioneers. As Chabinsky recalls, “There was no graphics, of course, back in these days. You had to type out directions to turn right. And then it says: ‘A nasty elf has come at you: What do you do?’ And you say: ‘Fight elf.’ And it says: ‘Elf killed you.’ I just thought this was remarkable. It was, to me, artificial intelligence.”3 Later, in high school, Chabinsky worked every day after school to save money to buy an Apple II+.

Gibson’s definition of a new world also helped establish one of its consistent trends: it was the science fiction writers, the fantasists, who pointed the way toward the future we were building in the real world. From Neuromancer and The Shockwave Rider to the movie WarGames to TV series like Black Mirror to the novel and 2018 movie Ready Player One—about the coming world of virtual reality—we saw the threats and challenges that would someday end up on the government’s plate first played out in fiction.

As it turned out, I devoted much of my career to securing this amorphous, evolving space—trying to figure out how to impose the laws and rules of the physical world on the ever-shifting virtual one. The answer wasn’t always obvious; how, after all, do you police a place you can’t see but you know is there?

Today, it’s impossible to truly capture the cost of cybercrime. It’s not like the early days of the FBI when you could just total up the cost of the nation’s stolen cars or add up the amount of money that walked out the front door with bank robbers like John Dillinger. Instead, there’s both a real cost—the actual dollars stolen from bank accounts, businesses, and individuals—and a more subtle cost—the value of the ideas, designs, and intellectual property compromised and stolen by hackers and used to their own advantage. There’s also an enormous amount of lost productivity as tens of millions of people are forced to cancel and change credit cards and deal with the hassles that arise from the theft of their personal information. However you calculate it, even conservative estimates put annual cybercrime losses in the hundreds of billions of dollars—particularly when you start to factor in the cost of destructive attacks that “brick” computers and force companies to replace sometimes thousands or even tens of thousands of machines.

In February 2018, the Center for Strategic and International Studies and the security firm McAfee calculated cybercrime’s cost at around $600 billion a year—a number larger than the GDP of all but about 20 of the world’s countries, larger even than the economies of powerhouses such as Sweden (about $500 billion), Thailand ($410 billion), or Poland (about $500 billion). Put another way, cybercriminals steal more than the value of all the work that Egypt’s 95 million people create over the course of a year. Globally, economists believe that the internet generates between $2 trillion and $3 trillion a year of the world’s GDP. That means that perhaps as much as one-fifth of the internet’s total value is disappearing due to cybertheft each year.4 It’s a number that we would find unacceptable in any other sector of the economy—and we should find it just as unacceptable in the digital economy.5

My own experience in two particular cybercrime cases—the take-down of the GameOver Zeus botnet and the indictment of Chinese army officials for economic espionage—perfectly illustrated the challenges of understanding the impact and total losses from cybercrime.

We knew that the Russian and Eastern European hackers running the GameOver Zeus botnet had stolen tens of millions in actual money; the FBI had stopped counting when they calculated 100 million US dollars and 100 million euros; we could see where the money was missing—including a single theft of $6.9 million on November 6, 2012, and a single US bank that lost $8 million over just 13 months. We talked with the small businesses that had been crushed by their losses. FBI Special Agent Sara K. Stanley, who interviewed a dozen GameOver Zeus victims across Iowa and Nebraska, recalls that her conversations were heartbreaking. “It made it so much more human,” she says. “When you’re talking about a bank or a business in rural Iowa, that really affects you. For a lot of people, their trust in the banking system was really affected.” For those smaller businesses, the GameOver Zeus losses were crippling. While the government protects individuals from being responsible for losses stemming from bank fraud, no such provision exists for businesses. The thefts easily wiped out a year’s profits or more.

Most Americans have little understanding of the dramatic economic rise of China, or of how much of that growth was powered by the theft of American secrets—both in basic technologies, like computing and solar panels, and in the military’s adoption of cutting-edge fighter and naval technologies. In barely two generations, China has leapfrogged from effectively a 19th-century agrarian economy to a cutting-edge, 21st-century powerhouse that, depending on the measurement, is either the largest or second largest in the world. American technological research-and-development dollars have unintentionally given China a leg up on almost every facet of that transformative economic growth.

American workers are already competing against Chinese versions of the very same products they originally invented, and if someday the United States and China end up in a military conflict, America’s soldiers, sailors, airmen, and marines will find themselves fighting against their own technology. General Keith Alexander, who once headed the National Security Agency (NSA) and US Cyber Command, has explained for years that China’s electronic pillaging of US trade secrets represents the “greatest transfer of wealth in history,” totaling upward of $250 billion a year. It’s a staggering number, and one that has been playing out inside our corporate, university, and military computer networks for more than a decade. “It is clear that China not only is the global leader in using cyber methods to steal intellectual property, but also accounts for the majority of global intellectual property theft,” said Dr. Larry M. Wortzel, a former US Army intelligence officer and member of a commission that advises Congress on Chinese–US economic and security matters.6

But what were the long-term costs of the Chinese thefts? What about the costs for the companies that find themselves undercut in the Asian market—or even find lower-priced goods dumped back in the American market? What about the long-term price of China building up its own economy on the shoulders and backs of American innovation? What about the ultimate cost in lives of American servicemen and -women if we find ourselves in a military conflict and have our own stolen technologies used against us? Ever since the industrial revolution and Eli Whitney’s invention of the cotton gin, the US economy has thrived because we innovate faster—and better—than any country on earth. Over the last two decades, though, we’ve seen that lead, our nation’s core spirit, threatened and undermined by foreign powers stealing digitally that which they would never dare to steal in real life. It’s paramount to America’s economy that we ensure that countries around the world compete on an even playing field—that countries are competing based on their innovation, not benefiting by robbing others.

We do know that, cumulatively, cyberthefts come at a real cost to Americans especially, because the United States is the most connected and most advanced economy in the world. Studies have calculated that we lose about 200,000 jobs a year due to cybertheft, roughly an entire average month’s worth of job creation in 2016.7 That’s the entire population of Des Moines, Iowa, or Birmingham, Alabama, going unemployed or losing their job each year because of digital theft, piracy, and espionage. Europe faces its own large losses, perhaps as many as 150,000 jobs a year, the entire population of Oxford, England. The security cost for companies in today’s environment is not minimal either; Greg Rattray, a former air force officer who helped pioneer the fight against nation-states online, today serves as the head of cybersecurity at JPMorgan Chase, where he oversees a sprawling effort that spends more than $2 million each day on digital security.

The United States remains uniquely powerful—and uniquely vulnerable—in cyberspace. For now. Thanks to the government’s original investment during the Cold War in building a decentralized communications network that, it hoped, could survive a nuclear war, much of the original internet and computer revolution happened in the United States.

What we think of today as the rise of the computer was really two separate evolutions: one focused on large-scale corporate and government computer use, centered on the East Coast among defense companies and early tech giants such as IBM, and another, more organic personal computing revolution, centered on the West Coast around Stanford, Berkeley, and what would become Silicon Valley.

The two computing revolutions came with vastly different philosophies. The ethos of East Coast computing was solidly establishment, with deep ties to MIT, Harvard, and the Pentagon, whereas the West Coast was solidly 1960s counterculture.* It was a movement deeply distrustful of governmental power, a reaction to an era that saw the exposure of J. Edgar Hoover’s domestic spying, Watergate, the Church Committee, and the passage of the 1974 Privacy Act to restrict government information gathering. Another key West Coast voice, Stewart Brand, of the Whole Earth Catalog, gave his colleagues a rallying cry: “Information wants to be free.”

Those two revolutions blended together online in the 1980s and exploded in the 1990s as the World Wide Web began to transform the way Americans gathered information, shopped, traveled, and led their daily lives.

Even well into the 2000s, the United States continued to dominate online: in 2007, Director of National Intelligence Mike McConnell was shown a chart from the internet company VeriSign that traced how 80 percent of the world’s digital traffic passed through US wires and servers. That four out of every five bits and bytes came through America in 2007 actually represented a marked decline from the earlier days of the internet. In the 1990s, Richard Clarke, then the White House cyber coordinator, was told that 80 percent of the world’s internet traffic passed through just two buildings in the United States: known as Metropolitan Area Exchanges—MAE West and MAE East—the two little-known coastal buildings brought together internet connections from around the world. They had been created in the 1980s and 1990s as the US government transitioned the backbone of the internet to the private sector; little forethought had been put into where they went, and their creators little understood how critical they’d become. The eastern one had been planned by a group of engineers over lunch in 1992 at a Mexican restaurant and originally located in a walled-off corner of an underground parking garage in Vienna, Virginia.8 It outgrew the parking garage quickly—it soon handled fully half of the world’s entire internet traffic—but remained effectively hidden in plain sight, moving to the fifth floor of a nondescript office building in nearby Tysons Corner. The western exchange was located inside the 15-story Market Post Tower in downtown San Jose, California.

Few of the original creators of the internet understood just how integral it would become to modern life—that the decisions they made in setting up a primitive network among a small group of trusted and known colleagues would lead, down the road, to a technological transformation that would become ubiquitous in daily life, with first hundreds of millions and then billions of users. The rise of the “internet of things” will only accelerate these connections: by 2020, there may be as many as 20 billion devices connected to the internet.9

During the early era of the internet, security often remained an afterthought and authentication procedures were almost unheard of. The early internet connected a small community of like-minded engineers and scientists who intrinsically trusted each other. At every stage of the internet’s growth, we have systematically underestimated the threat that these systems would be exploited by unethical players.

Partly that gap was intentional. Securing things correctly can be slow, expensive, time-consuming, and annoying to users. “The fundamental problem is that security is always difficult, and people always say, ‘Oh, we can tackle it later,’ or, ‘We can add it on later.’ But you can’t add it on later,” recalled Peter G. Neumann, who has tracked computer security problems since 1985. “You can’t add security to something that wasn’t designed to be secure.”10 Too often, programmers simply push a product quickly to market and then patch holes and vulnerabilities as they’re pointed out. It’s so common that it has its own name: patch and pray.

David D. Clark, who was the internet’s chief protocol architect in the 1980s, recalled that when he recorded the seven key goals of the original internet inventors, they outlined that the system must support multiple types of communication services and networks, be easy to use, and be cost-effective. But “security” was nowhere on the list.11

Yet even as the wonders of the internet led to the frenzied dot-com bubble of the late 1990s, we began to see the cost of early shortcuts. The Y2K bug—a problem that arrived just as I came of age as a lawyer—stemmed from a conscious decision made early in the history of computer programming to store years as just two digits in date codes in order to save space. It made sense back during an era when punch cards could only store a limited number of characters, and, even though it was identified as early as 1958 as a future problem, the practice continued through the 1970s because memory remained expensive, costing as much as $1 a byte. For each individual company at the time, the trade-off seemed worth it in the moment. “It was the fault of everybody, just everybody,” said computer pioneer Bob Bemer, who was one of the first to identify the looming glitch.12 Ultimately, fixing the Y2K bug cost US companies and the US government an estimated $100 billion.13 Twenty years after Clark’s original paper on the internet’s goals, he’d revised the list. When in 2008 he was asked by the National Science Foundation to imagine a new internet, he put at the top of his list of goals one thing: security.14

America’s early lead online allowed us to remain at the forefront of technology; the world’s technology titans—and the largest companies of the last ten years—are, for now, still mostly US companies—Facebook, Google, Amazon, Apple, and others. Apple’s iPhone and Google’s Android operating systems dominate nearly all of the world’s cell phones. Yet, today, the internet is increasingly global—two out of every five users today are in Asia, with hundreds of millions more in China and India still waiting to be connected. By 2008, China could lay claim to the internet’s largest online user base, with nearly a quarter million new Chinese users joining the digital age each day; in 2017, official estimates held that over 50 percent of the country, about 731 million people, had access to the internet.15 China’s online shopping powerhouse Alibaba did $25 billion in business in 2017 on the country’s Singles’ Day, its equivalent of Black Friday.16 China is leading aggressively on new just-around-the-corner technologies, such as artificial intelligence and quantum computers, each of which will herald both new economic opportunities and huge security risks.

We’ve experienced a huge transformation online over the last decade—a transformation reshaping the lives of every American—as the world shifts from analog to digital. We stand on the cusp of a societal transformation no less profound than the one at the turn of the last century, which saw the industrialization of an agrarian economy and the shift from the horse-driven buggy to the motorcar. It’s an era that has already begun reshaping the global economy in ways that we’re only just beginning to understand, yet it’s one that’s happening faster and more broadly in society than we realize.

The United States faces an inflection point when it comes to the internet’s effect on daily life. What has enriched our economy and quality of life for the past several decades may start to hurt us more than help us—unless we confront its cybersecurity challenges. In a speech I helped research and craft when I worked for him, FBI Director Robert Mueller said in 2007, “In the days of the Roman Empire, roads radiated out from the capital city, spanning more than 52,000 miles. The Romans built these roads to access the vast areas they had conquered. But, in the end, these same roads led to Rome’s downfall, for they allowed the invaders to march right up to the city gates.”

The technology revolution that powered the nation’s growth for the last forty years turned the United States into the envy of the world. Just like the Roman road network of two millennia ago, the internet connects us to the world. Empowered by advances in technology such as cheap storage, increased bandwidth, miniaturized processors, and cloud architecture, we’re rapidly extending internet connectivity throughout our lives.


  • "John Carlin has written a crucial book- for practitioners and laymen alike-about the evolution, impacts, and implications of the abuses we've all witnessed, and many have personally experienced, in the cyber domain. Cyber is yet another example of the dual-edged nature of technology: huge benefit to mankind on one hand, and the potential for great harm on the other. And, unique to this book, is the historical description of how we have tried to respond to the harmful activities that occur all too frequently in the cyber domain. An interesting read, with vivid detail. John represents a superb amalgam of legal insight and great writing skill. A must read in my view."—James Clapper, New York Times bestselling author and formerDirector of National Intelligence
  • "This book is thrilling, important, and deeply fascinating. Cybersecurity is key to modern life: an imperative for us as a nation and each of us personally. It's about protecting our personal data, our businesses, and our democracy. John Carlin has been on the front lines, defending us against attacks from China, North Korea, Russia, Syria, and criminal gangs. The riveting stories of these secret battles for our digital safety teach us much about what America can-and must-do to protect itself."—Walter Isaacson, New York Times bestselling author of LeonardoDa Vinci
  • "By turns electrifying, illuminating, inspirational, and difficult to put down, [Dawn of the Code War] describes how 'criminals, terrorists, and spies' have used the Internet for their gain, and how the U.S. government along with international allies, has assessed and addressed these threats... Similar in energy to Carl Bernstein's All the Presidents Men, it informs of current cyberthreats while offering stirring success stories and cautions about the future of the code war... A deeply intriguing look into cybersecurity threats facing the United States that will fascinate anyone interested in technology and/or political intrigue." Library Journal
  • "Given the threats Carlin enumerates, including election hacking and the theft of intelligence files, responses "created and refined in real-time" are increasingly necessary-but not forthcoming. Given the lack of developed policy, if you're alarmed by the thought of Russian election tampering in 2016, you're likely to be even more so come the midterms-and by this dire book."—Kirkus Reviews

On Sale: Oct 16, 2018
Publisher: Hachette Audio
ISBN-13: 9781549170959

John P. Carlin

About the Author

John P. Carlin is the former Assistant Attorney General for National Security under Barack Obama, where he worked to protect the country against international and domestic terrorism, espionage, cyber, and other national security threats. A career federal prosecutor and graduate of Harvard Law School, John has spent much of the last decade working at the center of the nation’s response to the rise of terrorism and cyber threats, including serving as National Coordinator of the Justice Department’s Computer Hacking and Intellectual Property (CHIP) program, as an Assistant United States Attorney for the District of Columbia, and as chief of staff to then-FBI Director Robert Mueller.

Today, Carlin is the global chair of the risk and crisis management practice for the law firm Morrison & Foerster. He is also chair of the Aspen Institute’s Cybersecurity & Technology Program and a sought-after industry speaker on cyber issues as well as a CNBC contributor on cybersecurity and national security issues.

Learn more about this author

Garrett M. Graff

About the Author

Garrett M. Graff is an award-winning journalist who has spent nearly a decade covering national security. He also serves as executive director of the Aspen Institute’s Cybersecurity & Technology Program. A regular writer for WIRED and Bloomberg BusinessWeek, a former editor of both Washingtonian and POLITICO Magazine, and the founding editor of FishbowlDC.com, the first blog to cover the White House press briefings, he has an extensive background in journalism and in technology.

His oral history of Air Force One during 9/11 is under development as a movie by MGM, and his April 2017 WIRED cover story about the FBI’s hunt for an infamous Russian hacker has also been optioned for television. He is the author of The First Campaign: Globalization, the Web, and the Race for the White House (FSG, 2007); his most recent book is Raven Rock: The Story of the U.S. Government’s Secret Plan to Save Itself — While the Rest of Us Die.

Learn more about this author