We See It All

Liberty and Justice in an Age of Perpetual Surveillance

Contributors

By Jon Fasman

Formats and Prices

Price

$28.00 / $35.00 CAD


This investigation into the legal, political, and moral issues surrounding how the police and justice system use surveillance technology asks the question: what are citizens of a free country willing to tolerate in the name of public safety?

As we rethink the scope of police power, Jon Fasman’s chilling examination of how the police and the justice system use the unparalleled power of surveillance technology—how it affects privacy, liberty, and civil rights—becomes more urgent by the day. Embedding himself within police departments on both coasts, Fasman explores the moral, legal, and political questions posed by these techniques and tools.

By zeroing in on how facial recognition, automatic license-plate readers, drones, predictive algorithms, and encryption affect us personally, Fasman vividly illustrates what is at stake and explains how to think through issues of privacy rights, civil liberties, and public safety. How do these technologies impact how police operate in our society? How should archaic privacy laws written for an obsolete era—that of the landline and postbox—be updated?

Fasman looks closely at what can happen when surveillance technologies are combined and put in the hands of governments with scant regard for citizens’ civil liberties, pushing us to ask: Is our democratic culture strong enough to stop us from turning into China, with its architecture of control?
 

Excerpt

PREFACE

On the west side of Chicago Avenue between Thirty-Seventh and Thirty-Eighth Streets in South Minneapolis, there sits a long, low brick building housing a few small businesses. The building’s south side faces a patch of asphalt that in ordinary times could serve as a small parking lot; aside from two rows of glass-block windows near the top, the south side is solid brick. When I saw it, on a balmy July day, the building had been painted purple, with bluish clouds on top and vibrant yellow sunflowers below. Between the flowers and the clouds, in swooping block letters, was painted, “You Changed the World, George.”

That building sits down the street from the Cup Foods store, where on May 25, 2020, a Minneapolis police officer slowly choked George Floyd to death, in broad daylight and in full view of spectators and their cellphone cameras, as three of his fellow officers looked on. Millions of people saw the horrific video of the incident, which went viral on May 27.

I’m writing this sentence on the afternoon of July 12, just a couple of hours after my visit to the intersection now known as “George Floyd Square,” so I don’t know whether or how the weeks of protests that followed Floyd’s murder affected, or perhaps even determined, the presidential election—still months ahead as I write, but in the past as you read. Nor do I know what lasting reforms they will have wrought in American policing. But on this warm afternoon, the muralist seems to have gotten it right: George Floyd did change the world.

A couple of doors down from the purple building, I saw a boarded-up shopfront window covered in signs calling to “Defund & Dismantle the Police.” A row of flowers hanging on a clothesline framed a piece of laminated paper urging people to “creatively imagine a world without police.” On a bus shelter a few blocks away, a sign showed a balance, with a police car weighing down a school, a hospital, a bus, and houses, and asked, “Why are we fighting over crumbs for the things we need? #DefundMPD.”

Minneapolis’s city council pledged to abolish the city’s police department, and while as I write that hasn’t happened, the fact that the council passed such a measure at all shows how much the ground has shifted. Politicians, particularly Democrats, used to be terrified of appearing soft on crime. City politicians never wanted to get on the wrong side of the police and their unions. Now many otherwise left-wing, pro-union activists and politicians cast police unions as impediments to reform, and openly discuss curtailing their collective-bargaining power.

Two things link all of these developments. First, they involve a long-overdue rethinking of the rules around American policing. I don’t agree with all of them. I would rather see a world with good, well-regulated police than no police, for instance, and in my view, the city council acted precipitously in vowing to abolish the Minneapolis Police Department without a full consideration of what comes next. But as a society, we have asked the police to do too much. To the extent that “defund the police” means “redirect some share of the police budget to social service workers better equipped to handle many of the problems that, by default, police handle today,” I’m all for it. To the extent that it means “wantonly reduce police budgets for political reasons,” I’m against it. Vindictive, grandstanding policy is almost always bad policy.

Second, the protests escalated and spread across the world only after I submitted the manuscript for this book, on May 30, 2020. Hence this preface.

It’s true that the protests did not directly concern surveillance technology, the main subject of my book. But George Floyd’s apprehension and murder were recorded on security and cellphone cameras—both of which I discuss in these pages. During the protests that followed, cellphone cameras captured police officers assaulting unarmed, peaceful protesters and driving SUVs through crowds. In many instances, officers shot rounds of nonlethal rubber bullets at journalists (“nonlethal” just means they don’t kill you; the rounds still injured numerous people, some permanently, simply for engaging in First Amendment–protected activity). Thomas Lane, one of the four officers present at Floyd’s death, bolstered his motion to have his aiding-and-abetting charges tossed with transcripts from officers’ body-worn camera (bodycam) footage.

The eighty-two pages of transcripts make for horrific reading. They show that Floyd was, as he said, “scared as fuck,” pleading with officers, “Please, don’t shoot me.” Fifty-one times he told officers that he couldn’t breathe, at one point saying, “I’ll probably just die this way.” Without bodycams and cellphone cameras, Floyd’s death might have gone unnoticed by the wider world.

That points to an important benefit to camera footage that I perhaps paid too little attention to in my chapter on the topic. I was mainly concerned with the question of whether being filmed—by bodycams or cellphones—improves officer behavior, principally as measured by use-of-force and other civilian complaints. On that question, the social science is ambiguous: some studies have found that outfitting officers with bodycams results in fewer use-of-force complaints; others have found that doing so produces no noticeable changes.

But an immediate, universal, and objectively observable reduction in use-of-force complaints is neither the sole benefit of bodycams nor the only measure by which to judge them. In conjunction with social media, they also publicize bad behavior, and in so doing galvanize social and political pressure to stop it.

This may ultimately be how bodycams drive change: not because police chiefs and other senior officers review footage and decide to change policy, or even because individual officers change their behavior because they know they are being filmed (though of course many might), but because millions of people see something unjust and gruesome, and take to the streets to make sure it doesn’t happen to someone else. Had I delivered my manuscript just a few weeks later, I would have more strongly emphasized this point.

I would also have written more about police use of bulk location data and social media monitoring to surveil protesters. On June 24, Lee Fang reported in The Intercept—which does particularly good work on surveillance tech—that the FBI “modified an agreement” with Venntel, a firm that aggregates and sells cellphone location data, and “signed an expedited agreement to extend its relationship with Dataminr,” an AI firm that monitors social media.

Just over two weeks later, Sam Biddle, also writing in The Intercept, reported that “Dataminr relayed tweets and other social media content about the George Floyd and Black Lives Matter protests directly to police, apparently across the country”—despite Twitter’s terms of service ostensibly prohibiting “the use of Twitter data and the Twitter APIs by any entity for surveillance purposes, or in any other way that would be inconsistent with our users’ reasonable expectations of privacy. Period.” Among the tweets about which Dataminr alerted the Minneapolis Police Department was one that read simply, “Peaceful protest at Lake & Lyndale.”

A Dataminr spokesperson told The Intercept that “alerts on an intersection being blocked are news alerts, not monitoring protests or surveillance.” That distinction seems fuzzy and artificial. The real distinction between “news alerts” and surveillance, it seems to me, is who gets alerted. When I cover a protest, for instance, I’ll talk to people, ask their names and other identifying information, but they are at no risk of state-run violence or further intrusive monitoring from me. That’s not true when a police officer does the same thing (not to mention that people can more easily refuse to speak to me, or tell me to get lost, than they can a police officer).

It may not be reasonable to expect privacy for tweets, which are of course public. But it does seem reasonable to expect that if a company says it won’t allow third parties to exploit its data for surveillance purposes, then it won’t allow another company’s AI to monitor social media feeds on a scale impossible for humans, or to forward the tweets of peaceful protesters—people engaged in activities protected by the First Amendment—to police.

I was gratified to see, in the wake of the protests, that IBM stopped selling facial-recognition products, and that Amazon put a one-year moratorium on police use of its facial-recognition technology. Amazon hoped its moratorium might “give Congress enough time to implement appropriate rules” around the technology. Having covered Congress for the past few years, I have to admit that Amazon’s hope seems wildly misplaced, though I hope a new Congress and administration prove me wrong. IBM also called for “a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”

A couple of weeks after these firms announced their decisions, Detroit’s police chief admitted during a public meeting that the facial-recognition software his department was using misidentified subjects “96% of the time.” This was a striking revelation, and it bolstered the case for police departments to reject this uniquely dangerous technology. It also bolstered the case for everyone concerned about civil liberties to vigorously protest it. I do not think I overhype its dangers in this book—to my mind it represents a bigger threat to our privacies and liberties than any other technology I discuss—but I did underestimate how responsive tech firms that sell it would be. Activists, take note!

In early June, when the protests were at their peak, I wrote a long piece on police reform for The Economist. In the course of my reporting, I spoke to Barry Friedman, who runs New York University’s Policing Project. His favored reform was simple: better regulation. “It’s remarkable,” he told me, “that most states don’t have clear laws on use of force. Instead, every agency has its own use-of-force policy. The same is true with surveillance technology. It’s remarkable that we just leave it to the police themselves.”

In no other aspect of life would we even consider that approach. We don’t tell banks, “Just figure out how you want to run things; I’m sure it will all work out.” (Yes, thank you, I remember 2008, and I agree that America is often too lax when it comes to enforcing the rules of finance. But at least they exist.) Regulating the police, however, requires confrontation. That does not mean it should be done with hostility or anger, but people will have to push politicians to set rules and penalties for an entity that has long been left to regulate itself.

Before George Floyd’s murder, that seemed a heavy lift for most of America. Oakland did it early and well, as you’ll read in the book’s last chapter, and some other cities in the Bay Area followed suit. A few other places, mainly coastal or college towns, passed some sort of surveillance-oversight legislation. But the protests—and more importantly, politicians’ responsiveness to them, both rhetorically and in policy changes—suggest that I underestimated Americans’ readiness for police reform and regulation. I hope it’s not a passing fancy.




PROLOGUE

“A perfect architecture of control”

Sometimes the future reveals itself as the present.

On February 10, 2020, I flew from New York to Las Vegas. The Democratic primary campaign had begun in earnest. As The Economist’s Washington correspondent and a co-host of our new American politics podcast, Checks and Balance, I was on the road nearly every week from just after New Year’s Day until Covid-19 shut everything down. I had been in New Hampshire the previous week and Iowa before that, and I was about to spend three days in Nevada, followed by five in South Carolina—after a brief stop at home to do some laundry and make sure my wife and kids still recognized me and hadn’t changed the locks.

Delta’s cross-country and international flights both leave from the same terminal at JFK. Nearly all the gates had a placard advertising Delta’s facial-recognition boarding—blue, tall, and thin, with an outline of a face captured by the four corners of a digital camera’s viewfinder. “ONE LOOK AND YOU’RE ON,” it blared above the picture. A banner just below read, “You can now board your flight using Delta Biometrics, a new way to more seamlessly navigate the airport.” And, in tiny type at the bottom, down at foot level: “Boarding using facial recognition technology is optional. Please see an agent with any questions or for alternative procedures. Visit delta.com to view our Privacy Policy.”

About eight months earlier I had flown to Quito through Delta’s biometric terminal in Atlanta. Facial recognition was a strange novelty: the cameras weren’t working; most people boarding my flight, including me, walked right past them and had their tickets checked by a flight attendant. But apparently it worked well enough—or soon would work well enough—for Delta to roll it out more aggressively. I had noticed facial-recognition gates in Minneapolis and Detroit; in late 2019, Delta announced it would install them in Salt Lake City. Some 93 percent of customers boarded without issue, Delta said in a press release, and 72 percent preferred it to standard boarding.

Signs of Delta’s ambition can be found in the banner text below the picture, which mentions not just boarding, but the ability to “more seamlessly navigate the airport.” And indeed, Delta’s press release touted its “curb-to-gate facial recognition”: you can use your face to check in for your flight, check luggage, go through security, and board your plane. It’s all very convenient. If you’ve traveled internationally, the airlines already have your passport picture on file—either in their own database or accessible in one built by Customs and Border Protection.

My feeling about this program is clear: I opt out, and I hope, after reading this book, that you will too. Facial recognition imperils our civil liberties. Volunteering to use it normalizes it. The more you choose to use it, the more it will be used in ways and places that you do not choose. Whenever you have the option to avoid using it, you should take that option; you should do everything you can to slow facial recognition’s spread.

After I posted a picture of Delta’s placard online, a friend commented that at least Delta gave passengers the choice: he had flown Singapore Airlines to Tokyo and was required to board using facial recognition. That was relatively new: until July 2017, I was based in Singapore for The Economist and flew Singapore Airlines several times each month. They did not then use facial recognition. Future as present.

When I landed in Las Vegas, I found messages from several friends asking me what I thought of that day’s episode of The Daily, the New York Times’s daily news podcast. I hadn’t heard it, but on their advice I listened as I drove between meetings. It was the audio version of a terrific, terrifying story that Kashmir Hill had broken a couple of weeks earlier.1 Hill has been reporting on tech and privacy for about a decade; she is a perceptive, thoughtful, and engaging writer and a terrific reporter—one of the rare few whose stories tend to get better as they get longer.

This particular piece concerned a company called Clearview AI that had developed a facial-recognition app. When a user snaps and uploads a picture of anyone they see, the app tells them who that person is, thanks to Clearview’s database of more than three billion images scraped from the public domain, including Facebook, YouTube, and other widely used sites. That’s more than seven times as big as the FBI’s facial-recognition database.

To put it another way, if you are an American, there is a one in two chance that you’re in an FBI-accessible database. If you are a person in a first-world country, you’re probably in Clearview’s. Anyone who has Clearview’s app on their phone can learn in just a few seconds who you are, and with a little sleuthing they can find out much more: your address, employer, friends, family members, and any other information about you that may be online.

As I write, hundreds of law enforcement agencies use it, as do some private companies (Clearview declined to say which ones). Some of Clearview’s investors—and their friends—have used it as well: John Catsimatidis, a grocery-store magnate, happened to see his daughter on a date with someone he didn’t recognize.2 He asked a waiter to take the guy’s picture, which he then ran through Clearview. Within seconds, the app told him who his daughter was eating dinner with—a venture capitalist from San Francisco. Catsimatidis also used it in his stores to identify ice-cream thieves. (“People were stealing our Häagen-Dazs,” he complained. “It was a big problem.”)

Police love it; they say it helps them identify suspects quickly. But that convenience alone should not determine a product’s worth or legality. Many things incompatible with a free and open society—indefinite detention without charge, say, or the repeal of habeas corpus—would make the job of law enforcement easier.

And while police are the main customers now, nothing is stopping Clearview from selling its app to anyone who wants to buy it. Indeed, the founder of a firm that was one of Clearview’s earliest investors seems resigned to this possibility. He told Hill, “I’ve come to the conclusion that because information constantly increases, there’s never going to be privacy.… Laws have to determine what’s legal, but you can’t ban technology. Sure, that might lead to a dystopian future or something, but you can’t ban it.” If backing a technology that produces “a dystopian future or something” for everyone on earth makes him richer, then bring on the dystopian future.

Facebook and other social media sites ban others from scraping their images; Clearview did it anyway. Eric Schmidt, Google’s former chairman, said in 2011 that facial recognition was “the only technology that Google has built and, after looking at it, we decided to stop,” because it could be used “in a very bad way.”3

Clearview’s founder, Hoan Ton-That, displayed no such qualms. On The Daily, he did not sound evil; he sounded smug, callow, and indifferent. When Hill asked him about the implications of creating technology that “would herald the end of public anonymity,” she wrote, he “seemed taken aback”: “I’ll have to think about that,” he replied. One would have hoped he had done so before he invented his nightmare app.

Today, too much of our privacy, and too many of our civil liberties, depend on the whims of people like Ton-That. I’m sure that Mark Zuckerberg did not sit in his dorm room at Harvard dreaming of building a platform to help Russia undermine American democracy, but that’s what he did. He more likely dreamt of building something great, and world-changing—of being successful, of making his mark on the world. He did all that too. And today, he has a fiduciary responsibility to his investors to maximize their returns. He has no such obligation to the rest of us. If our civil liberties imperil the profits of tech entrepreneurs and firms that sell surveillance technology, they are free to choose the profits every time.

That is not because they are bad people. After all, even the CEOs of crunchy, green companies probably also care more about maximizing returns than about strangers’ civil liberties. But Clearview AI’s technology, and the panoptic powers of modern surveillance technology more broadly, when combined with low-cost and permanent digital storage—particularly in an era of resurgent authoritarianism and institutional weakness in developed countries—imperil our democracy in a way that we’ve never before seen. To put it another way: our liberty isn’t threatened by people buying more ethically produced yogurt or BPA-free water bottles; it is by people buying more of what Clearview is selling.

It is our responsibility to speak up for ourselves, our civil liberties, and the sort of world we want to see. Is it one in which any stranger can snap a picture of us and know everything about us? If not, it is up to us to prevent that world from emerging. In the coming chapters, I hope to show you why and how.

This book grew out of a series of articles I reported and wrote for The Economist in the first half of 2018 about how technology is changing the justice system—in particular, police work, prisons, and sentencing.4 I chose to focus on police and tech because police are perhaps the most tangible and familiar representatives of state power. If I told you that the National Security Agency, or the Chinese government, had surveillance technology that could overhear and store every conversation we had on our cellphones, or track all of our movements, you might be outraged, but you would probably not be surprised. I hope it would both outrage and shock you, however, to learn that every police department has that ability, with virtually no oversight about how it’s used. By writing about the police, I am really writing about state power.

When I was reporting, facial recognition was still largely theoretical to most people—it was not part of their lived experience. Some police departments had launched modest trial programs. A few small ones used it for limited purposes—Washington County, Oregon, began using it in 2017 to identify suspects. Today it’s in airport terminals. Even if Clearview were to fold tomorrow, another firm would probably do what it is doing.

Some critics argue the technology is unreliable, and it is—particularly, in America and Europe, for people of color. But that’s not really the point. Yes, facial recognition is dangerous when it’s unreliable, because it risks getting innocent people arrested. But it’s dangerous when it’s reliable, too, because it lets governments track us in public anytime. And it is getting more and more reliable.

License plate readers can track our cars, and they can go on as many police cars and city-owned light and telephone poles as political whim and budgets will allow. Devices that mimic cellphone towers, tricking our phones into revealing who we’ve called, what we’ve texted, and what websites we’ve searched, can now fit in the trunk of a car.

The architecture and infrastructure of a surveillance state are here. We’ve seen what it looks like in China, which now has more state-owned surveillance cameras than America has people. It has used its capacity to repress free speech and expression, monitor dissidents, and keep more than a million Muslims in modern-day concentration camps—and, when not locked up, under permanent, crushing surveillance.

The single most incisive and unsettling question I heard in the year I spent reporting for this book came from Catherine Crump, a law professor at the University of California, Berkeley, who directs the school’s Samuelson Law, Technology and Public Policy Clinic and codirects the Berkeley Center for Law and Technology. “We can now have a perfect architecture of control,” she told me—as China does. “What democratic practices do we need to not become China?” This book is a modest effort to answer that question.




1

TECHNOLOGY AND DEMOCRACY

How much state surveillance and control are you willing to tolerate in the name of public safety?

I’m in the front seat of a police SUV. Behind the wheel is a precinct supervisor, the profoundly good-natured Lieutenant Leo Carrillo, born and raised Down Neck, in Newark’s historically Portuguese Ironbound district. A Newark cop for twenty years, he has an encyclopedic knowledge of the city. Mark DiIonno is in the back. DiIonno is the police department’s public information officer, but before that he spent twenty-six years writing for the Newark Star-Ledger. He’s tough, smart, and streetwise, with a gruff and warm manner and a solid turn of phrase. Those qualities together give him the enviable air of having sprung fully formed from a Damon Runyon story.

We’ve been driving around for about four hours, with every corner rattling free a raft of stories and memories from both men: here’s the pizza parlor where Willie Johnson was shot; there’s the corner where a mother went down to buy milk for her three kids and got clipped in a running gun battle; my parents used to take us shopping here, now it’s a warehouse that has seen better days; Hawk’s Lounge is boarded up—looks like they’re renovating it—it used to be pretty rough.

It’s late Friday afternoon, a week before the summer solstice, and Newark is preening. New Jersey’s biggest city has a lousy reputation. Unless you’re lucky enough to know the city well, you probably think of it, if you think of it at all, as a punchline or an object of sympathy. Perhaps it conjures up the same set of associations as Detroit or Youngstown: violence, postindustrial decay, and abandonment. Newark is Rust Belt in spirit, economy, and circumstance, if not location: it deindustrialized along with other northeastern manufacturing towns, and nothing has really replaced the industry that left.

But what nobody ever tells you, and you can’t understand until you put in some time here, is how beautiful it can be—especially on one of those June afternoons so long and mild that you decide to put aside, even just temporarily, all of your quarrels with God and your fellow man and bask in the liquid gold. The art deco buildings downtown look like they’ve been bathed in honey. You wouldn’t want to skip through the overgrown vacant lots without a hazmat suit and thick-soled boots, but from across the street they really do look like little squares of sylvan paradise, with all the weeds in flower and bees droning lazily above.

Driving past Weequahic Park—a three-hundred-acre expanse in the city’s south ward, designed by the sons of the man who designed New York’s Central Park—we hear young kids laughing and playing tag. The surrounding area is tidy and cozy, a suburb built in the days of streetcars and modest incomes: two-story homes separated from each other by little strips of lawn, a car in every driveway.

Praise

  • "If you want to understand the stakes and the landscape of surveillance in your life—yes, yours right now—We See It All is an outstanding place to start. Fasman walks his readers through a meticulously balanced review of how police, corporations, local businesses, governments, and ordinary people conspire to exchange real privacy for the feeling of safety. An evocative storyteller, Fasman lays out his case that, because government regulation lags impossibly behind technological advances, the only salve for our predicament is collective awareness. And collective action. The writing is sober and sobering. And, though the recent fires of Minneapolis, Atlanta, Portland, and the nation have not centered squarely on surveillance, Fasman argues convincingly that the next ones very well might."—Phillip Atiba Goff, co-founder and CEO of the Center for Policing Equity and professor of African American studies and psychology at Yale University
  • "This powerful, engrossing book will challenge your assumptions about persistent surveillance. Jon Fasman makes a clear case for civil liberties and explains how our laws and public safety infrastructure must keep pace with the advancement of technology. It's a must-read for anyone interested in the future and the unintended consequences of artificial intelligence, data, encryption and recognition technology." —Amy Webb, founder of The Future Today Institute, author of The Big Nine and The Signals are Talking
  • “Jon Fasman has given us a stellar account of the use of surveillance technologies by the police. It’s comprehensive, even-handed, informative, and fun to read."—Barry Friedman, Jacob D. Fuchsberg Professor, New York University School of Law
  • "An urgent examination of police-state intrusions on the privacy of lawful and law-abiding citizens."—Kirkus Reviews
  • "A deeply reported and sometimes chilling look at mass surveillance technologies in the American justice system....This illuminating account issues an essential warning about the rising threat to America's civil liberties."
    Publishers Weekly

On Sale
Jan 26, 2021
Page Count
288 pages
Publisher
PublicAffairs
ISBN-13
9781541730670

Jon Fasman

About the Author

Jon Fasman is the Washington correspondent of The Economist, having previously been South-East Asia bureau chief and Atlanta correspondent. In addition to his work for The Economist, he is the author of two novels, both published by The Penguin Press: The Geographer’s Library, a New York Times bestseller in 2005 that has been translated into more than a dozen languages, and The Unpossessed City, published in the autumn of 2008, which was a finalist for the New York Public Library’s Young Lions Fiction Award. Fasman resides in Westchester County, N.Y.
