To Save Everything, Click Here
The Folly of Technological Solutionism
In the very near future, “smart” technologies and “big data” will allow us to make large-scale and sophisticated interventions in politics, culture, and everyday life. Technology will allow us to solve problems in highly original ways and create new incentives to get more people to do the right thing. But how will such “solutionism” affect our society, once deeply political, moral, and irresolvable dilemmas are recast as uncontroversial and easily manageable matters of technological efficiency? What if some such problems are simply vices in disguise? What if some friction in communication is productive and some hypocrisy in politics necessary? The temptation of the digital age is to fix everything — from crime to corruption to pollution to obesity — by digitally quantifying, tracking, or gamifying behavior. But when we change the motivations for our moral, ethical, and civic behavior we may also change the very nature of that behavior. Technology, Evgeny Morozov proposes, can be a force for improvement — but only if we keep solutionism in check and learn to appreciate the imperfections of liberal democracy. Some of those imperfections are not accidental but by design.
Arguing that we badly need a new, post-Internet way to debate the moral consequences of digital technologies, To Save Everything, Click Here warns against a world of seamless efficiency, where everyone is forced to wear Silicon Valley’s digital straitjacket.
To my parents
"In an age of advanced technology,
inefficiency is the sin against the Holy Ghost."
—ALDOUS HUXLEY
Silicon Valley is guilty of many sins, but lack of ambition is not one of them. If you listen to its loudest apostles, Silicon Valley is all about solving problems that someone else—perhaps the greedy bankers on Wall Street or the lazy know-nothings in Washington—has created.
"Technology is not really about hardware and software any more. It's really about the mining and use of this enormous data to make the world a better place," Eric Schmidt, Google's executive chairman, told an audience of MIT students in 2011. Facebook's Mark Zuckerberg, who argues that his company's mission is to "make the world more open and connected," concurs. "We don't wake up in the morning with the primary goal of making money," he proclaimed just a few months before his company's rapidly plummeting stock convinced all but its most die-hard fans that Facebook and making money had parted ways long ago. What, then, gets Mr. Zuckerberg out of bed? As he told the audience of the South by Southwest festival in 2008, it's the desire to solve global problems. "There are a lot of really big issues for the world to get solved and, as a company, what we are trying to do is to build an infrastructure on top of which to solve some of these problems," announced Zuckerberg.
In the last few years, Silicon Valley's favorite slogan has quietly changed from "Innovate or Die!" to "Ameliorate or Die!" In the grand scheme of things, what exactly is being improved is not very important; being able to change things, to get humans to behave in more responsible and sustainable ways, to maximize efficiency, is all that matters. Half-baked ideas that might seem too big even for the naïfs at TED Conferences—that Woodstock of the intellectual effete—sit rather comfortably on Silicon Valley's business plans. "Fitter, happier, more productive"—the refreshingly depressive motto of the popular Radiohead song from the mid-1990s—would make for an apt welcome sign in the corporate headquarters of its many digital mavens. Technology can make us better—and technology will make us better. Or, as the geeks would say, given enough apps, all of humanity's bugs are shallow.
California, of course, has never suffered from a deficit of optimism or bluster. And yet, the possibilities opened up by the latest innovations make even the most pragmatic and down-to-earth venture capitalists reach for their wallets. After all, when else will they get a chance to get rich by saving the world? What else would give them the thrill of working in a humanitarian agency (minus all the bureaucracy and hectic travel, plus a much better compensation package)?
How will this amelioration orgy end? Will it actually accomplish anything? One way to find out is to push some of these nascent improvement efforts to their ultimate conclusions. If Silicon Valley had a designated futurist, her bright vision of the near future—say, around 2020 or so—would itself be easy to predict. It would go something like this: Humanity, equipped with powerful self-tracking devices, finally conquers obesity, insomnia, and global warming as everyone eats less, sleeps better, and emits more appropriately. The fallibility of human memory is conquered too, as the very same tracking devices record and store everything we do. Car keys, faces, factoids: we will never forget them again. No need to feel nostalgic, Proust-style, about the petite madeleines you devoured as a child; since that moment is surely stored somewhere in your smartphone—or, more likely, your smart, all-recording glasses—you can stop fantasizing and simply rewind to it directly. In any event, you can count on Siri, Apple's trusted voice assistant, to tell you the truth you never wanted to face back then: all those madeleines dramatically raise your blood glucose levels and ought to be avoided. Sorry, Marcel!
Politics, finally under the constant and far-reaching gaze of the electorate, is freed from all the sleazy corruption, backroom deals, and inefficient horse trading. Parties are disaggregated and replaced by Groupon-like political campaigns, where users come together—once—to weigh in on issues of direct and immediate relevance to their lives, only to disband shortly afterward. Now that every word—nay, sound—ever uttered by politicians is recorded and stored for posterity, hypocrisy has become obsolete as well. Lobbyists of all stripes have gone extinct as the wealth of data about politicians—their schedules, lunch menus, travel expenses—is posted online for everyone to review.
As digital media make participation easier, more and more citizens ditch bowling alone—only to take up blogging together. Even those who've never bothered to vote in the past are finally provided with the right incentives—naturally, as a part of an online game where they collect points for saving humanity—and so they rush to use their smartphones to "check in" at the voting booth. Thankfully, getting there is no longer a chore; self-driving cars have been invented for the purpose of getting people from place to place. Streets are clean and shiny; keeping them that way is also part of an elaborate online game. Appeals to civic duty and responsibility to fellow citizens have all but disappeared—and why wouldn't they, when getting people to do things by leveraging their eagerness to earn points, badges, and virtual currencies is so much more effective?
Crime is a distant memory, while courts are overstaffed and underworked. Both physical and virtual environments—walls, pavements, doors, log-in screens—have become "smart." That is, they have integrated the plethora of data generated by the self-tracking devices and social-networking services so that now they can predict and prevent criminal behavior simply by analyzing their users. And as users don't even have the chance to commit crimes, prisons are no longer needed either. A triumph of humanism, courtesy of Silicon Valley.
And then, there's the flourishing new "marketplace" of "ideas." Finally, the term "marketplace" no longer feels like a misnomer; cultural institutions have never been more efficient or responsive to the laws of supply and demand. Newspapers no longer publish articles that their readers are not interested in; the proliferation of self-tracking combined with social-networking data guarantees that everyone gets to read a highly customized newspaper (down to the word level!) that yields the highest possible click rate. No story goes unclicked, no headline untweeted; customized, individual articles are generated in the few seconds that pass between the click of a link and the loading of the page in one's browser.
The number of published books has skyrocketed—most of them are self-published—and they are perfectly efficient as well. Many even guarantee alternative endings—and in real time!—based on what the eye-tracking activity of readers suggests about their mood. Hollywood is alive and kicking; now that everyone wears smart glasses, a movie can have an infinite number of alternative endings, depending on viewers' mood at a given moment as they watch. Professional critics are gone, having been replaced first by "crowds," then by algorithms, and finally by customized algorithmic reviews—the only way to match films with customized alternative endings. The edgiest cultural publications even employ algorithms to write criticism of songs composed by other algorithms. But not all has changed: just like today, the system still needs imperfect humans to generate the clicks to suck the cash from advertisers.
This brief sketch is not an excerpt from the latest Gary Shteyngart novel. Nor is it dystopian science fiction. In fact, there is a good chance that at this very moment, someone in Silicon Valley is making a pitch to investors about one of the technologies described above. Some may already have been built. A dystopia it isn't; many extremely bright people—in Silicon Valley and beyond—find this frictionless future enticing and inevitable, as their memos and business plans would attest.
I, for one, find much of this future terrifying, but probably not for the reasons you would expect. All too often, digital heretics like me get bogged down in finding faults with the feasibility of the original utopian schemes. Is perfect efficiency in publishing actually attainable? Can all environments be smart? Will people show up to vote just because they are playing a game? Such skeptical questions over the efficacy of said schemes are important, and I do entertain many of them in this book. But I also think that we, the heretics, need to take Silicon Valley innovators at their word and have just a bit more faith in their ingenuity and inventiveness. These, after all, are the same people who are planning to scan all the world's books and mine asteroids. Ten years ago, both ideas would have seemed completely crazy; today, only one of them does.
So perhaps we should seriously entertain the possibility that Silicon Valley will have the means to accomplish some of its craziest plans. Perhaps it won't overthrow the North Korean regime with tweets, but it could still accomplish a lot. This is where the debate ought to shift to a different register: instead of ridiculing the efficacy of their means, we also need to question the adequacy of the innovators' ends. My previous book, The Net Delusion, shows the surprising resilience of authoritarian regimes, which have discovered their own ways to profit from digital technologies. While I was—and remain—critical of many Western efforts to promote "Internet freedom" in those regimes, most of my criticisms have to do with the means, not the ends, of the "Internet freedom agenda," presuming that the ends entail a better climate for freedom of expression and more respect for human rights. In this book, I have no such luxury, and I question both the means and the ends of Silicon Valley's latest quest to "solve problems." I contend here that Silicon Valley's promise of eternal amelioration has blunted our ability to do this questioning. Who today is mad enough to challenge the virtues of eliminating hypocrisy from politics? Or of providing more information—the direct result of self-tracking—to facilitate decision making? Or of finding new incentives to get people interested in saving humanity, fighting climate change, or participating in politics? Or of decreasing crime? To question the appropriateness of such interventions, it seems, is to question the Enlightenment itself.
And yet I feel that such questioning is necessary. Hence the premise of this book: Silicon Valley's quest to fit us all into a digital straitjacket by promoting efficiency, transparency, certitude, and perfection—and, by extension, eliminating their evil twins of friction, opacity, ambiguity, and imperfection—will prove to be prohibitively expensive in the long run. For various ideological reasons to be explained later in these pages, this high cost remains hidden from public view and will remain so as long as we, in our mindless pursuit of this silicon Eden, fail to radically question our infatuation with a set of technologies that are often lumped together under the deceptive label of "the Internet." This book, then, attempts to factor in the true costs of this highly awaited paradise and to explain why they have been so hard to account for.
Imperfection, ambiguity, opacity, disorder, and the opportunity to err, to sin, to do the wrong thing: all of these are constitutive of human freedom, and any concentrated attempt to root them out will root out that freedom as well. If we don't find the strength and the courage to escape the silicon mentality that fuels much of the current quest for technological perfection, we risk finding ourselves with a politics devoid of everything that makes politics desirable, with humans who have lost their basic capacity for moral reasoning, with lackluster (if not moribund) cultural institutions that don't take risks and only care about their financial bottom lines, and, most terrifyingly, with a perfectly controlled social environment that would make dissent not just impossible but possibly even unthinkable.
The structure of this book is as follows. The next two chapters provide an outline and a critique of two dominant ideologies—what I call "solutionism" and "Internet-centrism"—that have sanctioned Silicon Valley's great ameliorative experiment. In the seven ensuing chapters, I trace how both ideologies interact in the context of a particular practice or reform effort: promoting transparency, reforming the political system, improving efficiency in the cultural sector, reducing crime through smart environments and data, quantifying the world around us with the help of self-tracking and lifelogging, and, finally, introducing game incentives—what's known as gamification—into the civic realm. The last chapter offers a more forward-looking perspective on how we can transcend the limitations of both solutionism and Internet-centrism and design and employ technology to satisfy human and civic needs.
Now, why oppose such striving for perfection? Well, I believe that not everything that could be fixed should be fixed—even if the latest technologies make the fixes easier, cheaper, and harder to resist. Sometimes, imperfect is good enough; sometimes, it's much better than perfect. What worries me most is that, nowadays, the very availability of cheap and diverse digital fixes tells us what needs fixing. It's quite simple: the more fixes we have, the more problems we see. And yet, in our political, personal, and public lives—much like in our computer systems—not all bugs are bugs; some bugs are features. Ignorance can be dangerous, but so can omniscience: there is a reason why some colleges stick to need-blind admissions processes. Ambivalence can be counterproductive, but so can certitude: if all your friends really told you what they thought, you might never talk to them again. Efficiency can be useful, but so can inefficiency: if everything were efficient, why would anyone bother to innovate?
The ultimate goal of this book, then, is to uncover the attitudes, dispositions, and urges that comprise the solutionist mind-set, to show how they manifest themselves in specific projects to ameliorate the human condition, and to hint at how and why some of these attitudes, dispositions, and urges can and should be resisted, circumvented, and unlearned. For only by unlearning solutionism—that is, by transcending the limits it imposes on our imaginations and by rebelling against its value system—will we understand why attaining technological perfection, without attending to the intricacies of the human condition and accounting for the complex world of practices and traditions, might not be worth the price.
Solutionism and Its Discontents
"In the future, people will spend less time trying to get technology
to work . . . because it will just be seamless. It will just be there.
The Web will be everything, and it will also be nothing.
It will be like electricity. . . If we get this right, I believe we
can fix all the world's problems."
blinds us to questions of our ongoing responsibilities
for what we built yesterday."
—PAUL DOURISH AND SCOTT D. MAINWARING
Have you ever peeked inside a friend's trash can? I have. And even though I've never found anything worth reporting—not to the KGB anyway—I've always felt guilty about my insatiable curiosity. Trash, like one's sex life or temporary eating disorder, is a private affair par excellence; the less said about it, the better. While Mark Zuckerberg insists that all activities get better when performed socially, it seems that throwing away the garbage would forever remain an exception—one unassailable bastion of individuality to resist Zuckerberg's tyranny of the social.
Well, this exception is no more: BinCam, a new project from researchers in Britain and Germany, seeks to modernize how we deal with trash by making our bins smarter and—you guessed it—more social. Here is how it works: The bin's inside lid is equipped with a tiny smartphone that snaps a photo every time someone closes it—all of this, of course, in order to document what exactly you have just thrown away. A team of badly paid humans, recruited through Amazon's Mechanical Turk system, then evaluates each photo. What is the total number of items in the picture? How many of them are recyclable? How many are food items? After this data is attached to the photo, it's uploaded to the bin owner's Facebook account, where it can also be shared with other users. Once such smart bins are installed in multiple households, BinCam creators hope, Facebook can be used to turn recycling into an exciting, game-like competition. A weekly score is calculated for each bin, and as the amounts of food waste and recyclable materials in the bins decrease, households earn gold bars and leaves. Whoever wins the most bars and leaves wins. Mission accomplished; planet saved!
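The scoring mechanic described above is easy to sketch. The Python fragment below is a hypothetical reconstruction, not BinCam's actual code: the text does not give the exact formula for leaves and gold bars, so the rules here (one leaf per recyclable item, one gold bar per photo with no food waste) are illustrative assumptions, as are the names `BinPhoto` and `weekly_score`.

```python
from dataclasses import dataclass

@dataclass
class BinPhoto:
    """One Mechanical-Turk-annotated photo of the bin's contents."""
    total_items: int
    recyclable: int
    food_waste: int

def weekly_score(photos):
    """Compute a household's weekly score from its annotated photos.

    Assumed rules: each recyclable item earns a leaf; each photo
    containing no food waste earns a gold bar.
    """
    leaves = sum(p.recyclable for p in photos)
    gold_bars = sum(1 for p in photos if p.food_waste == 0)
    return {"leaves": leaves, "gold_bars": gold_bars}

week = [BinPhoto(5, 3, 0), BinPhoto(4, 1, 2)]
print(weekly_score(week))  # → {'leaves': 4, 'gold_bars': 1}
```

Whatever the real formula, the point stands: once the photos are annotated and tied to a Facebook identity, ranking households against one another is a few lines of arithmetic.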
Nowhere in the academic paper that accompanies the BinCam presentation do the researchers raise any doubts about the ethics of their undoubtedly well-meaning project. Should we get one set of citizens to do the right thing by getting another set of citizens to spy on them? Should we introduce game incentives into a process that has previously worked through appeals to one's duties and obligations? Could the "goodness" of one's environmental behavior be accurately quantified with tree leaves and gold bars? Should it be quantified in isolation from other everyday activities? Is it okay not to recycle if one doesn't drive? Will greater public surveillance of one's trash bins lead to an increase in eco-vigilantism? Will participants stop doing the right thing if their Facebook friends are no longer watching?
Questions, questions. The trash bin might seem like the most mundane of artifacts, and yet it's infused with philosophical puzzles and dilemmas. It's embedded in a world of complex human practices, where even tiny adjustments to seemingly inconsequential acts might lead to profound changes in our behavior. It very well may be that, by optimizing our behavior locally (i.e., getting people to recycle with the help of games and increased peer surveillance), we'll end up with suboptimal behavior globally, that is, once the right incentives are missing in one simple environment, we might no longer want to perform our civic duties elsewhere. One local problem might be solved—but only by triggering several global problems that we can't recognize at the moment.
A project like BinCam would have been all but impossible fifteen years ago. First, trash bins had no sensors that could take photos and upload them to sites like Facebook; now, tiny smartphones can do all of this on the cheap. Amazon didn't have an army of bored freelancers who could do virtually any job as long as they received their few pennies per hour. (And even those human freelancers might become unnecessary once automated image-recognition software gets better.) Most importantly, there was no way for all our friends to see the contents of our trash bins; fifteen years ago, even our personal websites wouldn't get the same level of attention from our acquaintances—our entire "social graph," as the geeks would put it—that our trash bins might receive from our Facebook friends today. Now that we are all using the same platform—Facebook—it becomes possible to steer our behavior with the help of social games and competitions; we no longer have to save the environment at our own pace using our own unique tools. There is power in standardization!
These two innovations—that more and more of our life is now mediated through smart sensor-powered technologies and that our friends and acquaintances can now follow us anywhere, making it possible to create new types of incentives—will profoundly change the work of social engineers, policymakers, and many other do-gooders. All will be tempted to exploit the power of these new techniques, either individually or in combination, to solve a particular problem, be it obesity, climate change, or congestion. Today we already have smart mirrors that, thanks to complex sensors, can track and display our pulse rates based on slight variations in the brightness of our faces; soon, we'll have mirrors that, thanks to their ability to tap into our "social graph," will nudge us to lose weight because we look pudgier than most of our Facebook friends.
Or consider a prototype teapot built by British designer-cum-activist Chris Adams. The teapot comes with a small orb that can either glow green (making tea is okay) or red (perhaps you should wait). What determines the coloring? Well, the orb, with the help of some easily available open-source hardware and software, is connected to a site called Can I Turn It On? (http://www.caniturniton.com), which, every minute or so, queries Britain's national grid for aggregate power-usage statistics. If the frequency figure returned by the site is higher than the baseline of 50 hertz, the orb glows green; if lower, red. The goal here is to provide additional information for responsible teapot use. But it's easy to imagine how such logic can be extended much, much further, BinCam style. Why, for example, not reward people with virtual, Facebook-compatible points for not using the teapot in the times of high electricity usage? Or why not punish those who disregard the teapot's warnings about high usage by publicizing their irresponsibility among their Facebook friends? Social engineers have never had so many options at their disposal.
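The orb's decision rule, as described, is a one-line comparison against the 50-hertz baseline. Here is a minimal Python sketch; the minute-by-minute polling of caniturniton.com is stubbed out as a comment, and `orb_color` is a hypothetical name, not code from Adams's prototype.

```python
BASELINE_HZ = 50.0  # nominal frequency of Britain's national grid

def orb_color(frequency_hz, baseline=BASELINE_HZ):
    """Return the orb's color for a grid-frequency reading.

    Above the baseline, generation exceeds demand, so the orb
    glows green (making tea is okay); otherwise it glows red
    (perhaps you should wait).
    """
    return "green" if frequency_hz > baseline else "red"

# The real device would poll the aggregate power-usage statistics
# roughly once a minute and recolor the orb with each reading.
print(orb_color(50.02))  # → green
print(orb_color(49.95))  # → red
```

The simplicity is the point: a single threshold turns an open-ended civic question (should I use electricity right now?) into a binary signal.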
Sensors alone, without any connection to social networks or data repositories, can do quite a lot these days. The elderly, for example, might appreciate smart carpets and smart bells that can detect when someone has fallen over and inform others. Even trash bins can be smart in a very different way. Thus, a start-up with the charming name of BigBelly Solar hopes to revolutionize trash collecting by making solar-powered bins that, thanks to built-in sensors, can inform waste managers of their current capacity and predict when they would need to be emptied. This, in turn, can optimize trash-collection routes and save fuel. The city of Philadelphia has been experimenting with such bins since 2009; as a result, it cut its center-city garbage-collecting sorties from 17 to 2.5 times a week and reduced the number of staff from thirty-three to just seventeen, bringing in $900,000 in savings in just one year.
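The prediction that makes such routing savings possible can be as simple as linearly extrapolating a bin's fill rate from its sensor readings. The sketch below is a back-of-the-envelope illustration under that assumption, not BigBelly Solar's actual algorithm; the function name and reading format are invented for the example.

```python
def predict_full_day(readings, capacity_pct=100.0):
    """Estimate the day a bin reaches capacity.

    readings: chronological list of (day, fill-percentage) pairs
    from the bin's sensor. Extrapolates linearly from the first
    and last readings; returns None if the bin is not filling up.
    """
    (d0, f0), (d1, f1) = readings[0], readings[-1]
    rate = (f1 - f0) / (d1 - d0)  # percentage points per day
    if rate <= 0:
        return None
    return d1 + (capacity_pct - f1) / rate

# A bin at 20% on day 0 and 60% on day 2 fills 20 points per day,
# so it should be emptied around day 4.
print(predict_full_day([(0, 20.0), (2, 60.0)]))  # → 4.0
```

Feed such estimates for every bin into a routing planner, and trucks visit only the bins that actually need emptying.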
Likewise, city officials in Boston have been testing Street Bump, an elaborate app that relies on accelerometers, the now ubiquitous motion detectors found in many smartphones, to map out potholes on Boston's roads. The driver only has to turn the app on and start driving; the smartphone will do the rest and communicate with the central server as necessary. Thanks to a series of algorithms, the app knows how to recognize and disregard manhole covers and speed bumps, while diligently recording the potholes. Once at least three drivers have reported bumps in the same spot, the bump is recognized as a pothole. Likewise, Google relies on GPS-enabled Android phones to generate live information about traffic conditions: once you start using its map and disclose your location, Google knows where you are and how fast you are moving. Thus, it can make a good guess as to how bad the road situation is, feeding this information back into Google Maps for everyone to see. These days, it seems, just carrying your phone around might be an act of good citizenship.
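Street Bump's three-driver confirmation rule amounts to grouping bump reports spatially and counting distinct drivers per location. The following simplified Python sketch assumes a fixed grid-cell size and a `(driver_id, lat, lon)` report format, neither of which the text specifies; the real app's algorithmic filtering of manhole covers and speed bumps is, of course, far more elaborate.

```python
from collections import defaultdict

def confirm_potholes(reports, cell_deg=0.0001):
    """Return grid cells reported as bumpy by at least three
    distinct drivers, the confirmation threshold described
    in the text.

    reports: iterable of (driver_id, latitude, longitude) tuples.
    cell_deg: assumed cell size in degrees, roughly 10 m of latitude.
    """
    drivers_per_cell = defaultdict(set)
    for driver_id, lat, lon in reports:
        cell = (round(lat / cell_deg), round(lon / cell_deg))
        drivers_per_cell[cell].add(driver_id)  # sets ignore repeat reports
    return [cell for cell, drivers in drivers_per_cell.items()
            if len(drivers) >= 3]
```

Requiring three independent drivers is a cheap defense against sensor noise and one-off jolts, the same crowd-corroboration logic Google applies to its traffic estimates.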
The Will to Improve (Just About Everything!)
That smart technology and all of our social connections (not to mention useful statistics like the real-time aggregate consumption of electricity) can now be "inserted" into our every mundane act, from throwing away our trash to making tea, might seem worth celebrating, not scrutinizing. Likewise, that smartphones and social-networking sites allow us to experiment with interventions impossible just a decade ago seems like a genuinely positive development. Not surprisingly, Silicon Valley is already awash with plans for improving just about everything under the sun: politics, citizens, publishing, cooking.
Alas, all too often, this never-ending quest to ameliorate—or what the Canadian anthropologist Tania Murray Li, writing in a very different context, has called "the will to improve"—is shortsighted and only perfunctorily interested in the activity for which improvement is sought. Recasting all complex social situations either as neatly defined problems with definite, computable solutions or as transparent and self-evident processes that can be easily optimized—if only the right algorithms are in place!—this quest is likely to have unexpected consequences that could eventually cause more damage than the problems it seeks to address.
I call the ideology that legitimizes and sanctions such aspirations "solutionism." I borrow this unabashedly pejorative term from the world of architecture and urban planning, where it has come to refer to an unhealthy preoccupation with sexy, monumental, and narrow-minded solutions—the kind of stuff that wows audiences at TED Conferences—to problems that are extremely complex, fluid, and contentious. These are the kinds of problems that, on careful examination, do not have to be defined in the singular and all-encompassing ways that "solutionists" have defined them; what's contentious, then, is not their proposed solution but their very definition of the problem itself. Design theorist Michael Dobbins has it right: solutionism presumes rather than investigates the problems that it is trying to solve, reaching "for the answer before the questions have been fully asked." How problems are composed matters every bit as much as how problems are resolved.
Solutionism, thus, is not just a fancy way of saying that for someone with a hammer, everything looks like a nail; it's not just another riff on the inapplicability of "technological fixes" to "wicked problems" (a subject I address at length in The Net Delusion
On Sale: March 5, 2013
Page Count: 432 pages