Tech Turncoats

At the end of September, Facebook announced that hackers had compromised the accounts of more than 50 million users of its website. Government officials, privacy experts, and company employees are still sorting through the implications. The breach is another in a series of high-profile events that have made some of the Internet's most important pioneers question its place in today's society. The current issue of Briefings highlights the rise of tech's "turncoats."


A quarter-century ago, when the Internet seemed like a fun, free “bridge to the 21st century” (as politicians called it), Ethan Zuckerman was working at a web start-up with a problem: how to scale the company up from “cool idea” to “sustainable business.” After five years of trying one thing and another, he and his fellow pioneers finally hit on a solution investors liked: targeted advertising. The site, Tripod.com—a web-hosting platform and early social network—would deliver ads based on the information users were sharing on their sites.

There was a hitch, though. Advertisers worried that their brands might appear to endorse whatever screwball content appeared on the same page. After a carmaker complained that its ad had appeared next to sexually explicit content, Zuckerman wrote some code that let an ad open in a new, separate window.

That’s right: He’d invented the pop-up ad—aka, as he himself describes it, “one of the most hated tools in the advertiser’s toolkit.” He never meant to irritate, confuse and hobble millions of web surfers. But no one could have foreseen how decisions made in those scrappy early days of the modern web would lead, years later, to outcomes nobody wanted. “I’m sorry,” Zuckerman later wrote. “Our intentions were good.”

At the time he published those words in 2014, Zuckerman stood out like a flip phone in an Apple store. Back then the industry’s vibe was “We cracked the code, you’re welcome, world” (as writer Alec Berg, creator of the HBO satire “Silicon Valley,” has described it). Astute outsiders criticized the assumptions and structures of digital life (Zuckerman, who is director of the Center for Civic Media at MIT, had become one of them). But Facebook, Amazon, Google, Apple, Microsoft, Twitter, Uber, Instagram, Snapchat, LinkedIn and many other digital firms simply and constantly cooed that they weren’t just making money; they were making a better world—freer, more equal, more open, more understanding, more connected. The public seemed, mostly, to agree.


No more.

Over the past year, doubts about this dogma turned from a trickle into a global tidal wave. Suddenly fear, anxiety and anger—about the power of tech over society and its effects on our politics, livelihoods and even our souls—are everywhere. When the news site Axios compared Americans’ views of tech firms in polls taken in October 2017 and March 2018, it found a vast drop in most tech firms’ net favorability ratings (that is, the percent of people favorable to a brand minus the percent who are unfavorable). Facebook, then in the midst of the Cambridge Analytica scandal, took the most severe hit, dropping 28 points.
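Net favorability is simple arithmetic: the percent of respondents favorable to a brand minus the percent unfavorable. A minimal Python sketch, using hypothetical poll numbers (the actual Axios percentages are not given in the article; only the 28-point drop is):

```python
def net_favorability(favorable_pct, unfavorable_pct):
    """Net favorability: percent favorable minus percent unfavorable."""
    return favorable_pct - unfavorable_pct

# Hypothetical illustration only -- these are not the real Axios figures.
# A brand at 60% favorable / 25% unfavorable in October 2017,
# falling to 48% favorable / 41% unfavorable by March 2018:
oct_2017 = net_favorability(60, 25)   # +35
mar_2018 = net_favorability(48, 41)   # +7
drop = oct_2017 - mar_2018            # a 28-point drop, the size of Facebook's decline
print(oct_2017, mar_2018, drop)
```

Note that a brand's net rating can fall even when its favorable share barely moves, if undecided respondents shift to unfavorable.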

That wave of dissatisfaction swiftly overtook the leaders who’d invented, perfected, marketed and financed the digital infrastructure of modern life. But for a certain group of them, something curious happened: Instead of fighting the revolution against the web, they joined it. Indeed, it is now practically standard operating procedure for key people in tech to say, in effect, I’m sorry. Our intentions were good.

Consider Tristan Harris. He was far from the first person to suggest, as he has put it, that “the job of these companies is to hook people, and they do that by hijacking our psychological vulnerabilities.” But Harris once worked at Google, helping to create a number of web services himself while observing others being made. Or take Tony Fadell. When he recently said smartphones and their apps are alarmingly addictive, he wasn’t the first. But Fadell helped design the iPhone. Skeptical that some tech services knowingly exploited people’s psychological vulnerabilities? Consider Sean Parker, Facebook’s former president. He and other tech creators, he told a forum sponsored by Axios, “understood this consciously. And we did it anyway.”

And, of course, not all the critics calling for reform are former members of tech’s elite. Credit everyone from Facebook CEO Mark Zuckerberg, who went on record saying social networks might need to be regulated, to Apple’s Tim Cook, who endorsed other regulations to protect privacy, with some serious soul-searching. Society, says Zuckerman of pop-up ad fame, is “having a public health moment, and we’re asking questions about individual health, we’re asking that question about civic health.” It may be a little disorienting to see the people who created the tech agreeing that it has sometimes done harm, but it makes sense that they would want to join this conversation.

One aspect of tech’s revolt against its own culture is deeply personal, even private. It involves a mounting concern about the effects of digital hardware and software on our minds and our close relationships. Last autumn, for example, Harris and other people with long experience at big tech firms joined together to found the Center for Humane Technology. The nonprofit was established on the premise that smartphones and other digital tech have been designed to undermine people’s ability to achieve their own goals. (The Center for Humane Technology did not respond to requests for an interview.)

For individuals, the Center promotes strategies to loosen tech’s hold on the mind; for companies, it promulgates ethical design principles for building less-distracting products. Together with Common Sense Media, a nonprofit organization that guides parents on media choices, the Center has also launched a “Truth About Tech” campaign with $7 million in funding and $50 million in free media from Comcast, DirecTV and other firms. Aimed at some 55,000 American public schools, the campaign’s goal is to educate children and teens about the hazards of smartphones, online networks and other tech.

Worries about tech’s effect on an individual mind, though, aren’t all that troubles the public. Last winter and spring brought reminder after reminder that many forms of digital tech can be exploited by bad actors—hackers stealing personal information, trolls and bots deliberately spreading distrust and disinformation, ordinary people choosing to egg each other on as they expressed their ugliest sides. Social networks have responded with greater vigilance against attempts at manipulation. But how can they defend users against their own worst instincts?

Even as the industry tries to change, there is widespread recognition that governments will be involved in the changes to come. In the giddy early days of the commercialization of the Internet, many governments tried to stay out of tech’s way, wary of killing the goose that promised to lay so many golden eggs. This consensus is now about as popular and relevant in politics as the divine right of kings. Regulations are already tightening around the world, and more rules are coming. This is especially true now that the titans of tech have accepted their inevitability.

One benchmark for rules on data use has already come into force. It is the European Union’s General Data Protection Regulation, which went into effect last May (prompting a flood of “how we use your data” emails into all our inboxes). It gives EU citizens much more control over how tech companies collect and use, as the regulation states, “any information relating to an identified or identifiable natural person.” Organizations now have to spell out why data is being collected and whether it will be used to create profiles of people’s actions and habits. Moreover, consumers have an explicit right to see the data a company holds about them, as well as the right to correct any mistakes they find.

Many observers expect, though, that future regulators will look beyond concerns about protecting users’ data. In the longer term, for example, regulators could decide to require that users of one service be able to link easily to users of other services. This would weaken the lock-in effect that benefits today’s giants simply because they already have so many users.

“I’ve got something like 50,000 followers on Twitter and I’ve probably got 1,500 friends on Facebook. I can’t just leave those networks,” Zuckerman explains. “Yet I need to have a way that I can start interacting with new networks.” So he advocates giving users two new rights. “One is the right to aggregate and the other is the right to federate.”

The right to aggregate means a user can collect different services into a single gateway. The right to federate means that the user can link herself easily to whatever services she wishes, old or new, familiar or experimental. Given the amount of our lives that current tech giants occupy, these rights are essential if new alternatives are to have any chance of mass adoption, he says.
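Zuckerman’s two proposed rights lend themselves to a toy illustration. The Python sketch below (all names hypothetical; this models no real protocol) treats each network as a service exposing a feed, with a single user-controlled gateway that aggregates them and can federate with any new service that speaks the same interface:

```python
# Hypothetical sketch of the two rights Zuckerman describes.
# "Aggregate": one user-controlled gateway collects feeds from many services.
# "Federate": the user can attach any service, old or new, that exposes
# the same minimal interface -- no permission from incumbents required.

class Service:
    def __init__(self, name, posts):
        self.name = name
        self.posts = posts

    def feed(self):
        # Each service exposes its posts in a common format.
        return [f"[{self.name}] {p}" for p in self.posts]

class Gateway:
    """The user's single point of access (the 'right to aggregate')."""
    def __init__(self):
        self.services = []

    def federate(self, service):
        # The 'right to federate': any conforming service can be linked in.
        self.services.append(service)

    def unified_feed(self):
        return [item for s in self.services for item in s.feed()]

gateway = Gateway()
gateway.federate(Service("BigNetwork", ["old friends' updates"]))
gateway.federate(Service("NewUpstart", ["an experimental community"]))
print(gateway.unified_feed())
```

The design point is that the gateway, not any single network, owns the user’s vantage point: an upstart service becomes reachable the moment it implements the shared interface, which is exactly the lock-in-breaking effect the proposed rights aim for.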


Many believe that much of today’s unease is rooted in the fundamentals of the world’s information infrastructure—choices made long ago by people whose intentions were good, which haven’t worked out as they’d hoped. Such critics aren’t calling for improvements to the current order of things—they want a new order, with new assumptions about information and the people who use it.

They’re looking at one keystone of the digital world as it is now constructed: the incentive so many companies have to collect, store and analyze data about people. That is the business model that Zuckerman and many other Internet pioneers discovered a generation ago: targeted advertising.

Advertisers ultimately support Google, Facebook, Twitter and myriad other services (making them “free” to users) because those advertisers can use the information the services collect to target their ads to the people most likely to respond. Zuckerman calls it the “original sin” of the modern Internet. It pushes companies to gather data on users and sell it to third parties (or at least to tell investors that they will) in order to turn a profit. This commits many tech companies to harvesting, storing and leveraging personal data in ways that users often don’t understand or approve of.

Another world is possible. Millions of people could decide, for example, to pay for search capacities or social networks so that the incentives to surveil them go away. If Instagram is valuable to you, wouldn’t you pay, say, $5 a month for it, especially if that payment meant you could be sure that your privacy was secure?

As you might expect, companies’ openness to changing their ways does not extend as far as the undoing of their business model. And, to be fair, they can defend the model as the best and fairest way to make their services available to all. Millions of people around the world, after all, can’t afford $5 a month. “I don’t think the ad model is going to go away,” Facebook’s Zuckerberg told The New York Times last March, “because I think fundamentally, it’s important to have a service like this that everyone in the world can use, and the only way to do that is to have it be very cheap or free.”

In any event, as discussions over the Internet evolve among options from incremental tweak to utopian revolution, one common thread will be the need for more information from tech companies. Despite the headline revelations about big tech, there is much about its operations that remains secret. “We don’t know enough yet,” Zuckerman says. “And in part we don’t know enough because we don’t really have access to all this data that probably knows us better than we do.” Whatever the long-term consequences of this moment, then, its greatest short-term consequence could be opening up the hidden practices and troves of data that tech companies have so jealously guarded. As society discusses the way forward, Zuckerman says, “it’s a conversation that probably shouldn’t be locked within a corporation that ends up being pretty secretive about what it does.”

Authors

  • David Berreby

    Contributor, Korn Ferry Institute