Is it Spy vs. Spy or Me vs. I?

For as long as there have been human beings, we’ve sought to identify our friends and foes with ever-increasing clarity. People invented flags and uniforms, team colors and corporate logos. They’ve told their children, “We don’t do that.” They’ve explained who the “good guys” and the “bad guys” are in the world, the nation and the neighborhood. After all, when conflict comes between our good guys and someone else’s, everyone must choose, and show their choice. Are you flying that Russian tricolor, or wrapping the gold and blue flag of Ukraine over your shoulders? Do you call those disputed islands in the East China Sea Senkaku or Diaoyu? It’s human nature, and the root of war: You choose your side, you show your side, you know who stands with you.

Whose side is the Central Intelligence Agency on, when it snoops on the hard drives of al-Qaeda operatives and (as U.S. Sen. Dianne Feinstein has alleged) on Senate staffers? Whose side is a young hacker named Wang on, when he writes viruses (for Unit 61398 of the People’s Liberation Army’s General Staff 3rd Department, 2nd Bureau) and posts more than 600 entries on a blog about his lousy job? (As the Los Angeles Times’s Barbara Demick has reported, the themes are poor pay, long hours and lots of ramen noodles.) Whose side is Google on, when it pledges to protect the privacy of its first-rate e-mail product—but then harvests users’ information from it to make money?

Yes, every conflict includes people and organizations who change sides as time passes (as Edward Snowden, who’d once opined that leakers should be shot, had done by 2013, when he began the systematic leaks that have revealed so much about U.S. cyberespionage around the world). But in cyberconflict, and cyberlife, many people and organizations seem to be on opposing sides at the same time. This is part of what makes it maddeningly difficult to protect oneself from digital hazards. After all, a friend (the government that relentlessly mines data for signs of terrorist plots) may also be a foe (the same government that includes your data in its relentless mining). Familiar faces feel untrustworthy, somehow—and that can even include the one in the mirror.

What is it about the digital world that fosters this ambiguity? The newness of the technology is certainly a factor. Eons of evolution have primed us to be afraid of big men with weapons; centuries of human history have taught us the indicators of aggression—angry declarations, troop movements and the like. Accustomed to people locking in their loyalties with symbols and rituals, we are not yet used to the idea that a huge amount of damage can be done by someone who just changed his mind.

Last year’s infamous breaches of retailer security (such as the holiday-season attack that stole information on more than 110 million accounts from the American discount store chain Target) depended on malware created by a solitary Russian named Rinat Shabayev, who was all of 23 years old. And one reason for the success of the Target attack, as Bloomberg Businessweek reported last March, was that the company apparently ignored a warning from its own security system. If we do not see our enemies in cyberspace, part of the reason is that we aren’t used to looking for them.

Another consequence of the newness of digital tools is that—contrary to what you might hear from cyberevangelists—they aren’t yet associated with any particular moral or political commitments in the non-digital world. For all the rhetoric about the Internet as a driver of freedom and empowerment, the fact remains that its resources are useful to all sides in the world’s political struggles. Yes, activists have used Facebook and Twitter to fight oppression. But it’s also true, as the tech critic Evgeny Morozov points out, that dictatorships have used them effectively. For example, he notes, during massive street protests in Iran in 2009, government agents used Facebook to check on the political affiliations of people entering the country.

Then there’s the Milan-based firm called Hacking Team. It sells a powerful spyware tool called Remote Control System (RCS)—which can capture e-mails and Skype activity, as well as other data—to governments. That’s an asset for democratic governments protecting their citizens against cybercrime and terrorism. But last winter researchers at the University of Toronto’s Citizen Lab said they had found traces of RCS on computers in Azerbaijan, Colombia, Egypt, Ethiopia, Hungary, Italy, Kazakhstan, Malaysia, Mexico, Morocco, Nigeria, Oman, Panama, Poland, Saudi Arabia, South Korea, Sudan, Thailand, Turkey, the United Arab Emirates and Uzbekistan. “Nine of these countries receive the lowest ranking, ‘authoritarian,’ in The Economist’s 2012 Democracy Index,” the Citizen Lab post noted. (Hacking Team has denied that it sells its tools to repressive regimes; Citizen Lab stood by its claims.)

Of course, if a tool is morally neutral, we can’t blame it when it’s used for bad ends any more than we can praise it when it’s used for good. That, ultimately, is the most important fact about the ambiguity we sense in the cyberworld. It comes from us, not from the technology. It is a consequence of the fact that the cybertools we use both benefit and trouble us, often in the same instant.

We who are Googled by prospective mates, prospective employers, enemies from summer camp, and on and on, also Google. Who wouldn’t want to know if a potential hire had been arrested or made bizarre statements on Twitter? We who are monitored by those who seek to predict our behavior also monitor others (with apps, with nanny cams). For example, Verizon Communications now offers its customers a “new tool to help parents set boundaries for children,” called FamilyBase. For $5 a month, it gives parents a complete report on all activity on their children’s phones—calls, texts, apps downloaded, time spent talking and the times of conversations. Few are the parents who high-mindedly say they don’t want, and shouldn’t have, such information.

We who resent being spied upon by the state also endorse the state spying on other people. (The rule seems to be: “I, in my glorious individuality, am unpredictable but righteous, but please do use Big Data analytics on those other people to predict who will try to blow up a plane next year.”)

As David Simon, the creator of the television program The Wire, put it:

“I know it’s big and scary that the government wants a database of all phone calls. And it’s scary that they’re paying attention to the Internet. And it’s scary that your cell phones have GPS installed. And it’s scary, too, that the little box that lets you go through the short toll lane on I-95 lets someone, somewhere know that you are on the move ... But be honest, most of us are grudging participants in this dynamic. We want the cell phones. We like the Internet. We don’t want to sit in the slow lane at the Harbor Tunnel toll plaza.”

We need to recognize that this ambivalence is part of what makes it hard to defend ourselves against digital dangers. Our policies are as divided as we are, as Bruce Schneier, the chief technology officer of the computer security firm Co3 Systems, has noted. The U.S. military, he wrote recently, distinguishes its efforts at CNE (computer network exploitation, which is the business of bypassing security features on a network so as to spy on it) from CNA (computer network attack, which is sabotage). But the distinction is meaningless, Schneier writes. The only way to do CNE is to use tools that could also be used for CNA. If a piece of malware can eavesdrop without being detected, there is no way to be certain it won’t switch to doing something more harmful once it is installed. “As long as cyberespionage equals cyberattack, we would be much safer if we focused the NSA’s efforts on securing the Internet from these attacks,” Schneier wrote this year. “True, we wouldn’t get the same level of access to information flows around the world. But we would be protecting the world’s information flows—including our own—from both eavesdropping and more damaging attacks.” To do that, though, we would have to decide that we were on the side of the targets of cyberweapons, not the side of those who wield them. And that’s a commitment no government seems prepared to make.

Late last February the journalist Quinn Norton attended a workshop on identity at the Office of the Director of National Intelligence (ODNI). That was, as she wrote later, an unexpected decision. As a writer on hackers and hacker culture, with plenty of contacts in that world, she is no ally of the intelligence establishment. A close friend and former lover of Aaron Swartz, the Internet activist who committed suicide last year in the face of an aggressive federal prosecution for data theft, she stands against everything that ODNI stands for. Why did she go? Several times during the meeting, she wrote, she’d heard others say that there are bad people and good people in the world. “I realized when I heard this,” she wrote, “that I went to the ODNI because I don’t believe in bad or good people.”
