Faked Out

A new generation of doctored news stories, photos, and videos is emerging. How leaders are dealing with the influx of easy-to-make and surprisingly effective “deepfakes.”


Half of Americans say that made-up news and information is a big problem, and a new report warns the problem may become much worse as the 2020 presidential election approaches. The New York University study, released Sept. 9, highlights how for-profit firms, foreign adversaries, domestic troublemakers, and even unwitting Americans likely will spread misinformation across social networks, portraying candidates saying and doing things they never said or did.

The misinformation explosion is not just a problem for politicians, however, as a new story in Briefings magazine points out. There are steps leaders at all organizations can take to identify and mitigate the impact of so-called "deepfakes."

See the new issue of Briefings magazine, available at newsstands and online.

At noon on May 22, Nancy Pelosi, the Speaker of the United States House of Representatives, met with reporters to talk about a meeting she had had earlier in the day with President Donald Trump. Shortly after, a brief clip of Pelosi’s remarks appeared on Facebook. She sounds uncertain, stumbling through words and saying “um” and “ah” repeatedly. Minutes later, a video of another Pelosi appearance showed up on a different Facebook page. This time she’s slurring her words, hesitating several seconds between phrases, and generally sounding like someone who’s had a few too many beers. President Trump retweeted the first clip to his 60 million followers along with the phrase “PELOSI STAMMERS THROUGH NEWS CONFERENCE.” The second video was watched more than 2 million times on Facebook.

So which video was real? Neither.

The first was created by taking scattered, small natural ums and ahs from Pelosi’s 21-minute speech and jamming them together into a single 30-second sequence. The second clip slowed down a recording to 75 percent of its normal speed while digitally altering Pelosi’s vocal pitch to sound as it would at 100 percent. Presto: slurred words and weird pauses that never occurred. The creator of both videos, according to news reports, was neither a Russian master spy nor a genius hacker but a 34-year-old day laborer in the Bronx who runs a few politically oriented Facebook pages. The man denied it, but few believed him, largely because a Facebook official anonymously confirmed his identity.
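Neither clip required sophisticated tools. As a rough illustration of how simple the slowed-down video is to produce, here is a minimal Python sketch using the open-source librosa library; the file names are hypothetical, and this shows the general technique rather than whatever tool the creator actually used.

```python
# Illustrative sketch: slow a speech recording to 75 percent of its normal
# tempo while keeping the speaker's pitch unchanged, so the voice sounds
# like the original person, only sluggish. File names are hypothetical.
import librosa
import soundfile as sf

# Load the original press-conference audio at its native sample rate.
audio, sample_rate = librosa.load("press_conference.wav", sr=None)

# A phase-vocoder time stretch changes tempo without shifting pitch;
# rate=0.75 stretches the clip to roughly 133 percent of its length.
slowed = librosa.effects.time_stretch(audio, rate=0.75)

# Save the doctored clip.
sf.write("press_conference_slow.wav", slowed, sample_rate)
```

The separate pitch correction described above is only needed when a clip is slowed by simply reducing playback speed, which also lowers the pitch; a phase-vocoder stretch like this one reaches the same slurred-sounding result in a single step.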

 

This is life in 2019. Though news fakes have been around for years, a new generation of videos, photos, and stories known as “deepfakes” is increasingly becoming part of our daily media diet. And pretty much anyone can create them.

Thanks to fast internet connections, plenty of data, and, most importantly, artificial intelligence, forgers can easily make and distribute partially (or totally) fabricated audio, photos, maps, texts, and other information. These fakes might show a politician speaking with slurred words, an actor in an embarrassing position, or a neighborhood sprouting a building that doesn’t exist. A lot of this is harmless; the internet is filled with millions of poorly altered videos of cats playing pianos, people hitting 100-yard field goals, and other amusingly impossible feats. But experts from multiple fields worry that the latest fakes can alter perceptions of real people, politics, products, and policies. Private citizens and companies are anxious, and even Congress is worried; in June it held first-of-its-kind hearings on fakes.

“The circulation of deepfakes has potentially explosive implications for individuals and society,” stated University of Maryland law professor Danielle Citron in her written testimony. “Under assault will be reputations, political discourse, elections, journalism, national security, and truth as the foundation of democracy.” Plus, with each passing year, these fakes are becoming cheaper to make and distribute while becoming even tougher to root out. We may be only beginning a global march toward Peak Fake.

 

Though it got more attention during the last presidential campaign, false information isn’t new to the internet. Since the days of dial-up, anyone has been able to find articles saying the Holocaust didn’t happen or that the Earth is flat. That content can attract a wide audience; fake news has been blamed for inciting violence in India, Myanmar, and other nations. But in those cases, the web was just an amplifier.

Digital fakery is different. These fakes turn people into digital puppets, doing or saying things they never said or did, or manipulate places to include events that never happened (or erase events that did). It’s a new form of content for which many of us have few defenses.

“People process faces and emotions unconsciously,” says Jay Van Bavel, a professor of psychology and neural science at New York University who has researched the reasons people believe and spread fake news. Studies have shown that the amygdala—a brain region that responds to intense experiences—will become more active after a mere 30 milliseconds of seeing a photo of a fearful face. Such responses take place so fast we may not be aware that they are happening. And what we aren’t aware of is much harder to control or rebut.

When we evaluate something we read or hear, we have a moment to consider the possibility that it’s untrue or inaccurate. Some of us, Van Bavel notes, are pretty good at sussing out what’s real. But a well-doctored video or map doesn’t give people the time or the clues that they need. “The types of cognitive tools and skills that help us see through a fake story and realize it is far-fetched to think that Hillary Clinton wants to impose sharia law in Florida—I don’t know if they will help us see through these videos.”


Fakers’ motives are varied. Some falsify for the sheer pleasure of making things up. For example, the term “deepfake” originally referred to clips of famous people’s faces digitally attached to actors in porn videos. Makers of such clips, though they’ve injured both the famous performer and the decapitated porn actor, aren’t trying to fool anyone. There are plenty of examples of less malicious, more playful versions of the “Let’s see what I can make up” impulse all over the web. Some Russia-based scientists recently animated the Mona Lisa, making her look around and speak in an uncanny video. Other fakers are out for thrills or maybe a bit of cash. The Pelosi videos were well-liked by many who don’t like her politics and, according to news reports, brought their creator some ad revenue.

More worrisome are fakes that are in fact part of someone’s considered strategy for winning power and profits (or, conversely, making rivals look bad). For example, during Brazil’s 2018 election campaign, many citizens received false information via WhatsApp group messages. WhatsApp is a key method of communication in Brazil, estimated to be used by more than half of the country’s 210 million citizens. Some of the group messages were old-fashioned lies, but others were novelties made possible by digital technology. For example, according to Luca Belli, a professor at the Fundação Getúlio Vargas School of Law, doctored photos showed officials of one political party celebrating with Fidel Castro during the Cuban Revolution, and doctored audio misrepresented the views of that party’s presidential candidate, Fernando Haddad. According to the Brazilian newspaper Folha de São Paulo, the WhatsApp messaging campaign was paid for by a pro-business lobby backing Haddad’s rival, Jair Bolsonaro. (Bolsonaro won the election.)

Technology may make creating these fakes easier, but the reasons they succeed are rooted in the ancient structure of the mind. Political fakery appeals to people’s sense of threat (the other side is really bad) and what psychologists call “confirmation bias”—the well-documented fact that we easily take in (and share) information that aligns with our beliefs and resist information that seems to contradict them. Many internet organizations take advantage of these ancient psychological mechanisms. The incentives in our hyperconnected world are to share not what is true but what is engaging, and the fakes are designed to catch attention.

Plus, there’s divided opinion on what exactly to do with fake videos. Take them down? Prosecute creators? Do nothing? YouTube opted to take the Pelosi videos down shortly after they were posted, but Facebook didn’t. Instead, the social network attached warnings that the videos were disputed and links to fact-checks.

A few weeks after the Pelosi videos debuted, Facebook also let another well-circulated fake stay on its platform. This one had Facebook’s own CEO, Mark Zuckerberg, saying all his ideas come from SPECTRE, the fictional outfit whose nefarious plans James Bond is always trying to foil. “We don’t have a policy that stipulates that the information you post on Facebook must be true,” Facebook said in a statement about the Pelosi videos, reminding us clearly that engaging content is different from true content.

So what can we do to stop, or contain, the spread of digital fakery? One possible solution, already being pursued, is to apply the tools of artificial intelligence, which help create the fakes, to the job of exposing them. For instance, since most videos and photos show people with their eyes open, it is hard for fake-video creators to collect images of people with their eyes closed. People in fake videos simply don’t blink normally, a team of researchers from Cornell University discovered. They created an algorithm to tell fake videos from real ones by analyzing blink rates. The US Department of Defense is reportedly working on AI that can detect deepfakes as well.
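The blink-rate idea can be sketched in a few lines. The sketch below assumes a face-landmark detector has already produced an eye-aspect-ratio (EAR) value for every frame of a clip; the threshold and the “normal” blink range are illustrative choices, not the researchers’ actual parameters.

```python
# Illustrative sketch of a blink-rate heuristic. Assumes a face-landmark
# detector has already produced an eye-aspect-ratio (EAR) value per frame:
# the EAR drops sharply when the eyes close. Thresholds are illustrative.
from typing import List

def count_blinks(ear_per_frame: List[float], closed_threshold: float = 0.2) -> int:
    """Count blinks as open-to-closed transitions in the EAR signal."""
    blinks = 0
    eyes_closed = False
    for ear in ear_per_frame:
        if ear < closed_threshold and not eyes_closed:
            blinks += 1          # eyes just closed: start of a blink
            eyes_closed = True
        elif ear >= closed_threshold:
            eyes_closed = False  # eyes reopened
    return blinks

def looks_synthetic(ear_per_frame: List[float], fps: float = 30.0,
                    min_blinks_per_minute: float = 5.0) -> bool:
    """Flag clips whose subject blinks far less often than people normally do."""
    minutes = len(ear_per_frame) / (fps * 60.0)
    if minutes == 0:
        return False
    return count_blinks(ear_per_frame) / minutes < min_blinks_per_minute
```

In practice, detection systems combine a signal like this with many others, since newer generators can learn to add realistic blinking once the tell becomes known.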

However, for the moment, the fakers are far more numerous than the people working to expose their frauds. “We are outgunned,” Hany Farid, a computer-science professor and digital-forensics expert at the University of California, Berkeley, recently told the Washington Post. “The number of people working on the video-synthesis side, as opposed to the detector side, is 100 to 1.” Even if that situation were to improve, there are other reasons to worry about a machine solution to the problem of fakes. The more we depend on AI to protect us from fakes that fool people, the more we expose ourselves to a new and potentially greater danger: fakes that fool the machines.

Such forms of fraud already exist. Chinese researchers have demonstrated AI techniques to fool computers that analyze satellite images and photos of terrain. Experts worry about the domino effects of “poisoned” data as it flows from one machine to another, without humans knowing that anything is amiss. Todd Myers, automation lead and chief information officer in the Office of the Director of Technology at the National Geospatial-Intelligence Agency, asked attendees at an AI conference last spring to imagine a battle plan that depends on troops taking a crucial bridge—which turns out not to be there. Or consider the consequences of faked map info being used to route self-driving trucks. The government might be able to police its own data to guard against such fakery, but what about other organizations that rely on satellite images to understand the world and make decisions?
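One well-known way to make an image that fools a machine is an adversarial perturbation, such as the fast gradient sign method, which nudges every pixel just enough to push a classifier toward the wrong answer while the picture looks unchanged to a person. The PyTorch sketch below is a generic illustration of that idea under those assumptions; the model and the satellite image are placeholders, and this is not the specific method the researchers above demonstrated.

```python
# Illustrative sketch of an adversarial perturbation (fast gradient sign
# method): a tiny, nearly invisible change to an image that pushes a
# classifier toward the wrong answer. The model and input are placeholders.
import torch
import torch.nn.functional as F

def fgsm_perturb(model: torch.nn.Module, image: torch.Tensor,
                 true_label: torch.Tensor, epsilon: float = 0.01) -> torch.Tensor:
    """Return a subtly altered copy of `image` that the model is likelier to misread.

    `image` is a batched tensor (1, channels, height, width) with values in [0, 1];
    `true_label` is the correct class index as a tensor of shape (1,).
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel a tiny amount in the direction that most increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

Run against a trained classifier, the returned image can look identical to the original to a human reviewer yet be labeled as something else entirely by the machine.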

Even if we can perfect methods to sort out real videos, maps, audio, and words from fakes, some fear we can’t escape the damage to society the fakes can cause by their mere existence. Citizens may soon have trouble trusting anything in the media—especially if it’s a message that doesn’t align with what they want to believe. In the long term, Van Bavel speculates, “that could enhance polarization. That’s where I see a possible political consequence—over a long period of time, that could erode social trust.”

This isn’t just a matter of people’s willingness to believe this or that particular video. There are also secondary effects from seeing others’ reactions and suspecting their motives. A couple of weeks after the Pelosi videos appeared, it was widely reported that Mark Zuckerberg had called the House Speaker to discuss the issue and Facebook’s role in it. It was also widely reported that she never called him back. The videos had been debunked, but one link in the web of trust that keeps society together had broken.

 
