AI Gone Wrong

Checkout counters that don’t check you out. Bluetooth implants that get hacked. How companies can face unanticipated glitches from their tech ‘advances.’



Tesla’s visionary leader, Elon Musk, is famously keen on new technologies. And famously not keen on criticism. So, after a pair of investment analysts last year declared that the company’s Model 3 factory had too many robots, his response was surprising.

Musk agreed.

“Yes, excessive automation at Tesla was a mistake,” he tweeted in April 2018. “To be precise, my mistake. Humans are underrated.” When the company eventually hit its target of 5,000 cars a week, it was with a factory that had stepped back from extreme automation and put more human workers on the job.

There’s only one Elon Musk, but there’s nothing unique about the feeling that a new technology hasn’t lived up to its billing. In fact, it’s practically a universal 21st-century experience: Introduced to dazzling new technology, we imagine how it’s going to change our lives (for better or for worse). But then it doesn’t quite fulfill those promises. And it creates problems that didn’t exist before. And things go wrong, in ways big and small, even as we keep using it.

It isn’t that we need or want to go back to the way things were before. It’s just that the hot new things don’t work as anyone—designer, vendor, user, or employee—expected. After worrying about how we’d adapt to tech’s disruptive triumph, we find ourselves coping instead with its hiccups and stutters.

In some areas, this coping with tech’s disappointments has become so familiar we scarcely notice the chore. Every day, for example, millions of people type or swipe at their smartphones to tweet, text, or send email and find that a phrase like “but maybe it’s too late” has been autocorrected to something like “Bugs Mugabe it’d tweet.” Resigned to the frustration, many people add a boilerplate disclaimer to every mobile communication, explaining that it was sent from a phone. U no watt that mines.

But smartphones have been part of life for a decade. Where technological innovation is newer, most people haven’t yet built such coping mechanisms, so they are surprised and nonplussed when the devices fail to meet expectations.

That is one reason that, around consumer-facing technologies that are supposed to be seamlessly machine-operated—self-checkout (SCO) counters at the grocery store, automated passport control at the airport’s international arrivals terminal, ordering kiosks at restaurants—there is almost always a human employee hovering nearby. This worker’s job is simply to get people through little bumps and jitters that weren’t supposed to exist.

“Anyone who has used a self-checkout has accidentally put something unexpected in the bagging area and been admonished,” journalist Kaitlyn Tiffany wrote last year in a story for Vox. “They’ve also forgotten to put something in the bagging area and been admonished. They’ve also done seemingly exactly what they were supposed to do and been admonished by some terrible robot nonetheless.”

These little moments are easy to dismiss as temporary snags in the technology’s development, but they affect people’s feelings about a brand. And, more important for the bottom line, they also make it easier for people to steal. This is true physically (hey, no one is looking!), psychologically (they can’t be bothered to pay for a human being to deal with me, so maybe I won’t be bothered to pay for this), and even legally (I just forgot to scan this item, officer, and you can’t prove I intended to steal it). According to Tiffany, about 4% of items that pass through self-checkout machines are never paid for. “Indicators are that around one in five customers may regularly use SCO systems as camouflage for theft,” wrote criminologists Adrian Beck and Matt Hopkins, of the University of Leicester in the United Kingdom, in a 2016 research paper.

Tech need not create new problems to disappoint us. Sometimes a new gadget or algorithm works exactly as its designers wished—but ignores other concerns that matter to the rest of us. Twitter, for instance, has algorithms that reveal and promote topics that are getting a lot of attention on the network. That serves the goal of encouraging interest and engagement. The network also has policies against hate speech. But the two aren’t entirely integrated.

So, last November, “kill all Jews” was briefly listed as a “trending topic” on the network. In its apology, Twitter explained that a vandal had scrawled the phrase on the walls of a New York City synagogue. People tweeting their reactions to the hate crime quoted the words. “This was trending as a result of coverage and horrified reactions to the vandalism against a synagogue in New York,” a Twitter spokesperson said. “Regardless, it should not have appeared as a trend.”
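Twitter’s actual ranking system is more complex and not public, but the failure mode is easy to sketch. Below is a minimal, hypothetical illustration in Python: a trending counter that ranks quoted phrases purely by volume and never consults the policy blocklist sitting right beside it. All names and phrases here are invented for the example.

```python
from collections import Counter
import re

# A policy list exists elsewhere in the system -- but note that
# nothing below ever consults it. (Placeholder contents.)
BLOCKED_PHRASES = {"some hateful phrase"}

def extract_quoted_phrases(text):
    """Pull out phrases users put in single quotes."""
    return [p.lower() for p in re.findall(r"'([^']+)'", text)]

def trending(tweets, top_n=1):
    """Rank quoted phrases purely by raw volume.

    Deliberately single-minded: it optimizes for engagement, so a
    phrase quoted in horrified condemnation scores exactly like one
    posted in endorsement.
    """
    counts = Counter()
    for text in tweets:
        counts.update(extract_quoted_phrases(text))
    return counts.most_common(top_n)

reactions = [
    "Sickened to see 'some hateful phrase' scrawled on a synagogue wall.",
    "How is 'some hateful phrase' still up? Appalling.",
    "Vandals wrote 'some hateful phrase'. This is a hate crime.",
]
print(trending(reactions))  # [('some hateful phrase', 3)] -- now "trending"
```

Every tweet here is a condemnation, yet the counter surfaces the hateful phrase itself, because volume is the only signal it measures.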

Such technological single-mindedness—where the tool works well on one measure but ignores other goals—can have far worse consequences than a temporarily embarrassed corporation. In hospitals and clinics, the effects can be severe or even fatal.

Consider the convenient Bluetooth connection offered on the latest implantable medical devices (IMDs). Surgically placed in a patient’s body, an IMD delivers tiny electrical signals to parts of the brain, alleviating symptoms of Parkinson’s disease, essential tremor, chronic pain, and depression, among other maladies. The Bluetooth protocol was designed to let doctors use a tablet or smartphone to program and monitor the implant. It was not, though, designed to defend against bad actors.

As a result, researchers have found that IMDs can be hacked with off-the-shelf electronics. Last October, the Oxford Functional Neurosurgery Group and the digital security firm Kaspersky warned that hackers could easily take control of implants. “Manipulation could result in changed settings causing pain, paralysis, or the theft of private and confidential personal data,” the report said.
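In schematic terms, the gap the researchers describe is a protocol that trusts every connected client. The sketch below is a hypothetical illustration of that design flaw, not any vendor’s actual firmware or protocol; the handler name and settings fields are invented.

```python
# Hypothetical convenience-first IMD settings handler -- invented
# for illustration, not any vendor's actual firmware.
STIM_SETTINGS = {"amplitude_ma": 2.0, "frequency_hz": 130}

def handle_settings_write(client_id: str, new_settings: dict) -> str:
    """Apply a settings write from a connected Bluetooth client.

    A defensive design would first verify that the client completed
    authenticated pairing and holds a clinician credential. This
    handler, built only for convenience, checks nothing about who
    is asking -- which is the gap the researchers warned about.
    """
    STIM_SETTINGS.update(new_settings)  # client_id is never checked
    return "OK"

# Any nearby device that can connect can reprogram the implant:
handle_settings_write("unknown-tablet-42", {"amplitude_ma": 10.0})
print(STIM_SETTINGS)  # {'amplitude_ma': 10.0, 'frequency_hz': 130}
```

The fix is not exotic: the same handler with an authentication check at the top closes the hole. The flaw lies in what the protocol was designed to care about.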

Ironically, even thoughtful improvements can be a source of disillusion—especially when people haven’t been prepared for the upgrade. For example, consider this story by Michael I. Jordan, a professor in the Department of Electrical Engineering and Computer Sciences and the Department of Statistics at the University of California at Berkeley. Fourteen years ago, he and his pregnant wife were told that their prenatal ultrasound exam had shown white spots around the heart of the fetus. That is an indicator of increased risk for Down syndrome. The couple were told that amniocentesis—a test for abnormalities that requires withdrawing amniotic fluid from the uterus—would be required.

About one amniocentesis procedure in 300 ends up destroying the fetus, and Jordan didn’t want to take that risk without being sure it was worth it. So he looked up the study on which the couple’s geneticist had based her advice. He noticed an important detail: the research was done using an older type of ultrasound device, whose images had hundreds of pixels fewer per square inch than the device used on his wife.

“I went back to tell the geneticist that I believed that the white spots were likely false positives—that they were literally ‘white noise,’ ” Jordan wrote last year. “She said ‘Ah, that explains why we started seeing an uptick in Down syndrome diagnoses a few years ago; it’s when the new machine arrived.’”

As a statistician, Jordan knew that this mismatch between old data and new technology had led to thousands of unnecessary amniocentesis procedures. And given that one-in-300 risk, the misreading had needlessly ended pregnancies around the world.
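The reasoning is simple expected-value arithmetic. Using the one-in-300 figure above and a few illustrative procedure counts (the source says only “thousands”), a back-of-the-envelope sketch:

```python
# Expected procedure-caused losses from unnecessary amniocenteses.
# The 1-in-300 risk comes from the article; the procedure counts
# below are illustrative, since the source says only "thousands."
RISK_PER_PROCEDURE = 1 / 300

for procedures in (1_000, 5_000, 10_000):
    expected_losses = procedures * RISK_PER_PROCEDURE
    print(f"{procedures:>6,} unnecessary procedures -> "
          f"~{expected_losses:.0f} expected pregnancies lost")
# 1,000 -> ~3    5,000 -> ~17    10,000 -> ~33
```

Even at the low end, a diagnostic artifact created by a resolution upgrade translates into real, avoidable losses.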

But perhaps the most common disenchantment with tech these days is one that doesn’t involve the technology at all. It arrives when people realize that tech hasn’t really reduced the human workload.

The scholar, writer, and filmmaker Astra Taylor coined the term “fauxtomation” for such situations: when the arrival of new machines simply disguises or reorganizes work that is still done by people. She recounts waiting for her lunch order when another customer arrived to pick up his, which he had arranged via an app: “‘How did the app know my order would be ready 20 minutes early?’ he marveled, clutching his phone,” Taylor wrote recently in Logic, a journal about technology. The secret to this digital miracle? The human server behind the counter. “I sent you a message when it was done,” she explained. Similarly, many users assume (and companies say) that algorithms do content moderation for social networks, dating apps, and other services that need to filter out offensive or illegal content. Yet, Taylor notes, much of that moderation is actually performed by human beings, who scan feeds day in and day out for repulsive, unacceptable, and illegal material.

In fact, for all companies’ rhetoric about seamless and frictionless customer experiences, many people sense that the invisible human worker in “fauxtomation” is often the customer. When an organization offers an SCO, Tiffany writes, “it’s not ‘automating’ the process of checkout; it’s simply turning the register around, giving it a friendlier interface, and having the shopper do the work themselves.” When we enter an order onto a touchscreen at a restaurant, scan a bar code in a store, or input our details into a website, we are doing work that employees were once paid to do. This is why performing such chores won’t leave you feeling that you’re in a seamless technological wonderland.

Despite the sense of glitch and disappointment around a lot of new tech, no one is suggesting that we give up seeking to improve computing, data-collecting, or the integration of our physical world with the digital space. For all that our tech is imperfect, we have to recognize that it often works well enough to save us money or time or both. And, in fact, our frustrations can serve as a guide for future improvements.

That’s the silver lining of disappointment in technology: it can clarify our understanding and motivate us to do better. A sense that we haven’t achieved the ideal forces us to notice that we’ve expected too much too soon, or used big, general concepts instead of attending to details. After all, the details—of implementation, alignment, training—matter.

That’s the lesson Jordan draws from his experience with the ultrasound machine. Trying to design a global medical approach to genetic risks is, he notes, a huge challenge. Data gathered in one place and time cannot simply be assumed to fit an exam performed in another. Instead of expecting that something magical called “AI” will soon take over, he says, we need to get clearer about what AI is—and what it is not.

Ultimately, he believes that will require a new body of engineering knowledge. It will use concepts like “information,” “uncertainty,” and “data” the way 20th-century engineering uses concepts like “force,” “tension,” and “compression.” For the moment, in our relationship to the dawning digital world, we’re like people centuries ago who built cathedrals and bridges before engineering arose as a profession. Those structures sometimes collapsed, killing many and shocking many more. The lesson for medieval stonemasons is the same one for today’s algorithm builders and data crunchers: Yes, build it. But understand better what you’re building.