Beware of being bored

As technology takes away the need to think on the job, workers all over may struggle to find something to do.



Ever been chased by a hungry bear, an alligator or a shark? If so, you are one remarkably unlucky person. Almost everyone else in our species checked out of the food chain thousands of years ago. As the comedian Louis C.K. says, we’re troubled by mortgages and traffic and money, but at least we don’t have to worry about, say, cheetahs down at the train station.

What conferred this easy life on Homo sapiens, of course, was 20,000 years of inventing devices to stand in for our muscles. All that time, we have been steadily offloading physical work onto our tools, from flint-tipped arrows and chariots to 747s and Mack trucks. In fact, the human race does so little physical work now that about 40 percent of all adults worldwide are overweight, according to the World Health Organization. A significant fraction of humanity forces itself to exercise—that is, to do physical work that serves no other purpose than to get done—just to stay healthy.

Now comes a new problem, as we move from those muscle helpers to a world of artificial intelligence that, in effect, is full of mind helpers. Indeed, over the past decade, advances in computation and communication have begun yielding tools that do as much work for our minds as mighty machines do for our bodies. Increasingly, experts believe that this combination of disruptions—the Internet of Things, ubiquitous artificial intelligence, machine learning—adds up to a revolution that will end up outsourcing mental and even emotional labor to machines as thoroughly as we have already outsourced physical labor. And this revolution, they say, won’t take thousands of years; it will also create all sorts of problems for company leaders whose sizable workforces are running out of things to do with their heads.

“I was teaching a class on the Internet,” says Lauren McCarthy, an artist, programmer, and instructor at UCLA’s School of the Arts and Architecture. “And the students kept asking, ‘What is the Internet?’ ” Having grown up with constant connection, the students didn’t really understand where the bounds of it were, she says. Already, at the dawn of the age of smart cars and smart refrigerators, they’re used to having any information they need at their beck and call.

 

These digital citizens are rolling smoothly into a world that makes intelligence as omnipresent as information. Already, machines collect data, analyze it and use that analysis to perform work that not long ago required a human brain. Today, for example, algorithms tell judges which prisoners should be denied bail, tell police where to deploy officers, and tell security patrols at airports where to go. Algorithms read X-rays and other results from medical procedures, and they underpin an approach that scholars at Carnegie Mellon’s Human-Computer Interaction Institute have dubbed “algorithmic management,” in which, as one recent paper put it, “human jobs are assigned, optimized and evaluated through algorithms and tracked data,” rather than through the judgments and analyses of managers.

Optimists tout the obvious benefits of having machine intelligence take on our mental tasks: more productivity, less time spent on busywork, and improved performance and accountability. The economist Jens Ludwig and his colleagues at the University of Chicago Crime Lab developed an algorithm to predict which prisoners would skip out on bail. When tested on a data set of actual case histories, it guessed wrong 8.5 percent of the time. But the judges who worked those cases in real life were wrong 11.3 percent of the time, Ludwig told a conference at NYU Law School’s Brennan Center for Justice last fall. Moreover, he noted, the algorithm could be altered to do better—not something you can say for sure about humans. “There’s nothing more opaque than the inside of a judge’s head,” he said.

Given a chance to ask a question, a practicing judge in attendance welcomed the algorithmic tool, saying, “We could use all the help we can get.” But he added, “Just don’t do it too well, because judges like to have job security, too.”

The nervous chuckles that greeted that remark are the soundtrack of the AI revolution. They’re the sound of people drawn to machine intelligence’s capabilities but worried, too, that this much faster revolution will do to our minds what machines have done to our muscles. Assistance for our mental powers could free us to think great thoughts, have great adventures and make great art. But it may also leave us mentally flabby and inept.

 


After all, with mental and emotional skills, the operating principle has always been, “Use it or lose it.” Today, we don’t remember phone numbers because our phones know them. Tomorrow, will we remember how to charm our spouses when the phones already have that down pat? The science-fiction writer Liu Cixin believes, as he recently wrote in the New York Times, “A sort of learned helplessness is likely to set in for us, and the idea of work itself may cease to hold meaning.” In the near future of triumphant AI, he wrote, we may end up like docile, pampered pets, being led here and there by a wise machine that knows us better than we know ourselves, “as unaware of its plans for us as a poodle on its way to the groomer’s.”

The problem, as researchers who work on complex automated systems have learned, is that once you’ve given mental work to an app or gadget, you don’t stay interested in doing it yourself. After a while, you aren’t able to do it yourself, any more than an out-of-shape driver could run a marathon.

This is a problem for any complicated system, from airline cockpits to nuclear power plants to self-driving cars. The reason: “Engineers design systems according to what they expect, but by definition can’t predict the unexpected,” says Bruno Berberian, a psychologist who studies how people interact with highly automated systems at the French aerospace lab ONERA in Toulouse. “As soon as we are out of the realm of the expected, human beings will have to get back into the control loop.”

Scientists aren’t alone in thinking about the boundary between enabler and enfeebler. Artists have been playing with that border, too, creating algorithms and gadgets that are transparently weird, in order to get people to take a second look at their assumptions about technology.

But a funny thing often happens to these over-the-top creations. Even as some people find them creepy, someone else accepts them as a helpful piece of tech.

A few years ago, for example, McCarthy designed a gizmo she called the “Happiness Hat.” It was a wool cap that harbored a bend sensor, which attached to the wearer’s cheek, and a servo mechanism in the back of the hat. Attached to the servo was a metal spike, which dug into the user’s head unless the cheek sensor detected a smile (how much it dug was inversely proportional to how wide the smile was).
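
To make the mechanism concrete, here is a minimal sketch of the hat’s control logic, written in Python purely as an illustration. The function names and depth figures are assumptions for this sketch, not details taken from McCarthy’s actual device; the only thing it borrows from her description is the inverse relationship between smile width and spike depth.

```python
# Hypothetical sketch of the Happiness Hat's control logic, for illustration only.
# The constants and function below are invented names, not McCarthy's code; the
# point is the inverse mapping she describes: the wider the smile, the shallower
# the spike.

MAX_DEPTH_MM = 5.0   # assumed maximum spike travel
FULL_SMILE = 1.0     # normalized bend-sensor reading for a wide smile


def spike_depth(smile_width: float) -> float:
    """Map a normalized smile reading (0 = no smile, 1 = full smile)
    to a spike depth that shrinks as the smile widens."""
    smile = max(0.0, min(smile_width, FULL_SMILE))
    return MAX_DEPTH_MM * (1.0 - smile / FULL_SMILE)


if __name__ == "__main__":
    # No smile -> 5.0 mm, half smile -> 2.5 mm, full smile -> 0.0 mm
    for reading in (0.0, 0.5, 1.0):
        print(f"smile={reading:.1f} -> spike depth {spike_depth(reading):.1f} mm")
```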

 

“Most people understood it was sort of a critique or a joke or satire,” McCarthy says, “but then I got e-mails from people saying, ‘I’ve been talking to my therapist and he thinks that I should really try this thing because nothing else has helped me cure my depression.’ ” She realized, “The line of what was acceptable to people is different for each person.”

These are lines we’ll all soon have to think about, as intelligence-on-tap both attracts us and creeps us out. Aspects of our personal lives can already be outsourced to gadgets a bit like McCarthy’s hat. The Hapifork, for example, gets you to eat less by vibrating when it detects that you are munching too fast. The Spire tracker buzzes when it senses that your breathing is too shallow and stressed for your own good. The Kolibree smart toothbrush monitors how and where you brush your teeth, then sends the data to a smartphone app that tells you how to do a better job. The Upright wearable posture trainer sits on your back and buzzes if you slouch (and tracks performance with a smartphone app, of course). The online Crystal app takes over the job of sizing up a new acquaintance, analyzing public data about a person to tell you what that person is like and how best to communicate with him or her. The Romantimatic smartphone app will even handle the sensitive task of showing people you love them, texting a significant other at set times and suggesting words for the message.
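
What these gadgets share is a simple detect-and-nudge loop: sample a sensor, compare the reading to a threshold, and buzz the wearer when it crosses. The sketch below illustrates that pattern with an eating-speed example in the spirit of the Hapifork; every name and threshold in it is invented for illustration, not drawn from any real product’s software.

```python
# Hypothetical illustration of the detect-and-nudge pattern shared by gadgets
# like the Hapifork or Spire. All names and thresholds here are invented for
# this sketch; no real product API is being shown.

import random
import time

BITES_PER_MINUTE_LIMIT = 25.0   # assumed "eating too fast" threshold


def read_bite_rate() -> float:
    """Stand-in for a real fork sensor; returns a simulated bites-per-minute reading."""
    return random.uniform(10.0, 40.0)


def vibrate() -> None:
    """Stand-in for driving a vibration motor."""
    print("bzzz - slow down")


def monitor(samples: int = 5, interval_s: float = 1.0) -> None:
    """Sample the sensor a few times and nudge the user whenever
    the reading crosses the threshold."""
    for _ in range(samples):
        if read_bite_rate() > BITES_PER_MINUTE_LIMIT:
            vibrate()
        time.sleep(interval_s)


if __name__ == "__main__":
    monitor()
```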

Just how much of our mental and emotional lives do we want to leave to machines? And what will we do with the free time we win by delegating to them? Perhaps we’ll need to find the mental equivalent of the gym and exercise our minds for the sake of exercising them. A 2012 study by Gerald Matthews, a psychologist at the Applied Cognition and Training in Immersive Virtual Environments Lab at the University of Central Florida, suggests such mental gymnastics could help people get back in the control loop of a self-driving car. He and his colleagues gave passive passengers a smartphone task—answering trivia questions—and discovered that it led them to be more alert when they had to take control back.

There probably aren’t easy answers to the question of how to deal with the sudden arrival of tools that outsource mental work—in part because there is much we don’t know about how people will interact with powerful AI, and in part because people will probably differ in their needs and preferences.

“When does it go too far?” McCarthy asks. “And when do you find aspects of this that are terrifying, but then when you try it, it actually does something for you? How do you negotiate that, navigate that dissonance?” The answers to such questions aren’t clear. But one thing is certain: It’s time to start asking them.
