Mind Readers

What will you buy next? Where is the next crime likely to happen? Tech’s ability to predict a person’s next step—or even next thought—is dramatically changing the strategies for firms and governments.



“You might know Jane Doe,” says your favorite social network. “Hey, how about following Joe Blow?” says another. No surprise there: Like airlines that prefer full planes or shoemakers who want people to jog, networks nudge people to use their services more often. Yet it’s a delicate task. Few of us want to open our lives to strangers or to people who feel like strangers. So the companies sift data on their users and try to predict who might wish to make connections. By now, Facebook, LinkedIn, Twitter, Instagram and other networks have gotten uncannily adept at such forecasts. Just ask Kashmir Hill.

Last summer, Hill, a reporter for Gizmodo.com, found herself wondering why Facebook was listing a woman named Rebecca Porter in her “People You May Know” feed. The two women had nothing in common. Yet the name rang a bell for Hill. Her father was abandoned as a baby by his father, whose last name was Porter. And, in fact, Rebecca Porter is married to the brother of Hill’s lost grandfather. Facebook had found a great-aunt Hill had never known about.

Hill would love to know how, but she can’t find out: No social network wants to show competitors how it predicts possible links among users. So she was left with only her mixed feelings and her growing collection of other people’s uncanny stories about the “People You May Know” algorithms. As she wrote last August: “I was grateful that Facebook had given me the chance to talk to an unknown relation, but awed and disconcerted by its apparent omniscience.”
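
Facebook won’t say how “People You May Know” works, but classic link-prediction heuristics give a flavor of the machinery. Here is a minimal sketch that ranks candidate friends by mutual connections; the graph and names are invented, and real systems blend many more signals (shared networks, uploaded contact lists, and so on):

```python
# Toy friendship graph: each person maps to a set of friends.
# Names and edges are invented for illustration.
friends = {
    "ana":  {"ben", "cara", "dev"},
    "ben":  {"ana", "cara"},
    "cara": {"ana", "ben", "dev"},
    "dev":  {"ana", "cara", "erin"},
    "erin": {"dev"},
}

def suggestions(person, graph):
    """Rank non-friends by number of mutual friends (common-neighbors score)."""
    scores = {}
    for candidate in graph:
        if candidate == person or candidate in graph[person]:
            continue  # skip yourself and people you already know
        mutual = graph[person] & graph[candidate]
        if mutual:
            scores[candidate] = len(mutual)
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(suggestions("erin", friends))  # [('ana', 1), ('cara', 1)]
```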

 

She isn’t alone. In fact, if you haven’t felt awe and unease about the astonishing new technological powers of organizations to forecast our needs, wants and deeds, here’s a prediction for you: You will soon. Colossal troves of detailed personal data, married to artificial intelligence’s vast analytical power, have conferred eerie predictive power on organizations large and small, public and private, all over the world.

As networks predict who we’ll enjoy contacting, retailers now market “B2I” (business to individual) services by forecasting what individuals want. Amazon, for one, has patented a system for “anticipatory shipping,” which uses purchase data to forecast customers’ wants and ship items before they’re ordered.
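
The filing leaves the details to the imagination, but the core logic is a plain expected-value rule: ship speculatively when the predicted chance of a purchase outweighs the cost of a wrong guess. A minimal sketch, with invented numbers:

```python
def should_preship(p_purchase, margin, return_cost):
    """Ship speculatively when expected profit beats expected loss.

    p_purchase  - model's predicted probability the customer buys (assumed given)
    margin      - profit if the customer keeps the item
    return_cost - cost of taking back (or discounting) an unwanted item
    """
    expected_gain = p_purchase * margin
    expected_loss = (1 - p_purchase) * return_cost
    return expected_gain > expected_loss

# Invented example: 80% predicted chance of purchase, $12 margin, $5 return cost.
print(should_preship(0.80, 12.0, 5.0))  # True: $9.60 expected gain vs $1.00 expected loss
```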

Government is moving as quickly as business. Cities, for example, frequently use predictive technology to forecast problems, from traffic jams to broken water mains. And “predictive policing,” the use of algorithms to forecast places where crimes are most likely to occur, is now a standard tool in police and sheriff’s departments across the United States, including 20 of the 50 largest, and is being used in Denmark, the Netherlands, Belgium and Austria. In China, too, Cloud Walk, a company whose analytic algorithms combine facial recognition and data analysis to predict an individual’s likelihood of committing a crime, is in place in about 50 jurisdictions. Last year police in Delhi began a similar program, in collaboration with scientists from the Indian Space Research Organisation.

“There is sometimes a sense that this tech is special,” says Jeremy Heffner, product manager and senior data manager at Azavea, a Philadelphia company that makes HunchLab, a predictive policing package. “That’s not so. These are common machine-learning algorithms that people are using.” In other words, just because it’s spooky doesn’t mean it’s difficult or expensive for companies to implement.
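
Heffner’s point is easy to demonstrate. A hot-spot forecast can be as ordinary as an exponentially weighted average of past incident counts per map cell. The sketch below uses invented data; a production system like HunchLab layers in many more covariates, but the machinery is this mundane:

```python
# Toy "hot spot" forecaster: rank map grid cells by an exponentially
# weighted average of past weekly incident counts. All data is invented.
history = {  # cell id -> incident counts for the last 4 weeks (oldest first)
    "cell_a1": [2, 3, 1, 4],
    "cell_a2": [0, 0, 1, 0],
    "cell_b1": [5, 4, 6, 5],
    "cell_b2": [1, 0, 2, 1],
}

def forecast(counts, decay=0.5):
    """Exponentially weighted average: recent weeks count more than old ones."""
    weights = [decay ** i for i in range(len(counts) - 1, -1, -1)]
    return sum(w * c for w, c in zip(weights, counts)) / sum(weights)

# Patrols would be steered toward the cells at the top of this ranking.
ranked = sorted(history, key=lambda cell: forecast(history[cell]), reverse=True)
print(ranked)  # ['cell_b1', 'cell_a1', 'cell_b2', 'cell_a2']
```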

For most of us, the obvious signs of this new predictive power aren’t in business strategies or government policies but in our day-to-day lives. Type a few letters into a smartphone’s email or message app, and it predicts what word you intend to enter. Open up your streaming service, and it predicts what music you’ll enjoy or what movie you’re in the mood to watch. Use Google or an iPhone to keep tabs on your calendar, and you might get a notice that because traffic is heavy, you need to leave now for that dentist appointment you forgot. This kind of moment-to-moment prediction of your actions is going to become far more common and accurate in the very near future.
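
Next-word prediction, for instance, needs nothing more exotic than counting which words tend to follow which. A toy bigram model shows the idea; the corpus is invented, and phone keyboards use vastly larger models (increasingly neural ones), but the prediction step looks like this:

```python
from collections import Counter, defaultdict

# Invented mini-corpus standing in for a user's typing history.
corpus = "see you at the office see you at the dentist meet me at the office"

# Count, for each word, which words have followed it.
bigrams = defaultdict(Counter)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    bigrams[current][nxt] += 1

def predict_next(word, k=3):
    """Return the k words most often seen after `word`."""
    return [w for w, _ in bigrams[word].most_common(k)]

print(predict_next("the"))  # ['office', 'dentist']
```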

For example, earlier this year, Facebook was awarded a patent for a technology that predicts the emotional state of a user by analyzing data about how fast and hard people type and tap their devices, combined with their movements, location and other information the social network has about them. (When this system is perfected, the network could tailor the look of posts and messages to match their senders’ expected emotions, thus preventing words from being taken the wrong way.)
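
The patent describes the signals, not a finished model, so any concrete example is hypothetical. One plausible shape, with invented features and numbers, is a nearest-centroid guess from typing speed and tap pressure:

```python
import math

# Hypothetical illustration of the idea in the patent: infer a coarse
# emotional state from typing dynamics. Features, labels and numbers
# are all invented for the sake of the sketch.
centroids = {  # mood -> (keys per second, normalized tap pressure)
    "calm":     (3.0, 0.30),
    "agitated": (6.5, 0.80),
}

def classify(keys_per_sec, pressure):
    """Nearest-centroid guess at the typist's state."""
    def dist(mood):
        ks, pr = centroids[mood]
        # weight pressure up so both features matter on a similar scale
        return math.hypot(keys_per_sec - ks, (pressure - pr) * 5)
    return min(centroids, key=dist)

print(classify(6.1, 0.75))  # 'agitated'
```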

Whenever it prompts us to do something we hadn’t thought we wanted to do, predictive tech disconcerts us with the thought that algorithms and their owners can know us, and our future, better than we do ourselves. Yet if past technologies are any guide, the new normal will be neither paradise nor dystopia. Instead, we’ll learn how to live with machines that know us as we have never been known before.

At Azavea, the tech is the easy part, says Heffner. “We spend a small portion of our time on the algorithm,” he says. “We spend more time on how things are presented to users.”

After all, he adds, cops know a lot about crime and where it happens, too much to shrug and tell themselves the machine must know better. On the other hand, Heffner says, “if it always agrees, then they’ll say, ‘Why are we bothering to use predictive technology?’” The company’s goal in “tailoring the system to the needs of humans” is to strike the right balance between belief in the machine and belief in oneself.

But even as we reckon with what we want from predictive machines, those machines will be changing us. Consider one detailed and quite plausible extrapolation, worked up by the Austin, Texas-based firm Argodesign. In its utopian future, top-flight predictive algorithms make possible the “Echo Fridge,” a refrigerator with three doors. One would face outdoors, so that it could receive deliveries from Amazon. (Apartment on the 26th floor? No worries, that’s what drones are for.) Inside the home, one door would open onto the items Amazon had delivered without being asked; the other would hold the things you decided to keep. Moving an item from one side to the other would automatically purchase it. A vast nationwide network, essentially turning each home into a store, would depend entirely on accurate predictions about people’s desires.

The psychological line between home and store is only one of many boundaries that would have to be rethought, as algorithmic predictions take the place of our own judgments. Already, in the past decade, we have learned to use data analysis to correct our impressions, preferring quantified information to our own perceptions (“I thought I was spending time in all our branch offices, but the data shows I put in fewer hours in Denver”). One challenge of the predictive future will be to identify the moments when we choose to override data-based analyses even if they are right—just to assure ourselves we’re still on the job and still matter. We may well reason that in some cases the best action is the one that belongs to us, rather than a machine-honed alternative. Perhaps you’d rather send that odd word that you kind of made up, rather than the machine suggestion. Perhaps you’d rather go with your own sense of what you’d like for dinner, rather than a corporate forecast.

The Crystal Ball of the 21st Century

Algorithms of increasing power can use troves of data that track behaviors to predict what people will do next.


These kinds of boundary-setting decisions won’t only be required of individuals. Organizations and, indeed, whole nations, will also have to decide where to draw lines between good predictive tech and harmful inventions.

Consider, for example, the powerful effects of predictive tech on politics. In 2013, Michal Kosinski, now of Stanford University, and two colleagues showed they could predict a person’s gender, sexual orientation and politics based solely on Facebook “likes.” A method similar to theirs, which correlated the “likes” with a standard psychologists’ measure of the “big five” aspects of personality, later became a key part of the toolkit of Cambridge Analytica. That’s the political consultancy that famously helped the Donald Trump campaign predict which voters would be most interested in its messages (and precisely which of those messages they would be interested in). Kosinski declined to get involved with Cambridge Analytica, but he continues to work at the forefront of using data to predict traits and behavior. Earlier this year, for example, Kosinski and several colleagues demonstrated a system that, given five pictures of a person, could predict his or her sexual orientation with high accuracy. (The system was trained on thousands of photos from a dating site, which taught it subtle signs in facial features that correlate with sexual orientation, signs that humans are far worse at detecting.) Kosinski says the same method could use photos to predict people’s political stance.
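
The statistical core of the 2013 study was unremarkable: treat each “like” as a binary feature and score a trait with a linear model. A schematic sketch follows; the pages, weights and trait are invented, whereas the real study fit its models to tens of thousands of volunteered profiles:

```python
import math

# Schematic of the likes-to-traits approach: each followed page is a
# binary feature feeding a logistic score. All weights are invented.
weights = {
    "page:hiking_club":   -0.4,
    "page:talk_radio_x":   0.9,
    "page:indie_cinema":  -0.7,
    "page:pickup_trucks":  0.8,
}
bias = -0.1

def trait_probability(likes):
    """Logistic score: probability of the modeled trait given a set of likes."""
    z = bias + sum(weights.get(like, 0.0) for like in likes)
    return 1 / (1 + math.exp(-z))

print(round(trait_probability({"page:talk_radio_x", "page:pickup_trucks"}), 2))  # 0.83
```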

The political power of prediction is important for several reasons. For one thing, a correct prediction of how you will vote, and why, allows a campaign to target advertising and other messages to specific individuals, rather than to broad blocs of voters.

Political prediction is also useful for gaming the system by cherry-picking favorable voters. In the United States, for example, districts for the House of Representatives are redrawn by state governments every 10 years, after each census. Whichever party controls that process routinely seeks to draw lines that maximize the power of its own voters and dilute the power of its opposition. Reliable, precise predictions of voting behavior make this voter-choosing process much easier, as the sketch below suggests.
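
Here is a minimal illustration of the “packing” half of the tactic: with a predicted preference for every voter, a mapmaker can sort the electorate and concentrate the opposition into as few districts as possible. All probabilities are invented:

```python
# Each value is an invented predicted probability of voting for Party A.
voters = [0.9, 0.8, 0.8, 0.7, 0.3, 0.4, 0.2, 0.3, 0.6, 0.55, 0.6, 0.65]

def pack_opposition(probs, n_districts=3):
    """Sort by predicted support and slice: the least favorable voters get packed."""
    ranked = sorted(probs)
    size = len(ranked) // n_districts
    return [ranked[i * size:(i + 1) * size] for i in range(n_districts)]

for i, district in enumerate(pack_opposition(voters), 1):
    share = sum(district) / len(district)
    print(f"district {i}: predicted Party A share {share:.0%}")
# district 1: 30%  district 2: 60%  district 3: 80%
```

With about 57 percent predicted support overall, Party A comes away with two of the three districts, while the opposition’s votes sit bottled up in the first.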

“We know exactly which primaries and general elections you have voted in, and since there are so few realistic candidates in most elections, down or up ballot, we might as well know exactly who you voted for. Marry that data with magazine subscriptions, the kind of car you drive and all sorts of other easily available consumer information that we’ve figured out how to use to map your political preferences, and we can gerrymander and target subdivisions, houses—even double beds,” a congressman wrote on the website Vox in 2015. (Wisely for his or her political future, the politician wrote anonymously.)

A democratic society might well agree with David Carroll, a media scholar at The New School in New York, that such “hyper-targeting” undermines the norm of open, honest debate by informed, open-minded voters.

In the longer run, of course, transformative technologies lead to changes no one could have imagined at their birth. There is no reason to think predictive tech will break this rule. And what kind of now-inconceivable transformation might it lead to? One possibility: A society that can predict its members’ needs perfectly, at any given moment, might not need older, cruder and more approximate methods for determining what people want and what they are willing to do to get it. It might not, in other words, need markets. Perhaps capitalism, in its quest to predict consumer needs to fine-grained perfection, is inventing the system that will replace itself. For the moment—but only for the moment—we can’t predict.
