The world of social media took quite a blow when formal investigations were launched over an apparently unauthorized third-party use of Facebook data. Then this week, news broke of a different third-party use of data, this time from Indian Prime Minister Narendra Modi’s official app.
But as the revelations mount, experts say the issue is only reinforcing the critical role that humans will need to play in today’s boom in artificial intelligence—and at all levels of firms.
Certainly, the corporate world has a headache on its hands. According to a recent study by the Ponemon Institute, only 18% of organizations are even aware of how external vendors are using their data, at a time when 56% of organizations have experienced a data breach caused by a third party. Some experts say the problem starts at the top, with many boards still lacking former CTOs, CIOs, or other digital leaders.
“The rapid pace of change means new issues and questions are constantly being formed,” says Craig Stephenson, senior client partner and managing director of the North America CIO/CTO Recruiting practice at Korn Ferry. “You need the right people on the board asking the right questions,” he says.
Beyond boards, companies from the healthcare to defense to retail sectors are discovering that while AI technologies are growing, the talent behind them isn’t. “The shortage of AI talent is very real,” says Samantha Wallace, technology market leader for North America at Korn Ferry Futurestep. “Even the most established people in the AI space are still relatively new in it.”
Indeed, more than half of respondents in a recent survey indicated that the lack of necessary staff skills was the top challenge to adopting AI at their respective companies. Tech giants have reportedly been paying huge salaries to people with this sort of knowledge base.
While there’s no magic bullet for preventing the misuse of data, experts say the Facebook incident likely will prompt organizations to create a risk-oversight function that’s separate from where the data itself is captured (i.e., in a company’s legal or compliance function).
“It’s the same way you don’t want just the IT team to manage cybersecurity; the same thing will happen with data,” says Stephenson. “Companies will have to ensure that the information that’s being generated falls within their design as an organization, upholds the law, and supports all the privacy issues that it’s intended to support.”
The risk of doing nothing could be devastating. After all, not only is consumer trust at stake, but innovation as well. Across the pond, the EU is getting ready to enact its General Data Protection Regulation (GDPR), an aggressive measure that requires businesses to protect the personal data and privacy of EU citizens for transactions that occur within EU member states. (This applies to US companies that do business in Europe as well.) Under the rule, which goes into effect in May, companies will be held to strict standards about how they collect and use data, and about what consumers can control. Not only are the penalties for violations huge, but experts are concerned the rule will make the web “virtually unsurfable.”
Incidents like the one at Facebook add fuel to calls for similar moves by the government here. “If you cannot control and police your own activities and those of the third parties you’re working with, then someone else is going to come in and do it for you—and that’s going to be the government,” says Honor Pollok, senior client partner in the Media & Entertainment practice for EMEA at Korn Ferry.