Are You Violating AI Policy? Most Likely

A new survey shows that six in ten employees are likely using AI in unapproved ways. Could that be a bigger deal than it seems?

August 12, 2025

Outlined on page 87 of the firm’s IT-security policy—section one, paragraph four—are the updated and approved guidelines for employees’ use of AI tools. Knowing that no one would go to the trouble of actually reading the policy, the firm emailed its global workforce a summary of the changes. The problem is, hardly anyone read that either—and the few who did were confused.

There’s good news and bad news when it comes to the latest development in the corporate world’s biggest change in a generation. The good news is that more and more employees are using AI. The bad news is that most of them are probably using it in a way that violates company policy—meaning that they’re potentially exposing the firm to security breaches, legal liability, and other risks. Nearly 60% of US workers surveyed admit to using AI in unapproved ways, though many blame the firm for it, with half saying that their company’s AI policies are unclear. “The way many leaders are communicating about AI, it’s no surprise that employees are confused and fearful,” says Bryan Ackermann, head of AI strategy and transformation at Korn Ferry.

To be sure, many firms, especially those in highly regulated industries like finance and pharmaceuticals, have devised strict policies full of legalese and bureaucratic protocols that experts say discourage employees from using AI. Others have built sophisticated internal AI tools to ensure security and protect proprietary information, but haven’t trained employees to use them. Moreover, as Ackermann notes, employees hear all the talk from leaders about how AI can increase savings and efficiency, and naturally they worry about their jobs.

Employees’ confusion over AI policies—and their fear about the technology’s future impact—is leading many to hide rather than spotlight how they are using it. That could be a bigger deal than it seems, since it may hold firms back from uncovering “enterprise-wide innovations,” says Wolfgang Bauriedel, senior client partner in the Digital and Technology practices at Korn Ferry. One recent study, for instance, found that employees don’t disclose their AI use to colleagues and managers for fear of negative repercussions. “The risk of losing out on enterprise-wide innovations is much greater than the risk of people making mistakes from trying things out,” Bauriedel says.

There are other, more practical risks as well. Already, several firms have faced lawsuits over privacy breaches or bias because of employees’ inappropriate use of AI. Firms have also been hacked and had data stolen because employees violated company policies by uploading information to an external rather than internal AI.

Still, experts say it’s not surprising that employees might balk at policies or simply not know them, and not just with respect to AI. Pointing to studies showing low awareness of and compliance with corporate policies overall, Richard Marshall, global managing director of the Corporate Affairs and Investor Relations Center of Expertise at Korn Ferry, says leaders shouldn’t expect employees to read a handbook before they use AI. He says AI is moving so fast that leaders need to be in constant communication with the employees who use the technology in order to evolve the necessary guardrails.

Experts also suggest leaders get creative with AI-policy communication, for instance, by building it into training programs or sending out test emails similar to those used in phishing scams (these can help determine if employees understand the approved and unapproved ways to use AI). For his part, Ackermann says leaders need to “create a moment” around AI that their employees can rally around. Instead of worrying about whether workers are breaking a rule, he says, “leaders need to focus employees around how they can use AI to transform their roles and organizations.”


Learn more about Korn Ferry’s Organization Strategy capabilities.