January 26, 2026
How much skill and knowledge do you have, and how much are you faking with the help of AI? Those are big questions surrounding AI cheating, but the biggest question of all might be: What, exactly, is considered cheating?
Indeed, the definition of AI cheating continues to vary wildly—and change frequently. For example, one organization might think an employee using AI to design an entire marketing campaign is making an innovative move, while another might consider it an unimaginative cop-out. To make things even more confusing, what was considered cheating a year ago might be A-OK now. “It’s a chess match out there,” says JP Sniffen, practice leader of Korn Ferry’s Military Center of Expertise.
The one thing that isn’t in dispute is that using AI to push boundaries is rampant. As many as 40% of Gen Z men say they’ve passed off AI-generated work as their own, according to one survey. Nearly 60% of job candidates say they’re willing to use AI in the hiring process, according to a 2024 survey, and an overwhelming majority of those use it to exaggerate or outright lie about their skills in a résumé, cover letter, job application, or skills assessment. And those are the less nefarious uses (AI is also making various types of fraud and scams considerably easier to execute). Clients raise the issue repeatedly, says Bryan Ackermann, Korn Ferry’s head of AI Strategy and Transformation. “Cheating is a thing, it’s a real challenge, and it will get worse because the technology is improving,” he says.
AI, of course, could be a powerful tool to improve company productivity, uncover new business strategies, and otherwise help organizations grow. But, like many nascent technologies, it comes with growing pains, and cheating has become one of the most common headaches. Some organizations are taking a drastic approach. In late December, the world’s largest accreditation body for accountants announced that it’s banning people from taking licensing exams remotely. The reason: It’s become too easy for test takers to use AI tools to come up with the answers.
The onus, experts say, is on organizations to define, and then communicate, what AI can and cannot be used for. Many firms in highly regulated industries like finance and pharmaceuticals have already devised strict policies, but at thousands of other organizations, policies remain nebulous at best. Even where companies have codified policies, don’t expect employees to read the rules in a handbook. In a survey of 1,000 workers last year by the business consultancy Resume Now, nearly 60% admitted to using AI in unapproved ways, though many blamed their firms, saying their company’s AI policies were unclear. “You need to be in constant communication with employees who use the technology,” says Richard Marshall, Korn Ferry’s global managing director of corporate affairs.
Communications also should go beyond stressing legal compliance. Companies should remind employees that whether they use their own brains, hands, AI, or some combination to produce work, they are ultimately responsible for the finished product. “Crappy work is crappy work,” Ackermann says.
The cheating issue becomes more complex when it comes to hiring. Companies can insist on in-person interviews so a candidate can’t simply consult AI to answer a question. During remote interviews, organizations can also require candidates to turn over control of their phones or computers to recruiters via software. At the same time, organizations can get creative, experts say. Since AI skills are coveted, talent professionals can spell out where candidates should use the technology in the hiring process, and draw a clear line where they should avoid it. “The key is communication,” Ackermann says.
Learn more about Korn Ferry’s AI in the Workplace capabilities.