Taking Orders... From AI?

A new study finds people are surprisingly fine having AI as their coworker—but not as their boss. Why that needs to change. 

August 19, 2025

Just as they do with their fellow humans, people feel at ease giving AI orders—but they don't like taking direction from it. 

As firms rush into the so-called agentic era of AI, a distinct hierarchy is emerging. Three out of four workers in a new study say they don’t mind working alongside AI agents, which act independently of human prompting to make decisions and take actions on a wide range of tasks. But—in a critical finding for the long-term future of AI—only 30% are comfortable being managed or overseen by the technology. Or, as Lisa Peacock-Edwards, a senior client partner in the Technology and Digital Officers practice in EMEA for Korn Ferry, puts it: “People are happy to use AI, as long as they are in control of it.”

Right now, AI agents are functioning primarily as digital assistants that automate tasks and processes. And though there are very few current examples of AI agents managing people, the inverse is increasingly common. Experts say it’s only a matter of time before AI agents assume some direct responsibility for managing people.

Firms have already slashed middle and other layers of management, in part to get leaner and more efficient with AI (82% of firms in the survey plan to expand the use of agents, for instance). The managers who remain are expected to take on more direct reports, from an average of five people two years ago to 20 or more now. “AI is going to be needed to optimize management,” says Moses Zonana, a senior client partner in the Technology practice at Korn Ferry. 

The problem, however, is that AI and humans seem to work best in partnership with each other—at least so far—and partnerships involve both giving direction and being directed. Bryan Ackermann, head of AI strategy and transformation at Korn Ferry, says there’s bound to be a “significant backlash” by human workers against AI oversight. Some of the resistance will stem from an impulse for self-preservation; some will stem from people’s fear of losing their jobs to AI. But some will reflect a lack of understanding of how AI agents can help people be more productive, a gap compounded by firms’ poor communication and training on how AI agents can contribute to personal advancement and business goals. “Firms are entering a dangerous space where they are about to learn how much humans will tolerate AI oversight,” says Ackermann.

According to some experts, not wanting to take direction from AI is a perceptual fallacy that will fade with exposure. To be sure, AI already provides cues and direction in many aspects of our personal lives that we don’t think twice about: it offers driving directions, recommends what to watch or buy, and helps us respond to emails, to name a few. “The more people get exposed to AI agents, the more of an exchange it will be,” says Dan Petrossi, a senior client partner in the Technology practice at Korn Ferry.

Evidence of progress is already building. Peacock-Edwards cites one client whose call-center employees started performing better, and reported being more engaged and satisfied with their jobs, after AI agents were deployed as managers. And, according to the survey, trust in AI agents rises dramatically over time, from a low of 36% at firms just beginning to deploy them to as high as 95% at firms where they’ve been in use for a while. “Some people like that AI managers respond quickly, are available 24/7, and don’t forget things,” says Peacock-Edwards.

Learn more about Korn Ferry’s AI in the Workplace capabilities.