
November 24, 2025

AI, at its core, is supposed to be a machine. No emotions. No humor. No empathy. And that’s pretty much how companies and workers are seeing it as they gradually adopt AI tools. The only problem: AI is starting to develop its own “human” qualities—and firms must adapt quickly, before it’s too late.

A new study finds that chatbots and AI agents, not unlike humans, respond better to charm and sweet talk. Researchers at the University of Pennsylvania found that prompts using psychological persuasion techniques can elicit better responses from large language models. Conversely, the study found that the wrong words can steer AI in a detrimental direction, creating potential risks for companies.

This is the latest in a body of research demonstrating that how people “talk” to AI, politely or rudely, shapes the technology’s response as much as the substance of the prompt itself. That means firms and workers need to be acutely aware of how they approach and develop this once-in-a-generation technology. Nearly one-third of employees report regularly using ChatGPT for work tasks, and 80 percent of firms plan to adopt AI chatbots, frequently the first point of contact for customers and clients, by the end of this year. “There could certainly be some unexpected downsides,” says Nigel Melville, a professor of technology and operations at the University of Michigan. Developing chatbots that are too vulnerable to human persuasion, or too stubborn, can backfire, he adds. “Once it’s broken, it’s hard to rebuild.”

Bryan Ackermann, head of AI strategy and transformation at Korn Ferry, says that as leaders increasingly deploy chatbots and agents both internally and externally, they need to think differently about the algorithms and safeguards used to develop AI’s “personality.” “Context engineering is critical for generative AI and agentic AI to be able to effectively work as part of the organization,” says Ackermann. Already, firms are dealing with the “spiral of misery” customers experience when chatbots don’t deliver what they want.

There is also the risk of “jailbreaking”: hackers using sweet talk to circumvent security restrictions and gain access to internal systems. Researchers found they could manipulate ChatGPT into violating safeguards designed to keep it from providing confidential, sensitive, or otherwise prohibited information. “The same techniques that allow users to get a really good answer from AI are also able to get it to return a forbidden one,” says Michael Welch, a Korn Ferry senior client partner who specializes in AI and digital transformation.

Welch says the challenge for firms is to develop chatbots and agents that are intuitive and adapt to user inputs on the fly, without being so pliable that they can jump the guardrails governing their interactions. That’s part of why developers of large language models employ sizable teams of engineers to continually refine and optimize personality and behavior models. In the end, says Welch, humans remain an essential part of training AI agents and chatbots to recognize and block attempts at manipulation. “AI can only learn from people, so how we approach it is how it will behave towards us,” he says.
