Is It Real—or Is It ChatGPT?

Experts worry managers may not be able to judge the performance of a worker who’s quietly using the new AI tool.

Timothy’s work reports weren’t bad, but they weren’t that good either—until recently. Over the last few months, his manager began noticing a decided improvement. Timothy’s reports were more detailed, more persuasive, and more original. They were also written entirely by a chatbot.

Since its launch late last year, ChatGPT, an artificial-intelligence bot that can create original content that’s all but indistinguishable from that of a human being, has raised thorny ethical questions in numerous quarters—from school districts, where educators worry about cheating, to the political arena, where experts have raised concerns about automated lobbying. The business world hasn’t been a focus of these debates, but its leaders are also uneasy about work product created by AI.

Sure, experts say, ChatGPT could dramatically improve productivity, but it could also raise questions about the authenticity of communications. Using AI to automate marketing messages or social media posts is one thing, but what about using it to write a speech for a CEO? What happens when an employee or independent contractor relies heavily on ChatGPT for a project? Are they misrepresenting themselves and their intellectual property?

“It is critically important right now for leaders to be authentic and transparent in how they present themselves to customers, employees, and investors,” says Richard Marshall, global managing director of the Corporate Affairs and Investor Relations Center of Expertise at Korn Ferry. “This works against all of that.”

In the world of remote work and Zoom, Marshall says, employees can delegate a lot of work to ChatGPT and pass it off as their own. That’s one reason some companies no longer require cover letters from applicants. Others are already thinking about how ChatGPT will affect their evaluations of employee performance for certain positions.

ChatGPT (which was created by OpenAI) is far more advanced than any other AI tool on the market. It can write business plans, produce detailed reports on all kinds of topics, and create original content. Even more impressive, it can recognize factual inaccuracies in queries, remember past conversations with users, and generate creative, spontaneous, emotional responses. 

It should be said that business leaders are more excited than concerned about how ChatGPT and other advanced AI can potentially improve performance. “It’s a classic example of humans and technology working closer together,” says Esther Colwill, president of global technology, communications, and professional services at Korn Ferry. She lists some of the many ways ChatGPT can improve workplace processes: it can help automate non-value-added tasks; free up people for more creative, innovative work; improve customer service; aid in training and development; help employees respond to emails and draft basic documents—and more. “It’s about process improvement,” she says, “and leveraging whatever tools are at an employee’s disposal to create an outcome that meets customer need.”

Marshall and Colwill agree that the introduction of ChatGPT and similar AI into the workplace increases the importance of individual ownership and accountability on the part of employees and leaders. From a communications perspective, Marshall says, ChatGPT would be hard-pressed to capture the nuances in a CEO’s voice or understand the tone or point of view needed in a speech or piece of messaging. Still, he says, firms will need to remind employees—as well as job candidates—that truthful communication is expected. “ChatGPT is a valuable tool,” he says, “but it has to be used properly.”