Trial and Error: The Rush (and Risk) Around ChatGPT

As both major tech firms and start-ups try to compete with ChatGPT, experts worry about imperfect products flooding the market.

In a matter of months, millions of people have come to rely on ChatGPT, even though the technology, by its own makers’ admission, was released while still in beta. Since then, tech firms large and small have come out with other intriguing AI products that are similarly imperfect.

Welcome to a longstanding practice in the tech field: releasing flawed products that are designed to improve as consumers use them. The approach is known as the “minimum viable product” model, and it has at least some experts worried. They contend that the potential risks of a new AI tool, one that can create content almost as well as a human can, are too great to be left to users to figure out. “There is a risk in this case,” says Chris Cantarella, global sector leader of the Software practice at Korn Ferry. “The tool is just so powerful.” Across many industries, firms are already relying on its less-than-perfect output to make key hiring, marketing, and operational decisions.

For years, companies have trialed products before releasing them, usually behind closed doors, through focus groups or beta testing. In the late 1990s, tech companies began popularizing the idea that when it comes to software, good is better than perfect: products can be released now and problems corrected later. As Cantarella notes, tech companies at the turn of the millennium risked losing funding if they waited too long to release, because development cycles moved so fast. That made a minimum viable product better than no product at all. “It was a way to create hype,” says Cantarella.

That’s still the case, except now, instead of testing a product’s viability by circulating it among a small number of experts, companies are releasing new AI tools to ever-larger audiences. At last count, more than 25 million people were using ChatGPT each day, and millions more are testing a variety of other new products. Even the creators of the latest AI chatbots acknowledge that the technology can return information that’s inaccurate, biased, or misleading. “The use of AI chatbots is already off and running,” says Barbara Rosen, global accounts lead for the Technology Market practice at Korn Ferry, “but many leaders don’t understand how it works and what the risks are to their firms.”

In response, experts say organizations need to put guardrails around the use of the AI tool until it’s better understood. Some sectors, such as finance, have already restricted access to certain features to protect client privacy. At the same time, those same experts say that other industries can’t afford to ignore the new tool, because competitors that deploy it could pull ahead. In the same fashion as the tech industry, firms need to test the AI tool through a process of trial and error. “The challenge,” says Cantarella, “will be to use it thoughtfully and responsibly.”


For more information, contact Korn Ferry’s Software and Platforms practice.