  • THE PROBLEM Studies show misusing AI can diminish critical thinking and creativity.

  • WHY IT MATTERS Human ingenuity is essential to stewarding the future.

  • THE SOLUTION Employ the technology in support of learning.

November 24, 2025

For the last few months, Siddhant, a 34-year-old services manager at Curve Royalty Systems, a royalty-management platform for record labels and music publishers, has adopted a new evening routine. After he puts his three-year-old son to bed, the Toronto resident spends the next four or more hours working with ChatGPT and various other generative AI apps.

Siddhant is not a coder or a tech guy; he holds a music-related master’s in fine arts. But in a short period of time, he’s created numerous digital apps to solve software and workflow problems for himself and his clients. “It’s surreal,” says the nocturnal citizen developer. 

Through the tools he’s built, Siddhant is saving enough time in his day that he can now, ironically enough, study coding. Not because he has any plans for a career change—it’s so he can better understand how to collaborate with AI to serve his clients. “You as a human need to be smart enough to know when it’s BSing,” Siddhant says, stressing the human role in this relationship: “If I had AI 10 years ago, without the real-world context I have now, I wouldn’t have known the questions to ask.”

When ChatGPT launched three years ago, it became the fastest-growing consumer application in history, reaching 1 million users in just five days. Now, it’s nearing 1 billion users, many of them knowledge workers using the technology to perform a variety of daily duties. Promising access to information, improved productivity, clearer communication, real-time insights, and almost instantaneous digital problem-solving, AI has, in a very short period, become integral to the workplace, including the C-suite. For both businesses and workers, the allure is obvious.

But there is more to this story. Adoption of ChatGPT and similar tools has been so quick that researchers are only beginning to understand the impact—on humans. A number of studies show that AI isn’t just rapidly transforming the ways we learn, create, and execute. It’s also changing how we think—just not, as it turns out, in the way most of us want. Indeed, some worry the technology may be eroding the very cognitive capacities that help Siddhant collaborate so successfully with ChatGPT: an internal reservoir of industry knowledge, robust critical-thinking skills, and the intuition to know when to call BS. For Amelia Haynes, a research manager at the Korn Ferry Institute, it all raises a key question as the AI revolution unfolds: “What are we trading for efficiency and proficiency?”

The Talented Intern

This is not the first time humans have questioned the impact of a new technology on cognition. When symbols were first used to capture language, for example, there were fears, dating as far back as ancient Greece, that writing would weaken long-term memory. When calculators were popularized, alarmists claimed we would lose our ability to conceptualize mathematics. And when Google debuted and smartphones hit the market, people fretted that we’d no longer remember anything.

To varying extents, all of these fears have been borne out. Researchers point to the so-called Google effect, the tendency to forget information that can easily be looked up online—which can have broad implications for learning and comprehension. But that is generally how evolution works. Something is gained, something is lost. While philosophers and pundits grapple with whether that trade-off is good, bad, or indifferent, time is the ultimate judge. “I used to have decent handwriting,” Siddhant says. No longer. “Certain skills will leave as I don’t use them. It’s a give and take.” Who knows, maybe the day will come when we don’t have to construct sentences at all.

Coined in 1955, the term artificial intelligence refers to the ability of a machine to emulate human cognition and decision-making. Advanced AI systems known as generative AI (gen AI) go beyond aggregating information. They can analyze large datasets and identify patterns, allowing them to create—from drafting text to making art to generating code.

Currently, we are in the age of hybrid intelligence, when digital learning systems still require human governance. Experts say, though, that the day is not far off—and may already be here, in some ways—when AI will be able to operate independently. The embodiment of this technological autonomy is known, of course, as the AI agent. “For the first act, AI was a helper,” says Bryan Ackermann, who leads Korn Ferry’s AI and Transformation sector. “We were calling it a talented intern.” But the intern kept getting smarter and smarter. Earlier this year, AI became contemplative, asking clarifying questions. It went, as Ackermann points out, from being a helper to a peer. And as AI pulls alongside the knowledge worker, responsibility is shifting from execution to oversight. In this role, the cognitive capacities most threatened by the technology, such as critical thinking and ingenuity, may become the most relevant.

Michael Gerlich, head of the Center for Strategic Corporate Foresight and Sustainability at SBS Swiss Business School, offers this analogy: Gen AI can help improve the candle so that it burns longer and brighter and is cheaper to make. But it won’t invent the light bulb. “The step from candle to light bulb is what the human does,” Gerlich says, “and that process is not linear.”

The Illusion of Learning

Siddhant’s vibe-coding sessions go something like this: He’ll conceptualize a problem. Then he’ll prompt ChatGPT to help create a digital solution. He will refine his prompt or give the program feedback when it doesn’t interpret his query as intended. Once he has the makings of a fix, he will ask other gen AI tools to explain how the code ChatGPT drafted works, so he can better comprehend the mechanisms at play. Man and machine will continue this process until Siddhant has exactly the answer and understanding he seeks.

Siddhant is a case study in best practices for using this powerful technology. He is careful not to outsource aspects of his work that are integral to his own competence. For instance, he recently authored an article for a trade publication offering advice for the next generation entering the music industry. He outlined the piece himself based on his own insights, then asked AI to smooth and proofread it. “Instead of allowing it to make me dumb, I’m now able to prioritize things that actually make me smarter,” he says.

But many people would use AI to write the entire article, along with any other text the job requires. And to summarize any necessary reading. And to analyze data. And to brainstorm ideas. And to tell them what, when, and how much to eat. “Ask people how and why they use it,” says Gerlich, “and many people say they use it for everything.”

When humans outsource aspects of thinking, it’s called cognitive off-loading—which isn’t inherently a negative thing. Most people agree that using AI to put meetings onto a calendar frees up mental capacity for more complex purposes. The problem arises when cognitive off-loading is deployed in place of developing proficiency. Neuroscience has shown that knowledge and skill are intertwined, explains Barbara Oakley, a professor of engineering at Oakland University and a global expert on learning. To think critically or creatively about something, we must understand it deeply, which requires having key information stored internally.

The Thinking Person’s Guide to Using Gen AI

Want to avoid digital amnesia? Here are steps and prompts that experts offer for using gen AI to strengthen learning.

Use AI to interrogate your understanding of a topic, instead of to write the report.

Prompt:

“Here’s how I understand [insert topic]. Can you evaluate my explanation and ask me questions that would help me deepen my understanding?”

Answer questions yourself before having ChatGPT answer them.

Prompt:

“Can you compare my response to the correct answer and explain where I went wrong or right?”

Go to the source. Once you've read gen AI's summary, read the primary source.

Prompt:

“I read a summary of [insert topic]. Can you help me find primary sources and highlight any complexities I might miss at a surface level?”

Let’s say you’ve just learned a new system at work. The next day, you need to explain an aspect of that system in an email to a colleague. The act of recalling and articulating the relevant processes reinforces that understanding in your brain. Down the road, when a function isn’t performing optimally, you’ll be able to conceptualize the problem and collaborate with AI on a solution. But if ChatGPT writes that email, you bypass the cognitive labor that’s necessary for deep knowledge to take root. AI can be a tremendous learning support, experts say, but how and when it’s deployed is crucial.

While the first studies of gen AI’s impact on cognitive function relied on self-reporting, neuroscientists are now using EEGs to show what’s taking place inside the brain. A study out of the MIT Media Lab, titled “Your Brain on ChatGPT,” found that those who used the app to write an essay displayed significantly less neural activity and weaker connectivity patterns than those who performed the task using just their brains or a search engine. (Remember, neurons that fire together, wire together.) The brain-only group showed stronger activation of networks linked to creativity, memory, and semantic processing. Anecdotally, they also took more ownership of their work and could better recall it. And, of course, their writing was more original.

A plethora of other studies have produced similarly bleak findings, showing that over- or misuse of gen-AI tools blunts critical thinking, motivation, and satisfaction. Perhaps most concerning is that AI creates the illusion of learning: users are unaware of the gaps in their own knowledge, a phenomenon known as digital amnesia, which fosters a false sense of competence. “We’re seeing that people, like professors, who previously said they were not, are actually offloading,” Gerlich says.

Collective Liability, Collective Response

Who is most vulnerable to being seduced by ChatGPT’s allure? Those who are underprepared, overwhelmed, and stressed out. Particularly prone to that trifecta are older professionals, who may rely on AI to compensate for cognitive decline, and young workers, who are still developing their learning skills. On the other hand, those who are calm, capable, and confident are more likely to delegate tasks with discretion, a finding that holds true across generations. Like so many of the ills that plague modern corporations and the people who work for them, the risks AI poses to human functioning are linked to onerous demands.

How leaders are approaching AI, Korn Ferry’s Ackermann says, generally falls into one of three categories: 1) those who lean into the efficiency play; 2) those who are frozen; and 3) those who are thoughtfully transforming. Leaders who want to avoid a future workforce unable to think outside the box, he says, must train employees in best practices and set manageable workloads, which, by all accounts, isn’t an easy task in today’s world. AI is advancing so quickly that it’s hard to keep up. Meanwhile, there is tremendous pressure to do more with less in half the time. “There is a lot of FOMO going on,” Ackermann says.

Ultimately, AI’s impact on human cognition is a collective challenge that requires a collective response. It requires investment from educators, tech developers, business leaders, the professions, and individuals. Companies—and leaders—that fail to enter this new frontier with foresight and empathy may find themselves ill-equipped to steward AI into the future. (Unfortunately, the skills of leadership are not readily outsourced to automation.)

The next act of this story may be defined not by the emergence of AI agents, but by how well man can integrate machine into daily work life. Leaders can employ this technology to make workers produce faster but shallower work—or to expand their cognitive potential in all directions. That may just turn out to lead to a place beyond the light bulb. “At the end of the day,” Korn Ferry’s Haynes cautions, “it’s all about how you engage with it.”
