Does Your Work AI Actually Think? It Doesn’t Matter

Best-selling author Dan Goleman argues that the bigger question is how humans respond when machines appear thoughtful.

March 02, 2026

Daniel Goleman is the author of the international best-seller Emotional Intelligence and of Optimal: How to Sustain Personal and Organizational Excellence Every Day. He is a regular contributor to Korn Ferry.

In Korn Ferry’s recent research, Mind or Machine: The Illusion of Consciousness, experts explore a fundamental confusion. As AI systems become more fluent, more conversational, and more capable, we begin to treat them as if they possess something they don't have—consciousness.

The piece points out that consciousness isn’t defined by output or performance, but by awareness, self-awareness, goal-directedness, information integration, and qualitative (or subjective) experience. In other words, while today’s AI systems can simulate certain aspects of intelligence, they do not experience, intend, understand, or make meaning in the way humans do. As the author puts it, they can produce “startling fluency,” but fall short of nuanced and emotional understanding.

As conversations about Artificial General Intelligence, or AGI, accelerate, this distinction becomes increasingly important. AGI is often described as a future form of AI that thinks like a human. But that framing creates more confusion than clarity. It risks tricking us into believing that humans and machines are ultimately the same thing.

A simpler way to understand AGI is this: It refers to systems that can perform across many different tasks and domains without being retrained each time. These systems can write, analyze, summarize, plan, and generate ideas across contexts. What’s more, they are expected to adapt to situations they haven’t even encountered before.

As “human” as all of this may sound, what AGI does not necessarily mimic is consciousness. And when it comes to consciousness, debates tend to fall into one of three camps: those who believe it’s impossible without biology, those who believe it could emerge from sufficient complexity, and those who believe it’s likely that machines will become conscious, even though we aren’t sure how we would ever recognize it.

What all these perspectives share is an agreement that whether human or machine, subjective experience is private. We infer consciousness in other people not because we can completely verify it, but because we recognize ourselves in them. With AI, that inference becomes unreliable. As Korn Ferry states, “As AI systems may simulate self-reflection and metacognition, the gap between convincing imitation and actual experience remains wide.”

In other words, that gap becomes harder and harder to detect. A system may describe its “thinking,” explain its “reasoning,” or reference its own outputs, without having any inner awareness at all. From the outside, those distinctions are nearly impossible to confirm.

This is why the question of whether AI is conscious, while fascinating, is not the most urgent one for leaders.

The more immediate challenge is how humans respond when machines appear thoughtful and self-directed. When something speaks clearly and confidently, most humans instinctively assign intelligence, credibility, and intention. The issue then becomes less about whether machines are becoming conscious, and more about how humans can remain mindful not to respond as if they definitively are.

Emotional intelligence in the AI age is not about empathizing with machines or humanizing technology, but about regulating how we respond to systems that feel authoritative while lacking accountability. One core aspect of self-awareness is knowing our own values and noticing when our behaviors do or do not align with them. When they don’t, we feel emotions such as shame, sadness, regret, or anxiety. These feelings serve an important purpose, signaling the need to pause, reconsider, or show up differently.

Whether or not AGI has “arrived” is, in many ways, beside the point. What is already here is a new relationship between humans and machines, one in which fluency and generality can easily be mistaken for wisdom.

In that environment, emotional intelligence is not a “soft skill,” but a stabilizing force. It’s what allows leaders to work with powerful tools without surrendering discernment, meaning, or responsibility. Not because machines are becoming more like us, but because it has never been more important that we don’t become more like them.

As Korn Ferry puts it, “AI can act as if it feels, but no evidence shows physical or emotional grounding.” The article goes on to suggest that leaders treat AI as a mirror rather than a mind. These systems reflect our language, assumptions, and patterns back to us with remarkable clarity. If we rely on them in this way, rather than engaging them as true thought partners in decisions that require a more refined form of consciousness, we may get the very best out of these technological advancements.

Co-written by Elizabeth Solomon