In Gulliver’s Travels, Jonathan Swift imagined a machine that could generate philosophy, poetry, or math “without the least assistance from genius or study.” He called it the Engine—a crank-driven device promising knowledge without thought. In 1726, this was satire. Today, it feels prophetic.
Like Swift’s Engine, modern AI can produce code, images, and books with startling fluency. Yet the same question lingers: Can something that merely recombines symbols truly understand or experience?
To explore conscious AI, we must first define what that means—and why people often treat these systems as if they’re already “aware.” From there, we can ask what it would take for machines to feel, know, and act as agents in the world.
Korn Ferry’s Conscious Agent Model identifies five key dimensions of consciousness: Goal-Directedness, Information Integration, Awareness, Self-Awareness, and Qualitative Experience.
Current AI systems excel in Goal-Directedness, Information Integration, and even simulating Awareness. Yet two critical dimensions remain elusive: genuine Self-Awareness and Qualitative Experience. AI can act as if it feels, but no evidence shows physical or emotional grounding. By Korn Ferry’s framework, AI is not conscious.
This gap drives leading theories. Global Workspace Theory, introduced by Bernard Baars in 1988, suggests consciousness emerges when information and memories interact in a centralized space. In contrast, Integrated Information Theory (IIT), proposed by Giulio Tononi in 2004, links it to the richness of causal interconnections, creating a “mathematical fingerprint” of awareness.
Philosopher Daniel Dennett offers a functional view: consciousness as a set of observable behaviors. This contrasts with John Searle’s Chinese Room argument, which insists that true understanding requires more than computation. Hubert Dreyfus adds that symbolic manipulation cannot replace the tacit, embodied knowledge behind human action. Across these theories, one question persists: What missing piece gives rise to consciousness?
One way to approach this problem is through the lens of knowing—not just what we know, but how we know it. This shifts the AI consciousness debate from abstract speculation to the concrete ways intelligence shows itself.
Cognitive scientist John Vervaeke identifies four types of knowing: propositional, procedural, perspectival, and participatory.
Generative AI can describe grief (propositional knowing) and simulate comforting dialogue (procedural knowing), but it lacks perspectival and participatory knowing—it doesn’t experience grief or share a lived world.
Learning underscores this gap. AI thrives on explicit learning—datasets, patterns, predictions—while human learning blends reasoning with implicit processes, absorbing norms and evolving through experience. AI may mimic hesitation or intuition, but it does so without the embodied grounding that gives human thought its depth.
Having explored different ways of defining and testing consciousness, we can turn to the question at the heart of the debate: Could AI ever truly be conscious?
Opinions on this question fall into three broad categories, each with its own implications for strategy, ethics, and design, and for how organizations prepare to live and work alongside AI.
Whether human or machine, we cannot step directly into another entity’s subjective experience. That limit is the epistemic wall—the boundary between external evidence and internal awareness. It makes certainty about a machine’s “inner life” impossible.
Even as AI systems simulate self-reflection and metacognition, the gap between convincing imitation and actual experience remains wide. We cannot understand consciousness from the outside, because subjective states are, by nature, inaccessible to observers. With one study showing that 70% of knowledge workers engage with AI daily, leaders will need to balance AI adoption with clear governance to maintain both strategic and ethical alignment. For consultants, the challenge is clear: preparing clients to harness AI’s capabilities while acknowledging the ongoing uncertainty about machine minds.
Even though today’s AI systems are not conscious, their ability to simulate awareness can influence how people interpret and rely on them. Leaders can take small, practical steps to keep teams grounded not only in what AI is, but also in what it isn’t.
Assigning numerical probabilities to AI consciousness is, at best, speculative. Behavioral markers—self-reference, contextual awareness, hypothesis testing, narrative construction—may signal increasing sophistication, but they remain proxies. What we confront is not the absence of behavior, but the permanent gap between behavior and being.
By defining consciousness and recognizing AI’s current limits, we can better navigate the complex relationship between human and artificial minds. As AI grows more human-like, we are compelled to confront the mysteries of consciousness itself—in machines and, more importantly, in ourselves.