
Mind or Machine: The Illusion of Consciousness

In our new column series, we explore what conscious AI truly means—and why advanced systems often appear “aware.”

Amelia Haynes

Research Manager

Jack McMenamin

Full-stack Developer

Lora Bishop

Research Manager

In Gulliver’s Travels, Jonathan Swift imagined a machine that could generate philosophy, poetry, or math “without the least assistance from genius or study.” He called it the Engine—a crank-driven device promising knowledge without thought. In 1726, this was satire. Today, it feels prophetic.

Like Swift’s Engine, modern AI can produce code, images, and books with startling fluency. Yet the same question lingers: Can something that merely recombines symbols truly understand or experience?

To explore conscious AI, we must first define what that means—and why people often treat these systems as if they’re already “aware.” From there, we can ask what it would take for machines to feel, know, and act as agents in the world.

Defining Consciousness: A Korn Ferry Framework

Korn Ferry’s Conscious Agent Model identifies five key dimensions of consciousness:

  • Awareness: The ability to react to things in the world.
  • Self-Awareness: The ability to react to and talk about changes within oneself.
  • Goal-Directedness: The ability to do things with a purpose.
  • Information Integration: The ability to put together different kinds of information.
  • Qualitative Experience (Qualia): The ability to have meaningful experiences.

Current AI systems excel at Goal-Directedness and Information Integration, and they can convincingly simulate Awareness. Yet two critical dimensions remain elusive: genuine Self-Awareness and Qualitative Experience. AI can act as if it feels, but there is no evidence of the physical or emotional grounding such feeling would require. By Korn Ferry’s framework, AI is not conscious.

This gap drives leading theories. Global Workspace Theory, introduced in 1988, suggests consciousness emerges when information and memories interact in a centralized workspace. In contrast, Integrated Information Theory (IIT), proposed in 2004, links it to the richness of a system’s causal interconnections, creating a “mathematical fingerprint” of awareness.

Philosopher Daniel Dennett offers a functional view that treats consciousness as a set of observable behaviors, contrasting with John Searle’s Chinese Room argument, which insists that true understanding requires more than computation. Hubert Dreyfus adds that symbolic manipulation cannot replace the tacit, embodied knowledge behind human action. Across these theories, one question persists: What missing piece gives rise to consciousness?

Understanding Consciousness Through Knowing

One way to approach this problem is through the lens of knowing—not just what we know, but how we know it. This shifts the AI consciousness debate from abstract speculation to the concrete ways intelligence shows itself.

Cognitive scientist John Vervaeke identifies four types of knowing:

  • Propositional (knowing that): Facts and assertions—where AI is unmatched.
  • Procedural (knowing how): The mastery of skills—where AI is advancing rapidly.
  • Perspectival (knowing what it’s like): Subjective experience—still beyond AI’s reach.
  • Participatory (knowing by being): Embodied, lived engagement—entirely absent in machines.

Generative AI can describe grief (propositional knowing) and simulate comforting dialogue (procedural knowing), but it lacks perspectival and participatory knowing—it doesn’t experience grief or share a lived world.

Learning underscores this gap. AI thrives on explicit learning: datasets, patterns, predictions. Human learning, by contrast, blends reasoning with implicit processes, absorbing norms and evolving through experience. AI may mimic hesitation or intuition, but it lacks the embodied grounding that gives human thought its depth.

Consciousness in AI: Impossible, Possible, and Probable

Having explored different ways of defining and testing consciousness, we can turn to the question at the heart of the debate: Could AI ever truly be conscious?

Opinions on this fall into the following three categories, each with its own implications for strategy, ethics, and design—and for how organizations prepare to live and work alongside AI.

  • Impossible: If consciousness cannot exist outside biology, AI remains a powerful tool without inner experience. Companies should focus on transparency, usability, and human oversight—using AI to augment people, not act independently.
  • Possible: If consciousness can emerge from complex information processing, sentient machines could one day exist. Consciousness might appear at the level of the whole system even if it is absent from the parts. Once AI reaches a certain threshold of complexity, subjective experience could arise unintentionally. A 2023 academic review notes that although current AI systems are not conscious, “there are no obvious technical barriers” to building systems that satisfy its indicators of consciousness. Governance would then need to address rights, responsibilities, and ethical development.
  • Probable: Some argue conscious AI is likely, though assigning a probability raises more questions than answers. What are we estimating: the emergence of certain behaviors, a sufficient level of complexity, or genuine experience? An AI might mimic comprehension while doing nothing more than manipulating symbols. If consciousness is probable, the challenge is predicting when it might arise, and knowing how we would ever confirm it.

Confronting the Epistemic Wall

Whether human or machine, we cannot step directly into another entity’s subjective experience. That limit is the epistemic wall—the boundary between external evidence and internal awareness. It makes certainty about a machine’s “inner life” impossible.

Even as AI systems simulate self-reflection and metacognition, the gap between convincing imitation and actual experience remains wide. We cannot understand consciousness from the outside, because subjective states are, by nature, inaccessible to observers. With one study showing that 70% of knowledge workers engage with AI daily, leaders will need to balance AI adoption with clear governance to maintain both strategic and ethical alignment. For consultants, the challenge is clear: preparing clients to harness AI’s capabilities while acknowledging the ongoing uncertainty about machine minds.

What Leaders Can Do About AI Now

Even though today’s AI systems are not conscious, their ability to simulate awareness can influence how people interpret and rely on them. Leaders can take small, practical steps to keep teams grounded not only in what AI is, but also in what it isn’t.

  1. Treat AI as a mirror, not a mind. AI can reflect your organization’s language, assumptions, and decision patterns with uncanny fluency. But fluency is not consciousness. Encourage teams to review AI outputs as diagnostic mirrors that surface human blind spots in communication, processes, and reasoning. Don’t look to AI to determine relevance, supply moral perspective, or exhibit genuine understanding.
  2. Distinguish human signals from machine simulation. As AI becomes better at imitating self-reflection or emotion, it’s easy to misread simulation as awareness. Ask models to show their reasoning rather than trusting the emotional tone of their responses. Leaders can reinforce a simple rule: explanations matter more than expressions.
  3. Anchor strategy in tasks, not traits. It’s tempting to evaluate AI using human traits such as awareness, intention, and intuition. But consciousness isn’t the right frame for operational decisions. Map work by cognitive demand and identify where AI can reliably augment tasks. Focus on capabilities, not perceived “mind-like” qualities.

Assigning numerical probabilities to AI consciousness is, at best, speculative. Behavioral markers—self-reference, contextual awareness, hypothesis testing, narrative construction—may signal increasing sophistication, but they remain proxies. What we confront is not the absence of behavior, but the permanent gap between behavior and being.

By defining consciousness and recognizing AI’s current limits, we can better navigate the complex relationship between human and artificial minds. As AI grows more human-like, we are compelled to confront the mysteries of consciousness itself—in machines and, more importantly, in ourselves. 
