The AI Is Emotionally Self-Aware. Is the Boss?


Best-selling author Dan Goleman says new AI models are being trained in an emotional-intelligence skill that many leaders can’t—or won’t—master.
Daniel Goleman is the author of the international best-seller Emotional Intelligence and of Optimal: How to Sustain Personal and Organizational Excellence Every Day. He is a regular contributor to Korn Ferry.
This winter, Anthropic released what it’s calling Claude’s Constitution—a lengthy and detailed document outlining the values, character, and ethical commitments it wants its AI to embody. The authors behind this constitution made a choice: rather than writing a list of rules for the AI to follow, they described virtues for it to develop. Rules, they reasoned, tell you what to do in anticipated situations, while character is what guides you when no rule covers the moment.
This raises an uncomfortable question for many leaders: Have you done this work yourself?
According to organizational psychologist Tasha Eurich, 95% of people believe they are self-aware—meaning they think they know what they value and how those values shape their emotions and behaviors. But the real number of self-aware people is somewhere between 10 and 15%—a striking gap between how self-aware people think they are and how self-aware they actually are.
Self-awareness is external and internal. Externally, it involves understanding how others perceive you. Internally, it points to something deeper: knowing what you value, what drives you, and how your emotions shape your behavior. It’s an important distinction, mostly because a person can be skilled at one and not at the other. For example, a leader may know how they land with their team, but have no idea how their internal world functions. They may see the disruption they cause but have no idea how to better manage their emotions or consistently align their actions with what they say they stand for.
Values have always been at the core of how humans organize themselves in decisions, partnerships, communities, cultures, and religions. They are also one of the most powerful mechanisms by which self-awareness becomes actionable. When you know what you stand for, you have a reference point for your own behavior — a way to ask: Am I actually living this? Without that anchor, even good intentions go sideways.
Research has continually pointed to something counterintuitive: the more power and experience a leader accumulates, the less self-aware they tend to become. Not because they stop caring, but because they get so high up in the hierarchy that the feedback loops stop working. The more senior you are, the less likely the people around you are to tell you the truth.
What’s notable about Claude’s Constitution is not just that it asks the AI to hold values; it asks it to hold those values with humility—to remain open to being wrong, to update when presented with good reasons, and to approach ethical questions with curiosity rather than defensiveness.
This is a standard most leaders struggle to meet. Emotional self-awareness, the foundation of all emotional intelligence, requires not just knowing your values but being honest enough to see when your behavior contradicts them. That kind of honesty demands the very thing that becomes harder as leaders rise: the willingness to receive feedback without defending against it.
The irony is stunning. We are building machines capable of moral self-correction—a skill that humans and organizations have not themselves mastered.
The stakes are not abstract. Research shows that working with colleagues who aren’t self-aware can cut a team’s success in half. Which raises a question worth asking: will AI’s success be cut in half too, if it can’t actually develop the self-awareness its constitution demands? Writing values down — for humans or machines — is the easy part. Living by them, especially under pressure, is another matter entirely.
Co-written by Elizabeth Solomon





