I Swear, I Didn’t Use AI
Workers who disclose that they use AI to complete tasks risk being seen as replaceable by that same AI.

NOTE: While this transcript has been reviewed, it may contain errors. Please review the episode audio before quoting from this transcript.
Jill Wiltfong:
Let people know when you're using AI—or keep them guessing?
Bryan Ackermann:
Pandora's box is open. We tossed it down the hallway and stomped on it a few times. Organizations need to lean in and give guidance.
Terminator clip:
You're really real.
Tamara Rodman:
If an employee can turn out a better work product faster by using AI—fantastic.
Jill Wiltfong:
But should firms disclose when they use AI?
Dr. Anand Rao:
It’s the promise versus the reality. No matter what, you want to make your AI safe for the people.
Jill Wiltfong:
I swear I didn’t use AI.
Hi, I’m Jill Wiltfong, Chief Marketing Officer for Korn Ferry, and this is Briefings—our deep dive into topics that corporate leaders need to care about.
Time to fess up. You’ve been using AI to craft your company memos, haven’t you? Don’t worry, your secret’s safe with me. In fact, three out of four knowledge workers say they’re using AI to get work done, so you’re in pretty good company.
There’s just one wrinkle: a new study found that workers who disclose they use AI to complete tasks are trusted less by colleagues.
Welcome to the era of the great AI paradox. On the one hand, everyone’s expected to be working with the latest AI tools so you don’t fall behind and potentially lose your job. But on the other hand, if you rely on those tools too much, your firm may decide your role can be replaced entirely by AI.
That’s not to mention the potential legal and reputational minefield that can open up when it comes to firms using AI with customers and clients.
So, what do you do? Let people know when you’re using AI—or keep them guessing? That’s what we’ll explore today. Because the truth is, we’re all thinking twice now whenever someone says, “I swear I didn’t use AI.”
Before we start, if you’re watching us on YouTube, please be sure to like, subscribe, and leave a comment to let us know your thoughts on this topic.
With us now are Bryan Ackermann, Korn Ferry’s Head of AI Strategy and Transformation, and Tamara Rodman, a Korn Ferry Senior Client Partner in its Culture, Change, and Communications Practice. Both have extensive experience working with leaders on their communications strategies when it comes to addressing the AI elephant in the room. So thank you both for being here today.
Bryan Ackermann:
Thank you.
Jill Wiltfong:
I’m sure many of our viewers aren’t so sure about how using AI—or disclosing AI use—may help or hurt their careers. But before we jump into that, let’s see what guidance firms are giving on AI.
Turns out, not much. One study found only 30% of employees say their organization has either general guidelines or formal policies for using AI.
Jill Wiltfong:
Bryan, does that surprise you?
Bryan Ackermann:
It does a little bit. If I were to hazard a guess, I would think that perhaps they’re hidden in an IT security policy document, deep in the heart of somebody’s intranet—on page 87, right after the laptop login policy. So I would imagine that there’s more official guidance than there is support in helping employees actually use AI.
Jill Wiltfong:
Let’s say more firms did have policies. Would that be opening a can of worms—or is it a necessity now?
Bryan Ackermann:
We’re at the point where, yeah, Pandora’s box is open. Someone ripped the cover off and tossed it down the hallway. I think organizations need to lean in and give guidance, as opposed to waiting for somebody, inevitably, to challenge how an individual in the organization is using it.
Jill Wiltfong:
Let’s switch a little bit from the organization to the individual. Tamara, you mentioned an example of a senior leader who uses AI to write all their messages and another who uses it for LinkedIn thought leadership. And in both cases, it was pretty clear it was AI. What do you advise leaders to do when it comes to this?
Tamara Rodman:
So the example you mentioned, the reason it was blindingly obvious that it was AI-generated was because it was actually better than how they usually communicate. It wasn’t in their voice.
I don’t think there’s anything wrong with leaders using AI to improve communication. Just make sure it still sounds like you. If you find yourself thinking, “I’d never say that,” others will notice too.
Jill Wiltfong:
That’s Arnold Schwarzenegger playing an AI protector in Terminator 2: Judgment Day. Fact may now be mirroring fiction, perhaps in a little less dramatic fashion. As we mentioned, three in four knowledge workers say they use AI. But Tamara, employees admitting to using AI might be even riskier than leaders doing so, since managers could wonder if their jobs are even needed. What’s your advice for employees who want to show they’re following their leaders’ directives to embrace AI, but who also don’t want to lose their jobs? Do they tell their bosses that they’re using ChatGPT? Do they not? What is your guidance?
Tamara Rodman:
If an employee can turn out a better work product faster using AI—fantastic. But the key is not outsourcing higher-level thinking. Let AI analyze the data, sure—but the strategic decisions? That’s where the human brain comes in. That’s where you shine. I would not rely on AI for that. But to the extent that AI can help you get to those informed recommendations faster, absolutely, own up to it.
Jill Wiltfong:
There's still, of course, roughly a quarter of knowledge workers who aren't using AI in their workflow, despite all the encouragement that they're getting. Bryan, what do you think leaders need to do to communicate better or maybe just differently in order to get even stronger AI adoption?
Bryan Ackermann:
So much of this uncertainty, and in a lot of cases fear, around how the use of AI is going to be perceived comes right back to the organization not doing a good enough job of explaining itself: laying out its goals and aspirations when it comes to AI, what it expects of people, and, even if imperfectly, how it expects this transition and transformation to affect jobs. Organizations are not doing a good enough job of communicating that. Doing so, even if the message is imperfect or evolving, will make it a two-way conversation.
Jill Wiltfong:
Very nice. I do wanna end by returning to the issue of AI policy. Tamara, even if it is high time companies had policies around AI use, that's easier said than done. How should leaders begin to think about framing those usage guidelines?
Tamara Rodman:
Well, actually, Jill, what you just said is key. Think in terms of guidelines versus ironclad policies. I mean, it is impossible to account for every potential scenario. So I would say, instead think about the mindset you want to instill in people as it relates to using AI. I would say most of all, make sure that you would be proud to have your name associated to whatever that output looks like.
Jill Wiltfong:
Very good. Tamara, Bryan, thanks for the discussion today. Really helpful to get your opinions and to get all of our minds kind of wrapped around this. Thank you. We've looked at how companies should address AI usage internally, but how transparent do they need to be when it comes to using AI with clients and customers? We'll discuss after the break. Stay with us.
Rupak Bhattacharya:
Hi, and welcome to This Week in Leadership. I'm Rupak Bhattacharya, and here's a quick look at what else is happening in business.
Unknown Narrator:
University grads will have to deal with one of the worst job markets in years.
Rupak Bhattacharya:
Hiring of employees with over 10 years of experience rose by 27% from 2023 to 2024. Meanwhile, hiring for new grads is down 25%. Entry-level and junior roles are still being posted, but experts worry that, with more experienced workers snagging those jobs, tech firms may face long-term issues from not having younger workers ready to take over.
Unknown Narrator:
How many direct reports should you have?
Rupak Bhattacharya:
Two years ago, the average number of direct reports per manager was just five. But now, some firms are having managers oversee 20 or more people. Cost cutting is, of course, one reason for this, but one theory also says that fewer managers will enable faster decision making and better strategic alignment with the C-suite's vision.
Unknown Narrator:
Gen Z's pivot from college to blue collar jobs.
Rupak Bhattacharya:
Two out of five Gen Zers are currently pursuing blue collar employment, including 37% of those with bachelor's degrees. Though there is certainly a need for blue collar roles to be filled, experts say the shift raises concerns for a corporate world looking for its own future leaders. For more insights from This Week in Leadership, head to kornferry.com/insights. Now, back to Jill in our episode, I Swear I Didn't Use AI.
Jill Wiltfong:
We're back. And now let's step away from the issue of whether employees should disclose AI use to their bosses and ask whether companies should or shouldn't tell their customers or clients about their AI use. With us now is Anand Rao, Distinguished Service Professor of Applied Data Science and AI in the Heinz College of Information Systems and Public Policy at Carnegie Mellon University.
Before becoming a professor, he also spent multiple decades advising large companies on AI usage and innovation. Anand, thanks for joining me.
Dr. Anand Rao:
Great to be here. Thank you.
Jill Wiltfong:
That last clip featured Michael Douglas in the movie Falling Down, speaking the old adage that the customer is always right.
Well, over 60% of consumers believe that companies should disclose their AI usage. So tell me, are the customers right in this case? Should firms disclose when they use AI?
Dr. Anand Rao:
I would say, in this particular case, yes, the customers are right. I think one of the biggest challenges we face with AI is trust: people trusting what the machine, what the AI, says. And in order to build that trust, I would say you definitely have to disclose.
Jill Wiltfong:
Okay, let's talk a little bit about that disclosure question. You have said that even while disclosing AI use, companies need to be aware of over-disclosing. And you've given the example of a firm whose marketing team showcased a cutting-edge AI demo that was beyond the firm's actual AI capability at the time, which left customers a little disappointed. So I'm curious, where should leaders draw the line when it comes to how much to disclose about their AI usage?
Dr. Anand Rao:
I think I would go back to the notion of trust here. It is the promise versus the reality, right? Promising too much: I know that, in marketing, you want to make it exciting for people to actually use the product. And of course we all know there is huge competition among firms to be slightly better than the other team, and to project being far better than the other team. But at the end of the day, the customers are using these tools, so they'll find out. So it's the hype versus the reality, the promise versus the reality. I would err on the side of presenting what we can actually do as the reality.
Jill Wiltfong:
That's a scene from the movie "I, Robot" where Will Smith discusses the three laws of robotics made famous by science fiction author Isaac Asimov. In the real world, of course, we're now seeing more and more AI regulations coming into effect. There's the EU AI Act, which penalizes non-compliant companies with up to 7% of global revenue. And in the US, there's a growing patchwork of state AI laws now coming into place. Anand, you're not a lawyer, but given your extensive experience in the AI space, how do you recommend leaders balance not running afoul of all these emerging laws while simultaneously accelerating AI adoption within their firms?
Dr. Anand Rao:
I think our research shows there are probably around 900-plus pieces of AI legislation in the states over the past three to four years. So what do we do, and how do we comply with them, especially if we want to operate nationally, like most companies? I would set the baseline as AI safety and security. No matter what, whether there is a regulation or not, you want to make your AI safe for people. Then I would put transparency and disclosure, as we have been discussing, as number two. That is required by law in some cases, like the EU AI Act and a couple of other places as well. But I would hold to disclosure as a tenet even if there is no law.
Jill Wiltfong:
You have also said importantly that there are some industries, like banking, that do need to be fairly restrictive around their AI usage because of issues around bias and fairness and things like that. Talk to me a little bit more about that.
Dr. Anand Rao:
There is impact to the individual. Obviously, checking creditworthiness and so on is very much something AI could do, but you need to be careful there in terms of inclusivity, fairness, and bias. That's why, when it comes to financial services, the regulators, as well as the banks and financial institutions, are very careful about making sure the AI they're putting out is not learning in real time. It can be tested and validated in the development stage, and only then released.
Jill Wiltfong:
Really enjoyed having you on here today to talk about this issue facing companies right now. So thank you for being here.
Dr. Anand Rao:
Thank you for having me.
Jill Wiltfong:
The executive producer of Briefings is Jonathan Dahl. Today's episode was produced by Rupak Bhattacharya and Zachary Dore, and it was edited by Jaron Henrie-McCrea. It contains reporting by Russell Pearlman, Arianne Cohen, and Peter Lauria. Our video segment contains original artwork by Frazer Milton, Hayley Kennel, Jonathan Pink, and Sasha Kostyuk. Don't forget to read our magazine, available at newsstands and at kornferry.com/briefings. That's it for Korn Ferry Briefings. I'm Jill Wiltfong. See you next time.

Podcast Guest
Dr. Anand Rao
Distinguished Service Professor of Applied Data Science and Artificial Intelligence
Carnegie Mellon University’s Heinz College
Dr. Anand Rao is Distinguished Service Professor of Applied Data Science and Artificial Intelligence at Carnegie Mellon University’s Heinz College, where he teaches operationalizing AI, responsible AI, large-language-model applications, and agentic technologies. Previously PwC’s Global Artificial Intelligence Leader and a Partner, he spent more than 35 years applying analytics and AI across finance, healthcare, technology, aerospace, telecommunications and other industries.

Podcast Guest
Bryan Ackermann
Head of AI Strategy & Transformation, Managing Partner, Assessment & Succession, Leadership & Professional Development
Korn Ferry
Mr. Ackermann brings over thirty years of experience to the firm. He globally leads solutions that provide individual insight via assessments and multi-rater tools, as well as all leadership, professional development, and training solutions in Korn Ferry's Consulting and Digital lines of business. Prior to this role, Mr. Ackermann was Chief Information Officer of Korn Ferry, responsible for both the corporate enterprise and client-facing technology teams.

Podcast Guest
Tamara Rodman
Senior Client Partner, Culture, Change & Communications
Korn Ferry
Tamara has worked at the intersection of communications, HR and marketing for more than two decades, helping leaders inspire movements and behavior change at scale. Tamara has a global outlook, having led engagements spanning every time zone and being embedded in client operations from European HQs to Australian mining camps. Her expertise is as varied as her clients and their challenges, including employee value proposition development, culture transformation, purpose/mission/values definition and corporate communications functional effectiveness. Tamara is a certified executive coach who helps leaders communicate strategy with clarity and conviction, define their own brand and bring it to life via thought leadership and strategic stakeholder engagement.











