PMtheBuilder
· 2/1/2026 · 8 min read

How to Talk to Executives About AI Uncertainty

Guide

Every AI PM faces this moment:

You're presenting an AI feature to executives. You've explained what it does, shown the demo (which worked great), outlined the business case.

Then someone asks: "What's the accuracy?"

"About 85%," you say.

The room goes quiet. You can see them doing the math. 15% wrong. If we have 10,000 users, 1,500 bad experiences. They're already picturing the support tickets. The angry tweets. The competitors pointing and laughing.

"Can we get it to 100%?"

And now you're in a corner. Do you explain that 100% isn't possible with AI? Do you promise improvements you can't guarantee? Do you hedge so much that you kill the project's momentum?

This is the AI PM's constant dilemma: AI has uncertainty baked in, but business leaders want certainty.

Here's how to navigate it.


Why AI Uncertainty Freaks Out Executives

Let's be honest about what's happening in their heads:

Traditional software: "If we build it right, it works every time."

AI: "It works 85% of the time."

For executives trained on deterministic technology, this sounds like:

  • "It's broken 15% of the time"
  • "Engineering needs to fix it"
  • "Why would we ship something that fails?"

They're applying a mental model that doesn't fit.

Your job isn't to convince them that 85% is acceptable. It's to help them understand AI as a different kind of technology with different expectations.


The Framework: Benchmark, Don't Absolve

Here's the mental shift that helps:

Don't say: "AI is probabilistic, so we can't guarantee results." (This sounds like excuse-making)

Say: "The relevant benchmark is [X]. We're at [Y]. Here's our improvement trajectory."

Executives understand benchmarks. They understand relative performance. They don't understand "probabilistic" as anything other than "unreliable."

Example Transformation:

Before (sounds like excuse-making): "Our AI assistant is about 85% accurate. AI systems have inherent uncertainty, so we can't guarantee perfect results. We're working on improving it."

After (sounds like ownership): "Human agents handling the same requests are 88% accurate on first response. Our AI is at 85%, closing the gap. Industry best-in-class for similar AI assistants is 87%. We're targeting 90% by Q3, with a clear roadmap of improvements."

See the difference? Same facts, totally different impression.

The first version makes AI sound like a liability. The second version makes it sound like a well-managed project with reasonable performance.


The Benchmarks That Matter

What should you compare AI performance to?

Human Benchmark

  • How do humans perform on the same task?
  • Often, humans aren't as good as executives assume
  • "Humans are 88% accurate, AI is 85%" is very different from "AI is 15% wrong"

Status Quo Benchmark

  • What happens without AI?
  • Users don't get instant answers, wait for humans, abandon tasks entirely
  • Sometimes 85% AI accuracy beats 0% because there was no alternative

Industry Benchmark

  • What are competitors/analogous products achieving?
  • If best-in-class is 87% and you're at 85%, you're competitive
  • If best-in-class is 95% and you're at 85%, you have a gap

User Expectation Benchmark

  • What do users expect and tolerate?
  • Users know AI isn't perfect
  • Often their bar is lower than internal stakeholders assume

Improvement Trajectory

  • Where were you 3 months ago?
  • Where will you be in 3 months?
  • Are you improving? How fast?

Armed with benchmarks, "85% accurate" becomes "competitive with industry, better than manual alternative, improving 2% per quarter."
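The trajectory math above is worth making explicit. As a rough sketch (using the article's illustrative numbers of 85% today, a 90% target, and +2 points per quarter; a real projection would account for diminishing returns as accuracy climbs):

```python
# Sketch: project how many quarters of steady improvement it takes
# to move from current accuracy to a target. Figures are illustrative,
# taken from the article's running example.

def quarters_to_target(current: float, target: float, gain_per_quarter: float) -> int:
    """Count quarters of constant per-quarter gains until target is reached."""
    if gain_per_quarter <= 0:
        raise ValueError("gain_per_quarter must be positive")
    quarters = 0
    while current < target:
        current += gain_per_quarter
        quarters += 1
    return quarters

# 85% today, +2 points per quarter, targeting 90%:
print(quarters_to_target(85.0, 90.0, 2.0))  # -> 3
```

A linear projection like this is optimistic (gains usually get harder near the ceiling), but it gives executives a concrete, checkable claim instead of a vague "we're improving."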


The Four Messages Executives Need to Hear

When presenting AI uncertainty, cover these four things:

1. What the AI Can Reliably Do

Don't open with uncertainty. Open with capability.

"This AI can [specific capability] with [performance level]. That means [business value]."

Lead with the win. Set the frame as "here's a powerful capability," not "here's a risky experiment."

2. Where the Boundaries Are

Now explain limitations, but as design choices, not failures.

"We've scoped the AI to [specific use cases] where accuracy is [high]. For [other use cases], we route to humans."

Boundaries aren't weaknesses if they're intentional. Show you've made thoughtful tradeoffs.

3. What Happens When It's Wrong

This is where you demonstrate maturity.

"When the AI is uncertain, it says so. When it's wrong, here's the user experience. Here's the fallback. Here's how we detect and learn from errors."

Executives fear AI failure because they imagine chaos. Show them controlled, manageable failure modes.

4. How We're Improving

End with trajectory.

"We're currently at [X]. We've improved from [Y] last quarter. We're targeting [Z] by [date]. Here's how we'll get there."

Progress is reassuring. Stagnation is scary.


Language That Works

Phrases that land well:

✅ "The AI performs at [X], which is [comparable to / better than] [benchmark]."

✅ "We've designed guardrails to [specific protection]."

✅ "When the AI is uncertain, it [specific behavior]."

✅ "We monitor quality in real time and can respond within [timeframe]."

✅ "Our improvement roadmap takes us from [current] to [target] by [date]."

✅ "The alternative is [status quo], which has [these problems]."

Phrases that backfire:

โŒ "AI is probabilistic, so..." (sounds like excuse)

โŒ "We can't guarantee anything." (sounds like no accountability)

โŒ "It's actually really good for AI." (damning with faint praise)

โŒ "Users just need to learn how to use it." (blame shifting)

โŒ "We're working on it." (no specifics = no credibility)


Handling the Hard Questions

"Can we get to 100%?"

Wrong answer: "No, AI doesn't work that way."

Right answer: "100% accuracy isn't achievable even for humans, but we can get asymptotically closer. Our roadmap targets [X]% by [date]. At that level, error rate is comparable to [benchmark] and user experience is [positive outcome]."

"What if it gives bad advice and we get sued?"

Wrong answer: "We have disclaimers."

Right answer: "We've implemented [specific guardrails] for high-risk scenarios. The AI [declines / escalates / requires confirmation] for [sensitive topics]. Legal has reviewed our approach and approved [specifics]."

"Why are we shipping something that's wrong 15% of the time?"

Wrong answer: "It's better than nothing."

Right answer: "Great question. The alternative is [status quo]. With AI, users get [benefit]. The 15% of cases where AI is uncertain are [handled gracefully]. We believe the net user experience is significantly better, and our data from beta shows [evidence]."

"What happens when it fails publicly?"

Wrong answer: "We'll handle it."

Right answer: "We have an incident response plan. For quality issues: [detection, response, communication]. We can roll back within [time]. We've already run [drills/tabletops]. Here's our communication playbook for different scenarios."


The Exec Summary Template

When presenting AI projects, use this structure:

BLUF (Bottom Line Up Front): One sentence: what, why, confidence level.

"We're launching [AI feature] to [business outcome]. Current performance is [X], which is [benchmark comparison]. Go-live confidence: [High/Medium/Low] with [conditions]."

Capability: What the AI does well. Business value. Evidence.

Boundaries: What we've scoped out. Why. Alternative handling.

Risk Mitigation: Failure modes, guardrails, monitoring, incident response.

Trajectory: Current state, improvement roadmap, target end state.

Ask: What you need from them (approval, resources, air cover).

This structure demonstrates you've thought it through. It frames AI as a managed capability, not a gamble.


The Meta Point

Here's what I want you to internalize:

AI uncertainty isn't a communication problem. It's a leadership opportunity.

The PM who can clearly explain AI tradeoffs, set appropriate expectations, and maintain credibility while being honest about limitations: that's a PM executives trust with more AI initiatives.

The PM who oversells and underdelivers, or who hedges so much nothing gets shipped: that's a PM who loses trust.

Your job is to be the person in the room who truly understands AI's capabilities and constraints, and translates that understanding into confident, actionable recommendations.

Uncertainty communicated well is credibility-building. Uncertainty hidden or avoided is career-limiting.


Key Takeaways

  1. Benchmark, don't absolve: compare AI performance to humans, status quo, and industry, not to perfection

  2. Show four things: what the AI can do, where the boundaries are, what happens when it's wrong, how you're improving

  3. Uncertainty is a leadership opportunity: the PM who can navigate this well earns trust for bigger AI bets
