Why A.I. Creates the Illusion of Courage—and the Risk of Reputational Ruin
In Shakespeare’s Othello, first performed in 1604, the villain Iago delivers a stark reminder about what people most truly own: “Good name in man and woman … is the immediate jewel of their souls.” And when that good name is filched, or stolen, he adds, it “makes me poor indeed.” Four hundred years later, the value of a good name hasn’t changed. Reputation remains one of the most valuable and fragile assets any person or institution, large or small, can possess. Deloitte Australia learned this the hard way.
In an “independent assurance review” commissioned by Australia’s Department of Employment and Workplace Relations, Deloitte delivered a report that was later found to contain fabricated references and citations. Deloitte acknowledged it had used generative A.I. to assist with drafting. But whatever “human review” existed clearly failed at the most basic of professional duties: checking the work and verifying sources before submitting the final product to a client.
The consequences were swift and reputationally expensive. The report was corrected and reissued, and Deloitte agreed to repay part of its fee. It’s tempting to frame this as a simple story about “A.I. hallucinations” and sloppy quality control. But that interpretation misses the deeper point. This wasn’t merely a failure of technology. It was a failure of judgment—something more elusive, especially in the fog of speed, confidence and expediency.
And that is where A.I. becomes dangerous in a subtle way: not because it makes us careless, but because it can give us the illusion of courage. A.I. doesn’t make us courageous; it makes us feel courageous, capable of bold action and even great heroics. Courage and heroism aren’t the same thing, but both involve acting in the face of fear or risk. Courage, however, carries an additional requirement: judgment. It asks not only “Can I act?” but “Should I act, and on what basis?”
Aristotle famously located courage between two vices: cowardice and rashness. The coward retreats from danger; the rash person charges toward it thoughtlessly. The courageous person advances, but only after deliberation, seeing the risks clearly and choosing anyway.
A.I. changes the emotional texture of action. It can reduce friction so dramatically that action feels easier, bolder, even fearless. A draft appears instantly. A report looks polished. Citations arrive pre-packaged. The user experiences a surge of confidence: we’ve got this.
But confidence is not courage. And speed is not judgment.
What A.I. often sells is a kind of synthetic bravery: the sensation of being decisive without the burden of deciding. The feeling of accomplishment with effort minimized or obscured.
Deloitte’s episode of what might be called “cubicle heroism” wasn’t malicious. It was ordinary. It reflected the quiet thrill of getting more done, faster; the seduction of authoritative-sounding prose; the assumption that review can be light-touch because the output looked credible. The result, however, was not what was intended.
And rashness, especially when multiplied by institutional credibility, can make us “poor indeed.” The temptation is intensifying.
McKinsey’s global survey, The State of A.I. in 2025: Agents, Innovation and Transformation, published in November 2025, reports that 88 percent of respondents say their organizations regularly use A.I. in at least one business function, up from 78 percent the year before.
The same research points to a shift beyond large language models as mere “predictive text” engines and toward agentic A.I. systems capable of planning and executing multi-step workflows. McKinsey reports that 23 percent of respondents say their organizations are already scaling an agentic A.I. system in at least one function, and another 39 percent are experimenting with agents.
McKinsey’s conclusion is clear: organizations with ambitious A.I. agendas are seeing the greatest benefits. This is precisely where the risk sharpens.
Because the more A.I. can do, the easier it becomes for humans to stop judging. Agents don’t just suggest; they act. And when action becomes cheap, organizations start confusing output with outcomes, and motion with progress.
A.I. can dramatically expand what’s possible. It can also shrink the space in which we pause, question and verify. That shrinking is the real danger, because courage lives in that space. Judgment is not a footnote or caveat; it is the whole game. If A.I. is a lever, judgment is the fulcrum. Without it, the lever doesn’t lift; it flings, often far beyond the intended target.
Judgment decides when to use A.I. Not every task deserves automation. Some work is valuable precisely because it forces deliberation: strategy, hiring, performance decisions, clinical or legal judgment and reputationally sensitive communication. Using A.I. isn’t inherently wrong, but it raises the required standard of review rather than lowering it.
Judgment shapes how to use A.I., where A.I. generates possibilities and humans provide direction. Clear intent, constraints and context matter more as systems become more capable. “Do this for me” is no longer enough. “Do this with these assumptions, this evidence standard, and this verification step” is how better results are achieved.
Judgment filters truth from noise. The Deloitte episode is not an outlier; it’s a predictable failure mode to which all A.I. users are vulnerable. Generative A.I. can be brilliant and still confidently wrong in ways that look right. If we treat fluency as accuracy, we will ship errors at scale.

Judgment protects what must remain human. Trust, accountability and moral responsibility do not outsource well. Neither does leadership. We can delegate drafting; we cannot delegate ownership.
Real courage in the A.I. era won’t be dramatic, but it should not be invisible. It will be procedural, explicit and sometimes inconvenient.
It will look like:
- The courage to slow down when the tool makes it easy to speed up.
- The courage to verify when the output feels polished enough to ship.
- The courage to disclose when A.I. meaningfully shaped the work product.
- The courage to say “I don’t know” rather than accept a plausible answer.
- The courage to absorb short-term friction to avoid long-term reputational loss.
This is not an anti-A.I. position. It’s a pro-accountability one. As a published author, speaker and frequent contributor of opinions and articles, I hold myself to these strict rules. As a board director, often in conversations about how A.I. can improve bottom lines, I invoke the same rules.
A.I. will give us speed, scale and synthesis. Humans will supply discernment, values and context. When those meet, the results can be extraordinary. When they don’t, A.I. will merely amplify rashness.
The paradox of A.I. is that it can make us feel fearless while making our work, and our reputations, more fragile. It can make action effortless while making consequences heavier. It can make us look more capable while quietly eroding the essential habits of verification, skepticism and deliberation that earned credibility and capability in the first place.
So, the central question isn’t whether A.I. will make us courageous. It won’t.
The real question is whether we will continue exercising judgment when A.I. offers a convincing substitute, the illusion of courage: confidence without responsibility, speed without scrutiny, output without ownership. Because that is how a “good name” gets filched. And that is how, in Iago’s words, we could be made “poor indeed.”