AI Misstep at a Big Four: How Deloitte Faced Reputational and Financial Loss

October 21, 2025
Deloitte Australia has come under serious scrutiny after a $290,000 government report, published on the Department of Employment and Workplace Relations website in July 2025, was found to contain multiple AI-generated errors.

What Happened?

Reportedly, 20 errors were found in the initial report, and they were not merely grammatical slips or awkward phrasing. According to an Associated Press article, these inaccuracies included a fictitious quote attributed to a federal court judgement and references to academic research papers that did not exist.

The mistakes were reported to the media by Sydney University researcher Chris Rudge, prompting publication of a revised version of the report and removal of the original from the government website.

The Fallout

Deloitte responded to the allegations after reviewing the report, confirming that “some footnotes and references were incorrect” but stating that “the updates made in no way impact or affect the substantive content, findings and recommendations in the report.”

Following publication of the amended report, Deloitte agreed to a partial refund equal to the final instalment under the contract; the exact amount has not been disclosed and will be made public once the reimbursement is complete.

The episode has caused clear reputational damage. Senator Barbara Pocock, the Australian Greens’ spokesperson on the public sector, said Deloitte “misused AI and used it very inappropriately: misquoted a judge, used references that are non-existent,” adding the incident was “the kind of thing a first-year university student would be in deep trouble for.” She also expressed her opinion that the firm should refund the full $290,000.

While the final repayment figure remains uncertain, it is the reputational damage and erosion of stakeholder trust, not the financial reimbursement, that pose the greatest threat to Deloitte’s long-term stability and credibility.

How does governance play a role in the situation?

Governance isn’t a box-ticking exercise; it underpins everything an organisation does, including how technology is used. Good governance provides the structures, oversight and accountability needed to ensure technology is deployed responsibly, ethically and in line with contractual obligations.

Where robust governance is absent, risks multiply. In this case, reliance on AI without effective governance exposed clear gaps. Governance is about more than compliance; it’s the foundation for earning trust, sustaining credibility, and driving long-term organisational integrity. To ensure accuracy and integrity, measures such as human-in-the-loop review, audit trails for AI outputs, and escalation protocols for anomalies are vital; they protect against both reputational damage and financial loss.

The partial reimbursement, public criticism and hit to stakeholder confidence show what can happen when governance is treated as an afterthought. AI brings distinct risks, hallucinations and bias among them, and without governance those risks become liabilities. Yet when properly governed, AI can unlock powerful benefits, from sharper decision-making and operational efficiency to transformative innovation.

How could this have been avoided?

It would appear that a lack of governance lies behind the mistakes identified. To prevent similar failings when using AI, organisations should have in place a clear AI governance framework that includes:

  • Clear policies: On acceptable AI use and disclosure.
  • Mandatory quality assurance: With human validation for AI-generated content.
  • Audit trails: Plus record-keeping for AI outputs.
  • Regular audits and training: On responsible AI practices.
  • Escalation procedures: For flagged anomalies or uncertain outputs.
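The last three controls above can be sketched as a minimal review workflow. This is an illustrative sketch only; the class and function names are hypothetical, not drawn from Deloitte's process or any real system, and it assumes a simple "flag, review, escalate" model:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AIOutput:
    """A piece of AI-generated content awaiting human review (illustrative)."""
    content: str
    source_model: str
    flagged: bool = False              # e.g. an automated citation check failed
    reviewed_by: Optional[str] = None
    audit_trail: list = field(default_factory=list)

    def log(self, event: str) -> None:
        # Every action is timestamped, giving a record-keeping audit trail.
        self.audit_trail.append(f"{datetime.now(timezone.utc).isoformat()} {event}")

def review(output: AIOutput, reviewer: str, approved: bool) -> str:
    """Human-in-the-loop gate: nothing is published without sign-off,
    and flagged anomalies are escalated rather than approved."""
    output.log(f"reviewed by {reviewer}, approved={approved}")
    if output.flagged:
        output.log("escalated: anomaly flagged for senior review")
        return "escalated"
    if not approved:
        output.log("rejected: returned for correction")
        return "rejected"
    output.reviewed_by = reviewer
    output.log("approved for publication")
    return "approved"

# Example: a draft paragraph citing a case the checker could not verify.
draft = AIOutput(content="See Smith v. Jones (2023)...", source_model="example-llm")
draft.flagged = True  # citation check failed, so the workflow must escalate
status = review(draft, reviewer="j.doe", approved=True)
print(status)  # prints "escalated"
```

The point of the sketch is the ordering: the escalation check runs before any approval takes effect, so a flagged output cannot be published even if a reviewer signs it off.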


Applying these robust controls would likely have identified and addressed the report’s errors before publication, preventing the reputational and operational fallout.

The lesson is simple: governance is the backbone of responsible AI adoption. Without it, organisations risk not just fines but long-term damage to credibility.

Bridgehouse Governance Support

Bridgehouse can offer tailored frameworks and expert guidance to help you unlock AI’s value while safeguarding trust and integrity.

If you’d like help embedding responsible AI practices or strengthening governance more broadly, contact us at [email protected]

Get in touch

We would be pleased to answer any queries or have an informal chat to discuss your possible governance needs.