
Deloitte is facing renewed criticism after another AI-related error emerged, this time in a healthcare report prepared for the Canadian province of Newfoundland and Labrador. The document, which contained multiple factual inaccuracies about hospitals and medical facilities, has raised alarms among healthcare professionals and government observers. Several portions of the report appeared to have been generated by AI systems, fueling concerns that Deloitte is increasingly relying on automated tools without sufficient human review. Because the report feeds into critical provincial healthcare planning, industry experts warn that such inaccuracies could misinform policy decisions and undermine public trust.
The controversy follows closely on the heels of a similar incident last month in Australia, where Deloitte was forced to refund approximately 290,000 dollars after another government-commissioned report was found to contain fabricated academic citations and a fabricated quote attributed to a Federal Court judgment. Australian academic Dr. Christopher Rudge was the first to identify the misleading references, triggering scrutiny that ultimately led Deloitte to issue a corrected version with rewritten footnotes and source listings. A spokesperson for the Department of Employment and Workplace Relations later stated that the errors did not affect the report's overall recommendations, but confirmed that the final contract payment would be returned.
These repeated lapses across different countries are now prompting wider discussions about the risks associated with government consultants depending heavily on generative AI. Policymakers, digital governance experts, and public sector leaders argue that Deloitte’s missteps illustrate a systemic vulnerability: as governments look to AI to reduce costs, speed up research, and expand analytical capacity, insufficient oversight can lead to unreliable outputs with real-world consequences. In Canada, the incident has already eroded confidence in the firm’s processes and may push federal and provincial authorities to introduce stricter guidelines for contractors employing AI tools, particularly for sensitive healthcare and social policy work.
The unfolding situation underscores a growing global dilemma. While AI promises efficiency and scale, Deloitte’s recent errors demonstrate that without strong human validation frameworks, automated systems can amplify inaccuracies and jeopardize decision-making in crucial public sector domains.
