
The UK government announced that it will collaborate with Microsoft, academics, and industry experts to create a system capable of identifying deepfake content online. This initiative is part of a broader effort to establish clear standards for detecting and addressing harmful AI-generated material.
Although manipulated media has been circulating online for many years, the rapid growth of generative AI technologies—spurred by the launch of tools like ChatGPT—has heightened concerns about the scale and sophistication of deepfakes.
The government, which recently made the creation of non-consensual intimate images a criminal offence, is developing a deepfake detection evaluation framework that aims to set consistent standards for assessing how effective detection tools actually are.
“Deepfakes are being weaponised by criminals to defraud the public, exploit women and girls, and undermine trust in what we see and hear,” said technology minister Liz Kendall.
The proposed framework will focus on evaluating how well technology can identify and flag harmful deepfake content, irrespective of its origin. Detection systems will be tested against real-world threats, including sexual abuse, fraud, and impersonation. The government hopes this approach will give law enforcement and policymakers a clearer picture of where detection gaps remain, while also setting clear expectations for industry on deepfake detection standards.
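The government has not published technical details of how the framework will score detection tools, but the underlying idea, running candidate detectors over labelled examples of each harm category and reporting where they fall short, can be sketched briefly. The Python below is purely illustrative: `Sample`, `evaluate_detector`, and the threat-type labels are hypothetical names chosen for this sketch, not part of any announced specification.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Sample:
    """One item in a hypothetical labelled benchmark."""
    media_id: str
    is_deepfake: bool   # ground-truth label
    threat_type: str    # e.g. "fraud", "impersonation", "intimate_image_abuse"

def evaluate_detector(detector: Callable[[str], bool],
                      samples: List[Sample]) -> Dict[str, Dict[str, float]]:
    """Score a detector against labelled samples, broken down by threat type.

    `detector` is any callable that takes a media_id and returns True
    if it flags the item as a deepfake. Reporting precision and recall
    per category makes it visible where a tool's detection gaps lie.
    """
    counts: Dict[str, Dict[str, int]] = {}
    for s in samples:
        c = counts.setdefault(s.threat_type, {"tp": 0, "fp": 0, "fn": 0, "tn": 0})
        flagged = detector(s.media_id)
        if flagged and s.is_deepfake:
            c["tp"] += 1          # correctly flagged deepfake
        elif flagged:
            c["fp"] += 1          # genuine media wrongly flagged
        elif s.is_deepfake:
            c["fn"] += 1          # deepfake that slipped through
        else:
            c["tn"] += 1          # genuine media correctly passed

    report = {}
    for threat, c in counts.items():
        flagged_total = c["tp"] + c["fp"]
        fake_total = c["tp"] + c["fn"]
        report[threat] = {
            "precision": c["tp"] / flagged_total if flagged_total else 0.0,
            "recall": c["tp"] / fake_total if fake_total else 0.0,
        }
    return report
```

Breaking results down by threat type, rather than producing a single aggregate score, mirrors the framework's stated aim of showing law enforcement and policymakers exactly where detection gaps remain for specific harms such as fraud or impersonation.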
Government figures point to a sharp rise in deepfake content online: an estimated 8 million deepfakes were shared in 2025, a sixteenfold increase on the roughly 500,000 recorded in 2023.
Regulators and governments worldwide have struggled to keep pace with the rapid evolution of AI, a challenge that became particularly pressing this year after reports that Elon Musk's Grok chatbot generated non-consensual sexualised images of individuals, including children. Britain's communications watchdog, Ofcom, and privacy authorities are investigating Grok.
By partnering with Microsoft and leading experts, Britain aims to develop a reliable detection system while establishing robust standards to mitigate the risks associated with increasingly realistic and widely shared deepfake content.