Britain Works with Microsoft on AI Tools to Protect Against Deepfakes

The UK government announced that it will collaborate with Microsoft, academics, and industry experts to create a system capable of identifying deepfake content online. This initiative is part of a broader effort to establish clear standards for detecting and addressing harmful AI-generated material.

Although manipulated media has been circulating online for many years, the rapid growth of generative AI technologies—spurred by the launch of tools like ChatGPT—has heightened concerns about the scale and sophistication of deepfakes.

The government, which recently made the creation of non-consensual intimate images a criminal offense, is developing a deepfake detection evaluation framework. This framework aims to set consistent standards for assessing the effectiveness of detection tools and technologies.

“Deepfakes are being weaponised by criminals to defraud the public, exploit women and girls, and undermine trust in what we see and hear,” said technology minister Liz Kendall.

The proposed framework will focus on evaluating how technology can identify and understand harmful deepfake content, irrespective of its origin. Detection systems will be tested against real-world threats, including sexual abuse, fraud, and impersonation. The government hopes that this approach will provide law enforcement and policymakers with a clearer understanding of where detection gaps remain, while also establishing expectations for industries regarding deepfake detection standards.

Government figures indicate a sharp rise in deepfake content online, with an estimated 8 million deepfakes shared in 2025, up dramatically from 500,000 in 2023.

Regulators and governments worldwide have struggled to keep up with the rapid evolution of AI, a challenge that became particularly pressing this year after reports that Elon Musk’s Grok chatbot generated non-consensual sexualized images of individuals, including children. The British communications watchdog and privacy authorities are currently investigating Grok.

By partnering with Microsoft and leading experts, Britain aims to develop a reliable detection system while establishing robust standards to mitigate the risks associated with increasingly realistic and widely shared deepfake content.
