Meta will soon bring a fact-checking helpline on WhatsApp to report AI-generated deepfakes

AI has emerged as the primary focus for major tech companies. Google, Microsoft, OpenAI, and other industry leaders are actively enhancing their AI and ML capabilities to transform digital workflows. However, the increasing accessibility of AI technology poses new challenges, including the proliferation of misinformation through AI-generated content. To address this issue, Meta has partnered with the Misinformation Combat Alliance (MCA) to introduce a specialized fact-checking support service on WhatsApp aimed at combating deepfakes and AI-generated misinformation.

In an official statement shared on social media, MCA announced the forthcoming launch of this helpline, scheduled for March 2024. This service will enable users to report and flag potentially deceptive AI-generated media, commonly referred to as deepfakes, via a WhatsApp chatbot offering multilingual support in English and three regional languages. Alongside facilitating the reporting of AI deepfakes, the helpline will also connect users with verified and credible information sources.

According to MCA, users can submit messages to the helpline, where they will be assessed and validated by fact-checkers, industry partners, and digital laboratories to ascertain their authenticity. The aim is to identify and expose false or manipulated content. MCA stated, “We will collaborate closely with member fact-checking organizations, industry partners, and digital laboratories to scrutinize and authenticate content, providing accurate rebuttals to false claims and misinformation.”

According to MCA, the program will pursue a four-pronged strategy covering detection, prevention, reporting, and raising awareness about the escalating threat of deepfake dissemination. It will also establish a crucial mechanism through which citizens can access reliable information to counter misinformation.

Notably, while Meta is developing a WhatsApp chatbot, the Misinformation Combat Alliance is establishing a central unit dedicated to deepfake analysis. This unit will manage all communications received through the helpline. Meta disclosed its collaboration with 11 independent organizations specializing in fact-checking to identify, verify, and investigate misinformation on its platform.

Bharat Gupta, president of the Misinformation Combat Alliance, emphasized the significance of the Deepfakes Analysis Unit (DAU) as a critical intervention to curb the propagation of AI-driven disinformation across social media and the internet.

Social media platforms like WhatsApp have grappled with the challenge of misinformation dissemination, as users frequently share inaccurate or misleading content. Meta has undertaken measures to address this issue, including limiting the forwarding of messages and eliminating fake accounts.

Furthermore, Meta recently announced its intention to label AI-generated images across its platforms, including Facebook, Instagram, and Threads. This labeling initiative aims to assist users in distinguishing between authentic and synthetic photos, which can closely resemble reality. The labels will be applied to any content exhibiting industry-standard indicators of AI creation.
