Facebook May Be Scanning Your Private Photos for Meta AI Without Clear Disclosure

Meta is testing a controversial new Facebook feature that scans users’ personal photo galleries, including media that has never been shared online, raising new privacy concerns. As first reported by TechCrunch, some users have begun seeing a pop-up while uploading a Story, prompting them to enable “cloud processing.”

The feature allows Meta to routinely access and upload images and videos from a user’s phone to its cloud servers. In exchange, Meta offers personalised content such as AI-generated photo collages, event recaps, and themed filters. While framed as a convenience tool, the implications run deeper.

Tapping “Allow” gives Meta the ability to analyse all media on the device. Its AI can process metadata such as timestamps and locations, facial-recognition details, and objects in photos to refine user suggestions and improve model training.

Privacy advocates are alarmed by both the extent of access and the lack of clarity around the feature. Meta hasn’t issued a formal announcement, aside from a quietly published help page for Android and iOS. The feature’s vague description means many users may grant permission without fully understanding what they’re opting into. Once enabled, the uploads continue quietly in the background, turning personal, unpublished media into potential training material for Meta’s AI systems.

Meta claims this feature is optional and can be turned off anytime. If disabled, the company says it “will begin deleting any unpublished images from its cloud servers within 30 days.” Still, the company hasn’t clarified whether these images could be used to train generative AI models in the future.

This comes amid growing scrutiny over Meta’s AI data practices. The company has previously confirmed it scraped public content from Facebook and Instagram for model training, yet hasn’t clearly defined what “public” means—or what protections exist for user content under its updated AI terms, which took effect on June 23, 2024.

Currently being tested in the U.S. and Canada, the feature could soon expand globally. In markets like India, where smartphones often store sensitive documents and personal media, the lack of transparency, especially in regional languages, could pose serious risks.
