Meta is testing a controversial new Facebook feature that scans users’ personal photo galleries, including media that has never been shared online, raising fresh privacy concerns. As first reported by TechCrunch, some users have begun seeing a pop-up while uploading a Story, prompting them to enable “cloud processing.”
The feature lets Meta routinely access images and videos on a user’s phone and upload them to its cloud servers. In exchange, Meta offers personalised content such as AI-generated photo collages, event recaps, and themed filters. While framed as a convenience tool, the implications run deeper.
Tapping “Allow” gives Meta the ability to analyse all media on the device on an ongoing basis. Its AI can process metadata such as timestamps and locations, facial recognition details, and objects in photos to refine user suggestions and improve model training.
Privacy advocates are alarmed by both the extent of access and the lack of clarity around the feature. Meta hasn’t issued a formal announcement, aside from a quietly published help page for Android and iOS. The feature’s vague description means many users may grant permission without fully understanding what they’re opting into. Once enabled, the uploads continue quietly in the background, turning personal, unpublished media into potential training material for Meta’s AI systems.
Meta says the feature is optional and can be turned off at any time. If it is disabled, the company says it “will begin deleting any unpublished images from its cloud servers within 30 days.” Still, Meta hasn’t clarified whether these images could be used to train generative AI models in the future.
This comes amid growing scrutiny over Meta’s AI data practices. The company has previously confirmed it scraped public content from Facebook and Instagram for model training, yet hasn’t clearly defined what “public” means—or what protections exist for user content under its updated AI terms, which took effect on June 23, 2024.
Currently being tested in the U.S. and Canada, the feature could soon expand globally. In markets like India, where smartphones often store sensitive documents and personal media, the lack of transparency, especially in regional languages, could pose serious risks.