Google has issued a serious warning to its 2 billion Gmail users about a new wave of cyber threats that exploit the very AI enhancements designed to improve user experience. These advanced attacks, fueled by recent generative AI developments, include “indirect prompt injections,” which hide “malicious instructions within external data sources,” invisible to users but readable by AI tools.
This growing concern was confirmed in a recent report by 0din, Mozilla’s zero-day investigative unit. A researcher demonstrated a prompt injection vulnerability within Google Gemini for Workspace, revealing that attackers can embed hidden commands in emails. When a user opts to “summarize this email” using Gmail’s AI features, “Gemini faithfully obeys the hidden prompt and appends a phishing warning that looks as if it came from Google itself.”
These attacks take advantage of AI’s trust-based design, turning helpful features into potential threats. Users may see what appears to be a legitimate security alert from Google within the AI-generated summary, but in reality, the message could be manipulated by a hidden prompt embedded using HTML elements like <span> or <div> with invisible text.
0din is urging organizations and users to treat these AI summaries with caution. “Train users that Gemini summaries are informational, not authoritative security alerts,” the report states, adding that systems should “auto-isolate emails containing hidden <span> or <div> elements with zero-width or white text.”
Security experts liken these new attacks to earlier email-based threats. “Prompt injections are the new email macros,” says 0din, warning that “trustworthy AI summaries can be subverted with a single invisible tag.” They stress that “until LLMs gain robust context-isolation, every piece of third-party text your model ingests is executable code,” underscoring the urgent need for stronger safeguards.
This marks a significant shift in cybersecurity, as threat actors now target AI features themselves — not just users or systems. “As more governments, businesses, and individuals adopt generative AI to get more done, this subtle yet potentially potent attack becomes increasingly pertinent across the industry, demanding immediate attention and robust security measures,” Google warns.
Users are strongly advised to delete any email that displays a security alert in the AI-generated summary, as it may contain malicious AI commands that put their data, devices, and privacy at risk.