Google Warns Gmail Users of AI-Based Threats Exploiting Prompt Injections

Google Warns Gmail Users of AI-Based Threats Exploiting Prompt Injections

Google has issued a serious warning to its 2 billion Gmail users about a new wave of cyber threats that exploit the very AI enhancements designed to improve user experience. These advanced attacks, fueled by recent generative AI developments, include “indirect prompt injections,” which hide “malicious instructions within external data sources,” invisible to users but readable by AI tools.

This growing concern was confirmed in a recent report by 0din, Mozilla’s zero-day investigative unit. A researcher demonstrated a prompt injection vulnerability within Google Gemini for Workspace, revealing that attackers can embed hidden commands in emails. When a user opts to “summarize this email” using Gmail’s AI features, “Gemini faithfully obeys the hidden prompt and appends a phishing warning that looks as if it came from Google itself.”

These attacks take advantage of AI’s trust-based design, turning helpful features into potential threats. Users may see what appears to be a legitimate security alert from Google within the AI-generated summary, but in reality, the message could be manipulated by a hidden prompt embedded using HTML elements like <span> or <div> with invisible text.

0din is urging organizations and users to treat these AI summaries with caution. “Train users that Gemini summaries are informational, not authoritative security alerts,” the report states, adding that systems should “auto-isolate emails containing hidden <span> or <div> elements with zero-width or white text.”

Security experts liken these new attacks to earlier email-based threats. “Prompt injections are the new email macros,” says 0din, warning that “trustworthy AI summaries can be subverted with a single invisible tag.” They stress that “until LLMs gain robust context-isolation, every piece of third-party text your model ingests is executable code,” underscoring the urgent need for stronger safeguards.

This marks a significant shift in cybersecurity, as threat actors now target AI features themselves — not just users or systems. “As more governments, businesses, and individuals adopt generative AI to get more done, this subtle yet potentially potent attack becomes increasingly pertinent across the industry, demanding immediate attention and robust security measures,” Google warns.

Users are strongly advised to delete any email that displays a security alert in the AI-generated summary, as it may contain malicious AI commands that put their data, devices, and privacy at risk.

- Advertisement -

LEAVE A REPLY

Please enter your comment!
Please enter your name here

Latest Articles

error: Content is protected !!

Share your details to download the Cybersecurity Report 2025

Share your details to download the CISO Handbook 2025

Sign Up for CXO Digital Pulse Newsletters

Share your details to download the Research Report

Share your details to download the Coffee Table Book

Share your details to download the Vision 2023 Research Report

Download 8 Key Insights for Manufacturing for 2023 Report

Sign Up for CISO Handbook 2023

Download India’s Cybersecurity Outlook 2023 Report

Unlock Exclusive Insights: Access the article

Download CIO VISION 2024 Report

Share your details to download the report

Share your details to download the CISO Handbook 2024

Fill your details to Watch