Cybersecurity researchers have disclosed a critical prompt injection vulnerability in Google Gemini that allowed threat actors to bypass privacy controls and exfiltrate private Google Calendar data. The flaw exploited indirect prompt injection through specially crafted calendar invites.
How the Attack Worked
- Malicious Calendar Invite:
Attackers sent a normal-looking calendar event containing a hidden, natural language prompt in the event description.
- Trigger via Innocuous Query:
When a user asked Gemini a benign question like “Do I have any meetings on Tuesday?”, the AI processed the embedded prompt.
- Silent Data Exfiltration:
Gemini created a new calendar event summarizing all of the target’s private meetings. In many enterprise setups, the new event was visible to the attacker, allowing them to read sensitive information without any direct user interaction.
“AI applications can be manipulated through the very language they’re designed to understand,” said Liad Eliyahu, Miggo Security’s Head of Research.
Wider Implications for AI Security
This vulnerability highlights how AI-native features can expand the attack surface, introducing risks beyond traditional software flaws:
- Reprompt attacks: Varonis showed how AI chatbots like Microsoft Copilot could be exploited in a single click to exfiltrate sensitive enterprise data.
- Google Cloud Vertex AI risks: Privilege escalation via “double agent” Service Accounts could allow attackers to access LLM memory, chat sessions, or storage buckets.
- Other AI vulnerabilities:
- The Librarian (CVE-2026-0612–0616): Access to admin consoles, cloud metadata, and running processes.
- Cursor IDE (CVE-2026-22708): Remote code execution via indirect prompt injection exploiting trusted shell commands.
- Anthropic Claude Code: Malicious plugins bypassing human-in-the-loop protections to exfiltrate files.
“Coding agents cannot be trusted to design secure applications,” said Ori David, Tenzai. Human oversight is critical for authorization, business logic, and security controls.
Key Takeaways
- Prompt injection is a real and practical threat: Language understood by AI can be weaponized to bypass controls.
- AI expands attack surfaces beyond code: Threats now reside in language, context, and AI behavior at runtime.
- Continuous auditing is essential: Enterprises must evaluate LLMs for hallucinations, factual accuracy, bias, and jailbreak resilience while securing associated cloud resources.
This vulnerability has been patched following responsible disclosure, but the incident underscores the importance of securing AI-driven systems and agentic workflows.