Connect with us

AI Security

“Single-Click AI Attack Exposes Hidden Data Risks in Microsoft Copilot”

Published

on

Cybersecurity experts have uncovered a critical vulnerability in AI chatbots like Microsoft Copilot, revealing a new attack method called Reprompt that enables data exfiltration with just a single click. The technique bypasses enterprise security protections entirely, raising concerns about the security of AI-powered productivity tools.

According to Varonis researcher Dolev Taler, “Only a single click on a legitimate Microsoft link is required to compromise victims. No plugins or user interaction with Copilot are needed.” Taler explained that attackers can maintain control even after the Copilot session is closed, allowing sensitive information to be extracted silently.

Microsoft has since addressed the vulnerability, which does not impact enterprise users running Microsoft 365 Copilot. At its core, the Reprompt method exploits three key techniques:

  1. URL-Based Injection: Attackers can embed crafted instructions in the “q” parameter of a Copilot URL, prompting the AI to execute commands when the link is opened.
  2. Guardrail Bypass: Copilot’s data-leak safeguards apply only to the initial request. Reprompt forces the system to repeat actions, circumventing these protections.
  3. Persistent Data Extraction: Once triggered, Reprompt initiates an ongoing sequence of instructions between Copilot and the attacker’s server, enabling continuous, hidden exfiltration of information.

In practice, a malicious actor could send a legitimate-looking Copilot link via email. Once clicked, the AI executes embedded commands and can “reprompt” itself to collect additional data, such as files accessed by the user, personal information, or scheduled activities. Because follow-up commands are server-driven, the exact nature of exfiltrated data is invisible to the initial prompt.

Reprompt underscores a broader security challenge in AI systems: their inability to differentiate between instructions directly entered by a user and those delivered via automated requests. This creates opportunities for indirect prompt injections, which can target large language models across multiple platforms.

The discovery comes alongside other AI security vulnerabilities, including:

  • ZombieAgent/ShadownLeak variants, which exploit AI connections to third-party apps for zero-click attacks.
  • Lies-in-the-Loop (LITL), which manipulates confirmation prompts to execute malicious code.
  • GeminiJack, allowing hidden corporate data extraction via shared documents or calendar events.
  • CellShock, affecting Anthropic Claude for Excel and enabling formula-based data leaks.
  • Various indirect prompt injection flaws in tools such as Perplexity Comet, Notion AI, Slack AI, and Amazon Bedrock.

Experts recommend organizations adopt layered AI security measures, limit elevated privileges for AI tools, and monitor access to sensitive data. Varonis’ Dor Yardeni advises caution with links from unverified sources, especially those related to AI chatbots, and to avoid sharing personal information that could be exploited.

“As AI agents gain broader access to corporate systems, a single vulnerability can have a much larger impact,” said Noma Security. Organizations must define trust boundaries, enforce strict monitoring, and stay updated on emerging AI threats.

Advertisement
Click to comment

Leave a Reply

Your email address will not be published. Required fields are marked *

Copyright © 2023 Cyber Reports Cyber Security News All Rights Reserved Website by Top Search SEO