Cybersecurity experts have issued a warning about serious vulnerabilities in the open-source AI agent OpenClaw that could allow attackers to manipulate the system, steal sensitive data, and potentially compromise entire networks.
The warning comes from the China’s National Computer Network Emergency Response Technical Team, which highlighted multiple security risks linked to the platform’s default configuration and powerful system permissions.
Weak Security Defaults Raise Major Concerns
OpenClaw—previously known as Clawdbot and Moltbot—is designed as a self-hosted autonomous AI agent capable of performing tasks on behalf of users, including browsing the web, retrieving information, and executing system commands.
However, cybersecurity officials say these capabilities create significant security exposure when combined with weak default protections.
Because the agent often runs with high-level system privileges, attackers may exploit it to gain control of a device or extract confidential information.
Prompt Injection Attacks at the Center of the Risk
A major concern involves a technique known as Prompt Injection, where attackers hide malicious instructions inside external content—such as web pages—that the AI agent later reads.
This attack type, also called Indirect Prompt Injection (IDPI) or Cross-Domain Prompt Injection (XPIA), tricks the AI into executing instructions without direct interaction from the attacker.
Security researchers say such attacks can lead to:
- Leakage of sensitive user data
- Manipulation of AI-generated responses
- SEO manipulation and content poisoning
- Interference in automated decision systems
Experts from OpenAI have also warned that prompt injection techniques are evolving and increasingly incorporate social engineering tactics.
Messaging Apps Could Become Data Leaks
Researchers from PromptArmor recently demonstrated how messaging platforms with link preview features—such as Telegram and Discord—could be used as data exfiltration channels when integrated with OpenClaw.
In the attack scenario, a manipulated AI agent generates a malicious link containing hidden parameters that include sensitive data. When the link preview is automatically rendered in a messaging app, the confidential information may be transmitted to an attacker-controlled domain without the user clicking the link.
This makes the attack particularly dangerous because data can be leaked automatically as soon as the AI responds.
Additional Security Threats Identified
Beyond prompt injection, security authorities outlined several other potential risks associated with OpenClaw:
- Accidental deletion of important data due to misinterpreted AI instructions
- Malicious plugins or “skills” uploaded to repositories like ClawHub that execute harmful commands
- Exploitation of known vulnerabilities that could allow attackers to take control of the system
Officials warned that these threats pose significant risks to organizations operating in sensitive sectors such as finance, energy, and technology.
A successful compromise could expose trade secrets, internal code repositories, or critical infrastructure data.
Governments Move to Restrict Use
In response to the security concerns, Chinese authorities have reportedly restricted government agencies and state-owned companies from installing OpenClaw AI tools on workplace computers.
The restrictions are also said to extend to families of military personnel, reflecting concerns about potential espionage or data leaks.
Malware Campaigns Exploiting OpenClaw’s Popularity
The rapid rise in OpenClaw’s popularity has also attracted cybercriminals.
Researchers from Huntress discovered malicious repositories on GitHub that impersonate legitimate OpenClaw installers.
These fake downloads have been used to spread malware, including:
- Vidar Stealer
- Atomic Stealer
- GhostSocks
Some of the malicious repositories were reportedly promoted through AI-generated search results, increasing the likelihood that users would download them.
Security Recommendations for Users
Cybersecurity experts recommend several precautions for organizations using OpenClaw:
- Do not expose the platform’s default management port to the internet
- Run the agent inside isolated containers
- Avoid storing credentials in plaintext files
- Download plugins only from trusted sources
- Disable automatic updates for external skills
- Keep the system updated with the latest patches
As autonomous AI agents become more widely used in workplaces and research environments, security experts warn that protecting them against manipulation and exploitation will become increasingly critical.