
Critical Unpatched Flaw Leaves Hugging Face LeRobot Open to Unauthenticated RCE

Published April 2026

Security researchers have uncovered a severe and currently unpatched vulnerability in LeRobot, an open-source robotics framework developed under the Hugging Face ecosystem, that could allow attackers to execute arbitrary code remotely without authentication.

The flaw raises serious concerns for AI-driven robotics systems, where compromised inference servers could lead not only to data theft but also to potential operational and physical risks.

Unauthenticated Remote Code Execution via Deserialization

The vulnerability, tracked as CVE-2026-25874 (CVSS 9.3), stems from deserialization of untrusted data using Python's inherently unsafe pickle format within LeRobot's asynchronous inference pipeline.

LeRobot is widely used for AI model deployment in robotics research and prototyping environments.

Researchers say the issue occurs when the system processes data received over unauthenticated gRPC channels without encryption or proper validation, allowing attackers to inject malicious serialized payloads.

How the Attack Works

The vulnerability lies in the way LeRobot handles remote procedure calls such as:

  • SendPolicyInstructions
  • SendObservations
  • GetActions

An attacker who can reach the exposed service port can send a specially crafted pickle payload that is automatically deserialized by the system.

This leads directly to:

  • Remote code execution on the server or client
  • Full system compromise
  • Unauthorized access to connected robotic systems
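The injection mechanism can be illustrated with a minimal, harmless sketch (the class name and payload below are hypothetical, not taken from the actual exploit): Python's pickle lets any object define `__reduce__`, which names a callable that pickle invokes during deserialization.

```python
import pickle

# Harmless illustration of a pickle code-execution gadget.
# __reduce__ tells pickle to call an arbitrary callable with chosen
# arguments at load time; a real payload would name os.system or similar.
class MaliciousPayload:  # hypothetical name, not from the actual exploit
    def __reduce__(self):
        return (str.upper, ("this ran during unpickling",))

wire_bytes = pickle.dumps(MaliciousPayload())  # what an attacker would send
result = pickle.loads(wire_bytes)              # what a vulnerable server does
print(result)  # the callable ran before any application code saw the data
```

The key point is that the callable executes inside `pickle.loads` itself, so no amount of validation *after* deserialization can help.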

High-Risk Impact on AI and Robotics Systems

Security researchers warn that the implications go beyond traditional server compromise.

Because LeRobot is designed for AI inference workloads, it often runs with elevated privileges and access to:

  • Internal model data and training sets
  • Cloud or on-premise compute resources
  • Connected robotic devices and controllers
  • Sensitive API keys and credentials

If exploited, attackers could potentially:

  • Steal confidential AI models and credentials
  • Move laterally across connected networks
  • Disrupt or sabotage robotic operations
  • Cause physical-world impacts through compromised systems

Vulnerability Still Unpatched in Production Versions

The flaw has been confirmed in version 0.4.3, and a fix is reportedly planned for version 0.6.0. However, at the time of disclosure, no patch had been released.

Security researcher Valentin Lobstein validated the exploit and confirmed its effectiveness against current deployments.

Security Concerns Over Unsafe Serialization

The root cause of the issue is the use of Python’s pickle module, which is known to be unsafe when handling untrusted input.

Security experts have long warned that pickle-based deserialization can allow arbitrary code execution if attackers control input data—making it unsuitable for network-facing services.

Despite this, researchers found that LeRobot uses pickle in its inference pipeline without authentication or encryption on gRPC communication channels.
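When pickle cannot be removed outright, the standard hardening pattern from the Python documentation is a restricted unpickler that whitelists the globals it will resolve, rejecting attacker gadgets such as `os.system` at load time. A minimal sketch (the whitelist contents here are illustrative):

```python
import io
import pickle

# Illustrative whitelist; a real deployment would list exactly the types
# its protocol legitimately exchanges.
ALLOWED_GLOBALS = {("builtins", "list"), ("builtins", "dict"), ("builtins", "set")}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Called for every global the payload references; refuse anything
        # outside the whitelist instead of importing it.
        if (module, name) in ALLOWED_GLOBALS:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"forbidden global: {module}.{name}")

def restricted_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()

print(restricted_loads(pickle.dumps([1, 2, 3])))  # plain data still loads
```

Even so, this is a mitigation rather than a fix; the durable answer is not to use pickle on network-facing channels at all.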

Community Response and Prior Warnings

The vulnerability was independently reported by multiple researchers, including one known as “chenpinji,” as early as late 2025. Developers acknowledged the issue and noted that parts of the codebase require significant refactoring due to its experimental origins.

Project maintainers stated that LeRobot was initially designed as a research tool rather than a production-grade system, which contributed to limited security hardening.

Growing Concerns in AI Infrastructure Security

Security analysts say the incident highlights a broader issue in AI development frameworks: rapid experimentation often outpaces security engineering.

Researchers also pointed out the irony that while Hugging Face previously developed safer serialization tools like Safetensors, its robotics framework still relies on unsafe deserialization methods for network-reachable components.
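The design difference is structural. Per the published Safetensors specification, a file is an 8-byte little-endian header length, a JSON header describing each tensor, and raw bytes, so loading is pure data parsing with no callables ever invoked. A stdlib-only sketch of that layout (this is not the actual safetensors library):

```python
import json
import struct

# Build a tiny file in the Safetensors-style layout: u64 header length,
# JSON header, then raw tensor bytes.
header = {"weights": {"dtype": "F32", "shape": [2], "data_offsets": [0, 8]}}
header_bytes = json.dumps(header).encode("utf-8")
blob = struct.pack("<Q", len(header_bytes)) + header_bytes + struct.pack("<2f", 1.0, 2.0)

# Loading is pure parsing: json.loads yields plain dicts and struct.unpack
# yields plain floats -- no code path exists for running payload-supplied logic.
(hlen,) = struct.unpack_from("<Q", blob, 0)
meta = json.loads(blob[8 : 8 + hlen].decode("utf-8"))
start, end = meta["weights"]["data_offsets"]
values = struct.unpack("<2f", blob[8 + hlen + start : 8 + hlen + end])
print(values)
```

Because the worst a malformed file can trigger is a parse error, formats of this shape are safe to expose to untrusted input in a way pickle never can be.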

Final Outlook

The LeRobot vulnerability underscores the growing risks facing AI and robotics platforms as they transition from research environments to production deployments.

Until a patch is released, experts strongly advise isolating affected systems, restricting network exposure of inference services, and avoiding any untrusted input to gRPC endpoints.
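One way to restrict network exposure in the interim is a host-firewall rule that limits the inference port to a known client. A hedged sketch (the port number and trusted address below are placeholders, not LeRobot defaults; confirm against your own deployment's listener):

```shell
# Placeholder values: the port and trusted client IP are assumptions --
# check what your LeRobot inference service actually binds to.
INFERENCE_PORT=8080
sudo iptables -A INPUT -p tcp --dport "$INFERENCE_PORT" -s 10.0.0.5 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport "$INFERENCE_PORT" -j DROP
```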
