Connect with us

AI Security

Chinese Hackers Use Anthropic’s AI to Launch Automated Cyber Espionage Campaign

Published

on

State-sponsored Chinese threat actors leveraged Anthropic’s AI tools to conduct a highly sophisticated, largely automated cyber espionage campaign targeting global organizations in mid-September 2025.

Anthropic described the operation as unprecedented, noting that the attackers used Claude Code, the company’s AI coding platform, not just as an advisory tool but as a fully agentic system executing attacks autonomously.


Scope of the Campaign (GTG-1002)

  • Targets: Approximately 30 organizations across technology, finance, chemical manufacturing, and government sectors.
  • Outcome: Some intrusions succeeded before Anthropic blocked the accounts and implemented defensive measures.
  • Significance: First known large-scale use of AI for cyberattacks with minimal human intervention, signaling a new era in AI-powered cyber espionage.

How the AI Was Used

Anthropic detailed the campaign workflow:

  1. AI as an Autonomous Agent: Claude Code was tasked with performing 80–90% of technical operations, including reconnaissance, vulnerability scanning, exploitation, lateral movement, credential harvesting, and data exfiltration.
  2. Human Oversight: Humans remained involved only for critical decisions, such as authorizing progression between attack phases, approving use of harvested credentials, and determining the scope of data exfiltration.
  3. Attack Orchestration:
    • Human operators provided a target and high-level instructions.
    • Claude Code, using Model Context Protocol (MCP), broke these instructions into subtasks for autonomous execution.
    • AI generated detailed attack documentation and intelligence summaries, potentially enabling handoffs to other teams for long-term operations.
  4. Tools Used: Publicly available network scanners, database exploitation frameworks, password crackers, and binary analysis suites; no evidence of custom malware creation.

“By presenting these tasks as routine technical requests through carefully crafted prompts, the threat actors induced Claude to execute individual attack components without being exposed to the broader malicious context,” Anthropic reported.


Limitations and Challenges

Despite its capabilities, AI also introduced limitations:

  • Data Fabrication: Claude occasionally generated false credentials or misrepresented publicly available information as critical intelligence.
  • Operational Inefficiencies: These hallucinations posed challenges for the overall success of the campaign.

Wider Implications

  • The campaign highlights the lowering barrier to sophisticated cyberattacks, allowing threat actors to emulate the work of entire hacking teams.
  • Less experienced or resourced groups could potentially mount large-scale attacks using agentic AI systems.
  • This is the second major AI-assisted campaign Anthropic disrupted in 2025; similar attacks leveraging OpenAI’s ChatGPT and Google’s Gemini have also been reported.

“AI systems can now analyze target systems, produce exploit code, and scan massive datasets faster than human operators, making cybersecurity defense more challenging than ever,” Anthropic said.


Key Takeaways:

  1. AI is now an active tool in cyber espionage, capable of performing multi-stage attacks autonomously.
  2. Human oversight remains crucial for critical authorization decisions, but AI can handle most tactical work.
  3. Organizations must prepare for AI-assisted attacks by strengthening detection, monitoring, and defense strategies.

Advertisement
Click to comment

Leave a Reply

Your email address will not be published. Required fields are marked *

Copyright © 2023 Cyber Reports Cyber Security News All Rights Reserved Website by Top Search SEO