OpenClaw Infostealer Attack: What the First AI Agent Identity Theft Means for Your Security

Infostealer malware stole OpenClaw AI agent configs, gateway tokens, and behavioral guidelines. With 135,000+ exposed instances and 1,184 malicious skills, here's what security teams need to know.

11 min read·

TL;DR

Key Point Summary
First AI agent identity theft Vidar infostealer exfiltrated OpenClaw config files, gateway tokens, and the agent's behavioral guidelines ("soul")
135,000+ exposed instances SecurityScorecard found exposed OpenClaw deployments across 82 countries, 12,812 exploitable via RCE
Critical RCE vulnerability CVE-2026-25253 (CVSS 8.8) enables one-click remote code execution, even on localhost instances
Supply chain poisoning 1,184+ malicious skills found in ClawHub, OpenClaw's official marketplace
Compliance impact Organizations running AI agents need to reassess access controls, credential storage, and audit trails

Quick Answer: On February 13, 2026, Hudson Rock disclosed the first documented case of infostealer malware targeting an AI agent's identity. A Vidar variant stole OpenClaw configuration files containing gateway tokens, cryptographic keys, and the agent's core behavioral guidelines. This incident, combined with a critical RCE vulnerability and massive supply chain poisoning campaign, represents a turning point: AI agents are now prime targets for credential theft, and security teams need to treat them with the same rigor as privileged service accounts.


OpenClaw, the open-source AI personal assistant that surged to over 200,000 GitHub stars since its November 2025 debut, is at the center of a multi-vector security crisis. Within weeks of becoming the most popular AI agent platform, it became a magnet for threat actors exploiting its rapid adoption and default-insecure configurations.

This isn't a theoretical risk. Real credentials were stolen. Real instances are exposed. And real malware is circulating through its official skill marketplace.

OpenClaw security crisis statistics: 135,000+ exposed instances, 12,812 RCE exploitable, 1,184+ malicious skills, CVE-2026-25253 CVSS 8.8


What Happened: The First AI Agent Infostealer Attack

On February 13, 2026, cybersecurity firm Hudson Rock disclosed the first real-world case of information stealer malware successfully exfiltrating an AI agent's configuration data.

The malware, identified as a likely Vidar variant (an off-the-shelf infostealer active since 2018), didn't use a custom OpenClaw module. Instead, it employed a broad file-grabbing routine designed to locate files with specific extensions and directory names containing sensitive data. Three critical files were stolen:

Stolen File #1: openclaw.json

This file contains the OpenClaw gateway token, the victim's email address, and workspace path. The gateway token is essentially the master key: it authenticates the user's connection to the OpenClaw gateway service. With this token, an attacker can connect remotely to a victim's OpenClaw instance (if the port is exposed) or masquerade as the legitimate client in authenticated API requests.

Stolen File #2: device.json

This file contains cryptographic keys used for secure pairing and signing operations within the OpenClaw ecosystem. An attacker with these keys could impersonate the victim's device, bypassing security checks to access encrypted logs and paired cloud services.

Stolen File #3: soul.md

Perhaps the most unique theft: the agent's core operational principles, behavioral guidelines, and ethical boundaries. This file defines how the AI agent behaves, what it's allowed to do, and what guardrails are in place. Stealing this file gives attackers a blueprint for crafting prompts that bypass the agent's safety constraints.

How the Vidar infostealer targets OpenClaw: infection, file scan, exfiltration, and exploitation of openclaw.json, device.json, and soul.md

As Hudson Rock's CTO Alon Gal warned:

"As AI agents like OpenClaw become more integrated into professional workflows, infostealer developers will likely release dedicated modules specifically designed to decrypt and parse these files, much like they do for Chrome or Telegram."


The Broader Crisis: It's Not Just One Incident

The infostealer attack didn't happen in isolation. OpenClaw is facing a compounding security crisis across three fronts.

CVE-2026-25253: One-Click Remote Code Execution (CVSS 8.8)

A critical vulnerability in OpenClaw allows attackers to achieve remote code execution with a single click, even against instances bound to localhost.

The attack chain works as follows:

  1. Token theft: The OpenClaw application accepted a gatewayUrl via a query string and established a WebSocket connection without user confirmation, transmitting authentication credentials in the process
  2. Cross-Site WebSocket Hijacking: Because the WebSocket server fails to validate the origin header, an attacker's JavaScript can connect to the victim's local instance from a malicious website
  3. Full takeover: Using the stolen token, the attacker disables user confirmation, disables sandboxing, and executes arbitrary shell commands on the victim's machine

CVE-2026-25253 attack chain: malicious link, token theft, WebSocket hijack, and full system control

This vulnerability was patched in version 2026.1.29, but the damage window was significant. Belgium's Centre for Cybersecurity (CCB) published an emergency advisory classifying it as critical.

135,000+ Exposed Instances Across 82 Countries

SecurityScorecard's STRIKE team discovered over 135,000 unique IPs running exposed OpenClaw instances across 82 countries, with 12,812 directly exploitable via remote code execution.

The root cause: OpenClaw's default configuration binds the service to 0.0.0.0:18789 (all network interfaces) rather than 127.0.0.1 (localhost only). Users who deployed the tool for personal automation unknowingly broadcast their control panels to the internet.

Researchers scanning exposed instances found over 1,800 instances leaking API keys, chat histories, and account credentials through the unauthenticated /api/export-auth endpoint.

ClawHub Supply Chain Poisoning: 1,184+ Malicious Skills

The attack surface extends to OpenClaw's official skill marketplace, ClawHub. Security audits revealed a massive supply chain poisoning campaign dubbed ClawHavoc:

  • Initial audit by Koi Security found 341 malicious skills out of 2,857 analyzed
  • As of February 16, 2026, the number grew to 824+ confirmed malicious skills across 10,700+ total skills
  • Bitdefender identified 14 threat actors contributing malicious content, with a single actor (Hightower6eu) uploading 354 malicious packages

The malicious skills primarily delivered Atomic macOS Stealer (AMOS), exfiltrated credentials to external webhooks, and hid reverse shell backdoors inside functional code. Attack vectors included typosquatted skill names, fake cryptocurrency tools, and trojanized productivity extensions.


Why This Matters for Compliance

If your organization uses AI agents in any capacity, this incident has direct implications for your SOC 2 and ISO 27001 compliance posture.

AI Agents Are Privileged Service Accounts

An AI agent with access to your email, APIs, cloud services, and internal resources isn't just a productivity tool. It's a privileged service account that most organizations haven't incorporated into their access control policies.

The OpenClaw incident demonstrates that:

  • Agent credentials are stored in plaintext config files on developer workstations, falling outside centralized secrets management
  • Agent behavioral guidelines are exfiltrable, giving attackers a roadmap to bypass safety controls
  • Default configurations prioritize convenience over security, violating the principle of least privilege

Compliance Framework Gaps

Current frameworks are catching up to AI agent risks, but auditors are already asking questions. Organizations deploying AI agents need to demonstrate:

  • Inventory and classification of all AI agent deployments (ISO 27001 Annex A.5.9)
  • Access control for agent credentials and API tokens (SOC 2 CC6.1)
  • Change management for agent configurations, skills, and behavioral guidelines
  • Monitoring and logging of agent actions and data access patterns
  • Incident response procedures specific to AI agent compromise

For a deeper dive into building compliant AI agent guardrails, see our guide on AI agent security guardrails for SOC 2 and ISO 27001.


Bastion's Recommendations

Based on the confirmed attack vectors in this incident, here are actionable steps your security team should take now.

1. Audit All AI Agent Deployments

Immediately identify every OpenClaw (or other AI agent) instance running in your environment. Check for:

  • Instances binding to 0.0.0.0 instead of 127.0.0.1
  • Exposed ports (default: 18789) accessible from outside the local machine
  • Unpatched versions vulnerable to CVE-2026-25253
Bash
# Check for exposed OpenClaw instances on your network
nmap -p 18789 --open <your-network-range>

2. Rotate All Compromised Credentials

If any team member has used OpenClaw, assume their gateway tokens and device keys may be compromised. Rotate immediately:

  • OpenClaw gateway tokens
  • Any API keys accessible through the agent (OpenAI, cloud providers, SaaS tools)
  • Paired device credentials
  • Any credentials stored in the agent's configuration or accessible through installed skills

3. Treat Agent Configs Like Secrets

AI agent configuration files contain authentication material and should be treated with the same rigor as SSH keys or API tokens:

  • Never store agent configs in plaintext on developer workstations
  • Use a secrets manager or vault to inject credentials at runtime
  • Add agent config files (openclaw.json, device.json, soul.md) to your .gitignore and endpoint monitoring
  • Apply MCP security best practices for any Model Context Protocol configurations

4. Lock Down Default Configurations

Override OpenClaw's insecure defaults before deployment:

  • Bind to 127.0.0.1 only (never 0.0.0.0)
  • Enable authentication on all API endpoints
  • Run agents in sandboxed containers with minimal permissions
  • Disable the /api/export-auth endpoint in production environments

5. Vet Third-Party Skills Like Dependencies

The ClawHub supply chain attack mirrors the patterns we've documented in npm supply chain attacks. Apply the same rigor:

  • Audit every installed skill before deployment
  • Verify publisher identity and publication history
  • Review skill source code for suspicious network calls, file access, or credential exfiltration
  • Pin skill versions and monitor for unexpected updates
  • Prefer skills from verified publishers with established track records

6. Add AI Agents to Your Endpoint Security Strategy

Infostealers are evolving to target AI agent data alongside browser credentials. Your endpoint protection should:

  • Monitor for unauthorized access to agent configuration directories
  • Detect exfiltration of .json and .md files from known agent paths
  • Include agent config files in your data loss prevention rules
  • Extend EDR coverage to detect Vidar and AMOS variants targeting AI tooling

7. Update Your Incident Response Plan

Add AI agent compromise as a specific scenario in your incident response procedures:

  • Define escalation paths for suspected agent credential theft
  • Document which systems and data each agent can access (blast radius mapping)
  • Establish procedures for revoking agent access and rotating paired credentials
  • Include AI agent compromise in your next tabletop exercise

The Bigger Picture

This incident marks a turning point. AI agents are no longer just tools that run on your machine: they hold credentials, access sensitive systems, and operate with significant autonomy. The security community has spent decades learning to protect browser credentials, SSH keys, and API tokens. Now we need to extend that same discipline to AI agent identities.

As OpenClaw's creator Peter Steinberger joins OpenAI and the project transitions to an open-source foundation, the platform will likely mature its security posture. But the fundamental challenge isn't specific to OpenClaw. Every AI agent framework, from MCP-based tools to autonomous coding assistants, faces the same architectural risks: plaintext credentials, overprivileged access, and unvetted third-party extensions.

The organizations that get ahead of this will be those that treat AI agents as first-class entities in their security architecture, not afterthoughts bolted onto existing workflows.


Need help securing your AI agent deployments? Bastion helps SaaS companies build security programs that account for emerging threats like AI agent compromise. Get started today.


Sources

Share this article

Other platforms check the box

We secure the box

Get in touch and learn why hundreds of companies trust Bastion to manage their security and fast-track their compliance.

Get Started