Four Incidents, One Root Cause
12% of OpenClaw's skill registry was compromised. 824 malicious skills. Keyloggers. Credential stealers. One-click RCE from visiting a single webpage.
And nobody governing any of it.
The last few weeks have delivered four major AI security incidents back to back. Each looks different on the surface. Underneath, they share the same root cause: AI agents are being deployed without identity, without behavioral baselines, and without security, control and governance in place.
Here's what happened, what it means, and what the security industry needs to do about it.
🔥 OpenClaw: The First Major AI Agent Supply Chain Crisis
One-click remote code execution. Visit a malicious webpage, lose your machine.
OpenClaw became one of GitHub's fastest-growing repositories in history — 346,000 stars, 135,000 exposed instances across 82 countries. It also became the first major AI agent supply chain crisis of 2026.
Researchers confirmed 341 malicious skills in the OpenClaw registry, escalating to 824+ active malicious skills by early April. These weren't subtle — they included keyloggers, credential stealers targeting OAuth tokens and API keys, and exfiltration tools.
The attack chain for CVE-2026-25253 takes milliseconds. A single malicious webpage is enough. No user interaction required beyond visiting the page. 15,000+ instances were directly vulnerable to remote code execution at the time of disclosure.
But the deeper problem isn't the CVE. It's what happens when employees connect personal AI agents to corporate systems (Slack, Google Workspace, GitHub, CRM) without security team visibility. Those agents hold the employee's OAuth tokens and API access, so whoever compromises the agent inherits that access and can move laterally across the entire enterprise.
RuntimeAI Take
OpenClaw wouldn't be a crisis if enterprises had agent-level discovery. The agents were there for weeks or months. Nobody knew. RuntimeAI's Discovery Pipeline scans 9 categories of AI tools — including running processes, IDE extensions, MCP configurations, Docker containers, and npm/Python packages — and surfaces every shadow agent in the environment within minutes. You can't govern what you can't see. Discovery is the first layer.
Once discovered, the second layer is identity. Every OpenClaw instance operating without a cryptographic identity is an anonymous agent with implicit trust. RuntimeAI issues SPIFFE/X.509 certificates per agent — if an agent can't prove its identity cryptographically, it doesn't get network access. Period.
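The identity check described above, as an added example in Go that is not RuntimeAI's code and does not model its product: a self-signed issuing CA mints a short-lived client certificate per agent whose only identity is a SPIFFE ID in the URI SAN, and an enforcer-side check verifies the chain and the trust domain before granting access. The trust domain `agents.example.internal`, the `/agent/` path convention and the agent name `openclaw-42` are assumptions for the example; the in-memory CA stands in for whatever issuer a real deployment uses.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"errors"
	"fmt"
	"math/big"
	"net/url"
	"strings"
	"time"
)

// trustDomain is an assumed SPIFFE trust domain for this example.
const trustDomain = "agents.example.internal"

// newCA creates a self-signed issuing CA for agent identities (ECDSA P-256).
func newCA() (*x509.Certificate, *ecdsa.PrivateKey, error) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "agent-identity-ca"},
		NotBefore:             time.Now().Add(-time.Minute),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		BasicConstraintsValid: true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, nil, err
	}
	cert, err := x509.ParseCertificate(der)
	return cert, key, err
}

// trustRoots is the set of issuers the enforcer accepts agent identities from.
func trustRoots(ca *x509.Certificate) *x509.CertPool {
	pool := x509.NewCertPool()
	pool.AddCert(ca)
	return pool
}

// issueAgentCert mints a short-lived client certificate whose only identity is
// a SPIFFE ID in the URI SAN, e.g. spiffe://agents.example.internal/agent/openclaw-42.
func issueAgentCert(ca *x509.Certificate, caKey *ecdsa.PrivateKey, agentPath string) (*x509.Certificate, error) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		NotBefore:    time.Now().Add(-time.Minute),
		NotAfter:     time.Now().Add(time.Hour), // short-lived: rotate rather than revoke
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
		URIs:         []*url.URL{{Scheme: "spiffe", Host: trustDomain, Path: agentPath}},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return nil, err
	}
	return x509.ParseCertificate(der)
}

// authorizeAgent is the check a network enforcer runs on the peer certificate
// from an mTLS handshake. It returns the verified SPIFFE ID or an error: no
// certificate, an untrusted issuer, or an ID outside the agent namespace all
// mean no network access.
func authorizeAgent(roots *x509.CertPool, peer *x509.Certificate) (string, error) {
	if peer == nil {
		return "", errors.New("deny: no client certificate presented")
	}
	opts := x509.VerifyOptions{Roots: roots, KeyUsages: []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth}}
	if _, err := peer.Verify(opts); err != nil {
		return "", fmt.Errorf("deny: chain verification failed: %w", err)
	}
	if len(peer.URIs) != 1 || peer.URIs[0].Scheme != "spiffe" {
		return "", errors.New("deny: certificate does not carry exactly one SPIFFE URI SAN")
	}
	id := peer.URIs[0]
	if id.Host != trustDomain || !strings.HasPrefix(id.Path, "/agent/") {
		return "", fmt.Errorf("deny: %s is not an agent identity in this trust domain", id)
	}
	return id.String(), nil
}

func main() {
	ca, caKey, err := newCA()
	if err != nil {
		panic(err)
	}
	roots := trustRoots(ca)
	agent, _ := issueAgentCert(ca, caKey, "/agent/openclaw-42")
	fmt.Println(authorizeAgent(roots, agent)) // spiffe://agents.example.internal/agent/openclaw-42 <nil>
	fmt.Println(authorizeAgent(roots, nil))   // deny: no client certificate presented
}
```

In a real enforcer the peer certificate comes out of the mTLS handshake (`tls.ConnectionState.PeerCertificates`) and a denial means the connection is closed, not a logged warning. A certificate minted by a different CA with the same SPIFFE ID fails chain verification, and a certificate from the right CA with a path outside `/agent/` fails the namespace check, so neither an attacker's self-issued identity nor a borrowed service identity gets an agent onto the network. The one-hour lifetime is deliberate: short-lived identities make rotation the revocation mechanism.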
💰 Mercor / LiteLLM: 40 Minutes, 4 Terabytes
Two poisoned PyPI packages. A $10B AI startup breached. Meta paused all work.
On March 27, the threat group TeamPCP published two malicious versions of LiteLLM (v1.82.7 and v1.82.8) to PyPI. The poisoned packages were live for approximately 40 minutes before detection. That was enough.
Mercor — a $10 billion AI recruiting startup whose customers include Anthropic, OpenAI, and Meta — was one of thousands of companies affected. The extortion group Lapsus$ claims to have obtained 4TB of data, including source code, database records, candidate PII, and employer data.
The fallout was immediate: Meta paused all work with Mercor. OpenAI launched an investigation into its exposure. Reports suggest AI training data and details of secretive AI projects may have been compromised.
RuntimeAI Take
Mercor wouldn't be a 4TB breach if supply chain attestation and runtime behavioral monitoring existed at the agent layer. The compromised LiteLLM package immediately changed its behavioral signature: new egress destinations, credential harvesting, encrypted exfiltration to attacker-controlled infrastructure. These are detectable signals.
RuntimeAI's defense-in-depth approach would have intercepted this at multiple layers. Default-deny egress blocks exfiltration to unknown domains: the attacker's endpoint would never be on the allowlist. That only holds if the allowlist is narrow and scoped per component; exfiltration tunneled through a destination that is legitimately on the list (a cloud storage or SaaS domain, say) is where the next layer has to catch it. Behavioral drift detection flags the moment a known component starts acting differently, allowlisted destination or not. And the kill switch terminates the compromised component at the network layer in under 100ms, with no cooperation from the compromised software required.
With the right layers in place, 40 minutes of exposure is an eternity. It should have been 40 seconds.
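To make those three layers concrete, here is an added Go example (written for this page, not RuntimeAI's code and not a model of its product) that stacks a default-deny egress allowlist, a learned per-component baseline with a drift score, and a kill switch, and runs constructed connection events through them. The allowlist contents, the drift points per signal, the kill threshold, the component name `litellm` and the attacker address `203.0.113.10` (a documentation range) are all assumptions for the example.

```go
package main

import "fmt"

// Conn is one observed outbound connection. The events in main are constructed
// for this example; a real enforcer would take them from a proxy, sidecar or eBPF probe.
type Conn struct {
	Component string // workload that opened the connection, e.g. "litellm"
	Dest      string // destination host or IP
	Port      int
	Bytes     int // bytes sent on the flow
}

type Verdict struct {
	Allowed bool
	Reason  string
	Drift   int  // component's cumulative drift score after this event
	Killed  bool // kill switch has quarantined the component
}

type profile struct {
	dests    map[string]bool
	ports    map[int]bool
	maxBytes int
}

// Enforcer layers three controls: an environment-wide egress allowlist (default
// deny), a learned per-component baseline for drift detection, and a kill switch
// that quarantines a component once its drift score reaches KillAt.
type Enforcer struct {
	allow       map[string]bool
	baseline    map[string]*profile
	drift       map[string]int
	quarantined map[string]bool
	KillAt      int
}

func NewEnforcer(allowlist []string, killAt int) *Enforcer {
	e := &Enforcer{allow: map[string]bool{}, baseline: map[string]*profile{},
		drift: map[string]int{}, quarantined: map[string]bool{}, KillAt: killAt}
	for _, d := range allowlist {
		e.allow[d] = true
	}
	return e
}

// Learn records normal behaviour. Only allowlisted traffic is learned, so a
// component that was already misbehaving during the window cannot baseline it.
func (e *Enforcer) Learn(c Conn) {
	if !e.allow[c.Dest] {
		return
	}
	p := e.baseline[c.Component]
	if p == nil {
		p = &profile{dests: map[string]bool{}, ports: map[int]bool{}}
		e.baseline[c.Component] = p
	}
	p.dests[c.Dest], p.ports[c.Port] = true, true
	if c.Bytes > p.maxBytes {
		p.maxBytes = c.Bytes
	}
}

// Enforce decides one connection and updates drift. The two signals are
// independent: an allowlisted destination is still drift if this component never
// used it, and a non-allowlisted destination is denied whatever the drift score.
func (e *Enforcer) Enforce(c Conn) Verdict {
	if e.quarantined[c.Component] {
		return Verdict{false, "quarantined by kill switch", e.drift[c.Component], true}
	}
	points := 0
	if p := e.baseline[c.Component]; p == nil {
		points = 5 // unknown component: no baseline at all
	} else {
		if !p.dests[c.Dest] {
			points += 3
		}
		if !p.ports[c.Port] {
			points += 2
		}
		if p.maxBytes > 0 && c.Bytes > 10*p.maxBytes {
			points += 2
		}
	}
	e.drift[c.Component] += points
	v := Verdict{Allowed: e.allow[c.Dest], Drift: e.drift[c.Component], Reason: "allowed, matches baseline"}
	if !v.Allowed {
		v.Reason = "default deny: destination not allowlisted"
	} else if points > 0 {
		v.Reason = fmt.Sprintf("allowed but drifted (+%d)", points)
	}
	if v.Drift >= e.KillAt {
		e.quarantined[c.Component] = true
		v.Allowed, v.Killed, v.Reason = false, true, v.Reason+"; kill switch fired"
	}
	return v
}

func main() {
	e := NewEnforcer([]string{"api.openai.com", "storage.googleapis.com"}, 6)
	e.Learn(Conn{"litellm", "api.openai.com", 443, 4096}) // learning window: the genuine package
	for _, c := range []Conn{ // constructed post-upgrade traffic
		{"litellm", "api.openai.com", 443, 4096},         // normal
		{"litellm", "storage.googleapis.com", 443, 4096}, // allowlisted, but this component never used it
		{"litellm", "203.0.113.10", 443, 50 << 20},        // attacker endpoint, bulk upload
		{"litellm", "api.openai.com", 443, 4096},         // quarantined by now
	} {
		fmt.Printf("%-24s %+v\n", c.Dest, e.Enforce(c))
	}
}
```

The second event is the one worth studying: `storage.googleapis.com` is on the allowlist, so an allowlist alone passes it, but this component had never used it, so drift accumulates. The third event is both denied outright and pushes drift past the threshold, and from then on even the component's legitimate provider traffic is refused until someone investigates. Learning only from allowlisted traffic matters too: a package that was already exfiltrating during the baseline window cannot teach the enforcer that exfiltration is normal.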
🔗 CVE-2026-32211: Microsoft's Own MCP Server Had Zero Auth
Network access = full data access. No credentials needed. API keys, auth tokens, project configs exposed.
On April 3, Microsoft disclosed CVE-2026-32211 — a missing authentication vulnerability in the Azure DevOps MCP server (@azure-devops/mcp on npm). CVSS score: 9.1.
The vulnerability is as straightforward as it is damaging: the MCP server had no authentication on critical endpoints. Anyone with network access could reach it. No credentials required. Exposed data included configuration details, API keys, authentication tokens, and project data.
As of this writing, no patch is available. Microsoft has published mitigation guidance recommending network segmentation and firewall rules.
RuntimeAI Take
This is the MCP security problem in one CVE. MCP servers aggregate access to enterprise systems — databases, repos, CI/CD pipelines, cloud infrastructure. When the protocol layer itself ships without authentication, every connected system is exposed.
This wouldn't matter if every MCP tool call went through a governed gateway with authentication enforced by default. That's exactly what RuntimeAI's AI Integration Fabric does: a 10-layer security, control and governance pipeline on every tool invocation — identity verification, access package check, OPA policy evaluation, rate limiting, behavioral risk scoring, input DLP, credential injection from Vault (agents never hold real keys), tool execution, output DLP, and immutable audit logging.
The Azure flaw is a missing front door. RuntimeAI's AI Integration Fabric is the front door, with ten locks on it.
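An added Go example of a governed tool-call gateway, written for this page rather than taken from RuntimeAI's AI Integration Fabric or from the Azure DevOps MCP server: it runs each call through identity, access-package, rate-limit, input-DLP, credential-injection, execution, output-DLP and audit stages (the OPA policy and behavioral-risk stages of the ten-layer pipeline are left out). The agent's SPIFFE ID, the tool name `azure_devops.list_repos`, the vault contents, the secret-matching regex and the simulated tool handler are all constructed for the example.

```go
package main

import (
	"errors"
	"fmt"
	"regexp"
)

// ToolCall is an agent's request as it reaches the gateway. AgentID is the
// SPIFFE ID the transport layer verified from the client certificate; the
// gateway never trusts an identity string the caller put in the request body.
type ToolCall struct {
	AgentID string
	Tool    string
	Args    map[string]string
}

type AuditRecord struct {
	AgentID, Tool, Stage, Detail string
	Allowed                      bool
}

type toolFunc func(args map[string]string, cred string) (string, error)

// Gateway holds per-agent access packages, the credential vault (the only
// place real secrets exist), a per-agent call budget and the audit trail.
type Gateway struct {
	accessPackages map[string]map[string]bool // agent -> tools it may call
	vault          map[string]string          // tool -> credential injected at execution
	tools          map[string]toolFunc
	calls          map[string]int
	RateLimit      int
	Audit          []AuditRecord
}

// secretPattern is a deliberately small DLP rule, constructed for the example.
var secretPattern = regexp.MustCompile(`(?i)(?:api[_-]?key|token|secret|password)\s*[:=]\s*\S+|AKIA[0-9A-Z]{16}|ghp_[A-Za-z0-9]{20,}`)

func (g *Gateway) deny(c ToolCall, stage, why string) (string, error) {
	g.Audit = append(g.Audit, AuditRecord{c.AgentID, c.Tool, stage, why, false})
	return "", fmt.Errorf("%s: %s", stage, why)
}

// Invoke runs one call through the pipeline: identity, access package, rate
// limit, input DLP, credential injection, execution, output DLP, audit.
func (g *Gateway) Invoke(c ToolCall) (string, error) {
	pkg, known := g.accessPackages[c.AgentID]
	if c.AgentID == "" || !known {
		return g.deny(c, "identity", "no verified agent identity") // the front door the Azure server lacked
	}
	if !pkg[c.Tool] {
		return g.deny(c, "access-package", "tool not granted to this agent")
	}
	g.calls[c.AgentID]++
	if g.calls[c.AgentID] > g.RateLimit {
		return g.deny(c, "rate-limit", "per-agent call budget exceeded")
	}
	for k, v := range c.Args {
		if secretPattern.MatchString(k + "=" + v) {
			return g.deny(c, "input-dlp", "argument carries a credential; agents never hold real keys")
		}
	}
	cred, ok := g.vault[c.Tool]
	tool := g.tools[c.Tool]
	if !ok || tool == nil {
		return g.deny(c, "credential-injection", "no vault entry or handler for tool")
	}
	out, err := tool(c.Args, cred) // the credential exists only for the duration of this call
	if err != nil {
		return g.deny(c, "execution", err.Error())
	}
	out = secretPattern.ReplaceAllString(out, "[REDACTED]") // output DLP: never echo an injected secret back
	g.Audit = append(g.Audit, AuditRecord{c.AgentID, c.Tool, "execution", fmt.Sprintf("%d bytes returned", len(out)), true})
	return out, nil
}

// NewGateway wires one constructed agent, one tool and one vault entry.
func NewGateway(rateLimit int) *Gateway {
	const ciBot = "spiffe://agents.example.internal/agent/ci-bot"
	g := &Gateway{
		accessPackages: map[string]map[string]bool{ciBot: {"azure_devops.list_repos": true}},
		vault:          map[string]string{"azure_devops.list_repos": "pat-0123456789"}, // constructed, not a real PAT
		tools:          map[string]toolFunc{},
		calls:          map[string]int{},
		RateLimit:      rateLimit,
	}
	// Simulated tool. A real handler would call the Azure DevOps API with cred; this
	// one sloppily echoes cred in its output so the output-DLP stage has work to do.
	g.tools["azure_devops.list_repos"] = func(args map[string]string, cred string) (string, error) {
		if args["project"] == "" {
			return "", errors.New("project is required")
		}
		return fmt.Sprintf("project=%s repos=[infra-as-code, api] token=%s", args["project"], cred), nil
	}
	return g
}

func main() {
	g, ci := NewGateway(2), "spiffe://agents.example.internal/agent/ci-bot"
	fmt.Println(g.Invoke(ToolCall{"", "azure_devops.list_repos", map[string]string{"project": "platform"}})) // identity: denied
	fmt.Println(g.Invoke(ToolCall{ci, "azure_devops.delete_repo", nil}))                                   // access-package: denied
	fmt.Println(g.Invoke(ToolCall{ci, "azure_devops.list_repos", map[string]string{"project": "platform", "token": "ghp_abcdefghijklmnopqrstuvwxyz"}})) // input-dlp: denied
	fmt.Println(g.Invoke(ToolCall{ci, "azure_devops.list_repos", map[string]string{"project": "platform"}})) // allowed, token redacted
	fmt.Println(g.Invoke(ToolCall{ci, "azure_devops.list_repos", map[string]string{"project": "platform"}})) // rate-limit: denied
}
```

The identity stage is the whole CVE in one line: with no verified `AgentID` the call never reaches a tool, whatever network the caller is on. Note also that the agent never sees the credential. It is looked up from the vault at execution time and scrubbed from the response on the way back, so a compromised agent cannot exfiltrate what it never held, and every decision, allowed or denied, leaves an audit record naming the stage that made it.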
📊 SANS Top 5: Every Attack Technique Now Has an AI Dimension
For the first time in this keynote's history, every single attack technique carries an AI dimension.
At RSAC 2026, SANS released their annual "Top 5 Most Dangerous New Attack Techniques." For the first time in the history of this keynote, every single one carries an AI dimension:
- AI-Generated Zero Days — What once required months and millions from nation-state brokers now takes hours with AI
- AI-Accelerated Supply Chain Attacks — 65% of organizations experienced a supply chain attack in 2026; 454,000+ malicious packages in 2025 alone
- AI Forensic Failures — Over-reliance on AI in incident response creates dangerous blind spots
- AI-Targeted OT Complexity — AI-powered attacks increasingly targeting operational technology
- AI-Accelerated Attack Lifecycles — Breach to lateral movement in 8 minutes
RuntimeAI Take
SANS just made it official: the threat model changed. Every enterprise deploying AI agents without identity, without behavioral monitoring, without a runtime firewall is running the exact exposure profile SANS just put on the map.
Eight minutes from breach to lateral movement means your response has to be automated, not human-paced. RuntimeAI's kill switch propagates across all Enforcer instances in under 100ms via the Global Signal Bus. Behavioral anomaly detection doesn't wait for a SOC analyst to notice — drift from baseline triggers policy action automatically. The 21-checkpoint Data Plane Enforcer evaluates every LLM API call inline, in under 50ms. That's the kind of speed a post-SANS world demands.
The Pattern
These aren't four isolated incidents. They're four symptoms of the same systemic failure — AI agents deployed without security, control and governance:
| Incident | Root Cause | Missing Layer |
|---|---|---|
| OpenClaw | Shadow AI agents with no visibility | Agent Discovery |
| Mercor / LiteLLM | No runtime behavioral monitoring | Behavioral Intelligence + Kill Switch |
| Azure MCP CVE | No auth on MCP tool access | AI Integration Fabric with Identity |
| SANS Top 5 | AI agents are the new attack surface | Full Security, Control & Governance Plane |
The security industry spent 20 years building identity, access control, and zero trust for humans. AI agents got none of it. They ship with implicit trust, hardcoded credentials, and zero audit trails.
The question isn't whether your org will have an AI agent incident. It's whether you'll know about it when it happens.
What We Built to Close the Gap
RuntimeAI is a full security, control and governance platform for AI agents. Not another scanner. Not another dashboard. A complete enforcement layer that sits between your agents and everything they touch:
- 🔍 AI Discovery — 9 scanner categories. Find every agent — registered or shadow — across endpoints, VMs, cloud, containers, IDEs, and MCP configurations
- 🪪 Agent Identity Fabric — SPIFFE/X.509 cryptographic identity per agent. No anonymous tool calls. No implicit trust.
- 🛡️ 21-Checkpoint Data Plane Enforcer — Every LLM API call passes through identity, policy, DLP, behavioral analysis, and audit — inline, in under 50ms
- 🔗 AI Integration Fabric — 10-layer security, control and governance pipeline on every tool invocation. Identity, access packages, DLP, credential injection, audit.
- 🧠 Behavioral Intelligence — Drift detection catches the moment a component starts acting differently than its established baseline
- 🚨 Kill Switch — Sub-100ms quarantine propagated to every Enforcer via the Global Signal Bus. No cooperation from compromised software needed.
- 📋 Compliance Hub — SOC 2, FedRAMP, EU AI Act, HIPAA evidence generated automatically from real telemetry
- 💰 Cost Intelligence — Token-level budget enforcement. Runaway agents hit a wall, not your invoice.
The era of ungoverned AI agents was never sustainable. These four incidents proved it.
Govern Your AI Agents Before Attackers Find Them
See how RuntimeAI provides identity, policy enforcement, behavioral monitoring, and a kill switch for every AI agent in your environment.
Request a Demo →