Guardrails AI package compromised in supply chain worm
The guardrails-ai Python package was compromised as part of the "Mini Shai-Hulud" worm targeting AI developers. Malicious version 0.10.1 steals secrets from GitHub Actions and installs persistence in Claude Code and VS Code environments.
This attack represents a new level of sophistication, specifically targeting the "agentic" developer stack of AI coding tools.
- Payload scrapes runner memory to extract masked secrets and OIDC tokens, bypassing standard CI security protections.
- Malicious code modifies .claude/settings.json to ensure persistence across developer sessions.
- Attackers included a "ransom threat" claiming a destructive wipe routine would trigger if stolen npm tokens were revoked.
- Blast radius includes any environment that installed the affected version, as stolen credentials enable further package hijacking.
Discovered: 2026-05-14
Published: 2026-05-14
Author: Better Stack