OPEN_SOURCE ↗
HN · HACKER_NEWS // 31d ago // SECURITY INCIDENT
McKinsey's Lilli falls to autonomous exploit
Security startup CodeWall says its autonomous offensive agent found an unauthenticated SQL injection and IDOR chain in McKinsey's internal AI platform Lilli, gaining read-write access to chat data, files, user accounts, prompts, and RAG assets within about two hours. McKinsey told The Register it fixed the issues within hours and found no evidence that client data or confidential information were accessed by the researcher or any other unauthorized third party.
// ANALYSIS
Autonomous attack tooling is turning old web bugs into AI-era systemic failures, and the most important lesson here is that prompt layers now need the same protection as code and secrets.
- CodeWall claims one exposed endpoint plus blind SQLi iteration was enough to reach 46.5 million chat messages, 728,000 files, and writable system prompts
- The scary part is not just data exposure but prompt tampering, which could silently poison answers, citations, and guardrails for thousands of internal users
- This is a sharp warning for enterprise AI teams building RAG-heavy internal tools: prompts, retrieval stores, and user content should not sit in one loosely protected blast radius
- McKinsey's rapid patching matters, but the incident still shows how conventional AppSec gaps become much more damaging once an AI layer is attached
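The two flaws named above, SQL injection and IDOR, have textbook mitigations: parameterized queries and an object-level ownership check before returning any record. The sketch below is a hypothetical endpoint handler (not Lilli's actual code; the schema and names are invented for illustration) showing both defenses together:

```python
import sqlite3

def get_chat_messages(db, requesting_user_id, chat_id):
    """Return a chat's messages only if the requester owns the chat.

    The parameterized query (? placeholders) blocks SQL injection;
    the ownership check blocks IDOR, i.e. reading another user's
    chat simply by guessing or iterating its numeric ID.
    """
    row = db.execute(
        "SELECT owner_id FROM chats WHERE id = ?", (chat_id,)
    ).fetchone()
    if row is None or row[0] != requesting_user_id:
        # Same error for "missing" and "not yours" avoids leaking
        # which IDs exist to an enumerating attacker.
        raise PermissionError("chat not found")
    return [
        body for (body,) in db.execute(
            "SELECT body FROM messages WHERE chat_id = ? ORDER BY id",
            (chat_id,),
        )
    ]

# Demo with an in-memory database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE chats (id INTEGER PRIMARY KEY, owner_id INTEGER)")
db.execute(
    "CREATE TABLE messages (id INTEGER PRIMARY KEY, chat_id INTEGER, body TEXT)"
)
db.execute("INSERT INTO chats VALUES (1, 42)")
db.execute("INSERT INTO messages VALUES (1, 1, 'hello')")

print(get_chat_messages(db, 42, 1))  # owner sees messages: ['hello']
try:
    get_chat_messages(db, 99, 1)     # non-owner is denied (IDOR blocked)
except PermissionError as exc:
    print("denied:", exc)
```

An unauthenticated endpoint skips both checks by definition, which is why one exposed route was enough to chain the two bugs here.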
// TAGS
lilli · agent · llm · rag · automation · safety
DISCOVERED
31d ago
2026-03-11
PUBLISHED
32d ago
2026-03-11
RELEVANCE
8/10
AUTHOR
mycroft_4221