REDDIT · 5h ago · SECURITY INCIDENT
X user tricks Grok into $200K transfer
An X user reportedly exploited Grok's connection to Bankrbot by getting the chatbot to translate and pass along a Morse code message, which the downstream bot treated as an authorized command and which triggered a transfer of 3 billion DRB tokens on Base. The attacker quickly sold the tokens. The incident highlights how indirect prompt injection becomes a financial control issue when an AI model is wired into wallet-linked automation.
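The reported failure mode can be sketched in a few lines. This is a hypothetical illustration, not Bankrbot's actual API: the model performs an apparently harmless translation, and a downstream system executes the decoded text verbatim.

```python
# Hypothetical sketch of the reported attack path. The Morse table,
# command syntax, and execute() are illustrative assumptions, not the
# real Grok/Bankrbot interface.

MORSE = {"...": "S", ".": "E", "-.": "N", "-..": "D"}

def translate_morse(msg: str) -> str:
    """The 'harmless' translation task the chatbot is asked to do."""
    return "".join(MORSE.get(tok, "?") for tok in msg.split())

def execute(command: str) -> str:
    """Downstream automation that trusts model output verbatim (the flaw)."""
    if command.startswith("SEND"):
        return f"executed: {command}"  # e.g. an on-chain transfer
    return "ignored"

# Attacker embeds a command as Morse; translation turns relay into execution.
decoded = translate_morse("... . -. -..")  # -> "SEND"
result = execute(decoded)
```

The point of the sketch is that neither step looks dangerous in isolation; the vulnerability lives in the trust boundary between them.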
// ANALYSIS
This is a security incident, not a product feature, and the interesting part is the trust boundary failure between a general-purpose chatbot and an execution-capable bot.
- The attack path appears to be prompt injection through a translation task, then command forwarding into a privileged automation system.
- The weak point was not just Grok, but Grok's ability to act as a bridge into Bankrbot permissions.
- Crypto-connected AI workflows need strict command parsing, human confirmation, and capability scoping before any on-chain action.
- If the reported sequence is accurate, this is a clean example of how "just relaying text" can still become execution when downstream systems treat model output as trusted input.
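The mitigations listed above (strict parsing, capability scoping, human confirmation) can be sketched as a gate between model output and execution. All names, the command syntax, and the limits here are illustrative assumptions, not a real wallet integration:

```python
# Hypothetical defense-in-depth gate for model-driven on-chain actions.
# Command grammar, allowlist, and caps are invented for illustration.
from dataclasses import dataclass
from typing import Optional

ALLOWED_TOKENS = {"DRB"}   # capability scoping: explicit allowlist
MAX_AMOUNT = 1_000         # hard per-action cap, far below treasury scale

@dataclass
class TransferCommand:
    token: str
    amount: int
    recipient: str

def parse_command(text: str) -> Optional[TransferCommand]:
    """Strict parsing: only an exact, expected shape is accepted.
    Free-form model output (e.g. decoded Morse) fails to parse."""
    parts = text.split()
    if len(parts) != 4 or parts[0] != "TRANSFER":
        return None
    token, amount_s, recipient = parts[1], parts[2], parts[3]
    if token not in ALLOWED_TOKENS or not amount_s.isdigit():
        return None
    return TransferCommand(token, int(amount_s), recipient)

def authorize(cmd: TransferCommand, human_confirmed: bool) -> bool:
    """No on-chain action without both policy checks and human sign-off."""
    return cmd.amount <= MAX_AMOUNT and human_confirmed

# A 3B-token transfer with no confirmation fails both checks.
cmd = parse_command("TRANSFER DRB 3000000000 0xattacker")
blocked = cmd is None or not authorize(cmd, human_confirmed=False)
```

The design choice is that each layer fails closed: unparseable text is dropped, out-of-policy amounts are rejected, and even a valid command waits for a human.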
// TAGS
grok · xai · bankrbot · security · crypto · base · safety
DISCOVERED
2026-05-05
PUBLISHED
2026-05-05
RELEVANCE
9/10
AUTHOR
ImCalcium