OPEN_SOURCE
YT · YOUTUBE // 28d ago // TUTORIAL
Markdown drops AI agent token usage 99.7%
Checkly's analysis reveals that using HTTP content negotiation to serve Markdown instead of HTML to AI agents reduces token consumption by 99.7% — from 180,573 tokens to just 478 per page. Only Claude Code, Cursor, and OpenCode currently send the `Accept: text/markdown` header; the rest of the major agents still default to HTML.
// ANALYSIS
A 99.7% token reduction is not a micro-optimization — at agent scale, this is the difference between viable and unaffordable. The fix is a one-line header check on the server side, which makes the ROI absurdly high.
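The "one-line header check" can be sketched as follows. This is a minimal illustration, not Checkly's actual implementation: a hypothetical `negotiate_content` helper that inspects the incoming `Accept` header and picks the Markdown body when an agent asks for it. A production server would parse q-values per the HTTP content-negotiation spec, but the agents that opt in today send a plain `Accept: text/markdown`.

```python
def negotiate_content(accept_header: str, markdown_body: str, html_body: str):
    """Return (content_type, body) based on the client's Accept header.

    Minimal sketch: agents like Claude Code, Cursor, and OpenCode send
    `Accept: text/markdown`; everyone else falls through to HTML.
    """
    if "text/markdown" in (accept_header or ""):
        return "text/markdown; charset=utf-8", markdown_body
    return "text/html; charset=utf-8", html_body
```

Because the check runs before rendering, the server can skip HTML templating entirely for agent traffic and serve a pre-rendered `.md` file.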
- HTTP content negotiation is a decades-old standard being repurposed as the emerging handshake between agents and web servers — no new protocol needed
- Most major agents (OpenAI Codex, Gemini CLI, GitHub Copilot, Windsurf) still don't send markdown preference headers, leaving massive savings on the table
- Checkly pairs this with llms.txt and structured "agent skills" — a layered approach to making web content agent-native
- For teams running high-frequency agents that fetch documentation or web content, this is immediately actionable with no API changes required
- The finding aligns with Cloudflare and Vercel data, suggesting this is becoming an industry-wide pattern worth standardizing around
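On the client side, opting in is just as small. A hedged sketch, using only the standard library, of what an agent would add to its page fetches; the URL is a placeholder, not a real endpoint:

```python
import urllib.request

# An agent opting in to Markdown responses simply sets the Accept header.
# A content-negotiating server returns Markdown; any other server ignores
# the header and serves HTML as usual, so the change is backward-compatible.
req = urllib.request.Request(
    "https://docs.example.com/page",
    headers={"Accept": "text/markdown"},
)
```

That backward compatibility is why no new protocol is needed: servers that don't negotiate simply ignore the header.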
// TAGS
agent · llm · devtool · api · checkly · inference
DISCOVERED
2026-03-15 (28d ago)
PUBLISHED
2026-03-15 (28d ago)
RELEVANCE
8/10
AUTHOR
DIY Smart Code