Llama.cpp extension compares local model performance
This Chrome extension provides a historical performance dashboard for local LLM users by intercepting SSE streams from `llama.cpp` server UIs. It enables users to track, aggregate, and visualize metrics like tokens per second and latency across multiple sessions, hardware configurations, and model swaps.
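The tokens-per-second metric can be derived from the SSE stream itself: llama.cpp's streaming endpoints emit roughly one `data:` event per generated token, so counting events between the first and last arrival gives a throughput estimate. A minimal sketch of such a meter follows; the helper name `createSseMeter` and the one-token-per-event assumption are illustrative, not taken from the extension's actual source.

```javascript
// Sketch: accumulate SSE chunks and estimate tokens/second.
// Assumption: the server streams roughly one token per "data:" event,
// as llama.cpp's streaming completion endpoint typically does.
function createSseMeter() {
  let buffer = "";
  let tokenCount = 0;
  let firstTokenAt = null;
  let lastTokenAt = null;

  return {
    // Feed decoded text chunks from the response body as they arrive.
    // `now` is injectable to keep the sketch testable.
    feed(chunk, now = Date.now()) {
      buffer += chunk;
      const events = buffer.split("\n\n");
      buffer = events.pop(); // retain any incomplete trailing event
      for (const event of events) {
        for (const line of event.split("\n")) {
          if (!line.startsWith("data: ")) continue;
          if (line.slice(6) === "[DONE]") continue;
          tokenCount += 1;
          if (firstTokenAt === null) firstTokenAt = now;
          lastTokenAt = now;
        }
      }
    },
    // Aggregate metrics suitable for a dashboard record.
    // Throughput uses the N-1 inter-token intervals between first and last event.
    stats() {
      const elapsedMs = firstTokenAt === null ? 0 : lastTokenAt - firstTokenAt;
      return {
        tokens: tokenCount,
        tokensPerSecond: elapsedMs > 0 ? (tokenCount - 1) / (elapsedMs / 1000) : 0,
      };
    },
  };
}
```

An alternative, when the server is configured to report timings, is to read the throughput directly from the final SSE payload instead of measuring wall-clock intervals.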
This quality-of-life tool for the local LLM community turns ephemeral console logs into actionable hardware-optimization data. It intercepts SSE streams by wrapping the browser's fetch API, which keeps it compatible across web-based UIs, and stores everything in local IndexedDB for privacy. The extension offers side-by-side model rankings and scatter plots for finding optimal performance ratios, plus JSONL export and PNG dashboard stitching for deeper analysis.
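One common way to observe an SSE body without disturbing the page is to tee the response's `ReadableStream`: one branch is handed back to the page inside a fresh `Response`, the other is drained in the background for metrics. The sketch below shows that pattern; `tapResponse` and `onChunk` are hypothetical names, and this is one plausible shape for the interception, not the extension's confirmed implementation.

```javascript
// Sketch: tee a fetch Response so the page's reader is unaffected
// while a second branch feeds a metrics collector.
// Works in browsers and Node 18+ (which provide Response/ReadableStream).
function tapResponse(response, onChunk) {
  if (!response.body) return response;
  const [forPage, forMeter] = response.body.tee();

  // Drain the metrics branch in the background, decoding bytes to text.
  (async () => {
    const decoder = new TextDecoder();
    const reader = forMeter.getReader();
    for (;;) {
      const { done, value } = await reader.read();
      if (done) break;
      onChunk(decoder.decode(value, { stream: true }));
    }
  })();

  // Hand the page an equivalent Response backed by the other branch.
  return new Response(forPage, {
    status: response.status,
    headers: response.headers,
  });
}
```

In an extension, a wrapper around `window.fetch` would call `tapResponse` on responses whose URL matches the llama.cpp completion endpoint and return the re-wrapped Response to the page.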
DISCOVERED
2026-03-23
PUBLISHED
2026-03-23
AUTHOR
colonel_whitebeard