Llama.cpp extension compares local model performance
OPEN_SOURCE ↗
REDDIT // 20d ago // OPEN_SOURCE RELEASE

This Chrome extension provides a historical performance dashboard for local LLM users by intercepting SSE streams from `llama.cpp` server UIs. It enables users to track, aggregate, and visualize metrics like tokens per second and latency across multiple sessions, hardware configurations, and model swaps.
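The headline tokens-per-second figure can be derived from the `timings` object that the llama.cpp server includes in the final chunk of its `/completion` SSE stream. A minimal sketch in TypeScript, assuming the field names `predicted_n` (tokens generated) and `predicted_ms` (generation wall time) as reported by the server; this is an illustration, not the extension's actual parser:

```typescript
// Assumed shape of the llama.cpp server's final-chunk timing stats.
interface Timings {
  predicted_n: number;   // tokens generated
  predicted_ms: number;  // generation wall time in milliseconds
}

// Scan a finished SSE body for the last chunk carrying a `timings`
// object and return generation throughput in tokens/second, or null
// if no usable timings were present.
function tokensPerSecond(sseBody: string): number | null {
  let timings: Timings | null = null;
  for (const line of sseBody.split("\n")) {
    if (!line.startsWith("data: ")) continue;
    const payload = line.slice("data: ".length).trim();
    if (payload === "[DONE]") continue;
    try {
      const chunk = JSON.parse(payload);
      if (chunk.timings) timings = chunk.timings as Timings;
    } catch {
      // Ignore keep-alive or partially delivered lines.
    }
  }
  if (!timings || timings.predicted_ms <= 0) return null;
  return (timings.predicted_n / timings.predicted_ms) * 1000;
}
```

Aggregating this value per session, keyed by model name and hardware profile, is all the dashboard needs to chart throughput over time.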

// ANALYSIS

This "quality of life" tool for the local LLM community turns ephemeral console logs into actionable hardware-optimization data. It intercepts SSE streams at the browser's fetch API rather than hooking any one page, which keeps it compatible across the various web-based UIs, and it stores all history in local IndexedDB for privacy. The extension offers side-by-side model rankings and scatter plots for spotting optimal performance ratios across configurations, alongside JSONL export and PNG dashboard stitching for deeper analysis.
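The fetch-level interception might look something like the following sketch: the page's `fetch` is wrapped, and any streaming response from a completion endpoint is `tee()`'d so the extension reads a copy of the byte stream while the UI keeps its own. The `/completion` path check and the `MetricsSink` callback are illustrative assumptions, not the extension's actual code:

```typescript
// Callback that receives each raw SSE chunk for metrics processing.
type MetricsSink = (chunk: Uint8Array) => void;

// Patch the global fetch so llama.cpp completion streams are duplicated:
// one branch goes back to the page untouched, the other feeds the sink.
function installFetchTap(sink: MetricsSink): void {
  const originalFetch = globalThis.fetch;
  globalThis.fetch = async (input: RequestInfo | URL, init?: RequestInit) => {
    const response = await originalFetch(input, init);
    const url =
      typeof input === "string" ? input
      : input instanceof URL ? input.href
      : input.url;
    // Only tap completion endpoints (assumed llama.cpp route) with a body.
    if (!url.includes("/completion") || !response.body) return response;

    const [forPage, forTap] = response.body.tee();
    // Drain the tap branch in the background; the page reads `forPage`.
    (async () => {
      const reader = forTap.getReader();
      for (;;) {
        const { done, value } = await reader.read();
        if (done) break;
        if (value) sink(value);
      }
    })();
    return new Response(forPage, {
      status: response.status,
      statusText: response.statusText,
      headers: response.headers,
    });
  };
}
```

Because `tee()` never consumes the original stream on the page's behalf, the UI's own reader sees the exact bytes it would have without the extension installed.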

// TAGS
llamacpp · browser-extension · llm · inference · open-source · devtool · llamacpp-ui-metrics-extension

DISCOVERED

20d ago (2026-03-23)

PUBLISHED

20d ago (2026-03-23)

RELEVANCE

8 / 10

AUTHOR

colonel_whitebeard