OPEN_SOURCE
REDDIT · BENCHMARK RESULT
RTX 5090 drives Gemma 4 26B-A4B
Early testing on a modified vLLM build with NVFP4 support shows Gemma 4 26B-A4B running comfortably on an RTX 5090 with full context enabled. The poster says the model weights take about 15.76 GiB, leaving the rest of the card's VRAM for KV cache, and reports around 150 tokens/s on a storytelling prompt with thinking disabled, plus roughly 80 ms TTFT in streaming mode. They also describe the output quality as good.
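The reported figures can be sanity-checked with simple arithmetic. A minimal sketch, assuming a 32 GiB RTX 5090 and 26B total parameters (both assumptions, not stated in the post) and taking the 15.76 GiB weight figure at face value:

```python
# Back-of-envelope VRAM budget for the reported setup.
# Assumptions (not from the post): 32 GiB RTX 5090, 26e9 total params.

GIB = 1024 ** 3

total_vram_gib = 32.0    # assumed RTX 5090 capacity
weights_gib = 15.76      # reported NVFP4 weight footprint

# Whatever the weights do not use is available for KV cache,
# activations, and runtime overhead.
kv_budget_gib = total_vram_gib - weights_gib

# Implied average storage precision across all parameters, in bits.
implied_bits_per_param = weights_gib * GIB * 8 / 26e9

print(f"KV cache + overhead budget: {kv_budget_gib:.2f} GiB")   # ~16.24 GiB
print(f"Implied precision: {implied_bits_per_param:.2f} bits/param")  # ~5.21
```

The implied ~5.2 bits/param is plausibly consistent with 4-bit NVFP4 values plus per-block scales and some layers kept at higher precision, though the exact breakdown depends on the quantization recipe.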
// ANALYSIS
Strong-looking local inference numbers, but this is a highly tuned single-user datapoint rather than a general benchmark.
- The headline win is throughput: ~150 t/s on a consumer GPU is a solid result for a 26B-class model.
- Full-context operation matters here, since the report suggests the memory budget still leaves room for KV cache instead of forcing aggressive compromise.
- The stack is custom, so results depend on the specific vLLM fork, NVFP4 path, prompt style, and sampling settings.
- TTFT around 80 ms makes the setup feel responsive enough for interactive use, not just batch generation.
- Quality being described as good is the real signal that the speedup is not obviously coming at the cost of output usefulness.
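The interactivity claim follows from the two reported numbers: total wall time for a reply is TTFT plus decode time. A quick sketch, where the 512-token reply length is an illustrative assumption:

```python
# Rough interactivity estimate from the reported figures.

ttft_s = 0.080          # reported ~80 ms time to first token
throughput_tps = 150.0  # reported decode throughput, tokens/s
reply_tokens = 512      # assumed reply length (not from the post)

total_s = ttft_s + reply_tokens / throughput_tps
print(f"~{total_s:.1f} s for a {reply_tokens}-token reply")  # ~3.5 s
```

At these rates, generation comfortably outpaces reading speed, which is what makes the single-user experience feel interactive rather than batch-like.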
// TAGS
gemma-4 · gemma-4-26b-a4b · rtx-5090 · vllm · nvfp4 · local-llm · benchmark · inference · tokens-per-second
DISCOVERED
2026-04-06
PUBLISHED
2026-04-06
RELEVANCE
8/10
AUTHOR
Nice_Cellist_7595