OPEN_SOURCE
REDDIT · 12d ago · BENCHMARK RESULT
DGX Spark q4_0 Craters at 64K
This Reddit post shares a benchmark sweep on the NVIDIA DGX Spark GB10 showing that llama.cpp KV cache quantization is not uniformly beneficial at long context lengths. In the reported setup, q4_0 is roughly comparable to f16 at 8K and 32K, but at 64K it collapses from 283 tps to 21 tps while also using more RSS memory than f16. The author attributes the regression to dequantization traffic saturating unified memory bandwidth, and argues that q8_0 avoids the cliff with only a small speed penalty.
// ANALYSIS
Strong signal, but it is a hardware-specific warning, not a general indictment of KV quantization.
- The interesting part is the 64K cliff: performance degrades catastrophically only once the KV cache gets large enough to stress unified memory.
- The memory result is counterintuitive but plausible on GB10, where dequantization workspace and metadata can outweigh int4 savings.
- The practical takeaway is that low-bit KV cache formats may help at modest context lengths, but they are not automatically a win on unified-memory systems.
- The q8_0 result is the most actionable datapoint here: if you want stable long-context behavior, that seems like the safer default.
- For TurboQuant-style schemes, the bottleneck looks less like compression ratio and more like attention-time read amplification over the memory bus.
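The scale of the 64K problem is easy to estimate from llama.cpp's published block layouts (q4_0 packs 32 values into 18 bytes, q8_0 into 34 bytes). A minimal sizing sketch follows; the model shape (32 layers, 8 KV heads, head dim 128) is an assumed example, not the model from the benchmark:

```python
# Back-of-envelope KV cache sizes for a hypothetical 32-layer model.
# The 8 KV heads x 128 head dim shape is an assumption for illustration,
# not the configuration benchmarked in the post.
# llama.cpp block formats: q4_0 stores 32 elems in 18 bytes (4.5 bits/elem),
# q8_0 stores 32 elems in 34 bytes (8.5 bits/elem), f16 uses 16 bits/elem.

BITS_PER_ELEM = {"f16": 16.0, "q8_0": 34 * 8 / 32, "q4_0": 18 * 8 / 32}

def kv_cache_gib(ctx_len, n_layers=32, n_kv_heads=8, head_dim=128, fmt="f16"):
    """Total K+V cache size in GiB for a given context length and format."""
    elems = 2 * n_layers * ctx_len * n_kv_heads * head_dim  # K and V tensors
    return elems * BITS_PER_ELEM[fmt] / 8 / 2**30

for ctx in (8192, 65536):
    row = ", ".join(f"{fmt}: {kv_cache_gib(ctx, fmt=fmt):.2f} GiB"
                    for fmt in BITS_PER_ELEM)
    print(f"ctx={ctx:>6}: {row}")
# At 64K this example model needs 8.00 GiB of f16 KV cache vs 2.25 GiB in q4_0.
```

On paper q4_0 stores under a third of the f16 bytes, which is exactly why the reported higher-than-f16 RSS and the throughput cliff point at dequantization traffic rather than cache size. In llama.cpp the formats are selected with the `--cache-type-k` / `--cache-type-v` flags (quantized V cache additionally requires flash attention).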
// TAGS
llm · kv-cache · quantization · llama.cpp · dgx-spark · gb10 · unified-memory · benchmark · long-context
DISCOVERED
2026-03-31
PUBLISHED
2026-03-31
RELEVANCE
8/10
AUTHOR
dentity9000