GLM-5.1 eats more VRAM than GLM-5
OPEN_SOURCE ↗
REDDIT // 2d ago // NEWS


Reddit users testing GLM-5.1 GGUF quants on the same 24K-token prompt they used with GLM-5 report higher VRAM use and slower tokens/sec, even though the GLM-5.1 files are smaller on disk. The likely explanation is runtime overhead, especially KV-cache growth and offloading behavior, rather than checkpoint size alone.

// ANALYSIS

This is the classic local-LLM trap: disk size is only the weights, while inference memory is weights plus KV cache, allocator overhead, and any layer upcasting. At long context, those runtime costs can swamp the savings from a smaller quant.
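The arithmetic is easy to check for yourself. A minimal sketch of the standard KV-cache size formula, using hypothetical GLM-like dimensions (layer count, KV heads, and head size are illustrative assumptions, not GLM-5's actual config):

```python
def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   ctx_len: int, bytes_per_elem: int) -> int:
    # Two tensors (K and V) per layer, each of shape [ctx_len, n_kv_heads * head_dim].
    return 2 * n_layers * ctx_len * n_kv_heads * head_dim * bytes_per_elem

GIB = 1024 ** 3

# Hypothetical dimensions for illustration only.
f16 = kv_cache_bytes(n_layers=60, n_kv_heads=8, head_dim=128,
                     ctx_len=24_576, bytes_per_elem=2)  # fp16 cache
q8 = kv_cache_bytes(n_layers=60, n_kv_heads=8, head_dim=128,
                    ctx_len=24_576, bytes_per_elem=1)   # 8-bit cache

print(f"KV cache @ 24K context: fp16 = {f16 / GIB:.1f} GiB, q8 = {q8 / GIB:.1f} GiB")
```

Even at these made-up sizes, the cache alone is several GiB at 24K context: enough to erase the disk savings of a somewhat smaller quant.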

  • Z.ai’s own GLM-5 docs emphasize that total memory needs to exceed the quantized file size and that KV-cache quantization can materially reduce VRAM use.
  • A 24K context run puts heavy pressure on the cache, so a smaller GGUF can still consume more VRAM if its runtime path is less memory-efficient.
  • Dynamic quants are not perfectly apples-to-apples; some layers may be upcast for quality, which can trade disk savings for higher live memory.
  • The slower tokens/sec fit the same pattern: once memory pressure rises, you get more offloading, less headroom, and worse throughput.
  • Net: GLM-5.1 may be a stronger model, but it is not necessarily a lighter one for local inference at the same settings.
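For readers hitting this locally, llama.cpp exposes KV-cache quantization directly. A sketch of a long-context invocation (model filename and layer-offload count are placeholders; defaults may differ across llama.cpp versions):

```shell
# Serve a GGUF quant at 24K context with an 8-bit KV cache.
# model.gguf and -ngl 99 are placeholders for your file and GPU layer count.
llama-server \
  -m model.gguf \
  -c 24576 \
  -ngl 99 \
  --cache-type-k q8_0 \
  --cache-type-v q8_0
```

Comparing VRAM use with and without the `--cache-type-*` flags at the same context length is the quickest way to see how much of the footprint is cache rather than weights.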
// TAGS
llm · inference · gpu · self-hosted · open-weights · glm-5.1 · glm-5

DISCOVERED

2d ago

2026-04-09

PUBLISHED

3d ago

2026-04-09

RELEVANCE

8 / 10

AUTHOR

relmny