llama.cpp fixes KV-cache rotation for Gemma 4
OPEN_SOURCE
REDDIT // 4d ago · PRODUCT UPDATE


llama.cpp patches KV-cache rotation for heterogeneous iSWA models, in which sliding-window and non-sliding-window layers use different head sizes. The fix matters for Gemma 4 and similar hybrid-attention models, which could otherwise suffer incorrect rotary-embedding handling during inference.

// ANALYSIS

A small-looking inference patch, but it closes a correctness gap in a widely used local-model runtime. For projects serving hybrid-attention models, this is the kind of low-level fix that quietly determines whether outputs stay stable at scale. The change targets heterogeneous iSWA layouts specifically, not attention in general, so it addresses a narrow class of model-architecture bugs. The author reports matching perplexity across the f16, q8_0, and q8_0-with-rotation paths, a good sign that the rotation logic now preserves model behavior. This is especially relevant for local and quantized inference users, since llama.cpp often becomes the de facto reference runtime for new open models. The mention of Gemma 4 suggests upstream model support is moving fast enough that runtime correctness still needs active patching.
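To see why per-layer head sizes matter here, consider what a KV-cache shift has to do: when cached keys move to new positions, their rotary embeddings must be re-rotated, and the rotation frequencies depend on the head dimension. The sketch below is a hypothetical simplification, not llama.cpp's actual code; `rope_shift` and its parameters are illustrative. The point it demonstrates is that in a heterogeneous iSWA model, each layer must rotate with its own head size rather than a single model-wide one.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical sketch (not llama.cpp's real API): re-rotate one attention
// head's cached key vector when its cache slot is shifted from `old_pos`
// to `new_pos`. The per-pair frequency depends on head_dim, so in a
// heterogeneous iSWA model each layer must pass its OWN head size here;
// reusing one global head size across layers is the class of bug the
// patch addresses.
static void rope_shift(std::vector<float>& head, int head_dim,
                       int old_pos, int new_pos, float base = 10000.0f) {
    int delta = new_pos - old_pos;
    for (int i = 0; i + 1 < head_dim; i += 2) {
        // Frequency for this (i, i+1) pair; note the head_dim dependence.
        float freq  = std::pow(base, -(float)i / (float)head_dim);
        float theta = (float)delta * freq;
        float c = std::cos(theta), s = std::sin(theta);
        float x0 = head[i], x1 = head[i + 1];
        head[i]     = x0 * c - x1 * s;  // 2-D rotation of the pair
        head[i + 1] = x0 * s + x1 * c;
    }
}
```

Because this is a pure rotation, it preserves vector norms, and shifting a key back to its original position recovers the original values; those are the kinds of invariants the reported matching-perplexity numbers indirectly confirm.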

// TAGS
llama-cpp · llm · inference · open-source · gemma-4

DISCOVERED

4d ago

2026-04-07

PUBLISHED

4d ago

2026-04-07

RELEVANCE

8/10

AUTHOR

jacek2023