OPEN SOURCE
REDDIT · 4d ago · OPEN-SOURCE RELEASE
llama.cpp enables attention rotation by default
The latest llama.cpp release appears to turn attention rotation on by default, which clears up the confusion around whether users need to opt in manually. The release also adds support for attention rotation in heterogeneous iSWA/SWA paths, so long-context behavior is moving deeper into the default runtime.
// ANALYSIS
This is one of those upstream changes that matters more than the headline suggests: the feature is no longer just "merged," it is becoming the default behavior people will actually hit in production.
- The release notes point to `kv-cache : support attention rotation for heterogeneous iSWA`, which is the concrete signal that SWA-style workloads are now part of the supported path.
- The Reddit reply says the feature is enabled by default and that there is no CLI flag to toggle it manually, which matches the "quiet default" pattern common in llama.cpp.
- For developers running long-context or hybrid-attention models, this shifts the debugging question from "how do I enable it?" to "which model/runtime combinations still need extra tuning?"
- The confusion in the thread is itself a signal: llama.cpp keeps adding low-level optimizations faster than its surface-level flags and docs catch up.
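To make the SWA part of the release note concrete: sliding-window attention only keeps the most recent window of tokens in the KV cache, so the cache can be treated as a fixed-size ring buffer that "rotates" as new tokens overwrite the oldest slots. The sketch below is purely illustrative (it is not llama.cpp's actual kv-cache code, and the class and method names are made up for this example); it shows the ring-buffer idea that keeps memory constant regardless of context length.

```python
# Illustrative ring-buffer KV cache for sliding-window attention (SWA).
# NOT llama.cpp's implementation -- a conceptual sketch only. Once the
# window is full, new entries overwrite the oldest slot, so the cache
# "rotates" instead of growing with the context.

class SlidingWindowKVCache:
    def __init__(self, window: int):
        self.window = window              # max tokens kept
        self.keys = [None] * window
        self.values = [None] * window
        self.pos = 0                      # absolute position of next token

    def append(self, k, v):
        slot = self.pos % self.window     # ring-buffer index
        self.keys[slot] = k
        self.values[slot] = v
        self.pos += 1

    def visible(self):
        """Cached (k, v) pairs in oldest-to-newest order."""
        n = min(self.pos, self.window)
        return [(self.keys[p % self.window], self.values[p % self.window])
                for p in range(self.pos - n, self.pos)]


cache = SlidingWindowKVCache(window=4)
for t in range(6):                        # feed 6 tokens into a 4-slot window
    cache.append(f"k{t}", f"v{t}")
# After 6 tokens, only the last 4 remain; k0/k1 have been overwritten.
```

A "heterogeneous" model mixes layers like this with full-attention layers, which is why supporting rotation across both paths in one cache is the nontrivial part of the change.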
// TAGS
llm · inference · open-source · cli · llama-cpp
DISCOVERED
4d ago
2026-04-08
PUBLISHED
4d ago
2026-04-08
RELEVANCE
9/10
AUTHOR
Altruistic_Heat_9531