OPEN_SOURCE
REDDIT // PRODUCT UPDATE
llama-server cache migration breaks GGUF scripts
The latest llama-server build automatically migrates models downloaded with -hf from the old ~/.cache/llama.cpp layout into Hugging Face's cache tree. That makes the project more interoperable with Hugging Face tooling, but it breaks launch scripts and model managers that relied on stable local file paths.
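To see why fixed paths break, here is a minimal sketch of resolving a GGUF file under a Hugging Face-style hub cache, where files live at `models--<org>--<repo>/snapshots/<revision>/<filename>` rather than at a flat, predictable location. The function name, cache root, and snapshot-selection heuristic are illustrative assumptions, not llama.cpp's actual resolution logic:

```python
from pathlib import Path

def resolve_gguf(cache_root: Path, repo_id: str, filename: str) -> Path:
    """Locate a GGUF inside a Hugging Face-style hub cache tree.

    Hub layout: models--<org>--<repo>/snapshots/<revision>/<filename>,
    so a fixed path like ~/.cache/llama.cpp/model.gguf no longer exists.
    """
    repo_dir = cache_root / ("models--" + repo_id.replace("/", "--"))
    snapshots = list((repo_dir / "snapshots").iterdir())
    if not snapshots:
        raise FileNotFoundError(f"no cached snapshots for {repo_id}")
    # Illustrative heuristic: take the most recently modified snapshot.
    # Real HF tooling reads the refs/ directory to pick a pinned revision.
    latest = max(snapshots, key=lambda p: p.stat().st_mtime)
    path = latest / filename
    if not path.exists():
        raise FileNotFoundError(path)
    return path
```

Any script that hardcoded the old flat path has to adopt lookup logic like this, which is exactly the churn the migration imposes.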
// ANALYSIS
Good ecosystem move, rough developer experience. If cache paths are part of how users deploy and share models, a one-way startup migration is a breaking change, not just a refactor.
- The Hugging Face cache is blob/snapshot based, so direct filename assumptions stop working once files are re-homed.
- Teams that symlink, rsync, or mount model directories across machines feel the pain first because the new layout optimizes for dedupe, not human-readable paths.
- `--model-url` users are spared, but anyone who leaned on `-hf` gets an abrupt filesystem change with no obvious rollback.
- This should have shipped with a grace period, an opt-out, or a compatibility shim before moving user files.
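The compatibility shim the last point asks for can be as simple as re-exposing the cache-managed file at a stable path. A hedged sketch (the function name and paths are hypothetical, not anything llama.cpp ships):

```python
from pathlib import Path

def pin_model(snapshot_file: Path, stable_path: Path) -> Path:
    """Symlink a cache-managed GGUF to a stable location.

    Launch scripts keep pointing at stable_path (e.g. a fixed
    --model argument) no matter where the cache moves the blob.
    """
    stable_path.parent.mkdir(parents=True, exist_ok=True)
    if stable_path.is_symlink() or stable_path.exists():
        stable_path.unlink()
    stable_path.symlink_to(snapshot_file)
    return stable_path
```

Running a shim like this at startup would have let existing deployments survive the migration unchanged while the ecosystem caught up.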
// TAGS
llama-cpp · open-source · inference · cli · self-hosted
DISCOVERED
2026-03-28
PUBLISHED
2026-03-28
RELEVANCE
8/10
AUTHOR
hgshepherd