Local LLMs need shared model storage

A Reddit user on r/LocalLLaMA asks how to stop local AI tools on a MacBook Pro from downloading the same large models into separate folders. The core complaint is simple: once you test multiple apps, duplicate caches start burning through disk space.

// ANALYSIS

This is one of those unglamorous local-AI problems that matter more than they sound. The ecosystem already has partial answers, but no universal storage convention, so users end up stitching together workarounds.

  • Ollama already supports relocating its model store via the `OLLAMA_MODELS` environment variable (first sketch after this list).
  • LM Studio also exposes a configurable models directory, so a single canonical store is viable in at least some stacks.
  • Where an app does not support a shared path, symlinks or a shared mount are the practical workaround, though they can be brittle (second sketch below).
  • The best long-term design is a shared blob store with per-app metadata, not duplicated weights per app (third sketch below).
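
For Ollama, relocation is a one-time setup. A minimal sketch for macOS, assuming `~/ai-models` as the shared location and the documented default store at `~/.ollama/models`; stop Ollama before moving anything:

```bash
# Move Ollama's store into an assumed shared directory (~/ai-models).
mkdir -p ~/ai-models/ollama
mv ~/.ollama/models/* ~/ai-models/ollama/

# Persist the override for terminal sessions (zsh is the macOS default).
echo 'export OLLAMA_MODELS="$HOME/ai-models/ollama"' >> ~/.zshrc

# GUI-launched apps do not read shell profiles; launchctl sets the
# variable for the current login session instead.
launchctl setenv OLLAMA_MODELS "$HOME/ai-models/ollama"
```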
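
Where an app hard-codes its cache path, a symlink gets the same effect. A sketch only: the per-app path below is a pure assumption, so check the app's own settings for its real models directory and keep a backup before linking:

```bash
shared=~/ai-models/gguf
app_cache=~/SomeApp/models        # hypothetical per-app cache path

mkdir -p "$shared"
# Keep the old cache as a backup rather than deleting it outright.
mv "$app_cache" "${app_cache}.bak"
ln -s "$shared" "$app_cache"      # the app now reads the shared store
```

The brittleness shows up when an app updates and recreates its directory, quietly replacing the link with a fresh private cache.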
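
The blob-store design can even be prototyped in user space. A rough sketch in which every path and name is an assumption: files are hashed with SHA-256, stored exactly once, and each app keeps only a symlink under the filename it expects. Ollama's own internal store already follows this pattern, with sha256-named blobs plus manifest metadata.

```bash
blobs=~/ai-models/blobs
mkdir -p "$blobs"

# Replace a downloaded weight file with a link into a content-addressed
# store, so identical bytes are kept exactly once on disk.
dedupe() {
  local f="$1" h
  h=$(shasum -a 256 "$f" | awk '{print $1}')
  if [ -e "$blobs/$h" ]; then
    rm "$f"                  # these bytes are already stored
  else
    mv "$f" "$blobs/$h"      # first copy becomes the canonical blob
  fi
  ln -s "$blobs/$h" "$f"     # the app still sees its expected filename
}

# Example: deduplicate every GGUF in a hypothetical app folder.
find ~/SomeApp/models -type f -name '*.gguf' | while read -r f; do
  dedupe "$f"
done
```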
// TAGS
local-llm · llm · self-hosted · inference · devtool

DISCOVERED

16d ago (2026-03-26)

PUBLISHED

17d ago (2026-03-26)

RELEVANCE

5/10

AUTHOR

LyckeMi