Developer seeks local LLM for 24GB Mac Mini
OPEN_SOURCE
REDDIT // 14h ago // INFRASTRUCTURE

A developer is searching for the best local coding model to run on a 24GB Mac Mini M4 Pro. They need a model capable of handling small-to-medium Terraform, React, Flutter, and Node.js tasks in daily development.

// ANALYSIS

The 24GB RAM constraint on Apple Silicon requires careful model selection to balance inference speed and capability for coding tasks.

  • 24GB of unified memory leaves roughly 16-18GB for model weights and cache, limiting choices to heavily quantized 32B models or lightly quantized 7B-14B models (see the back-of-the-envelope sketch after this list).
  • Models like Qwen2.5-Coder-14B or DeepSeek-Coder-V2-Lite are likely the sweet spot for this hardware footprint; a minimal usage sketch follows the list.
  • Running models locally for daily development significantly reduces API costs while ensuring code privacy and offline availability.
  • This highlights a growing trend of developers optimizing local LLM setups for specific tech stacks (like Terraform and React) rather than relying solely on cloud providers.
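
As a rough illustration of the memory bullet above, here is a back-of-the-envelope sketch in Python; the bits-per-weight figures, the ~17GB budget, and the 2GB cache allowance are illustrative assumptions, not measurements:

    # Approximate how large quantized weights are and whether they fit
    # a ~17GB usable budget on a 24GB unified-memory Mac (macOS reserves
    # part of RAM for the system, leaving roughly 16-18GB in practice).
    def approx_weight_gb(params_billion: float, bits_per_weight: float) -> float:
        """Approximate in-memory size of quantized weights, in GB."""
        return params_billion * bits_per_weight / 8

    BUDGET_GB = 17.0  # assumed GPU-addressable budget on a 24GB machine
    CACHE_GB = 2.0    # assumed allowance for KV cache and runtime overhead

    for name, params_b in [("14B-class coder", 14.8), ("32B-class coder", 32.8)]:
        for bpw in (3.5, 4.5, 6.5):  # roughly Q3/Q4/Q6 effective bits per weight
            size = approx_weight_gb(params_b, bpw)
            verdict = "fits" if size + CACHE_GB <= BUDGET_GB else "too big"
            print(f"{name} @ ~{bpw} bpw: ~{size:.1f}GB weights -> {verdict}")

Run as written, this shows a 14B model fitting comfortably at any common quantization, while a 32B model only squeezes in at around 3.5 bits per weight, matching the "heavily quantized 32B or lightly quantized 7B-14B" framing above.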
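For the daily-driver workflow the post describes, one common setup is Ollama plus its official Python client. A minimal sketch follows; the model tag and prompt are examples, and it assumes Ollama is installed with the model already pulled (e.g. via `ollama pull qwen2.5-coder:14b`):

    # Minimal local-inference sketch using the official ollama Python client.
    # Assumes the Ollama server is running locally with the model pulled.
    import ollama

    response = ollama.chat(
        model="qwen2.5-coder:14b",  # example tag; pick what fits your RAM
        messages=[{
            "role": "user",
            "content": "Write a Terraform resource block for an S3 bucket "
                       "with versioning enabled.",
        }],
    )
    print(response["message"]["content"])

Because everything runs on-device, the same loop works offline and never sends code to a third-party API, which is the privacy point raised above.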
// TAGS
localllama · llm · ai-coding · inference · self-hosted

DISCOVERED

2026-04-17 (14h ago)

PUBLISHED

2026-04-17 (14h ago)

RELEVANCE

6/10

AUTHOR

dave-tro