OPEN_SOURCE
REDDIT · 21d ago · INFRASTRUCTURE
First Home Server Build Targets AI Hosting
A platform engineer wants a first home server to host web services, run agentic workflows, and eventually experiment with small language models. The parts list is sensible for always-on infra, but the GPU and training ambitions are where the budget can get away from you.
// ANALYSIS
This is a solid “learn by shipping” box, not a serious training rig yet. If the real near-term goal is MVPs plus API-driven agent workflows, the smartest spend is reliability and headroom, not rushing into a GPU purchase.
- `i5-13400` is fine for containers, databases, and web apps, but `32GB RAM` will get tight once you stack observability, background jobs, vector stores, and model-serving sidecars
- `RTX 4060 Ti 16GB` is acceptable for 7B-13B quantized inference, but it’s bandwidth-limited and not the card you buy for meaningful SLM training
- `1TB SSD` is the first spec I’d upgrade; model files, Docker layers, logs, and datasets eat space embarrassingly fast
- `750W PSU` is more than enough, which is good for future GPU swaps, but 24/7 efficiency and noise matter more than raw wattage
- If local AI is still “maybe later,” skip the GPU for now and put the savings into `64GB RAM`, a larger NVMe drive, and better cooling
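As a rough sanity check on the 16GB-card claim above: weight memory for a quantized model is approximately parameter count × bits per weight ÷ 8, plus headroom for KV cache and activations. A back-of-envelope sketch (the 1.2× overhead factor is an assumption for illustration, not a benchmark):

```python
def est_vram_gb(params_b: float, bits: int = 4, overhead: float = 1.2) -> float:
    """Rough VRAM estimate in GB for quantized inference.

    params_b: model size in billions of parameters
    bits: quantization level (4-bit is a common default)
    overhead: fudge factor for KV cache and activations (assumed, not measured)
    """
    weights_gb = params_b * (bits / 8)  # e.g. 13B at 4-bit -> 6.5 GB of weights
    return weights_gb * overhead

# A 13B model at 4-bit lands around 7.8 GB -- comfortable on a 16GB card,
# while a 13B model at 16-bit (~31 GB) clearly is not.
print(round(est_vram_gb(13), 1))
print(round(est_vram_gb(13, bits=16), 1))
```

This is why the card reads as an inference box rather than a training rig: training needs full-precision weights plus optimizer state and gradients, several multiples of the inference footprint.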
// TAGS
self-hosted · infrastructure · gpu · llm · agent · automation · devtool · ai-coding
DISCOVERED
2026-03-21 (21d ago)
PUBLISHED
2026-03-21 (21d ago)
RELEVANCE
5/10
AUTHOR
Silly_Definition7531