Proxmox AI server build weighs Dell, HP options
OPEN_SOURCE

REDDIT · 14d ago · INFRASTRUCTURE


A Reddit user asks LocalLLaMA to vet two budget-conscious server builds for forecasting, anomaly detection, and image inventory workloads over the next two years, with electricity costs front and center. The choice comes down to a legacy Dell-heavy setup versus a denser single-socket EPYC host paired with HP's ZGX Nano G1n AI Station.

// ANALYSIS

This is a total-cost-of-ownership question disguised as a hardware shopping list. The legacy Dell route buys redundancy, but it also buys idle power, fan noise, RAID complexity, and a lot of old silicon to babysit.

  • Version 1 spreads the budget across multiple aging rack servers, so the electricity and maintenance bill is likely to keep climbing even if the purchase price looks low up front.
  • The thread's early replies already lean away from the R730 generation and toward the R7515, which is the more sensible Proxmox base for the next two years.
  • HP's ZGX Nano G1n is the most modern piece in the mix, with 128GB unified memory, a 4TB NVMe SSD, and a 240W adapter, so it makes sense as a local prototyping or inference node.
  • The smartest split is probably hot compute on one efficient node and cold storage on separate cheap disks, not a cluster of legacy servers trying to do everything.
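The running-cost gap between the two options can be made concrete with a back-of-envelope calculation. The sketch below is illustrative only: the idle wattages, node counts, duty cycle, and tariff are assumptions for the sake of the arithmetic, not figures from the thread.

```python
# Back-of-envelope electricity comparison for the two builds.
# All wattages and the tariff are ASSUMPTIONS, not figures from the thread.

HOURS_PER_YEAR = 24 * 365
RATE_EUR_PER_KWH = 0.30  # assumed tariff


def annual_cost(idle_watts: float, rate: float = RATE_EUR_PER_KWH) -> float:
    """Yearly cost of a host drawing `idle_watts` around the clock."""
    kwh = idle_watts * HOURS_PER_YEAR / 1000
    return kwh * rate


# Version 1: three aging dual-socket servers at ~150 W idle each (assumed).
legacy = 3 * annual_cost(150)

# Version 2: one single-socket EPYC host at ~120 W idle (assumed), plus the
# ZGX Nano's 240 W adapter averaged down to ~60 W by its duty cycle (assumed).
modern = annual_cost(120) + annual_cost(60)

print(f"legacy cluster:     ~{legacy:.0f} EUR/yr")
print(f"single node + Nano: ~{modern:.0f} EUR/yr")
```

Even with generous assumptions for the legacy fleet, the idle draw alone roughly doubles the annual bill, before counting maintenance time on old silicon.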
// TAGS
self-hosted · inference · gpu · proxmox · hp-zgx-nano-g1n-ai-station · dell-poweredge-r7515 · dell-poweredge-r730

DISCOVERED

2026-03-28 (14d ago)

PUBLISHED

2026-03-28 (14d ago)

RELEVANCE

7/10

AUTHOR

Lazy_Invite3133