RTX PRO 6000 Build Targets 100B Models
REDDIT // 25d ago // INFRASTRUCTURE

A hardware enthusiast is planning a high-end local AI workstation built around an NVIDIA RTX PRO 6000 Blackwell Workstation Edition GPU and Ryzen 9 9950X3D CPU. The proposed setup pairs 128GB of DDR5-7200 RAM with an ASUS X870E motherboard to run 100B+ models, support multi-user coding and summarization, and handle 4K gaming and idle Monero mining.

// ANALYSIS

The NVIDIA RTX PRO 6000 Blackwell Workstation Edition (96GB VRAM) is a game-changer for local AI, allowing 100B+ parameter models to run mostly or entirely in VRAM at high quantization levels (Q6-Q8).
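A quick back-of-envelope check of that claim (the helper below is illustrative, not from the original post; bits-per-weight figures approximate llama.cpp-style Q6_K/Q8_0 including quantization overhead, and KV cache and activations add memory on top of the weights):

```python
def weight_vram_gb(params_b: float, bits_per_weight: float) -> float:
    """Approximate weight-only memory in GB for a model with
    `params_b` billion parameters at the given effective bit width."""
    bytes_total = params_b * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A 100B-parameter model: Q6 (~6.5 effective bits) vs Q8 (~8.5 bits).
print(f"Q6: ~{weight_vram_gb(100, 6.5):.0f} GB")  # ~81 GB -> fits in 96GB
print(f"Q8: ~{weight_vram_gb(100, 8.5):.0f} GB")  # ~106 GB -> partial offload
```

This is why "mostly or entirely in VRAM" is the right caveat: Q6 fits with room for KV cache, while Q8 spills past 96GB.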

  • Running 128GB (4x32GB) of DDR5 at 7200MT/s is notoriously unstable on the AM5 platform; users typically must downclock to 3600-5200MT/s, which severely limits LLM inference speeds when offloading is required.
  • For a multi-user environment (4 users), software like vLLM or sglang is a better choice than standard desktop tools like Ollama or LM Studio.
  • Planning for a 1600W PSU now is highly recommended to support the almost inevitable addition of a second high-end GPU for large model parity.
  • The 9950X3D is a powerhouse, but memory bandwidth (not core count) remains the primary constraint for large model CPU offloading.
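The bandwidth constraint in the last two bullets can be sketched with a rough upper bound: during decode, each generated token must stream the offloaded weights through system RAM once, so token rate is capped by bandwidth divided by offloaded bytes (figures below are theoretical dual-channel peaks, assumed for illustration, not benchmarks):

```python
def max_tokens_per_s(bandwidth_gbs: float, offloaded_gb: float) -> float:
    """Upper bound on decode speed when `offloaded_gb` of weights
    live in system RAM with `bandwidth_gbs` GB/s of memory bandwidth."""
    return bandwidth_gbs / offloaded_gb

# Dual-channel peaks: DDR5-5200 ~ 83 GB/s, DDR5-7200 ~ 115 GB/s.
# Suppose 20 GB of a Q8 100B model spills out of the 96GB of VRAM:
print(f"{max_tokens_per_s(83, 20):.1f} tok/s")   # ~4.2 at downclocked speeds
print(f"{max_tokens_per_s(115, 20):.1f} tok/s")  # ~5.8 at the rated 7200MT/s
```

Either way the offloaded portion, not the 9950X3D's core count, sets the ceiling, which is the point of the last bullet.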
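For the multi-user point, a minimal sketch of what serving looks like with vLLM's OpenAI-compatible server (the model name is a placeholder and the flags are illustrative; check `vllm serve --help` for your installed version):

```shell
# Hypothetical launch for a shared 4-user workstation.
vllm serve my-org/100b-model-awq \
  --max-model-len 16384 \
  --gpu-memory-utilization 0.90 \
  --port 8000
# Clients point any OpenAI-compatible tool at http://localhost:8000/v1;
# vLLM then batches concurrent requests, which is the throughput
# advantage over single-stream desktop tools like Ollama or LM Studio.
```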
// TAGS
gpu llm workstation local-ai nvidia-rtx-pro-6000-blackwell-workstation-edition 9950x3d am5

DISCOVERED

2026-03-17 (25d ago)

PUBLISHED

2026-03-17 (25d ago)

RELEVANCE

7 / 10

AUTHOR

sasquatch3277