OPEN_SOURCE
REDDIT · 4h ago · INFRASTRUCTURE
Dev pools Blackwell, Ada GPUs for local LLMs
A developer upgrading to an RTX Pro 4000 Blackwell asks whether to keep an older RTX 2000 Ada, pooling a combined 40GB of VRAM to run Qwen MoE models via llama.cpp. The question reflects a growing trend: leveraging mismatched enterprise GPUs to maximize local inference capacity.
// ANALYSIS
Mixing GPU architectures and VRAM sizes is the secret weapon of the local LLM community, turning disparate hardware into viable, high-capacity inference servers.
- llama.cpp natively supports sequential layer splitting across mismatched GPUs, making a combined 24GB and 16GB setup highly effective for fitting larger open-weight models into memory.
- Standard PCIe slots provide sufficient bandwidth for token generation, though the initial prompt processing (prefill) phase might see slight bottlenecks compared to NVLink.
- By designating the newer Blackwell card as the primary GPU for KV caching and sampling, developers can maximize generation speed while still fully utilizing the Ada card's VRAM.
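A setup along these lines can be sketched with llama.cpp's real multi-GPU flags (`--split-mode`, `--tensor-split`, `--main-gpu`, `-ngl`); the model filename is a placeholder, and the exact split ratio would need tuning for the actual cards:

```shell
# Hypothetical invocation, assuming GPU 0 is the 24GB Blackwell
# and GPU 1 is the 16GB Ada card.
llama-server \
  -m qwen-moe-q4_k_m.gguf \       # placeholder model file
  --split-mode layer \            # sequential layer split across GPUs
  --tensor-split 24,16 \          # weight the split roughly by VRAM (24GB vs 16GB)
  --main-gpu 0 \                  # keep scratch/small tensors on the faster card
  -ngl 99                         # offload all layers to the GPUs
```

With `--split-mode layer`, each GPU holds a contiguous slice of layers (and the KV cache for those layers), so PCIe traffic per token is limited to passing activations between the two cards, which is why plain PCIe slots remain workable for generation.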
// TAGS
llama-cpp · gpu · inference · llm · open-weights
DISCOVERED
4h ago
2026-04-18
PUBLISHED
5h ago
2026-04-18
RELEVANCE
7/10
AUTHOR
bromatofiel