llama.cpp RPC spreads 72B across two PCs
OPEN_SOURCE
REDDIT · 32d ago · INFRASTRUCTURE


A LocalLLaMA post shows that llama.cpp's RPC backend can split a 36GB Qwen2.5-72B-Instruct quant across an RTX 3090, an old RTX 3060, and about 4.3GB of CPU RAM, yielding roughly 3.76 tokens per second over 1GbE. The bigger takeaway: local LLM users can turn spare hardware into usable VRAM, though the setup still required a custom Docker build because the stock image lacks RPC support.

// ANALYSIS

This is exactly the kind of scrappy infrastructure hack that makes local inference more practical, but it also shows llama.cpp RPC is still a power-user feature rather than a polished deployment path.

  • llama.cpp's own RPC docs frame the backend as proof-of-concept and warn against using it on open networks, so this is impressive but not production-ready
  • The reported throughput is modest, yet good enough for solo chat and experimentation when the alternative is not running a 72B model at all
  • Automatic tensor distribution across local and remote devices makes the feature unusually approachable once the build works
  • The real bottleneck here is packaging and ergonomics: having to rebuild Docker with `-DGGML_RPC=ON` is still too much friction for most users
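A minimal sketch of what that custom build looks like, based on llama.cpp's documented CMake flag and `rpc-server` binary; the base image, package list, and port are illustrative assumptions, not the poster's exact setup:

```dockerfile
# Hypothetical worker-node image: stock llama.cpp images ship without RPC,
# so the backend must be enabled at build time with -DGGML_RPC=ON.
FROM ubuntu:24.04
RUN apt-get update && apt-get install -y git cmake build-essential
RUN git clone https://github.com/ggml-org/llama.cpp /llama.cpp
WORKDIR /llama.cpp
RUN cmake -B build -DGGML_RPC=ON && cmake --build build --config Release

# rpc-server accepts tensor-offload work from the main host over the LAN.
# The docs warn this protocol is unauthenticated: never expose it publicly.
EXPOSE 50052
CMD ["./build/bin/rpc-server", "--host", "0.0.0.0", "--port", "50052"]
```

On the main machine, inference then points at the worker with the `--rpc` flag, e.g. `llama-cli -m qwen2.5-72b-q3.gguf --rpc <worker-ip>:50052 -ngl 99`, and llama.cpp distributes tensors across local and remote devices automatically.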
// TAGS
llama-cpp · llm · inference · gpu · self-hosted · open-source

DISCOVERED

32d ago

2026-03-10

PUBLISHED

36d ago

2026-03-06

RELEVANCE

8 / 10

AUTHOR

righcoastmike