OPEN_SOURCE ↗
REDDIT · 24d ago · TUTORIAL
Linking Desktop Companion to a remote llama.cpp server
This tutorial shows how to point Desktop Companion at a remote llama.cpp or LM Studio backend running on a separate PC. The key move is swapping `localhost` for the AI box's LAN IP and using the OpenAI-compatible `/v1` base URL so the two machines can talk.
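The swap described above can be sketched with a plain HTTP client; the LAN IP `192.168.1.50`, port `8080`, and model name are placeholders, not values from the tutorial:

```python
import json
import urllib.request

# Placeholder LAN address and port of the AI box running llama.cpp's
# server (or LM Studio's local server); replace with your own values.
AI_BOX = "192.168.1.50"
PORT = 8080

# The key change: the LAN IP instead of localhost, plus the
# OpenAI-compatible /v1 prefix.
BASE_URL = f"http://{AI_BOX}:{PORT}/v1"

def chat(prompt: str) -> str:
    """Send a single-turn chat request to the remote backend."""
    payload = {
        "model": "local",  # llama.cpp's server accepts any model name here
        "messages": [{"role": "user", "content": prompt}],
    }
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Any OpenAI-style client (Desktop Companion included) is doing the equivalent of this under the hood, which is why only the base URL needs to change.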
// ANALYSIS
The networking part matters more than the app choice: if the server stays bound to `127.0.0.1`, nothing else on the LAN can reach it.
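Fixing the bind address is a one-flag change when launching llama.cpp's server; the model path below is a placeholder:

```shell
# Bind llama.cpp's server to all interfaces so other machines on the
# LAN can reach it (the default is 127.0.0.1, loopback only).
./llama-server -m models/model.gguf --host 0.0.0.0 --port 8080
```

Binding to `0.0.0.0` exposes the server to every host on the LAN, so this is best kept behind a home router or firewall rather than on an internet-facing interface.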
- llama.cpp's server defaults to `127.0.0.1:8080`; for remote access it needs to listen on `0.0.0.0` or the machine's LAN interface.
- OpenAI-style clients usually target a `/v1` base URL, with actual requests landing on endpoints like `/v1/chat/completions`.
- Desktop Companion's LM Studio support is useful here because it can treat the remote box like a local model host once the endpoint is reachable.
- This setup is a clean way to keep inference off the gaming rig without changing the user-facing app on PC 3.
- If you want a more polished remote workflow later, LM Studio's LM Link is a better fit than manually punching holes in the network.
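Before reconfiguring the client app, it's worth confirming the endpoint is actually reachable from PC 3; a quick sanity check from the client machine, assuming the placeholder LAN IP `192.168.1.50`:

```shell
# llama.cpp's server exposes a /health endpoint and OpenAI-style
# routes under /v1; either responding confirms the LAN path works.
curl http://192.168.1.50:8080/health
curl http://192.168.1.50:8080/v1/models
```

If these time out while the same URLs work on the AI box itself, the usual culprits are the server still bound to `127.0.0.1` or a host firewall blocking the port.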
// TAGS
desktop-companion · llm · self-hosted · api · inference · lm-studio
DISCOVERED
24d ago
2026-03-18
PUBLISHED
24d ago
2026-03-18
RELEVANCE
7 / 10
AUTHOR
Quiet_Dasy