OPEN_SOURCE ↗
REDDIT · 23d ago · TUTORIAL

LM Studio Claude API setup guide

The Reddit post asks where to find a proper manual for wiring LM Studio into Claude-style workflows, and reports slow performance with GPT-OSS 20B on a mobile RTX 4080. LM Studio's docs now cover the Anthropic-compatible `/v1/messages` endpoint and Claude Code integration, and the `openai/gpt-oss-20b` model page confirms the model is supported locally, but tuning still matters.
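A minimal sketch of exercising that endpoint directly, assuming LM Studio's server is running on its default port 1234 and the model is already loaded (the exact request fields mirror Anthropic's Messages API; the auth header value is an assumption, since a local server typically ignores it):

```shell
# POST a Claude-style request to LM Studio's Anthropic-compatible endpoint.
# Start the server first from LM Studio's Developer tab (or `lms server start`).
curl http://localhost:1234/v1/messages \
  -H "Content-Type: application/json" \
  -H "x-api-key: lmstudio" \
  -d '{
    "model": "openai/gpt-oss-20b",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Say hello in one sentence."}]
  }'
```

If this returns a Claude-shaped JSON response (`content` blocks, `stop_reason`), any Anthropic-compatible client should work against the same base URL.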

// ANALYSIS

The core issue looks less like missing compatibility and more like scattered docs plus performance-tuning friction.

  • LM Studio now documents both OpenAI-compatible and Anthropic-compatible endpoints, so Claude-like clients can usually just point `ANTHROPIC_BASE_URL` at `http://localhost:1234`.
  • The Claude Code walkthrough is the clearest starting point, using `ANTHROPIC_AUTH_TOKEN=lmstudio` and `claude --model openai/gpt-oss-20b`.
  • The `gpt-oss-20b` model page says the model is built for local deployment, uses only 3.6B active parameters, and supports 131k context, so slowdowns are more likely tied to context size, quantization, or offload settings than raw feasibility.
  • For a 16GB VRAM laptop GPU, the model is plausible, but “works” and “feels snappy” are very different bars.
  • This post is a good signal that LM Studio could benefit from one end-to-end setup page for Anthropic/Claude users.
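The setup described in the bullets above can be sketched as a short shell session (the env-var names come from the post and LM Studio's Claude Code docs; the token value is arbitrary, since the local server does not validate it):

```shell
# Point Claude Code at the local LM Studio server instead of Anthropic's API.
export ANTHROPIC_BASE_URL="http://localhost:1234"
export ANTHROPIC_AUTH_TOKEN="lmstudio"   # any non-empty value works locally

# Launch Claude Code against the locally loaded model.
claude --model openai/gpt-oss-20b
```

On a 16GB laptop GPU, responsiveness will depend mostly on how the model is loaded (context length, quantization, GPU offload) rather than on this wiring.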
// TAGS
lm-studio · llm · api · inference · self-hosted · gpu · cli

DISCOVERED

23d ago

2026-03-19

PUBLISHED

24d ago

2026-03-19

RELEVANCE

8/10

AUTHOR

ConstructionRough152