LM Studio tool calls stall on gpt-oss
OPEN_SOURCE · REDDIT · 19d ago · INFRASTRUCTURE

A LocalLLaMA user on a MacBook says LM Studio won't reliably trigger DuckDuckGo web-search calls with a gpt-oss model, even after enabling the plugin. The thread points to template and tool-parsing friction inside LM Studio rather than a simple plugin misconfiguration.

// ANALYSIS

This is the kind of local-agent bug that feels random until you remember how many layers have to line up: model format, prompt template, tool parser, and MCP host. LM Studio's docs and recent release notes suggest that stack is still evolving, so the boring fix is usually the right one.

  • LM Studio says tool use works through both `/v1/chat/completions` and `/v1/responses`, but native tool-support models generally behave better than the fallback path.
  • The gpt-oss docs say those models work best in LM Studio's Responses API compatibility mode, which makes older chat-completions-style tool flows more fragile.
  • LM Studio auto-selects prompt templates from model metadata, but it also lets users override them manually, which is a likely culprit when tool calls vanish or get malformed.
  • Recent LM Studio releases have fixed non-streaming tool-call failures, Jinja prompt bugs, and tool-result/context handling, so an outdated build is a prime suspect.
  • The DuckDuckGo plugin is just an MCP server; if tool confirmations or tool permissions are still pending, the model can appear to fail when it is actually waiting for approval.
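One way to separate a plugin problem from a template or parser problem is to probe the local server directly, bypassing the MCP layer entirely. The sketch below assumes LM Studio's default OpenAI-compatible endpoint on port 1234; the `web_search` tool name and its schema are illustrative stand-ins, not the DuckDuckGo plugin's actual definition. If a declared tool in a plain chat-completions request never produces a `tool_calls` entry, the fault is upstream of the plugin.

```python
# Probe LM Studio's OpenAI-compatible endpoint to see whether the loaded
# model emits structured tool calls at all. Port 1234 is LM Studio's
# default; the tool schema here is a hypothetical example.
import json
import urllib.request

LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"


def build_tool_probe(model: str) -> dict:
    """Build a chat-completions request that declares a single tool, so the
    response shows whether the model takes the tool path or answers in text."""
    return {
        "model": model,
        "messages": [
            {"role": "user", "content": "What is the weather in Oslo today?"}
        ],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "web_search",  # hypothetical tool name
                    "description": "Search the web for current information.",
                    "parameters": {
                        "type": "object",
                        "properties": {"query": {"type": "string"}},
                        "required": ["query"],
                    },
                },
            }
        ],
        # Non-streaming first: older builds had streaming-specific
        # tool-call bugs, so this isolates one variable.
        "stream": False,
    }


def made_tool_call(response: dict) -> bool:
    """True if the assistant message contains a structured tool_calls entry."""
    message = response["choices"][0]["message"]
    return bool(message.get("tool_calls"))


if __name__ == "__main__":
    payload = build_tool_probe("openai/gpt-oss-20b")  # model id is an assumption
    req = urllib.request.Request(
        LMSTUDIO_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    print("tool call emitted:", made_tool_call(body))
```

If this probe produces a well-formed `tool_calls` array but the DuckDuckGo plugin still never fires, the problem is in the MCP host layer (pending confirmations, permissions); if the probe itself fails, look at the prompt template or an outdated build first.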
// TAGS
lm-studio · llm · mcp · agent · inference · search · self-hosted

DISCOVERED

19d ago

2026-03-23

PUBLISHED

19d ago

2026-03-23

RELEVANCE

8/10

AUTHOR

chinese_virus3