Claude Cowork tests third-party inference backends
OPEN_SOURCE ↗
REDDIT · 3h ago · INFRASTRUCTURE


This Reddit post is a practical setup question, not a launch announcement: the author asks whether Claude Cowork can be pointed at Ollama through a local proxy, now that Anthropic has added third-party inference support to Cowork. The takeaway from current docs and community testing is that Cowork expects an Anthropic Messages API-compatible endpoint, so raw Ollama alone is usually not enough; people route it through a gateway such as LiteLLM or another compatible proxy instead.
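To make the "endpoint shape" point concrete, here is a minimal sketch of the request body an Anthropic Messages API-compatible endpoint expects at `POST /v1/messages` — this is what a proxy in front of Ollama has to accept and translate. The gateway URL and model name are hypothetical placeholders, not values from the post.

```python
import json

# Hypothetical local gateway address (assumption, e.g. a LiteLLM proxy).
GATEWAY_BASE = "http://localhost:4000"


def build_messages_request(model: str, prompt: str, max_tokens: int = 1024) -> dict:
    """Build a minimal Anthropic Messages API-style request body.

    A raw Ollama server does not accept this shape; a gateway must
    translate it into the backend's native chat format.
    """
    return {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [
            {"role": "user", "content": prompt},
        ],
    }


# The client would POST this JSON to f"{GATEWAY_BASE}/v1/messages".
body = build_messages_request("claude-local", "Say hello")
print(json.dumps(body, indent=2))
```

The point of the sketch is only the wire format: tool definitions, streaming, and system prompts add more fields on top, which is why a purpose-built gateway tends to work where a bare model server does not.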

// ANALYSIS

Hot take: this is less “does it work?” and more “how much glue do you need to make it look like Anthropic?”

  • Anthropic’s own docs say Cowork on third-party inference is meant for provider-compatible backends such as Bedrock, Vertex AI, Azure Foundry, or an LLM gateway you control.
  • Community reports point to a working pattern: put a gateway like LiteLLM in front of Ollama and expose an Anthropic-style `/v1/messages` endpoint.
  • The friction is expected: Cowork is not just a generic chat client, so endpoint shape and tool-calling semantics matter more than raw model access.
  • If someone got this running “properly,” the likely success criterion is gateway compatibility, not direct Ollama support.
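The gateway pattern the bullets describe can be sketched as a minimal LiteLLM proxy config in front of Ollama. The model names, port, and the specific Ollama model are assumptions for illustration, not a verified Cowork setup.

```shell
# Sketch, assuming LiteLLM's proxy mode and Ollama on its default port.
# Map a Claude-style model name to a locally served Ollama model.
cat > litellm_config.yaml <<'EOF'
model_list:
  - model_name: claude-local          # name the client is pointed at (assumption)
    litellm_params:
      model: ollama/llama3            # Ollama-served model (assumption)
      api_base: http://localhost:11434
EOF

# Start the gateway; it exposes an Anthropic-style /v1/messages route
# that translates requests into Ollama's native API.
litellm --config litellm_config.yaml --port 4000
```

Whether Cowork accepts this particular gateway end to end is exactly what the Reddit thread is trying to establish; the config only shows where the translation layer sits.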
// TAGS
claude-cowork · anthropic · ollama · litellm · proxy · llm-gateway · local-llm · third-party-inference · openai-compatible

DISCOVERED

3h ago

2026-04-24

PUBLISHED

5h ago

2026-04-23

RELEVANCE

7/10

AUTHOR

Purple_Wear_5397