llm unifies LLM APIs for C
`llm` is an Apache-2.0, header-only C library for calling LLM providers through a single interface, plus a companion CLI. The project targets C99 and uses libcurl and pthreads, with support for providers like OpenAI, Anthropic, Groq, Ollama, Together AI, Mistral, Cohere, Gemini, DeepSeek, OpenRouter, Perplexity, Fireworks, vLLM, and custom endpoints. The README documents sync, streaming, async, batch, and tool-calling workflows, along with configurable retries, timeouts, proxy settings, and stats reporting.
Hot take: this is a practical glue layer for people who want LLM access in C without pulling in a heavyweight SDK stack, and the CLI makes it immediately useful even before you embed it.
- The strongest angle is portability: header-only C plus libcurl/pthreads lowers integration friction for systems and tooling projects.
- Provider coverage is broad, but the main value is the unified API and OpenAI-compatible request path for most vendors.
- The feature set is unusually complete for a small library: streaming, async callbacks, batch execution, tool calling, retries, and per-request stats.
- The project is still early in adoption, so the real question is maintenance burden across many provider APIs over time.
DISCOVERED: 2026-04-29
PUBLISHED: 2026-04-29
AUTHOR: IntrepidAttention56