Open WebUI stack turns local GPU into ChatGPT
OPEN_SOURCE
REDDIT · 1d ago · TUTORIAL


This tutorial packages a Docker Compose stack, launched with a single command, for a local ChatGPT-style setup built from vLLM, Open WebUI, SearXNG, and Open Terminal. It is centered on a Gemma 4 quant and includes a workaround for vLLM’s current Transformers compatibility gap.
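The four-service stack described above can be sketched as a minimal Compose file. This is a sketch of the general shape, not the tutorial's actual file: the image tags and ports are each project's defaults, the model ID is a placeholder for the Gemma 4 quant, and the Open Terminal service is omitted because its packaging varies.

```yaml
# Sketch only -- NOT the tutorial's compose file. Image tags, ports,
# and the model placeholder are assumptions.
services:
  vllm:
    image: vllm/vllm-openai:latest        # vLLM's OpenAI-compatible server image
    command: ["--model", "YOUR_GEMMA4_QUANT_HERE"]  # placeholder model ID
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia              # stack is NVIDIA/GPU-centric
              count: all
              capabilities: [gpu]
    ports:
      - "8000:8000"

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      # Point the chat UI at vLLM's OpenAI-compatible endpoint
      - OPENAI_API_BASE_URL=http://vllm:8000/v1
    ports:
      - "3000:8080"
    depends_on:
      - vllm

  searxng:
    image: searxng/searxng:latest         # metasearch backend for web search
    ports:
      - "8080:8080"
```

Wiring Open WebUI to vLLM over the internal Compose network (rather than a published port) keeps the model endpoint off the host by default, which is a common choice for self-hosted stacks like this one.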

// ANALYSIS

This is a practical homelab blueprint, not a polished consumer product: if you have the GPU and patience, it gets you a surprisingly complete local assistant stack in one shot.

  • vLLM serves the model, Open WebUI provides the chat UX, SearXNG handles web search, and Open Terminal adds shell execution, so the stack covers most of the “ChatGPT plus tools” experience locally
  • The `transformers>=5.5.0` entrypoint hack is the real story here: it works, but it also signals this is still bleeding-edge plumbing
  • The `--tool-call-parser gemma4` addition matters because it makes tool use much more viable for Open WebUI and other agentic clients
  • This setup is strongly NVIDIA/GPU-centric and assumes comfortable self-hosting, so it is better described as a power-user recipe than a mainstream deployment
  • The post is useful because it collapses a multi-component local AI setup into a reproducible compose file, which is exactly the kind of friction reduction the ecosystem needs
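The two workarounds called out in the bullets above, the `transformers>=5.5.0` install and the `gemma4` tool-call parser, plausibly combine into an entrypoint override on the vLLM service. This is a guess at the shape of the hack under those assumptions, not the post's exact incantation, and the model ID is again a placeholder.

```yaml
# Sketch of the entrypoint workaround -- an assumption about its shape,
# not the post's exact file. The model ID is a placeholder.
services:
  vllm:
    image: vllm/vllm-openai:latest
    entrypoint: /bin/sh
    command:
      - -c
      - |
        # Upgrade transformers past the version the image ships with,
        # then launch the OpenAI-compatible server with Gemma tool parsing
        # so agentic clients like Open WebUI can call tools.
        pip install 'transformers>=5.5.0' && \
        exec python3 -m vllm.entrypoints.openai.api_server \
          --model YOUR_GEMMA4_QUANT_HERE \
          --tool-call-parser gemma4 \
          --enable-auto-tool-choice
```

Installing a newer dependency at container start is exactly the "bleeding-edge plumbing" the analysis flags: it works, but it adds startup latency and will silently break once the base image catches up and pins a conflicting version.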
// TAGS
open-webui · vllm · searxng · open-terminal · self-hosted · llm · search

DISCOVERED

2026-04-10 (1d ago)

PUBLISHED

2026-04-10 (1d ago)

RELEVANCE

8/10

AUTHOR

Opening-Broccoli9190