Xenith packs Alexa-like assistant into browser
REDDIT // 25d ago // OPEN SOURCE RELEASE


Xenith is an open-source, fully in-browser voice assistant platform that runs STT, TTS, VAD, and LLM inference locally via WebAssembly/WebGPU, with configurable wake words, voices, and model choice. It is currently a proof of concept, but demonstrates a credible path to private, install-free local AI assistants.
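Since everything runs locally via WebAssembly/WebGPU, the first thing such a page has to do is check whether the browser can actually support it. A minimal probe might look like the sketch below, which uses the standard WebGPU API (`navigator.gpu`, `requestAdapter`, the `shader-f16` feature); the return codes and fallback policy are illustrative assumptions, not Xenith's actual logic.

```typescript
// Capability probe for fully local in-browser inference.
// The GpuLike type mirrors the subset of the WebGPU API we touch,
// so the probe can be exercised outside a browser as well.
type GpuLike = {
  requestAdapter: () => Promise<{
    features: { has: (f: string) => boolean };
  } | null>;
};

async function probeLocalInference(gpu: GpuLike | undefined): Promise<string> {
  if (!gpu) return "no-webgpu";        // browser exposes no navigator.gpu at all
  const adapter = await gpu.requestAdapter();
  if (!adapter) return "no-adapter";   // blocklisted or software-only GPU
  if (!adapter.features.has("shader-f16")) {
    return "no-fp16";                  // FP16 kernels unavailable; LLM likely too slow
  }
  return "ok";                         // WebGPU + FP16: usable performance plausible
}
```

In a real page you would pass `navigator.gpu` and gate the assistant UI on the result, falling back to an explanatory message rather than a silently broken experience.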

// ANALYSIS

Hot take: this is the strongest argument yet that “local AI for normal users” could be a URL rather than a desktop install; browser compute limits, however, still keep it in early-adopter territory.

  • The stack is real, not hand-wavy: Whisper (STT), VITS (TTS), Silero VAD, and WebLLM are wired together in-browser.
  • Architecture choices like separate web workers and streaming/queueing for speech output show practical engineering beyond a demo clip.
  • The main constraint is hardware and browser support: WebGPU + FP16 + significant VRAM are effectively required for usable performance.
  • Model load/unload tradeoffs (to fit browser memory limits) highlight the current UX tax of fully local browser inference.
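The streaming/queueing point is worth unpacking: rather than waiting for the full LLM reply, the text stream can be chunked at sentence boundaries and fed to TTS so playback starts early. The sketch below shows that chunking logic in isolation; the class name and sentence-boundary heuristic are hypothetical, not Xenith's actual implementation.

```typescript
// Sentence-level streaming for speech output: LLM tokens accumulate in a
// buffer; whenever a complete sentence appears, it is moved to a pending
// queue that a TTS worker could drain while the LLM is still generating.
class SpeechQueue {
  private buffer = "";
  readonly pending: string[] = []; // sentences awaiting synthesis

  push(token: string): void {
    this.buffer += token;
    // Flush on sentence-ending punctuation followed by whitespace.
    const match = this.buffer.match(/^(.*?[.!?])\s+(.*)$/s);
    if (match) {
      this.pending.push(match[1]);
      this.buffer = match[2];
    }
  }

  // Call when the LLM stream ends, to speak any trailing partial sentence.
  flush(): void {
    if (this.buffer.trim()) {
      this.pending.push(this.buffer.trim());
      this.buffer = "";
    }
  }
}
```

Running this in a dedicated web worker keeps synthesis off the main thread, which is presumably why the project separates its workers in the first place.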
// TAGS
xenith · llm · speech · chatbot · edge-ai · inference · open-source

DISCOVERED

2026-03-17 (25d ago)

PUBLISHED

2026-03-17 (25d ago)

RELEVANCE

8/10

AUTHOR

cppshane