OPEN_SOURCE
HN · HACKER_NEWS // 26d ago // TUTORIAL
Home Assistant Voice turns reliably local
A detailed community guide shows how one user replaced Google/Nest voice control with a fully local Home Assistant Voice setup built around Home Assistant Assist, llama.cpp, Kokoro TTS, ONNX/OpenVINO speech recognition, and custom prompt engineering. The key takeaway is that reliable local voice is now possible, but only if you tune the whole stack instead of relying on default settings.
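A minimal sketch of the serving side of such a stack (the model file, context size, and port are illustrative assumptions, not values from the guide): llama.cpp's bundled `llama-server` exposes a quantized local model over an OpenAI-compatible HTTP API that Home Assistant's Assist pipeline can query.

```shell
# Sketch only: serve a quantized GGUF model via llama.cpp's built-in
# OpenAI-compatible HTTP server. Model path and settings are
# illustrative assumptions, not the post's configuration.
llama-server \
  -m ./models/local-model-q4_k_m.gguf \
  -c 8192 \
  --host 0.0.0.0 \
  --port 8080
# A conversation integration on the Home Assistant side would then point
# at http://<server>:8080/v1 as an OpenAI-compatible endpoint.
```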
// ANALYSIS
This is the most convincing case yet that open, self-hosted voice can feel good in daily use, not just in demos. It also underlines how far the ecosystem still is from true plug-and-play.
- The post is strongest where it gets specific: tested GPUs, model sizes, response times, and which STT/TTS components actually held up in real-world use.
- Speech-to-text looks like the real make-or-break layer here; the author got much better results from Parakeet-based Wyoming ONNX ASR than from weaker default local setups.
- llama.cpp plus better-quantized local models noticeably improved tool calling, context handling, and recovery from misheard commands, which is exactly where local voice assistants usually fall apart.
- Home Assistant’s extensibility shines through integrations like LLM Conversation and llm-intents, but the need for custom prompts and sentence automations shows the product still expects hobbyist-level effort.
- For AI developers, the interesting part is less “smart home” and more the stack design lesson: local UX quality comes from pipeline engineering, not just dropping in an LLM.
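The "sentence automation" effort mentioned above can be illustrated with a toy slot matcher, loosely in the spirit of Home Assistant's custom sentence triggers (the templates, intent names, and device names below are hypothetical examples, not the post's configuration, and Home Assistant's real matcher is far richer):

```python
import re

# Hypothetical sentence templates mapped to intent names; a real
# Home Assistant setup defines these in YAML, not Python.
SENTENCES = {
    "turn on the {name}": "HassTurnOn",
    "turn off the {name}": "HassTurnOff",
    "set {name} to {level} percent": "HassLightSet",
}

def compile_template(template: str) -> re.Pattern:
    # Replace each "{slot}" with a named capture group matching
    # words and spaces, e.g. "{name}" -> "(?P<name>[\w ]+)".
    regex = re.sub(r"\{(\w+)\}", r"(?P<\1>[\\w ]+)", template)
    return re.compile(rf"^{regex}$", re.IGNORECASE)

COMPILED = [(compile_template(t), intent) for t, intent in SENTENCES.items()]

def match_intent(utterance: str):
    """Return (intent, slots) for the first matching template, else None."""
    for pattern, intent in COMPILED:
        m = pattern.match(utterance.strip())
        if m:
            return intent, m.groupdict()
    return None
```

The point of the sketch is the pipeline-engineering lesson: transcribed speech only becomes a reliable command once something deterministic sits between the STT output and the LLM or intent handler.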
// TAGS
home-assistant-voice · llm · speech · self-hosted · open-source · edge-ai
DISCOVERED
26d ago
2026-03-16
PUBLISHED
26d ago
2026-03-16
RELEVANCE
7 / 10
AUTHOR
Vaslo