Local Home Assistant tests small-model tool calling
OPEN_SOURCE ↗
REDDIT · TUTORIAL · 19d ago


Pau Labarta Bajo built a browser-based local Home Assistant proof of concept that runs LFM2.5-1.2B-Instruct or LFM2-350M via llama.cpp behind an OpenAI-compatible API. The goal is to benchmark how reliably sub-2B models turn natural language into tool calls, with an `intent_unclear` tool handling ambiguity instead of hallucinated actions.
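A minimal sketch of how such a stack wires up, assuming a llama.cpp `llama-server` exposing the OpenAI-compatible chat-completions endpoint. The tool names, parameter schemas, and model string here are illustrative, not taken from the original project:

```python
import json

# Hypothetical tool schema passed to an OpenAI-compatible
# /v1/chat/completions endpoint. intent_unclear is exposed as just
# another tool, so refusal is an explicit, parseable choice.
TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "set_light",
            "description": "Turn a light on or off in a named room.",
            "parameters": {
                "type": "object",
                "properties": {
                    "room": {"type": "string"},
                    "state": {"type": "string", "enum": ["on", "off"]},
                },
                "required": ["room", "state"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "intent_unclear",
            "description": "Call this when the request is ambiguous or unsupported.",
            "parameters": {
                "type": "object",
                "properties": {"reason": {"type": "string"}},
                "required": ["reason"],
            },
        },
    },
]

def build_request(user_text: str, model: str = "LFM2-350M") -> dict:
    """Build the JSON body to POST to the local chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_text}],
        "tools": TOOLS,
        # Force a tool call; "I can't act" must go through intent_unclear
        # rather than free-form text.
        "tool_choice": "required",
    }

payload = json.dumps(build_request("turn off the kitchen light"))
```

Because `intent_unclear` is in the tool list, the benchmark can grade refusals the same way it grades actions: by checking which function the model selected.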

// ANALYSIS

The smart part here isn’t the dashboard; it’s the refusal path. Small models don’t just need better prompts; they need a clean way to say “I can’t safely act” before they invent a room, device, or intent.

  • `intent_unclear` is the key pattern: explicit refusal beats forced tool selection when the request is ambiguous or unsupported.
  • The local stack, `llama.cpp` plus an OpenAI-compatible endpoint, makes the demo reproducible and easy to inspect on personal hardware.
  • Benchmark-first thinking is the right move here; you need a baseline before fine-tuning can prove it actually helps.
  • This is more useful as an agentic evaluation harness than as a smart-home product, and that’s why it matters.
// TAGS
local-home-assistant · llm · agent · inference · automation · self-hosted · testing · benchmark

DISCOVERED

2026-03-24

PUBLISHED

2026-03-23

RELEVANCE

8 / 10

AUTHOR

PauLabartaBajo