Frigate user hunts vision model
OPEN_SOURCE · REDDIT · INFRASTRUCTURE // 4h ago

An Intel Arc A380 owner wants a small vision model for Frigate and a few Home Assistant tasks, starting with Gemma 4 E2B in llama.cpp. The real problem is finding a setup that stays responsive enough for always-on local vision, not squeezing out benchmark wins.
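As a rough picture of the setup being described, here is a minimal sketch of starting llama.cpp's llama-server with a quantized multimodal GGUF and GPU offload. The model and projector filenames, quant level, and port are placeholders, and a GPU-enabled llama.cpp build (Vulkan or SYCL) for the Arc card is assumed rather than confirmed by the post.

```python
# Sketch: launching llama.cpp's server as a long-lived process for always-on
# local vision. File names, quant level, and port are placeholders; a GPU
# backend suitable for the Arc A380 (Vulkan or SYCL) is assumed.
import subprocess

cmd = [
    "llama-server",
    "-m", "gemma-vision-Q4_K_M.gguf",     # hypothetical quantized weights
    "--mmproj", "gemma-mmproj-f16.gguf",  # hypothetical vision projector
    "-ngl", "99",                         # offload all layers to the GPU
    "-c", "4096",                         # modest context; camera prompts are short
    "--host", "127.0.0.1",
    "--port", "8080",
]

# Keeping the server resident means each camera query pays only inference
# cost, not model-load cost -- the key to staying responsive in an automation loop.
server = subprocess.Popen(cmd)
server.wait()
```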

// ANALYSIS

This is the right framing for local AI at the edge: the best model is the one that gives useful answers fast enough to keep the automation loop intact.

  • Frigate is built around local AI object detection and Home Assistant integration, so latency and efficiency matter more than raw model size
  • Gemma 4 E2B is already a sensible edge-sized baseline, and the reported slowness suggests the A380 is the bottleneck
  • For tasks like counting lights, a smaller or more tightly scoped vision model may be a better practical fit than a general-purpose multimodal model
  • The post points to a common local-AI tradeoff: aggressive quantization and narrow prompts often matter more than chasing the strongest model family (a minimal query sketch follows this list)
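To make the last two points concrete, here is a minimal sketch of a narrowly scoped query against a local llama-server instance, timed end to end. The OpenAI-compatible /v1/chat/completions route and the image_url/data-URL message shape are assumptions about the server build, and the snapshot path and port are placeholders.

```python
# Sketch: a tightly scoped vision query ("count the lights"), timed end to end.
# Endpoint shape and multimodal message format are assumptions about the
# llama-server build; the snapshot path and port are placeholders.
import base64
import time
import requests

SERVER = "http://127.0.0.1:8080/v1/chat/completions"
PROMPT = "How many lights are switched on in this image? Answer with a single integer."

with open("camera_snapshot.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("ascii")

payload = {
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": PROMPT},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
    "max_tokens": 8,     # a narrow prompt needs only a few output tokens
    "temperature": 0.0,  # deterministic answers for automation logic
}

start = time.monotonic()
resp = requests.post(SERVER, json=payload, timeout=60)
elapsed = time.monotonic() - start

resp.raise_for_status()
answer = resp.json()["choices"][0]["message"]["content"].strip()
print(f"answer={answer!r} latency={elapsed:.1f}s")
```

The latency figure printed here is the number that matters for an always-on loop, more than any benchmark score; if it is too high, a smaller or more aggressively quantized model is the likelier fix than a stronger one.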
// TAGS
frigate · gemma-4 · home-assistant · multimodal · inference · edge-ai · gpu

DISCOVERED

4h ago

2026-04-21

PUBLISHED

8h ago

2026-04-21

RELEVANCE

5 / 10

AUTHOR

mcgeezy-e