AgentHandover turns screen habits into Skills
REDDIT // 4d ago // OPEN SOURCE RELEASE


AgentHandover is a macOS menu bar app that watches repeated workflows, turns them into structured Skills, and keeps improving those Skills as agents execute them. It runs locally with Ollama/Gemma 4, supports both focused recording and passive discovery, and exposes the result through MCP and CLI.
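To make "structured Skills" concrete, here is a minimal sketch of what a recorded Skill could look like as data. The field names and the example workflow are assumptions for illustration, not AgentHandover's actual schema:

```python
# Hypothetical sketch of a Skill distilled from repeated screen
# recordings -- NOT AgentHandover's real data model.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Step:
    action: str        # e.g. "open_app", "click", "type"
    target: str        # UI element or app the action applies to
    argument: str = "" # text typed, path chosen, etc.


@dataclass
class Skill:
    name: str
    steps: List[Step]         # ordered procedure, not a raw event log
    confidence: float = 0.5   # adjusted as agents execute the Skill


# An illustrative Skill an agent could replay via MCP or the CLI.
export_invoice = Skill(
    name="export-invoice-pdf",
    steps=[
        Step("open_app", "Numbers"),
        Step("click", "File > Export To > PDF"),
        Step("type", "Save dialog", "invoices/2026-04.pdf"),
    ],
)
```

The point of the structure is that each step names an intent (action plus target), which is what lets a Skill travel between agents instead of staying a pixel-level replay.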

// ANALYSIS

This is one of the more credible “teach by watching” attempts because it treats observed behavior as a source of reusable procedure, not just a transcript dump. The real test is whether it can generalize intent cleanly enough to avoid locking in accidental clicks or noisy habits.

  • The local-first stack is the strongest part: on-device vision, local embeddings, and encrypted-at-rest storage make the privacy story believable.
  • MCP integration is the right distribution layer; if the Skills are genuinely structured, they can travel across Claude Code, Cursor, Codex, and other agents without bespoke glue.
  • The self-improvement loop is the differentiator: confidence scores, deviation tracking, and failure-based demotion are more useful than static SOP export.
  • Passive discovery is the riskiest piece, because repetition is not the same as intent; without strong review UX, it could learn brittle or misleading patterns.
  • Mac-only and model-quality constraints mean this lives or dies on Apple Silicon performance and how well Gemma 4 handles screen understanding in the wild.
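The confidence-score and failure-based-demotion loop mentioned above can be sketched as a simple update rule. The function names, learning rate, and threshold are assumptions for illustration, not the app's implementation:

```python
# Hypothetical sketch of failure-based demotion: each execution
# nudges a Skill's confidence toward success (1.0) or failure (0.0);
# enough failures push it below an "active" threshold and demote it.
ACTIVE_THRESHOLD = 0.4  # assumed cutoff, not from the app


def update_confidence(confidence: float, succeeded: bool,
                      lr: float = 0.2) -> float:
    """Exponential moving average toward 1.0 on success, 0.0 on failure."""
    target = 1.0 if succeeded else 0.0
    return confidence + lr * (target - confidence)


def is_active(confidence: float) -> bool:
    """A Skill stays eligible for agent use only above the threshold."""
    return confidence >= ACTIVE_THRESHOLD


# Three consecutive failures demote a mid-confidence Skill:
c = 0.5
for outcome in [False, False, False]:
    c = update_confidence(c, outcome)
# c decays 0.5 -> 0.4 -> 0.32 -> 0.256, now below ACTIVE_THRESHOLD
```

Whatever the real scoring looks like, the design question is the same: demotion has to be gradual enough to tolerate flaky environments but fast enough that a genuinely broken Skill stops being offered to agents.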
// TAGS
agenthandover · agent · computer-use · automation · mcp · open-source · self-hosted · cli

DISCOVERED

2026-04-07 (4d ago)

PUBLISHED

2026-04-07 (4d ago)

RELEVANCE

9 / 10

AUTHOR

Objective_River_5218