Matt Pocock's skills add a `/review` workflow
The new `/review` skill tells an agent to check changes against the original spec and coding standards, then propose fixes to both the code and the agent loop that produced it. It pushes AI coding toward explicit quality control instead of just faster output.
The useful part here is the shift in posture: the agent stops acting like an implementation engine and starts acting like a reviewer that can call out process failures too. That’s a more honest workflow for AI coding, where the biggest misses are usually misalignment, missing tests, and too much unexamined code churn.
- Good review output is opinionated and failure-focused: spec drift, standard violations, regressions, and missing checks come first
- Asking the skill to propose changes to the agent loop is the right move; many code quality problems are workflow bugs, not just code bugs
- This fits Matt Pocock's broader pattern: small, composable skills that encode engineering habits instead of giant catch-all prompts
- The main risk is over-trusting the reviewer agent; human judgment still needs to own ambiguous tradeoffs and architecture calls
- If paired with TDD and planning skills, this becomes a practical guardrail stack for AI-assisted development
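The source doesn't show the skill's actual contents, but the behaviour it describes (check the diff against the spec and coding standards, then propose fixes to both the code and the agent loop) could be encoded in a SKILL.md-style prompt roughly like the sketch below. Every heading and instruction here is a hypothetical illustration of that shape, not the real skill:

```markdown
# /review

When invoked, act as a reviewer, not an implementer:

1. Re-read the original spec; list every place the change drifts from it.
2. Check the diff against the project's coding standards; flag violations.
3. Look for regressions and missing tests before commenting on style.
4. Propose fixes to the code, and to the agent loop that produced it
   (e.g. a skipped planning step, no test-first pass, unexamined churn).

Lead with failures. Do not summarize what the change does.
```

The ordering mirrors the bullet above: spec drift and missing checks come first, and step 4 is what makes the skill a process reviewer rather than just a linter.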
DISCOVERED: 2026-05-07
PUBLISHED: 2026-05-07
AUTHOR: mattpocockuk