Windows On Theory charts AI safety gaps
REDDIT · 12d ago · NEWS

Boaz Barak's March 30, 2026, post on Windows On Theory uses four "fake graphs" to argue that capabilities are still compounding quickly, that alignment is improving but not fast enough, and that society is nowhere near ready for the deployment risks. The piece's most hopeful signal is model-on-model monitoring; its bleakest is that institutions are still not keeping pace.

// ANALYSIS

Barak's take is cautious optimism with a policy hangover: technical safety is advancing, but not on a curve that makes anyone comfortable. The bigger bottleneck may now be institutional readiness rather than model science.

  • Capability growth still looks exponential, and AI-assisted AI development may be steepening the curve.
  • Alignment metrics are improving, but adversarial robustness, dishonesty, and reward hacking remain open problems.
  • Model-on-model monitoring is the best near-term technical signal, but it only works while scheming and collusion stay limited.
  • Barak rejects both the "one clever idea" narrative and the idea that an AI pause is a realistic fix.
  • The policy gap is especially stark for bio, cyber, open-source models, and democratic safeguards.
// TAGS
windows-on-theory · safety · research · llm · agent · open-source · regulation

DISCOVERED

2026-03-30 (12d ago)

PUBLISHED

2026-03-30 (12d ago)

RELEVANCE

8 / 10

AUTHOR

tekz