Neagari patch fixes Bonsai extraction
OPEN_SOURCE
REDDIT // 3h ago · RESEARCH PAPER

Neagari packages a gradient-free, discrete-search method for nudging PrismML's Bonsai 1.7B directly in 1-bit weight space. A tiny patch, applied as an XOR over the packed weights, stops verbatim extraction on the two demo prompts, but the held-out eval shows the effect stays narrow and does not generalize.
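The XOR-patch mechanic is simple to sketch. The actual tensor layout and patch format used by Neagari are not documented here, so the following is a minimal illustration under assumed conventions: sign bits packed 8-per-byte, and a patch given as sparse (byte offset, bit mask) pairs.

```python
import numpy as np

def apply_xor_patch(packed_weights: np.ndarray, patch: list[tuple[int, int]]) -> np.ndarray:
    """Flip selected weight bits: w' = w XOR mask. Hypothetical format, not Neagari's."""
    patched = packed_weights.copy()
    for offset, mask in patch:
        patched[offset] ^= mask  # XOR flips only the masked bits
    return patched

# Toy example: 4 bytes of packed 1-bit weights; flip bit 0 of byte 1 and bit 7 of byte 3.
weights = np.array([0b10101010, 0b11110000, 0b00001111, 0b01010101], dtype=np.uint8)
patch = [(1, 0b00000001), (3, 0b10000000)]
patched = apply_xor_patch(weights, patch)
```

Because XOR is its own inverse, applying the same patch twice restores the original weights, which is why this style of patch can be shipped and reverted without touching the native 1-bit inference path.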

// ANALYSIS

Interesting technique, but the results read more like targeted behavioral memorization than a durable model repair.

  • The appeal is operational: it preserves the native 1-bit inference path and avoids retraining, adapters, dequantization, or FP16 fallback.
  • The held-out numbers are the real story: 7.7% copy-to-pass conversion on 100 probes, with all conversions confined to the two training-target domains.
  • That makes the method useful as a proof of concept for patch search in binary weight space, not as a general-purpose fix for Bonsai-style copying failures.
  • The repo is unusually reproducible for this kind of work, with code, patches, a paper, and a Colab demo that runs on a free T4.
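For intuition, a gradient-free discrete search over bit flips can be as simple as greedy random local search: propose a few flips, keep them only if a black-box loss improves. Neagari's actual proposal distribution, loss, and search budget are not stated here, so everything below is an assumed stand-in, including the toy loss.

```python
import random

def search_patch(n_bits: int, loss_fn, iters: int = 200, flips_per_step: int = 1, seed: int = 0):
    """Greedy hill-climb over bit-flip sets. A sketch, not Neagari's procedure."""
    rng = random.Random(seed)
    patch: set[int] = set()          # indices of bits flipped relative to the base model
    best = loss_fn(patch)
    for _ in range(iters):
        candidate = set(patch)
        for _ in range(flips_per_step):
            candidate ^= {rng.randrange(n_bits)}  # toggle membership = XOR semantics
        score = loss_fn(candidate)
        if score < best:             # accept only strict improvements
            patch, best = candidate, score
    return patch, best

# Toy black-box loss: distance to a hidden target flip set (standing in for
# "does the patched model still regurgitate the extraction prompts?").
target = {3, 17, 42}
loss = lambda p: len(p ^ target)
patch, best = search_patch(n_bits=64, loss_fn=loss)
```

A greedy search like this is exactly the kind of procedure that can memorize its way past two specific prompts while leaving the broader copying behavior intact, which is consistent with the narrow held-out numbers above.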
// TAGS
neagari · llm · research · open-source · inference

DISCOVERED

2026-04-16 (3h ago)

PUBLISHED

2026-04-16 (4h ago)

RELEVANCE

8/10

AUTHOR

AddendumCheap2473