BDH fast weights add transformer memory
REDDIT · 3h ago · OPEN-SOURCE RELEASE


BDH Fast Weights is an open-source implementation of a Hebbian synaptic plasticity mechanism for the Dragon Hatchling (BDH) architecture. It enables frozen transformers to learn and persist new facts at inference time through local Hebbian updates rather than gradient descent, achieving 99% accuracy on associative recall benchmarks versus 1% for standard models. Using an asymmetric decay rule in a dedicated fast-weight buffer, encoded facts survive cold reloads and process kills with minimal cross-contamination between memories.
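The core mechanism described above can be sketched as a Hebbian outer-product write into a fast-weight buffer, with decay applied asymmetrically so that co-activated synapses retain their memory while idle ones fade. This is an illustrative toy, not the BDH code: the function name, learning rate, and decay constants are assumptions for demonstration.

```python
import numpy as np

def fast_weight_step(F, pre, post, eta=0.1, decay_active=1.0, decay_idle=0.95):
    """One hypothetical fast-weight update (illustrative sketch).

    F    -- fast-weight buffer, shape (n_post, n_pre)
    pre  -- presynaptic activations, shape (n_pre,)
    post -- postsynaptic activations, shape (n_post,)
    """
    # Hebbian write: strengthen synapses where pre and post units co-fire.
    write = eta * np.outer(post, pre)
    # Asymmetric decay: rows whose post-unit fired keep their memory intact,
    # while idle rows decay toward zero. This gates updates by co-activation,
    # limiting cross-contamination of unrelated memories.
    active_rows = (post > 0).astype(F.dtype)[:, None]
    decay = decay_active * active_rows + decay_idle * (1.0 - active_rows)
    return decay * F + write
```

With these rates, a fact written once persists in its active row while inactive rows slowly forget, which is one simple way to realize the "asymmetric decay" behavior the release describes.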

// ANALYSIS

BDH Fast Weights provides a functional "hippocampus" for LLMs, successfully decoupling episodic learning from the static backbone.

  • Implements a functional write-back mechanism that prior research had only theorized, enabling real-time association formation.
  • Superior performance over standard in-context learning for long-term fact retention in small models (15M params).
  • Bridges the gap between traditional Fast Weight Programmers and modern Test-Time Training (TTT) architectures.
  • Addresses the "salience" problem by selectively updating rows based on co-activation to preserve signal integrity.
  • High-fidelity fact encoding survives serialization, allowing "cold reloads" where the model retains learned info across sessions.
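The associative recall and "cold reload" behavior from the bullets above can be demonstrated with a minimal key-value binding in a fast-weight matrix: a fact is written once, the buffer is serialized and restored as a fresh session would do, and the fact is retrieved by key lookup. All names and the use of pickle here are assumptions for illustration, not the project's actual serialization format.

```python
import pickle
import numpy as np

rng = np.random.default_rng(0)
d = 64

# A "fact" is a key-value pair in activation space (toy stand-ins).
key = rng.standard_normal(d)
key /= np.linalg.norm(key)          # unit key makes retrieval exact
value = rng.standard_normal(d)

# Hebbian write: bind the value to the key via an outer product.
F = np.zeros((d, d))
F += np.outer(value, key)

# "Cold reload": serialize the fast-weight buffer and restore it,
# simulating a process kill followed by a new session.
F_restored = pickle.loads(pickle.dumps(F))

# Retrieval: projecting the key through the buffer recovers the value,
# since F @ key = value * (key . key) = value for a unit key.
recalled = F_restored @ key
```

Because the write is a pure matrix operation, the encoded fact survives any faithful serialization round-trip, which is what makes persistence across sessions straightforward in this design.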
// TAGS
llm · research · open-source · transformer · bdh-fast-weights · inference

DISCOVERED

3h ago

2026-04-13

PUBLISHED

3h ago

2026-04-12

RELEVANCE

8/10

AUTHOR

fleebrun83