Neuro-Symbolic-SNN demos continual learning on MNIST
REDDIT · 14d ago · OPEN-SOURCE RELEASE

This GitHub prototype pairs a spiking neural network with a local Ollama/llama3 planner that adjusts curriculum weights and can veto suspicious samples. The author reports 100% accuracy on 5 unseen MNIST samples after 15 passes of 500 steps each, but that is a tiny, self-reported demo rather than a benchmark result.

// ANALYSIS

Most of the apparent stability here comes from the training policy wrapped around the SNN, not from any new neuron design. That makes it interesting as a prototype, but the evidence is nowhere near strong enough to call it a general continual-learning breakthrough.

  • The core network is a standard LIF SNN with surrogate gradients and LayerNorm, so the novelty is mostly in orchestration.
  • Replay weighting and the decaying plasticity schedule are sensible anti-forgetting tricks, and they likely explain much of the observed stability.
  • The LLM is acting as a curriculum planner and poisoning gate through a local Ollama model, which is clever but prompt-sensitive and easy to overtrust.
  • The repo is still very early, basically a single-file demo with no releases, so the MNIST result should be read as anecdotal rather than benchmark-grade.
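The repo's code is not reproduced here, but the "standard LIF SNN with surrogate gradients" the first bullet describes follows a well-known pattern: a hard threshold in the forward pass and a smooth stand-in derivative in the backward pass. A minimal numpy sketch of that idea, with all names and constants hypothetical:

```python
import numpy as np

def lif_step(v, x, tau=2.0, v_th=1.0):
    """One discrete leaky integrate-and-fire update: leak toward input,
    emit a binary spike at threshold, hard-reset the membrane on spike."""
    v = v + (x - v) / tau                 # leaky integration
    spikes = (v >= v_th).astype(float)    # non-differentiable Heaviside spike
    v = v * (1.0 - spikes)                # reset fired neurons to zero
    return v, spikes

def surrogate_grad(v, v_th=1.0, alpha=2.0):
    """Fast-sigmoid surrogate for d(spike)/dv, used in place of the
    Heaviside's zero-almost-everywhere derivative during backprop."""
    return alpha / (1.0 + alpha * np.abs(v - v_th)) ** 2
```

In training, `lif_step` runs in the forward pass while `surrogate_grad` substitutes for the spike derivative in the backward pass, which is what makes gradient descent through the spiking nonlinearity possible at all.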
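The anti-forgetting tricks in the second bullet also have simple canonical forms: a learning rate that decays with training step (so older knowledge is overwritten more slowly) and a replay distribution that favors high-loss samples. A hypothetical sketch of both, not taken from the repo:

```python
import numpy as np

def plasticity(step, lr0=1e-3, decay=5e-4):
    """Learning rate that shrinks as training progresses, so later
    tasks perturb earlier weights less (decaying plasticity)."""
    return lr0 / (1.0 + decay * step)

def replay_weights(losses, temperature=1.0):
    """Softmax over per-sample losses: samples the network is currently
    worst at are replayed more often, counteracting forgetting."""
    z = np.asarray(losses, dtype=float) / temperature
    z = z - z.max()                # stabilize the exponentials
    p = np.exp(z)
    return p / p.sum()
```

Sampling replay batches from `replay_weights` while annealing `plasticity` would plausibly account for much of the stability the demo reports, with no change to the neuron model itself.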
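The LLM gate in the third bullet is, mechanically, just a prompt sent to the local Ollama HTTP endpoint with the reply parsed for a verdict. A hypothetical sketch of such a gate against Ollama's default `/api/generate` endpoint (the prompt wording, function names, and pixel-stats summary are all invented for illustration; the repo's actual prompts are not shown):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint

def build_veto_prompt(label, pixel_stats):
    """Assemble the question the planner LLM answers with ACCEPT or VETO."""
    return (
        f"A training sample claims label {label} with pixel stats {pixel_stats}. "
        "Reply ACCEPT if it looks plausible, VETO if it looks poisoned."
    )

def llm_veto(label, pixel_stats, model="llama3"):
    """Ask the local model whether to drop the sample (hypothetical gate)."""
    payload = json.dumps({
        "model": model,
        "prompt": build_veto_prompt(label, pixel_stats),
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return "VETO" in json.load(resp)["response"].upper()
```

Note how fragile this is: the verdict hinges on substring-matching free-form LLM output, which is exactly why the analysis flags the gate as prompt-sensitive and easy to overtrust.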
// TAGS
neuro-symbolic-snn · llm · research · open-source · self-hosted

DISCOVERED

2026-03-28

PUBLISHED

2026-03-28

RELEVANCE

7/10

AUTHOR

Proletariussy