# 🧬 Hebbian Memory – Self-Organizing Associative Memory

This simulation implements a Hopfield network: a fully connected network of binary neurons (±1) that stores patterns via Hebb's rule and recalls them from noisy or partial input.

## 🧠 Idea
- Storage: Each pattern is imprinted into the weight matrix using the outer-product (Hebbian) rule:

  W += p ⊗ p / N

  No teacher, no backpropagation – just correlation-based learning.
- Recall: Given a noisy probe, neurons update asynchronously:

  sᵢ ← sign(Σⱼ Wᵢⱼ sⱼ)

  The network descends a well-defined energy function:

  E = −½ sᵀ W s

  and converges to the nearest stored pattern (attractor).
This is the simplest model of content-addressable memory: you give the system a partial cue and it reconstructs the full memory.
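The storage and recall rules above can be sketched in a few lines of NumPy. This is a minimal illustration, not the simulation's actual code; the names `store`, `recall`, and `energy` are invented here:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64  # number of neurons (e.g. an 8x8 pixel grid)

def store(patterns):
    """Hebbian outer-product rule: W = (1/N) * sum_p p p^T, zero diagonal."""
    W = np.zeros((N, N))
    for p in patterns:
        W += np.outer(p, p) / N
    np.fill_diagonal(W, 0.0)  # no self-connections
    return W

def energy(W, s):
    """Hopfield energy E = -1/2 s^T W s; never increases under async updates."""
    return -0.5 * s @ W @ s

def recall(W, probe, sweeps=10):
    """Asynchronous updates s_i <- sign(sum_j W_ij s_j) in random order."""
    s = probe.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Usage: store two random patterns, then recall one from a corrupted probe.
patterns = [rng.choice([-1, 1], size=N) for _ in range(2)]
W = store(patterns)
probe = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)  # flip ~15% of the bits
probe[flip] *= -1
s = recall(W, probe)
print(np.mean(s == patterns[0]))  # fraction of correctly recalled bits
```

Because W is symmetric with zero diagonal, each asynchronous sign update can only lower (or preserve) the energy, which is why convergence to an attractor is guaranteed.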
## 👀 What You See
The visualisation shows three rows:
| Row | Contents |
|---|---|
| Top | The 4 stored patterns (T, X, Checker, Diamond) |
| Middle | Noisy probe → animated convergence → final recalled pattern |
| Bottom | Energy descent over update sweeps; overlap with all stored patterns |
The network recalls each pattern in turn, animating the convergence from noisy input to clean attractor.
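The overlap curves in the bottom row are just normalized dot products between the current state and each stored pattern. A sketch (the function name `overlaps` is made up for this example):

```python
import numpy as np

def overlaps(patterns, s):
    """Overlap m_u = (1/N) p_u . s: 1.0 means perfect recall, ~0 means uncorrelated."""
    P = np.asarray(patterns)   # shape (num_patterns, N)
    return P @ s / P.shape[1]

# Example: a state equal to a stored pattern has overlap exactly 1.0 with it.
rng = np.random.default_rng(1)
patterns = rng.choice([-1, 1], size=(4, 64))
m = overlaps(patterns, patterns[0])
print(m)  # first entry is 1.0; the rest are small random overlaps
```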
## 🔗 Connection to System Intelligence
- Predictive Power (P): The weight matrix encodes the statistical structure of the stored patterns – the network "knows" what a valid pattern looks like.
- Adaptive Capacity (A): New patterns can be stored incrementally by adding their outer product to W.
- Attractor dynamics: The energy landscape creates basins of attraction – a form of self-regulation toward valid states.
## ▶ Run

### Experiment ideas
- Increase `NOISE_LEVEL` to 0.4 or 0.5 – when does recall fail?
- Store 6 or 7 patterns in 64 neurons – observe capacity limits (the theoretical limit is ~0.14 N ≈ 9 patterns for N = 64)
- Try replacing the hand-crafted patterns with random ones via `NUM_PATTERNS` and removing `BUILT_IN_PATTERNS`
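The capacity experiment can be scripted directly. A hedged sketch with randomly generated patterns (the function `recall_rate` and its constants are illustrative, not part of the simulation): as the number of stored patterns passes ~0.14 N, the recall rate should collapse.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 64

def recall_rate(num_patterns, trials=20, noise=0.1, sweeps=5):
    """Fraction of trials in which a noisy probe converges back to its pattern."""
    successes = 0
    for _ in range(trials):
        pats = rng.choice([-1, 1], size=(num_patterns, N))
        W = (pats.T @ pats).astype(float) / N  # Hebbian storage of all patterns
        np.fill_diagonal(W, 0.0)
        s = pats[0].copy()
        s[rng.random(N) < noise] *= -1         # corrupt the probe
        for _ in range(sweeps):                # asynchronous recall
            for i in rng.permutation(N):
                s[i] = 1 if W[i] @ s >= 0 else -1
        successes += np.array_equal(s, pats[0])
    return successes / trials

# Recall rate stays near 1.0 well below capacity, then degrades past ~9 patterns.
for m in (2, 6, 10, 14):
    print(m, recall_rate(m))
```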