# Meta-Learning Regime Shift
This simulation extends the nested-learning two-state model by adding regime shifts and a meta-learner that adapts the learning rate in response to prediction error.
## Idea
A two-state Markov system changes its transition probability abruptly every N steps (regime shift). Two agents learn the system dynamics side by side:
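The regime-shifting environment can be sketched as follows. This is a minimal illustration, not the original simulation code: the function name, the interpretation of the transition probability as a switch probability, and the regime schedule are all assumptions.

```python
import numpy as np

def simulate_regime_markov(n_steps=600, shift_every=200,
                           p_switch=(0.9, 0.2, 0.7), seed=0):
    """Two-state Markov chain whose switch probability jumps to a new
    value every `shift_every` steps (a regime shift)."""
    rng = np.random.default_rng(seed)
    states = np.zeros(n_steps, dtype=int)
    true_p = np.zeros(n_steps)
    state = 0
    for t in range(n_steps):
        # pick the transition probability for the current regime
        p = p_switch[(t // shift_every) % len(p_switch)]
        true_p[t] = p
        if rng.random() < p:   # switch states with probability p
            state = 1 - state
        states[t] = state
    return states, true_p
```

Cycling through a fixed tuple of probabilities keeps the shifts abrupt and reproducible, which makes the adaptation of the meta-learner easy to see against the shift boundaries.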
| Agent | Learning Rate |
|---|---|
| Fixed-LR | Constant η (baseline) |
| Meta-Learning | η is adjusted dynamically: high surprise → raise η, low surprise → lower η |
The meta-learner demonstrates Adaptive Capacity (A) from the System Intelligence Index: the ability to change one's own learning behaviour when conditions change.
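The two update rules can be sketched side by side. The multiplicative surprise rule below (scale η by an exponential of |error| minus a target surprise) and every constant in it are illustrative assumptions; the original may use a different adaptation rule.

```python
import numpy as np

def run_agents(switches, eta_fixed=0.05, eta0=0.05,
               eta_min=1e-3, eta_max=0.5, gain=0.2, target=0.25):
    """Estimate P(switch) with a fixed-LR agent and a meta-learning agent
    whose learning rate grows under high surprise and shrinks under low."""
    p_fixed = p_meta = 0.5
    eta = eta0
    hist = {"err_fixed": [], "err_meta": [], "eta": [], "p_meta": []}
    for x in switches:            # x in {0, 1}: did the chain switch?
        e_f = x - p_fixed
        e_m = x - p_meta
        # meta step: raise eta when |error| exceeds the target, lower it otherwise
        eta = float(np.clip(eta * np.exp(gain * (abs(e_m) - target)),
                            eta_min, eta_max))
        p_fixed += eta_fixed * e_f     # plain delta rule, constant eta
        p_meta += eta * e_m            # same rule, adaptive eta
        hist["err_fixed"].append(abs(e_f))
        hist["err_meta"].append(abs(e_m))
        hist["eta"].append(eta)
        hist["p_meta"].append(p_meta)
    return hist
```

Because each update is a convex combination `(1 - η)·p + η·x` with η ≤ 1, both estimates stay in [0, 1] by construction.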
## Visualisation
A 3-panel matplotlib figure:
- Prediction error over time for both agents, with regime-shift markers
- Learning rate η of the meta-learner (log scale)
- Learned vs. true transition probability â convergence tracking
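The three-panel figure could be assembled roughly as below; the function name, argument layout, and styling choices are my own, not taken from the original script.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for non-interactive use
import matplotlib.pyplot as plt

def plot_results(err_fixed, err_meta, eta, p_meta, true_p, shift_every):
    """Three stacked panels: prediction error, meta learning rate,
    and learned vs. true transition probability."""
    fig, axes = plt.subplots(3, 1, figsize=(8, 9), sharex=True)
    t = np.arange(len(err_fixed))

    axes[0].plot(t, err_fixed, label="fixed LR")
    axes[0].plot(t, err_meta, label="meta LR")
    for s in range(shift_every, len(t), shift_every):
        axes[0].axvline(s, ls="--", c="gray")  # regime-shift markers
    axes[0].set_ylabel("|prediction error|")
    axes[0].legend()

    axes[1].plot(t, eta)
    axes[1].set_yscale("log")
    axes[1].set_ylabel("η (log scale)")

    axes[2].plot(t, p_meta, label="learned p")
    axes[2].plot(t, true_p, label="true p")
    axes[2].set_ylabel("P(switch)")
    axes[2].set_xlabel("step")
    axes[2].legend()
    return fig
```

Sharing the x-axis across panels lines up the regime-shift markers with the spike in error and the subsequent jump in η.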