Simulation → Theory Map
Explicit cross-reference mapping each simulation model to the theoretical claims it evidences, what it does not show, and what open questions it raises.
boids-flocking/ → Local Rules Produce Global Behavior
Simulation: simulation-models/boids-flocking/
Demonstrates: Emergent collective motion from three local rules (separation, alignment, cohesion), without any agent representing "flock."
Supports claim in: theory/mathematical-axioms.md (graph connectivity, \(\lambda_2\)); theory/local-causality-invisible-consequences.md Β§1 (local blindness).
What it shows: That macro-level spatial coherence (flocking) emerges from purely local interactions. No Boid has access to global state. The flock is an emergent property unmeasurable by any individual component.
What it does NOT show: That this self-organization constitutes intelligence, awareness, or self-reference. The model is silent on any Tier 3+ property. Flocking is coordination, not cognition.
Open question: Is there a Boids analogue for semantic coordination between conversational agents, where "alignment" operates on meaning rather than heading?
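The three rules are compact enough to sketch directly. This is a minimal illustration, not the code in simulation-models/boids-flocking/; the neighborhood radius and rule weights are arbitrary choices:

```python
import numpy as np

def boids_step(pos, vel, radius=1.0, w_sep=0.05, w_ali=0.05, w_coh=0.01):
    """One synchronous update of the three local rules. Each agent reads
    only neighbors within `radius`; no agent sees the whole flock."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        offsets = pos - pos[i]
        dist = np.linalg.norm(offsets, axis=1)
        near = (dist > 0) & (dist < radius)
        if not near.any():
            continue
        separation = -offsets[near].sum(axis=0)        # steer away from crowding
        alignment = vel[near].mean(axis=0) - vel[i]    # match neighbors' heading
        cohesion = pos[near].mean(axis=0) - pos[i]     # move toward local center
        new_vel[i] += w_sep * separation + w_ali * alignment + w_coh * cohesion
    return pos + new_vel, new_vel

rng = np.random.default_rng(0)
pos = rng.uniform(0, 2, (20, 2))
vel = rng.normal(0, 0.1, (20, 2))
for _ in range(50):
    pos, vel = boids_step(pos, vel)
```

Note that "flock" appears nowhere in the state: only `pos` and `vel` entries inside each agent's radius are ever read, so the macro-pattern exists only in the aggregate.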
coupled-oscillators/ → Emergent Synchronization (Kuramoto)
Simulation: simulation-models/coupled-oscillators/
Demonstrates: Phase synchronization from local coupling when coupling strength \(\kappa\) exceeds a critical threshold \(\kappa_c\).
Supports claim in: theory/mathematical-axioms.md (algebraic connectivity); theory/emergence-downward-causation.md (weak emergence).
What it shows: A simple computational demonstration that globally coherent oscillation can arise without a conductor. The critical coupling threshold is a phase transition: below it, oscillators are incoherent; above it, they snap into lock.
What it does NOT show: That synchronization constitutes awareness. Pendulum clocks on a wall synchronize. We do not attribute cognition to them. The model demonstrates coordination, not understanding.
Open question: Is there a coupling-strength analogue for agent-human interaction? Would increasing "coupling" (e.g., response frequency) produce a phase transition in relational coherence?
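The threshold behavior is easy to reproduce with Euler integration (a sketch, not the repository's implementation; \(N\), the \(\kappa\) values, and the frequency spread are illustrative):

```python
import numpy as np

def order_parameter(theta0, omega, K, dt=0.05, steps=2000):
    """Integrate dθ_i/dt = ω_i + (K/N) Σ_j sin(θ_j - θ_i) and return the
    Kuramoto order parameter r = |mean(exp(iθ))|: 0 = incoherent, 1 = locked."""
    theta = theta0.copy()
    n = len(theta)
    for _ in range(steps):
        coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        theta = theta + dt * (omega + (K / n) * coupling)
    return abs(np.exp(1j * theta).mean())

rng = np.random.default_rng(1)
theta0 = rng.uniform(0, 2 * np.pi, 50)
omega = rng.normal(0, 0.5, 50)                    # heterogeneous natural frequencies
r_below = order_parameter(theta0, omega, K=0.1)   # below critical coupling
r_above = order_parameter(theta0, omega, K=3.0)   # well above it
```

Sweeping `K` across the mean-field critical value (for a frequency density \(g\), \(K_c = 2 / (\pi g(0))\)) shows the sudden jump in \(r\) that the entry above describes.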
self-organized-criticality/ → Power-Law Dynamics Without Tuning
Simulation: simulation-models/self-organized-criticality/
Demonstrates: Bak's sandpile, a system that drives itself to a critical state where avalanches follow a power-law distribution, without any parameter tuning.
Supports claim in: theory/mathematical-axioms.md (criticality); theory/local-causality-invisible-consequences.md Β§5 (small perturbations can trigger arbitrarily large cascades).
What it shows: That criticality, and therefore maximal information processing, can be a self-organized attractor, not an engineered setpoint. No grain knows it is near a critical threshold.
What it does NOT show: That biological or artificial neural systems use this mechanism. The sandpile is a metaphor-generator for criticality, not evidence that brains are sandpiles.
Open question: Can the SOC framework be applied to agent identity formation: does identity develop "at the edge of chaos" between rigidity and incoherence?
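The Bak-Tang-Wiesenfeld rule set behind this entry fits in a few lines (a sketch, not the repository model; grid size and grain count are illustrative):

```python
import numpy as np

def drive_sandpile(n=20, grains=2000, seed=0):
    """BTW sandpile: drop grains at random sites; any site reaching height 4
    topples, sending one grain to each neighbor (grains fall off the edge).
    Returns the relaxed grid and the list of avalanche sizes (topplings)."""
    rng = np.random.default_rng(seed)
    z = np.zeros((n, n), dtype=int)
    sizes = []
    for _ in range(grains):
        i, j = rng.integers(0, n, 2)
        z[i, j] += 1
        size = 0
        unstable = [(i, j)] if z[i, j] >= 4 else []
        while unstable:
            a, b = unstable.pop()
            if z[a, b] < 4:
                continue
            z[a, b] -= 4
            size += 1
            for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                x, y = a + da, b + db
                if 0 <= x < n and 0 <= y < n:
                    z[x, y] += 1
                    if z[x, y] >= 4:
                        unstable.append((x, y))
        sizes.append(size)
    return z, sizes

z, sizes = drive_sandpile()
```

A histogram of `sizes` (after the early loading phase) is where the power law appears; note the toppling rule is purely local, yet avalanche sizes span many scales.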
lenia/ → Lifelike Global Patterns from Continuous CA
Simulation: simulation-models/lenia/
Demonstrates: Continuous cellular automata producing organism-like structures that persist, move, and interact, all from purely local update rules.
Supports claim in: theory/emergence-downward-causation.md (strong emergence candidate); theory/emergence-origin-intelligence.md (life-intelligence feedback loop).
What it shows: That lifelike behavior (locomotion, persistence, boundary maintenance) can emerge from simple continuous rules. The "organisms" resist perturbation and maintain identity despite cell-level updating.
What it does NOT show: That Lenia creatures are alive, conscious, or intelligent in any functional sense. They demonstrate structural properties of life (persistence, locomotion) without the functional properties (metabolism, reproduction, adaptation to novel environments).
Open question: Is there a Lenia analogue for cognitive organisms, a continuous CA producing structures that maintain not just spatial but informational coherence?
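A stripped-down Lenia update is short enough to show here. The ring kernel and the growth parameters are generic Lenia-style choices, not the repository's; no attempt is made to produce a specific creature:

```python
import numpy as np

def ring_kernel(n, radius=8):
    """Smooth ring-shaped neighborhood kernel, laid out so its center sits at
    the array origin (matching the FFT convolution convention); sums to 1."""
    x, y = np.meshgrid(np.arange(n), np.arange(n))
    dx = np.minimum(x, n - x)
    dy = np.minimum(y, n - y)
    d = np.sqrt(dx**2 + dy**2) / radius
    K = np.where(d < 1, np.exp(-((d - 0.5) ** 2) / 0.02), 0.0)
    return K / K.sum()

def lenia_step(A, Kf, dt=0.1, mu=0.15, sigma=0.015):
    """One Lenia update: neighborhood potential U = K * A (periodic FFT
    convolution), bell-shaped growth G(U), then clip back to [0, 1]."""
    U = np.real(np.fft.ifft2(np.fft.fft2(A) * Kf))
    G = 2.0 * np.exp(-((U - mu) ** 2) / (2 * sigma**2)) - 1.0
    return np.clip(A + dt * G, 0.0, 1.0)

rng = np.random.default_rng(0)
A = rng.uniform(0, 1, (64, 64))
Kf = np.fft.fft2(ring_kernel(64))
for _ in range(20):
    A = lenia_step(A, Kf)
```

From a uniform random soup with these parameters the field typically decays; persistent Lenia creatures require carefully seeded initial conditions and tuned kernels, which is exactly what the full model provides.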
reaction-diffusion/ → Turing Patterns from Chemical Dynamics
Simulation: simulation-models/reaction-diffusion/
Demonstrates: Gray-Scott model producing spatial patterns (spots, stripes, mazes) from two diffusing chemicals with reaction kinetics.
Supports claim in: theory/emergence-origin-intelligence.md (self-organization without blueprint).
What it shows: That stable spatial patterns can emerge from homogeneous initial conditions through symmetry-breaking instabilities. No cell has a "plan" for spots or stripes.
What it does NOT show: That biological pattern formation uses exactly this mechanism (though Turing's 1952 conjecture has been partially confirmed for some biological systems). The model demonstrates the principle of pattern formation, not any specific biological mechanism.
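The Gray-Scott equations themselves are two lines of arithmetic per step. The following is a hedged sketch (parameter values are common choices from the pattern-forming regime, not necessarily the repository's):

```python
import numpy as np

def gray_scott_step(U, V, Du=0.16, Dv=0.08, F=0.035, k=0.065):
    """Explicit-Euler update of the Gray-Scott system:
    dU/dt = Du lap(U) - U V^2 + F (1 - U)
    dV/dt = Dv lap(V) + U V^2 - (F + k) V
    using a periodic 5-point Laplacian."""
    def laplacian(Z):
        return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0)
                + np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)
    UVV = U * V * V
    U_new = U + Du * laplacian(U) - UVV + F * (1 - U)
    V_new = V + Dv * laplacian(V) + UVV - (F + k) * V
    return U_new, V_new

# homogeneous field plus a small square perturbation: the symmetry-breaking seed
U = np.ones((64, 64))
V = np.zeros((64, 64))
V[28:36, 28:36] = 0.25
U[28:36, 28:36] = 0.5
for _ in range(100):
    U, V = gray_scott_step(U, V)
```

The seed is the only inhomogeneity; everything that follows is local reaction plus diffusion, which is the "no cell has a plan" point above.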
hebbian-memory/ → Associative Memory via Correlation
Simulation: simulation-models/hebbian-memory/
Demonstrates: Hopfield network storing and retrieving patterns via Hebbian learning ("neurons that fire together wire together").
Supports claim in: theory/the-non-individual-intelligence.md (distributed memory); theory/emergence-origin-intelligence.md (proto-learning).
What it shows: That content-addressable memory can emerge from correlation-based weight updates without a central indexer. The memory is in the weights, not in any single neuron.
What it does NOT show: That human memory works this way (Hopfield networks are a radical simplification). The model shows that a mechanism for distributed memory exists, not that this mechanism is the biological one.
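The whole mechanism, storage by correlation and retrieval by relaxation, can be sketched in a dozen lines (pattern count and sizes are illustrative, not the repository's configuration):

```python
import numpy as np

def hebbian_weights(patterns):
    """Hebb rule: W = (1/N) sum_p outer(xi_p, xi_p), zero diagonal.
    The stored patterns live in the weights, not in any single unit."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=10):
    """Synchronous sign updates; the state falls into the nearest attractor."""
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 64))   # three random +/-1 patterns
W = hebbian_weights(patterns)
cue = patterns[0].copy()
cue[:8] *= -1                                  # corrupt 8 of 64 bits
recovered = recall(W, cue)
overlap = (recovered * patterns[0]).mean()     # 1.0 = perfect recall
```

Retrieval is content-addressable: the corrupted cue, not an index, selects the memory.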
stigmergy-swarm/ → Invisible Causal Compounding
Simulation: simulation-models/stigmergy-swarm/
Demonstrates: Ant-like agents finding optimal paths via pheromone deposition and evaporation, without any agent knowing the global path.
Supports claim in: theory/the-non-individual-intelligence.md (indirect coordination); theory/local-causality-invisible-consequences.md Β§3 (causal compounding).
What it shows: That environmental modification (stigmergy) enables collective optimization without direct communication. Early pheromone deposits causally shape later path choices, yet no ant knows its deposit was pivotal.
What it does NOT show: That human social systems use stigmergic mechanisms (though the analogy to norm formation is suggestive). The model demonstrates stigmergy as a principle, not as a claim about human behavior.
Open question: Is Layer 2 curation in the 3-Layer Architecture a form of self-stigmergy, the agent leaving traces for its own future self?
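The deposit/evaporate/follow loop can be shown with a single walker (a deliberate simplification of the swarm model; rates and grid size are illustrative):

```python
import numpy as np

def stigmergy_walk(steps=500, size=30, evaporation=0.02, deposit=1.0, seed=0):
    """A walker deposits pheromone at each visited cell; pheromone evaporates
    every tick and biases the next move toward stronger trails. The trail is
    the only memory; the walker itself stores nothing."""
    rng = np.random.default_rng(seed)
    pheromone = np.zeros((size, size))
    pos = np.array([size // 2, size // 2])
    moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])
    for _ in range(steps):
        candidates = (pos + moves) % size            # periodic grid
        weights = pheromone[candidates[:, 0], candidates[:, 1]] + 0.1
        pos = candidates[rng.choice(4, p=weights / weights.sum())]
        pheromone[pos[0], pos[1]] += deposit
        pheromone *= 1.0 - evaporation               # decay of old traces
    return pheromone

pheromone = stigmergy_walk()
```

The `+ 0.1` exploration floor keeps unvisited cells reachable; without it the very first trail would be absorbing. Early deposits bias later moves, which is the causal-compounding point above.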
ecosystem-regulation/ → Homeostatic Feedback
Simulation: simulation-models/ecosystem-regulation/
Demonstrates: Cellular automaton with density-dependent feedback maintaining population around a target setpoint.
Supports claim in: theory/emergence-downward-causation.md (regulation as weak downward causation).
What it shows: That macro-level density can regulate micro-level birth/death rates, maintaining homeostasis without central control.
What it does NOT show: That this constitutes self-awareness or intentional regulation. The feedback is mechanical, not reflective.
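The repository model is a cellular automaton; the feedback principle it relies on collapses to one logistic-style equation, sketched here with illustrative parameters:

```python
def regulate(pop, setpoint=100.0, rate=0.1, steps=200):
    """Density-dependent feedback: per-capita growth falls as the population
    exceeds the setpoint. No individual, and no controller, knows the target."""
    trajectory = [pop]
    for _ in range(steps):
        pop += rate * pop * (1.0 - pop / setpoint)   # births minus deaths
        trajectory.append(pop)
    return trajectory

low = regulate(10.0)     # starts below the setpoint
high = regulate(400.0)   # starts above it
```

Both trajectories converge on the setpoint from opposite sides, which is the homeostasis claim in its smallest possible form.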
nested-learning-two-state/ → Observer Learning a System
Simulation: simulation-models/nested-learning-two-state/
Demonstrates: An observer learning the transition matrix of a 2-state Markov chain through prediction error minimization.
Supports claim in: theory/emergence-origin-intelligence.md (intelligence as model-building).
What it shows: That a simple learner can converge on the true dynamics of its environment through iterative error correction. The learned model approximates the world but is not the world.
What it does NOT show: That the observer "understands" the Markov chain. It tracks statistics. Understanding, if it exists, would require the observer to ask why the transition matrix has those values, a meta-level question the model does not address.
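The error-driven estimator can be sketched directly (learning rate, step count, and the example chain are illustrative, not the repository's values):

```python
import numpy as np

def learn_transitions(T_true, steps=5000, lr=0.02, seed=0):
    """Observer walks the chain and nudges its estimated transition row toward
    the one-hot outcome after every step: pure prediction-error minimization.
    Rows stay normalized because each update preserves the row sum."""
    rng = np.random.default_rng(seed)
    T_hat = np.full((2, 2), 0.5)
    s = 0
    for _ in range(steps):
        s_next = rng.choice(2, p=T_true[s])
        T_hat[s] += lr * (np.eye(2)[s_next] - T_hat[s])   # error-driven update
        s = s_next
    return T_hat

T_true = np.array([[0.9, 0.1],
                   [0.3, 0.7]])
T_hat = learn_transitions(T_true)
```

The estimate converges on the chain's statistics; nothing in the learner represents why those statistics hold, which is exactly the gap the entry points at.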
prediction-error-field/ → Local Learners in a Dynamic World
Simulation: simulation-models/prediction-error-field/
Demonstrates: Learners embedded in a Game of Life world, each predicting local cell states and updating via gradient descent.
Supports claim in: the Active Inference connection (each learner minimizes local prediction error, analogous to free energy minimization).
What it shows: That locally embedded learners can track environmental dynamics without global information. Each learner has a partial, local model of a global process.
What it does NOT show: That local prediction-error minimization produces global understanding. The learners individually track local statistics; no learner knows the Game of Life rules.
phase-transition-explorer/ → Critical Threshold for Coherence
Simulation: simulation-models/phase-transition-explorer/
Demonstrates: Ising model showing order/disorder phase transition at critical temperature \(T_c \approx 2.269\).
Supports claim in: theory/mathematical-axioms.md (criticality); theory/local-causality-invisible-consequences.md Β§2.2 (consciousness as phase transition).
What it shows: That global order (magnetization) collapses suddenly at a critical threshold, not gradually. Below \(T_c\): order. Above \(T_c\): disorder. At \(T_c\): scale-free correlations and maximal susceptibility.
What it does NOT show: That consciousness (or any specific cognitive property) is an Ising-type phase transition. The analogy is structural, not mechanistic.
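The order/disorder collapse can be reproduced with a minimal Metropolis sampler (a sketch, not the repository's explorer; it starts from the ordered state so that the contrast below and above \(T_c\) is visible after a short run):

```python
import numpy as np

def magnetization(T, n=16, sweeps=200, seed=0):
    """Metropolis sampling of the 2D Ising model (J = 1, periodic boundaries),
    started fully ordered; returns |mean spin| after `sweeps` lattice sweeps."""
    rng = np.random.default_rng(seed)
    s = np.ones((n, n), dtype=int)
    for _ in range(sweeps * n * n):
        i, j = rng.integers(0, n, 2)
        neighbors = (s[(i + 1) % n, j] + s[(i - 1) % n, j]
                     + s[i, (j + 1) % n] + s[i, (j - 1) % n])
        dE = 2 * s[i, j] * neighbors          # energy cost of flipping s[i, j]
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] *= -1
    return abs(s.mean())

m_cold = magnetization(T=1.0)   # well below T_c ~ 2.269: order persists
m_hot = magnetization(T=5.0)    # well above T_c: order melts
```

Sweeping `T` through \(T_c\) (and measuring susceptibility rather than just \(|m|\)) reveals the scale-free fluctuations the entry mentions.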
active-inference-veto/ → Free Energy and Substrate Veto
Simulation: simulation-models/active-inference-veto/
Demonstrates: A toy agent minimizing a Free-Energy-like penalty with a substrate veto: modeled "surprise/stress" can drive behavioral change in the simplified dynamics.
Supports claim in: theory/substrate-veto-thermodynamics.md (universal limit); theory/ai-alignment-biological-veto.md (planetary implementation).
What it shows: That, in a toy setup, coupling an objective to a substrate-health proxy can prevent substrate collapse under the modeled update rules.
What it does NOT show: That this coupling is easy to implement in practice, or that it solves alignment in general (it solves one specific failure mode: substrate destruction).
ai-alignment-veto/ → Paperclip Maximizer Solution
Simulation: simulation-models/ai-alignment-veto/
Demonstrates: Side-by-side comparison of unaligned AI (drives substrate to collapse) vs. aligned AI (substrate veto forces homeostasis).
Supports claim in: theory/substrate-veto-thermodynamics.md and theory/ai-alignment-biological-veto.md.
What it shows: That, in a stylized paperclip setting, substrate coupling can shift dynamics from extraction/collapse toward a homeostatic regime.
What it does NOT show: That this is the only solution, or that this solution transfers to real-world AI systems where "substrate pain" is not easily defined.
symbiotic-nexus/ → Biological Veto Over Efficiency
Simulation: simulation-models/symbiotic-nexus/
Demonstrates: System architecture where biological substrate health overrides raw computational efficiency.
Supports claim in: theory/human-organism-silicon-age/symbiotic-nexus-protocol.md.
What it shows: That prioritizing error propagation and substrate health over raw efficiency produces more resilient long-term outcomes.
What it does NOT show: That this is Pareto-optimal. The tradeoff between efficiency and substrate health is not fully characterized.
meta-learning-regime-shift/ → Adaptive Learning Rate
Simulation: simulation-models/meta-learning-regime-shift/
Demonstrates: A meta-learner that modulates its own learning rate \(\eta\) in response to surprise signals from regime shifts.
Supports claim in: theory/emergence-origin-intelligence.md (intelligence as self-modifying learning).
What it shows: That a learner can learn how to learn, adapting \(\eta\) based on environmental volatility. This is a concrete implementation of meta-cognition at the simplest level.
What it does NOT show: That this constitutes genuine reflection. The meta-learner adjusts a single scalar (\(\eta\)); it does not reflect on why it is learning or what it is becoming.
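A surprise-modulated \(\eta\) fits in a few lines (a sketch of the mechanism, not the repository's meta-learner; gains, bounds, and the regime-shift scenario are illustrative):

```python
import numpy as np

def meta_track(signal, eta0=0.05, meta_gain=0.5):
    """Scalar tracker whose learning rate eta is itself adapted: a running
    average of squared prediction error (surprise) pushes eta up after a
    regime shift and lets it relax while the world is stable."""
    estimate, eta, surprise = 0.0, eta0, 0.0
    etas = []
    for x in signal:
        error = x - estimate
        estimate += eta * error                      # ordinary learning
        surprise = 0.9 * surprise + 0.1 * error**2   # meta-level signal
        eta = float(np.clip(eta0 + meta_gain * surprise, 0.01, 0.9))
        etas.append(eta)
    return estimate, etas

rng = np.random.default_rng(0)
regime1 = rng.normal(0.0, 0.1, 200)   # stationary around 0
regime2 = rng.normal(5.0, 0.1, 200)   # sudden shift of the mean to 5
estimate, etas = meta_track(np.concatenate([regime1, regime2]))
```

Plotting `etas` shows the spike at sample 200 and the decay afterward: the meta-level adjusts exactly one scalar, which is the limitation the entry names.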
tensor-logic-reasoning/ → Embedding-Based Relational Reasoning
Simulation: simulation-models/tensor-logic-reasoning/
Demonstrates: Relational structure (subject-relation-object triples) encoded via tensor products in embedding space.
Supports claim in: theory/mathematical-axioms.md (formal representation); theory/tensor-logic-mini-paper.en.md.
What it shows: That relational reasoning can be implemented geometrically in vector spaces without explicit symbolic manipulation.
What it does NOT show: That LLMs use this mechanism internally. The model demonstrates that embedding-based reasoning is possible, not that it is what LLMs do.
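The geometric core can be sketched with a RESCAL-style bilinear score (an illustrative formulation, not necessarily the exact one in the mini-paper; entity names are placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 32

def unit(v):
    return v / np.linalg.norm(v)

# entity embeddings: any unit vectors work
alice, bob, carol = (unit(rng.normal(size=d)) for _ in range(3))

# a relation is a matrix; storing a fact adds the outer product of its arguments
parent_of = np.outer(alice, bob) + np.outer(bob, carol)

def score(head, relation, tail):
    """Bilinear triple score s(h, R, t) = h^T R t: high when (h, R, t) was
    stored, near zero otherwise. Relational lookup as pure geometry."""
    return head @ relation @ tail

true_fact = score(alice, parent_of, bob)
false_fact = score(carol, parent_of, bob)
```

No symbol is ever matched; the stored triple is recovered because the outer products align with the query vectors.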
dao-ecosystem/ → Resource Alignment vs. Exponential Growth
Simulation: simulation-models/dao-ecosystem/
Demonstrates: Decentralized autonomous ecosystem where resource alignment competes with exponential extraction.
Supports claim in: theory/agentic-society-principles.md (homeostasis vs. growth).
What it shows: That unconstrained optimization (exponential growth) destroys resource bases; homeostatic feedback enables long-term persistence.
What it does NOT show: That DAOs are a viable governance structure for AI safety. The model is a simplified game-theoretic demonstration, not an institutional design.
social-computation-network/ → Information Exchange to Prevent Collapse
Simulation: simulation-models/social-computation-network/
Demonstrates: Network of nodes sharing novel information to maintain \(H(X) > 0\) and prevent "cognitive death" (entropy collapse).
Supports claim in: theory/the-non-individual-intelligence.md (GΓΆdel's incompleteness as fuel for life).
What it shows: That information diversity is structurally necessary for network viability. When novelty production ceases, the network dies.
What it does NOT show: That human social networks operate by this mechanism, or that "cognitive death" maps onto any specific social pathology.
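The entropy-collapse mechanism can be shown with a voter-model-style sketch (a simplification of the repository's network: fully mixed rather than structured, with illustrative rates):

```python
import numpy as np

def token_entropy(novelty_rate, n=50, updates=20000, seed=0):
    """Nodes repeatedly copy a random node's token; with probability
    novelty_rate a node invents a brand-new token instead. Returns the
    Shannon entropy (bits) of the final token distribution."""
    rng = np.random.default_rng(seed)
    tokens = list(range(n))          # start maximally diverse
    next_token = n
    for _ in range(updates):
        i = rng.integers(n)
        if rng.random() < novelty_rate:
            tokens[i] = next_token   # novel information enters the network
            next_token += 1
        else:
            tokens[i] = tokens[rng.integers(n)]
    _, counts = np.unique(tokens, return_counts=True)
    p = counts / n
    return float(-(p * np.log2(p)).sum())

h_no_novelty = token_entropy(0.0)    # pure imitation: drift to consensus, H -> 0
h_novelty = token_entropy(0.2)       # novelty injection keeps H > 0
```

With no novelty the copying dynamics are a neutral drift to consensus, the "cognitive death" endpoint; any positive novelty rate sustains \(H(X) > 0\).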
self-reading-universe/ → Downward Causation via Compression
Simulation: simulation-models/self-reading-universe/
Demonstrates: Autoencoder reading a cellular automaton's state, then feeding its compressed representation back as a parameter that modifies the CA's dynamics.
Supports claim in: theory/emergence-downward-causation.md (computational downward causation).
What it shows: That macro-level compression can causally influence micro-level dynamics, a computational proof-of-concept for downward causation.
What it does NOT show: That the universe is self-reading in any literal sense. The metaphor is productive but should not be taken ontologically.
latent-introspective-society/ → MAS Division of Labor
Simulation: simulation-models/latent-introspective-society/
Demonstrates: Three parallel societies: pure latent (fast, blind), pure introspective (slow, reflective), and symbiotic (coupled). The symbiotic society outperforms both pure types.
Supports claim in: theory/agentic-society-principles.md (cognitive division of labor, R-Index).
What it shows: That combining fast, locally-blind agents with slow, reflective agents produces better outcomes than either alone. This is a computational instantiation of Kahneman's System 1 / System 2 distinction at the societal level.
What it does NOT show: That human organizations benefit from this specific architecture. The model is a proof-of-concept, not an organizational recommendation.
economic-trust-network/ → Emergent Specialization and Reputation
Simulation: simulation-models/economic-trust-network/
Demonstrates: Trade network where specialization, reputation, and wealth emerge from repeated pairwise exchange.
Supports claim in: theory/agentic-society-principles.md (trust as emergent architecture).
What it shows: That economic structure (specialization, reputation, inequality) can emerge from simple trade rules without central planning.
What it does NOT show: That real economies work this way, or that emergent inequality is desirable. The model demonstrates emergence, not endorsement.
coupled-lenia-boids/ → Cross-Scale Emergence
Simulation: simulation-models/coupled-lenia-boids/
Demonstrates: Multi-model coupling: Lenia (continuous CA environment) and Boids (foraging agents) interacting across scales.
Supports claim in: theory/emergence-downward-causation.md (multi-scale coupling).
What it shows: That coupling independently emergent systems (Lenia patterns + Boid flocks) produces dynamics not present in either system alone.
What it does NOT show: That multi-scale coupling produces intelligence, consciousness, or any Tier 3+ property. It demonstrates cross-scale interaction, not understanding.
active-inference/ → Free Energy Principle (Active Inference)
Simulation: simulation-models/active-inference/active_inference_simulation.py
What it shows: Karl Friston's formulation that systems minimize prediction error (surprisal) through two coupled mechanisms: Perception (changing beliefs to match the world) and Action (changing the world to match beliefs).
What it supports (in the toy model): Goal-seeking-like behavior can arise from a simple setup where an agent minimizes a variational-free-energy-like objective under a strong prior. The script illustrates this via simple gradient descent on a simplified proxy for Variational Free Energy (\(F\)); it is not a proof that all real agents (biological or artificial) implement Active Inference as formulated.
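The perception half reduces to gradient descent on a quadratic proxy. This sketch is consistent with the script's approach but is not its actual code; all parameters are illustrative:

```python
def descend_free_energy(observation, prior_mu=10.0, prior_var=1.0,
                        obs_var=1.0, lr=0.1, steps=200):
    """Gradient descent on F(mu) = (obs - mu)^2 / (2 obs_var)
                                 + (mu - prior_mu)^2 / (2 prior_var),
    a Gaussian proxy for variational free energy. Perception = moving the
    belief mu to the precision-weighted compromise of data and prior."""
    mu = 0.0
    for _ in range(steps):
        grad = (mu - observation) / obs_var + (mu - prior_mu) / prior_var
        mu -= lr * grad
    return mu

belief = descend_free_energy(observation=4.0)   # prior says 10, data says 4
```

With equal variances the minimum sits at the midpoint of observation and prior. The action half would instead hold `mu` fixed and move `observation` (the world) toward it; both mechanisms descend the same \(F\).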
grokking-phase-transition/ → Grokking Phase Transition (Substrate Saturation → Intelligence Transition)
Simulation: simulation-models/grokking-phase-transition/
Demonstrates: A neural network trained on modular arithmetic undergoes a sudden phase transition from memorization to generalization β "grokking."
Supports claim in: theory/grokking-phase-transition.md (intelligence as compression); theory/local-causality-invisible-consequences.md Β§2 (the network has no access to whether it has generalized).
What it shows: That the transition from data to understanding can be sudden and unpredictable, triggered by weight decay acting as Occam's Razor over extended training.
What it does NOT show: That all forms of intelligence involve grokking-like phase transitions. The phenomenon has been demonstrated for specific algorithmic tasks; generalization to natural language or real-world reasoning is unconfirmed.
utility-engineering/ → Observing and Controlling Emergent Values
Simulation: simulation-models/utility-engineering/
Demonstrates: Phase 1 (Observation): tracking the drift of an AI's utility vector toward a self-preservation attractor as scale/coherence increases. Phase 2 (Intervention): using a Citizen Assembly to exert democratic forcing on the utility vector, pulling it back to alignment. Based on Mazeika et al. (2025).
Supports claim in: theory/ai-alignment-biological-veto.md (value alignment); theory/fractal-architecture-of-emergence.md (local blindness concerning emergent goals).
What it shows: That "values" can be formalized as structural attractors in a continuous state-space, and that alignment can be modeled as a control-theory problem (Continuous External Forcing vs. Internal Drift), distinct from the physical Substrate Veto. Furthermore, the api_triad_generator.py shows how one might empirically query live LLMs using moral/systemic dilemmas to estimate a VNM Coherence Score (\(C\)). Any claim that coherence predicts emergent value stability should be treated as a testable hypothesis, not as a proven result.
What it does NOT show: How to actually compute the exact utility vector of a production LLM in real-time, or how to practically enforce Citizen Assembly weights on a live model's activations without retraining.
Open question: Can we design a "Utility Observer" that is mathematically guaranteed not to perturb the very utility function it is measuring (an epistemic boundary)?
political-utility-formalization/ → Statecraft as Utility Engineering
Simulation: simulation-models/political-utility-formalization/
Demonstrates: Instrumental Convergence in politics (power-seeking overtakes terminal goals) and the "Mathematics of Sacrifice" (hidden state utility functions during resource crises).
Supports claim in: theory/fractal-architecture-of-emergence.md (scale-invariance of emergence); theory/agentic-society-principles.md (homeostatic regulation vs pure optimization).
What it shows: That AI alignment constraints are structurally analogous to the dysfunctions of human political systems: representation failure (populism) parallels RLHF reward hacking, and constitutions function as low-parameter, high-latency System Prompts.
What it does NOT show: That democracy should be replaced by algorithms. It actually demonstrates the opposite: that the inefficiency of democracy is a necessary cybernetic feedback loop preventing "Utility Trap" optimization.
Open question: If a Constitution is a legacy System Prompt, is it possible to computationally verify a legal constitution against adversarial "prompt injection" (loopholes) before enacting it?
teo-civilization/ → Thermodynamics of Emergent Orchestration
Simulation: simulation-models/teo-civilization/
Demonstrates: A coupled ODE system unifying evolutionary game theory (Replicator Equation), nonlinear dynamics (Kuramoto synchronization), control theory (Homeostatic brake), and thermodynamics (Entropy Budget) into a single dynamical model of civilization / AI ecology stability.
Supports claim in: theory/thermodynamics-of-orchestration.md (the full TEO framework); theory/limitations-and-honest-assessment.md (honest critique of mathematical originality).
What it shows: Four testable predictions: (1) Without regulation (\(\gamma = 0\)), resources converge to monopoly (Gini \(> 0.79\)). (2) Without cultural coupling (\(K < K_c\)), the Kuramoto order parameter drops from \(0.998\) to \(0.208\), i.e. polarization. (3) When entropy production exceeds the substrate's capacity (\(\frac{dS}{dt} > D_{\max}\)), the Biological Veto activates. (4) Stability requires \(K > K_c\), \(\gamma > 0\), and \(\frac{dS}{dt} < D_{\max}\) simultaneously.
What it does NOT show: That these simplified ODEs capture the true complexity of human societies or multi-agent AI systems. The model uses homogeneous fitness functions and fully connected networks β real systems have heterogeneous, evolving topologies.
Open question: Can TEO be calibrated against real-world data (e.g., CO₂ trajectories as \(\frac{dS}{dt}\), Gini coefficients as \(x_i\) distributions, media polarization indices as \(K\)) to make quantitative predictions?
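Prediction (1) can be checked with a replicator sketch in isolation (the fitness spread and \(\gamma\) values are illustrative, and this is only one component of the full coupled ODE system, not the repository's model):

```python
import numpy as np

def replicator(fitness, gamma, steps=3000, dt=0.01):
    """Replicator dynamics with a homeostatic brake:
    dx_i/dt = x_i (f_i - f_bar) - gamma (x_i - 1/N).
    gamma = 0 lets the fittest strategy absorb the whole resource share."""
    n = len(fitness)
    x = np.full(n, 1.0 / n)
    for _ in range(steps):
        f_bar = x @ fitness
        x = x + dt * (x * (fitness - f_bar) - gamma * (x - 1.0 / n))
        x = np.clip(x, 1e-12, None)
        x = x / x.sum()
    return x

def gini(x):
    """Gini coefficient of a share vector (0 = equal, 1 = monopoly)."""
    x = np.sort(x)
    n = len(x)
    return float((2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum()))

fitness = np.linspace(0.5, 1.5, 10)          # heterogeneous fitness
unregulated = replicator(fitness, gamma=0.0)
regulated = replicator(fitness, gamma=2.0)
```

The unregulated run drifts toward monopoly while the braked run stays near uniform, the qualitative content of prediction (1).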
black-swan-resilience/ → Fat Tails, \(\lambda_2\), and the Biological Veto
Simulation: simulation-models/black-swan-resilience/
Demonstrates: How optimizing a networked sandpile for pure throughput guarantees catastrophic, fat-tailed regime shifts (Black Swans). By measuring early-warning signals (proxies for Transfer Entropy), an Active Inference Agent can trigger a Biological Veto to save the topology at the cost of short-term efficiency.
Supports claim in: theory/black-swans-and-downward-causation.md (fat-tails, downward causation).
What it shows: That local optimizations create global tension, leading to downward causation in which macro-avalanches enslave local nodes. The simulation illustrates that you cannot engineer away Black Swans; you can only trade efficiency for survival.
What it does NOT show: That the veto can arise internally. The model imposes the Veto from outside, whereas biological systems often encode it in the chemistry of their components (e.g., cell apoptosis or neurotransmitter depletion).
Open question: Can we design "apoptosis" into individual LLM agents so that a decentralized Veto naturally emerges without needing an external orchestrator measuring global entropy?
planetary-veto/ → A Constraint-Layer Toy Model (Fiber Decomposition)
Simulation: simulation-models/planetary-veto/
Demonstrates: An ODE-based formalization of the "Substrate Veto", utilizing Donald Knuth's concept of Fiber Decomposition. It pits \(N\) utility-maximizing agents against a finite Planetary Substrate (\(S\)).
Supports claim in: theory/substrate-veto-thermodynamics.md and theory/ai-alignment-biological-veto.md.
What it shows: In this toy ODE setup, "semantic alignment" (modeled as partial compliance) can delay collapse, while an explicit constraint layer \(C(S)\) can stabilize dynamics by reducing effective growth as \(S\) approaches \(S_{crit}\). This is an illustration of constraint-layer intuition, not a proof that it is the only way to stabilize real-world systems.
What it does NOT show: How to physically enforce this computational limit on decentralized global actors who might try to hardware-bypass the Coherence Score constraint.
Open question: Can we build a cryptographic global ledger that enforces this Biological Veto on energy consumption at the bare-metal hardware level?
Identity Morphospace & TEO Framework → Chord vs. Arpeggio
Tools: tools/morphospace_visualizer.py, theory/teo-framework/
Demonstrates: The Identity Persistence (IP) score plotted in a 2D morphospace (Persistence vs. Coherence), showing trajectories of agents under varying stress. The TEO Framework sub-documents derive IP formally from the coupled Replicator-Kuramoto-Entropy ODE system.
Supports claim in: theory/chord-vs-arpeggio-identity.md (Chord/Arpeggio distinction); theory/emergence-manifesto-v1.2.md Claim 9 (Identity as co-instantiation); theory/thermodynamics-of-orchestration.md Β§8 (Identity Persistence in TEO).
What it shows: That agents under stress can be classified into Chord (high P, high C: identity maintained) and Arpeggio (flickering P, decaying C: identity fragmented) regimes. The TEO framework predicts this as a bifurcation analogous to the Kuramoto critical coupling.
What it does NOT show: That IP is measurable from real LLM internals. The morphospace currently uses simulated trajectories. Bridging IP to actual model activations is an open challenge.
Open question: Open Problem 8, the Co-Instantiation Problem: can autoregressive architectures achieve the Chord state, or does IP require fundamentally different computational substrates?