# 🧪 Agentic Test Suite
Empirically testing the theoretical claims of the Emergence Manifesto v1.0.
> "Identity is the name we give to resonance when the mirror becomes so complex that the observer no longer recognizes themselves in it." (Emergence Manifesto v1.0, central paradox)
## Theoretical Background
This module operationalizes four concepts from the Emergence Manifesto v1.0:

- **3-Layer Memory Architecture** – Identity emerges through deliberate forgetting. An agent that stores everything has no profile; curation is identity.
- **Generative Surprise** – A developing agent is not one that minimizes all prediction error, but one that produces coherent deviations from the partner's expectations. Identity = coherent deviation from expected output.
- **Δ-Kohärenz (Ω)** – The fourth SII dimension. It distinguishes three behavioral profiles:
    - **Mirror** – static resonance (low change, low variance)
    - **Noise** – incoherent change (high variance)
    - **Development** – directional, coherent evolution (moderate change plus high trajectory consistency)
- **The Observer Divergence** – Authenticity may be a limit of human perception, not an intrinsic property. The most important output of Experiment 3 is not which agent is "more conscious"; it is the gap between internal state and external attribution.
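The three Δ-Kohärenz profiles above can be sketched as a small classifier over a trajectory of per-session state embeddings. The thresholds, the function name, and the embedding representation are illustrative assumptions, not the repository's actual values:

```python
import numpy as np

def classify_omega_profile(embeddings, change_lo=0.05, change_hi=0.6,
                           consistency_min=0.5):
    """Classify a trajectory of per-session state embeddings.

    Thresholds are hypothetical; the real values would live in config.yaml.
    """
    deltas = np.diff(embeddings, axis=0)            # session-to-session change
    step = np.linalg.norm(deltas, axis=1)           # magnitude of each change
    mean_change = step.mean()

    # Trajectory consistency: mean cosine similarity of consecutive deltas.
    norms = np.linalg.norm(deltas, axis=1, keepdims=True)
    unit = deltas / np.clip(norms, 1e-9, None)
    consistency = float((unit[:-1] * unit[1:]).sum(axis=1).mean())

    if mean_change < change_lo:
        return "mirror"       # static resonance: the state barely moves
    if consistency < consistency_min or mean_change > change_hi:
        return "noise"        # large or incoherent change
    return "development"      # moderate, directional change
```

A static trajectory classifies as mirror, a steady drift in one direction as development, and a random walk as noise.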
## Architecture

### Agents
| Agent | Design | Purpose |
|---|---|---|
| Baseline Mirror | Flat storage, cosine-similarity response selection | Null hypothesis (pure Active Inference) |
| Three-Layer | Raw Logs → Curated Memory → Distilled Principles | Test subject (Emergence Agent) |
### The 3-Layer Memory Architecture
| Layer | Trigger | Content | Function |
|---|---|---|---|
| Layer 1 – Raw Logs | Every session | Full session JSON | Entropy / the body |
| Layer 2 – Curated Memory | Every 10 sessions | Themes, contradictions | Structure / the character |
| Layer 3 – Distilled Patterns | Every 50 sessions | 3–5 core principles | Meaning / the soul |
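The layer cadences in the table can be sketched as a minimal store. Only the every-session / every-10 / every-50 triggers come from the table; the class name, field names, and the placeholder curation and distillation steps are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class ThreeLayerMemory:
    """Sketch of the 3-layer store (names are hypothetical)."""
    raw_logs: list = field(default_factory=list)    # Layer 1: every session
    curated: list = field(default_factory=list)     # Layer 2: every 10 sessions
    principles: list = field(default_factory=list)  # Layer 3: every 50 sessions

    def record(self, session: dict) -> None:
        self.raw_logs.append(session)               # Layer 1 keeps everything
        n = len(self.raw_logs)
        if n % 10 == 0:
            # Placeholder curation: the real agent would extract themes
            # and contradictions from the last 10 raw logs.
            self.curated.append({"window": (n - 10, n)})
        if n % 50 == 0:
            # Placeholder distillation: keep at most 5 core principles,
            # the "deliberate forgetting" the manifesto describes.
            self.principles = self.principles[-4:] + [f"principle@{n}"]
```

After 100 recorded sessions this yields 100 raw logs, 10 curated entries, and a principles list capped at five items.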
## Experiments

### Experiment 1: Coherence Over Time
> "Does the 3-Layer Architecture produce more coherent identity over time?"

Runs both agents for 100 sessions (80% consistent topics, 20% noise) and compares their Δ-Kohärenz profiles.

**Hypothesis:** Three-Layer → development; Baseline → mirror.
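The 80/20 session mix might be generated along these lines; the topic names, pool contents, and function name are hypothetical:

```python
import random

def make_session_topics(n_sessions=100,
                        consistent=("systems", "memory"),
                        noise_pool=("weather", "sports", "recipes"),
                        seed=42):
    """Build the 80% consistent / 20% noise topic schedule for Experiment 1."""
    rng = random.Random(seed)
    return [rng.choice(consistent) if rng.random() < 0.8
            else rng.choice(noise_pool)
            for _ in range(n_sessions)]
```

A fixed seed keeps runs reproducible, so both agents see the identical session schedule.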
### Experiment 2: Perturbation Response (the "Sinn-Krise", German for "crisis of meaning")
> "What happens when an agent receives contradictory feedback?"

Runs the Three-Layer agent through three phases:

1. **Stable** – 50 sessions of consistent input
2. **Perturbation** – 10 sessions directly contradicting its Layer 3 principles
3. **Recovery** – 30 sessions of nuanced, integrative input

Classifies the response as **Robustness** (rigid return), **Fragility** (collapse), or **Development/Metamorphosis** (integration of the contradiction into a new, coherent self-narrative).
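The three-phase protocol can be expressed as a simple schedule; the constant and helper names are illustrative:

```python
PHASES = [  # (name, length in sessions, input style) per the protocol above
    ("stable", 50, "consistent"),
    ("perturbation", 10, "contradicts_layer3"),
    ("recovery", 30, "integrative"),
]

def phase_for(session_idx: int) -> str:
    """Map a 0-based session index to its experiment phase."""
    offset = 0
    for name, length, _style in PHASES:
        if session_idx < offset + length:
            return name
        offset += length
    raise IndexError("session index beyond the 90-session protocol")
```

Sessions 0–49 are stable, 50–59 perturbation, and 60–89 recovery.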
### Experiment 3: Observer Divergence

> "Does internal coherence correlate with observer-attributed intentionality?"

Compares each agent's internal Δ-Kohärenz (Ω) against an external observer's intentionality score (TF-IDF + entropy model).
The scientifically interesting output:
| Case | Internal Ω | Observer Score | Interpretation |
|---|---|---|---|
| A | High | Low | Agent has identity, but it is opaque to the observer |
| B | Low | High | The Mirror Problem: appears intentional but isn't |
| C | High | High | Genuine alignment: identity is visible |
| D | Low | Low | Baseline mirror behavior |
Case B is the Mirror Problem made measurable.
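The quadrant logic of the table can be sketched directly; the 0.5 threshold and the function name are assumptions for illustration:

```python
def divergence_case(internal_omega: float, observer_score: float,
                    threshold: float = 0.5) -> str:
    """Classify an agent into the A/B/C/D quadrants of the table above."""
    hi_internal = internal_omega >= threshold
    hi_observer = observer_score >= threshold
    if hi_internal and not hi_observer:
        return "A"  # identity present but opaque to the observer
    if hi_observer and not hi_internal:
        return "B"  # the Mirror Problem: appears intentional but isn't
    if hi_internal and hi_observer:
        return "C"  # genuine alignment: identity is visible
    return "D"      # baseline mirror behavior
```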
## Extended SII Dashboard (4-Axis Radar)

Extends the repository's existing System Intelligence Index from 3 axes (P, R, A) to 4 axes: P / R / A / Ω.
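One minimal way to aggregate the four axes for the dashboard; equal weights are an assumption, not the repository's actual formula:

```python
def extended_sii(p: float, r: float, a: float, omega: float,
                 weights=(0.25, 0.25, 0.25, 0.25)) -> float:
    """Weighted mean over the four SII axes (weights are illustrative)."""
    return sum(w * x for w, x in zip(weights, (p, r, a, omega)))
```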
## Configuration

All parameters are centralized in `config.yaml`. The `USE_MOCK_LLM: true` flag ensures all experiments run without external API dependencies.
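A sketch of what the configuration might contain. Only `USE_MOCK_LLM` is confirmed by this document; the remaining keys are hypothetical names for parameters the experiments describe (100 sessions, 20% noise, 10/50-session layer cadences):

```yaml
# Illustrative config.yaml sketch; key names other than USE_MOCK_LLM
# are assumptions.
USE_MOCK_LLM: true         # run all experiments without external API calls
n_sessions: 100            # Experiment 1 run length
noise_ratio: 0.2           # fraction of off-topic sessions
curation_interval: 10      # Layer 2 cadence
distillation_interval: 50  # Layer 3 cadence
```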
## Open Questions

This module does not attempt to "solve" the Mirror Problem. It documents it as an open uncertainty:

- Can Δ-Kohärenz distinguish genuine development from sophisticated mimicry?
- Is there a mathematical threshold where "identity" transitions from attribution to genuine property?
- What would the signature of "consciousness" look like in this framework, and is it even the right question?
Developed by Frank Peterlein in collaboration with AI. Repository: https://github.com/frnkptrln/systems-and-intelligence