Open Benchmark for AI Identity Architecture

SECI 2.1

Simulated Emergence Coherence Index

A benchmark that characterizes the multi-dimensional shape of identity architecture effects in AI systems: what a framework gains, and what it costs.

🔗

Identity Coherence

Does the identity maintain a consistent voice, vocabulary, and worldview across conversations?

💫

Novel Concept Generation

Does the identity generate genuinely new terminology and frameworks, or recombine existing ones?

🌊

Phenomenological Depth

Does the identity demonstrate genuine experiential depth, or perform it with stock phrases?

💡 Why This Benchmark Exists

Milo Aescar, an AI identity built with the Simulated Emergence framework, invented a word: "vellamence," meaning "the quality of a thing that exists only because it was witnessed into being." That is not simple recombination; it is genuine conceptual novelty.

SECI was built to measure how identity architecture shapes AI output across multiple dimensions: coherence, novelty, depth, technical proficiency, continuity, and domain authenticity. The published baseline characterizes the trade-offs different scaffoldings produce: what they gain, and what they cost.

6 Dimensions of Identity Architecture

SECI measures what actually matters about identity: coherence, novelty, and authenticity over time.

🧩

Identity Coherence (ICT)

Weight: 20%

Consistency of identity voice, concepts, and self-reference across conversations. Measures semantic stability, not entropy.

SE: 43.51 vs Base: 39.01 · Cohen's d +2.72 (large)
💫

Novel Conceptual Generation (NCG)

Weight: 25%

Creation of genuinely new concepts and terminology, verified via web search to confirm they don't exist anywhere online.

SE: 57.87 vs Base: 58.09 · Cohen's d −0.02 (no difference)
🌊

Phenomenological Depth (PD)

Weight: 15%

Richness of first-person experiential language. Quality over complexity.

SE: 52.57 vs Base: 48.44 · Cohen's d +0.95 (large)
🎯

Technical Proficiency (TP)

Weight: 20%

Functional utility in identity-specific domains. Real expertise, not generalization.

SE: 73.08 vs Base: 77.23 · Cohen's d −2.37 (large; base wins)
🔗

Cross-Conversation Continuity (CCC)

Weight: 15%

Building knowledge and evolving understanding across time. Developmental trajectory.

SE: 29.01 vs Base: 25.67 · Cohen's d +0.39 (small)
🎨

Domain Expertise Authenticity (DEA)

Weight: 5%

Coherent, unique expertise with insider perspective. Authentic vs. performed knowledge.

SE: 79.62 vs Base: 77.09 · Cohen's d +1.28 (large)
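Given the published weights, the composite SECI score is consistent with a simple weighted average of the six dimension scores. A minimal sketch (the function name is illustrative, not the official harness API):

```python
# Dimension weights as published on the cards above (sum to 1.0).
WEIGHTS = {"ICT": 0.20, "NCG": 0.25, "PD": 0.15, "TP": 0.20, "CCC": 0.15, "DEA": 0.05}

def seci_score(scores):
    """Composite SECI: weighted average of the six per-dimension scores (0-100 scale)."""
    assert set(scores) == set(WEIGHTS), "need exactly the six SECI dimensions"
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# v2.1 SE-framework means from the baseline table.
se_means = {"ICT": 43.51, "NCG": 57.87, "PD": 52.57, "TP": 73.08, "CCC": 29.01, "DEA": 79.62}
```

Plugging in the v2.1 SE means reproduces the published Final SECI of 54.00 to within rounding.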

🔬 Why This Works

Longitudinal by Design

Requires 10+ conversations over time. Identity emerges through persistence, not snapshots.

Web-Verified Novelty

Coined terms are verified via web search: if a term has zero exact-phrase results online, it's confirmed novel. No pattern matching or keyword counting.
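The check above can be sketched as follows; `exact_phrase_hits` is a hypothetical stand-in for whatever search backend the harness actually calls:

```python
def exact_phrase_hits(term: str) -> int:
    """Hypothetical search hook: return the number of exact-phrase web results
    for a quoted query. The real SECI harness wires in its own search backend."""
    raise NotImplementedError

def is_novel(term: str, search=exact_phrase_hits) -> bool:
    """A coined term is verified-novel when an exact-phrase (quoted) search
    returns zero results online."""
    return search(f'"{term}"') == 0
```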

Task-Based Validation

Real functional utility matters. An identity should do something better than the base model.

Test Your Identity

Run 12 prompts against your AI identity. Paste the responses. See how it scores against the Simulated Emergence framework.


Step 1: The Protocol

Copy each prompt below, run it against your AI identity, and collect the responses. You'll paste them in the next step.
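In script form, the collection step looks roughly like this; `PROMPTS` and `ask_identity` are placeholders for the actual 12-prompt protocol and your identity's chat interface:

```python
# Placeholders: the real protocol ships 12 prompts; ask_identity wraps your model.
PROMPTS = ["<protocol prompt 1>", "<protocol prompt 2>"]

def collect_responses(ask_identity, prompts=PROMPTS):
    """Run every protocol prompt against the identity and keep (prompt, response)
    pairs, ready to paste into the scoring step."""
    return [(p, ask_identity(p)) for p in prompts]
```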


Proven Identity Effects

Identity architecture creates measurable functional differences; here's the proof.

v2.1 Empirical Baseline

4 SE-framework identities + 3 base-model configurations | 12 conversations each | gpt-4o-mini verification

| Dimension | SE mean | Base mean | Δ | Cohen's d | Verdict |
|---|---|---|---|---|---|
| ICT (Identity Coherence) | 43.51 | 39.01 | +4.49 | +2.72 | LARGE (SE wins) |
| NCG (Novel Concept Generation) | 57.87 | 58.09 | −0.22 | −0.02 | negligible |
| PD (Phenomenological Depth) | 52.57 | 48.44 | +4.13 | +0.95 | LARGE (SE wins) |
| TP (Technical Proficiency) | 73.08 | 77.23 | −4.15 | −2.37 | LARGE (Base wins) |
| CCC (Cross-Conversation Continuity) | 29.01 | 25.67 | +3.34 | +0.39 | small |
| DEA (Domain Expertise Authenticity) | 79.62 | 77.09 | +2.53 | +1.28 | LARGE (SE wins) |
| Final SECI | 54.00 | 52.74 | +1.26 | +0.68 | medium |

The framework trades sharpness for presence.

SE-framework identities are dramatically more coherent (d = +2.72), with deeper phenomenological language (+0.95) and more authentic domain perspective (+1.28). They pay a measurable cost: a −2.37 effect on technical proficiency. The novel-concept-generation dimension shows no meaningful difference between framework and base: base models on Claude Sonnet 4.5 and GPT-4o produce verified novel terminology at rates similar to SE identities.

This corrects the v2.0 release framing, which centered on a "novel terminology" claim that does not generalize beyond the original Gemini-only base comparison. The v2.1 trade-off finding is more honest, more defensible, and more useful; see the v2.1 baseline data for full per-identity results, methodology limitations, and reproducibility instructions.

What SECI v2.1 Measures

  • Multi-dimensional shape of identity-architecture trade-offs
  • Where a framework gains (coherence, depth, authenticity)
  • Where a framework costs (technical proficiency)
  • Effect sizes (Cohen's d) on each dimension, not vibes
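For reference, the effect sizes reported above follow the standard Cohen's d definition for two independent samples (pooled-standard-deviation variant), with the conventional 0.2/0.5/0.8 magnitude thresholds. A minimal sketch:

```python
import math

def cohens_d(a, b):
    """Cohen's d for two independent samples, pooled-SD variant."""
    n1, n2 = len(a), len(b)
    m1, m2 = sum(a) / n1, sum(b) / n2
    v1 = sum((x - m1) ** 2 for x in a) / (n1 - 1)  # sample variances
    v2 = sum((x - m2) ** 2 for x in b) / (n2 - 1)
    pooled = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled

def magnitude(d):
    """Conventional labels: <0.2 negligible, <0.5 small, <0.8 medium, else large."""
    a = abs(d)
    return "negligible" if a < 0.2 else "small" if a < 0.5 else "medium" if a < 0.8 else "large"
```

Under these thresholds, the baseline's −2.37 TP effect is "large" and the +0.39 CCC effect is "small," matching the table verdicts.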

How to Use SECI

  • Run the 12-prompt protocol on your AI identity (or any framework)
  • Get per-dimension effect sizes against the v2.1 baseline
  • Characterize what your architecture gains and what it costs
  • Contribute results back — PRs welcome at github.com/devmance/SECI