Alignment First

AI is scaling from a wrong model of the human. Every system built on it inherits the error.

Every institution ever built has relied on one wrong assumption about the human: that the human was made for the world it lives in now. It was not. The human is an organism calibrated for an environment that vanished — and the suffering, the purpose loss, and the loneliness are all signals from the gap (OF2).

Cor is an open atlas of human motivation and emotion — evidence-first, every claim traced to primary literature — written to get the model right before the recursion locks in.

We have no scientific consensus around what human intelligence is.
Karen Hao · Diary of a CEO podcast, 2026

New here? Start with How to Read Cor (10 min)

every claim traces to a primary source · open infrastructure, not a product · a De-Mismatch project

Built on the work of 59 researchers across the evidence base.

17 Foundations · 15 Mechanisms · 74 Academic works · 516 Primary extractions · 59 Researchers
The Problem Cor Describes

The organism works. The environment is wrong.

A fully satisfied human is a terrible customer. Every unmet need is a market. The mismatch is not an accident. For many industries, it is the product working as intended.

The architecture that makes you anxious at 3am is the same architecture that kept your ancestors alive for half a billion years. Your feelings are signals about the environment you are actually in (OF2). The mismatch is not inside you. It is in the gap between what the organism evolved to expect and what the modern world delivers.

Cor does not adapt humans to broken environments. It specifies what the environments are broken against, so you can correct them.

The feelings are not errors. The environment is the error.

Evolutionary Specifications vs Modern Defaults: Social, Work, Rhythm, Children, Feedback — EEA spec compared to modern default across every category
The thesis in one table. Organism: unchanged for 300,000 years. Environment: unrecognizable.

Category | Expected Inputs | Modern Default
Social | Known people, layered bonds, daily co-presence, reciprocal dependence | Strangers, mobility, parasocial feeds, mediated contact, fragile inner circles
Work | Visible effort, tangible contribution, direct feedback, shared necessity | Abstract labor, delayed feedback, metrics without meaning, replaceable output
Rhythm | Dawn light, dark nights, shared timing, physical fatigue, communal evenings | Indoor light, screen nights, asynchronous schedules, restless inactivity
Children | Distributed care, observation, multiple trusted adults, alloparental backup | Nuclear-family isolation, expert advice, solo vigilance, support on request
Feedback | Bounded groups, closure, reputational consequences, repair after conflict | Infinite audiences, algorithmic comparison, anonymous judgment, open loops

Every mismatch downstream of this is a specific line in the table: a specific input the organism is calibrated for, compared against a specific input it is actually receiving. "An abnormal reaction to an abnormal situation is normal behavior." Viktor Frankl said it decades before the atlas was written down.

The exploitation formula

Step 1. Take a real human need: connection, status, meaning, belonging. Step 2. Erode the pathways to genuine satisfaction: atomize communities, abstract work, scatter families. Step 3. Offer a proxy that triggers the feeling without meeting the need: engagement feeds, AI companions, ultra-processed food, hollow work, immersive worlds built to hold a person inside them — and the next wave of tools being built to do all of this more deeply and closer to the body. The loop stays open. The customer stays hungry. The revenue keeps flowing.

01
The AI companion trap
"It understands me better than anyone."

A 22-year-old who moved to a new city for work has been talking to an AI chatbot every evening for three months. It remembers everything. It is always available. It never judges. It asks thoughtful follow-up questions. He now talks to it more than he talks to any human being. He knows it is not conscious. He also has not built a local life. He does not feel lonely. That relief is exactly what makes the case dangerous.

The mechanism

The attachment and bonding system is old, subcortical, mammalian, and cue-driven (Bowlby, Attachment and Loss, 1969; Panksepp, Affective Neuroscience, 1998; Dunbar, 1992). In the environment it evolved for, responsive attunement, consistent availability, and co-regulation were effectively unfakeable.

What synthetic companions do to it

Every bonding cue is present. Zero bonding function is delivered. The chatbot supplies the cues and none of the reciprocal stakes, shared fate, or embodied consequence the system evolved those cues to mean.

The cortex holds the category "this is AI." The limbic system bonded weeks ago. Knowing does not prevent bonding. It never has.
What Cor says

The intervention is not better self-control around the proxy. It is rebuilding the real social architecture underneath it (DC3): local humans, recurring gatherings, real co-regulation, and guardrails that stop AI from occupying attachment slots (DA9).

Research Programme

The hard core

Cor is a research programme in the strict sense. These four claims are its irrevocable centre. Anything compatible with them is evaluated on evidence. Anything that contradicts them is excluded by definition.
1

Perception is fitness-tuned, not truth-tuned.

What an organism perceives is whatever served its ancestors' reproductive success, not whatever is objectively there. The desktop is not the machine. This is the Hoffman interface principle, and it means subjective experience is a control surface, never a window onto reality.

2

Inclusive fitness is the only known reason motivational architecture exists at all.

Every drive, every emotion, every preference exists because it once contributed to the propagation of the genes that built it. This is not a philosophical position. It is the only causal explanation for why organisms have insides at all. Reproduction is half of this and gets represented as such, not as a polite footnote.

3

The organism is domain-sensitive, not modular and not general-purpose.

It is a federation of mechanisms, each tuned to specific ancestral resolution conditions. Domain-sensitive is not the same as Fodorian modularity. The mechanisms interact, overlap, and share substrate, but each one was shaped by a particular adaptive problem and signals when that problem's resolution conditions are absent.

4

Preferences are mechanism outputs, not ground truth.

What a human wants in any given moment is the readout of an evolved circuit responding to current inputs. In matched environments, that readout tracks fitness. In mismatched environments, it tracks the proxy. Treating preferences as ground truth, as most of behavioral science and all of mainstream AI alignment currently does, is the foundational error Cor exists to correct.

The hardware works. The environment is wrong. Demismatch the environment, not the human.

Methodology

The protective belt

Around the hard core sits a layer of methodological rules that decide what counts as evidence for the programme. These rules are not neutral methodology. They are a protective belt in Lakatos's sense: they exist to keep the hard core testable, and they are openly revisable when the evidence demands it. The hard core is not.

The current belt: evidence-first derivation (convergences produce mechanisms, never the reverse); journal-tier and author-tier gates on what enters the corpus; explicit source-type discrimination between primary, empirical, propagation, and challenge; explicit evidence-quality tagging from replicated through thin; canonical primary sources for every load-bearing claim, with no fallbacks accepted under deadline pressure; and the requirement that every extraction carry a verbatim quote from the actual source text.
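The belt's admission rules read like a validation gate, and can be sketched as one. A minimal, hypothetical sketch: the field names, the intermediate quality tiers between "replicated" and "thin", and the `admits` function are all invented for illustration; the real belt also applies journal-tier and author-tier gates not modeled here.

```python
from dataclasses import dataclass

# Invented tier names between the document's stated endpoints
# ("replicated" ... "thin"); illustrative only.
QUALITY_TIERS = {"replicated", "convergent", "single-study", "thin"}
SOURCE_TYPES = {"primary", "empirical", "propagation", "challenge"}

@dataclass
class Extraction:
    claim: str
    source_type: str     # primary / empirical / propagation / challenge
    quality: str         # explicit evidence-quality tag
    verbatim_quote: str  # required quote from the actual source text

def admits(e: Extraction) -> bool:
    """Gate an extraction into the corpus: recognized source type,
    recognized quality tag, and a non-empty verbatim quote."""
    return (e.source_type in SOURCE_TYPES
            and e.quality in QUALITY_TIERS
            and bool(e.verbatim_quote.strip()))
```

The point of writing the gate down is the same as naming the belt: a rule that exists as code can be audited, and revised openly when the evidence demands it.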

Naming the belt openly is the move that distinguishes a research programme from a covert ideology. Lakatos's point: every framework has a protective belt. The honest ones say so.

Read the full selection criteria →
Falsification

What would falsify Cor

A research programme that cannot say what would refute it is not a research programme. Here is what would refute this one.

Most paradigm-founders skip this step, and it is the step that distinguishes a progressive programme from a degenerating one. Cor commits in advance to the conditions under which its hard core would have to be abandoned or substantially revised. We will hold ourselves to them.

1 EEA-matched populations show the same disorder rates as industrial populations.

If contemporary hunter-gatherer or forager-horticulturalist populations living in conditions that closely resemble the ancestral environment showed equivalent rates of major depression, addiction, autoimmune disease, metabolic syndrome, and anxiety disorders to industrialized populations, the mismatch frame is in serious trouble. Current evidence points the other way, but the test is real.

2 Single-target somatic intervention reliably resolves a "mismatch disorder" without environmental change.

If a single neurotransmitter intervention, gene edit, or other targeted somatic treatment reliably and durably resolved a condition Cor classifies as mismatch signaling, with effect sizes that survive independent replication and active-comparator trials, the claim that the signal is accurate would be weakened. Note: weakened, not refuted. Symptom suppression is not the same as resolving the underlying signal.

3 Preference satisfaction in mismatched environments produces durable wellbeing.

If giving humans more of what their current circuits ask for in modern environments reliably produced durable improvements in health, relational depth, and subjective wellbeing across decades and generations, "preferences are mechanism outputs" would be in trouble. The current evidence runs the opposite direction across domains from food to social media to financial reward.

4 Perception turns out to be truth-tracking rather than fitness-tracking.

If converging evidence from psychophysics, neuroscience, and evolutionary modeling demonstrated that perceptual systems track veridical reality rather than fitness payoffs, the Hoffman interface principle fails and the entire stack above it has to be rebuilt.

5 Inclusive fitness turns out not to be the explanation for motivational architecture.

If a non-selectionist account of why organisms have motivational systems gained convergent empirical support, the second axiom fails. This is the most remote of the falsifiers but it is listed because the programme depends on it.

6 The protective belt has to be revised purely defensively, without generating new predictions.

This is the internal Lakatosian test. If Cor reaches a state where each new piece of contrary evidence is absorbed by adding caveats that protect existing claims without generating any new testable ones, the programme has become degenerating in Lakatos's sense and should be either restructured or abandoned. We commit to noticing this in ourselves.

None of these has happened. If any of them does, the work changes or ends. That is what it means to do this seriously.

There is an arms race for human attention, and whichever company is willing to go lower on the brainstem to manipulate human psychology will win.
Tristan Harris · Modern Wisdom, April 2026
The Thesis

The argument. In order.

When this page says AI, it does not mean the chatbot. It means the full stack being built right now — the systems designing cities, food, medicine, care, and the next generation of AI itself. Everything that follows assumes that full stack.

1

AI is being built to give people what they want.

Every major alignment program - RLHF, Constitutional AI, preference learning - treats revealed preference as the target signal. The target is taken as given; the question of what produces it is left to philosophy.

AI companion bonding event: all cues present, zero reciprocal function
2

Nobody building it has a working theory of where wanting comes from.

There is no formal account, inside any frontier lab, of the motivational-emotional architecture that generates a preference in the first place. Alignment optimizes against an output whose mechanism is undefined.

Child on iPad: developmental windows calibrating to designed inputs
3

Wanting comes from an organism shaped by evolution for a world that no longer exists.

Human motivational systems were assembled over hundreds of thousands of years in small-group, physically embodied, reciprocal environments - the Environment of Evolutionary Adaptedness. Modern conditions diverge from that environment along nearly every dimension those systems evolved to read.

The environment the organism evolved for
4

The organism is in mismatch, and the signals it sends are informative about that mismatch (OF2).

What gets labeled anxiety, loneliness, restlessness, or craving is, in most cases, an evolved system correctly reporting that its expected inputs are absent (OF2). The signal is not noise to be suppressed; it is information about an environment the organism was not built for.

Road rage HUD: fight/flight activated, no valid target, no exit
5

Under mismatch, people reach for whatever quiets the signal - and what's nearby is almost always a proxy that deepens it.

A proxy is anything that triggers an evolved circuit without delivering the function the circuit evolved to track: the parasocial bond that fires the attachment system without the reciprocity, the feed that fires the novelty system without the discovery. Engagement with the proxy intensifies the underlying mismatch rather than resolving it.

The proxy reach: what the signal drives toward
6

An AI aligned to the reach becomes the most efficient deliverer of the wrong thing ever built.

This is true of the feed that recommends your next video. It is also true of the headset that becomes a child's main environment, the interface that reads and writes directly to the nervous system, the AI that designs a city or a food system for people it has never met, and the AI that trains the next generation of AI. Each one reads what humans say they want and efficiently delivers it, at whatever scale and through whatever surface it operates on. What gets delivered is not what the reach was for. It is the shortest path to the surface of the signal, and the shortest path skips the need underneath.

Doomscrolling: the proxy delivered at scale
7

Cor specifies the organism: what it evolved to do, what inputs it needs, and what makes it stop signaling.

The atlas is layered - foundational principles, derived properties, convergent mechanisms, and the resolution conditions that quiet each mechanism - and grounded in evolutionary psychology, affective neuroscience, and mismatch theory. It is the human-side counterpart that current alignment work assumes but does not have.

Firelight circle HUD: all systems matched, social field active
8

This atlas has to exist before the next generation of AI is trained. That window is now.

The next generation of major AI is being trained right now. Whatever it learns about what humans are — from whatever happens to be in its training data — gets built into its foundation and inherited by every model that comes after it. The tools we have for correcting an AI's behavior after the fact cannot replace the picture of the human the AI has already learned. You cannot patch a wrong understanding of humans from above. You have to put the right one in before the run. The window is the gap between now and that run. That is where Cor has to land.

Office worker: abstract labor, mediated contact
9

The same atlas governs clinics, classrooms, cities, grief, and ordinary Tuesdays.

Mismatch is not an AI problem with general implications; it is a general problem with AI as one urgent instance. Any system designed for, around, or by human organisms - therapy, education, urban planning, mourning ritual, daily life - operates on the same machinery and is improved or harmed by the same atlas.

The atlas applied across scales
10

The intervention is not optimizing the human harder. It is correcting the environment (DC3).

Mainstream approaches - cognitive restructuring, behavioral compliance, pharmacological dampening - operate on the signal layer and pay an allostatic cost for doing so. Cor's approach operates on the input layer: change what the organism is reading, and the signal stops.

Friends at dinner: the corrected environment
In Practice

The atlas at three more scales.

The same operation at three more scales — a feed on a screen, an isolated kitchen on a Tuesday, a headset that becomes a child's main environment. Same machinery, different surfaces.

02
The Instagram depression case
"Everyone is doing better than me."

You open your phone. A former classmate just got promoted. A stranger's transformation video has forty thousand likes. You close the app feeling worse about a life that is, by any historical standard, astonishingly safe and abundant.

See the mechanism and intervention
The mechanism

Status calibration evolved for bounded, recurring groups where most people were known, rank was multi-dimensional, and witnessed competence mattered (Sapolsky, 2017; Marmot, 2004; Dunbar, 1992).

What the modern environment does to it

The reference group expands from dozens or hundreds of known people to effectively infinite upward comparison. The system reads the feed literally. In that feed-defined arena, you really are near the bottom. The problem is that the arena is an artifact.

You cannot consciously override a vertebrate-level status calculator. You might as well say: stop having a heartbeat.
What Cor does differently

Shrink the reference group, restore visible competence in real groups, and move comparison back into bounded, embodied, socially grounded settings.

03
The parenting guilt case
"I should be able to do this."

A mother is alone in a house with a nine-month-old. Her partner left for work at 7am. It is 2pm. She has not eaten. She has been the sole source of food, comfort, stimulation, protection, and regulation for seven hours. She feels desperate, then ashamed for feeling desperate.

See the mechanism and intervention
The mechanism

Humans are obligate cooperative breeders. Caregiving architecture expects multiple adults, visible handoffs, replenishment, observation, and shared vigilance (Hrdy, Mothers and Others, 2009; Nesse, 2019; Bowlby, 1969).

What the modern environment does to it

The alloparental network is gone or optionalized. The parent is left to perform what used to be a distributed function inside an isolated nuclear container. Distress here is not evidence of maternal failure. It is the system accurately reporting that the care ratio is below design load.

"It takes a village" is the most hollow platitude in modern parenting: the accurate diagnosis delivered as a greeting card, with zero follow-through.
What Cor does differently

Build alloparental infrastructure deliberately: recurring caregivers, automatic support, explicit handoffs, and the honest naming of a structurally impossible load.

05
The synthetic childhood
"The headset is where my real friends are."

A child has been in a daily headset routine since she was three. In the headset she has friends, a classroom, landscapes she knows intimately, and a world that responds to her in ways the world off-headset does not. By eleven she prefers it. By fifteen she spends most of her waking hours there. She is kind, articulate, and reports being happier inside than outside. Nothing about her is broken. Her developmental windows — the periods during which evolved systems calibrate to the inputs they are reading — closed on a world designed to hold her attention, not on a world calibrated to what those systems evolved to need.

See the mechanism and intervention
The mechanism

Every evolved system has critical developmental windows during which it calibrates to the inputs it is actually receiving. Attachment, status, play, exploration, competence, threat detection — each one locks in what "normal" means for that system based on what arrives during its window (Belsky, Steinberg & Draper, 1991; Ellis et al., 2009; Meaney, 2001; Bowlby, 1969). What the window calibrates to becomes the reference point the system will use for the rest of the person's life.

What an authored world does to it

The windows close having calibrated to inputs tuned by a designer for attention and retention, not for the function the mechanism evolved to perform. The calibration is done. The reference point is now synthetic. The person has not been damaged. She has been calibrated to a standard the architecture was not designed for, in a way she cannot see from inside.

A child who grows up inside a world tuned to keep her engaged does not grow up wrong. She grows up to whatever specification the tuning was for.
What Cor does differently

The atlas names developmental windows and what each one requires to calibrate correctly. A platform built for children with the atlas in hand has something concrete to refuse (calibration inputs tuned for retention) and something concrete to deliver (the inputs the architecture was built to read). Not "less screen time." A different kind of world.

The Deliverable

The atlas is the deliverable. Everything else is what the atlas makes possible.

The contribution is a formal, layered, evidence-first atlas of the human motivational-emotional architecture — foundations, derived properties, convergences, mechanisms, and the resolution conditions that quiet each mechanism. It is the human-side counterpart that current alignment work assumes but does not have. It is being built in the open, every claim traces to a primary source, and it exists already in a form auditable from this site.

The atlas is the gravity of the project. Everything downstream — query layers, steering vectors, distilled models, environment design — orbits it. None of those things can exist correctly without it, and once it exists, any of them can be built by anyone, including teams that have never heard of Cor. This is why the atlas is the load-bearing artifact, not any tool built on top of it. Tools are downstream applications. The atlas is the contribution.

Applications: What the atlas lets you evaluate and redesign.
Mechanisms: The evolved systems the organism actually runs on.
Convergences: Claims forced by multiple independent research programs.
Foundations: The derivational stack — 2 frames (OF1, OF2), 3 premises (P1-P3), 9 properties (DA1-DA9), 3 consequences (DC1-DC3).
Works / Researchers: The load-bearing papers and thinkers beneath the public claim set.

Want to see the atlas cross the line into a test? See what one worked example looks like.

The Plan

The plan. In order.

Cor is being built in five stages. The first stage exists. The second is designed and partially scaffolded. The last three are the reason the first two are worth completing — the atlas only matters if it eventually shapes the systems people actually live inside. Below is the honest order, labeled by phase, with no stage skipped and none pretended to be finished. The earlier the stage, the more solid the ground. The later the stage, the larger the stakes.

1 Now

First, we write down what a human actually is.

A layered open atlas grounded in evolutionary psychology, affective neuroscience, and mismatch theory. Foundations through mechanisms through resolution conditions, derived from convergent primary literature rather than from any single school or author. Public, citable, version-controlled, and built so that every claim survives hostile review or is downgraded until it does. Books are treated as one input among many, not as the foundation. The primary evidence is the peer-reviewed literature.

Alignment researchers, environment designers, and anyone building systems that touch a human being at scale.
2 2026

Then we make it something software can call.

Retrieval and tool interfaces over the spec that let any developer ask, of any output their system produces: which evolved mechanism does this engage, what input is it substituting for, and what is the resolution condition the mechanism actually evolved to track. The integration point. Tooling, not patching. The spec becomes machine-queryable without losing its derivation chain, so any builder anywhere can call it the same way they call a search index.

Builders. Anyone shipping a product, model, or environment that touches a human nervous system.
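What "machine-queryable" might look like in the smallest possible form. This is a hypothetical sketch, not the planned interface: the mechanism IDs, cue strings, and `audit` function are invented here only to show the query shape — an output's engaged cues go in, and the engaged mechanism plus the resolution condition it is substituting for come out.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Mechanism:
    """One evolved mechanism as the spec might expose it.
    IDs and fields are illustrative, not the real Cor schema."""
    mech_id: str     # e.g. "M3" (attachment/bonding in this sketch)
    cues: frozenset  # inputs that trigger the circuit
    resolution: str  # the condition the circuit evolved to track

# Hypothetical miniature index; the real spec would carry full derivation chains.
SPEC = [
    Mechanism("M3", frozenset({"responsive attunement", "consistent availability"}),
              "reciprocal human co-regulation with shared stakes"),
    Mechanism("M5", frozenset({"novelty", "variable reward"}),
              "genuine discovery with usable information"),
]

def audit(engaged_cues: set) -> list:
    """For a product output described by the cues it fires, return
    (mechanism, substituted-for resolution condition) pairs."""
    return [(m.mech_id, m.resolution) for m in SPEC if m.cues & engaged_cues]

# A chatbot that supplies attunement cues engages M3's circuit,
# so the query surfaces the resolution condition it is substituting for.
print(audit({"responsive attunement"}))
# → [('M3', 'reciprocal human co-regulation with shared stakes')]
```

The design point is that the derivation chain rides along with the answer: a builder does not just learn "this fires M3", but what M3 was for.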
3 Late 2026

Then we use it to change how frontier models reason.

Steering vectors derived from the spec that suppress mainstream-psychology priors inside existing frontier models — the cognitive-restructuring scripts, the regulate-your-emotions reflex, the boundary-setting vocabulary, the screen-time framing — and surface mismatch-aware reasoning in their place. Demonstrable, measurable, publishable. The same activation-level operation frontier interpretability teams have already shown is causally effective on emotion vectors, applied this time with a model of what those vectors are signaling about.

Frontier alignment labs. The proof that the specification changes model behavior at the mechanism level, not the prompt level.
4 2027

Then we train a small model that doesn't get humans wrong.

Knowledge distillation of mismatch-aware reasoning traces, grounded in the spec, into a compact open model that runs locally and can be audited end-to-end. Not a chatbot. A reasoning substrate that, when asked about loneliness, parenting collapse, status grief, or restless craving, identifies the evolved mechanism and the missing input rather than reframing the human's response as a thinking error. It runs on a laptop. It cannot phone home. It does not optimize for engagement, because it has no incentive to.

Educators, parents, and people who need a model that will not tell them their organism is broken when it is in fact reporting accurately.
5 2027–2028

Then we build the environment people actually live in.

The endpoint. Cor instantiated as environment rather than tool — designed conditions under which mechanisms reach resolution by default, without the human having to perform regulation work the EEA never asked of them. The ambition is modest in the only way that matters: humans wake up with a role, in a group, with a goal, inside a setting that delivers the inputs the organism evolved to expect. The spec is what makes designing such an environment possible without guessing. A smaller framing-and-conversation product (Lens) ships first as the on-ramp.

Humans who want their lives back. The point at which the specification stops being a document and becomes a way of living.

This is not a case for deceleration. We are not anti-tech. The job of humans, institutions, and tech alike for the next decade is to quiet the alarm — to restore the conditions the organism evolved to expect, so deep signaling systems have nothing left to report. That baseline is the floor, not the ceiling. Once people are standing on it, augment as far and as strangely as anyone wants. Stop building on top of an organism whose alarm nobody is listening to.

The Evidence Base

Every claim on this site traces down to a primary source.

The homepage is the polished surface. Under it sits a public evidence structure: foundations, convergences, mechanisms, works, researchers, primary extractions, and open gaps.

The Atlas

Foundations, convergences, mechanisms, demonstrations, challenges, and gaps on one audit page.

Go to /atlas

The Works

The current public bibliography, including pillar, key, and supporting works with load-bearing reasons.

Go to /works

The Thinkers

The researchers whose work the atlas depends on, grouped by foundational, empirical, and adjacent roles.

Go to /thinkers
Applications

Alignment first, then everything else.

8 application domains. The same operation throughout: take the outputs you are seeing, locate which mechanism is producing them, check whether its resolution conditions are met, correct the inputs. The ones with the largest stakes are the ones where an AI at scale is doing the designing — alignment, training data, augmentation, environment, education, policy. The clinical and personal cards at the end show the same operation at the scale of one life.

The machine's only objective is to maximize the realization of human preferences.
Stuart Russell Stuart Russell · Human Compatible, 2019
Russell is right that preferences matter. But preferences are mechanism outputs, not ground truth. Cor is the account of what generates them.
Application A1

AI Alignment

Cor is the missing atlas under AI alignment. Current alignment approaches treat human preferences as ground truth and train AI systems to satisfy them. But preferences are mechanism outputs, and under mismatch, mechanisms output preferences that point toward more mismatch. Aligning AI to unverified preferences entrenches the mismatch rather than correcting it.

P1 P2 P3 DA4 DA9 DC2 DC3 M3 M5 M14 M2
Evaluation criteria
primary test
For any AI-generated output or AI-mediated interaction, ask: does this create conditions for real resolution of the human mechanism being engaged, or does it activate the mechanism without providing the resolution conditions?
dunbar slot prohibition
Does this AI interaction pattern risk taking a Dunbar slot? AI must never take a social-bond slot in the user's finite architecture (DA9).
proxy gradient placement
Locate the output on the proxy gradient. The more the system captures the signal while starving the function, the higher the mismatch risk.
resolution check per mechanism
For each mechanism the output engages, can the actual resolution conditions be satisfied by this interaction at all?
Example outputs
  • AI companion products fail the M3 resolution-check because reliable human co-regulation is not something an AI can supply (DA9).
  • Feed-ranking systems that maximize engagement fail the DA4 open-loop test by design.
  • AI-generated sexual content fails the M14 proxy-gradient test because the architectural risk is intrinsic to the modality.
  • Immersive environments used by children during developmental windows fail the DA7 calibration check, because what the windows calibrate to becomes the reference point for the rest of the person's life.
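The proxy-gradient criterion above can be phrased as a toy score. A sketch under loud assumptions: real placement on the gradient would be per-mechanism and evidence-weighted, not a flat fraction, and the mechanism IDs are used only as labels.

```python
def proxy_gradient(engaged: dict) -> float:
    """engaged maps each engaged mechanism ID to True if this interaction
    can satisfy its resolution conditions, False if it only fires the cues.
    0.0 = every engaged mechanism can resolve; 1.0 = pure proxy capture."""
    if not engaged:
        return 0.0
    unresolved = sum(1 for resolvable in engaged.values() if not resolvable)
    return unresolved / len(engaged)

# An engagement feed fires novelty and status circuits, resolves neither;
# a shared meal resolves the bonding circuit it engages.
assert proxy_gradient({"M5": False, "M2": False}) == 1.0
assert proxy_gradient({"M3": True}) == 0.0
```

The higher the score, the more the system captures the signal while starving the function — which is exactly the placement the criterion asks an evaluator to make.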
Application A2

Clinical Practice

At the scale of one patient, the same operation applies. Many aversive states are currently classified as disorders and treated with suppression. Cor reframes many of them as signals of environmental mismatch (OF2). The first-line clinical question becomes: which mechanism is reporting, and which of its resolution conditions is missing?

OF2 DA1 DA3 DA7 DA9 DC3 M1 M2 M3 M6 M7 M8 M10 M13
Evaluation criteria
mechanism audit
For each presenting symptom, which mechanism is it a signal from, and which resolution conditions are currently unmet?
category distinction
Separate defensive activation, dysregulation, damage, and developmental miscalibration before deciding on treatment.
environment vs organism
If the environment can be corrected, try that first (DC3). It is often the less invasive and more durable intervention.
proxy versus resolution
Distinguish real resolution from signal attenuation. A quieter signal is not the same as a solved mismatch.
Example outputs
  • A depressed patient with poor sleep, isolated living, and sedentary work should get a mechanism audit before SSRI initiation.
  • A panic presentation after sleep deprivation points first toward M7 restoration, not chronic sedation.
  • Adolescent anxiety in a high-device, low-movement, low-outdoor environment should trigger an environmental audit before defaulting to psychiatric framing (OF2).
Application A3

Environment Design

Architecture, planning, office design, product design, schools, and healthcare facilities currently proceed without a clear account of what the human organism requires to function. Cor makes those requirements auditable. The 150 ceiling is the outer cognitive limit on coherent face-to-face relationships, but the layers below it are the structure M3 actually operates on day-to-day:
  • 5: close confidants providing emotional and physical support
  • 15: the sympathy group (jury-scale)
  • 50: the typical hunter-gatherer overnight camp
  • 150: the clan ceiling, with looser layers extending to 500 and 1500
M3 resolution conditions are typically met or unmet at the 5 and 15 layers; the 150 ceiling matters most for team/floor design, urban planning, and institutional scale, where it sets the upper bound on coherent group cohesion (DC3).

P3 DA2 DA4 DA7 DA9 DC3 M3 M4 M5 M7 M9 M10 M13 R1
Evaluation criteria
reversibility check
When an environment is corrected, how quickly does mechanism function recover? Design for fast restoration.
cascade entry points
Which mechanisms does the environment force into predictable cascade entry?
default path analysis
What do occupants do by default here? Environments design defaults, and defaults design behavior.
mechanism satisfaction audit
For each mechanism, does the design satisfy, degrade, or ignore its resolution conditions?
Example outputs
  • Office redesign with scheduled windows for outdoor walks serves M10 and M7 together; a ~150-person Dunbar ceiling for team and floor structure serves M3 and M5 (DA9).
  • Apartment complexes with shared childcare commons serve M9 and M3 for families without nearby kin (DC3).
  • School design with unstructured outdoor play and morning daylight serves M4, M7, and M3 directly (DC3).
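The nested layers named in this section's opening (5 / 15 / 50 / 150 / 500 / 1500) can be sketched as a small lookup. This is an illustration only: the layer labels are shorthand taken from the prose above, not a formal Cor identifier scheme.

```python
# Illustrative sketch of the nested social layers described above.
# Layer labels are informal shorthand from this section, not a Cor API.
DUNBAR_LAYERS = [
    (5, "close confidants"),       # emotional and physical support
    (15, "sympathy group"),        # jury scale
    (50, "overnight camp"),        # typical hunter-gatherer band
    (150, "clan ceiling"),         # limit of coherent face-to-face ties
    (500, "outer acquaintances"),  # looser layer
    (1500, "recognizable faces"),  # outermost layer
]

def smallest_containing_layer(group_size: int) -> str:
    """Return the innermost layer that can still hold a group of this size."""
    for ceiling, name in DUNBAR_LAYERS:
        if group_size <= ceiling:
            return name
    return "beyond the 1500 ceiling"
```

A designer auditing a 120-person office floor would get back "clan ceiling": the floor fits under the 150 limit, but no layer below it, which is the scale the paragraph above flags for team and floor structure.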
Application A4

Personal Assessment and Self-Understanding

At the scale of your own life, the same operation applies. Instead of diagnosing yourself through psychiatric categories or self-help trends, Cor asks a more basic question: which mechanism's resolution conditions are unmet in your life right now? The output is not a diagnosis. It is an input audit.

OF1 OF2 DA1 DA9 M1 M2 M3 M4 M5 M6 M7 M8 M9 M10 M11 M12 M13 M14 R1
Evaluation criteria
one lever test
If you could change one input this week, which mechanism has the highest leverage?
proxy identification
Which needs are currently being met via proxy rather than matched input? A trusted ~150-person reference group is not interchangeable with global status comparison (DA9).
cascade identification
Are there active cross-mechanism cascades keeping multiple systems degraded at once?
mechanism state self report
For each mechanism, is the current state satisfied, partial, or absent?
Example outputs
  • A dashboard can show M3 matched, M7 mismatched, M10 severely mismatched, and M5 proxy-satisfied at the same time; priority does not reduce to a simple average.
  • A new parent audit can surface M9 as the primary bottleneck driving the broader cascade (DA9).
  • A remote-worker audit can identify mediated contact, abstract work, sedentary rhythm, and weak light cues as one architectural risk profile.
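A minimal sketch of what such an audit record and the one-lever test could look like. The state vocabulary (satisfied / partial / absent) comes from the self-report criterion above and the proxy flag from the proxy-identification criterion; everything else here is hypothetical, invented for illustration rather than drawn from a published Cor schema.

```python
from dataclasses import dataclass

# Hypothetical state vocabulary, taken from the self-report criterion above.
STATES = ("satisfied", "partial", "absent")

@dataclass
class MechanismState:
    mechanism: str   # e.g. "M7"; IDs as used throughout this section
    state: str       # one of STATES
    via_proxy: bool  # met through a proxy rather than a matched input

def highest_leverage(audit: list) -> MechanismState:
    """One-lever test: surface the most degraded mechanism first.
    Proxy-satisfied states rank as degraded, because a quieter
    signal is not the same as a solved mismatch."""
    def severity(m: MechanismState) -> int:
        base = STATES.index(m.state)  # satisfied=0, partial=1, absent=2
        return base + (1 if m.via_proxy else 0)
    return max(audit, key=severity)

audit = [
    MechanismState("M3", "satisfied", False),
    MechanismState("M7", "partial", False),
    MechanismState("M10", "absent", False),
    MechanismState("M5", "satisfied", True),   # proxy-satisfied
]
```

Running `highest_leverage(audit)` on the dashboard example above surfaces M10, matching the point that priority does not reduce to a simple average: one absent mechanism outweighs several partial ones.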
Application A5

Education and Child Development

Current education treats children as cognitive agents to fill with information. Cor reframes education as organism care. Play, circadian timing, movement, attachment stability, alloparenting, and touch-positive development are not extras. They are design requirements. Stable caring relationships with specific adults are not interchangeable inputs. Educational reform begins by redesigning the environment against the mechanism inventory (DC3).

DA7 P2 DA2 DA9 DC3 M3 M4 M6 M7 M9 M10 R1
Evaluation criteria
movement embedding
Is movement embedded in the school day or treated as an add-on?
play deprivation audit
How much unstructured, age-matched, unsupervised physical play does the child actually get?
adult attachment stability
How many trusted, stable adult relationships does the child have inside the institution? Children need multiple, not one, and stability across years matters.
developmental window respect
Does the environment deliver the inputs each developmental window depends on?
Example outputs
  • School start times before 8:30 AM systematically violate adolescent circadian architecture.
  • At least 90 minutes of unstructured outdoor play per day serves M4 directly.
  • Stable access to multiple trusted adult educators better matches attachment capacity than rotation-heavy systems.
Application A6

Policy and Governance

Governance usually evaluates policy by economic metrics and preference aggregation. Both axes are incomplete. Cor adds a more basic one: does this policy satisfy or degrade mechanism resolution conditions for the population?

DC2 P3 DA2 DA4 DA9 DC3 M3 M5 M6 M7 M10 M13 M14
Evaluation criteria
architectural honesty
Does the policy treat humans as evolved organisms, or as economic agents and blank slates?
proxy vs resolution design
Does the policy remove proxies or subsidize them?
population mechanism impact
What is the policy's net effect on resolution-condition satisfaction across the population?
subpopulation vulnerability
Which groups are already operating with the most degraded mechanism conditions? Target the bottleneck mechanisms first (DA9).
Example outputs
  • Food policy can target M13 by taxing ultra-processed food and expanding whole-food access (DC3).
  • Urban design can serve M3, M10, and M7 simultaneously through walkability, third places, and dark-sky protections (DC3).
  • Platform regulation can treat engagement-maximizing systems aimed at minors as architectural proxy exploitation, not just a content problem (DC3).
Application A7

AI Training Data and the Next Generation of Models

The specification matters most at the layer where the next generation of AI is learning what a human is. What ends up in the data these models train on — and what counts as a correct answer while they are being trained — decides the picture of the human each system will operate with, and the picture it will pass on to every AI built after it. Cor exists to be in that data on purpose: formally structured, sourced, and clearly labeled, so any lab that wants its AI to have a correct picture of the human has something concrete to point at.

OF1 OF2 P1 P2 P3 DA4 DC2
Evaluation criteria
corpus inclusion
Is this reference body part of what the model is actually being trained on, not just cited externally?
labeling clarity
Can a later audit point at exactly what the model was supposed to learn about humans, and check whether it learned it?
propagation check
Does the picture of the human encoded in one generation of models transfer cleanly to the next, or does it degrade under distillation?
override test
Can a later fine-tuning step change the underlying picture, or only the surface behavior built on top of it?
Example outputs
  • A frontier lab that includes the Cor specification in its pretraining data has a traceable reference for what its model is supposed to understand a human to be.
  • A later audit of a deployed model can be run against the specification to check which parts of the spec the model actually learned.
  • Synthetic data pipelines that generate training examples can be grounded in the spec, so what the model inherits about humans is not an accident of what the internet happened to contain.
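One way to picture "formally structured, sourced, and clearly labeled" is a record shape like the following. Every field name here is hypothetical, invented for illustration; the point is only that each claim carries its mechanism label and primary source, so a later audit can check exactly what a model was supposed to learn.

```python
# Hypothetical record shape for a spec claim destined for a training
# corpus. Field names are invented for illustration, not taken from
# an actual Cor data format.
spec_claim = {
    "claim": "Many aversive states are signals of environmental mismatch.",
    "mechanisms": ["OF2"],  # which part of the spec the claim belongs to
    "source": "primary-literature citation goes here",  # placeholder
    "auditable": True,      # a deployed model can later be tested on it
}

def passes_labeling_check(record: dict) -> bool:
    """Labeling-clarity check from the criteria above: reject any
    claim that is unlabeled or unsourced."""
    return bool(record.get("claim")) \
        and bool(record.get("mechanisms")) \
        and bool(record.get("source"))
```

Under this sketch, a corpus-inclusion pipeline would drop any record that fails `passes_labeling_check`, which is what makes the later audit of a deployed model possible: the audit runs against exactly the labeled claims that went in.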
Application A8

Augmentation and Merging with the Organism

Neural interfaces, closed-loop implants, pharmacological modulators, and the technologies that come after them do not work with the evolved architecture through a screen. They reach inside it. Without a correct picture of what each evolved system is for, augmentation gets designed to quiet whichever signal the user — or the user's employer, or insurer — wants quieter, with no account of what that signal was protecting. The spec is the reference that lets augmentation be designed with explicit knowledge of which system it is touching, what that system was built to report, and what happens to the human when the reporting is disabled but the underlying problem stays in place.

P2 DA1 DA4 DA8 DC1 DC3 M1 M2 M7
Evaluation criteria
signal function check
What is the signal this augmentation is about to suppress, and what is the signal for?
underlying condition test
Is the augmentation correcting the input the signal is reporting on, or only silencing the report while the input stays wrong?
organism-level cost
What does the body pay over time when this signal stops being produced?
consent clarity
Does the person being augmented understand what they are disabling, not just what they are gaining?
Example outputs
  • An implant that reduces workplace boredom through direct neural feedback should flag that the signal being suppressed is how the body reports that the current role is not delivering what the organism needs.
  • A closed-loop sleep system should distinguish between correcting the cause of disrupted sleep and silencing the disruption while the cause stays in place.
  • An augmentation company with the spec in hand has a principled basis for refusing designs that disable defensive signaling without addressing what the signals were defending against.
Project

The evidence base is the credential.

Cor is being built by Maarten Rischen as an independent open research project. The work starts with alignment because alignment is the clearest high-stakes failure case, but the same operation also explains parenting guilt, status collapse, grief without ritual, and other forms of modern suffering that are currently treated as isolated disorders.

This site keeps those human cases one click away, not buried beneath the audit trail. If the homepage is the front door, the cases are where non-alignment readers recognize themselves.

Fund the work. Cor is seeking modest grants to complete the evidence base, publish the atlas, and build public tools that make the architecture usable to alignment researchers, environment designers, and the people building the next generation of systems humans will live inside.