Evidence completion, editorial cleanup, public snapshot maintenance, and a fully coherent public release of the atlas.
Fund the work.
Cor is seeking modest grants to complete the atlas, publish it as a public reference, and build the operationalization layer that turns it into something AI labs and environment designers can use directly.
Who the atlas is for, and how they use it now
AI alignment researchers use the atlas as a reference for what current optimization targets — engagement, user preference, helpfulness scores — are missing about the human they are trying to align to. The mechanism layer gives candidate hazard models (attachment hijack, wanting–liking dissociation, allostatic accumulation, developmental calibration) that current eval suites do not directly test for.
Environment, product, and policy designers use the application rubrics to ask whether a specific design serves the architecture or exploits it. The rubrics are not yet operational evals — they are structured questions, with citations, that anchor a design conversation in the same reference instead of competing intuitions.
Clinicians and clinical researchers working on the post-DSM transition use the atlas as a structured language for the critique they have already been making, and as a bridge to evolutionary, developmental, and stress-physiology literatures that DSM-trained practice does not natively integrate.
The current phase is the atlas itself. The next phase, beginning with one mechanism worked through to operational eval, is the specification. Funding the project at this stage funds the path from atlas to specification — which is the path from a reference document to something an AI lab can run against a deployed model.
What support buys
The immediate goal is a publishable atlas plus a public site that stays readable on the front end and auditable underneath. The second goal is turning that atlas into usable artifacts: case libraries, measurement prototypes, and evaluation patterns that other people can actually work with.
A focused research year: evidence expansion, challenge-layer strengthening, peer feedback, and public case + audit tooling.
Full program support: completion, outside review, measurement-protocol prototyping, and alignment-facing evaluation artifacts.
Who this fits
If you fund open infrastructure for AI alignment, AI safety, human empowerment, evidence-based intervention design, or independent scientific atlas work, and the operational layer being built on top of it, this project is built for that conversation.