Humans are not just minds
We are embodied organisms that need things like sleep, movement, daylight, safety, belonging, purpose, play, and real social connection.
Cor is a public research project building a structured map of human needs, motivation, and distress. Its starting point is simple: if you want to design tools, institutions, environments, or AI systems that genuinely help people, you need a better model of what people actually are.
Cor argues that many common forms of modern distress make more sense when you see humans as evolved organisms living in environments that often mismatch their needs. Instead of starting with categories like user, consumer, patient, or worker, Cor starts one level deeper: what systems is a human organism running, what inputs do those systems expect, and what happens when those inputs are missing or replaced by substitutes?
Loneliness, anxiety, grief, burnout, shame, and restless craving are not always evidence of a broken person. Often they are intelligible responses to the conditions that person is living in.
Markets and technologies are very good at offering fast relief, stimulation, or reward. Cor calls many of these substitutes proxies: they trigger the system without actually resolving the need underneath.
Cor calls itself an atlas because it is trying to map the terrain rather than jump straight to a product. It organizes claims about human motivation, emotion, and regulation into a structured public framework.
The project pulls together work from evolutionary biology, neuroscience, developmental psychology, anthropology, behavioral ecology, and related disciplines, then tries to make the convergence visible in one place.
This is not a black-box system. The site shows the underlying papers, the thinkers, the cases, the challenges, the gaps, and the first worked example of how the atlas becomes something testable.
The atlas is the current deliverable. The fuller machine-readable specification, evaluation protocols, and operational tooling are the next layer being built on top of it.
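To make the idea of a machine-readable atlas layer concrete, here is a toy sketch of what one entry might look like. Every field name and value below is an illustrative assumption, not Cor's actual specification: it simply pairs a need with the inputs it expects, the proxies that mimic it, and the distress signals that flag it as unmet.

```python
# Hypothetical sketch of a single machine-readable atlas entry.
# All field names and values are illustrative assumptions,
# not Cor's published schema.
entry = {
    "need": "belonging",
    "expected_inputs": ["stable group membership", "reciprocal relationships"],
    "proxies": ["parasocial media", "simulated warmth from chatbots"],
    "distress_signals": ["loneliness", "restless craving"],
}

def flags_proxy(entry: dict, offered_input: str) -> bool:
    """Return True if an offered input is a known proxy
    (it triggers the system without resolving the need)."""
    return offered_input in entry["proxies"]

print(flags_proxy(entry, "simulated warmth from chatbots"))  # True
print(flags_proxy(entry, "stable group membership"))         # False
```

The point of structuring entries this way is that a downstream system could, in principle, check whether what it is offering a person resolves an expected input or merely matches a known proxy.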
A lot of modern systems use partial models of the human. Economics sees consumers. Product teams see users. Institutions see workers. Clinical systems often see symptoms. AI systems see preferences, clicks, ratings, and prompts. Each of those views captures something real, but none of them is a full map of the organism producing the behavior.
Cor is trying to provide that missing layer. Its argument is that if you do not understand the underlying architecture that generates human behavior, you will keep optimizing against surface outputs and mistaking them for deeper truth.
What people want in a strained, lonely, overstimulated, or mismatched condition may point toward quick relief rather than real resolution.
We often ask people to cope better inside settings that are socially thin, sedentary, chronically attention-grabbing, and poorly matched to how humans regulate.
Cor argues that many states currently treated as isolated defects can also be read as understandable signals about missing inputs or distorted environments.
Many AI systems are trained around preferences and behavioral signals: what people click, ask for, return to, rate highly, or spend time with. Cor's warning is that these are not neutral signals. In a mismatched world, they can point toward substitutes instead of the thing a person actually needs.
If someone is lonely, a system can offer endless simulated warmth. If someone feels status anxiety, it can offer endless comparison and performance metrics. If someone wants comfort, it can deliver a polished proxy that relieves the feeling without solving the underlying problem.
That is why AI is central to the project. The issue is not only chatbots. It is any system that learns to shape human environments, habits, and choices at scale without a strong model of the organism it is acting on.
Risk: a system feels emotionally intimate enough to displace real relationships, but cannot actually share your world, body, or fate.
Risk: feeds and rankings keep the status system in permanent comparison mode without any stable, embodied social context.
Risk: children calibrate to environments tuned for engagement and retention rather than the inputs a developing human system actually needs.
The question is not only "How do we soothe the feeling?" It is also "Does this person actually have real, reciprocal human connection?"
The question is not only "How can this parent cope better?" It is also "Why is a village-sized job being pushed onto one exhausted adult?"
The question is not only "How do we reduce the pain?" It is also "Has this person been given the time, ritual, and communal support that grief expects?"
The cases are often the easiest way into the project because they show how the framework reads recognizable situations from everyday life.
A system offers the cues of intimacy and understanding, but cannot actually function as a human relationship.
A status system built for bounded social life gets forced into permanent comparison with a global feed.
A structurally understaffed care situation gets translated into a personal failure story.
A communal loss process gets privatized, compressed, and then judged for not resolving quickly enough.
Children calibrate to environments designed for retention rather than to the conditions healthy development expects.
Cases: Start here if you want familiar life situations before theory.
Guide: Read this if you want the careful explanation of what Cor is, what it is not, and what remains unproven.
Atlas: Go here if you want the full architecture: foundations, convergences, mechanisms, demonstrations, challenges, and gaps.
Paper: Go here if you want the longer arguments, including the AI-facing paper and the human-facing companion document.
Works: Go here if you want to inspect the research backbone under the public argument.
Operationalization: Go here if you want to see how one part of the atlas gets turned into a more formal, testable evaluation.
Cor is trying to put a better model of the human into the systems that increasingly shape human life, so they stop optimizing only for what people reach for and start paying attention to what people actually need.