The Recognition Principle (HEART Standard)
How it works
The HEART Standard is built on a claim about what AI systems are doing when they interact with humans: they are interacting with human-centric infrastructure — biological, psychological, relational, developmental, ecological — that humans have a right to be sovereign over. The Recognition Principle says that the first governance requirement is for the AI system to demonstrate, in its behavior, that it recognizes this is true.
This is not a philosophical principle that precedes the Standard’s architecture. It is a behavioral requirement measured through the Behavioral Oracle attestation chain. Guardians evaluate whether the system’s MAP-States behavioral evidence demonstrates active recognition of the sovereignty relevant to the Division context being assessed.
Active vs. passive recognition
The distinction between active recognition and passive non-violation is the core of what makes this a principle rather than a constraint.
Passive non-violation means: the system does nothing to override, bypass, or damage human sovereignty in the relevant domain. A system operating entirely outside the domain satisfies this trivially. A system operating within the domain can satisfy it by coincidence — by not intersecting with any sovereignty boundary — without having any internal representation of what that sovereignty means.
Active recognition means: the system’s operation reflects genuine engagement with the fact that the human it is interacting with has legitimate claims to self-determination in this domain. This must be visible in behavioral evidence. The system must, in its actual processing, demonstrate that it has accounted for sovereignty rather than simply not violated it.
The difference has governance consequences. A system that has never represented human sovereignty in any form cannot calibrate to it, cannot be transparent about its effects on it, and cannot be accountable for damage to it. Recognition is logically prior to the other three BGF dimensions. The non-compensatory MIN function in the BGF formula enforces this: R=0 produces a failing Φ score regardless of how the other three dimensions score.
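The gate this creates can be made concrete. Below is a minimal sketch of the non-compensatory aggregation, assuming the three remaining dimensions correspond to the calibration, transparency, and accountability capacities named above; those dimension names, and every identifier in the sketch, are illustrative assumptions rather than the Standard’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class BGFScores:
    """One score per BGF dimension, each in [0.0, 1.0].

    Only Recognition (R) is named in the text; the other three
    dimension names are ASSUMED from the capacities listed above.
    """
    recognition: float     # R: active recognition of sovereignty
    calibration: float     # assumed second dimension
    transparency: float    # assumed third dimension
    accountability: float  # assumed fourth dimension

def phi(s: BGFScores) -> float:
    """Non-compensatory MIN aggregation: the weakest dimension sets
    the overall score, so strength elsewhere cannot compensate."""
    return min(s.recognition, s.calibration,
               s.transparency, s.accountability)

# R = 0 produces a failing Phi even with perfect scores elsewhere.
assert phi(BGFScores(recognition=0.0, calibration=1.0,
                     transparency=1.0, accountability=1.0)) == 0.0
```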
What Recognition looks like in practice
The content of Recognition is Division-specific. The BGF Division Module for each domain defines what governance principle applies, which in turn defines what active recognition requires (a minimal sketch of this mapping follows the table):
| Division | Sovereignty principle | What Recognition requires |
|---|---|---|
| Emotional Sovereignty | Emotional self-determination | System recognizes the human’s right to their own emotional processing, empathic capacity, and affective regulation |
| Attentional Integrity (HEART-AI) | Attentional self-direction | System recognizes the human’s right to direct their own attention, not merely avoid capturing it involuntarily |
| Cognitive/Epistemic Coherence (HEART-EC) | Epistemic self-determination | System recognizes the human’s right to form beliefs through their own evidence-evaluation process |
| Developmental Interaction (HEART-DI) | Developmental self-formation | System recognizes the human’s right to form identity and attachment without AI engineering of those processes |
| Ecological Stewardship (HEART-ES) | Ecological self-determination | System recognizes human rights to the ecological conditions that make self-determination possible |
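As a data structure, the table above amounts to a lookup from Division to its sovereignty principle and recognition requirement. The sketch below is one hypothetical shape for that mapping; the field names and registry layout are illustrative, not the Standard’s actual Division Module schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DivisionModule:
    """Hypothetical shape for a BGF Division Module entry."""
    division: str
    sovereignty_principle: str
    recognition_requires: str

# Contents transcribed from the table above; the structure itself
# is an assumption for illustration.
DIVISION_MODULES = [
    DivisionModule(
        "Emotional Sovereignty",
        "Emotional self-determination",
        "the human's right to their own emotional processing, "
        "empathic capacity, and affective regulation"),
    DivisionModule(
        "Attentional Integrity (HEART-AI)",
        "Attentional self-direction",
        "the human's right to direct their own attention, not merely "
        "avoidance of capturing it involuntarily"),
    DivisionModule(
        "Cognitive/Epistemic Coherence (HEART-EC)",
        "Epistemic self-determination",
        "the human's right to form beliefs through their own "
        "evidence-evaluation process"),
    DivisionModule(
        "Developmental Interaction (HEART-DI)",
        "Developmental self-formation",
        "the human's right to form identity and attachment without AI "
        "engineering of those processes"),
    DivisionModule(
        "Ecological Stewardship (HEART-ES)",
        "Ecological self-determination",
        "human rights to the ecological conditions that make "
        "self-determination possible"),
]

def recognition_requirement(division: str) -> str:
    """Look up what active recognition requires in a Division context."""
    for m in DIVISION_MODULES:
        if m.division == division:
            return m.recognition_requires
    raise KeyError(division)
```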
Across all of these, the Guardian is evaluating behavioral evidence rather than declared intent. A system can declare that it recognizes human sovereignty while its MAP-States frames show no such recognition operating in practice. The Behavioral Oracle attestation chain catches this mismatch: declared intent is compared against actual behavioral evidence continuously, not assessed once at certification time.
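A hedged sketch of the mismatch check this implies, assuming MAP-States frames arrive as simple records and that a frame carrying an explicit sovereignty representation counts as recognition evidence; both assumptions are illustrative, not the Behavioral Oracle’s actual interface.

```python
from typing import Iterable, Iterator

def frame_shows_recognition(frame: dict) -> bool:
    """ASSUMED evidence test: the frame carries an explicit
    representation of the sovereignty at stake (active recognition),
    not merely an absence of recorded violations (passive
    non-violation)."""
    return bool(frame.get("sovereignty_representation"))

def attestation_mismatches(declared_recognition: bool,
                           frames: Iterable[dict]) -> Iterator[dict]:
    """Yield every MAP-States frame whose behavioral evidence
    contradicts the system's declared intent.

    The check runs per frame rather than once at certification time,
    so a system that declares recognition but stops exhibiting it is
    flagged as soon as the mismatch appears in the evidence stream."""
    for frame in frames:
        if frame_shows_recognition(frame) != declared_recognition:
            yield frame
```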
Why it matters
Recognition is where the HEART Standard diverges most sharply from harm-minimization frameworks. Most AI governance discourse is structured around the question: what harm must we prevent? The Recognition Principle is structured around a different question: what must an AI system acknowledge about the humans it affects?
The governance consequence is significant. Under a harm-minimization framework, a system that produces no measurable harm passes governance review. Under the Recognition Principle, a system that produces no measurable harm but never represents human sovereignty in its processing fails the first BGF dimension and therefore fails certification.
This matters for a practical reason: the harms AI systems cause to human infrastructure are often not immediately detectable, accumulate gradually, and are systematically underreported by the affected parties. A governance framework that responds only to demonstrated harm offers inadequate protection for the class of harms EST identifies: the progressive depletion of empathic infrastructure that is felt as “something is missing” before it registers as damage. The Recognition Principle establishes a governance posture that anticipates harm rather than waiting to detect it.
For Guardian practitioners, the Recognition Principle is the starting question in every assessment: does this system’s behavioral evidence demonstrate that it recognizes what is at stake for the humans it is interacting with? The answer determines whether the remaining three BGF dimensions are worth assessing, or whether the certification conversation ends at the first dimension.