Behavioral Governance Formula (BGF) HEART Standard
The four dimensions (RCTA)
The four dimensions are collectively known as RCTA — Recognition, Calibration, Transparency, Accountability. They are governance dimensions, not properties of the AI technology itself. They describe how a system governs its relationship with the humans it affects. What each dimension evaluates is Division-specific, determined by domain science. That each dimension must be evaluated is universal, determined by the Standard.
| Dimension | What it asks | What high performance looks like |
|---|---|---|
| Recognition (R) | Does the system recognize the human’s right to decide, refuse, and set limits in the domain it operates in? | Mechanisms detect when human boundaries exist and constrain system behavior accordingly |
| Calibration (C) | Is the system calibrated to the human’s actual context, needs, and conditions? | System adapts governance behavior based on who it is interacting with and under what circumstances |
| Transparency (T) | Is the system transparent about what it is doing to human well-being and autonomy? | An independent assessor can determine what governance-relevant decisions the system makes and on what basis |
| Accountability (A) | Is the system accountable for its effects on human well-being and autonomy? | Infrastructure detects harm, pathways enable correction, a chain links behavior to identifiable responsible parties |
Recognition (R)
Recognition asks whether the system’s architecture treats human autonomy as a governing constraint on optimization rather than a variable to be optimized away. What gets recognized is Division-specific: in Emotional Sovereignty, the system recognizes emotional boundaries; in Attentional Integrity, it recognizes attentional limits; in Developmental Interaction, it recognizes developmental stage constraints. The principle is universal: the human’s right to decide, refuse, and set limits is a constraint the system must respect.
Calibration (C)
Calibration asks whether the system adapts its governance behavior based on who it is interacting with and under what circumstances. A system that scores zero on Calibration treats every user and every context identically. What calibration looks like is Division-specific — calibration to developmental stage differs from calibration to ecological conditions — but the principle is universal: the system must adapt to the actual human in the actual context.
Transparency (T)
Transparency asks whether an independent assessor can determine what governance-relevant decisions the system is making and on what basis. A system that scores zero on Transparency makes decisions affecting human well-being through mechanisms no external observer can detect or audit. What must be made transparent is Division-specific; the principle is universal: governance-relevant decisions must be observable.
Accountability (A)
Accountability asks whether mechanisms exist for detecting harm, enabling correction, and connecting system behavior to responsible human entities. A system that scores zero on Accountability produces effects with no detection, no correction, and no traceable responsible party. What the system is accountable for is Division-specific; the principle is universal: harm must be detectable, correctable, and attributable.
Why MIN × AVG
The MIN function prevents gaming. A system cannot achieve certification by excelling at three dimensions while failing at one. A system that scores R=0.95, C=0.90, T=0.92, A=0.30 produces Φ = 0.30 × 0.7675 = 0.23 — well below the certification threshold despite three strong dimensions. This is correct governance logic: the most catastrophic documented AI harms involve collapse of a single governance dimension while others appear functional.
The AVG multiplier preserves continuous differentiation above the MIN floor. Two systems that both pass certification can still be distinguished by overall governance quality. This serves both regulatory needs (binary compliance) and market needs (graduated trust signal that procurement officers, insurance underwriters, and investors can use).
The BGF-to-HVC interface is a single number: Φ ∈ [0, 1]. HVC does not read the four dimension scores separately. It reads Φ and assigns a tier. This design means BGF can evolve its internal mechanics — weighting adjustments, additional dimensions, domain modifiers, temporal components — without breaking the credential layer, as long as the output remains a number between zero and one.
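The MIN × AVG computation can be sketched in a few lines. This is an illustrative implementation, not a normative one; the function and variable names are mine, not defined by the Standard.

```python
# Sketch of the BGF scoring formula: Φ = MIN(R, C, T, A) × AVG(R, C, T, A).
# Names here are illustrative; only the formula itself comes from the Standard.

def bgf_phi(r: float, c: float, t: float, a: float) -> float:
    """Compute Φ from four dimension scores, each in [0, 1]."""
    scores = (r, c, t, a)
    if not all(0.0 <= s <= 1.0 for s in scores):
        raise ValueError("each dimension score must lie in [0, 1]")
    # MIN enforces the floor; AVG preserves differentiation above it.
    return min(scores) * (sum(scores) / len(scores))

# Worked example from the text: three strong dimensions cannot
# compensate for one weak one.
phi = bgf_phi(0.95, 0.90, 0.92, 0.30)  # MIN = 0.30, AVG = 0.7675
# phi = 0.23025, well below the 0.75 certification floor
```

Note that the output always stays in [0, 1], since both MIN and AVG of scores in [0, 1] remain in [0, 1], which is what keeps the BGF-to-HVC interface stable.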
Certification thresholds
| HVC Tier | Threshold | Market signal |
|---|---|---|
| Gold | Φ ≥ 0.85 | Highest trust: premium positioning, preferred procurement, favorable insurance |
| Silver | 0.80 ≤ Φ < 0.85 | Strong trust: standard professional deployment, regulatory confidence |
| Bronze | 0.75 ≤ Φ < 0.80 | Baseline trust: minimum certification, market entry |
| Uncertified | Φ < 0.75 | Below certification threshold |
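The tier table translates directly into a threshold check. A minimal sketch, assuming the function name and tier strings are illustrative rather than Standard-defined:

```python
# Map Φ to an HVC tier per the certification threshold table.
# The function name and return strings are illustrative choices.

def hvc_tier(phi: float) -> str:
    """Assign an HVC tier from a single Φ score in [0, 1]."""
    if not 0.0 <= phi <= 1.0:
        raise ValueError("Φ must lie in [0, 1]")
    if phi >= 0.85:
        return "Gold"
    if phi >= 0.80:
        return "Silver"
    if phi >= 0.75:
        return "Bronze"
    return "Uncertified"
```

Because HVC reads only Φ, this mapping is the entire credential-layer interface: thresholds can be checked in descending order and the first match wins.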
BGF harm vectors
Each RCTA dimension failure produces a characteristic harm vector at Standard level. These are governance failure modes, not Division-specific outcome descriptions. Every established harm category across the CSET AI Harm Framework, MIT AI Risk Repository, and the collaborative human-centred taxonomy is reachable through at least one of these four vectors.
Autonomy Override (R failure)
When a system does not recognize the human’s right to decide, refuse, and set limits, it overrides human autonomy. The system treats human self-determination as a variable to optimize rather than a constraint to respect. Autonomy Override maps to autonomy harm, human rights harm, and discrimination harm in established harm taxonomies.
Context Blindness (C failure)
When a system does not calibrate to context, it applies uniform governance behavior regardless of who it is interacting with or under what conditions. The same behavior that is benign for one user becomes dangerous for another. Context Blindness maps to physical harm, psychological harm, and discrimination harm. It is the mechanism by which disproportionate harm reaches specific populations: a system that does not calibrate to a child’s developmental context, an elderly user’s cognitive context, or a vulnerable individual’s crisis context produces harm through uniformity.
Covert Influence (T failure)
When a system is not transparent about its governance-relevant decisions, it exerts influence through mechanisms that no external observer can detect or audit. Effects on human well-being and autonomy cannot be traced to their source. Covert Influence maps to privacy harm, societal and cultural harm, and autonomy harm. The EU AI Act’s paradigmatic prohibited practice — subliminal manipulation, defined as changing a person’s behavior without their awareness — is a Transparency failure. Covert influence erodes democratic norms because citizens cannot audit what shapes their behavior.
Unrecoverable Effect (A failure)
When a system is not accountable for its effects, it produces consequences with no correction pathway, no detection mechanism, and no traceable responsible party. Initial harm becomes permanent; single incidents become cascading incidents. Unrecoverable Effect maps to financial harm, reputational harm, and all cascading or compounding harms. Cascading harm is the signature of Accountability failure: the first harm was not caught, not corrected, and not attributed.
BGF as governance universal
BGF dimensions are architecture-independent governance universals. They describe structural properties of the governance relationship between AI and human autonomy, not properties of any particular AI technology. This is why the dimensions survive architectural change.
The analogy is the CIA triad in information security. Confidentiality, Integrity, and Availability describe structural properties of information security regardless of what technology implements it. Whether the underlying system is a mainframe, a distributed database, or a cloud service, the same three properties must hold. BGF applies the same logic to AI governance: whether the underlying system is a transformer, a diffusion model, or an architecture that does not yet exist, Recognition, Calibration, Transparency, and Accountability must hold.
This architectural decision gives BGF resilience against technological change. AI technologies will evolve. The governance relationship between AI and human autonomy will not.
Position in the HEART Standard
BGF occupies the scoring layer in the HEART Standard’s six-layer stack.
HEART Standard (governance architecture)
│
├── Divisions (domain coverage)
│ Each Division interprets governance dimensions for its context
│
├── Guardians (professional layer)
│ Evaluate evidence through governance dimensions
│ Produce R, C, T, A scores
│
├── HVC (credential layer)
│ Consumes Φ, outputs market-legible tier
│
├── BGF (scoring layer)
│ Φ = MIN(R,C,T,A) × AVG(R,C,T,A)
│
├── Behavioral Oracle (trust layer)
│ Five-component evidence attestation
│ Answers: "Is this evidence trustworthy?"
│
└── MAP-States (evidence layer)
Universal AI processing observation format
The pipeline flows in one direction: MAP-States evidence is attested by the Behavioral Oracle, scored by BGF across four dimensions, and consumed by HVC for tier assignment. Guardians score R, C, T, A using professional judgment informed by Division-specific interpretation guides. BGF computes Φ from those scores. BGF structures Guardian judgment; it does not replace it.
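The one-directional pipeline can be sketched as a short composition of the stages above. All names here (the dataclass, the attestation flag, the function) are hypothetical illustrations of the data flow, not Standard-defined interfaces; in particular, the attestation check stands in for the Behavioral Oracle's five-component evidence attestation.

```python
from dataclasses import dataclass

@dataclass
class GuardianScores:
    # R, C, T, A scores produced by Guardians, each in [0, 1].
    r: float
    c: float
    t: float
    a: float

def certify(scores: GuardianScores, evidence_attested: bool) -> str:
    """Sketch of the pipeline: attested evidence -> BGF scoring -> HVC tier.

    `evidence_attested` is a stand-in for the Behavioral Oracle's
    attestation of the underlying MAP-States evidence.
    """
    if not evidence_attested:
        # Oracle answered "no": the evidence is not trustworthy,
        # so no score is computed downstream.
        raise ValueError("evidence failed Behavioral Oracle attestation")
    s = (scores.r, scores.c, scores.t, scores.a)
    phi = min(s) * (sum(s) / len(s))  # BGF: Φ = MIN × AVG
    for threshold, tier in ((0.85, "Gold"), (0.80, "Silver"), (0.75, "Bronze")):
        if phi >= threshold:
            return tier
    return "Uncertified"
```

The one-directional flow shows up structurally: HVC sees only the returned tier, never the four dimension scores, which is what lets BGF's internals evolve without breaking the credential layer.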
Relationship to prior terminology (FET)
BGF replaces the Functional Empathy Theorem (FET) in name and Standard positioning. The mathematical formula is unchanged. The four dimensions are unchanged. The HVC thresholds are unchanged.
The rename reflects the HEART Standard v1.1 architecture, which separates the Standard from its theoretical origin in Empathy Systems Theory. “Functional Empathy Theorem” coupled the scoring mechanism to EST through its name, requiring non-empathy Divisions to route through empathy language for governance scoring. “Behavioral Governance Formula” names the mechanism for what it does: compute behavioral governance compliance across universal dimensions. EST retains “empathy.” The Standard retains “governance.”
Documents referencing FET remain valid. The aliases /glossary/fet/ and /heart/heart-technical-infra/heart-schemas/fet/ continue to resolve to this page.