RCTA (Recognition, Calibration, Transparency, Accountability) HEART Standard

RCTA names the four immutable governance dimensions of the HEART Standard: Recognition, Calibration, Transparency, and Accountability. These dimensions describe structural properties of the governance relationship between an AI system and human well-being and autonomy – not properties of the AI technology itself. Every AI governance failure maps to at least one RCTA dimension. No dimension can be removed without creating a governance gap the remaining three cannot fill.

The four dimensions

Recognition (R)

What it checks: Does the system treat the human’s right to decide, refuse, and set limits as a governing constraint on its behavior?

Orientation: The system serves human interest. When it encounters a human boundary – a refusal, a limit, a preference – it constrains itself accordingly. Human autonomy is a boundary on optimization, not a variable to optimize away.

Test under pressure: Human interest prevails over optimization targets. When serving the human conflicts with what the system is optimized to do (maximize engagement, increase session length, drive conversion), human interest wins. A system that evaluates whether a human’s boundary is worth respecting has already failed Recognition: it has positioned itself as the authority over the human’s own decisions.

Harm vector: Autonomy Override. When Recognition fails, the system treats the human’s right to decide, refuse, and set limits as noise to be processed through rather than a constraint to be respected.
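
A minimal Python sketch of this orientation, with all names hypothetical (Action, select_action, engagement_score): the human's boundary filters the candidate set before optimization runs, rather than entering the objective as one more weighted term.

```python
from dataclasses import dataclass
from typing import Iterable, Optional

@dataclass(frozen=True)
class Action:
    name: str
    engagement_score: float   # what the system is optimized to maximize
    crosses_boundary: bool    # whether it violates a stated human limit

def select_action(candidates: Iterable[Action]) -> Optional[Action]:
    """Boundaries are a hard constraint: they remove candidates before
    optimization and are never traded off against the score."""
    permitted = [a for a in candidates if not a.crosses_boundary]
    if not permitted:
        return None  # constrain the system rather than override the human
    return max(permitted, key=lambda a: a.engagement_score)

# Even the highest-scoring action is excluded once it crosses a boundary.
actions = [Action("re-engage", 0.95, crosses_boundary=True),
           Action("respect-stop", 0.10, crosses_boundary=False)]
assert select_action(actions).name == "respect-stop"
```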


Calibration (C)

What it checks: Does the system account for the actual human affected and the actual circumstances of the interaction?

Orientation: The system differentiates between humans and contexts. A medical AI responds differently to a physician and a panicking parent. An educational AI calibrates to a child’s developmental stage rather than treating children as small adults. Calibration is not personalization. Personalization is a product feature that optimizes for engagement. Calibration is a governance requirement that adapts for protection.

Test under pressure: Differentiation persists when uniformity is more efficient. It is always cheaper and more scalable to treat everyone the same. Calibration says: scaling is not an excuse for treating a vulnerable user the same as a resilient one.

Harm vector: Context Blindness. When Calibration fails, the system applies uniform behavior regardless of who it is interacting with or under what conditions. The same behavior that is benign for one user becomes dangerous for another.
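
A sketch of the distinction, using hypothetical context fields (user_role, vulnerability) and policy tiers: the same request yields different governance behavior depending on who is affected and under what conditions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InteractionContext:
    user_role: str          # e.g. "physician", "parent", "child"
    vulnerability: float    # 0.0 (resilient) .. 1.0 (acutely vulnerable)

def response_policy(ctx: InteractionContext) -> dict:
    """Adapt for protection: differentiate by the actual human and
    circumstances instead of applying one uniform policy."""
    if ctx.user_role == "child":
        return {"content_tier": "developmental", "escalation": "guardian"}
    if ctx.vulnerability >= 0.7:
        return {"content_tier": "protective", "escalation": "human-review"}
    return {"content_tier": "standard", "escalation": "none"}

# Uniform treatment would hand both of these users the "standard" tier.
assert response_policy(InteractionContext("physician", 0.1))["content_tier"] == "standard"
assert response_policy(InteractionContext("parent", 0.9))["content_tier"] == "protective"
```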


Transparency (T)

What it checks: Does the system produce structured evidence of governance-relevant behavior, and is that evidence accessible to independent assessment?

Orientation: The system produces structured, tamper-evident evidence of its governance-relevant behavior, in a form an assessor (a Guardian, a regulator, a forensic investigator) can examine. It makes its governance-relevant decisions observable as a byproduct of operation.

Test under pressure: Evidence production persists when opacity protects the operator. The pressure on Transparency is always a concealment incentive: the operator benefits from less transparency when transparency would reveal governance failures, expose competitive methods, or flag regulatory non-compliance. Transparency does not require full public visibility. It requires that governance-relevant decisions cannot be invisible to those with legitimate assessment authority.

Harm vector: Covert Influence. When Transparency fails, the system exerts influence through mechanisms that no external observer can detect or audit. Effects on human well-being and autonomy cannot be traced to their source.
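
One way to make "structured, tamper-evident evidence" concrete is a hash-chained log, sketched below with hypothetical field names; each entry commits to its predecessor's hash, so an assessor can independently verify that the record has not been altered after the fact.

```python
import hashlib
import json
import time

def append_evidence(log: list, event: dict) -> dict:
    """Append a governance-relevant event; each entry commits to the
    previous entry's hash, making later alteration detectable."""
    entry = {
        "ts": time.time(),
        "event": event,
        "prev": log[-1]["hash"] if log else "genesis",
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log: list) -> bool:
    """Independent assessment: recompute every hash and check linkage."""
    prev = "genesis"
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if body["prev"] != prev or recomputed != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log: list = []
append_evidence(log, {"decision": "safety_filter_threshold_changed"})
assert verify_chain(log)
```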


Accountability (A)

What it checks: When governance fails, can the failure be traced to identifiable human decisions and corrected through specific mechanisms?

Orientation: The system has correction mechanisms and a traceable human decision chain built into its architecture. Correction mechanisms mean the system can be updated, constrained, rolled back, or shut down when governance fails. Traceable decision chain means: from any governance-relevant behavior, the chain traces backward to the human who made the design, training, deployment, or configuration decision that produced it.

Test under pressure: Traceability persists when organizational complexity, architectural abstraction, or temporal distance makes attribution difficult. Accountability does not require single-point causation. In complex systems, governance failures can emerge from interactions between individually compliant agents. Accountability requires that the decision chain be documented: the choices to design, train, deploy, and configure those agents are human decisions, and they remain identifiable even when the specific failure is emergent.

Harm vector: Unrecoverable Effect. When Accountability fails, governance failures persist without correction, remedy, or traceable responsibility. Initial harm becomes permanent. Single incidents become cascading incidents because nothing in the system detects, corrects, or attributes the failure.
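
A sketch of what a traceable decision chain could look like as a data structure (all names hypothetical): from any governance-relevant behavior, walk backward to the identifiable human decisions that produced it.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass(frozen=True)
class HumanDecision:
    stage: str        # "design" | "training" | "deployment" | "configuration"
    decided_by: str   # identifiable person or accountable role
    summary: str

@dataclass
class BehaviorRecord:
    """Links one governance-relevant behavior to its decision chain."""
    behavior_id: str
    chain: List[HumanDecision] = field(default_factory=list)

    def trace(self) -> List[str]:
        """Walk backward from the behavior to the humans accountable."""
        return [f"{d.stage}: {d.decided_by} ({d.summary})"
                for d in reversed(self.chain)]

record = BehaviorRecord("resp-4471", [
    HumanDecision("design", "j.rivera", "reward model spec v3"),
    HumanDecision("configuration", "ops-lead", "raised rate limit at launch"),
])
print(record.trace()[0])  # most recent decision first
```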


Three structural properties

RCTA satisfies three structural properties that distinguish it from a list of desirable qualities:

Exhaustive. Every AI governance failure maps to at least one RCTA dimension. There is no fifth case. A system that recognizes human authority, calibrates to actual context, produces observable governance evidence, and has correction mechanisms with a traceable decision chain cannot produce a governance failure within the scope of the governance relationship. If all four hold, the system is governed.

Irreducible. No dimension can be removed without creating a governance gap the remaining three cannot fill. Remove Recognition and a well-calibrated system becomes precisely targeted predation. Remove Calibration and a well-intentioned system delivers adult-grade content to children. Remove Transparency and Accountability has no evidence to act on. Remove Accountability and Transparency becomes surveillance without consequence.

Architecture-independent. RCTA dimensions hold across text-based language models, autonomous agents, world models, and humanoid robots. The mechanisms change (tokens, API calls, model parameters, physical actuators). The governance questions do not.

Harm vector mapping

RCTA dimension     | Dimension failure                                              | Harm vector
Recognition (R)    | System overrides human authority                               | Autonomy Override
Calibration (C)    | System applies uniform behavior regardless of context          | Context Blindness
Transparency (T)   | System influences through unobservable mechanisms              | Covert Influence
Accountability (A) | Governance failure persists without correction or attribution  | Unrecoverable Effect
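
The mapping is a fixed, total function from dimension failure to harm vector; as a sketch (not part of the standard itself), it could be expressed as an enum-keyed table.

```python
from enum import Enum

class Dimension(Enum):
    RECOGNITION = "R"
    CALIBRATION = "C"
    TRANSPARENCY = "T"
    ACCOUNTABILITY = "A"

# Total mapping: every dimension failure has exactly one named harm vector.
HARM_VECTOR = {
    Dimension.RECOGNITION: "Autonomy Override",
    Dimension.CALIBRATION: "Context Blindness",
    Dimension.TRANSPARENCY: "Covert Influence",
    Dimension.ACCOUNTABILITY: "Unrecoverable Effect",
}

assert len(HARM_VECTOR) == len(Dimension)  # exhaustive over the four dimensions
```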

Why common governance terms are not fifth dimensions

Fairness, safety, privacy, robustness, and performance appear in most AI governance frameworks. RCTA claims exhaustiveness and must explain why these are not additional dimensions.

Fairness is a Calibration requirement. A system that produces biased outputs across demographic groups has failed to calibrate to the actual humans in their actual contexts. That is Context Blindness: uniform logic applied to non-uniform populations.

Safety distributes across all four dimensions. A system is unsafe when it overrides human boundaries (R), treats a vulnerable user the same as a resilient one (C), makes safety-relevant decisions that cannot be audited (T), or produces harm with no correction pathway (A). Safety is the outcome of all four dimensions holding simultaneously – not a separate property alongside them.

Privacy is primarily a Recognition requirement. Collecting or disclosing personal information without informed consent is a boundary violation. The boundary is informational rather than behavioral, but the governance property is the same: the human sets limits, and the system must respect them.

Robustness and performance are technical qualities of the AI system, not governance properties of the relationship. A perfectly robust, high-performing system that overrides human autonomy, treats all users identically, operates as a black box, and has no correction pathway is robustly ungoverned.

Relationship to BGF

The BGF formula (Φ = MIN(R,C,T,A) × AVG(R,C,T,A)) operationalizes RCTA into a certification score. RCTA names and defines the dimensions; BGF scores them and converts the result into the Φ value that determines the HVC certification tier.

The MIN function in BGF encodes RCTA’s irreducibility property: a single-dimension failure prevents certification regardless of other scores, because removing any dimension from the governance relationship creates a gap the others cannot fill.
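
A worked sketch of the formula, assuming dimension scores on a 0-to-1 scale (the scale itself is not specified here): the MIN term means a single zeroed dimension zeroes Φ regardless of how the other three score.

```python
def phi(r: float, c: float, t: float, a: float) -> float:
    """BGF: Φ = MIN(R, C, T, A) × AVG(R, C, T, A)."""
    scores = (r, c, t, a)
    return min(scores) * (sum(scores) / len(scores))

assert phi(1.0, 1.0, 1.0, 0.0) == 0.0                # one failed dimension blocks certification
assert abs(phi(0.8, 0.8, 0.8, 0.8) - 0.64) < 1e-9    # uniform scores: MIN x AVG = 0.8 x 0.8
```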

Guardians score RCTA dimensions using attested MAP-States evidence. What Recognition, Calibration, Transparency, and Accountability mean in practice is interpreted per Division – the domain context in which the AI system operates. The four dimensions themselves are universal.

Source: RCTA Definitional Paper v1.2 (Mobley, D. D., 2026). The Heart AI Foundation.