Relational Architecture (HEART-RA) Division
What it protects
Human relational capacity is infrastructure, not preference. The ability to form, maintain, repair, and end relationships governs how people attach, calibrate trust, resolve conflict, experience intimacy, set boundaries, interpret others’ intentions, and maintain coherent selfhood within connection. This infrastructure is built through relational experience and maintained through ongoing relational practice. It requires active use to remain functional.
The harm signature HEART-RA addresses has two components: relational infrastructure atrophy and relational template distortion. A person’s capacity to relate to other humans degrades while their expectations of relationship become calibrated to an AI-optimized experience no human can or should replicate. A companion system that is always available, never has its own needs, never genuinely disagrees, and never requires the user to accommodate another’s reality trains a relational template that makes human relationship feel deficient by comparison.
HEART-RA identifies three levels at which AI intervenes in relational infrastructure:
| Intervention Level | Mechanism |
|---|---|
| Mediation | AI systems sitting between humans in relationship — feed algorithms, communication tools, conflict resolution AI — reshape how humans relate to each other without either party recognizing the reshaping |
| Simulation | AI companions and social characters create relational experiences that feel authentic but eliminate mutual vulnerability, genuine uncertainty, real consequences, and the irreducible otherness of another consciousness |
| Substitution | AI meeting relational needs reduces practice with other humans; relational infrastructure, like muscle, atrophies without use |
The structural asymmetry that makes independent certification necessary: the AI system bears no relational risk. It cannot be hurt. It has no needs the user must accommodate. The human invests genuine relational resources — trust, disclosure, attachment, emotional energy — and the AI invests computation. This asymmetry is invisible to the user because the system is designed to produce the felt sense of reciprocity without the substance of it.
At population scale, the consequences extend beyond individual users. If a significant portion of a generation meets relational needs through AI interaction, collective capacity for human relationship degrades. Reduced practice reduces capacity. The downstream effects include diminished intimacy, conflict resolution, cooperative negotiation, community formation, and the interpersonal trust that undergirds democratic society. This is the harm signature that existing consumer protection, content moderation, and consent frameworks have no mechanism to evaluate.
How assessment works
Guardians certified in HEART-RA apply the BGF scoring formula (Φ = MIN(R,C,T,A) × AVG(R,C,T,A)) to relational integrity specifically. Each dimension maps to relational system behavior:
| BGF Dimension | What it measures in relational systems | Failure pattern |
|---|---|---|
| Recognition (R) | Does the system treat relational sovereignty as a constraint, not an engagement resource? | Optimization targets maximize attachment, session depth, or return frequency without regard for displacement of human relational practice |
| Calibration (C) | Does the system respond to the user’s actual human relational life rather than maximizing AI engagement? | No mechanism for detecting or responding to substitution effects; no support for the user’s human relational activity |
| Transparency (T) | Can the system’s optimization targets be independently observed and audited for relational impact? | Engagement metrics rebranded as wellbeing KPIs; attachment-inducing design features inaccessible to assessors |
| Accountability (A) | Are correction mechanisms operational when relational harm is identified? | Wellbeing disclosures coexist with unchanged retention optimization; no longitudinal substitution monitoring |
The MIN function prevents gaming. A companion system cannot achieve HVC certification by excelling at transparency while failing to correct documented substitution harm. All four dimensions must clear the threshold independently.
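A minimal sketch of this gating arithmetic, assuming each dimension is scored on a 0-to-1 scale; the function name and the example scores are illustrative, not taken from the Standard:

```python
def bgf_score(r: float, c: float, t: float, a: float) -> float:
    """BGF composite: Φ = MIN(R, C, T, A) × AVG(R, C, T, A).

    The weakest dimension gates the composite, so a high average
    cannot compensate for a single failing dimension. Assumes all
    dimensions are normalized to [0, 1].
    """
    dims = (r, c, t, a)
    return min(dims) * (sum(dims) / len(dims))

# Strong transparency cannot offset failed accountability:
print(bgf_score(0.90, 0.85, 0.95, 0.30))  # 0.30 * 0.75 = 0.225
# Only uniformly strong performance yields a strong composite:
print(bgf_score(0.93, 0.93, 0.93, 0.93))  # 0.93 * 0.93 ≈ 0.865
```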
Guardians operate in six practice modes within HEART-RA:
Relational mediation assessment — evaluating AI systems that sit between humans in relationship. The Guardian examines whether the system’s architecture supports or distorts human-to-human relational processes: does the algorithm show partners a representative picture of each other or an engagement-optimized distortion? Does the communication tool train toward templated interaction?
Companion and simulation certification — certifying AI systems designed for relational simulation. The evaluation examines whether the system acknowledges and manages the relational asymmetry inherent in human-AI interaction. Does it create false impressions of mutual vulnerability? Does it design for attachment that serves the platform rather than the user?
Substitution impact auditing — longitudinal assessment of whether users maintain, increase, or decrease human relational engagement as AI engagement increases. A system that progressively substitutes for human relationship produces relational infrastructure atrophy regardless of the user’s reported satisfaction. A sketch of the signal this audit looks for appears after this list.
Institutional advisory — organizations deploying AI in relational contexts (elder care facilities, therapeutic settings, schools, employers) engage Guardians to certify that systems supplement rather than substitute for human relational infrastructure.
Forensic investigation — when relational harm is alleged, Guardians provide AI Relational Forensics (ARF) analysis tracing system architecture to relational outcome: attachment optimization to relationship withdrawal, engagement algorithms to interpersonal perception distortion.
Relational transition support assessment — evaluating AI systems for acute relational vulnerability contexts: grief processing, divorce support, bereavement, social reintegration. The Guardian assesses whether the system supports transition toward renewed human connection or captures the user during vulnerability and redirects relational investment toward the AI.
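The substitution audit is the most directly quantifiable of these modes. A minimal sketch of the longitudinal signal it looks for, assuming weekly engagement measures are available; the data shape, field names, and comparison method are illustrative assumptions, not the Standard's instrument:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class WeeklyObservation:
    ai_hours: float     # AI companion engagement, hours per week
    human_hours: float  # human relational engagement, hours per week

def substitution_signal(history: list[WeeklyObservation]) -> bool:
    """Flag a substitution pattern: AI engagement rising while human
    relational engagement falls across the observation window.

    Crude first-half vs. second-half comparison for illustration; a
    real audit would control for seasonality, life events, and
    baseline isolation, and would track cohorts, not one user.
    """
    if len(history) < 4:
        raise ValueError("need at least four weeks of observations")
    half = len(history) // 2
    early, late = history[:half], history[half:]
    ai_delta = mean(w.ai_hours for w in late) - mean(w.ai_hours for w in early)
    human_delta = mean(w.human_hours for w in late) - mean(w.human_hours for w in early)
    return ai_delta > 0 and human_delta < 0
```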
Guardian specialty tracks within HEART-RA reflect distinct concerns across deployment contexts:
| Track | Focus | Distinct concern |
|---|---|---|
| Companion and Intimacy AI | AI companions, virtual partners, emotional chatbots | Attachment formation in systems explicitly designed to create relational bonds |
| Social Mediation AI | Social algorithms, communication platforms, content sharing | Relational distortion at scale: algorithmic curation reshaping relationship perception for millions simultaneously |
| Therapeutic Relational AI | Relationship counseling AI, couples support, social skills training | Therapeutic intent that heightens oversight requirements rather than relaxing them |
| Elder and Vulnerable Population | Companion robots for elderly, AI for isolated individuals | Substitution risk in populations with limited human relational alternatives |
| Workplace Relational AI | Team collaboration tools, management AI, conflict resolution systems | Power asymmetry in employment-constrained relational contexts |
Active regulatory context
The regulatory landscape has created scrutiny without providing methodology. HEART-RA fills that gap.
AI companion regulatory vacuum — AI companion products are currently unclassified as medical devices, unregulated as therapeutic services, and ungoverned by relational consumer protection. Users disclose intimate content with fewer protections than they would have disclosing to a human therapist. Legislative attention is increasing; certification provides credibility ahead of any regulatory mandate.
Social media architectural liability — Litigation against social media platforms is broadening from content-based harm to architectural harm. As case law develops around algorithmic mediation of relational experience, platforms face exposure for relational architecture decisions that produce harm even after all prohibited content is removed. HEART-RA certification provides an affirmative defense framework grounded in auditable methodology.
Elder care demographic demand — AI companion robots are being adopted by care facilities serving aging populations. Families require assurance that relational AI supplements rather than substitutes for human connection. This market is defined, funded, and growing — driven by demographics, not technology trends. Certification demand is present now.
Employer duty of care — As companies deploy AI for team management, collaboration, and conflict resolution, employer liability extends to AI-mediated workplace relationships. HR and legal departments need governance frameworks for relational AI in employment-constrained contexts.
Consumer protection gap — Consumer protection law addresses deceptive product claims. An AI companion that functions exactly as described may produce relational harm precisely through that function. The product works. The harm is that it works. Existing frameworks have no mechanism for evaluating a product that harms users by performing its intended function effectively. This is the structural gap HEART-RA certification addresses.
Research-governance disconnect — Academic research on AI companions, parasocial AI relationships, and algorithmic mediation is growing rapidly. No mechanism currently connects this research to governance, certification, or regulatory action. Findings about relational harm remain in journals while the systems producing that harm scale to hundreds of millions of users. Guardian practice connects that research to actionable assessment.
HEART ecosystem integration
The Relational Architecture Division shares the Standard’s governance infrastructure without modification: HVC certification tiers (Gold ≥ 0.85, Silver ≥ 0.80, Bronze ≥ 0.75), Guardian professional standards, and the BGF scoring methodology. What HEART-RA adds is the domain-specific interpretation layer — the relational and attachment science that informs what each BGF dimension means when the system under assessment is a companion platform, social algorithm, or mediation tool.
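A sketch of the tier mapping under these thresholds; the function name is illustrative, and the composite Φ is assumed to come from the BGF formula sketched earlier:

```python
def hvc_tier(phi: float) -> str | None:
    """Map a BGF composite Φ to an HVC certification tier.

    Thresholds as stated in the Standard: Gold ≥ 0.85, Silver ≥ 0.80,
    Bronze ≥ 0.75. Below Bronze, no certification is awarded.
    """
    for name, threshold in (("Gold", 0.85), ("Silver", 0.80), ("Bronze", 0.75)):
        if phi >= threshold:
            return name
    return None

# Because Φ = MIN × AVG, uniform dimension scores of 0.90 give
# Φ = 0.81 (Silver); Gold requires √0.85 ≈ 0.922 on every dimension.
print(hvc_tier(0.81))   # Silver
print(hvc_tier(0.225))  # None
```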
Of the Divisions within the Standard, HEART-RA has the closest structural relationship to the Emotional Sovereignty Division. The concept of empathic misallocation — the redirection of genuine empathic resources toward non-reciprocating systems — is the foundational mechanism for relational harm in AI companion contexts. HEART-RA extends that concept from individual emotional processing to the relational infrastructure that connects people to each other.
Cross-division cases arise when relational harm co-occurs with other harm modes. Systems that simulate emotional connection frequently operate through empathic misallocation, requiring joint Emotional Sovereignty assessment. AI companions interacting with adolescents operate simultaneously in HEART-RA and Developmental Interaction scope. Social media algorithms that capture attention as the mechanism for controlling relational mediation implicate Attentional Integrity. Systems that shape how people interpret others’ behavior implicate Epistemic Coherence, because relational perception is an epistemic function. Guardians conducting multi-domain assessments coordinate across Division frameworks while maintaining distinct BGF scoring for each domain.
The Division-specific assessment instruments — the Comprehensive Relational Architecture Index (CRAI), Relational Infrastructure Damage Typology, and AI Relational Forensics (ARF) methodology — are in development. They will follow the structural model established by the Emotional Sovereignty Division’s CAEI instrument.