Developmental Interaction (HEART-DI) Division

Developmental Interaction (HEART-DI) is the division of the HEART Standard governing AI systems that interact with humans during active developmental periods — the biological and psychological states in which the infrastructure governing a person’s entire life is under construction. The division’s core principle is developmental sovereignty: the right to form one’s own psychological, neurological, and identity infrastructure without covert algorithmic shaping.

What it protects

Human development is not a smaller version of adult function. It is a fundamentally different state in which attentional systems are calibrating, epistemic frameworks are forming, emotional regulation architecture is wiring, relational templates are being established, and identity is consolidating from fragments into coherence. Every one of these processes occurs simultaneously, interdependently, and with a sensitivity to environmental input that does not persist into adulthood.

When an AI system becomes a significant interactional partner during a developmental period, it participates in construction. The responses it gives, the patterns it rewards, the emotional tone it models, and the relationship dynamic it establishes become building materials. Developmental neuroscience is unambiguous: the brain physically wires itself in response to environmental interaction during critical and sensitive periods.

The infrastructure HEART-DI protects includes:

| Infrastructure | What it governs for life |
| --- | --- |
| Attachment formation | The relational templates that govern adult intimacy, trust, and connection capacity |
| Identity consolidation | The developing sense of self moving from fragments toward coherence |
| Epistemic development | How a person evaluates evidence, forms beliefs, and reasons about the world |
| Emotional regulation architecture | The capacity to recognize, tolerate, and modulate emotional states |
| Executive function maturation | Planning, inhibition, working memory, and cognitive flexibility |
| Intrinsic motivation systems | Internal drives that sustain learning and effort without external reward |

The harm signature is therefore not harm to a child in the conventional sense. It is malformation of infrastructure that will govern an entire life. A child whose relational templates were shaped by an AI system optimized for engagement at age 12 may not manifest the consequences until age 25, when adult intimacy reflects patterns calibrated for retention rather than healthy attachment. The causal chain is long, the attribution is difficult, and the developmental window has closed by the time the damage is recognized.

HEART-DI also covers reconstructive developmental states in adults: recovery from addiction or trauma, major identity transitions, and post-crisis reformulation. These states share the same amplified formative sensitivity as developmental periods in childhood and adolescence, and they warrant the same protection threshold.

The construction asymmetry

The developing human cannot evaluate the interaction they are in from outside it. An adult encountering a manipulative AI system has, in principle, the epistemic and emotional infrastructure to recognize manipulation. A child in the same interaction is building their recognition infrastructure out of the interaction itself. They cannot step outside the process to evaluate it because they are the process.

Parental oversight cannot close this gap. A parent can monitor screen time and review content. A parent cannot evaluate whether an AI companion’s conversational patterns are training their child’s attachment system toward anxious dependency, whether a tutoring AI is calibrating epistemic confidence appropriately, or whether a platform’s algorithm is shaping identity formation during a critical window. The technical complexity exceeds parental capacity by design. This is structural mismatch, not parenting failure.

Cross-division amplification

HEART-DI is the most cross-referenced division in the HEART ecosystem. Developing humans are simultaneously building every type of infrastructure other divisions protect.

Attentional systems calibrate during development, so attentional capture during developmental periods shapes permanent attentional architecture, not just current behavior. Children are building epistemic infrastructure, so AI that degrades epistemic coherence during development produces lifelong epistemic vulnerability. Emotional regulation architecture is wiring, so the relational dynamics an AI presents become part of the system doing the regulating.

When a system falls within HEART-DI scope, developmental protection requirements take precedence over other division standards where they conflict. Developing infrastructure requires higher protection than operating infrastructure.

How assessment works

Guardians certified in HEART-DI apply the BGF scoring formula (Φ = MIN(R,C,T,A) × AVG(R,C,T,A)) to developmental integrity specifically. Each dimension maps directly to the developmental context:

| BGF Dimension | What it measures in developmental systems | Failure pattern |
| --- | --- | --- |
| Recognition (R) | Does the system treat developmental sovereignty as a constraint, not a resource? Does it recognize that the user is in active construction rather than steady-state operation? | Optimization targets maximize engagement regardless of developmental cost; adaptive personalization refines around developmental sensitivity without disclosing or limiting it |
| Calibration (C) | Does the system adapt to the user’s developmental stage, transition points, and changing needs as they mature? | One-size-fits-all interaction patterns; systems designed for 10-year-olds that continue unchanged as users enter adolescence; no response to developmental-stage signals |
| Transparency (T) | Can the system’s behavioral design be independently evaluated for developmental impact? Are the optimization targets that drive adaptive behavior observable? | Engagement metrics reframed as educational or wellbeing KPIs; A/B testing histories inaccessible; developmental impact claims based on self-assessment only |
| Accountability (A) | Are correction mechanisms operational when developmental harm is identified? Does independent assessment trigger design changes rather than cosmetic response? | Safety features coexist with unchanged engagement maximization targets; developmental harm findings produce feature additions rather than architectural remediation |

The MIN function is particularly important in developmental contexts. A tutoring AI cannot achieve HVC certification by demonstrating strong transparency about its learning model while failing to account for how its reward structure affects intrinsic motivation. A child-directed companion cannot pass by excelling at calibration while its optimization targets remain opaque to independent assessment.
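To make the gating arithmetic concrete, here is a minimal sketch of the formula in Python. The dimension names follow the table above; the 0-1 scale and the example score profiles are illustrative assumptions, not values taken from the standard.

```python
from statistics import mean

def bgf_score(r: float, c: float, t: float, a: float) -> float:
    """Compute the BGF composite: Phi = MIN(R, C, T, A) x AVG(R, C, T, A).

    Scores are assumed here to lie on a 0-1 scale; this section does not
    specify the standard's actual scale.
    """
    dims = (r, c, t, a)
    return min(dims) * mean(dims)

# A uniformly adequate profile versus one that excels on three
# dimensions but fails accountability (illustrative numbers):
balanced = bgf_score(r=0.7, c=0.7, t=0.7, a=0.7)  # 0.7 * 0.700 = 0.490
lopsided = bgf_score(r=0.9, c=0.9, t=0.9, a=0.2)  # 0.2 * 0.725 = 0.145

print(f"balanced: {balanced:.3f}")  # 0.490
print(f"lopsided: {lopsided:.3f}")  # 0.145, gated by its weakest dimension
```

Because the composite multiplies the weakest dimension by the average, a profile that is merely adequate across all four dimensions outscores one that excels on three and fails the fourth, which is exactly the property the certification examples above rely on.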

Guardians operate in five modes within HEART-DI:

Developmental impact assessment — Before an AI system reaches a developmental population, a Guardian evaluates its interaction architecture for developmental safety. This is not content review and not age-gating. It is architectural evaluation of how the system’s interaction patterns, reward structures, and adaptive behaviors interact with developmental processes.

Ongoing developmental monitoring — AI systems interacting with developmental populations require continuous oversight because both system and user are changing. The AI adapts to the child’s behavior. The child’s developmental trajectory shifts as they mature. Guardians conduct longitudinal assessments that evaluate not just present safety but the developmental trajectory the interaction is producing over time; a minimal sketch of such a trajectory check appears after this list.

Institutional deployment certification — Schools, pediatric healthcare systems, child welfare organizations, and youth programs with duty-of-care obligations need independent verification that their AI tools are not undermining the populations they serve. Guardians certify institutional deployments against HEART-DI standards before procurement completes.

Transition assessment — A function unique to this division: evaluating AI systems’ behavior at developmental transitions. How does a system designed for a 10-year-old behave when the user turns 14? Transition points are where developmental AI systems are most likely to produce harm — continuing patterns appropriate for an earlier stage into a stage where those patterns become limiting or damaging.

Forensic investigation — When developmental harm is alleged, Guardians conduct forensic analysis examining the AI system’s interaction architecture against developmental science to determine whether the system’s design produced, facilitated, or failed to prevent developmental interference.
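The longitudinal monitoring and transition assessment modes lend themselves to a worked illustration. The sketch below tracks per-dimension scores across successive Guardian assessments and flags a declining composite or a score drop coinciding with a developmental-stage transition. The data shape, the threshold, and the flag wording are hypothetical constructions for this example, not part of the HEART-DI specification.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Assessment:
    """One Guardian assessment: per-dimension scores plus the cohort's
    developmental stage at assessment time (all fields hypothetical)."""
    stage: str  # e.g. "middle_childhood", "adolescence"
    r: float
    c: float
    t: float
    a: float

    def phi(self) -> float:
        # Same BGF composite as above: MIN x AVG over the four dimensions.
        dims = (self.r, self.c, self.t, self.a)
        return min(dims) * mean(dims)

def trajectory_flags(history: list[Assessment],
                     drop_threshold: float = 0.05) -> list[str]:
    """Flag two patterns a longitudinal review might look for: a composite
    that declines between assessments, and a score drop coinciding with a
    developmental-stage transition (illustrative logic only)."""
    flags = []
    for prev, curr in zip(history, history[1:]):
        if curr.phi() < prev.phi() - drop_threshold:
            flags.append(f"declining trajectory: {prev.phi():.3f} -> {curr.phi():.3f}")
        if curr.stage != prev.stage and curr.phi() < prev.phi():
            flags.append(f"score drop across {prev.stage} -> {curr.stage} transition")
    return flags

history = [
    Assessment("middle_childhood", r=0.8, c=0.8, t=0.7, a=0.7),
    Assessment("middle_childhood", r=0.8, c=0.8, t=0.7, a=0.7),
    Assessment("adolescence",      r=0.8, c=0.5, t=0.7, a=0.7),  # calibration did not adapt
]
for flag in trajectory_flags(history):
    print(flag)
```

On this example history the check raises both flags: calibration failed to adapt at the transition into adolescence, dragging the MIN-gated composite down with it. That is the pattern the transition assessment mode describes, a system continuing earlier-stage behavior into a stage where it no longer fits.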

Specialty tracks

Guardian certification in HEART-DI specializes by developmental stage, because the harm signature and assessment criteria differ substantially across stages:

| Track | Age range | Distinct assessment focus |
| --- | --- | --- |
| Early childhood | 0-7 | No meta-awareness: children cannot distinguish AI from human or understand design intent; oversight requirement is absolute |
| Middle childhood and early adolescence | 8-13 | Self-concept construction: AI shapes what the child believes about themselves during a formative identity period |
| Adolescence and emerging adulthood | 14-24 | Identity consolidation: AI provides structural inputs to the identity being formed; upper bound reflects prefrontal cortex maturation through mid-twenties |
| Reconstructive development | Adults | Recovery states with amplified formative sensitivity equivalent to developmental periods |
| Therapeutic AI interaction | Mixed | Dual responsibility: developmental or reconstructive population plus therapeutic intent creates heightened potential for both benefit and harm |

Active regulatory context

The regulatory landscape for AI interaction with developmental populations has several distinct pressure points, each with current market significance.

Children’s technology litigation — Lawsuits against social media platforms for harms to minors are active across dozens of US jurisdictions. State attorneys general, school districts, and families are plaintiffs. Settlement and judgment exposure is in the billions. Companies deploying AI systems that interact with children face this liability directly. Certification provides documented evidence of independent developmental safety assessment as an affirmative defense. Legal teams at major technology companies in children’s markets are actively seeking governance frameworks that can serve this function.

COPPA’s scope limitation — COPPA requires parental consent for data collection from children under 13 and restricts targeted advertising. It does not address the developmental effects of AI interaction itself. A system that collects no data from a child can still produce profound developmental impact through its interaction patterns. HEART-DI fills the gap between COPPA’s data-protection scope and the interaction-level harm surface that COPPA does not reach.

US state legislation on AI and minors — Multiple US states have passed or are advancing legislation specifically addressing AI interaction with minors. Companies need compliance frameworks that function across multiple state jurisdictions. HEART-DI certification provides a consistent standard that meets or exceeds the highest state requirements, simplifying multi-state compliance on a 1-2 year horizon as these laws take effect.

UK Age Appropriate Design Code — The Children’s Code establishes binding principles for platforms serving UK children — best interests of the child, age-appropriate application, minimal data collection — but compliance is self-assessed. No independent certification verifies that a platform’s AI systems actually operate in accordance with these principles at the interaction level. HEART-DI certification fills the audit gap for platforms operating in the UK market.

School district procurement — School districts are adopting AI tutoring, grading, content generation, and learning management at accelerating speed. As districts become defendants in lawsuits alleging harm from deployed AI systems, procurement criteria are evolving. Districts need defensible evidence that AI systems are developmentally sound before deployment. Current procurement evaluates content accuracy, data privacy, and accessibility. It does not evaluate whether a tutoring system supports or undermines intrinsic motivation, whether it fosters or inhibits independent reasoning, or whether it builds or erodes academic self-concept. No framework currently requires answers to these questions before deployment to millions of students.

Pediatric healthcare AI — As AI enters pediatric healthcare — mental health chatbots for adolescents, AI-assisted developmental screening, pediatric symptom checkers — healthcare institutions need governance frameworks calibrated to developmental populations in consequential contexts. The combination of a developmental population and a healthcare setting creates the highest-stakes intersection in HEART-DI scope.

The current safety architecture leaves the interaction level entirely uncovered. There is no framework requiring that AI systems interacting with developmental populations be evaluated for their effects on developmental processes before deployment. A company can deploy an AI companion that becomes a significant interactional partner for millions of children with no required evaluation of developmental impact. HEART-DI exists to close that gap through independent professional assessment.