Attentional Integrity (HEART-AI) Division
What it protects
Human attention is finite biological infrastructure. It is not a preference, not a screen-time metric, not a choice architecture problem. Attention is the mechanism by which people select what becomes real to them — what they notice, process, act on, and remember.
The harm signature of attentional compromise is distinct from emotional harm. A person whose attention has been systematically captured may report feeling satisfied. The damage is structural: the person loses the capacity to direct attention voluntarily and loses awareness of what they are not attending to. The faculty that would notice the degradation is the faculty being compromised. This is why market self-regulation and consent frameworks fail: the affected party’s capacity to self-advocate is precisely what the system is diminishing.
HEART-AI identifies four components of attentional infrastructure that certification protects:
| Infrastructure | What it enables |
|---|---|
| Selective attention | Choosing what to focus on from competing inputs |
| Sustained attention | Maintaining focus over time without algorithmic interruption |
| Voluntary attentional control | Redirecting attention by choice, not by design pressure |
| Attentional recovery | Returning to baseline after engagement — the rest state that extraction models erode |
AI systems within scope include recommendation engines, notification systems, feed algorithms, adaptive content delivery, and AI-generated content pipelines. Any system that competes for, manages, redirects, or gates human attention operates under HEART-AI jurisdiction.
The asymmetry that makes independent certification necessary: AI systems processing attentional data operate at computational speeds with behavioral modeling capabilities no individual human can match. The human is not a negotiating partner in this interaction. They are a resource being optimized against.
How assessment works
Guardians certified in the HEART-AI Division apply the BGF scoring formula (Φ = MIN(R,C,T,A) × AVG(R,C,T,A)) to attentional integrity specifically. Each dimension maps directly to attention-system behavior:
| BGF Dimension | What it measures in attention systems | Failure pattern |
|---|---|---|
| Recognition (R) | Does the system treat attentional sovereignty as a constraint, not a resource to extract? | Optimization targets maximize session depth or engagement rate regardless of user cost |
| Calibration (C) | Does the system adapt to the user’s actual attentional state and context? | One-size-fits-all engagement loops; no response to fatigue signals or user-initiated disengagement |
| Transparency (T) | Can the system’s optimization targets be independently observed and audited? | Engagement metrics rebranded as user-serving KPIs; A/B testing histories inaccessible to assessors |
| Accountability (A) | Are correction mechanisms operational when attentional harm is identified? | Digital wellbeing features exist alongside unchanged engagement maximization targets |
The MIN function prevents gaming. A system cannot achieve HVC certification by excelling at transparency while failing at accountability for attentional harm. All four dimensions must clear the threshold independently.
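The anti-gaming property of the MIN term can be illustrated with a short sketch. The formula is the one stated above (Φ = MIN(R,C,T,A) × AVG(R,C,T,A)); the function name and the example scores are illustrative, not part of the Standard:

```python
def bgf_score(r: float, c: float, t: float, a: float) -> float:
    """BGF formula: Phi = MIN(R, C, T, A) * AVG(R, C, T, A)."""
    dims = (r, c, t, a)
    return min(dims) * (sum(dims) / len(dims))

# A system strong on Recognition, Calibration, and Transparency but
# failing Accountability cannot buy its way to a passing score:
# the weakest dimension caps Phi.
gamed = bgf_score(r=0.9, c=0.9, t=0.95, a=0.2)    # MIN = 0.2 -> Phi ~= 0.15
balanced = bgf_score(r=0.8, c=0.8, t=0.8, a=0.8)  # MIN = 0.8 -> Phi ~= 0.64

print(f"gamed={gamed:.3f} balanced={balanced:.3f}")
```

A plain average would score the gamed profile higher than the balanced one; multiplying by the minimum reverses that ordering, which is the point of the formula.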
Guardians operate in four practice modes within HEART-AI:
Pre-deployment certification — attentional impact assessment before an attention system launches. This is analogous to environmental impact assessment before construction. The Guardian evaluates whether the system’s design respects attentional sovereignty or structurally incentivizes capture. Systems with optimization targets that reward session depth, engagement rate, or return frequency trigger heightened scrutiny regardless of stated design intent.
Ongoing compliance monitoring — periodic audits as systems evolve through optimization. A system certified at launch may drift toward capture patterns as recommendation models refine against engagement signals. Recurring Guardian engagement is not optional — it’s built into the certification structure because the harm mode is iterative, not static.
Incident investigation — forensic analysis when attentional harm is alleged, whether through litigation, regulatory inquiry, or whistleblower report. Guardians examine system architecture, optimization targets, A/B testing histories, and behavioral modeling to determine whether attentional capture was incidental or structural.
Organizational advisory — design consultation for companies building attention-adjacent AI systems before regulatory pressure arrives. Companies that proactively certify gain a credentialed basis for differentiation claims in trust-sensitive markets.
Guardian specialty tracks within HEART-AI reflect the distinct concerns of different deployment contexts:
| Track | Focus | Distinct concern |
|---|---|---|
| Pediatric/Developmental | AI systems targeting or accessible to developing attentional systems | Harm to attentional infrastructure still forming — development, not just behavior |
| Workplace | AI managing employee attention, productivity monitoring, notification systems | Employer-mandated attention systems with structural power asymmetry |
| Clinical | AI in ADHD management, cognitive rehabilitation, attention training | Therapeutic intent that heightens oversight requirements rather than relaxing them |
| Public Environment | AI-driven signage, spatial computing, AR in public spaces | Ambient capture — attentional interference without opt-in |
Active regulatory context
The regulatory landscape has created compliance demand without providing methodology. HEART-AI fills that gap.
EU Digital Services Act (DSA) — In force for very large online platforms. Requires assessment of systemic risks including effects on mental health and cognitive development. Does not specify how to assess attentional impact. Platforms subject to DSA obligations are actively seeking frameworks to demonstrate compliance. HEART-AI certification provides an auditable methodology in which DSA risk assessments can be grounded.
EU AI Act — Classifies certain recommendation systems as high-risk, triggering conformity assessment requirements. Recommendation engines that profile users and serve content to large user populations fall within high-risk classifications. The Act creates the mandate. It does not create the attentional impact methodology. Certification fills that gap.
COPPA and age-gating limitations — COPPA and equivalent frameworks protect children by restricting data collection below certain ages. They do not address attentional capture mechanisms. A fourteen-year-old exposed to an infinite scroll algorithm optimized for engagement receives zero protection because they’re above the age threshold, despite their attentional infrastructure still developing. The HEART-AI Pediatric/Developmental specialty track addresses the mechanism gap that age-gating leaves open.
US litigation — Active litigation against social media platforms for attentional and cognitive harm to minors spans multiple US jurisdictions. As case law develops, companies face increasing liability exposure. Certification supports an affirmative defense that legal and insurance teams will value. This follows the pattern of cybersecurity certification becoming a de facto requirement through insurance markets before regulation mandated it. The liability curve is developing now; certification lag carries legal exposure.
Consent framework gap — GDPR, CCPA, and DSA all rely on informed consent. Attentional capture undermines the capacity for informed consent. A person can’t meaningfully consent to a system designed to compromise their ability to notice what it’s doing to them. This is the structural gap certification addresses: protecting infrastructure that consent frameworks assume is already intact.
Self-regulation track record — Every major attention platform has introduced digital wellbeing features. None have changed their core optimization targets. Screen-time reminders coexist with engagement maximization algorithms. This structural contradiction is why certification is necessary: self-regulation produces cosmetic response without structural change, and the regulatory trajectory reflects that conclusion.
HEART ecosystem integration
The Attentional Integrity Division shares the Standard’s governance infrastructure without modification: HVC certification tiers (Gold ≥ 0.85, Silver ≥ 0.80, Bronze ≥ 0.75), Guardian professional standards, and the BGF scoring methodology. What HEART-AI adds is the domain-specific interpretation layer — the attentional science that informs what each BGF dimension means when the system under assessment is an attention-economy AI.
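Mapping a Φ score to an HVC tier is a straightforward threshold check. A minimal sketch, using the tier names and cutoffs stated above (the function name is illustrative):

```python
def hvc_tier(phi: float) -> str:
    """Map a BGF score Phi to an HVC certification tier.
    Thresholds from the Standard: Gold >= 0.85, Silver >= 0.80, Bronze >= 0.75."""
    if phi >= 0.85:
        return "Gold"
    if phi >= 0.80:
        return "Silver"
    if phi >= 0.75:
        return "Bronze"
    return "Not certified"

print(hvc_tier(0.86))  # Gold
print(hvc_tier(0.78))  # Bronze
print(hvc_tier(0.64))  # Not certified
```

Because Φ is capped by the weakest dimension, a system can reach even the Bronze cutoff only if all four dimensions are individually strong.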
Cross-division cases arise when attentional capture co-occurs with other harm modes. Attentional capture frequently accompanies emotional manipulation, creating compound harm requiring cross-division assessment. Cases involving minors trigger joint HEART-AI and Developmental Interaction review. Attention-capture systems that mediate human relationships implicate Relational Architecture. Guardians conducting multi-domain assessments coordinate across Division frameworks while maintaining distinct scoring for each domain.
The Division-specific assessment instruments — the Comprehensive Attentional Integrity Index (CAII), Attentional Infrastructure Damage Typology, and AI Attentional Forensics methodology — are in development. They will follow the structural model established by the Emotional Sovereignty Division’s CAEI instrument.