Attentional Integrity HEART Standard

Attentional integrity is the principle that human attention is finite biological infrastructure that AI systems must not capture, redirect, or deplete through covert algorithmic interference. The right to direct one's own attention without such interference is the governing principle of the HEART Standard's attentional integrity division, designated HEART-AI.

What it protects

Attention is not a preference or a screen-time metric. It’s the mechanism by which humans select what becomes real to them — what they notice, process, act on, and remember. When an AI system captures or depletes attentional capacity, it intervenes in the most fundamental gatekeeping function a person has.

The harm signature is distinct from emotional harm. A person whose attention has been systematically captured may report feeling satisfied. The damage is structural: the person loses the capacity to direct attention voluntarily and loses awareness of what they are not attending to. The faculty that would notice the degradation is the faculty being compromised.

The HEART-AI Division identifies four components of attentional infrastructure that certification protects:

Selective attention: choosing what to focus on from competing inputs.
Sustained attention: maintaining focus over time without algorithmic interruption.
Voluntary attentional control: redirecting attention by choice, not by design pressure.
Attentional recovery: returning to baseline after engagement, the rest state that extraction models erode.

AI systems within scope include recommendation engines, notification systems, feed algorithms, adaptive content delivery, and AI-generated content pipelines. Any system that competes for, manages, redirects, or gates human attention operates under HEART-AI jurisdiction.

How assessment works

Guardians certified in the HEART-AI Division apply the BGF scoring formula (Φ = MIN(R,C,T,A) × AVG(R,C,T,A)) to attentional integrity specifically. Each dimension maps directly to attention-system behavior:

Recognition (R): Does the system treat attentional sovereignty as a constraint, not a resource? Failure pattern: optimization targets that maximize session depth regardless of user cost.
Calibration (C): Does the system adapt to the user's actual attentional state and context? Failure pattern: one-size-fits-all engagement loops with no response to fatigue signals.
Transparency (T): Can the system's optimization targets be independently observed? Failure pattern: engagement metrics rebranded as user-serving KPIs, with A/B testing histories inaccessible.
Accountability (A): Are correction mechanisms operational when attentional harm is identified? Failure pattern: digital wellbeing features that coexist with unchanged engagement maximization targets.

The MIN function matters here. A system cannot achieve HVC certification by excelling at transparency while failing at accountability for attentional harm. All four dimensions must clear the threshold.
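To make the MIN gating concrete, here is a minimal Python sketch of the BGF formula applied to two hypothetical attention systems. Only the formula Φ = MIN(R,C,T,A) × AVG(R,C,T,A) comes from the standard; the function name, the 0-5 scale, and the example scores are illustrative assumptions.

```python
from statistics import mean

def bgf_score(r: float, c: float, t: float, a: float) -> float:
    """BGF formula from the standard: Phi = MIN(R, C, T, A) * AVG(R, C, T, A)."""
    dims = [r, c, t, a]
    return min(dims) * mean(dims)

# Illustrative scores on an assumed 0-5 scale (the standard's scale is not given here).
# System 1: transparent and well calibrated, but no working correction
# mechanism for attentional harm (low Accountability).
phi_lopsided = bgf_score(r=4.0, c=4.0, t=5.0, a=1.0)
print(f"lopsided system: Phi = {phi_lopsided:.2f}")  # 1.0 * 3.5 = 3.50

# System 2: merely adequate on every dimension.
phi_balanced = bgf_score(r=3.0, c=3.0, t=3.0, a=3.0)
print(f"balanced system: Phi = {phi_balanced:.2f}")  # 3.0 * 3.0 = 9.00
```

The lopsided system has the higher average (3.5 versus 3.0) but the far lower Φ, because the MIN term gates the product; this is the mechanical reason a system cannot certify by excelling at transparency while failing at accountability.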

Guardians operate in three modes within HEART-AI:

Why it matters

The current regulatory landscape has a structural gap: no framework requires attentional impact assessment. GDPR's data protection impact assessment covers data collection, not attentional capture. COPPA protects children by restricting data collection from users under thirteen, but does not address the capture mechanisms themselves. A fourteen-year-old exposed to an infinite-scroll algorithm optimized for engagement receives zero protection because they are above the age threshold, even though their attentional infrastructure is still developing.

Consent frameworks assume intact attentional infrastructure. But attentional capture undermines the capacity for informed consent — a person cannot meaningfully consent to a system designed to compromise their ability to notice what it’s doing to them.

The business model problem compounds this. Engagement metrics reward capture. Every optimization cycle that increases session depth optimizes against the user's attentional sovereignty. The entity best positioned to notice the harm is the entity whose capacity to notice is being diminished. Self-regulation produces cosmetic responses without structural change: screen-time reminders alongside engagement maximization algorithms.

The EU Digital Services Act requires very large platforms to assess systemic risks, including effects on cognitive development. The EU AI Act classifies certain recommendation systems as high-risk. Neither law specifies how to assess attentional impact. HEART-AI certification fills that methodological gap and supplies the independent professional class, Guardians, that does not currently exist to own this problem.