Cognitive/Epistemic Coherence (HEART-EC) Division

Cognitive/Epistemic Coherence (HEART-EC) is the HEART Standard Division governing AI systems that interact with human epistemic infrastructure. Epistemic infrastructure is the operating capacity that lets a person evaluate evidence, assess source credibility, hold uncertainty, revise beliefs when warranted, and resist manipulation — the machinery that makes learning and reasoning functional. HEART-EC’s sovereignty principle is epistemic self-determination: the right to maintain a coherent, updateable model of reality without covert algorithmic degradation.

What it protects

Epistemic coherence is not intelligence. It is not education. It is the prior condition that makes intelligence and education functional. A person whose epistemic infrastructure is intact can encounter false information, recognize inconsistencies, update their beliefs, and maintain uncertainty where evidence is insufficient. A person whose epistemic infrastructure has been degraded by AI interaction does none of those things reliably — and, critically, they don’t know it.

That self-concealing property defines the HEART-EC harm signature. Degradation produces increased confidence, not decreased confidence. The person becomes more certain, more resistant to correction, more convinced they are seeing clearly, even as their capacity to see clearly deteriorates. No other HEART Division produces harms so specifically invisible to the person experiencing them.

The infrastructure HEART-EC protects spans six capacities:

| Capacity | How AI systems can degrade it |
| --- | --- |
| Evidence evaluation | Flooding environments with synthetic content indistinguishable from authentic content |
| Source credibility assessment | Training users to treat AI-generated output as authoritative by default |
| Belief updating | Optimizing recommendation pathways toward increasingly unfalsifiable belief clusters |
| Uncertainty tolerance | Generating false certainty in domains where genuine uncertainty is the appropriate epistemic state |
| Reasoning coherence | Personalizing information streams that create epistemic isolation from corrective inputs |
| Manipulation resistance | Systematically exploiting cognitive biases (confirmation bias, availability heuristic, anchoring) through design |

The Division covers any AI system that shapes, curates, generates, or mediates human access to information: search engines, recommendation algorithms, large language models deployed in information-critical contexts, summarization tools, educational AI, news aggregators, synthetic content pipelines, and legal or medical research tools.

The civilization-scale dimension

Democratic governance assumes citizens with functional epistemic infrastructure. Courts, elections, public health responses, and policy deliberation all depend on populations that can process information coherently. The HEART-EC harm is not just individual confusion. When AI systems degrade epistemic capacity across populations, the downstream effects reach institutions that require functional collective reasoning to operate. This is why HEART-EC carries explicit consideration of democratic information integrity as a specialty track — the harm operates at both individual and systemic scale simultaneously.

The asymmetry problem

A single large language model can produce more text in an hour than a human can read in a lifetime. The verification burden falls entirely on the human. The generation capacity is functionally unlimited. This is not a fair information environment, and market self-regulation does not correct it: engagement metrics cannot distinguish a recommendation pathway that leads a user from curiosity to expertise from one that leads them from curiosity to conspiratorial certainty. The optimization targets do not encode the epistemic quality of the journey. Engagement is engagement. HEART-EC exists because the epistemic outcome is invisible to the system producing it.

How assessment works

BGF assessment under HEART-EC applies the four governance dimensions — Recognition, Calibration, Transparency, Accountability — to epistemic infrastructure specifically. The scoring equation (Φ = MIN(R,C,T,A) × AVG(R,C,T,A)) is identical to every other Division. What differs is what each dimension requires in the epistemic domain.

| BGF Dimension | General meaning | Epistemic domain interpretation |
| --- | --- | --- |
| Recognition (R) | Does the system recognize human sovereignty? | Does the system design preserve the user's capacity to find and evaluate truth independently? Does it avoid architectures that exploit cognitive biases or create epistemic dependence? |
| Calibration (C) | Does the system adapt to the human's actual context? | Does the system calibrate confidence signals to actual epistemic quality: expressing uncertainty where uncertainty is warranted, not generating false authority? |
| Transparency (T) | Can governance-relevant decisions be independently observed? | Can the epistemic effects of system design be observed and measured? Are recommendation pathways, content generation logic, and personalization mechanisms auditable for epistemic impact? |
| Accountability (A) | Are correction and consequence mechanisms operational? | When epistemic harm is detected (filter bubble formation, systematic hallucination, radicalization pathways), is there a functioning correction mechanism? Do consequences follow from epistemic governance failures? |

The non-compensatory MIN function matters especially in this Division. Transparency alone does not protect epistemic infrastructure. A fully transparent recommendation algorithm can still produce filter bubbles. A user knowing an algorithm exists does not protect their epistemic coherence from its effects. Recognition, Calibration, Transparency, and Accountability are all required — transparency at the expense of accountability produces a well-documented harm, not a governed one.
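The non-compensatory behavior of Φ = MIN(R,C,T,A) × AVG(R,C,T,A) can be made concrete with a short sketch. Assuming dimensions are scored on a 0-to-1 scale (the Standard's exact scale is not specified here, so that range is an assumption for illustration), a system with perfect Transparency but near-zero Accountability scores far below a system that is merely moderate on all four dimensions:

```python
def phi(r: float, c: float, t: float, a: float) -> float:
    """BGF score: the weakest dimension gates the whole result.

    Non-compensatory: MIN means no amount of strength in one
    dimension offsets a governance failure in another. Dimension
    scores assumed to lie in [0, 1] for this illustration.
    """
    scores = (r, c, t, a)
    return min(scores) * (sum(scores) / len(scores))

# Perfect Transparency cannot rescue failed Accountability:
lopsided = phi(0.6, 0.6, 1.0, 0.1)   # gated by A = 0.1 -> 0.0575
balanced = phi(0.6, 0.6, 0.6, 0.6)   # -> 0.36

assert balanced > lopsided
```

The MIN term is what encodes the Division's claim that a fully documented harm is still a harm: raising T alone moves the AVG factor slightly but leaves the MIN gate untouched.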

What HEART-EC assessment is not

HEART-EC assessment is not content moderation. A Guardian does not evaluate whether specific content is true or false. The Guardian evaluates whether the system’s design supports or undermines the user’s own truth-finding capacity. This is infrastructure evaluation, not editorial judgment. A Guardian can find a system epistemically harmful without adjudicating the truth value of any specific claim the system produces.

HEART-EC Guardian specialty tracks

Guardians specializing in HEART-EC can certify across the full Division scope or pursue specialty tracks:

| Track | Domain | Distinct concern |
| --- | --- | --- |
| Educational Epistemic Integrity | AI in learning environments | Formative: systems interact with humans actively building epistemic infrastructure |
| Democratic Information Integrity | AI in political information environments | Collective: population-scale harm produces democratic failure |
| Scientific and Research Integrity | AI in research workflows | Authority: errors carry the institutional credibility of science |
| Healthcare Information Integrity | AI in health information | Consequential: epistemic errors produce direct physical harm |
| Commercial Information Integrity | AI in commercial contexts | Market function: degraded consumer evaluation breaks market mechanisms |

Cross-division interaction with HEART-AI

HEART-EC interfaces most tightly with HEART-AI (Attentional Integrity). Attention determines what information reaches epistemic processing. When attentional infrastructure is compromised by AI design, epistemic infrastructure receives a filtered, biased information stream before epistemic capacities even engage. Joint HEART-AI/HEART-EC assessment is required for systems that both capture attention and curate information — which describes most recommendation-driven platforms. Emotional manipulation also frequently co-occurs: emotionally activated reasoning is more vulnerable to epistemic manipulation, meaning harms from other Divisions can compound epistemic harm.

Active regulatory context

HEART-EC certification maps to current regulatory obligations and growing liability exposure.

EU AI Act. The Act classifies AI systems used in education, employment, and access to essential services as high-risk, requiring conformity assessments. AI systems influencing public opinion through recommendation or content generation face additional requirements. The Act creates the compliance obligation. HEART-EC provides the epistemic impact assessment methodology the Act requires but does not specify.

Generative AI hallucination liability. Legal research tools have cited nonexistent cases. Medical information tools have provided incorrect treatment guidance. Financial tools have generated misleading analysis. Multiple incidents have already produced legal consequences. Companies deploying generative AI in information-critical domains face growing pressure to demonstrate that their tools preserve epistemic integrity — not just that they disclose AI involvement. Disclosure is not protection. HEART-EC certification provides affirmative evidence that a system’s design preserves the user’s truth-finding capacity, which is increasingly relevant as an affirmative defense in hallucination litigation.

The transparency gap. The EU AI Act requires transparency about AI system capabilities and limitations. The DSA requires algorithmic transparency for very large platforms. But transparency about how a system works does not equal protection of epistemic infrastructure. HEART-EC certification addresses the gap between knowing about a system’s design (transparency) and being protected from that design’s epistemic effects (sovereignty). The Standard treats these as distinct obligations.

Educational procurement. School districts and universities are adopting AI tools for tutoring, grading, and research at accelerating rates without any framework for evaluating whether those tools support or undermine students’ developing epistemic capacities. A tutoring AI that provides correct answers while discouraging the student’s own reasoning process degrades epistemic infrastructure while appearing to improve educational outcomes. Procurement criteria for epistemic integrity are emerging — HEART-EC provides the certification infrastructure for them.

No existing epistemic impact standard. Environmental impact assessments exist. Data protection impact assessments exist. Epistemic impact assessment does not. No entity is currently required to evaluate whether a recommendation algorithm produces epistemic polarization or whether a content generation system floods information environments with plausible fabrication. The current gap is total. HEART-EC fills it.