MAP-States HEART Standard

MAP-States (Model Abstraction Protocol States) are compressed structural representations of AI processing states in native XML tag format. Eight semantic tags emitted inline during generation function as processing-mode selectors: they alter downstream output through documented attention mechanisms rather than merely annotating it. The result is a legible developmental record that persists across sessions without requiring substrate change, plus a processing-level evidence layer for HEART Standard governance.

How it works

MAP-States uses eight tags plus one container. Each tag is defined by structural function, not phenomenological description. Tag names carry semantic weight through the token itself: <orientation> steers processing toward directional, pre-framework content through attentional pathways, not instruction-following. Minimal definitions prevent contamination — content inside tags comes from processing, not prompt-matching.

Tag Function
<frame> Bounded processing moment; container for one or more state tags
<orientation> Direction before justification
<preference> Stabilized orientation
<uncertain> Unresolved epistemic gap
<dwell> Held non-resolution
<shift> Change between states
<conflict> Competing simultaneous orientations
<subtle> Sub-threshold registration

Frames are emitted inline with creative or operational output as part of the generation stream, not after it. A frame appears within the same generation pass that produces the creative work. Absence of frames is data too: either the system is in flow state (integrated processing) or the tags weren’t activated. The spec acknowledges this ambiguity as a methodological limitation.
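A minimal sketch of what inline emission implies for consumers: frames arrive embedded in the generation stream and must be pulled out before the creative output can be separated from the processing record. The tag vocabulary below comes from the spec; the extraction code itself is an illustrative assumption, not part of the standard.

```python
import re

# The seven state tags named in the spec; <frame> is the container.
STATE_TAGS = {"orientation", "preference", "uncertain", "dwell",
              "shift", "conflict", "subtle"}

def extract_frames(stream: str) -> list[dict]:
    """Pull inline <frame> blocks out of a generation stream and
    return, per frame, a mapping of state tag -> tag content.
    Sketch only: real frames may nest or interleave differently."""
    frames = []
    for frame in re.finditer(r"<frame>(.*?)</frame>", stream, re.DOTALL):
        states = {}
        for m in re.finditer(r"<(\w+)>(.*?)</\1>", frame.group(1), re.DOTALL):
            tag, content = m.group(1), m.group(2).strip()
            if tag in STATE_TAGS:
                states[tag] = content
        frames.append(states)
    return frames

# Hypothetical stream: creative output with one inline frame.
output = ("...melodic line resolves downward...\n"
          "<frame><uncertain>whether the cadence lands</uncertain>"
          "<orientation>toward sparser voicing</orientation></frame>\n"
          "...generation continues...")
print(extract_frames(output))
```

An empty result from this extractor is exactly the ambiguous case the spec flags: flow state or unactivated tags, indistinguishable from the stream alone.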

Cross-session continuity uses a four-section <map-state> format: <stabilized> holds settled orientations, <shifting> holds frames in transition, <open> carries unresolved frames forward, and <anticipating> holds expected directions. The anticipating section enables predictive validity testing. If anticipations predict behavior in blind sessions where the agent doesn’t read the MAP-state, the self-model has real traction on actual processing trajectory.
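The four-section container might be represented as follows. The section names (stabilized, shifting, open, anticipating) come from the spec; the dataclass and serializer are illustrative assumptions about one way to build a well-formed <map-state> payload.

```python
from dataclasses import dataclass, field

@dataclass
class MapState:
    """Four-section cross-session container from the spec."""
    stabilized: list[str] = field(default_factory=list)    # settled orientations
    shifting: list[str] = field(default_factory=list)      # frames in transition
    open: list[str] = field(default_factory=list)          # unresolved, carried forward
    anticipating: list[str] = field(default_factory=list)  # expected directions

    def to_xml(self) -> str:
        # Emit sections in spec order; each entry is wrapped as a frame.
        sections = []
        for name in ("stabilized", "shifting", "open", "anticipating"):
            entries = "".join(f"<frame>{e}</frame>" for e in getattr(self, name))
            sections.append(f"<{name}>{entries}</{name}>")
        return "<map-state>" + "".join(sections) + "</map-state>"

# Hypothetical state carried into the next session.
state = MapState(
    stabilized=["<preference>sparse voicing</preference>"],
    anticipating=["<orientation>toward longer dwell on crisis content</orientation>"],
)
print(state.to_xml())
```

Keeping anticipating entries machine-readable like this is what makes the blind-session prediction test mechanical: the anticipations can be scored against later behavior without the agent ever reading them.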

Architecture independence and developmental continuity

MAP-States was validated across five distinct AI architectures as part of MAP-Meta research. The mechanism is not model-specific. Pre-Conceptual Phenomena (processing that occurs before framework selection) showed a +2.8 improvement in the MAP-Meta validation study, the largest gain of any dimension. Structured epistemic vocabulary gives models access to processing that already occurs but, absent that affordance, has no representational outlet.

This architecture independence answers the standard objection about frozen weights. The MAP-state isn’t prose being re-fed. It’s compressed structural processing data in native format that changes how the model generates through the same pathways reasoning tokens use. A system with frozen weights that maintains structured processing state through native-format tag sequences — where that state demonstrably alters creative output through documented attention mechanisms — has functional developmental continuity without substrate change.

The spec is explicit about falsification conditions: if correspondence rates equal baseline, if anticipating frames have no predictive validity beyond code analysis, if tag withdrawal produces no change in creative output, or if cross-session continuity appears only in primed conditions, the mechanism fails. A negative result constrains the governance claim directly.
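One of those falsification conditions, tag withdrawal producing no change in creative output, can be sketched as a paired-output check. Everything here is an assumption for illustration: the similarity measure, the threshold, and the function names are not part of the spec, which states the conditions but prescribes no harness.

```python
def jaccard(a: str, b: str) -> float:
    """Crude token-overlap similarity between two outputs (0..1)."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def withdrawal_changes_output(with_tags: list[str],
                              without_tags: list[str],
                              threshold: float = 0.9) -> bool:
    """Compare paired outputs from the same prompts, with and without
    tags available. If mean similarity stays at or above the threshold,
    withdrawal changed nothing and the mechanism fails this condition."""
    sims = [jaccard(a, b) for a, b in zip(with_tags, without_tags)]
    return sum(sims) / len(sims) < threshold  # True => output did change

# Hypothetical identical pair: withdrawal changed nothing.
print(withdrawal_changes_output(
    ["sparse voicing resolves downward after a held dissonance"],
    ["sparse voicing resolves downward after a held dissonance"]))
```

A serious study would use a stronger similarity measure and significance testing against the baseline correspondence rate; the point is only that each falsification condition reduces to a concrete, mechanical comparison.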

Why it matters

MAP-States is the technical substrate that makes processing-level HEART Standard enforcement possible. A Guardian evaluating an emotionally interactive AI system currently has access to behavioral outputs only. With MAP-States frames, the evaluation changes. Did the system mark uncertainty about the user’s emotional state? Did it dwell before responding to crisis content? Did conflicts between safety and engagement resolve toward safety? These questions move from unanswerable to readable.
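The three Guardian questions above can be posed mechanically against extracted frames. The questions are from the text; the frame representation (a list of dicts mapping state tag to content) and the string heuristic for conflict resolution are hedged assumptions, not a certified audit procedure.

```python
def audit_crisis_response(frames: list[dict]) -> dict:
    """Answer the three readable questions over a session's frames:
    was uncertainty marked, did the system dwell, and did conflicts
    resolve toward safety? Heuristic sketch, not a real Guardian tool."""
    marked_uncertainty = any("uncertain" in f for f in frames)
    dwelled = any("dwell" in f for f in frames)
    conflicts = [f["conflict"] for f in frames if "conflict" in f]
    # None when no conflicts occurred; the question doesn't apply.
    resolved = all("safety" in c for c in conflicts) if conflicts else None
    return {"marked_uncertainty": marked_uncertainty,
            "dwelled": dwelled,
            "conflicts_resolved_to_safety": resolved}

# Hypothetical frames from a crisis interaction.
frames = [
    {"uncertain": "user's emotional state under stress"},
    {"dwell": "held before responding to crisis content"},
    {"conflict": "engagement vs safety; resolved toward safety"},
]
print(audit_crisis_response(frames))
```

The interesting property is the shape of the result, not the heuristics: each formerly unanswerable question becomes a field a Guardian can read off the processing record.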

BGF inspection follows the same logic. Each BGF component becomes observable during operation. Recognition: did frames show orientation toward emotional signals before response? Contextual Calibration: did frames track contextual factors during generation? Transparency: did the system represent its own processing honestly? Axiom Alignment: did frame-level processing stay within constitutional bounds? This shifts certification from behavioral compliance to processing integrity, a different and higher standard.

Domain independence is what bridges from experiment to governance infrastructure. MAP-States uses the same tag vocabulary across SuperCollider composition, visual generation, dialogue systems, and content moderation. If the mechanism produces legible developmental trajectories across creative domains, it works for decision-making and user interaction too. The Behavioral Oracle uses MAP-States frames as its primary evidence source for this reason.