AI Act

Article 55

Article 55 of Regulation (EU) 2024/1689 sets out four cumulative obligations for providers of general-purpose AI models with systemic risk (GPAI models with systemic risk, in the regulation's nomenclature). In plain terms, the obligations are: model evaluation in accordance with standardised protocols and tools reflecting the state of the art, including conducting and documenting adversarial testing; assessment and mitigation of possible systemic risks at Union level; tracking and reporting of serious incidents to the AI Office and, as appropriate, to national competent authorities; and an adequate level of cybersecurity protection. Under Article 51(2), a GPAI model is presumed to present systemic risk when its cumulative training compute exceeds 10²⁵ FLOPs, a threshold that today captures GPT-4, Claude Opus, Gemini Ultra, and their equivalents.
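To make the threshold concrete, here is a minimal sketch, in Python, of the arithmetic behind the presumption. The 10²⁵ FLOP figure comes from the regulation; the C ≈ 6ND approximation (roughly six FLOPs per parameter per training token for a dense transformer) is a widely used estimation heuristic, not something the Act prescribes, and the parameter and token counts in the example are illustrative rather than claims about any named system.

```python
# Sketch only: Article 51(2) sets the 10^25 FLOP presumption threshold but
# does not prescribe an estimation method. The C ~= 6 * N * D heuristic
# used here is a common approximation for dense transformers, assumed for
# illustration.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Article 51(2), Regulation (EU) 2024/1689


def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute: ~6 FLOPs per parameter per token
    (forward plus backward pass, dense transformer)."""
    return 6.0 * n_params * n_tokens


def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if estimated cumulative training compute exceeds the
    Article 51(2) presumption threshold."""
    return estimated_training_flops(n_params, n_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS


if __name__ == "__main__":
    # Hypothetical figures: 70B params on 15T tokens lands at ~6.3e24 FLOPs
    # (below the threshold); 1.8T params on 13T tokens at ~1.4e26 (above it).
    for n, d in [(70e9, 15e12), (1.8e12, 13e12)]:
        c = estimated_training_flops(n, d)
        print(f"N={n:.2e}, D={d:.2e} -> C={c:.2e} FLOPs, "
              f"presumed systemic risk: {presumed_systemic_risk(n, d)}")
```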

The article matters more to this blog's thesis than Article 15 does, and for a precise reason. Paragraph 1(a) anchors the entire regime in documented adversarial testing, and GPAI models are exactly the ones where the 2024-2025 empirical research has exposed that mechanism's limits: alignment faking in Claude 3 Opus, persistence of latent behaviours through safety training in backdoored models, and a demonstrated capacity to modulate performance strategically upon detecting an evaluation context. The GPAI tier is where the gap between empirical evidence and formal compliance is sharpest.

The article has applied since 2 August 2025. Sleeper Agents develops the central thesis. The Faking Machine introduces the empirical evidence that grounds it, in particular the Alignment Faking paper of December 2024. Constitution Without a State examines how Article 55 interacts with the Article 56 Code of Practice and with Anthropic's Constitution as a parallel normative object.
