Canada just published a list of 409 AI systems used across its federal government—and researchers found the transparency was mostly theatre. A new analysis reveals how bureaucratic "AI registers" can create the illusion of accountability while systematically obscuring who's actually responsible when algorithms make decisions about your life.
The Transparency Trick
In November 2025, Canada became one of the first nations to operationalize a Federal AI Register—a public database of every AI system deployed across government departments. On paper, this sounds like a win for accountability. In practice, researchers at multiple Canadian universities found something darker.
Their analysis using the ADMAPS framework uncovered a sharp gap between rhetoric and reality. While 86% of systems are deployed internally for "efficiency," the Register systematically omits any account of the human discretion, operator training requirements, and uncertainty management needed to actually run these systems. It describes AI as "reliable tooling" rather than "contestable decision-making."
What Gets Hidden in Plain Sight
The researchers argue these registers aren't neutral mirrors—they're "instruments of ontological design" that actively configure what counts as accountable. By privileging technical descriptions over sociotechnical context, Canada's register makes AI systems appear more autonomous and less human-dependent than they actually are.
Translation: when something goes wrong, it's harder to trace responsibility back to a specific person or training failure. The register offers "visibility without contestability." You can see that an AI system exists, but not how to challenge its decisions or who trained the humans operating it.
What This Means for Learners
If you're building AI literacy or working in governance, this matters. Transparency isn't just about publishing a list—it's about designing systems that reveal lines of accountability. When evaluating any AI deployment (corporate or governmental), ask: Does this documentation show who's responsible when the model fails? Does it explain the human judgment involved? Or does it just describe the technology?
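To make those questions concrete, here is a minimal sketch of what auditing a register entry against them might look like in code. The field names, the `AIDocEntry` structure, and the scoring are illustrative assumptions for this post only; they are not the ADMAPS framework itself, just the three questions above turned into checks.

```python
from dataclasses import dataclass

@dataclass
class AIDocEntry:
    """Hypothetical register entry. Fields mirror the three questions
    above; they are assumptions for illustration, not an official schema."""
    name: str
    describes_technology: bool       # does it describe what the system does?
    names_accountable_party: bool    # who is responsible when the model fails?
    explains_human_judgment: bool    # where does human discretion enter?
    explains_contestation: bool      # how can a decision be challenged?

def audit(entry: AIDocEntry) -> list[str]:
    """Return the accountability gaps in a single register entry."""
    gaps = []
    if not entry.names_accountable_party:
        gaps.append("no accountable party named")
    if not entry.explains_human_judgment:
        gaps.append("human judgment not documented")
    if not entry.explains_contestation:
        gaps.append("no route to contest decisions")
    # An entry that only describes the technology is "visibility without
    # contestability": you can see the system exists, but not challenge it.
    return gaps

# A typical "transparency theatre" entry: technically described, nothing else.
entry = AIDocEntry(
    name="triage-classifier",  # hypothetical system name
    describes_technology=True,
    names_accountable_party=False,
    explains_human_judgment=False,
    explains_contestation=False,
)
print(audit(entry))
```

An entry that describes only the technology fails all three accountability checks, which is exactly the pattern the researchers describe: the documentation makes the system visible without making anyone answerable for it.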
For professionals entering AI policy or ethics roles, understanding the difference between "transparency theatre" and actual accountability infrastructure is now table stakes. This research provides a framework (ADMAPS) for auditing whether transparency efforts are genuine or performative.