
AI Governance & Data Boundary Checklist

Where Are the Boundaries for Your AI?

An 18-point governance scorecard + data boundary checklist from Engineering Reliable AI Agents & Workflows

The Problem This Diagnostic Solves

Your AI governance framework probably exists in a PDF somewhere. The problem? PDFs don't control software.

Teams create impressive governance documents—risk matrices, approval workflows, compliance checklists—that have zero connection to what the AI actually does. The policy says "human review for high-risk decisions," but the system has no technical mechanism to flag decisions as high-risk.

This gap undermines AI projects in predictable ways:

  • Audit failures: Auditors arrive asking questions you can't answer ("What decisions did your AI make last Tuesday?")
  • Preventable damage: Systems cause harm that could have been stopped with basic controls
  • Data leakage: Confidential data routed through public APIs without anyone noticing
  • No kill switch: No way to instantly disable AI processing when something goes wrong

The Governance Triangle Scorecard exposes whether your governance exists in reality or just on paper. The Data Boundary Checklist ensures you haven't accidentally routed confidential data through public APIs. Together, they take 15 minutes and reveal gaps that would otherwise surface during an incident or audit.

How the Governance & Data Boundary Tools Work

This resource includes two complementary assessments:

Tool 1

Governance Triangle Scorecard

Evaluates 9 criteria across three control areas (Observability, Boundaries, Reversibility). Each criterion is scored 0-2 based on capability maturity.

Tool 2

Data & Boundary Checklist

A binary pass/fail assessment. Classify your data sensitivity level, then verify three critical boundaries are in place.

Your Governance Triangle score places you in one of three zones:

  • Document-Only: Governance exists on paper, not in code
  • Partial Controls: Some technical controls in place, but gaps remain
  • Enforced Governance: Controls implemented and automated

Complete both tools before any AI capability goes live. The scorecard identifies where to invest; the checklist identifies what blocks deployment.

The Assessment Areas

Part 1: Observability

"Can you see what the AI decided?" — Observability measures whether you can reconstruct and explain AI decisions after the fact. This isn't about logging for debugging—it's about governance visibility. Can you answer an auditor's question about a specific decision from a specific time?

Key Question:

☐ Can you list every significant decision your AI will make?

Most teams log model inputs and outputs. Few can enumerate the discrete decision types their system makes. If you can't list them, you can't govern them.
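A minimal sketch of what "enumerable decisions" looks like in practice: each decision type the system can make is named up front, and every occurrence is written as a structured audit record. The decision types, field names, and the invoice-processing scenario are illustrative assumptions, not taken from the book.

```python
import json
import time
from enum import Enum

# Hypothetical decision types for an invoice-processing agent.
# The point: every significant decision the AI can make has a name.
class DecisionType(Enum):
    APPROVE_INVOICE = "approve_invoice"
    FLAG_FOR_REVIEW = "flag_for_review"
    REJECT_INVOICE = "reject_invoice"

def log_decision(decision: DecisionType, subject_id: str,
                 model_output: str, confidence: float) -> dict:
    """Append-only audit record: answers 'what did the AI decide
    about X at time T?' -- governance visibility, not debug logging."""
    record = {
        "timestamp": time.time(),
        "decision_type": decision.value,
        "subject_id": subject_id,
        "model_output": model_output,
        "confidence": confidence,
    }
    print(json.dumps(record))  # in production: append to an audit store
    return record

record = log_decision(DecisionType.FLAG_FOR_REVIEW, "inv-1042",
                      "Amount exceeds historical average", 0.62)
```

With records shaped like this, "What decisions did your AI make last Tuesday?" becomes a query over `decision_type` and `timestamp` rather than a forensic exercise.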

Part 2: Boundaries

"What can't the AI do?" — Boundaries are technical constraints that prevent unauthorized actions regardless of what the model outputs. This includes value limits, confidence thresholds, and user permission inheritance.

Key Question:

☐ Are resource and user-based access restrictions implemented?

The most dangerous boundary failure: AI acting as a "super-user" that bypasses role-based access controls. If your AI can see data the requesting user couldn't access manually, you have a boundary gap.
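One way to close that gap is to make every AI data access run under the requesting user's permissions rather than a privileged service account. The sketch below assumes a simple role-based document store; the names (`DOCUMENTS`, `fetch_for_ai`) are illustrative.

```python
# User-context propagation sketch: the AI layer reads data *as the
# requesting user*, never as a super-user service account.
DOCUMENTS = {
    "doc-1": {"body": "Q3 forecast", "allowed_roles": {"finance"}},
    "doc-2": {"body": "Public FAQ", "allowed_roles": {"finance", "support"}},
}

def fetch_for_ai(doc_id: str, user_roles: set) -> str:
    doc = DOCUMENTS[doc_id]
    # The boundary check: if the user couldn't open this document
    # manually, the AI must not see it either.
    if not (doc["allowed_roles"] & user_roles):
        raise PermissionError(f"user lacks access to {doc_id}")
    return doc["body"]

assert fetch_for_ai("doc-2", {"support"}) == "Public FAQ"
try:
    fetch_for_ai("doc-1", {"support"})
except PermissionError:
    print("blocked: the AI inherits the user's permissions")
```

The design choice that matters is where the check lives: in the retrieval path, before any content reaches the model, so no prompt can talk the system into bypassing it.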

Part 3: Reversibility

"Can you undo what the AI did?" — Reversibility ensures you can recover when—not if—something goes wrong. Every AI action needs a reversal strategy defined at design time, not discovered during an incident.

Key Question:

☐ Can you quarantine questionable outputs?

Not all actions can be reversed or compensated. For these, quarantine patterns isolate the effect (flagging a report as "pending review") until human verification. If you can't reverse, compensate, or quarantine, you shouldn't automate.
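A quarantine pattern can be sketched in a few lines: outputs are born in a "pending review" state and only a human action releases them. The class and field names below are illustrative assumptions.

```python
from dataclasses import dataclass

# Quarantine sketch for outputs that can't be reversed or compensated:
# the effect is isolated behind a status flag until a human verifies it.
@dataclass
class GeneratedReport:
    content: str
    status: str = "pending_review"   # quarantined by default

    def release(self, reviewer: str) -> None:
        # Only an explicit human action moves the output into circulation.
        self.status = f"approved_by:{reviewer}"

report = GeneratedReport("Quarterly summary draft")
assert report.status == "pending_review"  # invisible to downstream consumers
report.release("j.doe")
```

Downstream consumers filter on `status`, so a questionable output never has an effect to undo in the first place.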

Part 4: Data Classification

"Where is your data allowed to go?" — Data classification isn't optional—it dictates your entire architecture. The sensitivity level of data your AI processes determines which deployment models are permissible.

Key Question:

☐ Level 3: Confidential/PII — Allowed: Private Cloud (VPC Peered only) or On-Premise SLMs. Forbidden: Public APIs.

Many teams default to public APIs for convenience, then discover months later they've been routing customer PII through systems they don't control. Classification must happen before model selection, not after.
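Classification-driven routing can be enforced in code so the check happens before any model call. The sketch below follows the checklist's four-level scheme (Level 3 confidential/PII barred from public APIs, Level 4 restricted data barred from LLMs entirely); the function and target names are illustrative.

```python
# Routing sketch: the data's classification level decides which
# deployment targets are even reachable, before model selection.
ALLOWED_TARGETS = {
    1: {"public_api", "private_cloud", "on_prem"},
    2: {"public_api", "private_cloud", "on_prem"},
    3: {"private_cloud", "on_prem"},   # Confidential/PII: no public APIs
    4: set(),                          # Restricted: no LLM processing at all
}

def route(payload: str, level: int, target: str) -> str:
    if target not in ALLOWED_TARGETS[level]:
        raise ValueError(f"Level {level} data may not go to {target}")
    return f"sent to {target}"  # placeholder for the actual dispatch
```

Because the table is data, an auditor can read the policy directly from the code, and a misrouted call fails loudly instead of leaking silently.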

Part 5: Critical Boundaries

"Are the non-negotiables in place?" — Three boundaries are binary requirements—present or not. These aren't "nice to haves" that improve your score. They're deployment blockers.

Key Question:

☐ Does the code check an environment variable (e.g., AI_ENABLED=false) before every model call to allow instant disablement?

The global abort switch is embarrassingly simple to implement and catastrophically important when you need it. One environment variable, checked before every model call. If you can't flip a switch and instantly stop all AI processing, you're not ready for production.
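The whole mechanism fits in a few lines. This sketch assumes a manual fallback queue exists; `call_model` is a placeholder standing in for the real model call.

```python
import os

def call_model(ticket: str) -> str:
    # Placeholder for the actual model/API call.
    return "model_label"

def ai_enabled() -> bool:
    # Global abort switch: one environment variable, checked before
    # every model call. Defaults to disabled (fail closed).
    return os.environ.get("AI_ENABLED", "false").lower() == "true"

def classify(ticket: str) -> str:
    if not ai_enabled():
        return "routed_to_manual_queue"   # graceful fallback, not an error
    return call_model(ticket)

os.environ["AI_ENABLED"] = "false"
assert classify("refund request") == "routed_to_manual_queue"
```

The fallback path matters as much as the switch: flipping `AI_ENABLED=false` should degrade the system to manual handling, not crash it.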

What Your Score Tells You

The Governance Triangle Scorecard produces a score from 0-18. Your score places you in one of three governance maturity zones, each with specific guidance on where to focus.

The Data Boundary Checklist is pass/fail. Any "No" on the three critical boundaries—abort switch, user context propagation, hard value limits—blocks production deployment until resolved.
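A hard value limit, the third critical boundary, can be sketched as a ceiling enforced outside the model, so no output can trigger an action above the cap. The refund scenario, limit value, and function names here are illustrative assumptions.

```python
# Hard value limit sketch: the cap lives in code, not in the prompt,
# so the boundary holds regardless of what the model outputs.
MAX_AUTO_REFUND = 100.00  # anything above this requires a human

def execute_refund(amount: float, model_approved: bool) -> str:
    if amount > MAX_AUTO_REFUND:
        return "escalated_to_human"   # the boundary wins over the model
    if not model_approved:
        return "declined"
    return "refunded"

assert execute_refund(250.00, model_approved=True) == "escalated_to_human"
```

Note the ordering: the limit is checked before the model's verdict is even consulted, so a confidently wrong model can never exceed the cap.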

The complete assessment includes:

  • Score interpretation for each zone
  • Priority recommendations based on your lowest-scoring area
  • Vendor due diligence checklist for external API usage
  • Implementation patterns for each boundary type

Who Should Use This Diagnostic

  • Engineering Leads: Preparing for compliance review or audit
  • Security Teams: Evaluating AI deployment proposals
  • Architects: Designing AI governance into new systems
  • Product Managers: Assessing production readiness
  • Compliance Officers: Translating policy requirements into technical controls

Team exercise:

Run this assessment with engineering, security, and compliance stakeholders together. Disagreements on scores reveal misalignments that would otherwise surface during an incident.

Frequently Asked Questions

What is an AI governance framework?
An AI governance framework is a structured set of technical controls—not just policies—that ensure AI systems are observable, bounded, and reversible. The three pillars are observability (can you see what the AI decided?), boundaries (can you prevent unauthorized actions?), and reversibility (can you undo mistakes?). Effective governance is enforced in code, not documents.
Why do governance policies fail without technical implementation?
Governance documents that exist only in PDFs don't control software behavior. A policy stating "human review required for high-risk decisions" means nothing if the system has no technical hook to identify high-risk decisions. Controls must be implemented as code—feature flags, access controls, audit logs—not just written procedures.
What data should never be processed by AI systems?
Level 4 "Restricted" data—including trade secrets, passwords, API keys, and certain regulatory data—should not be processed by LLMs at all. For confidential data like customer PII (Level 3), only private cloud deployments or on-premise models are appropriate. Public AI APIs should never receive data above Level 2 without explicit sanitization.
What is the difference between AI governance and AI ethics?
AI ethics addresses philosophical questions about what AI should do. AI governance addresses the practical question of how to enforce constraints on what AI can do. Ethics produces principles; governance produces controls. A well-governed AI system has technical mechanisms to prevent unauthorized actions regardless of model behavior—it doesn't rely on the model being 'ethical.'
How do I implement a global kill switch for AI systems?
The simplest approach is an environment variable check before every AI model call. When AI_ENABLED is set to false, the system bypasses AI processing and falls back to manual handling. This isn't sophisticated, but it provides immediate control when something goes wrong. More advanced implementations include feature flags with instant propagation.

Download the Complete Governance Assessment

Get the full Governance Triangle Scorecard and Data Boundary Checklist.

What you get:

  • All 9 governance criteria with detailed scoring guidance
  • Complete data classification framework (4 levels)
  • 3 critical boundary checks with implementation patterns
  • Vendor due diligence checklist for external APIs
  • Score interpretation and zone recommendations
  • Printable worksheet format with notes section

Related Diagnostics

From the Book

This diagnostic is one of seven assessment tools in Engineering Reliable AI Agents & Workflows. The book explores the three governance controls in depth, including implementation patterns for event sourcing, circuit breakers, and the access control architectures that prevent AI from becoming a "super-user" that bypasses your permission model.

Learn more about the book →