AI Governance, Policy & Regulation, where the technology meets the law.
By 2026, AI governance has graduated from think-tank position papers into binding law across most major jurisdictions. The EU AI Act entered force in 2024 with phased applicability through 2027, establishing a risk-tiered framework with substantial fines (up to 7% of global revenue for prohibited practices). The United States has cycled through successive executive orders (Biden's 2023 order, then Trump's 2025 rescission and replacement) and layered a state-level regulatory patchwork on top (the Colorado AI Act, NYC Local Law 144, California's SB-1047 and its successors). The United Kingdom, Canada, Japan, China, Brazil, India, and the GCC states each have meaningfully different regulatory shapes. Standards bodies (NIST, ISO/IEC, IEEE) have produced concrete frameworks (NIST AI RMF, ISO/IEC 23894, the IEEE 7000-series) that increasingly serve as the technical reference for compliance. Liability is being reshaped — the EU's product-liability directive treats AI as a product, US case law is evolving rapidly, and sectoral liability rules (healthcare malpractice, financial fair-lending) overlay the AI-specific frame. International coordination proceeds through the Bletchley/Seoul/Paris AI Safety Summit series, the G7 Hiroshima Process, the OECD AI Principles, and the AI Safety Institute network. This chapter develops the operational landscape with the depth a working AI policy lead, model-risk officer, GC, or product owner needs.
Prerequisites & orientation
This chapter assumes the AI safety material of Ch 01–02, the explainability and fairness material of Ch 05–06, and the privacy material of Ch 07. Familiarity with basic legal concepts (statute vs regulation vs case law, jurisdictional reach, soft law vs hard law) helps but is not required — the chapter introduces what it needs. The chapter is written for AI policy leads, model-risk officers, general counsel, regulatory-affairs staff, product managers shipping into regulated jurisdictions, and ML engineers who must understand the constraints their work operates under.
Three threads run through the chapter. The first is the fragmentation of the regulatory landscape: there is no single "AI law" — there are dozens, layered by jurisdiction, sector, and risk tier, and a deployed system has to navigate all of them simultaneously. The second is the technical-legal interface: regulations name technical concepts (transparency, fairness, robustness, privacy) and depend on technical methodology to operationalise them. The third is the governance-as-engineering shift: mature programs treat regulatory compliance as a product feature with testable acceptance criteria, not as a separate paperwork stream. The chapter develops each in turn.
Why Governance Closes the Stack
The previous chapters of Part XVIII developed the technical disciplines of safety (Ch 01–02), robustness (Ch 03), interpretability (Ch 04), explainability (Ch 05), fairness (Ch 06), and privacy (Ch 07). Each is a tool. Governance is the system that decides which tools must be used, in what combination, by whom, with what documentation, and with what consequences for failure. Without governance, the technical machinery is optional. With it, the technical machinery becomes the operational implementation of legally-binding requirements.
The technical-legal interface
Modern AI regulations are largely written in technical-legal hybrid language. The EU AI Act references "robustness", "transparency", "human oversight", "data governance", and "accuracy" as legal requirements; what each of these means operationally is determined by harmonised technical standards, by guidance from the AI Office, by case law as it accumulates, and by sectoral interpretation. The same is true of the NIST AI Risk Management Framework, the UK AI principles, and most state-level US laws. The practical consequence: a working ML team has to translate legal requirements into engineering acceptance criteria, and back into compliance documentation. Each direction of translation is a discipline that did not exist as a profession five years ago and is now being staffed in earnest.
Risk-tiering as the dominant frame
Most major regulatory regimes converge on risk-tiering: the same model is regulated differently depending on its application. A loan-approval model is high-risk; a song-recommendation model is not. A medical-diagnosis model is high-risk; an internal Slackbot is not. The EU AI Act formalises this with explicit tiers (prohibited / high-risk / limited-risk / minimal-risk). The US sectoral approach achieves similar effect through agency rulemaking (FDA on medical AI, CFPB on credit, NHTSA on automotive). The operational implication: governance work begins with classifying the deployment into the right tier, because the obligations follow.
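To make the tier-first workflow concrete, the sketch below shows one way a governance team might record a deployment's provisional classification before any other obligation is assessed. The tier names follow the EU AI Act's four-level structure; the `Deployment` fields and the domain heuristics are illustrative assumptions, not a substitute for legal classification.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"      # Article 5 practices: may not be deployed at all
    HIGH_RISK = "high_risk"        # Annex III uses: full conformity obligations
    LIMITED_RISK = "limited_risk"  # transparency obligations only
    MINIMAL_RISK = "minimal_risk"  # no Act-specific obligations

# Illustrative (non-exhaustive) mapping from application domain to tier,
# loosely following the Annex III categories named in the text.
HIGH_RISK_DOMAINS = {
    "employment", "education", "credit", "essential_services",
    "law_enforcement", "migration", "justice", "democratic_processes",
}

@dataclass
class Deployment:
    name: str
    domain: str                    # e.g. "credit", "entertainment"
    is_prohibited_practice: bool   # e.g. social scoring, manipulative techniques
    interacts_with_humans: bool    # chatbots, deepfakes -> transparency duties

def classify(d: Deployment) -> RiskTier:
    """Assign a provisional risk tier; legal review still signs off on the result."""
    if d.is_prohibited_practice:
        return RiskTier.PROHIBITED
    if d.domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH_RISK
    if d.interacts_with_humans:
        return RiskTier.LIMITED_RISK
    return RiskTier.MINIMAL_RISK

# A loan-approval model lands in the high-risk tier; a song recommender does not.
print(classify(Deployment("loan-approval", "credit", False, False)).value)
print(classify(Deployment("song-recommender", "entertainment", False, False)).value)
```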
Accountability and the human in the loop
A consistent feature across regimes is the requirement of accountable humans: a deployment cannot be a "the algorithm did it" defence. Someone is the accountable owner. Someone signs the impact assessment. Someone receives the regulator's letter when something goes wrong. Where Ch 06 introduced the accountability chain at the engineering level, this chapter expands it to the organisational level: model owner, model-risk function, executive sponsor, board oversight, and external regulator — each with documented responsibilities and escalation paths.
The regulatory stack has three pillars: what is legally required, how compliance is operationalised through standards, and what programs firms run internally. The relationship is causal: hard-law obligations drive standards-based implementation, which in turn determines internal program design. The deployment context determines which combination matters.
Governance as engineering
The pivotal shift between 2018-style "AI ethics" and 2026-style "AI governance" is the transition from values statements to engineering deliverables. Mature programs treat each regulatory requirement as a testable acceptance criterion with an owner, a documented control, and an audit trail. The model risk management discipline (largely imported from banking; see SR 11-7 in the US) provides the structural template; the AI-specific extensions are the technical-domain expertise from the prior chapters of Part XVIII. The chapter develops these mechanisms in §8.
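One minimal way to express "requirement as testable acceptance criterion" in code is to bind each obligation to an owner, an executable check, and an append-only audit trail. The class, field names, and the `run_adversarial_suite` helper below are hypothetical; the point is the shape, not a specific schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable

@dataclass
class GovernanceControl:
    requirement: str                    # e.g. "EU AI Act Art. 15 - accuracy & robustness"
    owner: str                          # accountable human, not a team alias
    check: Callable[[], bool]           # executable acceptance test
    evidence: list[dict] = field(default_factory=list)  # append-only audit trail

    def run(self) -> bool:
        passed = self.check()
        self.evidence.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "requirement": self.requirement,
            "owner": self.owner,
            "passed": passed,
        })
        return passed

# Example: robustness gated like any other release criterion.
# run_adversarial_suite is a hypothetical evaluation helper, not a real API.
robustness = GovernanceControl(
    requirement="Adversarial accuracy >= 0.90 on the agreed perturbation suite",
    owner="model-risk-officer@example.com",
    check=lambda: run_adversarial_suite() >= 0.90,
)
```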
The EU AI Act
The EU AI Act (Regulation 2024/1689) is the most comprehensive and consequential AI law to date. It establishes a risk-tiered framework, sets requirements for high-risk systems, regulates general-purpose AI models separately, and provides for substantial fines. Because the EU market is large and the Act has extraterritorial reach (it applies to providers placing systems on the EU market regardless of where they're based), the Act has become the de facto global compliance baseline for many global firms.
Risk tiers
The Act classifies AI systems into four tiers. Prohibited practices (Article 5) include social scoring, manipulative subliminal techniques, exploitation of vulnerabilities, real-time remote biometric identification in public spaces (with narrow law-enforcement exceptions), emotion recognition in workplaces and schools, predictive policing based purely on profiling, and biometric categorisation by sensitive attributes. High-risk systems (Annex III) include those used in employment, education, credit, essential services, law enforcement, migration, justice administration, and democratic processes — plus AI components of regulated products (medical devices, vehicles, machinery). Limited-risk systems (chatbots, deepfakes, emotion recognition) require transparency disclosures. Minimal-risk systems (most consumer AI) face no Act-specific obligations.
High-risk obligations
For high-risk systems, the Act requires: a documented risk-management system; data-governance practices for training data; technical documentation; record-keeping (logging) of operation; transparency to deployers; human oversight mechanisms; appropriate accuracy, robustness, and cybersecurity; a quality-management system at the provider; conformity assessment before placing on the market; CE marking; post-market monitoring; and incident reporting to authorities. Each of these maps to specific engineering deliverables — the risk-management system maps to model-risk processes (§8); transparency maps to model cards (Ch 06); robustness maps to the techniques of Ch 03; data governance maps to Ch 07. The Act's harmonised standards (in development by CEN-CENELEC) will fill in the technical specifics.
General-purpose AI (GPAI) and systemic-risk models
The Act regulates general-purpose AI models separately. All GPAI providers must publish a sufficiently detailed summary of training-data content, comply with EU copyright law, and provide technical documentation. Systemic-risk GPAI (defined by training-compute thresholds — currently above 10^25 FLOPs — and potentially also by capability) faces additional obligations: model evaluations, adversarial testing, incident reporting, cybersecurity, and energy-use reporting. The 2025 GPAI Code of Practice, signed by most major foundation-model providers, fills in the operational details. The 10^25 FLOP threshold is contentious; it is calibrated against early-2024 frontier models and may not age well as efficiency improves.
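To see why the threshold is sensitive to efficiency gains, a rough back-of-the-envelope check is enough. The sketch below uses the common approximation of roughly 6 x parameters x tokens for dense-transformer training FLOPs; the example model sizes are invented, and real classification under the Act would use the provider's actual measured compute.

```python
# Rough training-compute estimate for a dense transformer: FLOPs ~= 6 * N * D,
# where N = parameter count and D = training tokens. An approximation, not the
# Act's measurement methodology.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # EU AI Act presumption for systemic-risk GPAI

def training_flops(params: float, tokens: float) -> float:
    return 6.0 * params * tokens

for name, params, tokens in [
    ("hypothetical 400B model, 15T tokens", 400e9, 15e12),
    ("hypothetical 70B model, 15T tokens", 70e9, 15e12),
]:
    flops = training_flops(params, tokens)
    flag = "systemic-risk presumption" if flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS else "below threshold"
    print(f"{name}: ~{flops:.1e} FLOPs -> {flag}")
```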
Timeline and applicability
The Act entered into force on 1 August 2024, with phased applicability: prohibited practices applied from February 2025; GPAI obligations from August 2025; most high-risk obligations from August 2026; and high-risk obligations for AI embedded in regulated products from August 2027. As of 2026, providers are deep into implementation, the AI Office in Brussels is operational, conformity-assessment bodies are being designated, and the first enforcement actions are starting. The fines are tiered: up to €35M or 7% of global turnover for prohibited practices, and €15M or 3% for non-compliance with most other obligations.
Extraterritorial reach
The Act applies to providers placing AI systems on the EU market or whose output is used in the EU, regardless of where the provider is established. This is the same architectural pattern as GDPR and produces the same global compliance gravity: most major non-EU firms find it cheaper to comply globally than to maintain EU-specific products. This is one of the main reasons the Act is reshaping global AI development practice rather than only EU-internal practice.
The US Regulatory Landscape
The United States has chosen a different shape from the EU. There is no comprehensive federal AI law. Instead, the landscape is a patchwork of executive orders, sectoral agency rulemakings, state-level laws, and active litigation that together produce considerable regulatory weight without a single statutory anchor. As of 2026, this is the live frontier of US AI governance — and a moving target.
Executive orders: 2023, 2025, and beyond
President Biden's EO 14110 (October 2023) on Safe, Secure, and Trustworthy AI established government-wide AI safety obligations, mandated reporting from developers of frontier models above certain compute thresholds, directed agencies to issue AI guidance in their domains, and created the US AI Safety Institute within NIST. President Trump's EO 14179 (January 2025) rescinded EO 14110, replaced its frame with one focused on removing barriers to AI development, and reoriented the AI Safety Institute around national-security testing and export-control objectives. The shift removed reporting obligations on developers but preserved much of the agency-level guidance work, the AISI's voluntary evaluation program, and the export-control architecture. The 2025–2026 federal landscape continues to evolve rapidly through agency action.
Sectoral agencies
The US sectoral approach means that agencies regulate AI in their domains under existing authorities. FDA regulates AI as Software-as-a-Medical-Device, with the 2024 Predetermined Change Control Plan guidance creating a path for self-improving medical AI. CFPB applies adverse-action and fair-lending rules to AI-driven credit decisions, and the Federal Reserve's SR 11-7 model-risk-management guidance applies to bank-deployed AI. EEOC applies anti-discrimination law to AI-driven hiring through its 2023–2024 guidance documents. FTC uses its Section 5 (deceptive practices) authority against misleading AI claims and has brought enforcement actions against firms making unsupported claims about AI capabilities. NHTSA regulates autonomous vehicles, and DOT/FAA cover aviation autonomy. The sectoral approach has the advantage of subject-matter expertise and the disadvantage of inconsistency across domains.
State-level regulation
State laws now cover most of the gaps the federal level leaves. Colorado AI Act (effective 2026) is the most comprehensive: it applies a duty of care to developers and deployers of high-risk AI systems, requires impact assessments, and creates a private right of action. NYC Local Law 144 (effective 2023) requires bias audits of automated employment-decision tools and candidate notification. Illinois AI Video Interview Act regulates video-interview AI. California has SB-1047-style debates ongoing — the original 2024 bill was vetoed but successor legislation has continued. Texas, Tennessee, Louisiana, and Utah have passed narrower AI-specific laws. The patchwork is a real compliance burden; the response has been industry pressure for federal preemption, which as of 2026 has not materialised.
Litigation: the common-law layer
US AI law is also being made through litigation. The New York Times v. OpenAI copyright case (filed 2023) and a wave of similar suits will set training-data fair-use precedent. Authors Guild v. OpenAI, Getty v. Stability AI, and the actor/voice-cloning cases will set the boundaries of fair use, derivative works, and right of publicity. The first algorithmic-discrimination cases under existing civil-rights statutes are working through the courts. Product-liability cases for AI-driven decisions (autonomous vehicles, medical diagnosis, credit decisions) are creating common-law standards. The 2025–2026 frontier is the first wave of agentic-AI liability cases — when an AI agent autonomously executes a transaction or causes a harm, who is liable? The doctrine is forming.
Export controls and the national-security frame
A separate but consequential strand: US export controls on advanced computing hardware to China and other restricted destinations. The October 2022 BIS rules, expanded in 2023 and 2024, restrict the export of advanced GPUs, chip-making equipment, and (under the diffusion-rule framework) trained model weights above certain capability thresholds. The 2025 administration has continued and in places tightened these. The interaction with allied states (the so-called "tier" structure) and with the cloud-computing access channel are active fronts. For firms shipping AI products globally, the export-control frame is now part of the compliance burden — see §9 on diffusion controls.
Other Major Jurisdictions
Beyond the EU and the US, the major jurisdictions have made meaningfully different choices. Understanding those differences is essential for any global deployment, and they sketch the range of plausible regulatory equilibria the field may settle into.
United Kingdom
The UK has chosen a principles-based, sectoral-regulator-led approach. The 2023 White Paper articulated five cross-sectoral principles (safety/security/robustness; transparency/explainability; fairness; accountability/governance; contestability/redress) and asked existing regulators (CMA, ICO, FCA, MHRA, Ofcom) to apply them within their domains. The UK AI Safety Institute (launched November 2023) became the model for the global AISI network. The 2025 evolution under successor governments has moved partially toward more binding instruments for frontier-model evaluation, while preserving the cross-sectoral structure. The UK approach is the principal alternative to the EU's comprehensive-statute model, and many Commonwealth jurisdictions are following its lead.
Canada
Canada's Artificial Intelligence and Data Act (AIDA), proposed in 2022 as part of Bill C-27, has had a complex passage; as of 2026 the federal AI law remains in flux. In the meantime, the existing privacy framework (PIPEDA) plus the Treasury Board's Algorithmic Impact Assessment (mandatory for federal automated decision-making) provide the operational frame. Quebec's Law 25 includes AI-specific provisions for automated decisions. The Canadian AISI, launched 2024, participates in the international evaluation network.
Japan
Japan has favoured soft law: the 2024 AI Basic Act establishes principles and a governance structure but largely defers binding rules to sectoral regulators. Japan's chairmanship of the G7 in 2023 produced the Hiroshima AI Process code of conduct for advanced AI developers, which has become a major international reference. The 2025–2026 evolution has been toward selectively binding rules in specific sectors (healthcare, automotive) rather than comprehensive AI-specific statute.
China
China has moved fastest among major jurisdictions on binding AI rules. The 2022 algorithmic-recommendation regulations, the 2023 deep-synthesis (deepfake) rules, and especially the 2023 Interim Measures for the Management of Generative AI Services create a comprehensive licensing-and-content-control regime for consumer-facing generative AI. Providers must register, conduct security assessments, label generated content, and ensure outputs align with "core socialist values". The technical standards (the GB/T series) are being elaborated by SAC/TC260. The Chinese approach combines speed with strict content control, and its implementation has become a major reference point for the global debate on AI governance.
Brazil, India, the GCC, and the rest
Brazil's Marco Legal da IA (Law 2338) passed in 2024 with a risk-tiered structure inspired by the EU AI Act but adapted to Brazilian constitutional protections. India's approach has been principle-based via the IndiaAI Mission and successor frameworks; specific AI legislation has been slow but the privacy framework (DPDPA 2023) and sectoral rules cover much of the ground. The GCC states (UAE, Saudi Arabia, Qatar) have pursued ambitious AI strategies with relatively light regulatory hands and have positioned themselves as alternative deployment jurisdictions. Singapore's Model AI Governance Framework and the AI Verify testing toolkit are influential industry references; Singapore is also a leader in the international AISI network. South Korea, Australia, Israel, and Switzerland each have meaningfully different positions worth tracking for any global deployment.
Standards Bodies and Voluntary Frameworks
Hard law sets requirements; standards say how to meet them. The major standards bodies have produced concrete, technically-detailed frameworks that increasingly serve as the implementation reference for compliance. For most regulated deployments, the engineering team's primary documents are not the statute but the harmonised standards.
NIST AI Risk Management Framework
The NIST AI RMF (version 1.0, January 2023) is the most-cited voluntary framework globally. It organises AI risk management around four functions: govern (policies, accountability), map (context, risks), measure (evaluation, testing), manage (treatment, monitoring). The 2024 GAI (Generative AI) Profile extends the framework to foundation-model-specific risks. The RMF is voluntary in the US but is referenced by many state laws (Colorado especially), is the de facto reference for federal-agency AI procurement, and has been internationally influential. NIST's accompanying AI RMF Playbook provides concrete implementation guidance organised around the framework functions.
ISO/IEC standards
The ISO/IEC 42001 (AI Management System, 2023) defines the structure of an organisational AI management system, similar in shape to ISO 9001 (quality) or ISO 27001 (security). It is certifiable: organisations can be audited and receive certification. ISO/IEC 23894 (AI risk management, 2023) provides AI-specific risk management guidance. The 22989 (AI concepts and terminology), 24028 (trustworthiness), 5259-series (data quality for ML), and 5469 (functional safety) round out the ISO catalogue. As CEN-CENELEC develops the EU AI Act harmonised standards, many will be ISO/IEC adoptions or close adaptations — making ISO/IEC compliance a strong proxy for EU AI Act compliance.
IEEE 7000-series
The IEEE 7000-series standards focus on the values-and-design end: 7000 (model process for ethical concerns), 7001 (transparency of autonomous systems), 7003 (algorithmic bias considerations), 7010 (well-being metrics), 7014 (emulated empathy in autonomous systems). The 7000-series complements the ISO/IEC management-system standards by addressing the ethical-design questions that management systems alone cannot. They are particularly used in the design phase of complex autonomous systems (automotive, robotics, medical devices).
AISI evaluations and the international evaluation network
The AI Safety Institute network — UK AISI, US AISI (now reoriented), Singapore AISI, Japan AISI, Canada AISI, the Korean AISI, and the EU AI Office — provides governmental evaluation capacity. Their evaluations cover dangerous-capability tests (CBRN uplift, cyber, autonomous replication), bias audits, alignment evaluations, and increasingly agentic-system evaluations. Frontier model providers have established voluntary pre-release evaluation arrangements with the AISI network. The 2025–2026 frontier work has been toward standardised evaluation protocols (the AISI joint methodology releases) so that the same evaluation is meaningful across institutes.
Industry codes and principles
The Hiroshima AI Process code of conduct, the EU AI Pact (voluntary early compliance with the AI Act), the Frontier Model Forum's safety practices, the Partnership on AI's responsible deployment guidance — these soft-law instruments shape industry practice and are often the bridge between aspirational principles and technical standards. They are particularly influential during the period before harmonised standards are finalised, as is the case for much of the AI Act's high-risk landscape as of 2026.
Liability and the Reshaping of Tort Law
Regulation creates obligations for providers and deployers. Liability law determines who pays when those obligations fail and harm results. The 2023–2026 period has seen liability rules being reshaped in real time, with EU directives, US case law, and sectoral statutes all evolving rapidly.
EU product-liability directive
The 2024 EU Product Liability Directive (PLD) treats AI as a product and software as a product — closing the long-standing gap that had left software outside product-liability law. The PLD imposes strict liability on producers for damage caused by defective products, with relaxed evidentiary burdens for victims (presumption of defectiveness in some cases, mandatory disclosure of evidence by producers). The companion AI Liability Directive proposal addresses negligence-based claims with similar evidentiary relaxations. As of 2026 the AILD has been more politically contested than the PLD; its final form is unsettled. Together with the AI Act's regulatory framework, these directives create a structurally complete EU regime: rules for what providers must do, and rules for what happens when they don't.
US tort law in motion
US tort law for AI is being made through litigation and through state-level statutes. The questions being settled include: when is an AI provider liable for harms caused by users of the AI? When is a deployer liable for AI-driven decisions (the "I just ran the model" defence is failing)? When does the chain of responsibility break — between the foundation model provider, the fine-tuner, the integrator, and the deployer? The Colorado AI Act creates an explicit duty of care for high-risk systems with private right of action; this is the most explicit US move toward EU-style structural liability. Common-law cases will fill in the rest over the next several years.
Sectoral liability
Each regulated sector has its own liability frame. Medical malpractice law applies to AI-driven clinical decisions; the standard-of-care question (is following the AI a defence or a violation?) is being actively litigated. Fair-lending liability under ECOA and FHA applies to AI-driven credit decisions; the disparate-impact and adverse-action rules constrain the model directly. Securities liability applies to AI-driven trading and investment advice; the SEC's 2024 enforcement signals against undisclosed AI use ("AI washing") establish the contours. Automotive liability for autonomous vehicles is being shaped by the first wave of fatal-incident cases. Sectoral liability is generally the operationally-binding constraint for deployments in regulated industries — more so than the cross-cutting AI rules.
Insurance and the risk-transfer market
Liability creates demand for insurance. The 2023–2026 emergence of AI liability insurance products (Lloyd's of London, Munich Re, AIG, plus AI-specialty MGAs) is reshaping the operational economics of AI deployment. Underwriters require documentation of governance practices (the model card, the impact assessment, the audit trail) as a precondition for coverage; this has been one of the strongest practical incentives for the engineering-grade governance discipline. The 2025–2026 challenge is pricing: actuarial data for AI-driven claims is sparse, and underwriters are conservative.
Liability for agentic systems
The hardest liability frontier as of 2026: agentic AI. When an AI agent autonomously executes a transaction, sends an email, makes a booking, signs a contract, who is the legally-responsible actor? Current law generally treats the agent as an instrument of its principal (the deployer), but the boundary cases — autonomous trading agents, multi-agent systems, agents with persistent memory and goal-revision — are stretching that doctrine. The 2025–2026 academic and policy work is just beginning to articulate doctrines for agent-mediated liability; the case law is starting to accumulate. Operators of agentic systems increasingly maintain detailed action logs not just for debugging but for liability defence.
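A minimal sketch of the kind of append-only action log operators keep for liability defence: each agent action is recorded with the acting principal, the authorisation relied on, and a hash chain so the record is tamper-evident. The field names and hashing scheme are illustrative assumptions.

```python
import hashlib
import json
from datetime import datetime, timezone

class ActionLog:
    """Append-only, hash-chained log of agent actions for audit and liability defence."""

    def __init__(self):
        self.entries: list[dict] = []
        self._prev_hash = "0" * 64

    def record(self, agent_id: str, principal: str, action: str, authorization: str) -> dict:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "agent_id": agent_id,
            "principal": principal,          # the legally responsible operator
            "action": action,                # what the agent did
            "authorization": authorization,  # the instruction or policy relied on
            "prev_hash": self._prev_hash,    # chains each entry to its predecessor
        }
        entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self._prev_hash = entry["hash"]
        self.entries.append(entry)
        return entry

log = ActionLog()
log.record("booking-agent-01", "acme-travel-ltd", "booked flight LH123", "user instruction 2026-03-02")
```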
International Coordination
AI development is global; AI governance is national. The mismatch creates pressure for international coordination. The 2023–2026 period has seen the emergence of a layered international architecture — summits, principles, institutes, treaties — that is starting to converge but remains far from unified.
The AI Safety Summit series
The UK AI Safety Summit at Bletchley Park (November 2023) was the first major heads-of-state meeting on AI safety, producing the Bletchley Declaration signed by 28 countries plus the EU. The successor summits — Seoul (May 2024), Paris (February 2025) — extended the agenda. The summits have produced concrete commitments: pre-release model testing arrangements, the AISI network, the international red-teaming protocols. They have not produced binding treaties; the achievement is sustained high-level engagement and the gradual convergence of national positions.
OECD AI Principles
The OECD AI Principles (2019, updated 2024) articulate values (inclusive growth, human-centred values, transparency, robustness, accountability) and policy recommendations endorsed by the 38 OECD members plus partners. They have been the most influential soft-law instrument for the broad international consensus, and were the basis of the G20 AI Principles (2019) which extended the consensus further. The OECD's AI Policy Observatory tracks national policies and is the standard reference for cross-jurisdictional comparison.
G7 Hiroshima Process
The 2023 G7 Hiroshima summit launched the Hiroshima AI Process, producing both Guiding Principles and a Code of Conduct for advanced AI developers. The Code is voluntary but has been adopted by most major frontier-model providers and is now a major international reference, complementing the EU AI Act's GPAI Code of Practice. The 2024–2026 evolution has been toward operationalising the commitments via standardised reporting and the AISI network.
UN-level engagement
The UN Resolution on AI (March 2024) was the first General Assembly resolution on AI, calling for safe, secure, and trustworthy AI for sustainable development. The UN AI Advisory Body has produced reports on global AI governance gaps. The Global Digital Compact (September 2024) included AI commitments. UNESCO's 2021 Recommendation on the Ethics of AI is the broadest treaty-like instrument. UN-level engagement has been important for legitimacy but has not produced binding rules; the UN's role is more agenda-setting than enforcement.
Bilateral and minilateral arrangements
The thickest international coordination happens bilaterally and minilaterally: US-EU Trade and Technology Council work on AI standards; UK-US, UK-Singapore, and EU-Singapore AISI arrangements for joint evaluations; Japan-US and Japan-EU AI cooperation agreements. Dialogue with China is more limited but exists. These arrangements often do more concrete work than the multilateral summits — they produce shared evaluation methodologies, mutual-recognition agreements for standards, and joint research projects.
What international coordination cannot do (yet)
What's missing as of 2026: a binding international AI treaty, a global agency comparable to the IAEA for AI safety (proposed but not realised), enforceable cross-border deletion or audit rights, harmonised export controls beyond the US-led semiconductor restrictions. The gap between aspiration and instrument is large. Most operational international cooperation runs through soft law, voluntary standards, and bilateral arrangements rather than binding multilateral instruments — and this is unlikely to change quickly.
Building a Governance Program
Hard law, standards, and international principles all converge inside a single firm into a single artefact: the AI governance program. This is the operational machinery — the people, processes, documents, and tools — that turns regulatory obligations into shipped products. Industry practice from 2023 to 2026 has converged on a template that mature programs share.
Model risk management as the structural template
The dominant template for AI governance is model risk management (MRM), imported from the banking sector. SR 11-7 (US Federal Reserve, 2011) and the equivalent Bank of England, ECB, OSFI, and APRA guidance establish a three-line-of-defence structure: model owners (first line) build and operate models; an independent model-risk function (second line) reviews and validates; internal audit (third line) audits both. The structure provides clear accountability, mandatory pre-deployment review for material models, ongoing performance monitoring, and documented escalation. Outside banking, MRM has become the de facto template for high-stakes AI in healthcare, insurance, public sector, and increasingly tech-platform deployments.
Documentation: the governance deliverable
The unit of governance is the model card plus the algorithmic impact assessment plus the audit trail. Together these document: intended use, data provenance, training methodology, evaluation results disaggregated by relevant subgroups, known limitations, deployment context, monitoring plan, escalation pathway, and the chain of accountable signatories. The 2024–2026 maturity has produced increasingly machine-readable documentation that integrates with the MLOps stack (Ch 04 of Part XVI on monitoring, Ch 07 on responsible release). Production-quality firms now treat the model card as a living artefact regenerated on every model version, not a one-time compliance document.
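A sketch of what "machine-readable, regenerated on every model version" can look like in practice: a small schema holding the fields the paragraph lists, serialised alongside the model artefact on each release. The schema and example values are illustrative assumptions, not a standardised format.

```python
from dataclasses import dataclass, asdict, field
import json

@dataclass
class ModelCard:
    model_name: str
    version: str
    intended_use: str
    data_provenance: str
    training_methodology: str
    subgroup_metrics: dict[str, float]     # evaluation disaggregated by subgroup
    known_limitations: list[str]
    deployment_context: str
    monitoring_plan: str
    escalation_pathway: str
    signatories: list[str] = field(default_factory=list)  # accountable sign-off chain

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)

card = ModelCard(
    model_name="credit-scoring",
    version="2026.03.1",
    intended_use="Consumer credit pre-screening in the EU market",
    data_provenance="Internal applications 2019-2025, documented in the data register",
    training_methodology="Gradient-boosted trees, quarterly retraining",
    subgroup_metrics={"auc_overall": 0.81, "auc_group_a": 0.80, "auc_group_b": 0.79},
    known_limitations=["Not validated for small-business lending"],
    deployment_context="High-risk under EU AI Act Annex III (credit)",
    monitoring_plan="Monthly drift and fairness report",
    escalation_pathway="Model owner -> model-risk function -> risk committee",
    signatories=["model.owner@example.com", "mrm.lead@example.com"],
)
print(card.to_json())
```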
Pre-deployment review
Mature programs have a pre-deployment review gate: before a model ships to production (or to a new tier of deployment), it must clear a checklist that covers fairness audit (Ch 06), privacy review (Ch 07), security review, robustness testing (Ch 03), explainability check (Ch 05), regulatory classification, sign-off by the model owner and the model-risk function, and depending on the tier, by the executive sponsor and the board. The gate is non-trivial — typically 2–8 weeks of work for a significant model — but is the single most-effective intervention for preventing the failures that drive enforcement actions and class actions.
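As a sketch, the review gate can be expressed as a checklist that refuses sign-off until every required item passes, with tier-dependent items (executive or board sign-off) added only when the classification requires them. The item names are illustrative.

```python
BASE_CHECKLIST = [
    "fairness_audit", "privacy_review", "security_review",
    "robustness_testing", "explainability_check",
    "regulatory_classification", "model_owner_signoff", "model_risk_signoff",
]
TIER_EXTRAS = {
    "high_risk": ["executive_sponsor_signoff", "board_notification"],
}

def gate(tier: str, results: dict[str, bool]) -> tuple[bool, list[str]]:
    """Return (cleared, missing_items) for a pre-deployment review."""
    required = BASE_CHECKLIST + TIER_EXTRAS.get(tier, [])
    missing = [item for item in required if not results.get(item, False)]
    return (len(missing) == 0, missing)

# A high-risk model with only the base checklist complete does not clear the gate.
cleared, missing = gate("high_risk", {item: True for item in BASE_CHECKLIST})
print(cleared, missing)
```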
Ongoing monitoring and incident response
Deployment is not the end of governance. Mature programs run ongoing monitoring (drift detection, performance disaggregation, fairness tracking, security alerts) and have documented incident-response procedures for AI failures: triage, containment, root-cause analysis, regulatory notification (where required), customer notification (where required), and retrospective. The interaction between AI incident response and the broader incident-response machinery (security incidents, privacy breaches) requires careful design — see Ch 07 of Part XVI on responsible release for the operational mechanics.
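One widely used drift statistic for the monitoring loop is the population stability index (PSI), which compares the production score distribution against the training-time baseline; values above roughly 0.2 are commonly treated as a trigger for investigation. The binning and threshold below are illustrative conventions, not regulatory requirements.

```python
import numpy as np

def population_stability_index(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline (training-time) and current (production) score distribution."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0] = min(edges[0], current.min()) - 1e-9    # widen outer bins to cover both samples
    edges[-1] = max(edges[-1], current.max()) + 1e-9
    base_frac = np.histogram(baseline, edges)[0] / len(baseline)
    curr_frac = np.histogram(current, edges)[0] / len(current)
    base_frac = np.clip(base_frac, 1e-6, None)        # avoid log(0) and division by zero
    curr_frac = np.clip(curr_frac, 1e-6, None)
    return float(np.sum((curr_frac - base_frac) * np.log(curr_frac / base_frac)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)
drifted = rng.normal(0.5, 1.0, 10_000)                # simulated production drift
psi = population_stability_index(baseline, drifted)
print(f"PSI = {psi:.3f}", "-> investigate" if psi > 0.2 else "-> stable")
```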
Board oversight and external reporting
Board-level oversight of AI is increasingly expected and in some jurisdictions required. The pattern: a dedicated AI committee or an extension of the existing risk committee, with quarterly reporting from the model-risk function on the model inventory, the audit findings, the incident log, and the regulatory landscape. External reporting — to regulators, to investors, to customers — has been formalised through the SEC's 2024 disclosure expectations, the EU AI Act's serious-incident reporting, and contractual obligations to enterprise customers. The mature program treats these as outputs of a single integrated governance system rather than as separate compliance streams.
Resourcing the program
A typical mature AI-governance program at a large firm employs 20–200 people across model risk, AI ethics, AI policy, privacy, security, and the embedded program managers in product teams. The resourcing is non-trivial; the alternative — under-resourced governance and the resulting enforcement actions, class-action exposure, and reputational damage — is more expensive. The 2025–2026 maturity has been about staffing these functions properly and integrating them with the engineering organisation rather than leaving them as a parallel compliance silo.
Open-Source, Open-Weights, and the Diffusion Question
A particularly contested governance question: what to do about open-weights foundation models? Once a model's weights are publicly released, every restriction — content moderation, output filtering, fine-tuning constraints, capability limits — can in principle be removed by anyone with sufficient compute. This creates a tension between the documented benefits of open release (research access, competitive equilibrium, downstream innovation) and the diffusion-of-dangerous-capabilities concern.
The benefits and the risks
Open-weights models — Meta's Llama series, Mistral, Gemma, the Chinese providers (DeepSeek, Qwen, GLM), and others — have been the engine of much applied AI work since 2023. They allow on-premise deployment for privacy-sensitive use cases, academic research without the access constraints of closed models, fine-tuning for specialised domains, and a competitive market structure. The risks: stripping safety fine-tuning, removing watermarks, fine-tuning for harmful uses (the 2023 demonstrations of Llama-2 fine-tuned to remove safety guardrails are the standard reference), and downstream proliferation of capabilities that the original developer would have gated. The empirical question of how much marginal harm open-weights release adds is contested; the policy question is what to do about it given the uncertainty.
The diffusion-rule debate
The Biden EO 14110 introduced a reporting requirement for developers training models above a compute threshold (10^26 FLOPs); the 2025 Trump EO rescinded it. The EU AI Act's GPAI provisions apply to all sufficiently capable models regardless of licence. The AI Diffusion Framework proposed by BIS in 2025, which extends export controls to advanced model weights bound for restricted jurisdictions, has been one of the most-debated US policy moves; its implementation has been complex, with carve-outs and country-tiering producing a layered control architecture. The open-source community has pushed back vigorously on weight-export controls; the national-security community has pushed back on the exemptions. As of 2026, the equilibrium remains unstable and is a major policy frontier.
Differentiated policy by capability tier
The emerging pattern: differentiated policy by model capability tier. Low-capability open-weights releases face few restrictions. Mid-tier models face transparency and reporting requirements (under the EU AI Act's GPAI rules, e.g.). Frontier-tier models face increasingly serious restrictions — pre-release evaluations by AISIs, capability-specific restrictions on dual-use technologies, and (in some proposals) actual licensing requirements before release. The thresholds are contested and likely to evolve, but the structural shape is becoming clearer.
The open-source compliance burden
A practical consequence: open-source AI projects face a meaningful compliance burden under the EU AI Act and similar regimes. The Act has carve-outs for free and open-source licences (Article 2(12)), but they are narrower than open-source advocates wanted. The 2024–2026 implementation has produced practical guidance, the GPAI Code of Practice has open-source-specific provisions, and a meaningful amount of open-weights work released under Apache or MIT licences now ships with notices disclaiming EU availability, in an attempt to stay outside the Act's reach.
Compute governance
Beyond model weights, the policy frame is increasingly turning to compute governance: regulating the underlying training capacity. The October 2022 BIS export controls on advanced GPUs were the first major instrument; the 2024 expansion to specific cloud-computing arrangements and the 2025 evolution of the diffusion framework continue the trend. The argument: training a frontier model requires massive compute, compute is concentrated in a small number of facilities, and regulating compute is more tractable than regulating models. The counter-argument: efficiency improvements continually move the threshold, and compute governance creates significant collateral effects on the broader semiconductor industry. As of 2026, compute governance is a major active frontier.
The Frontier and the Open Problems
AI governance is the youngest and fastest-moving subfield in this part of the compendium, and the open problems are large. This section surveys the leading edges as of 2026 and the questions practitioners and policy staff should be following.
Frontier-model governance
The governance of the most-capable models is the most-active policy frontier. The EU AI Act's systemic-risk GPAI tier, the AISI evaluation network, the voluntary commitments of the Frontier Model Forum signatories, the SB-1047-style bills under debate in multiple US states — all are attempts to regulate a small number of frontier developers without inadvertently capturing the broader industry. The challenges are: defining "frontier" durably (compute thresholds age badly), allocating evaluation capacity, handling closed-weights vs open-weights asymmetrically, and managing the misalignment between the international coordination needed and the national authorities that exist. The 2025–2026 work on shared evaluation protocols and pre-release testing arrangements is the operational infrastructure being built.
Agentic systems and the legal subject
As AI systems acquire more autonomy, the question of legal status becomes pointed. Current law treats AI as a tool of its human or corporate operator; the doctrine struggles when the AI is making consequential decisions without per-action human review. The proposals range from strict-liability principal accountability (the operator is always responsible regardless of the agent's autonomy) to limited electronic personhood (the agent itself can be sued, perhaps backed by mandatory insurance), with most jurisdictions sitting toward the strict-liability end. The 2025–2026 academic and policy literature on agent liability is just beginning to produce concrete proposals; the case law is starting to accumulate.
Synthetic media and the integrity of public discourse
Generative AI's effect on the integrity of public discourse — deepfakes, AI-generated political content, automated influence operations — is a major governance concern. The technical countermeasures (provenance standards via C2PA, watermarking research, detection systems) are advancing but each is individually circumventable. The policy responses (EU AI Act transparency requirements for deepfakes, US state-level deepfake laws, China's deep-synthesis regulations, election-specific rules) are layered and growing. The 2026–2030 horizon will be heavily shaped by whether watermarking and provenance standards become genuinely robust at scale, and by whether the policy-level responses converge or fragment further.
Workforce and economic disruption
The economic effects of AI on the labour market — automation of cognitive work, productivity gains, distributional effects, retraining and adjustment policy — are increasingly part of the AI governance conversation. The 2024–2026 wave of policy work (the EU's social-policy responses, US state-level adjustment programs, OECD work on AI and labour) is just beginning to articulate concrete instruments. This is governance in a different sense from the rest of this chapter, but it is part of the political legitimacy of the broader AI deployment trajectory.
The legitimacy question
The deepest open question, paralleling the §10 discussions in Ch 06 (fairness) and Ch 07 (privacy): is the current technical-legal apparatus the right basic frame, or is it adapting yesterday's regulatory tools to a technology that will require something more structural? The case for the affirmative: the existing regulatory tradition has substantial capacity, the institutions are real, the EU AI Act and equivalent regimes are already producing measurable improvements. The case for the negative: AI is moving faster than regulatory cycles, the international coordination is structurally weaker than the technology's reach, and the deepest concerns (frontier capabilities, agentic autonomy, transformational economic effects) may not be addressable by tools developed for product safety and consumer protection. The mature programs hold both possibilities open and invest in both incremental compliance and longer-horizon institutional capacity-building. The next decade of AI governance will tell us which part of that bet was right.
Further reading
Foundational documents and references for AI governance, policy, and regulation: the EU AI Act and its companion liability directives; the major US executive orders and AISI evaluations; the NIST AI Risk Management Framework; the ISO/IEC 42001 management-system standard; the OECD AI Principles and the Hiroshima Process Code of Conduct; the Bletchley Declaration; the leading academic works (Marchant et al. on AI law, Veale and Borgesius on the EU AI Act); and the major comparative-policy references. Together they form the operational toolkit.
- Regulation (EU) 2024/1689 — The EU AI Act. The most comprehensive AI law in force globally. The text, recitals, and Annexes (especially Annex III on high-risk uses) are the primary source; the official EUR-Lex version is the canonical reference. Required reading for any deployment touching the EU market. The primary AI law.
- NIST AI Risk Management Framework (AI RMF 1.0) and Generative AI Profile. The most-cited voluntary AI governance framework globally. Govern–Map–Measure–Manage structure with detailed sub-categories. The accompanying Playbook provides concrete implementation guidance. Required reading. The standards reference.
- ISO/IEC 42001:2023 — Information technology — Artificial intelligence — Management system. The first ISO/IEC AI management-system standard, and a certifiable one. Defines the structure of an organisational AI management system; widely adopted as the operational backbone of mature governance programs. Required for ISO-track certification. The certifiable management system.
- OECD AI Principles (2019, updated 2024). The most-influential soft-law instrument internationally. Five values-based principles plus five policy recommendations adopted by 38 OECD members plus partners; basis of the G20 AI Principles. The OECD AI Policy Observatory tracks national implementations. Required reading for the international consensus. The international principles.
- Bletchley Declaration on AI Safety. The first major heads-of-state declaration on AI safety. Established the AI Safety Summit series and the AISI network. Required reading for the international-coordination thread. The successor declarations from Seoul (2024) and Paris (2025) extend the agenda. The international-summit reference.
- Hiroshima AI Process Code of Conduct for Advanced AI Developers. The G7 voluntary code of conduct for advanced AI developers. Adopted by most major frontier-model providers; complementary to the EU AI Act's GPAI Code of Practice. The 2024–2026 evolution has been toward operationalisation via the AISI network. Required reading. The G7 reference.
- Demystifying the Draft EU Artificial Intelligence Act (Veale & Borgesius). The most-cited academic analysis of the EU AI Act's structure and implications. Predates the final text, but the analysis has aged well. Highly recommended for understanding the Act's underlying logic and its critique. The academic analysis of the EU AI Act.
- Executive Order 14110 (Biden, 2023) and Executive Order 14179 (Trump, 2025). The two consecutive US presidential executive orders on AI. EO 14110 established government-wide AI safety obligations and the AISI; EO 14179 rescinded much of it and reoriented around removing development barriers. Reading both is essential for understanding the US federal trajectory and its volatility. The US executive-order references.
- The Colorado AI Act (SB 24-205) and NYC Local Law 144. The two most-influential US sub-federal AI laws. Colorado is the first US state with a comprehensive duty-of-care AI law for high-risk systems; NYC LL 144 is the first US bias-audit requirement for AI hiring tools. Together they sketch the shape of US sub-federal AI regulation. Required reading for US deployments. The US sub-federal references.
- UK AI Safety Institute and US AI Safety Institute Evaluation Reports. The flagship publications of the governmental AI-evaluation bodies. The pre-release evaluations of frontier models (Anthropic, OpenAI, Google DeepMind, Meta) document the state of the art in capability and safety testing and inform the international standards work. Required reading for anyone working on frontier-model governance. The AISI evaluation reference.
- Interim Measures for the Management of Generative AI Services (China, 2023). The most binding generative-AI rules in any major jurisdiction. Establishes the licensing-and-content-control regime for consumer-facing generative AI in China. Required reading for understanding the major non-Western regulatory model. The Chinese regulatory reference.
- The Oxford Handbook of AI Governance. The comprehensive academic handbook. Synthesises the legal, political-science, ethical, and technical literatures into a single reference. Highly recommended for serious work; the chapters on liability, international coordination, and frontier-model governance are particularly strong. The academic handbook.