ThinkTech

AI Policy Tracker

A regularly updated overview of AI regulation and governance frameworks worldwide. Each entry includes the regulation status, scope, and implications for builders and procurement teams.

Last reviewed: April 2026

EU AI Act

High Risk
Jurisdiction: European Union
Status: In Force
Effective: August 2025 (phased)

Risk-based classification of AI systems. Bans certain uses, including social scoring and real-time remote biometric identification in public spaces (with narrow law-enforcement exceptions). Requires conformity assessments for high-risk systems. Imposes obligations on providers of general-purpose (foundation) models.

Official source

US Executive Order 14110

Medium Risk
Jurisdiction: United States
Status: Rescinded (January 2025)
Effective: October 2023

Required safety testing and reporting for powerful AI models. Directed agencies to develop AI governance frameworks. Established the AI Safety Institute at NIST. Addressed AI in critical infrastructure, healthcare, and hiring. Revoked by a subsequent executive order in January 2025.

Official source

UK AI Safety Institute

Low Risk
Jurisdiction: United Kingdom
Status: Operational
Effective: November 2023

Government body focused on evaluating frontier AI risks. Conducts pre-release safety testing of advanced models. Has published a framework for evaluating societal impacts. Operates on a voluntary cooperation model rather than binding regulation.

Official source

Canada AIDA (C-27)

Medium Risk
Jurisdiction: Canada
Status: Lapsed (Bill C-27 died on prorogation of Parliament, January 2025)
Effective: TBD

Artificial Intelligence and Data Act, part of a broader digital charter reform. Would create obligations for high-impact AI systems, require impact assessments and mitigation measures, and impose criminal penalties for reckless deployment causing harm.

Official source

China AI Regulations

High Risk
Jurisdiction: China
Status: Multiple Active
Effective: 2023-present

Series of regulations covering generative AI, deepfakes, recommendation algorithms, and synthetic content. Requires algorithm filing with regulators and security assessments. Content must align with socialist core values. Mandates labeling of AI-generated content.

Official source

About this tracker

This tracker covers major AI governance frameworks with global impact. It is reviewed monthly and updated when significant changes occur. The risk levels indicate the regulatory burden on AI system deployers, not a judgment of a regulation's quality. For methodology details, see the source standards page.