
Countdown to Compliance: Essential EU AI Act Milestones


The EU's Artificial Intelligence Act (AI Act), formally Regulation (EU) 2024/1689, establishes a comprehensive regulatory framework for artificial intelligence systems. In force since August 1, 2024, the AI Act sets out clear rules for developing, deploying, and overseeing AI technology, structured primarily around assessing, managing, and mitigating the risks that AI systems pose.

Central to the AI Act is its categorization of AI systems according to the level of risk they present. Systems classified as posing an "unacceptable risk," such as those utilizing subliminal manipulation techniques or exploitative social scoring mechanisms, are prohibited from operation in the EU market. Conversely, "high-risk" systems—such as those used in critical sectors including healthcare, transportation, law enforcement, employment decisions, and education—face stringent compliance obligations.

Under the Act, providers of high-risk AI applications must ensure robust data governance practices, including detailed documentation of datasets, processes for reducing potential bias, and transparency regarding the functionality and limitations of the technology. Human oversight is mandated for these applications, underscoring accountability and control.
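
For illustration only, the sketch below shows one way a provider might structure internal records for this kind of evidence: per-dataset documentation, bias-mitigation notes, and oversight measures tied to a single high-risk system. The class and field names are assumptions made for the example; the Act prescribes the substance of the documentation, not this format.

    from dataclasses import dataclass, field

    @dataclass
    class DatasetRecord:
        """Illustrative record of one dataset used to build a high-risk AI system."""
        name: str
        provenance: str                               # where the data came from
        collection_period: str                        # e.g. "2021-2023"
        known_limitations: list[str] = field(default_factory=list)
        bias_mitigation_steps: list[str] = field(default_factory=list)

    @dataclass
    class HighRiskSystemDossier:
        """Illustrative dossier gathering the documentation for one high-risk system."""
        system_name: str
        intended_purpose: str
        datasets: list[DatasetRecord]
        human_oversight_measures: list[str]           # who can intervene, and how
        transparency_notes: str                       # functionality and limits disclosed to users

In practice, records along these lines would feed into the technical documentation and conformity evidence that providers of high-risk systems must maintain.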

Additionally, the regulation specifically addresses general-purpose AI models, such as large language models. Developers must disclose extensive documentation about training data, risk management strategies, and safeguards against potential misuse or unintended consequences. Open-source models face fewer obligations unless their scale or application generates significant systemic risks.

The Act emphasizes transparency and responsible data governance, closely aligning its requirements with existing frameworks like the General Data Protection Regulation (GDPR). Entities must document their data usage clearly, control access rigorously, and apply strict cybersecurity and privacy protections.

Compliance enforcement is rigorous, with fines reaching up to €35 million or 7% of an entity's global annual turnover, whichever is greater; this top tier applies to violations of the prohibited-practices rules, with lower caps for other infringements. These substantial penalties reinforce the importance of adherence to the Act's provisions.
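
To see how the "whichever is greater" rule scales with company size, here is a minimal sketch that computes the theoretical maximum fine under the top penalty tier; the function name and the example turnover figure are illustrative assumptions.

    def max_fine_eur(global_annual_turnover_eur: float,
                     fixed_cap_eur: float = 35_000_000,
                     turnover_pct: float = 0.07) -> float:
        """Greater of the fixed cap or a share of global annual turnover."""
        return max(fixed_cap_eur, turnover_pct * global_annual_turnover_eur)

    # Example: a firm with €2 billion in global annual turnover.
    # 7% of €2,000,000,000 = €140,000,000, which exceeds the €35 million fixed cap.
    print(f"€{max_fine_eur(2_000_000_000):,.0f}")  # €140,000,000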

Implementation of the AI Act is phased over several years. Initial prohibitions took effect on February 2, 2025, with obligations for general-purpose AI models beginning in August 2025. Requirements for high-risk AI applications apply fully from August 2026, and obligations extend in August 2027 to high-risk systems embedded in products already covered by other EU safety legislation.

Given its scope and applicability to any AI systems offered in the EU market, the AI Act influences businesses globally, requiring multinational corporations to align their AI governance practices accordingly. This positions the Act as an influential global reference for regulatory practices in artificial intelligence.





EU AI Act Implementation Timeline

August 1, 2024

  • AI Act officially enters into force.

February 2, 2025

  • Prohibitions Enforced:
    Ban on AI systems posing unacceptable risks (e.g., subliminal manipulation, social scoring, emotion recognition in workplaces).
  • AI Literacy Requirement:
    Providers and deployers must ensure their staff have a sufficient level of AI literacy.

May 2, 2025

  • Deadline under the Act for the Code of Practice for general-purpose AI model providers to be ready, giving providers guidance on how to demonstrate compliance.

August 2, 2025

  • General-Purpose AI Rules Effective:
    Providers must disclose details about their AI models, including training data and safety measures.
  • EU Member States must establish their national AI enforcement authorities and specify rules for penalties and enforcement.

August 2, 2026

  • Full Enforcement for High-Risk AI Applications:
    Comprehensive obligations for high-risk AI systems become fully enforceable. Requirements include detailed documentation, human oversight, data quality, and transparency measures.

August 2, 2027

  • Enforcement extends to existing high-risk AI systems embedded in products (e.g., medical devices, toys, radio equipment) already regulated by other EU safety legislation.
  • Providers of general-purpose AI models that entered the market prior to August 2025 must be fully compliant.

December 31, 2030

  • Final Compliance Deadline:
    AI systems integrated within large-scale IT infrastructure that entered the market before August 2027 must achieve full compliance with the Act.

Enforcement and Penalties:

  • Non-compliance carries penalties up to €35 million or 7% of global annual revenue, whichever is higher.
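
To echo the "countdown" framing, the short sketch below prints the days remaining until each milestone listed above; the abbreviated labels are an informal summary, not official wording.

    from datetime import date

    # Key AI Act milestones from the timeline above.
    MILESTONES = {
        date(2025, 2, 2): "Prohibitions and AI literacy obligations apply",
        date(2025, 8, 2): "General-purpose AI rules apply; national authorities in place",
        date(2026, 8, 2): "High-risk AI obligations fully enforceable",
        date(2027, 8, 2): "High-risk AI in regulated products; pre-2025 GPAI models compliant",
        date(2030, 12, 31): "Legacy AI in large-scale EU IT systems compliant",
    }

    today = date.today()
    for deadline, label in sorted(MILESTONES.items()):
        days_left = (deadline - today).days
        status = f"{days_left} days remaining" if days_left > 0 else "already in effect"
        print(f"{deadline.isoformat()}: {label} ({status})")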
