On 1 August 2024, the European Artificial Intelligence Act (AI Act) entered into force, marking the world’s first comprehensive regulation on artificial intelligence.

The AI Act aims to ensure that AI developed and used in the EU is trustworthy and safeguards fundamental rights. It also seeks to establish a harmonised internal market for AI, promoting the technology’s adoption while fostering innovation and investment.

Key Provisions of the AI Act

The AI Act takes a product-safety, risk-based approach that classifies AI systems into four levels of risk:

Minimal Risk: Most AI systems, such as AI-enabled recommender systems and spam filters, fall into this category and face no obligations under the AI Act. Companies may voluntarily adopt additional codes of conduct.

Specific Transparency Risk: AI systems like chatbots must disclose to users that they are interacting with a machine. AI-generated content, including deep fakes, must be labelled, and users must be informed when biometric categorisation or emotion recognition systems are used. Providers must also ensure that synthetic content is marked in a machine-readable format and detectable as artificially generated or manipulated (a minimal illustration of such marking follows this list).

High Risk: High-risk AI systems must comply with strict requirements, including risk-mitigation measures, high-quality data sets, activity logging, detailed documentation, clear user information, human oversight, and robust cybersecurity. Regulatory sandboxes will facilitate responsible innovation. Examples include AI systems used for recruitment, loan assessments, and autonomous robots.

Unacceptable Risk: AI systems posing a clear threat to fundamental rights are banned. This includes AI applications that manipulate human behaviour, such as toys that use voice assistance to encourage dangerous behaviour in minors, systems enabling ‘social scoring’ by governments or companies, and certain uses of predictive policing. Some biometric systems, like emotion recognition in workplaces, are also prohibited.
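The Act does not prescribe a specific marking technology for synthetic content; providers could rely on techniques such as watermarking, metadata identification, or C2PA-style content credentials. Purely as an illustration of what a machine-readable marking might look like, the following Python sketch uses the Pillow library to embed, and later check, a simple provenance tag in a PNG’s metadata. The tag names are hypothetical and are not defined by the AI Act or any standard.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Illustrative tag names only -- not a standard required by the AI Act.
AI_FLAG_KEY = "ai_generated"
GENERATOR_KEY = "generator"

def mark_as_ai_generated(image: Image.Image, path: str, generator: str) -> None:
    """Save an image with machine-readable PNG text chunks flagging it as AI-generated."""
    meta = PngInfo()
    meta.add_text(AI_FLAG_KEY, "true")
    meta.add_text(GENERATOR_KEY, generator)
    image.save(path, pnginfo=meta)

def is_marked_ai_generated(path: str) -> bool:
    """Return True if the PNG at `path` carries the illustrative AI-generation tag."""
    with Image.open(path) as img:
        return img.text.get(AI_FLAG_KEY) == "true"

if __name__ == "__main__":
    # Stand-in for the output of a generative image model.
    synthetic = Image.new("RGB", (64, 64), color="grey")
    mark_as_ai_generated(synthetic, "synthetic.png", "example-image-model")
    print(is_marked_ai_generated("synthetic.png"))  # True
```

Simple metadata tags like these can be stripped when content is re-encoded, which is why more robust approaches such as watermarking are also being discussed; the sketch is only meant to show the idea of a machine-readable flag.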

The AI Act also introduces rules for general-purpose AI models, which are highly versatile and capable of performing a wide variety of tasks, such as generating human-like text. Because these models are increasingly used as components of other AI systems, the Act requires transparency along the value chain and addresses potential systemic risks posed by the most capable models.

Implementation and Enforcement

EU Member States have until 2 August 2025 to designate national competent authorities, which will oversee the application of the rules for AI systems and carry out market surveillance. The Commission’s AI Office will be the primary body implementing and enforcing the AI Act at EU level, particularly for general-purpose AI models.

Three advisory bodies will support the AI Act’s implementation:

  • European Artificial Intelligence Board: Ensures uniform application across Member States and facilitates cooperation between the Commission and Member States.
  • Scientific Panel of Independent Experts: Provides technical advice, can issue alerts about general-purpose AI model risks, and supports enforcement.
  • Advisory Forum: Composed of diverse stakeholders offering guidance to the AI Office.

Companies that do not comply with the AI Act will face fines. Violations involving banned AI applications can incur fines of up to 7% of global annual turnover, breaches of other obligations up to 3%, and the supply of incorrect information up to 1.5%.

Next Steps

Most of the AI Act’s rules will start applying on 2 August 2026. However, prohibitions on AI systems deemed to present an unacceptable risk take effect six months after entry into force, and the rules for general-purpose AI models apply after 12 months.

To bridge the transition period, the Commission has launched the AI Pact, which encourages developers to voluntarily adopt key AI Act obligations ahead of the legal deadlines. The Commission is also developing guidelines on implementing the AI Act, as well as co-regulatory instruments such as standards and codes of practice. A call for expressions of interest to participate in drafting the first general-purpose AI Code of Practice has been opened, and a multi-stakeholder consultation is underway to gather input on the Code.