This AI Rule Changes Everything (Starting 2025)
The European Union is taking a bold step toward shaping the future of AI. More than just a set of rules, this regulation represents a fundamental shift in how we think about artificial intelligence. What does it mean for individuals, businesses, and governments to build AI that protects human rights and fosters innovation? Here is how this new AI era will come into being, through a phased approach that runs from 2025 through 2030.
From 2 February 2025:
- Prohibited AI Practices Take Effect: The rules prohibiting certain AI practices start to apply. These include:
- AI systems using subliminal or manipulative techniques to distort a person’s behavior in a harmful way.
- AI systems exploiting vulnerabilities of people due to their age, disability, or socio-economic situation.
- AI systems engaging in social scoring of individuals.
- AI systems assessing or predicting the risk that a person will commit a criminal offence based solely on profiling or personality traits.
- AI systems performing untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases.
- AI emotion-recognition systems in workplace and educational settings.
- AI systems for biometric categorization based on sensitive traits.
- Real-time remote biometric identification in public spaces for law enforcement, except in certain very specific and justified cases.
From 2 August 2025:
- Governance and Conformity Infrastructure Operational: The European Artificial Intelligence Board (AI Board) and the structures for conformity assessment (for ensuring AI systems meet regulations) must be set up and working.
- Obligations for General-Purpose AI Models: The obligations for providers of general-purpose AI models (such as large language models) take effect.
- Cybersecurity: ENISA will play a role in the cybersecurity of AI systems.
- Sanctions Regime: Member states must have laid down rules on sanctions, including administrative fines, for violations of this Regulation.
From 2 August 2026:
- Full Regulation in Effect: The majority of the AI Regulation goes into full effect. This means:
- High-risk AI systems will be regulated.
- Transparency obligations take effect: certain AI systems must identify that their outputs are AI-generated (see the illustrative sketch after this list).
- The bulk of the compliance and enforcement mechanisms become active.
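To make the output-marking requirement a bit more concrete, here is a minimal, purely illustrative Python sketch of what a machine-readable "AI-generated" disclosure attached to a model's output could look like. The Regulation does not prescribe any particular format; the JSON envelope, the field names, and the wrap_output helper are assumptions made for this example only.

```python
# Hypothetical sketch: the JSON envelope, field names, and helper below are
# illustrative assumptions, not a format or API defined by the AI Regulation.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class ProvenanceLabel:
    ai_generated: bool   # discloses that the content is synthetic
    system_name: str     # identifies the generating AI system
    generated_at: str    # ISO 8601 timestamp of generation


def wrap_output(content: str, system_name: str) -> str:
    """Attach a machine-readable disclosure to a piece of generated text."""
    label = ProvenanceLabel(
        ai_generated=True,
        system_name=system_name,
        generated_at=datetime.now(timezone.utc).isoformat(),
    )
    # Ship the disclosure alongside the content as a JSON envelope.
    return json.dumps({"content": content, "provenance": asdict(label)})


if __name__ == "__main__":
    print(wrap_output("Example model output.", "example-llm-v1"))
```

In practice, providers are more likely to lean on established content-provenance standards and watermarking techniques than on an ad-hoc envelope like this one, but the underlying idea is the same: the disclosure travels with the output in a form machines can detect.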
By 2 August 2027:
- AI systems that are components of large-scale IT systems: Systems already placed on the market or put into service before this date must be brought into compliance with the requirements of this Regulation by 31 December 2030.
- General-purpose AI models already on the market: Models placed on the market before 2 August 2025 must be brought into compliance with the requirements of this Regulation.
By 2 August 2030:
- AI systems used by public authorities: High-risk AI systems already in use by public authorities must meet the obligations set out in this Regulation.
Key Goals and Themes Throughout:
- Harmonization: To create a uniform legal framework for AI across the EU, preventing fragmented national rules.
- High Protection: To ensure a high level of protection for health, safety, and fundamental rights (like privacy, non-discrimination, freedom of expression) with AI.
- Risk-Based Approach: To regulate AI based on the risk it poses.
- Human-Centric Approach: To emphasize that AI should serve humans and respect their values and rights.
- Innovation: To support the development and deployment of trustworthy AI, while protecting people.
- Transparency: To require a certain level of openness in how some AI systems work.
- Market Access: To foster an ecosystem of public and private AI developers whose systems comply with EU values.
- Monitoring and Testing: Member states must establish regulatory sandboxes (test environments) for AI innovation and enable testing of AI systems under real-world conditions.
- Accountability: The roles and responsibilities of the different actors along the AI value chain are specified.
- Cooperation: The Regulation provides for cooperation between the European Union and member states in its implementation, as well as coordination and interaction with relevant stakeholders.
- Cybersecurity: The cyber-resilience of AI systems is improved.
- AI Literacy: The AI literacy of workers and people using AI systems is improved.
In Summary
The European Union is putting in place a comprehensive regulatory structure for AI: it starts with the prohibition of specific harmful practices in 2025, moves on to regulating high-risk AI and general-purpose AI models in 2026 and 2027, and reaches its final milestones in 2030, when systems used by public authorities must comply. The aim is to protect individuals and ensure a trustworthy, ethical AI ecosystem while simultaneously promoting innovation. It is a broad framework that will require significant changes from AI developers and providers within the EU and beyond.