EU AI Act – The 5 key articles you need to know now! Don’t miss the 4th one!

On February 2, 2025, the first provisions of the EU Artificial Intelligence Act (AI Act) officially came into force. Has the world crumbled? Nope, still here. Should you panic? Not unless your morning coffee was brewed by a rogue AI. Is it important? Absolutely!
The EU AI Act is the newest legal brainchild of the European Union, finally stepping onto the stage (or at least into force). This landmark regulation is like a rulebook for AI, making sure that our robot overlords—I mean, helpful AI assistants—are ethical, safe, and not plotting world domination.
More seriously, the regulation establishes a comprehensive legal framework to ensure the responsible development of AI technologies across the European Union. Its purpose? To improve the functioning of the internal market and encourage human-centric, trustworthy AI, all while safeguarding health, safety, and fundamental rights enshrined in the Charter. In short: AI gets rules, we get peace of mind. Sounds like a fair trade, right?
The starting five: Article 1
Like basketball, we have a starting five—the first five articles of the EU AI Act now in force, laying the groundwork for AI regulation.
Article 1 sets the stage by defining the Act’s purpose: improving the internal market’s functioning while promoting human-centric and trustworthy AI. It also ensures a high level of protection for health, safety, and fundamental rights—because no one wants a world where AI is playing fast and loose with our well-being!
Usually, regulations come with recitals—essentially, the backstory behind the law. Recital 3 explains why the EU AI Act exists: to create a harmonized, trustworthy approach to AI across the Union and prevent a messy patchwork of national rules. Since AI is used across sectors and borders, inconsistent regulations could lead to legal uncertainty and slow down innovation. By establishing uniform obligations under Article 114 of the Treaty on the Functioning of the European Union (TFEU), the Act makes sure AI can circulate freely while protecting fundamental rights and public interests. It also introduces specific safeguards for AI in law enforcement (think biometric identification and risk assessments), ensuring compliance with Article 16 TFEU and keeping personal data in check with help from the European Data Protection Board (EDPB).
Meanwhile, Recital 7 reminds us why AI needs a safety net. To keep things safe and sound (and avoid AI-induced chaos), the EU is rolling out common rules for high-risk AI systems. Think of it as a seatbelt for AI—ensuring public interests like health, safety, and fundamental rights are well protected, no matter where AI is deployed. Because let’s face it, nobody wants a rogue algorithm making life-or-death decisions without some solid guardrails!
Article 2: To be or to be? Yes, you are
Article 2 defines the scope of the regulation while clearly dividing responsibilities between providers, deployers, importers, and distributors. As is now tradition, the EU extends its reach beyond its borders, ensuring that companies located outside the EU whose AI output is used within the Union are not exempt from compliance. This is said to ensure a level playing field and effective protection of the rights and freedoms of individuals across the Union (see Recital 21). The golden era of a tech playground free from regulations is officially over.
That said, the EU does know when to step back. Some AI systems are left out of the regulation's grasp, such as those used for military, defense, or national security purposes—territory the EU wisely avoids. The exclusion is justified both by the fact that national security remains the sole responsibility of Member States and by the specific nature and operational needs of national security activities and the specific national rules applicable to those activities (see Recital 24). AI models developed strictly for scientific research and development are also exempt because, well, let's not stifle innovation everywhere; the EU respects freedom of science and should not undermine research and development activity (see Recital 25). AI systems employed by public authorities in third countries for international law enforcement and judicial cooperation remain untouched, provided they meet fundamental rights protections—got to keep things friendly with the IMF, WTO, WHO, UN et al., but most importantly, Member States using them remain accountable for ensuring their use complies with Union law (see Recital 22). Finally, AI systems released under free and open-source licenses are mostly off the hook, unless they qualify as high-risk or are explicitly prohibited—after all, why make life harder for those championing open technology?
Article 3 - The one you read once in a while
Well, Article 3 is the terminology one. We always need one, to make sure we at least agree on what we disagree about. Among the key definitions, the "AI system" is defined loosely enough to encompass unforeseen developments, formats, and applications of AI. We also get clearer definitions of the roles of providers, deployers, distributors, and operators, ensuring everyone knows their responsibilities in this evolving landscape.
On top of that, the Act introduces the concept of 'conformity assessment,' which means the process of demonstrating whether the requirements set out in Chapter III, Section 2, relating to a high-risk AI system, have been fulfilled. Did you know that you will need to assess your use of AI? Yes, yet another assessment to add to the compliance checklist!
Article 4 - The one you should really know about
Talking about assessments, Article 4 of the Act establishes the need for AI governance and accountability for all players—including providers (most likely your company, if it uses AI). Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education, and training, the context in which the AI systems are used, and the persons or groups impacted by these systems. And this obligation applies now! The least you should do? Create AI guidelines and a list of forbidden uses. If you need help, we have a template—just contact us, and we'll send you one.
Article 5 - Our MVP, most valuable prohibition
Last but not least, the EU is drawing a firm line in the sand when it comes to AI systems that pose unacceptable risks to fundamental rights, democracy, and human dignity. In short, some AI applications have officially landed in the "thanks, but no thanks" category. Here’s what’s strictly off-limits:
- Subliminal or deceptive techniques that manipulate individuals beyond their consciousness, nudging them into decisions they wouldn’t otherwise make, often with harmful consequences.
- Exploitation of vulnerabilities based on age, disability, or social/economic status to distort behavior and cause harm, as taking advantage of people’s weaknesses is not acceptable.
- Social scoring systems that evaluate or classify individuals based on their behavior, leading to unjustified or disproportionate treatment. Real life isn’t a dystopian episode of Black Mirror.
- Predictive policing AI that assesses or predicts the risk of someone committing a crime based solely on profiling or personality traits. Minority Report was a movie, not a policy guide.
- Scraping of facial images to create or expand facial recognition databases without consent, as privacy still matters, even in the age of AI.
- Emotion recognition AI in workplaces and educational institutions, unless it’s for medical or safety reasons. Employers do not need AI to tell them if their employees are disengaged in a long meeting.
- Biometric categorization that infers sensitive attributes such as race, political opinions, religious beliefs, or sexual orientation. AI should not be in the business of labeling people in ways that could lead to discrimination.
- Real-time remote biometric identification in public spaces for law enforcement, except in specific and strictly necessary cases such as locating missing persons, preventing imminent threats, or investigating serious crimes. Mass surveillance is not on the EU’s wishlist.
Each instance of real-time remote biometric identification for law enforcement must be pre-authorized by a judicial or independent authority, except in urgent cases where retrospective approval is required within 24 hours. Additionally, law enforcement agencies must notify market surveillance and data protection authorities about each use case. If an AI project involves any of the above, it is probably time to rethink the approach.
A regulatory milestone for AI in Europe
With these initial chapters now in effect, the EU has taken a significant step toward ensuring the responsible use of AI. The regulation strikes a balance between fostering innovation and protecting citizens' rights, setting a global precedent for AI governance. The era of unregulated AI is over, and the time to align with these new rules is now.
As the AI Act continues to roll out in phases, businesses and organizations must stay informed and adapt to the evolving regulatory landscape. More provisions will come into force in the coming months, so keeping up to date is key.
Want to stay ahead of the curve?
Join our next webinar, where we’ll dive deeper into the AI Act, unpack what it means for your business, and give you practical steps to ensure compliance while fostering innovation.
Register now and be part of the conversation!