EU AI Act: Key Provisions and Implications for Businesses
From theoretical concepts and early experiments to powering everything from self-driving cars to medical diagnosis tools, Artificial Intelligence (AI) has come a long way. It is revolutionising the way businesses operate, from streamlining customer service to automating complex decision-making.
This rapid evolution underscores the need for regulation and clear guidelines on how AI tools are developed and how they operate. The EU's AI Act is the world's first comprehensive legislation in this direction.
This ground-breaking law brings structure to the AI landscape, promoting safety and ethics in both the development and the deployment of these systems for the benefit of both businesses and citizens. This legislation introduces a risk-based regulatory framework, providing much-needed clarity for AI developers and businesses utilising AI solutions.
This article offers an update on the EU AI Act's status, a breakdown of its key provisions, a handy overview of the global AI regulatory landscape, and actionable steps businesses can take to ensure compliance while harnessing the potential of AI responsibly.
Where we stand: The EU AI Act's current status
The EU AI Act was formally approved in March 2024, following a multi-year legislative process. Its provisions are being introduced in phases to give businesses and developers time to adapt, with most of the Act becoming fully applicable 24 months after entry into force and the remaining obligations applying by 36 months. The following timeline outlines the next steps and key implementation dates:
Entry into force: The Act will become legally binding 20 days after publication in the Official Journal of the EU (expected in May or June 2024).
Phased implementation: The Act's provisions will roll out in stages to allow businesses time to adapt:
Immediate Bans (6 months): Unacceptable-risk AI systems will be banned six months after the Act enters into force.
Codes of Practice (9 months): Codes of practice drafted by the EU AI Office will apply nine months after the Act enters into force.
General-Purpose AI (12 months): Rules for general-purpose AI models subject to transparency requirements (such as GenAI models) will apply 12 months after entry into force.
High-Risk Systems (listed in Annex III) (24 months): The strictest requirements, covering high-risk systems listed in Annex III of the Act (biometrics, critical infrastructure, education, employment, access to essential public services, law enforcement, immigration, and administration of justice), will apply 24 months after entry into force.
All other High-Risk Systems (36 months): Obligations for the remaining high-risk systems will apply 36 months after entry into force. These include regulated products such as medical devices, industrial machinery, toys, and cars, as well as AI systems used as safety components of regulated products or equipment.
Ongoing Review: The EU Commission will regularly review and potentially update the list of prohibited AI systems and high-risk categories.
Global AI regulatory landscape: A comparative analysis
The EU AI Act is just one example of how governments around the world are grappling with the challenge of regulating AI. Here's a brief overview of the AI regulatory landscape in other major economies and how each compares with the EU's.
EU: AI Act (Comprehensive Regulation)
Focus: Comprehensive, risk-based framework with a focus on protecting fundamental rights.
Scope: Wide-ranging, affecting the deployment of AI systems from both domestic and foreign developers.
Key Features: Bans on unacceptable-risk AI, strict requirements for high-risk systems, transparency rules, and regulatory sandboxes.
UK: Evolving framework, principles-based approach
Focus: Guided by high-level principles on AI ethics, fairness, and safety.
Scope: A lighter regulatory touch than the EU's, following an ‘agile’ strategy that relies on existing sector-specific regulators, giving them increased responsibility to monitor the advancement of AI. The approach also emphasises self-regulation and industry-led best practices.
Key Features: National AI Strategy prioritises innovation, while guidance documents offer recommendations on responsible AI development.
US: Patchwork of sector-specific regulations, AI Risk Management Framework (non-binding)
Focus: Management of AI risks through sector-specific regulations, voluntary guidelines, and an emphasis on aligning AI with democratic values.
Scope: Less centralised than the EU, with states and agencies regulating AI within their domains.
Key Features:
Patchwork of sector-specific rules
Non-binding AI Risk Management Framework for trustworthiness and accountability
Executive Order outlining principles for responsible AI development.
Canada: Artificial Intelligence and Data Act (Bill)
Focus: Proposed legislation seeks to establish a risk-based regulatory framework for AI, emphasising data governance and responsible AI practices.
Scope: Aims to regulate the development and use of AI systems deemed high-risk.
Key Features: If passed, the Act would include requirements for risk assessments, impact assessments, and the potential creation of an AI regulator.
China: Regulations focused on data governance and algorithmic oversight
Focus: With no overarching regulation like the EU AI Act, China's framework regulates specific AI applications (such as GenAI) and prioritises data security, control of information, and alignment of AI with national goals, while balancing AI innovation against measures to prevent social and economic harm.
Scope: The regulatory framework encompasses content moderation, data protection, and algorithmic governance, targeting various aspects of AI technology use.
Key Features:
Algorithmic governance supplements existing laws.
Companies must ensure non-discrimination, conduct security assessments for specific algorithms (e.g., generative AI), and file algorithmic information with the Cyberspace Administration of China (CAC).
Chinese regulators are enhancing tools to understand and monitor algorithms, including training protocols, data sets, parameters, and mechanisms.
The Evolution from GDPR: Similarities and key differences
The EU's General Data Protection Regulation (GDPR) laid the groundwork for responsible data practices, setting precedents for subsequent legislation in the realm of digital governance. The EU AI Act builds upon this foundation with a focus on the broader risks and ethical concerns arising from the development and use of AI systems. Similarities and key differences between these two landmark regulations are highlighted below:
Similarities:
Foundation of Protection: Both the GDPR and AI Act prioritise the protection of individuals and their fundamental rights.
Emphasis on Accountability: Both emphasise transparent record-keeping, documentation, and the company's responsibility to demonstrate compliance.
Extraterritorial Reach: Both regulations can apply to businesses outside the EU if their activities impact individuals within the EU.
Differences:
Focus: GDPR centres on personal data protection; the AI Act addresses broader societal risks posed by AI systems.
Scope: GDPR applies to the processing of personal data; the AI Act regulates the development and use of AI systems regardless of personal data involvement.
Governance: GDPR relies on Data Protection Authorities; the AI Act introduces a risk-based classification system with stricter pre-market controls for high-risk AI.
Enforcement: GDPR focuses on fines by Data Protection Authorities; the AI Act involves stricter penalties and new oversight structures with the EU AI Office at the helm.
A deep dive into the EU AI Act
Understanding the definition of AI, along with the Act's main provisions, is crucial for businesses. In this section, we'll dissect the six key characteristics of an AI system as defined in the EU AI Act, the Act's scope, and the risk-based categorisation of AI systems.
1. Definition of AI
The EU AI Act provides a comprehensive definition to clarify which systems fall under its purview.
“‘Artificial intelligence system’ (AI system) means a system that is designed to operate with a certain level of autonomy and that, based on machine and/or human-provided data and inputs, infers how to achieve a given set of human-defined objectives using machine learning and/or logic- and knowledge-based approaches, and produces system-generated outputs such as content (generative AI systems), predictions, recommendations or decisions, influencing the environments with which the AI system interacts.”
Based on the above definition, an AI system exhibits the following characteristics:
Machine-based: Relies on computers, digital hardware, or software implementations.
Designed to operate with autonomy: Functions with varying degrees of independence, from basic automation to self-learning capabilities.
Adaptiveness: Can evolve its behaviour based on data inputs and feedback, potentially changing its actions without explicit re-programming.
Data-driven: Takes input data (structured or unstructured), analyses it and uses the resulting insights to generate outputs.
Goal-oriented: Actions are designed to achieve specific objectives, whether pre-determined or learned through data patterns.
Impact on environments: Outputs can influence both physical environments (e.g., self-driving cars) and virtual ones (e.g., recommendation systems).
2. Scope of Application
The EU AI Act has a wide-reaching scope, aiming to regulate the development and use of AI systems within the EU market:
Geographic scope: Applies to AI systems placed on the market or put into service within the EU, regardless of where the developer or provider is located.
Embedded AI: Encompasses AI components that are integral parts of larger products or systems (e.g., AI-powered medical devices).
Indirect impacts: The Act can apply to AI systems used outside the EU if their outputs affect individuals within the EU.
3. Risk Categories
The risk-based framework is central to the EU AI Act. AI systems are classified into the following categories, each with corresponding regulatory requirements (a simple classification sketch follows this list):
Unacceptable Risk: Systems with clear threats to fundamental rights and safety are banned entirely. Examples include AI-driven social scoring, manipulative AI targeting vulnerable groups, and real-time biometric surveillance in public spaces (with exceptions).
High-Risk: Systems used in critical sectors face strict requirements before launch and during use, including:
Rigorous risk assessments and mitigation plans
High-quality data sets to minimise bias
Detailed documentation and logging of system activity
Human oversight mechanisms to prevent unintended consequences
Robust cybersecurity measures
Limited Risk: Systems with specific transparency requirements. Users must be clearly informed when interacting with AI (e.g., chatbots, deepfake generators, emotion recognition systems).
Minimal Risk: Most AI systems, such as spam filters, fall into this category. They aren't subject to specific AI Act rules but must comply with existing laws and uphold ethical principles.
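To make these tiers concrete, here is a minimal Python sketch that models the four categories as an enum and performs a first-pass triage based on a system's primary use case. The keyword sets and the `classify` function are illustrative assumptions, not the Act's legal test; real categorisation requires assessing each system against the Act's actual provisions.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"   # banned outright
    HIGH = "high"                   # strict pre-market and in-use requirements
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # no AI-Act-specific obligations

# Illustrative keyword sets only -- real categorisation requires legal
# assessment against Article 5 and Annex III, not string matching.
PROHIBITED_USES = {"social scoring", "subliminal manipulation"}
ANNEX_III_AREAS = {"biometrics", "critical infrastructure", "education",
                   "employment", "essential public services",
                   "law enforcement", "immigration", "justice"}
TRANSPARENCY_USES = {"chatbot", "deepfake generation", "emotion recognition"}

def classify(use_case: str) -> RiskTier:
    """First-pass triage of a system by its primary use case."""
    if use_case in PROHIBITED_USES:
        return RiskTier.UNACCEPTABLE
    if use_case in ANNEX_III_AREAS:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("employment"))  # RiskTier.HIGH
print(classify("chatbot"))     # RiskTier.LIMITED
```

Even a crude mapping like this is useful for flagging which systems in an inventory need expert review first.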
4. General-Purpose AI Models
These versatile AI models can be adapted to diverse tasks. The Act sets out additional rules for them, with the strictest applying to models posing systemic risk (a model-card-style disclosure sketch follows this list):
Transparency: Providers must disclose the model's purpose, capabilities, limitations, and potential risks.
Bias mitigation: Measures should be taken to reduce unintended discrimination or unfair outcomes.
Human oversight: Even if highly automated, these systems require mechanisms for humans to intervene, monitor, and if necessary, override them.
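As an illustration of what such disclosure might look like in practice, below is a minimal, model-card-style summary in Python. The model name, provider, and field names are hypothetical assumptions; the Act does not prescribe this exact format.

```python
import json

# A minimal, model-card-style transparency summary for a general-purpose
# AI model. The model name, provider, and fields are hypothetical.
model_disclosure = {
    "model_name": "example-gpai-1",
    "provider": "Example AI Ltd",
    "intended_purpose": "general-purpose text generation",
    "capabilities": ["summarisation", "translation", "question answering"],
    "known_limitations": ["may produce factual errors",
                          "knowledge limited to training cut-off"],
    "potential_risks": ["biased outputs", "misuse for disinformation"],
    "bias_mitigation": "dataset filtering and post-training evaluation",
    "human_oversight": "operators can review and override outputs",
}

print(json.dumps(model_disclosure, indent=2))
```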
5. Regulatory Sandboxes
The AI Act promotes safe innovation through regulatory sandboxes: controlled frameworks that allow experimentation with innovative products, services or business models under a regulator's supervision:
Controlled environments: Developers can test high-risk AI systems in close collaboration with regulatory authorities.
Risk management: Allows for experimentation and iteration under supervision, helping to refine AI systems before broader deployment.
Balancing innovation and safety: Sandboxes aim to foster responsible AI development while minimising potential harms.
Calls to action for businesses in the wake of the EU AI Act
As the regulatory landscape surrounding AI continues to evolve, businesses must take proactive steps to ensure compliance and uphold ethical standards. With the introduction of the EU AI Act, companies operating within the European Union face a new framework that emphasises responsible AI development and deployment. Here's a strategic roadmap for businesses to navigate this regulatory landscape effectively:
Map your AI Systems:
Thorough Inventory: Create a comprehensive list of all AI systems deployed in your business, including internally developed tools, third-party solutions, and AI components embedded in other products. If you already govern your data and IT robustly, you will likely have a Record of Processing Activities (RoPA) for personal data processing and an IT asset inventory; these are strong foundations for your AI inventory.
Risk Categorisation: Meticulously assess each system against the Act's risk framework (Unacceptable, High-Risk, Limited Risk, Minimal Risk), consulting expert guidance for complex cases. A simple inventory record structure is sketched below.
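As a starting point, an AI inventory can be as simple as one structured record per system. The Python sketch below shows one possible shape; all field names and the example entry are illustrative assumptions, not a format prescribed by the Act.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in an AI system inventory (all fields are illustrative)."""
    name: str
    owner: str                      # accountable business function
    vendor: str | None              # None for internally developed tools
    purpose: str                    # what the system is used for
    embedded_in: str | None         # parent product, if any
    personal_data: bool             # if True, link to the RoPA entry
    risk_tier: str = "unassessed"   # unacceptable / high / limited / minimal
    last_reviewed: date | None = None

inventory: list[AISystemRecord] = [
    AISystemRecord(
        name="CV screening assistant",      # hypothetical example
        owner="HR",
        vendor="ExampleVendor Ltd",         # hypothetical vendor
        purpose="shortlisting job applicants",
        embedded_in=None,
        personal_data=True,
        risk_tier="high",                   # employment is an Annex III area
        last_reviewed=date(2024, 5, 1),
    ),
]
```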
Prioritise High-Risk Systems:
Urgent Compliance: Dedicate immediate resources to ensuring compliance for high-risk systems. This involves:
Data Governance: Establish high-quality data collection, storage, and usage practices to minimise bias and ensure accuracy.
Risk Management: Conduct thorough risk assessments, identify potential harms, and implement mitigation strategies.
Robust Testing: Put systems through rigorous testing under various conditions to verify safety and reliability.
Ongoing Monitoring: Establish systems for post-deployment surveillance to identify and address any emerging risks or unintended consequences. A minimal decision-logging sketch follows this list.
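To illustrate the documentation and logging requirement, here is a minimal Python sketch of a per-decision audit log for a high-risk system. The field names and function are illustrative assumptions; the Act's actual record-keeping obligations should be mapped with legal counsel.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_audit")

def log_decision(system_id: str, input_ref: str, output: str,
                 model_version: str, human_reviewed: bool) -> None:
    """Append one traceable, timestamped record per automated decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "input_ref": input_ref,   # a reference, not the raw personal data
        "output": output,
        "model_version": model_version,
        "human_reviewed": human_reviewed,
    }
    logger.info(json.dumps(record))

# Hypothetical usage for the CV screening example above:
log_decision("cv-screener-01", "application-7421", "shortlisted",
             model_version="2.3.1", human_reviewed=True)
```

Structured, timestamped records like these make it far easier to demonstrate compliance and to investigate unexpected behaviour after deployment.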
Partner with Experts:
Legal Counsel: Engage lawyers specialising in AI law to interpret the Act's provisions and advise on compliance strategy.
AI Ethics Specialists: Consult experts in AI ethics to ensure systems align with responsible AI principles and address potential biases or unfair outcomes.
Practitioners: For a more holistic and efficient approach, engage AI governance and data protection specialists like Trace, who can help you implement and manage AI systems with maximum risk mitigation and operationalise your governance.
Embrace Transparency:
Clear Communication: Even for limited or minimal-risk AI, develop mechanisms to clearly communicate to stakeholders (customers, employees, etc.) when they are interacting with an AI system.
Understandable Explanations: Prepare explanations of how your AI systems work, with the level of detail tailored to different audiences, and be open about potential limitations. A simple chatbot disclosure sketch follows.
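As a simple illustration, a chatbot can surface a disclosure at the start of every session. The wording and function below are illustrative assumptions; the Act requires that users be informed, but does not mandate specific text.

```python
# Illustrative disclosure text -- not wording mandated by the Act.
AI_DISCLOSURE = (
    "You are chatting with an automated AI assistant, not a human. "
    "Answers may contain mistakes."
)

def open_session(greeting: str = "How can I help you today?") -> str:
    """Start a chat session with the AI disclosure shown up front."""
    return f"{AI_DISCLOSURE}\n\n{greeting}"

print(open_session())
```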
Upskill Your Workforce:
AI Literacy across the Board: Invest in education programs covering AI basics for employees at all levels, fostering informed decision-making and ethical AI use.
Technical Training: Provide specialised training for those handling data management, development, and testing of high-risk AI systems.
The EU AI Act represents a pivotal moment for businesses, signalling the imperative to embrace responsible AI practices. By proactively navigating the regulatory landscape and prioritising ethical considerations, companies can not only ensure compliance but also foster innovation and earn the trust of customers and stakeholders. Now is the time for businesses to seize the opportunity to lead the way in ethical AI development, setting a standard for responsible innovation that drives sustainable growth and societal benefit.
This article was authored by Ritesh Katal, CIPP/E. It should not be taken as legal advice.
About Trace:
Trace helps global companies navigate global data protection and AI regulations, implementing practical steps for a risk-based and pragmatic approach to data governance in line with the relevant laws and frameworks. Our team includes certified privacy and AI governance professionals.
Looking for support with data governance framework design, data sharing guidance, AI governance and applied Privacy by Design for your company? Book your free consultancy call now.