It’s done. The European Artificial Intelligence Act (AI Act) has finally become a reality. After years of back-and-forth, the time came on 13 March 2024: The European Parliament approved the AI Act, albeit with last-minute changes to the final text of the law. This milestone in AI legislation is now official – and the AI Act has been in force since 1 August 2024.
This article will inform you about what the new EU law means for medical devices.
What is the AI Act?
The European AI Act has been making headlines since the EU Commission published the world’s first proposal for a law aimed at regulating Artificial Intelligence in 2021: The AI Act. Since then, the document, spanning several hundred pages, has seen many versions and nearly failed in 2023. However, in December 2023, EU countries finally reached a compromise. A revised text of the AI Act was already released in January 2024.
The EU published the final version of the AI Act in the Official Journal of the European Union on 12 July 2024. It has undergone further changes, though these are relatively minor compared to the January version. The AI Act has been officially in force since 1 August 2024.
Is this also relevant in New Zealand and Australia?
The AI Act is relevant not only for those who wish to do business with the EU in the future but also beyond. As the world’s first general law on AI, it is viewed as a forerunner and will likely inspire subsequent regulations in other countries. Engaging with the AI Act now offers a head start when similar laws are introduced in other jurisdictions.
What does the AI Act regulate?
The AI Act broadly regulates AI, adopting a risk-based approach: AI systems with medium risk face some restrictions, such as transparency obligations. However, the bulk of the text deals with high-risk AI, which is subject to particularly stringent regulations. Violations can lead to fines of up to EUR 35 million or 7% of total worldwide annual turnover, whichever is higher (Art. 99).
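The fine regime scales with company size. As a purely illustrative arithmetic sketch (the 7%/EUR 35 million cap applies to the most serious violations, i.e. the prohibited practices of Article 5; lower caps apply to other infringements):

```python
# Illustrative only: the cap for the most serious violations (Art. 99)
# is EUR 35 million or 7 % of total worldwide annual turnover,
# whichever is HIGHER.

def max_fine_eur(annual_turnover_eur: float) -> float:
    """Upper bound of the fine for prohibited-practice violations."""
    return max(35_000_000, 0.07 * annual_turnover_eur)

# A company with EUR 2 billion in turnover faces a cap of EUR 140 million,
# since 7 % of its turnover exceeds the EUR 35 million floor.
print(max_fine_eur(2_000_000_000))  # 140000000.0
```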
Definition of AI in the AI Act
The AI Act’s definition of AI evolved over the course of the legislative process and, in its final version, defines an ‘AI system’ in Art. 3(1) as follows:
‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
The core of this definition is autonomy, meaning the system can operate to a certain extent independently. Therefore, the criteria to qualify as AI are:
- Machine-based system
- Designed to operate with autonomy
- Infers, from the input it receives, how to generate outputs.
The phrase “may exhibit adaptiveness after deployment” clarifies that adaptability after deployment is not necessarily required to qualify as AI.
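To make the ‘infers from the input’ criterion more tangible, here is a deliberately simplified Python sketch contrasting a hand-written rule with a learned model. This illustrates the technical distinction only, not the legal boundary; the heart-rate threshold, the toy data, and the use of scikit-learn are our own assumptions:

```python
# Simplified contrast, not a legal test: the Art. 3(1) definition hinges on
# whether a system INFERS how to generate outputs from its input, rather
# than merely executing rules a human wrote down.

from sklearn.linear_model import LogisticRegression

# (1) A fixed rule: every output is explicitly programmed by a human.
#     Taken on its own, this is generally not what Art. 3(1) describes.
def rule_based_triage(heart_rate: int) -> str:
    return "alert" if heart_rate > 120 else "ok"

# (2) A learned model: the mapping from inputs to outputs is inferred
#     from data -- the core of the Art. 3(1) definition.
X = [[60], [72], [95], [130], [150], [165]]   # toy heart-rate readings
y = ["ok", "ok", "ok", "alert", "alert", "alert"]
model = LogisticRegression().fit(X, y)

print(rule_based_triage(140))     # behaviour written by hand
print(model.predict([[140]])[0])  # behaviour inferred from training data
```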
Who is affected by the AI Act?
The AI Act applies to anyone who places AI systems on the market, puts them into service or uses them in the EU, which creates a wide range of affected parties:
- Providers
- Deployers
- Importers
- Distributors
- Authorised representatives of providers
- Affected individuals located in the Union.
Does the AI Act apply to medical devices?
Medical devices can also fall under the AI Act if they incorporate AI or are themselves AI systems, and in most cases they land in the high-risk category.
Whether a system is considered high risk under the AI Act is determined by the criteria in Article 6. An AI system is classified as high risk if both of the following conditions are met:
- The AI system itself, or the product in which the AI system is used as a safety component, belongs to one of the product categories listed in Annex I (this includes medical devices and IVDs as per numbers 11 and 12).
- The product is subject to a conformity assessment by a third party in accordance with the listed regulations.
Therefore, most medical devices of Class IIa and above, and IVDs of Class B and above, that utilise AI will be categorised as high-risk products under the AI Act.
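The following sketch expresses this decision logic in Python. It is a simplification of Article 6(1) based on the two criteria above; the class-to-assessment mapping is abbreviated (some Class I devices, e.g. sterile or measuring devices, also require notified body involvement) and the function is a hypothetical illustration, not an official classification tool:

```python
# Hypothetical sketch of the Art. 6(1) decision logic for medical devices.
# Simplification: 'third-party conformity assessment required' is reduced
# to the MDR/IVDR risk class, ignoring special cases such as sterile or
# measuring Class I devices.

NOTIFIED_BODY_REQUIRED = {
    "I": False, "IIa": True, "IIb": True, "III": True,   # MDR classes
    "A": False, "B": True, "C": True, "D": True,         # IVDR classes
}

def is_high_risk_under_ai_act(uses_ai: bool, device_class: str) -> bool:
    """Both Art. 6(1) conditions must hold: the product falls under Annex I
    (medical devices and IVDs do, per Nos. 11 and 12) AND it requires a
    third-party conformity assessment under that legislation."""
    covered_by_annex_i = True  # given for medical devices and IVDs
    return uses_ai and covered_by_annex_i and NOTIFIED_BODY_REQUIRED[device_class]

print(is_high_risk_under_ai_act(uses_ai=True, device_class="IIa"))  # True
print(is_high_risk_under_ai_act(uses_ai=True, device_class="I"))    # False
```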
What requirements must AI systems meet under the AI Act?
Depending on the risk class, AI systems must fulfil different requirements.
Minimal and Medium Risk
The regulation categorises AI systems with minimal risk, such as simple game AIs, as unproblematic, with no special authorisation requirements. In contrast, certain medium-risk systems are explicitly mentioned and regulated according to their risk level. These include AIs that interact with humans and can, for example, recognise emotions or generate text, image, audio, and video content.
Prohibited AI Systems (Article 5)
According to Article 5 of the AI Act, certain AI systems are entirely prohibited. These include AI that uses manipulative techniques, AI that exploits vulnerabilities due to age, disability, or a person’s social or economic situation, social scoring systems, and certain biometric identification and predictive policing applications.
Requirements of the AI Act for High-Risk Medical Devices
Medical devices classified as high-risk under the AI Act must meet stringent requirements, many of which are already applicable to medical devices:
- Risk management system (Art. 9)
- Technical documentation (Art. 11 and 18)
- Cybersecurity (Art. 15)
- Quality management system (Art. 17)
- Conformity assessment by a notified body (Art. 16 and Chapter III, Section 5)
- Corrective actions and reporting obligations (Art. 20)
- Cooperation with authorities (Art. 21)
Additional AI-specific requirements include:
- High-quality data for training and validating the AI system (Art. 10)
- Protection of training data against manipulation (Art. 15)
- Records and logs of processes and events (Art. 12) – see the logging sketch after this list
- High standards of transparency (Art. 13, Art. 16)
- Human oversight of the system (Art. 14)
- Special obligations for importers, distributors, deployers, and others along the value chain (Art. 22 ff.)
- Fundamental rights impact assessment for certain areas of use (Art. 27)
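To give a sense of what the record-keeping requirement of Art. 12 can mean in practice, here is a minimal, hypothetical sketch of automatic event logging around an inference call. The wrapper, field names, and log format are our own assumptions and are not prescribed by the AI Act:

```python
# Minimal sketch in the spirit of Art. 12: every inference is recorded
# with a timestamp, model version, input reference and output, so events
# remain traceable over the system's lifetime. Illustrative only.

import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_event_log.jsonl",
                    level=logging.INFO, format="%(message)s")

def logged_prediction(predict_fn, model_version: str, input_id: str, features):
    """Run an inference and append a traceable record to the event log."""
    output = predict_fn(features)
    logging.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_id": input_id,  # a reference, not raw patient data
        "output": output,
    }))
    return output

# Usage with a stand-in model:
logged_prediction(lambda f: "alert" if f[0] > 120 else "ok",
                  model_version="1.0.3", input_id="case-0042", features=[140])
```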
Conformity Assessment Procedure
As the requirements for high-risk AI products in the MDR/IVDR and the AI Act overlap, it’s planned that the conformity assessment of AI medical devices under the MDR/IVDR and the AI Act will be carried out by the same Notified Body. This arrangement aims to streamline the process for manufacturers and other stakeholders. However, it’s contingent on Notified Bodies having the requisite expertise and personnel.
When do the AI Act’s provisions apply?
The AI Act came into force on 1 August 2024. Six months later, on 2 February 2025, the first provisions of Chapters I and II (general provisions and prohibited practices) became binding. Chapter III Section 4, Chapter V, Chapter VII, Chapter XII, and Article 78 will apply from 2 August 2025 (except for Article 101). The provisions on high-risk systems under Article 6(1) and the associated obligations will apply from 2 August 2027, i.e. 36 months after entry into force. All other provisions will apply from 2 August 2026.
AI systems that have been placed on the market before the provisions become binding can benefit from transitional periods before needing to comply with the new rules.
Conclusion
The AI Act represents a monumental step in global AI legislation. While it may seem like an additional burden for many innovators wishing to bring their products to market, it’s essential to remember that similar laws will likely be introduced elsewhere. Compliance with the AI Act now could offer a competitive advantage in future markets. The Act has been both praised and criticised, but its final form appears to offer sufficient flexibility for research and innovation. The real challenge will be its implementation, considering the complexities seen with the MDR and IVDR. Nonetheless, this legislative step is undeniably significant.
Regulatory changes such as the AI Act do not have to be a nuisance. If you want to learn how the right regulatory strategy can save you time and money, check out our new Regulatory Strategy course.