It’s done. The European Artificial Intelligence Act (AI Act) has finally become a reality. After years of back-and-forth, the time came on 13 March 2024: The European Parliament approved the AI Act, albeit with last-minute changes to the final text of the law. This milestone in AI legislation is now official. The final nod from EU countries is seen merely as a formality, expected to take place in May 2024. Just a few months later, the first provisions of the AI Act will start to apply.
This article explains what the new EU law means for medical devices.
What is the AI Act?
The European AI Act has been making headlines since the EU Commission published the world’s first proposal for a law aimed at regulating Artificial Intelligence in 2021: the AI Act. Since then, the document, spanning several hundred pages, has gone through many versions and nearly failed in 2023. In December 2023, however, EU countries finally reached a compromise, and a revised text of the AI Act was released in January 2024. The final version, now confirmed by the EU Parliament, underwent several more changes. This article therefore reflects the latest iteration of the Act as of 13 March 2024, pending a final legal and linguistic review before publication.
Is this also relevant in New Zealand and Australia?
The AI Act is relevant not only for those who wish to do business with the EU in the future but also beyond. As the world’s first general law on AI, it is viewed as a forerunner and will likely inspire subsequent regulations in other countries. Engaging with the AI Act now offers a head start when similar laws are introduced in other jurisdictions.
What does the AI Act regulate?
The AI Act regulates AI broadly, adopting a risk-based approach: AI systems posing limited risk face comparatively light restrictions, such as transparency obligations, while the bulk of the text deals with high-risk AI, which is subject to particularly stringent rules. Violations can lead to significant fines of up to 7% of global annual turnover.
Definition of AI in the AI Act
The AI Act’s definition of AI has evolved over the course of the legislative process. In its final version, Art. 3 (1) defines an ‘AI system’ as follows:
‘AI system’ means a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
The core of this definition is autonomy, meaning the system can, to a certain extent, operate independently. The criteria to qualify as AI are therefore:
- Machine-based system
- Designed to operate with autonomy
- Infers, from the input it receives, how to generate outputs.
The phrase “may exhibit adaptiveness after deployment” clarifies that adaptability after deployment is not necessarily required to qualify as AI.
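To make this tangible, here is a minimal, purely hypothetical sketch (not a legal assessment): a scoring function whose weights are frozen after training. It does not adapt after deployment, yet it still infers an output from the input it receives, which is the core of the definition. Whether such a simple system would ultimately count as AI is a matter of interpretation; the point is only that adaptiveness after deployment is not the deciding factor.

```python
# Hypothetical illustration only: a model with weights frozen after training.
# It does not adapt after deployment, but it still infers an output
# (a recommendation) from the input it receives.

def risk_score(age: float, systolic_bp: float, biomarker: float) -> float:
    """Infer a score from the input using fixed, pre-trained weights."""
    weights = {"age": 0.02, "systolic_bp": 0.01, "biomarker": 0.5}  # no online learning
    return (weights["age"] * age
            + weights["systolic_bp"] * systolic_bp
            + weights["biomarker"] * biomarker)

def recommendation(score: float) -> str:
    """Generate an output that can influence decisions in the real world."""
    return "refer to specialist" if score > 2.0 else "routine follow-up"

print(recommendation(risk_score(age=67.0, systolic_bp=145.0, biomarker=3.1)))
```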
Who is affected by the AI Act?
The AI Act applies to everyone involved in manufacturing, providing, distributing, or using AI systems, establishing a broad range of affected parties:
- Providers
- Deployers
- Importers
- Distributors
- Authorised representatives of providers
- Affected individuals located in the Union.
Does the AI Act apply to medical devices?
The AI Act also applies to medical devices that contain AI or are themselves AI and that fall under the MDR or IVDR; such devices are typically classified as high risk. This follows from Article 6 of the AI Act, which specifies which AI systems are considered high risk. Article 6(1)(a) states:
(a) the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation legislation listed in Annex I
Since the MDR (Regulation (EU) 2017/745) and the IVDR (Regulation (EU) 2017/746) are both listed in Annex I, medical devices falling under these regulations that contain or are AI are classified as high risk under the AI Act, provided certain conditions are met. These include:
- The medical device falls under the MDR or IVDR, and
- The medical device contains AI intended as a safety component, or
- The medical device itself is AI.
Thus, many Software as a Medical Device (SaMD) products will likely fall under the AI Act. However, the health sector is subject to some specific rules and partial exemptions. For instance, AI for emotion recognition is prohibited in certain contexts under the AI Act but remains permissible for medical purposes, subject to conditions such as specific transparency obligations.
Exemptions
Not all AI systems fall under the AI Act. Important exemptions exist for research and development, where the AI Act does not apply to:
- AI systems or models developed and used solely for scientific research and development.
- Research, testing, or development activities concerning AI systems or models before they are marketed or used.
- AI systems released under free and open-source licenses (unless they are high risk, fall under the prohibited practices of Article 5, or are covered by the transparency obligations of Article 50).
What requirements must AI systems meet under the AI Act?
High-risk AI medical devices, that is, those that are AI or contain AI as a safety component, will need to meet stringent requirements to be allowed on the EU market. These include extensive testing, technical documentation, transparency, human oversight, risk management, quality management, and cybersecurity. High-risk AI systems must meet the following criteria:
- A risk management system maintained as a continuous, iterative process throughout the system’s lifecycle
- Training data requirements: For instance, Article 10 demands quality standards for training data, such as selection without bias. Further provisions address data preparation and labeling.
- Technical documentation: SMEs and startups benefit from facilitations and simplified technical documentation.
- Record keeping: High-risk AI systems must technically enable the automatic recording of events (‘logs’) over their lifetime (a minimal logging sketch follows this list).
- Transparency and provision of information to deployers
- Human oversight: High-risk AI systems must be designed and developed to enable effective oversight by people during their use.
- Accuracy, robustness, and cybersecurity: High-risk AI systems must be designed and developed to ensure an appropriate level of accuracy, robustness, and cybersecurity, performing consistently throughout their lifecycle.
- Labeling: High-risk AI systems or, where that is not possible, their packaging or accompanying documentation must bear the provider’s name, registered trade name or trademark, and a contact address.
- Quality Management System: A compliant quality management system as required by Article 17.
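As an illustration of the record-keeping item above, the sketch below shows one possible way to emit automatic, timestamped event records from an AI-enabled device component using Python’s standard logging module. The event names and fields are purely illustrative; the AI Act does not prescribe a specific log format.

```python
import logging

# Hypothetical sketch: automatic, timestamped event records ("logs") for an
# AI-enabled device component. Field names are illustrative only.
logging.basicConfig(
    filename="ai_device_events.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)
log = logging.getLogger("ai_device")

def record_inference(input_id: str, model_version: str, output: str, confidence: float) -> None:
    """Automatically record each inference event over the system's lifetime."""
    log.info(
        "inference input_id=%s model=%s output=%s confidence=%.2f",
        input_id, model_version, output, confidence,
    )

record_inference("sample-001", "v1.4.2", "refer to specialist", 0.91)
```

In practice, what to log, how long to retain it, and how to protect the records would be defined in the manufacturer’s quality management and technical documentation.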
To enter the EU market, products must demonstrate:
- A Declaration of Conformity
- CE Marking
- Registration
After market entry, products must be monitored through Post-Market Surveillance, with incidents reported and addressed as required.
Conformity Assessment Procedure
As is already familiar from the MDR and IVDR, manufacturers of AI systems must undergo a conformity assessment before market introduction. This process evaluates the medical device’s compliance as a whole under the MDR or IVDR, as well as the AI component under the AI Act. Because the requirements for high-risk AI products in the MDR/IVDR and the AI Act overlap, it is planned that the conformity assessment of AI medical devices under the MDR/IVDR and the AI Act will be carried out by the same Notified Body. This arrangement aims to streamline the process for manufacturers and other stakeholders; however, it is contingent on Notified Bodies having the requisite expertise and personnel.
When do the AI Act’s provisions apply?
First, the AI Act awaits formal endorsement by EU countries, expected in May 2024. Following publication in the Official Journal of the EU, the regulation enters into force 20 days later (Article 113). However, manufacturers, importers, etc. need not comply with all provisions immediately: most AI Act provisions become mandatory 24 months after entry into force, whereas Chapters 1-2 (General Provisions and Prohibited Practices) will apply just 6 months after the AI Act enters into force.
AI systems on the market by then will have transition periods to comply with the new regulations, ranging from 24 to 36 months.
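To make the timeline concrete, the sketch below computes the key application dates from a hypothetical publication date in the Official Journal (the actual date was not yet known at the time of writing):

```python
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    """Rough month arithmetic, sufficient for an illustrative timeline."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, min(d.day, 28))

# Hypothetical publication date in the Official Journal -- for illustration only.
publication = date(2024, 6, 1)

entry_into_force = publication + timedelta(days=20)       # enters into force 20 days later (Art. 113)
prohibitions_apply = add_months(entry_into_force, 6)      # Chapters 1-2 after 6 months
most_provisions_apply = add_months(entry_into_force, 24)  # most provisions after 24 months

print("Entry into force:     ", entry_into_force)
print("Chapters 1-2 apply:   ", prohibitions_apply)
print("Most provisions apply:", most_provisions_apply)
```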
Conclusion
The AI Act represents a monumental step in global AI legislation. While it may seem like an additional burden for many innovators wishing to bring their products to market, it’s essential to remember that similar laws will likely be introduced elsewhere. Compliance with the AI Act now could offer a competitive advantage in future markets. The Act has been both praised and criticized, but its final form appears to offer sufficient flexibility for research and innovation. The real challenge will be its implementation, considering the complexities seen with the MDR and IVDR. Nonetheless, this legislative step is undeniably significant.
Regulatory changes such as the AI Act do not have to be a nuisance. If you want to learn how the right regulatory strategy can save you time and money, check out our new Regulatory Strategy course.