The European Artificial Intelligence Act: What It Is, Enforcement and Governance, Timeline, and Penalties for Medical Device Manufacturers

Details on recent developments of the EU Artificial Intelligence Act (AIA) can be found here: https://artificialintelligenceact.eu

On March 13, 2024, the European Parliament adopted the European Union (EU) Artificial Intelligence Act (AIA), a landmark framework for regulating AI systems that was initially proposed on April 21, 2021. Like the EU General Data Protection Regulation (GDPR), the AIA is expected to influence global AI regulatory standards.

The AIA builds on established AI best practices previously outlined in the EU’s soft-law guidance, such as the Ethics Guidelines for Trustworthy Artificial Intelligence, presented on April 8, 2019. Those guidelines set out seven key requirements for trustworthy AI:

  1. Human Agency and Oversight

  2. Technical Robustness and Safety

  3. Privacy and Data Governance

  4. Transparency

  5. Diversity, Non-discrimination, and Fairness

  6. Environmental and Societal Well-being

  7. Accountability

These guidelines, although not enforceable, laid the groundwork for the current AIA regulations.

The AIA applies to all industry sectors (not just medical devices) and requires that AI functionality in medical devices comply with its provisions. Providers and deployers of AI systems and general-purpose AI (GPAI) models intended for the EU market must adhere to the AIA, regardless of where they are established.

Military, defense, and national security AI systems are excluded from the AIA, as are systems used exclusively for scientific research.

Key definitions include:

  • AI System: a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments;

  • General-Purpose AI Model: an AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications. This does not cover AI models that are used for research, development, and prototyping activities before they are placed on the market.

The AIA employs a risk-based classification for AI systems. High-risk AI systems, which include many AI-enabled medical devices, must meet stringent compliance measures: (1) data governance, (2) quality management, (3) technical documentation, (4) record keeping, (5) transparency, (6) human oversight, (7) accuracy, robustness, and cybersecurity, and (8) conformity assessment based on either self-declaration or Notified Body involvement. These requirements overlap with those under the EU MDR and EU IVDR; hence, your technical documentation should meet the requirements of the AIA and the MDR and/or IVDR together.
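To make that overlap actionable, a manufacturer might track each of the eight requirement areas against its existing MDR/IVDR documentation during a gap analysis. The Python sketch below is a minimal illustration only; the status values and example annotations are our own assumptions, not terms from the AIA.

```python
# Minimal gap-analysis checklist over the eight high-risk requirement areas above.
# Status values ("covered", "partial", "gap") are our own convention, not AIA terms.
checklist = {
    "data governance": "gap",
    "quality management": "partial",        # e.g., extend an existing ISO 13485 QMS
    "technical documentation": "partial",   # e.g., build on MDR/IVDR technical files
    "record keeping": "gap",
    "transparency": "gap",
    "human oversight": "partial",
    "accuracy, robustness and cybersecurity": "partial",
    "conformity assessment": "covered",     # e.g., combined with Notified Body review
}

# List every area that still needs work before the AIA application dates.
open_items = [area for area, status in checklist.items() if status != "covered"]
print("Open compliance items:", ", ".join(open_items))
```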

Separately, the AIA distinguishes between GPAI models and GPAI models with systemic risk. The latter are subject to additional obligations, including notification to the Commission, model evaluation, systemic risk mitigation, and vigilance reporting.

Conformity assessment for high-risk AI systems can be based on internal control or involve a Notified Body, with the aim of carrying it out simultaneously with the assessments required under the existing medical device regulations (MDR and IVDR).

Obligations per the AIA

The AIA delineates obligations for five operator roles: providers, deployers, authorized representatives, importers, and distributors, each with specific responsibilities in the AI value chain.

Despite challenges in implementation, organizations are encouraged to prepare for AIA compliance to navigate this new regulatory landscape effectively.

Prohibited AI systems (Chapter II, Art. 5)

The following types of AI systems are ‘prohibited’ under the AI Act.

AI systems:

  • deploying subliminal, manipulative, or deceptive techniques to distort behavior and impair informed decision-making, causing significant harm.

  • exploiting vulnerabilities related to age, disability, or socio-economic circumstances to distort behavior, causing significant harm.

  • biometric categorization systems inferring sensitive attributes (race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation), except labelling or filtering of lawfully acquired biometric datasets or when law enforcement categorizes biometric data.

  • social scoring, i.e., evaluating or classifying individuals or groups based on social behavior or personal traits, causing detrimental or unfavorable treatment of those people.

  • assessing the risk of an individual committing criminal offenses solely based on profiling or personality traits, except when used to augment human assessments based on objective, verifiable facts directly linked to criminal activity.

  • compiling facial recognition databases by untargeted scraping of facial images from the internet or CCTV footage.

  • inferring emotions in workplaces or educational institutions, except for medical or safety reasons.

  • ‘real-time’ remote biometric identification (RBI) in publicly accessible spaces for law enforcement, except when:
      ◦ searching for missing persons, abduction victims, and victims of human trafficking or sexual exploitation;
      ◦ preventing a substantial and imminent threat to life, or a foreseeable terrorist attack; or
      ◦ identifying suspects in serious crimes (e.g., murder, rape, armed robbery, narcotic and illegal weapons trafficking, organized crime, and environmental crime).

High-risk AI systems (Chapter III)

Some AI systems are considered ‘high-risk’ under the AI Act. Providers of those systems are subject to additional requirements.

Classification rules for high-risk AI systems (Art. 6)

High-risk AI systems are those:

  • used as a safety component of a product, or that are themselves a product, covered by the EU laws listed in Annex I AND required to undergo a third-party conformity assessment under those Annex I laws; OR

  • those under the Annex III use cases (below), except if the AI system:
      ◦ performs a narrow procedural task;
      ◦ improves the result of a previously completed human activity;
      ◦ detects decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment without proper human review; or
      ◦ performs a preparatory task to an assessment relevant for the purpose of the use cases listed in Annex III.

  • AI systems are always considered high-risk if they profile individuals, i.e., perform automated processing of personal data to assess various aspects of a person’s life, such as work performance, economic situation, health, preferences, interests, reliability, behavior, location, or movement.

  • Providers who believe their AI system falls under the use cases in Annex III but is not high-risk must document that assessment before placing the system on the market or putting it into service. (A simplified sketch of this classification logic follows after this list.)
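For illustration, the Article 6 decision logic above can be expressed as a small function. This is a deliberately simplified Python sketch: the field names and boolean flags are our own abstractions of the legal tests, not a substitute for a legal assessment.

```python
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    """Simplified, hypothetical inputs to the Art. 6 high-risk test."""
    annex_i_safety_component: bool         # safety component of (or itself) an Annex I product
    third_party_assessment_required: bool  # Annex I law requires third-party conformity assessment
    annex_iii_use_case: bool               # falls under an Annex III use case
    exception_applies: bool                # narrow procedural / preparatory task, etc.
    profiles_individuals: bool             # performs profiling of natural persons

def is_high_risk(p: AISystemProfile) -> bool:
    # Rule 1: Annex I products requiring third-party conformity assessment.
    if p.annex_i_safety_component and p.third_party_assessment_required:
        return True
    # Rule 2: Annex III use cases are high-risk unless an exception applies;
    # profiling always keeps the system high-risk regardless of exceptions.
    if p.annex_iii_use_case:
        return p.profiles_individuals or not p.exception_applies
    return False
```

On this logic, an AI-driven diagnostic function in a device that undergoes Notified Body review under the MDR would typically land in the high-risk category via the first rule, consistent with the expectation (see Impact and Timelines below) that most AI/ML-enabled devices are high-risk.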

Enforcement and Oversight

The AIA mandates different levels of enforcement and oversight:

  • National Competent Authorities: Each EU member state must designate at least one notifying authority and one market surveillance authority.

  • AI Board: An advisory body that promotes AI literacy, issues opinions and recommendations, and facilitates AIA compliance.

  • AI Office: Responsible for the enforcement and supervision of GPAI models and encouraging voluntary compliance with high-risk AI mandates.

AI Regulatory Sandboxes and Real-World Testing

To promote innovation, each member state must establish at least one AI regulatory sandbox at a national level. These sandboxes provide a controlled environment for developing, testing, and validating AI systems before market placement. Priority is given to small and medium enterprises, including startups. High-risk AI systems may also undergo real-world testing outside the sandbox under specific conditions set by the Commission.

Penalties:

  • Prohibited AI Systems: Fines up to €35 million or 7% of worldwide annual turnover, whichever is higher.

  • GPAI Models Non-compliance: Fines up to €15 million or 3% of worldwide annual turnover, whichever is higher.

  • EU Institutions Non-compliance: Fines up to €1.5 million for prohibited AI practices; up to €750,000 for other non-compliance.

  • Supplying Incorrect Information: Fines up to €7.5 million or 1% of worldwide annual turnover, whichever is higher.
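To see how the two caps in each tier interact, the Python sketch below computes the applicable maximum for an undertaking, using the higher-of rule noted above (the Act applies a lower-of rule to SMEs, which this illustration ignores). The tier names and function are our own.

```python
# Illustrative fine-cap calculation; tiers mirror the list above.
# For undertakings the AIA applies the HIGHER of the fixed cap and the
# turnover-based cap (SMEs get the lower of the two; not modeled here).
TIERS = {
    "prohibited_ai_systems": (35_000_000, 0.07),
    "gpai_noncompliance": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(tier: str, worldwide_annual_turnover_eur: float) -> float:
    fixed_cap, turnover_pct = TIERS[tier]
    return max(fixed_cap, turnover_pct * worldwide_annual_turnover_eur)

# Example: a company with €2 billion turnover using a prohibited AI system
# faces a cap of max(€35M, 7% of €2B = €140M) = €140M.
print(f"€{max_fine('prohibited_ai_systems', 2_000_000_000):,.0f}")
```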

Impact and Timelines:

  • Medical device manufacturers must comply with the AIA in addition to the MDR and IVDR.

  • Most AI/ML-enabled devices likely fall into the high-risk category.

  • Compliance timeline:
      ◦ March 2024: AIA adoption.
      ◦ Q2/Q3 2024: Formal adoption and publication.
      ◦ +20 days: Entry into force.
      ◦ +6 months: Prohibitions on unacceptable-risk AI systems apply.
      ◦ +9 months: Codes of practice ready.
      ◦ +12 months: GPAI model obligations apply and penalties take effect (except for GPAI providers).
      ◦ +18 months: Post-market monitoring implementation.
      ◦ +24 months: Obligations for Annex III high-risk AI systems apply.
      ◦ +36 months: Obligations for Annex I high-risk AI systems apply; end of transition for GPAI models.
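Since most milestones are offsets from entry into force, planners can derive concrete target dates once that date is fixed. The sketch below assumes a hypothetical entry-into-force date purely for illustration; substitute the actual date once the regulation is published in the Official Journal.

```python
import calendar
from datetime import date

def add_months(d: date, months: int) -> date:
    """Return the same day-of-month `months` later, clamped to month end."""
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    return date(year, month, min(d.day, calendar.monthrange(year, month)[1]))

# Assumed entry-into-force date (20 days after publication); an assumption
# for illustration only; replace with the actual date once known.
ENTRY_INTO_FORCE = date(2024, 8, 1)

MILESTONES = {  # months after entry into force, per the timeline above
    "Prohibitions on unacceptable-risk AI systems": 6,
    "Codes of practice ready": 9,
    "GPAI model obligations and penalties": 12,
    "Post-market monitoring implementation": 18,
    "Annex III high-risk obligations apply": 24,
    "Annex I high-risk obligations apply; GPAI transition ends": 36,
}

for label, offset in MILESTONES.items():
    print(f"{add_months(ENTRY_INTO_FORCE, offset).isoformat()}: {label}")
```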

Concluding Remarks:

  • The AIA is the first comprehensive legal framework for AI technologies, addressing ethical, legal, and societal implications.

  • Manufacturers should proactively use the transition period for compliance, perform gap analyses, and develop quality plans to meet the application deadlines.
