With the entry into force of the European regulation on artificial intelligence, also known as the AI Act, published on 12 July 2024 in the Official Journal of the European Union, all entities that develop or deploy artificial intelligence solutions in the European Union must ensure full compliance within a maximum of three years, depending on the risk their systems pose. The purpose of the new law is to ensure the reliability and security of artificial intelligence systems and respect for people's fundamental rights.
In this article we explain the main considerations and obligations that any public or private entity with a health-oriented artificial intelligence system used in the European Union will have to take into account, depending on the type of risk posed by its solutions. However, it is essential to bear in mind the various cases and exceptions stipulated in the regulation and to watch how it is ultimately interpreted in the coming years, as a number of points remain to be clarified, as different stakeholders have indicated [1][2][3].
The definition of an artificial intelligence system is one of the issues [4] that generated the most debate during the regulatory approval process, as early drafts covered any software built on fixed rule-based algorithms, such as those used in numerous medical devices. The wording finally agreed, however, requires a capacity for adaptation:
“A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.
The AI Act divides artificial intelligence systems into four categories according to the potential risks they pose and sets a series of requirements and obligations for each of them. Below we summarise the different categories and obligations, focusing on examples applied to the field of health; a brief schematic summary of the four tiers follows this overview.
1. Systems with an unacceptable degree of risk
These are systems that pose a threat to people's safety, livelihoods and rights. This category includes, for example, any health solution that uses manipulative or deceptive techniques to encourage dangerous behaviour, alter conduct, interfere with emotions or exploit any vulnerability.
Obligations: their use is prohibited throughout the European Union.
2. Systems with a high degree of risk
These include any solutions that may have a potential impact on people's rights and decision-making, as in the case of diagnosis or treatment in the field of health. This classification covers medical devices of class IIa or higher (with MDR certification) that incorporate artificial intelligence, as well as in vitro diagnostic products (all those with IVDR certification) that use this technology. It therefore includes, for example, systems that incorporate artificial intelligence into radiological image interpretation software, electrocardiograms, remote patient monitoring, cardiac rhythm management and the analysis of in vitro fertilised embryos to evaluate and select embryos for transfer, among many others.
The high-risk category also includes systems that use artificial intelligence as a safety component, such as those used in robot-assisted surgery and those incorporating biometric identification, as well as certain artificial intelligence solutions for healthcare, whether or not they are medical devices. It includes, for example, systems that could be used by public authorities to assess people's eligibility for essential public services, solutions for evaluating and classifying emergency calls, systems for dispatching first-response services during medical emergencies, and solutions for the triage of patients in emergencies that use artificial intelligence.
Obligations: all systems regarded as high-risk are subject to strict obligations before they can be placed on the market. The main obligations are:
- implementing adequate risk assessment and mitigation systems;
- ensuring the high quality of the data sets that feed the system, to minimise risks and discriminatory results;
- keeping a log of activity to guarantee the traceability of results;
- drawing up detailed documentation providing all the necessary information on the system and its purpose, so that the authorities can carry out an impact assessment before it is first used;
- providing clear and appropriate information to the deploying entity;
- ensuring adequate human oversight measures to minimise risk;
- displaying a high degree of robustness, security and accuracy.

Promoters of health products that already bear MDR or IVDR certification must also comply with these obligations, preparing additional documentation on any points not covered in the process of obtaining the European marking.
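By way of illustration, the activity-log obligation can be pictured as an append-only audit trail that records each automated result. Below is a minimal, hypothetical Python sketch; the record schema, the `triage_model` identifier and the file format are assumptions for the example, not requirements taken from the regulation.

```python
import json
import hashlib
from datetime import datetime, timezone

def log_prediction(log_path, model_id, model_version, input_data, output, operator):
    """Append one traceability record per automated result (hypothetical schema)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,            # e.g. "triage_model" (assumed name)
        "model_version": model_version,  # exact version that produced the result
        # Hash rather than store the raw input, to limit personal-data exposure
        "input_hash": hashlib.sha256(
            json.dumps(input_data, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,                # the prediction or recommendation produced
        "human_reviewer": operator,      # supports the human-oversight obligation
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")  # append-only JSON Lines log

# Example: record an emergency-triage recommendation before it is acted upon
log_prediction("audit.jsonl", "triage_model", "1.4.2",
               {"age": 67, "symptoms": ["chest pain"]},
               {"priority": "P1"}, operator="dr.lopez")
```

Hashing the input rather than storing it verbatim is one possible way to reconcile traceability with data-protection constraints.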
3. Systems with a limited degree of risk
These are systems intended for interaction that do not influence decision-making, and whose risks are chiefly associated with a lack of transparency. This category includes, for example, certain virtual assistants (chatbots) used in the field of health that are not regarded as high-risk owing to their typology, appointment management applications and wellness apps that use artificial intelligence, and health promotion content portals created with generative artificial intelligence.
Obligations: it must always be indicated when a solution uses artificial intelligence, so that people are properly informed. For example, users must be made aware that they are interacting with a machine or virtual assistant, and any content generated by artificial intelligence must be labelled as such, whatever its format (text, video, audio, etc.).
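As a toy illustration of these transparency duties, the sketch below prepends a disclosure notice to a chatbot's first reply and attaches a machine-readable label to generated content. It is purely illustrative; the notice wording and the metadata keys are our own assumptions.

```python
DISCLOSURE = "You are chatting with an automated assistant, not a human."

def disclose_reply(reply_text: str, first_turn: bool) -> str:
    """Prefix the first chatbot turn with a clear AI disclosure."""
    return f"{DISCLOSURE}\n\n{reply_text}" if first_turn else reply_text

def label_generated_content(content: str, model_name: str) -> dict:
    """Wrap AI-generated content with a machine-readable provenance label."""
    return {
        "content": content,
        "ai_generated": True,      # explicit flag for downstream display
        "generator": model_name,   # which system produced the content
    }

print(disclose_reply("Your appointment is confirmed for Monday.", first_turn=True))
print(label_generated_content("Tips for a healthy diet...", "health-portal-llm"))
```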
4. Systems with a minimum degree of risk
This category includes solutions that pose minimal or no risk.
Obligations: there are no restrictions or specific obligations, but the regulation encourages adherence to general principles such as human oversight, non-discrimination and fairness.
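As announced above, here is a brief schematic summary of the four tiers in Python. It merely encodes the mapping described in this article; the names and example labels are our own and carry no legal weight.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited throughout the EU"
    HIGH = ("strict pre-market obligations: risk management, data quality, "
            "logging, documentation, human oversight, robustness")
    LIMITED = "transparency obligations: disclose AI use, label generated content"
    MINIMAL = "no specific obligations; general principles encouraged"

# Illustrative (non-authoritative) health examples drawn from this article
EXAMPLES = {
    RiskTier.UNACCEPTABLE: "health app using manipulative techniques to induce dangerous behaviour",
    RiskTier.HIGH: "AI-based radiological image interpretation (MDR class IIa or higher)",
    RiskTier.LIMITED: "wellness chatbot not classed as high-risk",
    RiskTier.MINIMAL: "AI with negligible impact on rights or safety",
}

for tier in RiskTier:
    print(f"{tier.name}: {tier.value}\n  e.g. {EXAMPLES[tier]}")
```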
The law also refers to general-purpose artificial intelligence systems, which can be adapted to numerous purposes beyond those for which they were created. It sets out a list of obligations to be fulfilled, depending on whether or not they pose systemic risks.
The new law requires national authorities to set up at least one controlled testing environment (sandbox) with conditions similar to the real world, enabling developers of artificial intelligence solutions, especially small and medium-sized enterprises, to develop, train and validate their models for a limited period before releasing them to the general public. It remains to be seen, however, how this testing environment will be articulated and how the authorities' new responsibility for evaluating high-risk systems will be deployed in Spain. With regard to the testing environment, Spanish legislation already provided, in Royal Decree 817/2023 of 8 November, for a controlled testing environment to assess compliance with the proposal for a European regulation laying down harmonised rules on artificial intelligence.
The law also provides for the creation of the European Artificial Intelligence Office [5], which will work on the implementation of the new rules, draw up codes of conduct, and promote the development and use of trustworthy artificial intelligence as well as international cooperation.
The new regulation will enter into force on 1 August 2024, 20 days after its publication in the Official Journal of the European Union, but different deadlines are envisaged for its application, depending on the risk posed by the system: 6 months for systems posing an unacceptable risk, 12 months for general-purpose AI systems, 24 months for high-risk systems under Annex III and 36 months for high-risk systems under Annex I.
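As a quick orientation, the sketch below derives indicative application dates from these deadlines, assuming entry into force on 1 August 2024 as stated above; the regulation itself fixes the authoritative dates, so the computed values are approximations.

```python
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (day clamped to avoid invalid dates)."""
    total = d.month - 1 + months
    year, month = d.year + total // 12, total % 12 + 1
    return date(year, month, min(d.day, 28))

DEADLINES_MONTHS = {
    "unacceptable-risk systems (prohibitions)": 6,
    "general-purpose AI systems": 12,
    "high-risk systems under Annex III": 24,
    "high-risk systems under Annex I": 36,
}

for scope, months in DEADLINES_MONTHS.items():
    print(f"{scope}: applicable from ~{add_months(ENTRY_INTO_FORCE, months)}")
```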
Failure to comply with the obligations set out in the regulation can lead to heavy penalties of up to 35 million euros or 7% of a company's total annual turnover, whichever is higher, in the case of using a system that poses an unacceptable risk. In any event, the severity of the fine will depend on various factors, such as the nature of the offence, the size of the company and any previous infringements. It remains to be determined in national regulations whether public administrations can be fined or whether, as established by the Organic Law on Data Protection and Guarantee of Digital Rights, they are excluded from this kind of financial penalty.
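To make the "whichever is higher" ceiling concrete, here is a one-function sketch; the turnover figure in the usage example is invented.

```python
def max_fine_unacceptable_risk(annual_turnover_eur: float) -> float:
    """Upper bound of the fine for using an unacceptable-risk system:
    35 million euros or 7% of annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * annual_turnover_eur)

# Hypothetical company with 1 billion euros in annual turnover
print(f"Maximum fine: {max_fine_unacceptable_risk(1_000_000_000):,.0f} EUR")  # 70,000,000 EUR
```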
More information: European regulation of artificial intelligence
[1] Gilbert, S. (2024). The EU passes the AI Act and its implications for digital medicine are unclear. NPJ Digital Medicine, 7(1), 135. https://doi.org/10.1038/s41746-024-01116-6
[2] MedTech Europe (2024). Medical Technology Industry Perspective on the final AI Act. Available at: https://www.medtecheurope.org/resource-library/medical-technology-industry-perspective-on-the-final-ai-act/ (Accessed: 28 June 2024).
[3] Lamb, S., Tschammler, D., & Maisnier-Boché, L. (2024). The impact of the new EU AI Act on the medtech and life sciences sector. McDermott Will & Emery. Available at: https://www.mwe.com/insights/the-impact-of-the-new-eu-ai-act-on-the-medtech-and-life-sciences-sector/
[4] Madiega, T. (2024). Briefing: Artificial Intelligence Act. European Parliamentary Research Service. Available at: https://www.europarl.europa.eu/RegData/etudes/BRIE/2021/698792/EPRS_BRI(2021)698792_EN.pdf (Accessed: 28 June 2024).
[5] European Commission (2024). European AI Office. Available at: https://digital-strategy.ec.europa.eu/en/policies/ai-office (Accessed: 28 June 2024).