Artificial intelligence (AI) has been shown to have enormous potential for applications in health care, provided that the associated risks are taken into account. The ethical and legal use of this technology in the field must deliver well-being and benefits for the public, who are at the heart of the system. It is essential that the rights and freedoms of individuals, including their dignity and privacy, are guaranteed throughout its application.
In 2019, the European Commission's High-Level Expert Group on AI published the report "Ethics Guidelines for Trustworthy AI", which provides recommendations for all professionals involved in the design, development, deployment, application and use of artificial intelligence. From an ethical point of view, it is now time to define new roles and responsibilities that allow the current legal frameworks to be properly addressed.
The core values that drive the Health/AI Programme centre on the concept of AI trustworthiness. Trustworthiness implies that the solutions implemented must be lawful, ethical and robust, from both a technical and a social perspective, throughout the system's entire life cycle.
To meet these conditions, it is necessary to comply with the seven requirements set out in the Ethics Guidelines:
• Human agency and oversight
• Technical robustness and safety
• Privacy and data governance
• Transparency
• Diversity, non-discrimination and fairness
• Societal and environmental well-being
• Accountability
The Health/AI Programme, which is fully aligned with the above, aims to promote technological innovation while ensuring well-being and benefits for the public.