13 Amendments of Estrella DURÁ FERRANDIS related to 2021/0106(COD)
Amendment 94 #
Proposal for a regulation
Recital 4 d (new)
(4 d) In terms of health and patients’ rights, AI systems can play a major role in improving the health of individual patients and the performance of public health systems. However, when AI is deployed in the context of health, patients may be exposed to specific risks that could lead to physical or psychological harm, for example when biases related to age, ethnicity, sex or disability in algorithms lead to incorrect diagnoses. The lack of transparency around the functioning of algorithms also makes it difficult to provide patients with the relevant information they need to exercise their rights, such as informed consent. In addition, AI’s reliance on large amounts of data, much of it personal, may affect the protection of medical data, due to patients’ limited control over the use of their personal data and the cybersecurity vulnerabilities of AI systems. All of this means that special caution must be taken when AI is applied in clinical or healthcare settings.
Amendment 121 #
Proposal for a regulation
Recital 40 a (new)
(40 a) AI systems not covered by Regulation (EU) 2017/745 that have an impact on health or healthcare should be classified as high-risk and be covered by this Regulation. Healthcare is one of the sectors where many AI applications are being deployed in the Union and is a market posing a potentially high risk to human health. Regulation (EU) 2017/745 covers only medical devices and software with an intended medical purpose, and excludes many AI applications used in health, such as AI administrative and management systems used by healthcare professionals in hospitals or other healthcare settings and by health insurance companies, and many fitness and health apps which provide AI-powered recommendations. These applications may present new challenges and risks to people because of their health effects or their processing of sensitive health data. In order to control these specific risks, which could lead to physical or psychological harm or to the misuse of sensitive health data, these AI systems should be classified as high-risk.
Amendment 126 #
Proposal for a regulation
Recital 44
(44) High data quality is essential for the performance of many AI systems, especially when techniques involving the training of models are used, with a view to ensuring that the high-risk AI system performs as intended and safely and does not become the source of discrimination prohibited by Union law. High quality training, validation and testing data sets require the implementation of appropriate data governance and management practices. Training, validation and testing data sets should be sufficiently relevant, representative, free of errors and complete in view of the intended purpose of the system. They should also have the appropriate statistical properties, including as regards the persons or groups of persons on which the high-risk AI system is intended to be used. In particular, training, validation and testing data sets should take into account, to the extent required in the light of their intended purpose, the features, characteristics or elements that are particular to the specific geographical, behavioural or functional setting or context within which the AI system is intended to be used. In order to protect the rights of others from the discrimination that might result from bias in AI systems, that is, to ensure algorithmic non-discrimination, providers should also be able to process special categories of personal data, as a matter of substantial public interest, in order to ensure bias monitoring, detection and correction in relation to high-risk AI systems.
Amendment 142 #
Proposal for a regulation
Recital 58 a (new)
(58 a) The Union lacks a charter of digital rights that would provide a reference framework for guaranteeing citizens’ rights in the new digital reality and would safeguard fundamental rights in the digital landscape. A number of AI-related data-protection issues may lead to uncertainties and costs, and may hamper the development of AI applications. In this regard, certain provisions are included in the text to ensure the explainability, acceptability, oversight, fairness and transparency of AI systems.
Amendment 186 #
Proposal for a regulation
Article 10 – paragraph 5
5. To the extent that it is strictly necessary for the purposes of ensuring bias monitoring, detection and correction in relation to the high-risk AI systems and to ensure algorithmic non-discrimination, the providers of such systems may process special categories of personal data referred to in Article 9(1) of Regulation (EU) 2016/679, Article 10 of Directive (EU) 2016/680 and Article 10(1) of Regulation (EU) 2018/1725, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons, including technical limitations on the re-use and use of state-of-the-art security and privacy-preserving measures, such as pseudonymisation, or encryption where anonymisation may significantly affect the purpose pursued.
Amendment 190 #
Proposal for a regulation
Article 13 – paragraph 2
2. High-risk AI systems shall be accompanied by instructions for use in an appropriate digital format or otherwise that include concise, complete, correct and clear information that is relevant, accessible and comprehensible to users, including in relation to possible risks to fundamental rights and discrimination.
Amendment 201 #
Proposal for a regulation
Article 26 – paragraph 1 – point c
(c) the system bears the required conformity marking and is accompanied by the required concise and clear documentation and instructions for use, including in relation to possible risks to fundamental rights and discrimination.
Amendment 202 #
Proposal for a regulation
Article 27 – paragraph 1
1. Before making a high-risk AI system available on the market, distributors shall verify that the high-risk AI system bears the required CE conformity marking and the energy efficiency and carbon intensity marking, that it is accompanied by the required concise and clear documentation and instructions for use, including in relation to possible risks to fundamental rights and discrimination, and that the provider and the importer of the system, as applicable, have complied with the obligations set out in this Regulation.
Amendment 212 #
Proposal for a regulation
Article 52 – paragraph 1
1. Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed that they are interacting with an AI system, especially in the healthcare sector, unless this is obvious from the circumstances and the context of use. This obligation shall not apply to AI systems authorised by law to detect, prevent, investigate and prosecute criminal offences, unless those systems are available for the public to report a criminal offence.
Amendment 213 #
Proposal for a regulation
Article 52 – paragraph 3 a (new)
3 a. Recipients of an AI system in the domain of healthcare shall be informed of their interaction with an AI system.
Amendment 214 #
Proposal for a regulation
Article 52 – paragraph 3 b (new)
3 b. Public and administrative authorities which adopt decisions with the assistance of AI systems shall provide a clear and intelligible explanation which shall be accessible for persons with disabilities and other vulnerable groups.
Amendment 260 #
Proposal for a regulation
Annex III – paragraph 1 – point 8 a (new)
8 a. Health, health care, long-term care and health insurance:
(a) AI systems not covered by Regulation (EU) 2017/745 intended to be used in the health, health care and long-term care sectors that have direct or indirect effects on health or that use sensitive health data;
(b) AI administrative and management systems used by healthcare professionals in hospitals and other healthcare settings and by health insurance companies that process sensitive health data.
Amendment 263 #
Proposal for a regulation
Annex IV – paragraph 1 – point 1 – point g
(g) clear and concise instructions for use for the user, including in relation to possible risks to fundamental rights and discrimination, and, where applicable, installation instructions;