Activities of Jordi SOLÉ related to 2021/0106(COD)

Shadow opinions (1)

OPINION on the proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts
2022/06/14
Committee: ITRE
Dossiers: 2021/0106(COD)
Documents: PDF(272 KB) DOC(201 KB)
Authors: Eva MAYDELL (MEP ID 98341)

Amendments (17)

Amendment 323 #
Proposal for a regulation
Article 5 – paragraph 1 – point a
(a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person economic, physical or psychological harm;
2022/03/31
Committee: ITRE
Amendment 362 #
Proposal for a regulation
Article 7 – paragraph 1 – point b
(b) the AI systems pose a risk of harm to the health and safety, a risk to climate or environment or a risk of adverse impact on fundamental rights, that is, in respect of its severity and probability of occurrence, equivalent to or greater than the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III.
2022/03/31
Committee: ITRE
Amendment 366 #
Proposal for a regulation
Article 7 – paragraph 2 – introductory part
2. When assessing for the purposes of paragraph 1 whether an AI system poses a risk of harm to the health and safety, a risk to climate or environment or a risk of adverse impact on fundamental rights that is equivalent to or greater than the risk of harm posed by the high-risk AI systems already referred to in Annex III, the Commission shall take into account the following criteria:
2022/03/31
Committee: ITRE
Amendment 368 #
Proposal for a regulation
Article 7 – paragraph 2 – point c
(c) the extent to which the use of an AI system has already caused harm to the health and safety, to climate or environment or adverse impact on the fundamental rights or has given rise to significant concerns in relation to the materialisation of such harm or adverse impact, as demonstrated by reports or documented allegations submitted to national competent authorities;
2022/03/31
Committee: ITRE
Amendment 371 #
Proposal for a regulation
Article 7 – paragraph 2 – point g
(g) the extent to which the outcome produced with an AI system is easily reversible, whereby outcomes having an adverse impact on the climate, the environment or negatively affecting the ability to achieve energy efficiency targets or the health or safety of persons shall not be considered as easily reversible;
2022/03/31
Committee: ITRE
Amendment 396 #
Proposal for a regulation
Article 9 – paragraph 8 a (new)
8a. The risk management system shall always identify significant impact on the environment through, inter alia, AI compute-related energy consumption and efficiency in data use when compared with other, state-of-the-art AI systems, or where the way the AI is applied may result in significant environmental impacts or greenhouse gas emissions.
2022/03/31
Committee: ITRE
Amendment 397 #
Proposal for a regulation
Article 9 a (new)
Article 9 a
Impact of AI on energy consumption
1. All AI systems shall be designed and developed to make use of state-of-the-art methods and best practice to reduce greenhouse gas emissions and computational complexity, and to increase the energy efficiency and the data efficiency of the system in productive use. This includes techniques involving the training and re-training of models. They shall be developed and established with capabilities that enable the measurement of the energy consumed and/or other environmental impact that the productive use of the systems may have.
2. Providers of high-risk AI systems shall perform an environmental sustainability assessment, including on energy use, over the entire lifecycle of the system.
3. The assessment referred to in paragraph 2 shall include information relating to:
(a) energy consumption;
(b) greenhouse gas emissions;
(c) water and marine resources;
(d) resource use, including rare metals, minerals and the circular economy;
(e) pollution;
(f) biodiversity and ecosystems.
4. The assessment shall be structured in a standardised, machine-readable and interoperable format that allows for publication and further comparability analysis.
5. The Commission is empowered to adopt delegated acts in accordance with Article 73 to:
(a) provide reliable, accurate and reproducible standards and methods for the environmental sustainability assessment, with particular focus on energy efficiency, taking into account recognised state-of-the-art measurement methods, or new methods that enable the comparison of the environmental impact of AI systems. The data must be understandable, relevant, representative, verifiable, comparable and represented in a faithful manner;
(b) amend Annex IIIa where necessary to ensure that, in the light of technical progress, the environmental impact measurement is complete and comparable.
2022/03/31
Committee: ITRE
Amendment 400 #
Proposal for a regulation
Article 10 – paragraph 1
1. High-risk AI systems which make use of techniques involving the training of models with data shall be developed on the basis of training, validation and testing data sets that meet the quality criteria referred to in paragraphs 2 to 5.
2022/03/31
Committee: ITRE
Amendment 455 #
Proposal for a regulation
Article 15 – paragraph 1
1. High-risk AI systems shall be designed and developed in such a way that they achieve security by design and by default in the light of their intended purpose, thus reaching an appropriate level of accuracy, robustness, safety and cybersecurity, and perform consistently in those respects throughout their lifecycle.
2022/03/31
Committee: ITRE
Amendment 460 #
Proposal for a regulation
Article 15 – paragraph 2
2. The levels of accuracy and the relevant accuracy metrics of high-risk AI systems shall be assessed by an independent entity and declared in the accompanying instructions of use. The language used shall be clear and free of misunderstandings or misleading statements.
2022/03/31
Committee: ITRE
Amendment 471 #
Proposal for a regulation
Article 15 – paragraph 4 – subparagraph 2 a (new)
High-risk AI systems shall be accompanied by security solutions and patches for the lifetime of the embedded product or, in the absence of dependence on a specific product, for a period that shall be stated by the manufacturer and cannot be less than 10 years.
2022/03/31
Committee: ITRE
Amendment 484 #
Proposal for a regulation
Article 23 – paragraph 1
Providers of high-risk AI systems shall, upon request by a national competent authority, provide that authority with all the information and documentation necessary to demonstrate the conformity of the high-risk AI system with the requirements set out in Chapter 2 of this Title, in one or several official Union languages determined by the Member State concerned. Upon a reasoned request from a national competent authority, providers shall also give that authority access to the logs automatically generated by the high-risk AI system, to the extent such logs are under their control by virtue of a contractual arrangement with the user or otherwise by law.
2022/03/31
Committee: ITRE
Amendment 517 #
Proposal for a regulation
Article 44 – paragraph 1
1. Certificates issued by notified bodies in accordance with Annex VII shall be drawn up in one or several official Union languages determined by the Member State in which the notified body is established or in one or several official Union languages otherwise acceptable to the notified body.
2022/03/31
Committee: ITRE
Amendment 561 #
Proposal for a regulation
Article 53 – paragraph 3
3. The AI regulatory sandboxes shall not affect the supervisory and corrective powers of the competent authorities and can only be implemented in a specified area with approval of the regional or local authorities. Any significant risks to environment, health and safety and fundamental rights identified during the development and testing of such systems shall result in immediate mitigation and, failing that, in the suspension of the development and testing process until such mitigation takes place.
2022/03/31
Committee: ITRE
Amendment 566 #
Proposal for a regulation
Article 53 – paragraph 5
5. Member States’ competent authorities that have established AI regulatory sandboxes shall coordinate their activities and cooperate within the framework of the European Artificial Intelligence Board. They shall submit annual reports to the Board and the Commission on the results from the implementation of those schemes, including best practices, computational energy use and efficiency, lessons learnt and recommendations on their setup and, where relevant, on the application of this Regulation and other Union legislation supervised within the sandbox.
2022/03/31
Committee: ITRE
Amendment 641 #
Proposal for a regulation
Annex III – paragraph 1 – point 4 a (new)
4a. Environmental impact and energy use:
(a) AI systems that require a higher frequency of training and re-training of models than 60% of comparable state-of-the-art systems;
(b) AI systems that require training or re-training on data quantities that exceed 60% of comparable state-of-the-art systems;
(c) AI systems that require the re-training of partial data sets involved where these exceed 20% of the data globally available to the system;
(d) AI systems, other than those which make use of techniques involving the training of models, that are more resource-intensive than 60% of the comparable state-of-the-art systems.
2022/03/31
Committee: ITRE
Amendment 656 #
Proposal for a regulation
Annex IV – paragraph 1 – point 3
3. Detailed information about the monitoring, functioning and control of the AI system, in particular with regard to: its capabilities and limitations in performance, environmental sustainability and energy efficiency, including the degrees of accuracy for specific persons or groups of persons on which the system is intended to be used and the overall expected level of accuracy in relation to its intended purpose; the foreseeable unintended outcomes and sources of risks to energy grids and policy, climate and environmental protection, health and safety, fundamental rights and discrimination in view of the intended purpose of the AI system; the human oversight measures needed in accordance with Article 14, including the technical measures put in place to facilitate the interpretation of the outputs of AI systems by the users; specifications on input data, as appropriate;
2022/03/31
Committee: ITRE