Activities of Robert ROOS related to 2021/0106(COD)

Shadow opinions (1)

OPINION on the proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts
2022/06/14
Committee: ITRE
Dossiers: 2021/0106(COD)
Documents: PDF(272 KB) DOC(201 KB)
Authors: Eva MAYDELL (MEP ID 98341)

Amendments (26)

Amendment 151 #
Proposal for a regulation
Recital 12 a (new)
(12a) This Regulation shall not restrict research and development activities in the European Union. This is without prejudice to the obligation that all research and development activities must be subject to recognized ethical standards for scientific research under all circumstances.
2022/03/31
Committee: ITRE
Amendment 156 #
Proposal for a regulation
Recital 13
(13) In order to ensure a consistent and high level of protection of public interests as regards health, safety and fundamental rights, common normative standards for all high-risk AI systems should be established. Those standards should be consistent with the Charter of Fundamental Rights of the European Union (the Charter) and should in particular be non-discriminatory and in line with the Union’s international trade commitments.
2022/03/31
Committee: ITRE
Amendment 159 #
Proposal for a regulation
Recital 14
(14) In order to introduce a proportionate and effective set of binding rules for AI systems, a clearly defined risk-based approach should be followed. That approach should tailor the type and content of such rules to the intensity and scope of the risks that AI systems can generate. It is therefore necessary to prohibit certain artificial intelligence practices, to lay down requirements for high-risk AI systems and obligations for the relevant operators, and to lay down transparency obligations for certain AI systems. However, it is important, in both categories, to distinguish between the person who develops and makes the system available and the person who deploys the AI system.
2022/03/31
Committee: ITRE
Amendment 283 #
Proposal for a regulation
Article 3 – paragraph 1 – point 4
(4) ‘user’ means any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity; this includes research activities to the extent that they are conducted in accordance with generally accepted ethical standards;
2022/03/31
Committee: ITRE
Amendment 285 #
Proposal for a regulation
Article 3 – paragraph 1 – point 4 a (new)
(4a) ‘end-user’ means the natural or legal person who interacts with the results produced by the AI-system;
2022/03/31
Committee: ITRE
Amendment 310 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 a (new)
(44a) ‘deep fake’ means manipulated or synthetic audio and/or visual material that gives an authentic impression, in which events appear to be taking place which never happened, and which has been produced using techniques in the field of artificial intelligence, including machine learning and deep learning, without the user or end-user being aware that the audio and/or visual material has been produced using artificial intelligence;
2022/03/31
Committee: ITRE
Amendment 376 #
Proposal for a regulation
Article 7 – paragraph 2 – point h a (new)
(ha) the extent to which the AI system acts autonomously;
2022/03/31
Committee: ITRE
Amendment 377 #
Proposal for a regulation
Article 7 – paragraph 2 – point h a (new)
(ha) the extent to which the AI system acts autonomously;
2022/03/31
Committee: ITRE
Amendment 394 #
Proposal for a regulation
Article 9 – paragraph 4 – introductory part
4. The risk management measures referred to in paragraph 2, point (d) shall be such that any residual risk associated with each hazard as well as the overall residual risk of the high-risk AI systems is judged acceptable, provided that the high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse. Those residual risks shall be communicated to the user or end-user when applicable.
2022/03/31
Committee: ITRE
Amendment 413 #
Proposal for a regulation
Article 10 – paragraph 2 – point f
(f) examination in view of possible biases, in particular biases that are likely to affect health and safety of persons or lead to prohibited discrimination;
2022/03/31
Committee: ITRE
Amendment 416 #
Proposal for a regulation
Article 10 – paragraph 2 – point g
(g) the identification of any possible data gaps or shortcomings, and, where practicable, how those gaps and shortcomings can be addressed.
2022/03/31
Committee: ITRE
Amendment 420 #
Proposal for a regulation
Article 10 – paragraph 3
3. Training, validation and testing data sets must be as relevant, representative, free of errors and complete as possible in order to fulfil the purpose of the AI system. In particular, they shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used. These characteristics of the data sets may be met at the level of individual data sets or a combination thereof.
2022/03/31
Committee: ITRE
Amendment 428 #
Proposal for a regulation
Article 10 – paragraph 5
5. To the extent that it is strictly necessary for the purposes of ensuring bias monitoring, detection and correction in relation to the high-risk AI systems, the providers of such systems may process special categories of personal data referred to in Article 9(1) of Regulation (EU) 2016/679, Article 10 of Directive (EU) 2016/680 and Article 10(1) of Regulation (EU) 2018/1725, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons, including technical limitations on the re-use and use of state-of-the-art security and privacy-preserving measures, such as pseudonymisation, or encryption where anonymisation may significantly affect the purpose pursued. (Does not affect the English version.)
2022/03/31
Committee: ITRE
Amendment 453 #
Proposal for a regulation
Article 14 – paragraph 4 – point e
(e) be able to intervene on the operation of the high-risk AI system or interrupt the AI system through a “stop” button or a similar procedure.
2022/03/31
Committee: ITRE
Amendment 479 #
Proposal for a regulation
Article 20 – paragraph 1
1. Providers of high-risk AI systems shall keep the logs automatically generated by their high-risk AI systems, to the extent such logs are under their control by virtue of a contractual arrangement with the user or otherwise by law. The logs shall be kept for at least six months, unless otherwise stipulated in the applicable Union or national law, or if strictly necessary in the light of the intended purpose of the high-risk AI system.
2022/03/31
Committee: ITRE
Amendment 483 #
Proposal for a regulation
Article 23 – paragraph 1
Providers of high-risk AI systems shall, upon request by a national competent authority, provide that authority with all the information and documentation necessary to demonstrate the conformity of the high-risk AI system with the requirements set out in Chapter 2 of this Title, in an official Union language determined by the Member State concerned. Upon a reasoned request from a national competent authority, providers shall also give that authority access to the logs automatically generated by the high-risk AI system, to the extent such logs are under their control by virtue of a contractual arrangement with the user or otherwise by law. National authorities shall exercise restraint in requesting information that could be regarded as a trade secret. Should they nevertheless request such information, they shall treat it as strictly confidential.
2022/03/31
Committee: ITRE
Amendment 546 #
Proposal for a regulation
Article 52 a (new)
Article 52 a
General purpose AI-systems
1. The placing on the market, putting into service or use of general purpose AI-systems shall not, by themselves only, make those systems subject to the provisions of this Regulation.
2. Any person who places on the market or puts into service under its own name or trademark or uses a general purpose AI-system made available on the market or put into service for an intended purpose that makes it subject to the provisions of this Regulation shall be considered the provider of the AI system.
3. Paragraph 2 shall apply, mutatis mutandis, to any person who integrates a general purpose AI-system made available on the market, with or without modifying it, into an AI-system whose intended purpose makes it subject to the provisions of this Regulation.
4. The provisions of this Article shall apply irrespective of whether the general purpose AI-system is open source software or not.
2022/03/31
Committee: ITRE
Amendment 569 #
Proposal for a regulation
Article 55 – title
Measures for SMEs and users
2022/03/31
Committee: ITRE
Amendment 574 #
Proposal for a regulation
Article 55 – paragraph 1 – point a
(a) provide SMEs and start-ups with priority access to the AI regulatory sandboxes to the extent that they fulfil the eligibility conditions;
2022/03/31
Committee: ITRE
Amendment 576 #
Proposal for a regulation
Article 55 – paragraph 1 – point b
(b) organise specific awareness raising activities about the application of this Regulation tailored to the needs of SMEs and users;
2022/03/31
Committee: ITRE
Amendment 578 #
Proposal for a regulation
Article 55 – paragraph 1 – point c
(c) where appropriate, establish a dedicated channel for communication with SMEs, users and other innovators to provide guidance and respond to queries about the implementation of this Regulation.
2022/03/31
Committee: ITRE
Amendment 581 #
Proposal for a regulation
Article 55 – paragraph 2
2. The specific interests and needs of SMEs shall be taken into account when setting the fees for conformity assessment under Article 43, reducing those fees proportionately to their size and market size.
2022/03/31
Committee: ITRE
Amendment 592 #
Proposal for a regulation
Article 56 – paragraph 2 – point c a (new)
(ca) consider how the Union could better develop synergies, for example through Horizon Europe and EuroHPC, in order to promote the take-up of AI.
2022/03/31
Committee: ITRE
Amendment 606 #
Proposal for a regulation
Article 58 – paragraph 1 – point c a (new)
(ca) identify and address existing bottlenecks.
2022/03/31
Committee: ITRE
Amendment 624 #
Proposal for a regulation
Article 71 – paragraph 1
1. In compliance with the terms and conditions laid down in this Regulation, Member States shall lay down the rules on penalties, including administrative fines, applicable to infringements of this Regulation and shall take all measures necessary to ensure that they are properly and effectively implemented. The penalties provided for shall be effective, proportionate, and dissuasive. They shall take into particular account the interests of SMEs and their economic viability.
2022/03/31
Committee: ITRE
Amendment 626 #
Proposal for a regulation
Article 71 – paragraph 5
5. The supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request shall be subject to administrative fines of up to 10 000 000 EUR or, if the offender is a company, up to 2 % of its total worldwide annual turnover for the preceding financial year, whichever is higher. If the information supplied is incomplete, a period of two months shall be granted in which to provide the requested information.
2022/03/31
Committee: ITRE