Activities of Geoffroy DIDIER related to 2021/0106(COD)
Plenary speeches (1)
Artificial Intelligence Act (debate)
Amendments (16)
Amendment 707 #
Proposal for a regulation
Recital 70
(70) Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception, or risks to EU principles and values, irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems. In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. Moreover, natural persons should be notified when they are exposed to an emotion recognition system or a biometric categorisation system. Such information and notifications should be provided in accessible formats for persons with disabilities. Further, users who use an AI system to generate or manipulate image, audio, text, script or video content that appreciably resembles existing persons, places or events and would falsely appear to a person to be authentic should disclose that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin. In addition, recommendation systems, in particular automated decision-making algorithms that disseminate and order cultural and creative content displayed to users, should be designed in such a way that their personalised suggestions are explainable and non-discriminatory. A clear explanation of the parameters used for the personalised suggestions should be easily accessible and understandable to the users. Natural persons should have a right to opt out of recommended and personalised services without affecting their right to use the core service.
Amendment 904 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1
(1) ‘artificial intelligence system’ (AI system) means a system that combines these three criteria: (i) receives machine and/or human-based data and inputs, (ii) infers how to achieve a given set of human-defined objectives using learning, reasoning or modelling implemented with the techniques and approaches listed in Annex I, and (iii) generates outputs in the form of content (generative AI systems), predictions, recommendations, or decisions, which influence the environments they interact with;
Amendment 1408 #
Proposal for a regulation
Title III
HIGH-RISK USES OF AI SYSTEMS
Amendment 1409 #
Proposal for a regulation
Title III – Chapter 1 – title
CLASSIFICATION OF AI SYSTEMS WITH HIGH-RISK USES
Amendment 1411 #
Proposal for a regulation
Article 6 – title
Classification rules for high-risk uses of AI systems
Amendment 1439 #
Proposal for a regulation
Article 6 – paragraph 2
2. In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall also be considered high-risk, if they pose a risk of harm to either physical health and safety or fundamental human rights, or both.
Amendment 1556 #
Proposal for a regulation
Article 8 – paragraph 1
1. High-risk AI systems shall comply with the requirements established in this Chapter, taking into account the generally acknowledged state of the art and industry standards, including as reflected in relevant harmonised standards or common specifications.
Amendment 1567 #
Proposal for a regulation
Article 8 – paragraph 2
2. The intended purpose of the high-risk AI system and the risk management system referred to in Article 9 shall be taken into account when ensuring compliance with the relevant requirements, depending on the type of risks posed.
Amendment 1721 #
Proposal for a regulation
Article 10 – paragraph 3
3. Training, validation and testing data sets shall be relevant, representative and, to the best extent possible, free of errors and complete. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the AI system with high-risk uses is intended to be used. These characteristics of the data sets may be met at the level of individual data sets or a combination thereof.
Amendment 2270 #
Proposal for a regulation
Article 52 – paragraph 3 – introductory part
3. Users of an AI system that generates or manipulates image, audio, text, script or video content that appreciably resembles existing persons, objects, places, text, script or other entities or events and would falsely appear to a person to be authentic or truthful (‘deep fake’), shall disclose that the content has been artificially generated or manipulated.
Amendment 2277 #
Proposal for a regulation
Article 52 – paragraph 3 – subparagraph 1
However, the first subparagraph shall not apply where the use is authorised by law to detect, prevent, investigate and prosecute criminal offences, and shall be without prejudice to the exercise of the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU, and subject to appropriate safeguards for the rights and freedoms of third parties.
Amendment 2279 #
Proposal for a regulation
Article 52 – paragraph 3 a (new)
3 a. Providers shall ensure that recommendation systems used to disseminate and order cultural and creative content are designed in such a way that their personalised suggestions are explainable and non-discriminatory. A clear explanation regarding the parameters determining ranking shall be provided to users and shall be easily accessible. Natural persons shall have the right to opt out of recommended and personalised services. This opt-out possibility shall be easily accessible and shall not prevent them from using the core service.
Amendment 2693 #
Proposal for a regulation
Article 64 – paragraph 2
2. Where necessary to assess the conformity of the high-risk uses of an AI system with the requirements set out in Title III, Chapter 2 and upon a reasoned request, the market surveillance authorities shall ask for the explainability of the functioning of algorithms and criteria used by an AI system.
Amendment 3014 #
Proposal for a regulation
Annex I – point b
Amendment 3023 #
Proposal for a regulation
Annex I – point c
Amendment 3043 #
Proposal for a regulation
Annex III – title
HIGH-RISK USES OF AI SYSTEMS REFERRED TO IN ARTICLE 6(2)