33 Amendments of Abir AL-SAHLANI related to 2021/0106(COD)
Amendment 374 #
Proposal for a regulation
Recital 8
(8) The notion of remote biometric identification system as used in this Regulation should be defined functionally, as an AI system intended for the identification of natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database, and without prior knowledge whether the targeted person will be present and can be identified, irrespective of the particular technology, processes or types of biometric data used. Considering their different characteristics and manners in which they are used, as well as the different risks involved, a distinction should be made between ‘real-time’ and ‘post’ remote biometric identification systems. In the case of ‘real-time’ systems, the capturing of the biometric data, the comparison and the identification all occur instantaneously, near-instantaneously or in any event without a significant delay. In this regard, there should be no scope for circumventing the rules of this Regulation on the ‘real-time’ use of the AI systems in question by providing for minor delays. ‘Real-time’ systems involve the use of ‘live’ or ‘near-live’ material, such as video footage, generated by a camera or other device with similar functionality. In the case of ‘post’ systems, in contrast, the biometric data have already been captured and the comparison and identification occur only after a significant delay. This involves material, such as pictures or video footage generated by closed circuit television cameras or private devices, which has been generated before the use of the system in respect of the natural persons concerned. The notion of remote biometric identification system shall not include verification or authentication systems whose sole purpose is to confirm that a specific natural person is the person he or she claims to be, and systems that are used to confirm the identity of a natural person for the sole purpose of having access to a service, a device or premises.
Amendment 430 #
Proposal for a regulation
Recital 16
(16) The placing on the market, putting into service or use of certain AI systems with the objective to or the effect of distorting human behaviour, whereby physical or psychological harms are reasonably likely to occur, should be forbidden. Such AI systems deploy subliminal components individuals cannot perceive or exploit vulnerabilities of specific groups of persons due to their age, disabilities, social or economic situation. They do so with the intention to materially distort the behaviour of a person and in a manner that causes or is likely to cause harm to that or another person. The intention may not be presumed if the distortion of human behaviour results from factors external to the AI system which are outside of the control of the provider or the user. Research for legitimate purposes in relation to such AI systems should not be stifled by the prohibition, if such research does not amount to use of the AI system in human-machine relations that exposes natural persons to harm and such research is carried out in accordance with recognised ethical standards for scientific research.
Amendment 443 #
Proposal for a regulation
Recital 17 a (new)
(17 a) AI systems used by law enforcement authorities or on their behalf to predict the probability of a natural person offending or reoffending, based on profiling and individual risk assessment, hold a particular risk of discrimination against certain persons or groups of persons, as they violate human dignity as well as the key legal principle of the presumption of innocence. Such AI systems should therefore be prohibited.
Amendment 464 #
Proposal for a regulation
Recital 19
Amendment 477 #
Proposal for a regulation
Recital 20
Amendment 486 #
Proposal for a regulation
Recital 21
Amendment 494 #
Proposal for a regulation
Recital 22
Amendment 582 #
Proposal for a regulation
Recital 38
(38) Actions by law enforcement authorities involving certain uses of AI systems are characterised by a significant degree of power imbalance and may lead to surveillance, arrest or deprivation of a natural person’s liberty as well as other adverse impacts on fundamental rights guaranteed in the Charter. In particular, if the AI system is not trained with high quality data, does not meet adequate requirements in terms of its accuracy or robustness, or is not properly designed and tested before being put on the market or otherwise put into service, it may single out people in a discriminatory or otherwise incorrect or unjust manner. Furthermore, the exercise of important procedural fundamental rights, such as the right to an effective remedy and to a fair trial as well as the right of defence and the presumption of innocence, could be hampered, in particular, where such AI systems are not sufficiently transparent, explainable and documented. It is therefore appropriate to classify as high-risk a number of AI systems intended to be used in the law enforcement context where accuracy, reliability and transparency are particularly important to avoid adverse impacts, retain public trust and ensure accountability and effective redress.
In view of the nature of the activities in question and the risks relating thereto, those high-risk AI systems should include in particular AI systems intended to be used by law enforcement authorities for individual risk assessments, polygraphs and similar tools or to detect the emotional state of natural persons, to detect ‘deep fakes’, for the evaluation of the reliability of evidence in criminal proceedings, for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons, or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups, for profiling in the course of detection, investigation or prosecution of criminal offences, as well as for crime analytics regarding natural persons. AI systems specifically intended to be used for administrative proceedings by tax and customs authorities should not be considered high-risk AI systems used by law enforcement authorities for the purposes of prevention, detection, investigation and prosecution of criminal offences.
Amendment 595 #
Proposal for a regulation
Recital 39 a (new)
(39 a) AI systems in migration, asylum and border management should not, however, at any point be used by Member States or by the institutions or agencies of the Union to infringe on the principle of non-refoulement or the right to asylum, or to circumvent international obligations under the Convention of 28 July 1951 relating to the Status of Refugees, as amended by the Protocol of 31 January 1967.
Amendment 1037 #
Proposal for a regulation
Article 3 – paragraph 1 – point 34
(34) ‘emotion recognition system’ means an AI system for the purpose of identifying or inferring emotions, thoughts or intentions of natural persons on the basis of their biometric or biometrics-based data;
Amendment 1044 #
Proposal for a regulation
Article 3 – paragraph 1 – point 35
(35) ‘biometric categorisation system’ means an AI system for the purpose of assigning natural persons to specific categories, such as sex, age, hair colour, eye colour, tattoos, ethnic origin or sexual or political orientation, or inferring their characteristics and attributes on the basis of their biometric or biometrics-based data;
Amendment 1052 #
Proposal for a regulation
Article 3 – paragraph 1 – point 36
(36) ‘remote biometric identification system’ means an AI system for the purpose of identifying natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database, and without prior knowledge of the user of the AI system whether the person will be present and can be identified, excluding verification/authentication systems whose sole purpose is to confirm that a specific natural person is the person he or she claims to be, and systems that are used to confirm the identity of a natural person for the sole purpose of having access to a service, a device or premises;
Amendment 1169 #
Proposal for a regulation
Article 5 – paragraph 1 – point a
(a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness with the objective to or the effect of materially distorting a person’s behaviour in a manner that causes or is reasonably likely to cause that person or another person physical or psychological harm;
Amendment 1181 #
Proposal for a regulation
Article 5 – paragraph 1 – point b
(b) the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of an individual, including characteristics of such individual’s known or predicted personality or social or economic situation, or of a specific group of persons due to their age or disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
Amendment 1223 #
Proposal for a regulation
Article 5 – paragraph 1 – point c a (new)
(c a) the placing on the market, putting into service or use of an AI system for making individual risk assessments of natural persons in order to assess the risk of a natural person for offending or reoffending or for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of a natural person or on assessing personality traits and characteristics or past criminal behaviour of natural persons or groups of natural persons;
Amendment 1254 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point i
Amendment 1260 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point ii
Amendment 1274 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point iii
Amendment 1286 #
Proposal for a regulation
Article 5 – paragraph 1 – point d a (new)
(d a) the use of an AI system for the general monitoring, detection and interpretation of private content in interpersonal communication services, including all measures that would undermine end-to-end encryption.
Amendment 1354 #
Proposal for a regulation
Article 5 – paragraph 2
Amendment 1356 #
Proposal for a regulation
Article 5 – paragraph 2 – point a
Amendment 1358 #
Proposal for a regulation
Article 5 – paragraph 2 – point b
Amendment 1361 #
Proposal for a regulation
Article 5 – paragraph 2 – subparagraph 1
Amendment 1367 #
Proposal for a regulation
Article 5 – paragraph 3
Amendment 1375 #
Proposal for a regulation
Article 5 – paragraph 3 – subparagraph 1
Amendment 1387 #
Proposal for a regulation
Article 5 – paragraph 4
Amendment 2268 #
Proposal for a regulation
Article 52 – paragraph 2
2. Users of an emotion recognition system or a biometric categorisation system shall inform the natural persons exposed thereto of the operation of the system. This obligation shall not apply to AI systems used for biometric categorisation that are permitted by law to detect, prevent and investigate criminal offences.
Amendment 3066 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a a (new)
(a a) AI systems intended to be used to make inferences on the basis of biometric data, including emotion recognition systems, or biometrics-based data, including speech patterns, tone of voice, lip-reading and body language analysis, that produce legal effects or affect the rights and freedoms of natural persons.
Amendment 3154 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point a
(a) AI systems intended to be used by law enforcement authorities or on their behalf for making individual risk assessments of natural persons in order to assess the risk of a natural person for offending or reoffending or the risk for potential victims of criminal offences;
Amendment 3177 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point e
Amendment 3205 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point b
(b) AI systems intended to be used by competent public authorities or on their behalf to assess a risk, including a security risk, a risk of irregular immigration, or a health risk, posed by a natural person who intends to enter or has entered into the territory of a Member State;
Amendment 3214 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point d
(d) AI systems intended to assist competent public authorities, or to be used on their behalf, in the examination of applications for asylum, visa and residence permits and associated complaints with regard to the eligibility of the natural persons applying for a status.
Amendment 3216 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point d
(d) AI systems intended to be used by competent public authorities for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the eligibility of the natural persons applying for a status.