
Activities of Karen MELCHIOR related to 2021/0106(COD)

Plenary speeches (1)

Artificial Intelligence Act (debate)
2023/06/13
Dossiers: 2021/0106(COD)

Shadow opinions (1)

OPINION on the proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts
2022/09/12
Committee: JURI
Dossiers: 2021/0106(COD)
Documents: PDF(311 KB) DOC(227 KB)
Authors: Axel VOSS (MEP ID 96761)

Amendments (184)

Amendment 346 #
Proposal for a regulation
Recital 16
(16) The placing on the market, putting into service or use of certain AI systems intended to distort human behaviour, whereby physical or psychological harms are likely to occur without the affected persons' knowledge, should be forbidden. Such AI systems deploy subliminal components individuals cannot perceive or exploit vulnerabilities of persons or groups of persons with protected characteristics. They do so with the intention to materially distort the behaviour of a person. Such distortions are likely to cause harm to that or another person. The intention may not be presumed if the distortion of human behaviour results from factors external to the AI system which are outside of the control of the provider or the user. Research for legitimate purposes in relation to such AI systems should not be stifled by the prohibition, if such research does not amount to use of the AI system in human-machine relations that exposes natural persons to harm and such research is carried out in accordance with recognised ethical standards for scientific research.
2022/03/24
Committee: JURI
Amendment 347 #
Proposal for a regulation
Recital 17
(17) AI systems providing social scoring of natural persons for general purpose by public authorities or on their behalf may lead to discriminatory outcomes and the exclusion of certain groups. They may violate the right to dignity and non- discrimination and the values of equality and justice. Such AI systems evaluate or classify the trustworthiness of natural persons based on their social behaviour in multiple contexts or known or predicted personal or personality characteristics. The social score obtained from such AI systems may lead to the detrimental or unfavourable treatment of natural persons or whole groups thereof in social contexts, which are unrelated to the context in which the data was originally generated or collected or to a detrimental treatment that is disproportionate or unjustified to the gravity of their social behaviour. Such AI systems should be therefore prohibited.
2022/03/24
Committee: JURI
Amendment 352 #
Proposal for a regulation
Recital 19
(19) The use of those systems for the purpose of law enforcement should therefore be prohibited, except in three exhaustively listed and narrowly defined situations, where the use is strictly necessary to achieve a substantial public interest, the importance of which outweighs the risks. Those situations involve the search for potential victims of crime, including missing children; certain threats to the life or physical safety of natural persons or of a terrorist attack; and the detection, localisation, identification or prosecution of perpetrators or suspects of the criminal offences referred to in Council Framework Decision 2002/584/JHA38 if those criminal offences are punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years and as they are defined in the law of that Member State. Such threshold for the custodial sentence or detention order in accordance with national law contributes to ensure that the offence should be serious enough to potentially justify the use of ‘real-time’ remote biometric identification systems. Moreover, of the 32 criminal offences listed in the Council Framework Decision 2002/584/JHA, some are in practice likely to be more relevant than others, in that the recourse to ‘real-time’ remote biometric identification will foreseeably be necessary and proportionate to highly varying degrees for the practical pursuit of the detection, localisation, identification or prosecution of a perpetrator or suspect of the different criminal offences listed and having regard to the likely differences in the seriousness, probability and scale of the harm or possible negative consequences. _________________ 38 Council Framework Decision 2002/584/JHA of 13 June 2002 on the European arrest warrant and the surrender procedures between Member States (OJ L 190, 18.7.2002, p. 1).
2022/03/24
Committee: JURI
Amendment 353 #
Proposal for a regulation
Recital 19
(19) The use of those systems for the purpose of law enforcement should therefore be prohibited, except in three exhaustively listed and narrowly defined situations, where the use is strictly necessary to achieve a substantial public interest, the importance of which outweighs the risks. Those situations involve the search for potential victims of crime, including missing children; certain threats to the life or physical safety of natural persons or of a terrorist attack; and the detection, localisation, identification or prosecution of perpetrators or suspects of the criminal offences referred to in Council Framework Decision 2002/584/JHA38 if those criminal offences are punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years and as they are defined in the law of that Member State. Such threshold for the custodial sentence or detention order in accordance with national law contributes to ensure that the offence should be serious enough to potentially justify the use of ‘real-time’ remote biometric identification systems. Moreover, of the 32 criminal offences listed in the Council Framework Decision 2002/584/JHA, some are in practice likely to be more relevant than others, in that the recourse to ‘real-time’ remote biometric identification will foreseeably be necessary and proportionate to highly varying degrees for the practical pursuit of the detection, localisation, identification or prosecution of a perpetrator or suspect of the different criminal offences listed and having regard to the likely differences in the seriousness, probability and scale of the harm or possible negative consequences. _________________ 38 Council Framework Decision 2002/584/JHA of 13 June 2002 on the European arrest warrant and the surrender procedures between Member States (OJ L 190, 18.7.2002, p. 1) and certain threats to the life or physical safety of natural persons or of a terrorist attack.
2022/03/24
Committee: JURI
Amendment 374 #
Proposal for a regulation
Recital 8
(8) The notion of remote biometric identification system as used in this Regulation should be defined functionally, as an AI system intended for the identification of natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database, and without prior knowledge whether the targeted person will be present and can be identified, irrespectively of the particular technology, processes or types of biometric data used. Considering their different characteristics and manners in which they are used, as well as the different risks involved, a distinction should be made between ‘real-time’ and ‘post’ remote biometric identification systems. In the case of ‘real-time’ systems, the capturing of the biometric data, the comparison and the identification occur all instantaneously, near-instantaneously or in any event without a significant delay. In this regard, there should be no scope for circumventing the rules of this Regulation on the ‘real- time’ use of the AI systems in question by providing for minor delays. ‘Real-time’ systems involve the use of ‘live’ or ‘near- ‘live’ material, such as video footage, generated by a camera or other device with similar functionality. In the case of ‘post’ systems, in contrast, the biometric data have already been captured and the comparison and identification occur only after a significant delay. This involves material, such as pictures or video footage generated by closed circuit television cameras or private devices, which has been generated before the use of the system in respect of the natural persons concerned. The notion of remote biometric identification system shall not include verification or authentification systems whose sole purpose is to confirm that a specific natural person is the person he or she claims to be, and systems that are used to confirm the identity of a natural person for the sole purpose of having access to a service, a device or premises.
2022/06/13
Committee: IMCOLIBE
Amendment 375 #
Proposal for a regulation
Recital 8
(8) The notion of remote biometric identification system as used in this Regulation should be defined functionally, as an AI system intended for the identification of natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database, and without prior knowledge whether the targeted person will be present and can be identified, irrespectively of the particular technology, processes or types of biometric data used. Considering their different characteristics and manners in which they are used, as well as the different risks involved, a distinction should be made between ‘real-time’ and ‘post’ remote biometric identification systems. In the case of ‘real-time’ systems, the capturing of the biometric data, the comparison and the identification occur all instantaneously, near-instantaneously or in any event without a significant delay. In this regard, there should be no scope for circumventing the rules of this Regulation on the ‘real- time’ use of the AI systems in question by providing for minor delays. ‘Real-time’ systems involve the use of ‘live’ or ‘near- ‘live’ material, such as video footage, generated by a camera or other device with similar functionality. In the case of ‘post’ systems, in contrast, the biometric data have already been captured and the comparison and identification occur only after a significant delay. This involves material, such as pictures or video footage generated by closed circuit television cameras or private devices, which has been generated before the use of the system in respect of the natural persons concerned. The notion of remote biometric identification system shall not include authentification and verification systems whose purpose is to confirm, based on prior consent, that a specific natural person is the person he or she claims to be or to confirm the identity of a natural person for the purpose of having access to a service, a device or premises.
2022/06/13
Committee: IMCOLIBE
Amendment 376 #
Proposal for a regulation
Recital 35
(35) AI systems used in education or vocational training, notably for determining access or assigning persons to educational and vocational training institutions or to evaluate persons on tests as part of or as a precondition for their education should be considered prohibited, since they may determine the educational and professional course of a person’s life and therefore affect their ability to secure their livelihood. Due to the reproduction of the inherent biases of our societies, such systems may violate the right to education and training as well as the right not to be discriminated against and perpetuate historical patterns of discrimination.
2022/03/24
Committee: JURI
Amendment 378 #
Proposal for a regulation
Recital 36
(36) AI systems used in employment, workers management and access to self-employment, notably for the recruitment and selection of persons, for making decisions on promotion and termination and for task allocation, monitoring or evaluation of persons in work-related contractual relationships, should also be classified as prohibited, since those systems may appreciably impact future career prospects and livelihoods of these persons. Relevant work-related contractual relationships should involve employees and persons providing services through platforms as referred to in the Commission Work Programme 2021. Such persons should in principle not be considered users within the meaning of this Regulation. Throughout the recruitment process and in the evaluation, promotion, or retention of persons in work-related contractual relationships, such systems may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. AI systems used to monitor the performance and behaviour of these persons may also impact their rights to data protection and privacy.
2022/03/24
Committee: JURI
Amendment 381 #
Proposal for a regulation
Recital 37
(37) Another area in which the use of AI systems deserves special consideration is the access to and enjoyment of certain essential private and public services and benefits necessary for people to fully participate in society or to improve one’s standard of living. In particular, AI systems used to evaluate the credit score or creditworthiness of natural persons should be classified as high-risk AI systems, since they determine those persons’ access to financial resources or essential services such as housing, electricity, and telecommunication services. AI systems used for this purpose may lead to discrimination of persons or groups and perpetuate historical patterns of discrimination, for example based on racial or ethnic origins, disabilities, age, sexual orientation, or create new forms of discriminatory impacts. Considering the very limited scale of the impact and the available alternatives on the market, it is appropriate to exempt AI systems for the purpose of creditworthiness assessment and credit scoring when put into service by small-scale providers for their own use. Natural persons applying for or receiving public assistance benefits and services from public authorities are typically dependent on those benefits and services and in a vulnerable position in relation to the responsible authorities. If AI systems are used for determining whether such benefits and services should be denied, reduced, revoked or reclaimed by authorities, they may have a significant impact on persons’ livelihood and may infringe their fundamental rights, such as the right to social protection, non-discrimination, human dignity or an effective remedy. Those systems should therefore be banned. Nonetheless, this Regulation should not hamper the development and use of innovative approaches in the public administration, which would stand to benefit from a wider use of compliant and safe AI systems, provided that those systems do not entail an unacceptable risk to legal and natural persons. Finally, AI systems used to dispatch or establish priority in the dispatching of emergency first response services should also be classified as high-risk since they make decisions in very critical situations for the life and health of persons and their property.
2022/03/24
Committee: JURI
Amendment 382 #
Proposal for a regulation
Recital 38
(38) Actions by law enforcement authorities involving certain uses of AI systems are characterised by a significant degree of power imbalance and may lead to surveillance, arrest or deprivation of a natural person’s liberty as well as other adverse impacts on fundamental rights guaranteed in the Charter. In particular, if the AI system is not trained with high quality data, does not meet adequate requirements in terms of its accuracy or robustness, or is not properly designed and tested before being put on the market or otherwise put into service, it may single out people in a discriminatory or otherwise incorrect or unjust manner. Furthermore, the exercise of important procedural fundamental rights, such as the right to an effective remedy and to a fair trial as well as the right of defence and the presumption of innocence, could be hampered, in particular, where such AI systems are not sufficiently transparent, explainable and documented. It is therefore appropriate to prohibit some AI systems intended to be used in the law enforcement context where accuracy, reliability and transparency is particularly important to avoid adverse impacts, retain public trust and ensure accountability and effective redress. In view of the nature of the activities in question and the risks relating thereto, those prohibited AI systems should include in particular AI systems intended to be used by law enforcement authorities for individual risk assessments, polygraphs and similar tools or to detect the emotional state of a natural person, to detect ‘deep fakes’, for the evaluation of the reliability of evidence in criminal proceedings, for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons, or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups, and for profiling in the course of detection, investigation or prosecution of criminal offences, as well as for crime analytics regarding natural persons. AI systems specifically intended to be used for administrative proceedings by tax and customs authorities should not be included in such a ban.
2022/03/24
Committee: JURI
Amendment 385 #
Proposal for a regulation
Recital 39
(39) AI systems used in migration, asylum and border control management affect people who are often in particularly vulnerable position and who are dependent on the outcome of the actions of the competent public authorities. The accuracy, non-discriminatory nature and transparency of the AI systems used in those contexts are therefore particularly important to guarantee the respect of the fundamental rights of the affected persons, notably their rights to free movement, non-discrimination, protection of private life and personal data, international protection and good administration. It is therefore appropriate to prohibit AI systems intended to be used by the competent public authorities charged with tasks in the fields of migration, asylum and border control management as polygraphs and similar tools or to detect the emotional state of a natural person; and for assessing certain risks posed by natural persons entering the territory of a Member State or applying for visa or asylum; for verifying the authenticity of the relevant documents of natural persons; for assisting competent public authorities for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the objective to establish the eligibility of the natural persons applying for a status. Other AI systems in the area of migration, asylum and border control management covered by this Regulation should comply with the relevant procedural requirements set by the Directive 2013/32/EU of the European Parliament and of the Council49, the Regulation (EC) No 810/2009 of the European Parliament and of the Council50 and other relevant legislation. _________________ 49 Directive 2013/32/EU of the European Parliament and of the Council of 26 June 2013 on common procedures for granting and withdrawing international protection (OJ L 180, 29.6.2013, p. 60). 50 Regulation (EC) No 810/2009 of the European Parliament and of the Council of 13 July 2009 establishing a Community Code on Visas (Visa Code) (OJ L 243, 15.9.2009, p. 1).
2022/03/24
Committee: JURI
Amendment 393 #
Proposal for a regulation
Recital 11
(11) In light of their digital nature, certain AI systems should fall within the scope of this Regulation even when they are neither placed on the market, nor put into service, nor used in the Union. This is the case for example of an operator established in the Union that contracts certain services to an operator established outside the Union in relation to an activity to be performed by an AI system that would qualify as high-risk and whose effects impact natural persons located in the Union. In those circumstances, the AI system used by the operator outside the Union could process data lawfully collected in and transferred from the Union, and provide to the contracting operator in the Union the output of that AI system resulting from that processing, without that AI system being placed on the market, put into service or used in the Union. To prevent the circumvention of this Regulation and to ensure an effective protection of natural persons located in the Union, this Regulation should also apply to providers and users of AI systems that are established in a third country, to the extent the output produced by those systems is used in the Union. Nonetheless, to take into account existing arrangements and special needs for cooperation with foreign partners with whom information and evidence is exchanged, this Regulation should not apply to public authorities of a third country and international organisations when acting in the framework of international agreements concluded at national or European level for law enforcement and judicial cooperation with the Union or with its Member States. Such agreements have been concluded bilaterally between Member States and third countries or between the European Union, Europol and other EU agencies and third countries and international organisations. This exception should nevertheless be limited to trusted countries and international organizations that share the Union’s values.
2022/06/13
Committee: IMCOLIBE
Amendment 399 #
Proposal for a regulation
Recital 12 a (new)
(12 a) This Regulation should not undermine research and development activity and should respect freedom of science. It is therefore necessary to exclude from its scope AI systems specifically developed and put into service for the sole purpose of scientific research and development and to ensure that the Regulation does not otherwise affect scientific research and development activity on AI systems. As regards product oriented research activity by providers, the provisions of this Regulation should apply insofar as such research leads to or entails placing of an AI system on the market or putting it into service. Under all circumstances, any research and development activity should be carried out in accordance with recognised ethical standards for scientific research.
2022/06/13
Committee: IMCOLIBE
Amendment 413 #
Proposal for a regulation
Recital 14
(14) In order to introduce a proportionate and effective set of binding rules for AI systems, a clearly defined risk- based approach should be followed. That approach should tailor the type and content of such rules to the intensity and scope of the risks that AI systems can generate for individuals and society, rather than depend on the type of technology. It is therefore necessary to prohibit certain artificial intelligence practices, to lay down requirements for high-risk AI systems and obligations for the relevant operators, and to lay down transparency obligations for certain AI systems.
2022/06/13
Committee: IMCOLIBE
Amendment 428 #
Proposal for a regulation
Recital 16
(16) The placing on the market, putting into service or use of certain AI systems intended to distort human behaviour, whereby physical or psychological harms are likely to occur, should be forbidden. In particular, AI systems that deploy subliminal components that natural persons cannot perceive, that exploit the vulnerabilities of any groups, or that use purposefully manipulative techniques with the intention to materially distort the behaviour of a person in a manner that causes or is likely to cause harm to that or another person or to their rights or to the values of the Union should be prohibited. The intention may not be presumed if the distortion of human behaviour results from factors external to the AI system which are outside of the control of the provider or the user. Research for legitimate purposes in relation to such AI systems should not be stifled by the prohibition, if such research does not amount to use of the AI system in human-machine relations that exposes natural persons to harm and such research is carried out in accordance with recognised ethical standards for scientific research.
2022/06/13
Committee: IMCOLIBE
Amendment 434 #
Proposal for a regulation
Recital 17
(17) AI systems providing social scoring of natural persons for general purpose by public authorities or on their behalf may lead to discriminatory outcomes and the exclusion of certain groups. They may violate the right to dignity and non-discrimination and the values of equality and justice. Such AI systems evaluate or classify the trustworthiness of natural persons based on their social behaviour in multiple contexts or known or predicted personal or personality characteristics using trustworthiness, good citizenship, patriotism, deviancy, or any other such metric as a proxy. The social score obtained from such AI systems may lead to the detrimental or unfavourable treatment of natural persons or whole groups thereof in social contexts, which are unrelated to the context in which the data was originally generated or collected or to a detrimental treatment that is disproportionate or unjustified to the gravity of their social behaviour. This detrimental treatment can also be effected by providing undue and unjustified privileges to groups of people based on their social score. Such AI systems should be therefore prohibited.
2022/06/13
Committee: IMCOLIBE
Amendment 441 #
Proposal for a regulation
Recital 17 a (new)
(17 a) AI systems used in law enforcement and criminal justice contexts based on predictive methods, profiling and risk assessment pose an unacceptable risk to fundamental rights and in particular to the right of non- discrimination, insofar as they contradict the fundamental right to be presumed innocent and are reflective of historical, systemic, institutional and societal discrimination and other discriminatory practices. These AI systems should therefore be prohibited;
2022/06/13
Committee: IMCOLIBE
Amendment 443 #
Proposal for a regulation
Recital 17 a (new)
(17 a) AI systems used by law enforcement authorities or on their behalf to predict the probability of a natural person to offend or to reoffend, based on profiling and individual risk-assessment hold a particular risk of discrimination against certain persons or groups of persons, as they violate human dignity as well as the key legal principle of presumption of innocence. Such AI systems should therefore be prohibited.
2022/06/13
Committee: IMCOLIBE
Amendment 450 #
Proposal for a regulation
Recital 18
(18) The use of AI systems for ‘real- time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement is considered particularly intrusive in the rights and freedoms of the concerned persons, to the extent that it may affect the private life of a large part of the population, evoke a feeling of constant surveillance and indirectly dissuade the exercise of the freedom of assembly and other fundamental rights. In addition, the immediacy of the impact and the limited opportunities for further checks or corrections in relation to the use of such systems operating in ‘real-time’ carry heightened risks for the rights and freedoms of the persons that are concerned by law enforcement activities. The use of those systems in publicly accessible places should therefore be prohibited.
2022/06/13
Committee: IMCOLIBE
Amendment 451 #
Proposal for a regulation
Recital 18
(18) The use of AI systems for ‘real- time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement is considered particularly intrusive in the rights and freedoms of the concerned persons, to the extent that it may affect the private life of a large part of the population, evoke a feeling of constant surveillance and indirectly dissuade the exercise of the freedom of assembly and other fundamental rights. In addition, the immediacy of the impact and the limited opportunities for further checks or corrections in relation to the use of such systems operating in ‘real-time’ carry heightened risks for the rights and freedoms of the persons that are concerned by law enforcement activities. The use of those systems in publicly accessible places should therefore be prohibited.
2022/06/13
Committee: IMCOLIBE
Amendment 454 #
Proposal for a regulation
Recital 18
(18) The use of AI systems for ‘real- time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement is considered particularly intrusive in the rights and freedoms of the concerned persons, to the extent that it may affect the private life of a large part of the population, evoke a feeling of constant surveillance and indirectly dissuade the exercise of the freedom of assembly and other fundamental rights. In addition, the immediacy of the impact and the limited opportunities for further checks or corrections in relation to the use of such systems operating in ‘real-time’ carry heightened risks for the rights and freedoms of the persons that are concerned by law enforcement activities.
2022/06/13
Committee: IMCOLIBE
Amendment 464 #
Proposal for a regulation
Recital 19
(19) The use of those systems for the purpose of law enforcement should therefore be prohibited, except in three exhaustively listed and narrowly defined situations, where the use is strictly necessary to achieve a substantial public interest, the importance of which outweighs the risks. Those situations involve the search for potential victims of crime, including missing children; certain threats to the life or physical safety of natural persons or of a terrorist attack; and the detection, localisation, identification or prosecution of perpetrators or suspects of the criminal offences referred to in Council Framework Decision 2002/584/JHA38 if those criminal offences are punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years and as they are defined in the law of that Member State. Such threshold for the custodial sentence or detention order in accordance with national law contributes to ensure that the offence should be serious enough to potentially justify the use of ‘real-time’ remote biometric identification systems. Moreover, of the 32 criminal offences listed in the Council Framework Decision 2002/584/JHA, some are in practice likely to be more relevant than others, in that the recourse to ‘real-time’ remote biometric identification will foreseeably be necessary and proportionate to highly varying degrees for the practical pursuit of the detection, localisation, identification or prosecution of a perpetrator or suspect of the different criminal offences listed and having regard to the likely differences in the seriousness, probability and scale of the harm or possible negative consequences. _________________ 38 Council Framework Decision 2002/584/JHA of 13 June 2002 on the European arrest warrant and the surrender procedures between Member States (OJ L 190, 18.7.2002, p. 1).deleted
2022/06/13
Committee: IMCOLIBE
Amendment 465 #
Proposal for a regulation
Recital 19
(19) The use of those systems for the purpose of law enforcement should therefore be prohibited, except in three exhaustively listed and narrowly defined situations, where the use is strictly necessary to achieve a substantial public interest, the importance of which outweighs the risks. Those situations involve the search for potential victims of crime, including missing children; certain threats to the life or physical safety of natural persons or of a terrorist attack; and the detection, localisation, identification or prosecution of perpetrators or suspects of the criminal offences referred to in Council Framework Decision 2002/584/JHA38 if those criminal offences are punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years and as they are defined in the law of that Member State. Such threshold for the custodial sentence or detention order in accordance with national law contributes to ensure that the offence should be serious enough to potentially justify the use of ‘real-time’ remote biometric identification systems. Moreover, of the 32 criminal offences listed in the Council Framework Decision 2002/584/JHA, some are in practice likely to be more relevant than others, in that the recourse to ‘real-time’ remote biometric identification will foreseeably be necessary and proportionate to highly varying degrees for the practical pursuit of the detection, localisation, identification or prosecution of a perpetrator or suspect of the different criminal offences listed and having regard to the likely differences in the seriousness, probability and scale of the harm or possible negative consequences. _________________ 38 Council Framework Decision 2002/584/JHA of 13 June 2002 on the European arrest warrant and the surrender procedures between Member States (OJ L 190, 18.7.2002, p. 1).deleted
2022/06/13
Committee: IMCOLIBE
Amendment 467 #
Proposal for a regulation
Recital 19
(19) The use of those systems for the purpose of law enforcement should therefore be prohibited, except in three exhaustively listed and narrowly defined situations, where the use is strictly necessary to achieve a substantial public interest, the importance of which outweighs the risks. Those situations involve the search for potential victims of crime, including missing children; certain threats to the life or physical safety of natural persons or of a terrorist attack; and the detection, localisation, identification or prosecution of perpetrators or suspects of the criminal offences referred to in Council Framework Decision 2002/584/JHA38 if those criminal offences are punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years and as they are defined in the law of that Member State. Such threshold for the custodial sentence or detention order in accordance with national law contributes to ensure that the offence should be serious enough to potentially justify the use of ‘real-time’ remote biometric identification systems. Moreover, of the 32 criminal offences listed in the Council Framework Decision 2002/584/JHA, some are in practice likely to be more relevant than others, in that the recourse to ‘real-time’ remote biometric identification will foreseeably be necessary and proportionate to highly varying degrees for the practical pursuit of the detection, localisation, identification or prosecution of a perpetrator or suspect of the different criminal offences listed and having regard to the likely differences in the seriousness, probability and scale of the harm or possible negative consequences. _________________ 38 Council Framework Decision 2002/584/JHA of 13 June 2002 on the European arrest warrant and the surrender procedures between Member States (OJ L 190, 18.7.2002, p. 1).deleted
2022/06/13
Committee: IMCOLIBE
Amendment 473 #
Proposal for a regulation
Recital 20
(20) In order to ensure that those systems are used in a responsible and proportionate manner, it is also important to establish that, in each of those three exhaustively listed and narrowly defined situations, certain elements should be taken into account, in particular as regards the nature of the situation giving rise to the request and the consequences of the use for the rights and freedoms of all persons concerned and the safeguards and conditions provided for with the use. In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement should be subject to appropriate limits in time and space, having regard in particular to the evidence or indications regarding the threats, the victims or perpetrator. The reference database of persons should be appropriate for each use case in each of the three situations mentioned above.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 474 #
Proposal for a regulation
Recital 20
(20) In order to ensure that those systems are used in a responsible and proportionate manner, it is also important to establish that, in each of those three exhaustively listed and narrowly defined situations, certain elements should be taken into account, in particular as regards the nature of the situation giving rise to the request and the consequences of the use for the rights and freedoms of all persons concerned and the safeguards and conditions provided for with the use. In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement should be subject to appropriate limits in time and space, having regard in particular to the evidence or indications regarding the threats, the victims or perpetrator. The reference database of persons should be appropriate for each use case in each of the three situations mentioned above.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 477 #
Proposal for a regulation
Recital 20
(20) In order to ensure that those systems are used in a responsible and proportionate manner, it is also important to establish that, in each of those three exhaustively listed and narrowly defined situations, certain elements should be taken into account, in particular as regards the nature of the situation giving rise to the request and the consequences of the use for the rights and freedoms of all persons concerned and the safeguards and conditions provided for with the use. In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement should be subject to appropriate limits in time and space, having regard in particular to the evidence or indications regarding the threats, the victims or perpetrator. The reference database of persons should be appropriate for each use case in each of the three situations mentioned above.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 483 #
Proposal for a regulation
Recital 21
(21) Each use of a ‘real-time’ remote biometric identification system in publicly accessible spaces for the purpose of law enforcement should be subject to an express and specific authorisation by a judicial authority or by an independent administrative authority of a Member State. Such authorisation should in principle be obtained prior to the use, except in duly justified situations of urgency, that is, situations where the need to use the systems in question is such as to make it effectively and objectively impossible to obtain an authorisation before commencing the use. In such situations of urgency, the use should be restricted to the absolute minimum necessary and be subject to appropriate safeguards and conditions, as determined in national law and specified in the context of each individual urgent use case by the law enforcement authority itself. In addition, the law enforcement authority should in such situations seek to obtain an authorisation as soon as possible, whilst providing the reasons for not having been able to request it earlier.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 486 #
Proposal for a regulation
Recital 21
(21) Each use of a ‘real-time’ remote biometric identification system in publicly accessible spaces for the purpose of law enforcement should be subject to an express and specific authorisation by a judicial authority or by an independent administrative authority of a Member State. Such authorisation should in principle be obtained prior to the use, except in duly justified situations of urgency, that is, situations where the need to use the systems in question is such as to make it effectively and objectively impossible to obtain an authorisation before commencing the use. In such situations of urgency, the use should be restricted to the absolute minimum necessary and be subject to appropriate safeguards and conditions, as determined in national law and specified in the context of each individual urgent use case by the law enforcement authority itself. In addition, the law enforcement authority should in such situations seek to obtain an authorisation as soon as possible, whilst providing the reasons for not having been able to request it earlier.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 487 #
Proposal for a regulation
Recital 21
(21) Each use of a ‘real-time’ remote biometric identification system in publicly accessible spaces for the purpose of law enforcement should be subject to an express and specific authorisation by a judicial authority or by an independent administrative authority of a Member State. Such authorisation should in principle be obtained prior to the use, except in duly justified situations of urgency, that is, situations where the need to use the systems in question is such as to make it effectively and objectively impossible to obtain an authorisation before commencing the use. In such situations of urgency, the use should be restricted to the absolute minimum necessary and be subject to appropriate safeguards and conditions, as determined in national law and specified in the context of each individual urgent use case by the law enforcement authority itself. In addition, the law enforcement authority should in such situations seek to obtain an authorisation as soon as possible, whilst providing the reasons for not having been able to request it earlier.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 490 #
Proposal for a regulation
Recital 22
(22) Furthermore, it is appropriate to provide, within the exhaustive framework set by this Regulation that such use in the territory of a Member State in accordance with this Regulation should only be possible where and in as far as the Member State in question has decided to expressly provide for the possibility to authorise such use in its detailed rules of national law. Consequently, Member States remain free under this Regulation not to provide for such a possibility at all or to only provide for such a possibility in respect of some of the objectives capable of justifying authorised use identified in this Regulation.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 494 #
Proposal for a regulation
Recital 22
(22) Furthermore, it is appropriate to provide, within the exhaustive framework set by this Regulation that such use in the territory of a Member State in accordance with this Regulation should only be possible where and in as far as the Member State in question has decided to expressly provide for the possibility to authorise such use in its detailed rules of national law. Consequently, Member States remain free under this Regulation not to provide for such a possibility at all or to only provide for such a possibility in respect of some of the objectives capable of justifying authorised use identified in this Regulation.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 495 #
Proposal for a regulation
Recital 22
(22) Furthermore, it is appropriate to provide, within the exhaustive framework set by this Regulation that such use in the territory of a Member State in accordance with this Regulation should only be possible where and in as far as the Member State in question has decided to expressly provide for the possibility to authorise such use in its detailed rules of national law. Consequently, Member States remain free under this Regulation not to provide for such a possibility at all or to only provide for such a possibility in respect of some of the objectives capable of justifying authorised use identified in this Regulation.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 497 #
Proposal for a regulation
Recital 23
(23) The use of AI systems for ‘real- time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement necessarily involves the processing of biometric data. The rules of this Regulation that prohibit, subject to certain exceptions, such use, which are based on Article 16 TFEU, should apply as lex specialis in respect of the rules on the processing of biometric data contained in Article 10 of Directive (EU) 2016/680, thus regulating such use and the processing of biometric data involved in an exhaustive manner. Therefore, such use and processing should only be possible in as far as it is compatible with the framework set by this Regulation, without there being scope, outside that framework, for the competent authorities, where they act for purpose of law enforcement, to use such systems and process such data in connection thereto on the grounds listed in Article 10 of Directive (EU) 2016/680. In this context, this Regulation is not intended to provide the legal basis for the processing of personal data under Article 8 of Directive 2016/680. However, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for purposes other than law enforcement, including by competent authorities, should not be covered by the specific framework regarding such use for the purpose of law enforcement set by this Regulation. Such use for purposes other than law enforcement should therefore not be subject to the requirement of an authorisation under this Regulation and the applicable detailed rules of national law that may give effect to it.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 498 #
Proposal for a regulation
Recital 23
(23) The use of AI systems for ‘real- time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement necessarily involves the processing of biometric data. The rules of this Regulation that prohibit, subject to certain exceptions, such use, which are based on Article 16 TFEU, should apply as lex specialis in respect of the rules on the processing of biometric data contained in Article 10 of Directive (EU) 2016/680, thus regulating such use and the processing of biometric data involved in an exhaustive manner. Therefore, such use and processing should only be possible in as far as it is compatible with the framework set by this Regulation, without there being scope, outside that framework, for the competent authorities, where they act for purpose of law enforcement, to use such systems and process such data in connection thereto on the grounds listed in Article 10 of Directive (EU) 2016/680. In this context, this Regulation is not intended to provide the legal basis for the processing of personal data under Article 8 of Directive 2016/680. However, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for purposes other than law enforcement, including by competent authorities, should not be covered by the specific framework regarding such use for the purpose of law enforcement set by this Regulation. Such use for purposes other than law enforcement should therefore not be subject to the requirement of an authorisation under this Regulation and the applicable detailed rules of national law that may give effect to it.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 499 #
Proposal for a regulation
Recital 23
(23) The use of AI systems for ‘real- time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement necessarily involves the processing of biometric data. The rules of this Regulation that prohibit, subject to certain exceptions, such use, which are based on Article 16 TFEU, should apply as lex specialis in respect of the rules on the processing of biometric data contained in Article 10 of Directive (EU) 2016/680, thus regulating such use and the processing of biometric data involved in an exhaustive manner. Therefore, such use and processing should only be possible in as far as it is compatible with the framework set by this Regulation, without there being scope, outside that framework, for the competent authorities, where they act for purpose of law enforcement, to use such systems and process such data in connection thereto on the grounds listed in Article 10 of Directive (EU) 2016/680. In this context, this Regulation is not intended to provide the legal basis for the processing of personal data under Article 8 of Directive 2016/680. However, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for purposes other than law enforcement, including by competent authorities, should not be covered by the specific framework regarding such use for the purpose of law enforcement set by this Regulation. Such use for purposes other than law enforcement should therefore not be subject to the requirement of an authorisation under this Regulation and the applicable detailed rules of national law that may give effect to it.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 508 #
Proposal for a regulation
Article 5 – paragraph 1 – point a
(a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour without their knowledge;
2022/03/24
Committee: JURI
Amendment 508 #
Proposal for a regulation
Recital 24
(24) Any processing of biometric data and other personal data involved in the use of AI systems for biometric identification, other than in connection to the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement as regulated by this Regulation, including where those systems are used by competent authorities in publicly accessible spaces for other purposes than law enforcement, should continue to comply with all requirements resulting from Article 9(1) of Regulation (EU) 2016/679, Article 10(1) of Regulation (EU) 2018/1725 and Article 10 of Directive (EU) 2016/680, as applicable.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 511 #
Proposal for a regulation
Recital 24
(24) Any processing of biometric data and other personal data involved in the use of AI systems for biometric identification, other than in connection to the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement as regulated by this Regulation, including where those systems are used by competent authorities in publicly accessible spaces for other purposes than law enforcement, should continue to comply with all requirements resulting from Article 9(1) of Regulation (EU) 2016/679, Article 10(1) of Regulation (EU) 2018/1725 and Article 10 of Directive (EU) 2016/680, as applicable.
2022/06/13
Committee: IMCOLIBE
Amendment 512 #
Proposal for a regulation
Article 5 – paragraph 1 – point b
(b) the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, or sexual orientation, in order to materially distort the behaviour of a person pertaining to that group;
2022/03/24
Committee: JURI
Amendment 512 #
Proposal for a regulation
Recital 24
(24) Any processing of biometric data and other personal data involved in the use of AI systems for biometric identification, other than in connection to the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement as regulated by this Regulation, including where those systems are used by competent authorities in publicly accessible spaces for other purposes than law enforcement, should continue to comply with all requirements resulting from Article 9(1) of Regulation (EU) 2016/679, Article 10(1) of Regulation (EU) 2018/1725 and Article 10 of Directive (EU) 2016/680, as applicable.
2022/06/13
Committee: IMCOLIBE
Amendment 515 #
Proposal for a regulation
Article 5 – paragraph 1 – point c – introductory part
(c) the placing on the market, putting into service or use of AI systems by public authorities or on their behalf for the evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social behaviour or known or predicted personal or personality characteristics, with the social score leading to either or both of the following:
2022/03/24
Committee: JURI
Amendment 515 #
Proposal for a regulation
Recital 24 a (new)
(24 a) Fundamental rights in the digital sphere have to be guaranteed to the same extent as in the offline world. The right to privacy needs to be ensured, amongst others through end-to-end encryption in private online communication and the protection of private content against any kind of general or targeted surveillance, be it by public or private actors. Therefore, the use of AI systems violating the right to privacy in online communication services should be prohibited.
2022/06/13
Committee: IMCOLIBE
Amendment 527 #
Proposal for a regulation
Recital 27
(27) High-risk AI systems should only be placed on the Union market or put into service if they comply with certain mandatory requirements. Those requirements should ensure that high-risk AI systems available in the Union or whose output is otherwise used in the Union do not pose unacceptable risks to important Union public interests as recognised and protected by Union law. AI systems identified as high-risk should be limited to those that have a significant harmful impact on the health, safety and fundamental rights of persons in the Union or to Union values as enshrined in Article 2 TEU and such limitation minimises any potential restriction to international trade, if any.
2022/06/13
Committee: IMCOLIBE
Amendment 533 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – introductory part
(d) the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless and in as far as such use is strictly necessary for one of the following objectives.
2022/03/24
Committee: JURI
Amendment 535 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point i
(i) the targeted search for specific potential victims of crime, including missing children;deleted
2022/03/24
Committee: JURI
Amendment 538 #
Proposal for a regulation
Recital 32
(32) As regards stand-alone AI systems, meaning high-risk AI systems other than those that are safety components of products, or which are themselves products, it is appropriate to classify them as high-risk if, in the light of their intended purpose, they pose a high risk of harm to the health, safety or the fundamental rights of persons or to Union values as enshrined in Article 2 TEU, taking into account both the severity of the possible harm and its probability of occurrence and they are used in a number of specifically pre-defined areas specified in the Regulation. The identification of those systems is based on the same methodology and criteria envisaged also for any future amendments of the list of high-risk AI systems. Such systems should be classified as high-risk only insofar as they are built and operated with biometric, biometrics-based, or personal data or they influence decisions of natural persons or make decisions or influence decisions affecting natural persons. This ensures that, when referencing AI systems in pre-defined areas of human activity, this Regulation does not inadvertently apply to AI systems that can have no impact on the health, safety, fundamental rights of natural persons or the values of the Union as enshrined in Article 2 TEU.
2022/06/13
Committee: IMCOLIBE
Amendment 539 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point ii
(ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack;deleted
2022/03/24
Committee: JURI
Amendment 543 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point iii
(iii) the detection, localisation, identification or prosecution of a perpetrator or suspect of a criminal offence referred to in Article 2(2) of Council Framework Decision 2002/584/JHA62 and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years, as determined by the law of that Member State. _________________ 62 Council Framework Decision 2002/584/JHA of 13 June 2002 on the European arrest warrant and the surrender procedures between Member States (OJ L 190, 18.7.2002, p. 1).deleted
2022/03/24
Committee: JURI
Amendment 546 #
Proposal for a regulation
Article 5 – paragraph 1 – point d a (new)
(da) practices listed in Annex IIIa;
2022/03/24
Committee: JURI
Amendment 546 #
Proposal for a regulation
Recital 33
(33) Technical inaccuracies of AI systems intended for the remote biometric identification of natural persons can lead to biased results and entail discriminatory effects. This is particularly relevant when it comes to age, ethnicity, sex or disabilities. Therefore, ‘real-time’ and ‘post’ remote biometric identification systems should be classified as high-risk, except for verification or authentification systems whose sole purpose is to confirm that a specific natural person is the person he or she claims to be, and systems that are used to confirm the identity of a natural person for the sole purpose of having access to a service, a device or premises. In view of the risks that they pose, both types of remote biometric identification systems should be subject to specific requirements on logging capabilities and human oversight.
2022/06/13
Committee: IMCOLIBE
Amendment 547 #
Proposal for a regulation
Article 5 – paragraph 1 – point d b (new)
(db) AI systems intended to be used for the purpose of determining access or assigning natural persons to educational and vocational training institutions;
2022/03/24
Committee: JURI
Amendment 548 #
Proposal for a regulation
Article 5 – paragraph 1 – point d c (new)
(dc) AI systems intended to be used for recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, evaluating candidates in the course of interviews or tests;
2022/03/24
Committee: JURI
Amendment 549 #
Proposal for a regulation
Article 5 – paragraph 1 – point d d (new)
(dd) AI intended to be used for making decisions on promotion and termination of work-related contractual relationships, for task allocation and for monitoring and evaluating performance and behaviour of persons in such relationships;
2022/03/24
Committee: JURI
Amendment 550 #
Proposal for a regulation
Article 5 – paragraph 1 – point d e (new)
(de) AI systems intended to be used by public authorities or on behalf of public authorities to evaluate the eligibility of natural persons for public assistance benefits and services, as well as to grant, reduce, revoke, or reclaim such benefits and services;
2022/03/24
Committee: JURI
Amendment 551 #
Proposal for a regulation
Article 5 – paragraph 1 – point d f (new)
(df) AI systems intended to be used by law enforcement authorities for making individual risk assessments of natural persons in order to assess the risk of a natural person for offending or reoffending or the risk for potential victims of criminal offences;
2022/03/24
Committee: JURI
Amendment 552 #
Proposal for a regulation
Article 5 – paragraph 1 – point d g (new)
(dg) AI systems intended to be used by law enforcement authorities as polygraphs and similar tools or to detect the emotional state of a natural person;
2022/03/24
Committee: JURI
Amendment 553 #
Proposal for a regulation
Article 5 – paragraph 1 – point d h (new)
(dh) AI systems intended to be used by law enforcement authorities for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups;
2022/03/24
Committee: JURI
Amendment 554 #
Proposal for a regulation
Article 5 – paragraph 1 – point d i (new)
(di) AI systems intended to be used by law enforcement authorities for profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 in the course of detection, investigation or prosecution of criminal offences;
2022/03/24
Committee: JURI
Amendment 554 #
Proposal for a regulation
Recital 34
(34) As regards the management and operation of critical infrastructure, it is appropriate to classify as high-risk the AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity, and internet, since their failure or malfunctioning may put at risk the life and health of persons at large scale and lead to appreciable disruptions in the ordinary conduct of social and economic activities.
2022/06/13
Committee: IMCOLIBE
Amendment 555 #
Proposal for a regulation
Article 5 – paragraph 1 – point d j (new)
(dj) AI systems intended to assist competent public authorities for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the eligibility of the natural persons applying for a status.
2022/03/24
Committee: JURI
Amendment 556 #
Proposal for a regulation
Article 5 – paragraph 1 – point d k (new)
(dk) AI systems intended to be used by competent public authorities to assess a risk, including a security risk, a risk of irregular immigration, or a health risk, posed by a natural person who intends to enter or has entered into the territory of a Member State;
2022/03/24
Committee: JURI
Amendment 563 #
Proposal for a regulation
Recital 36
(36) AI systems used for making autonomous decisions or materially influencing decisions in employment, workers management and access to self-employment, notably for the recruitment and selection of persons, for making decisions on promotion and termination and for task allocation, monitoring or evaluation of persons in work-related contractual relationships, should also be classified as high-risk, since those systems may appreciably impact future career prospects and livelihoods of these persons. Relevant work-related contractual relationships should involve employees and persons providing services through platforms as referred to in the Commission Work Programme 2021. Such persons should in principle not be considered users within the meaning of this Regulation. Throughout the recruitment process and in the evaluation, promotion, or retention of persons in work-related contractual relationships, such systems may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. AI systems used to monitor the performance and behaviour of these persons may also impact their rights to data protection and privacy.
2022/06/13
Committee: IMCOLIBE
Amendment 568 #
Proposal for a regulation
Article 5 a (new)
Article 5a Amendments to Annex IIIa 1. The Commission is empowered to adopt delegated acts in accordance with Article 73 to update the list in Annex IIIa by adding prohibited AI practices where such practices pose an unacceptable risk to fundamental rights. 2. When assessing for the purposes of paragraph 1 whether an AI system poses an unacceptable risk to fundamental rights, the Commission shall take into account the following criteria: (a) the intended purpose of the AI system; (b) the extent to which an AI system has been used or is likely to be used; (c) the extent to which the use of an AI system has already had an adverse impact on the fundamental rights or has given rise to significant concerns in relation to the materialisation of such an impact, as demonstrated by reports or documented allegations submitted to national competent authorities; (d) the potential extent of such adverse impact, in particular in terms of its intensity and its ability to affect a plurality of persons; (e) the extent to which potentially adversely impacted persons are dependent on the outcome produced with an AI system, in particular because for practical or legal reasons it is not reasonably possible to opt-out from that outcome; (f) the extent to which potentially adversely impacted persons are in a vulnerable position in relation to the user of an AI system, in particular due to an imbalance of power, knowledge, economic or social circumstances, or age; (g) the extent to which the outcome produced with an AI system is easily reversible, whereby outcomes having an impact on the health or safety of persons shall not be considered as easily reversible
2022/03/24
Committee: JURI
Amendment 582 #
Proposal for a regulation
Article 7 – paragraph 1 – introductory part
1. The Commission is empowered to adopt delegated acts in accordance with Article 73 to update the list in Annex III by adding high-risk AI systems where both of the following conditions are fulfilled:
2022/03/24
Committee: JURI
Amendment 582 #
Proposal for a regulation
Recital 38
(38) Actions by law enforcement authorities involving certain uses of AI systems are characterised by a significant degree of power imbalance and may lead to surveillance, arrest or deprivation of a natural person’s liberty as well as other adverse impacts on fundamental rights guaranteed in the Charter. In particular, if the AI system is not trained with high quality data, does not meet adequate requirements in terms of its accuracy or robustness, or is not properly designed and tested before being put on the market or otherwise put into service, it may single out people in a discriminatory or otherwise incorrect or unjust manner. Furthermore, the exercise of important procedural fundamental rights, such as the right to an effective remedy and to a fair trial as well as the right of defence and the presumption of innocence, could be hampered, in particular, where such AI systems are not sufficiently transparent, explainable and documented. It is therefore appropriate to classify as high-risk a number of AI systems intended to be used in the law enforcement context where accuracy, reliability and transparency is particularly important to avoid adverse impacts, retain public trust and ensure accountability and effective redress. In view of the nature of the activities in question and the risks relating thereto, those high-risk AI systems should include in particular AI systems intended to be used by law enforcement authorities for individual risk assessments, polygraphs and similar tools or to detect the emotional state of natural person, to detect ‘deep fakes’, for the evaluation of the reliability of evidence in criminal proceedings, for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons, or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups, for profiling in the course of detection, investigation or prosecution of criminal offences, as well as for crime analytics regarding natural persons. AI systems specifically intended to be used for administrative proceedings by tax and customs authorities should not be considered high-risk AI systems used by law enforcement authorities for the purposes of prevention, detection, investigation and prosecution of criminal offences.
2022/06/13
Committee: IMCOLIBE
Amendment 583 #
Proposal for a regulation
Article 7 – paragraph 1 – point a
(a) the AI systems are intended to be used in any of the areas listed in points 1 to 8 of Annex III;deleted
2022/03/24
Committee: JURI
Amendment 583 #
Proposal for a regulation
Recital 38
(38) Actions by law enforcement authorities involving certain uses of AI systems are characterised by a significant degree of power imbalance and may lead to surveillance, arrest or deprivation of a natural person’s liberty as well as other adverse impacts on fundamental rights guaranteed in the Charter. In particular, if the AI system is not trained with high quality data, does not meet adequate requirements in terms of its accuracy or robustness, or is not properly designed and tested before being put on the market or otherwise put into service, it may single out people in a discriminatory or otherwise incorrect or unjust manner. Furthermore, the exercise of important procedural fundamental rights, such as the right to an effective remedy and to a fair trial as well as the right of defence and the presumption of innocence, could be hampered, in particular, where such AI systems are not sufficiently transparent, explainable and documented. It is therefore appropriate to classify as high-risk a number of AI systems intended to be used in the law enforcement context where accuracy, reliability and transparency is particularly important to avoid adverse impacts, retain public trust and ensure accountability and effective redress. In view of the nature of the activities in question and the risks relating thereto, those high-risk AI systems should include in particular AI systems intended to be used by law enforcement authorities for individual risk assessments, polygraphs and similar tools or to detect the emotional state of natural person, to detect ‘deep fakes’, for the evaluation of the reliability of evidence in criminal proceedings, for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons, or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups, for profiling in the course of detection, investigation or prosecution of criminal offences, as well as for crime analytics regarding natural persons. AI systems specifically intended to be used for administrative proceedings by tax and customs authorities should not be considered high-risk AI systems used by law enforcement authorities for the purposes of prevention, detection, investigation and prosecution of criminal offences.
2022/06/13
Committee: IMCOLIBE
Amendment 592 #
Proposal for a regulation
Recital 39 a (new)
(39 a) AI systems in migration, asylum and border control management should in no circumstances be used by Member States or European Union institutions as a means to circumvent their international obligations under the Convention of 28 July 1951 relating to the Status of Refugees as amended by the Protocol of 31 January 1967, nor should they be used in any way to infringe on the principle of non-refoulement, or to deny safe and effective legal avenues into the territory of the Union, including the right to international protection;
2022/06/13
Committee: IMCOLIBE
Amendment 657 #
Proposal for a regulation
Article 13 – paragraph 3 – point b – point i a (new)
(ia) An overview of different inputs taken into account by the Artificial Intelligence solution when making decisions.
2022/03/24
Committee: JURI
Amendment 663 #
Proposal for a regulation
Recital 58
(58) Given the nature of AI systems and the risks to safety and fundamental rights possibly associated with their use, including as regards the need to ensure proper monitoring of the performance of an AI system in a real-life setting, it is appropriate to set specific responsibilities for users. Users should in particular use high-risk AI systems in accordance with the instructions of use and certain other obligations should be provided for with regard to monitoring of the functioning of the AI systems and with regard to record-keeping, as appropriate. Given the potential impact and the need for democratic oversight and scrutiny, users of high-risk AI systems that are public authorities or Union institutions, bodies, offices and agencies should be required to conduct a fundamental rights impact assessment prior to commencing the use of a high-risk AI system and should be required to register the use of any high-risk AI systems in a public database.
2022/06/13
Committee: IMCOLIBE
Amendment 683 #
Proposal for a regulation
Recital 64
(64) Given the more extensive experience of professional pre-market certifiers in the field of product safety and the different nature of risks involved, it is appropriate to limit, at least in an initial phase of application of this Regulation, the scope of application of third-party conformity assessment for high-risk AI systems other than those related to products. Therefore, the conformity assessment of such systems should be carried out as a general rule by the provider under its own responsibility, with the only exception of AI systems intended to be used for the remote biometric identification of persons and AI systems intended to be used to make inferences on the basis of biometric data that produce legal effects or affect the rights and freedoms of natural persons. For those types of AI systems the involvement of a notified body in the conformity assessment should be foreseen, to the extent they are not prohibited.
2022/06/13
Committee: IMCOLIBE
Amendment 698 #
Proposal for a regulation
Recital 68
(68) Under certain conditions, rapid availability of innovative technologies may be crucial for health and safety of persons and for society as a whole. It is thus appropriate that under exceptional reasons of public security or protection of life and health of natural persons and the protection of industrial and commercial property, Member States could authorise the placing on the market or putting into service of AI systems which have not undergone a conformity assessment.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 718 #
Proposal for a regulation
Article 29 a (new)
Article 29a Recourse for parties affected by decisions of high-risk Artificial Intelligence systems 1. Where the decision of a high-risk Artificial Intelligence system directly affects a natural person, that person is entitled to an explanation of the decision, including but not limited to: (a) The inputs taken into account by the Artificial Intelligence solution in decision-making. (b) Where feasible, the inputs that had the strongest influence on the decision. 2. Where the decision of a high-risk Artificial Intelligence system directly affects a natural person's economic or social prospects (for instance, job or educational opportunities, access to benefits, public services or credit), and without prejudice to existing sectoral legislation, that person may request that the decision be re-evaluated by a human being. This re-evaluation must take place within a reasonable time following the request.
2022/03/24
Committee: JURI
Amendment 739 #
Proposal for a regulation
Recital 76
(76) In order to ensure an effective and harmonised implementation of this Regulation, to achieve a high level of trustworthiness and of protection of health, safety, fundamental rights and the Union values enshrined in Article 2 TEU across the Union with regards to artificial intelligence systems, to actively support Member States, Union institutions, bodies, offices and agencies in matters pertaining to this Regulation, to reduce the fragmentation of the internal market, and to increase the uptake of artificial intelligence throughout the Union, a European Union Artificial Intelligence Office should be established. The AI Office should have legal personality, should act in full independence, and should be adequately funded and staffed. Member States should provide the strategic direction and control of the AI Office through the management board of the AI Office, alongside the Commission, the EDPS, and the FRA. An executive director should be responsible for the coordination of the AI Office’s operations and for the implementation of its work programme. Industry, start-ups and SMEs, and civil society should formally participate in the work of the AI Office through an advisory forum that should ensure varied stakeholder representation and should advise the AI Office on matters pertaining to this Regulation.
2022/06/13
Committee: IMCOLIBE
Amendment 741 #
Proposal for a regulation
Recital 76
(76) In order to facilitate a smooth, effective and harmonised implementation of this Regulation a European Artificial Intelligence Board should be established as a body of the Union and should have legal personality. The Board should be responsible for a number of advisory tasks, including issuing opinions, recommendations, advice or guidance on matters related to the implementation of this Regulation, including on technical specifications or existing standards regarding the requirements established in this Regulation and providing advice to and assisting the Commission and the national competent authorities on specific questions related to artificial intelligence.
2022/06/13
Committee: IMCOLIBE
Amendment 750 #
Proposal for a regulation
Article 52 – paragraph 2
2. Users of an emotion recognition system or a biometric categorisation system shall inform of the operation of the system the natural persons exposed thereto. This obligation shall not apply to AI systems used for biometric categorisation, which are permitted by law to detect, prevent and investigate criminal offences.
2022/03/24
Committee: JURI
Amendment 757 #
Proposal for a regulation
Article 52 – paragraph 3 – subparagraph 1
However, the first subparagraph shall not apply where the use is authorised by law to detect, prevent, investigate and prosecute criminal offences or it is necessary for the exercise of the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU, and subject to appropriate safeguards for the rights and freedoms of third parties.
2022/03/24
Committee: JURI
Amendment 905 #
Proposal for a regulation
Article 73 – paragraph 2
2. The delegation of power referred to in Article 4, Article 5a(1), Article 7(1), Article 11(3), Article 43(5) and (6) and Article 48(5) shall be conferred on the Commission for an indeterminate period of time from [entering into force of the Regulation].
2022/03/24
Committee: JURI
Amendment 906 #
Proposal for a regulation
Article 73 – paragraph 3
3. The delegation of power referred to in Article 4, Article 5a(1), Article 7(1), Article 11(3), Article 43(5) and (6) and Article 48(5) may be revoked at any time by the European Parliament or by the Council. A decision of revocation shall put an end to the delegation of power specified in that decision. It shall take effect the day following that of its publication in the Official Journal of the European Union or at a later date specified therein. It shall not affect the validity of any delegated acts already in force.
2022/03/24
Committee: JURI
Amendment 908 #
Proposal for a regulation
Article 73 – paragraph 5
5. Any delegated act adopted pursuant to Article 4, Article 5a(1), Article 7(1), Article 11(3), Article 43(5) and (6) and Article 48(5) shall enter into force only if no objection has been expressed by either the European Parliament or the Council within a period of three months of notification of that act to the European Parliament and the Council or if, before the expiry of that period, the European Parliament and the Council have both informed the Commission that they will not object. That period shall be extended by three months at the initiative of the European Parliament or of the Council.
2022/03/24
Committee: JURI
Amendment 931 #
Proposal for a regulation
Annex III – paragraph 1 – point 3 – point a
(a) AI systems intended to be used for the purpose of determining access or assigning natural persons to educational and vocational training institutions;deleted
2022/03/24
Committee: JURI
Amendment 932 #
Proposal for a regulation
Annex III – paragraph 1 – point 3 – point b
(b) AI systems intended to be used for the purpose of assessing students in educational and vocational training institutions and for assessing participants in tests commonly required for admission to educational institutions.deleted
2022/03/24
Committee: JURI
Amendment 933 #
Proposal for a regulation
Annex III – paragraph 1 – point 4 – point a
(a) AI systems intended to be used for recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, evaluating candidates in the course of interviews or tests;deleted
2022/03/24
Committee: JURI
Amendment 934 #
Proposal for a regulation
Annex III – paragraph 1 – point 4 – point b
(b) AI intended to be used for making decisions on promotion and termination of work-related contractual relationships, for task allocation and for monitoring and evaluating performance and behaviour of persons in such relationships.deleted
2022/03/24
Committee: JURI
Amendment 936 #
Proposal for a regulation
Annex III – paragraph 1 – point 5 – point a
(a) AI systems intended to be used by public authorities or on behalf of public authorities to evaluate the eligibility of natural persons for public assistance benefits and services, as well as to grant, reduce, revoke, or reclaim such benefits and services;deleted
2022/03/24
Committee: JURI
Amendment 938 #
Proposal for a regulation
Annex III – paragraph 1 – point 5 – point b
(b) AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems put into service by small scale providers for their own use;deleted
2022/03/24
Committee: JURI
Amendment 946 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point a
(a) AI systems intended to be used by law enforcement authorities for making individual risk assessments of natural persons in order to assess the risk of a natural person for offending or reoffending or the risk for potential victims of criminal offences;deleted
2022/03/24
Committee: JURI
Amendment 948 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point b
(b) AI systems intended to be used by law enforcement authorities as polygraphs and similar tools or to detect the emotional state of a natural person;deleted
2022/03/24
Committee: JURI
Amendment 950 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point e
(e) AI systems intended to be used by law enforcement authorities for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups;deleted
2022/03/24
Committee: JURI
Amendment 951 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point f
(f) AI systems intended to be used by law enforcement authorities for profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 in the course of detection, investigation or prosecution of criminal offences;deleted
2022/03/24
Committee: JURI
Amendment 953 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point a
(a) AI systems intended to be used by competent public authorities as polygraphs and similar tools or to detect the emotional state of a natural person;deleted
2022/03/24
Committee: JURI
Amendment 955 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point b
(b) AI systems intended to be used by competent public authorities to assess a risk, including a security risk, a risk of irregular immigration, or a health risk, posed by a natural person who intends to enter or has entered into the territory of a Member State;deleted
2022/03/24
Committee: JURI
Amendment 957 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point d
(d) AI systems intended to assist competent public authorities for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the eligibility of the natural persons applying for a status.deleted
2022/03/24
Committee: JURI
Amendment 960 #
Proposal for a regulation
Annex III a (new)
ANNEX IIIa ADDITIONAL PROHIBITED ARTIFICIAL INTELLIGENCE PRACTICES REFERRED TO IN ARTICLE 5(1) 1. Additional Prohibited Artificial Intelligence Practices pursuant to Article 5(1)(da) are: (a) AI systems intended to be used for the purpose of assessing students in educational and vocational training institutions and for assessing participants in tests commonly required for admission to educational institutions.
2022/03/24
Committee: JURI
Amendment 1037 #
Proposal for a regulation
Article 3 – paragraph 1 – point 34
(34) ‘emotion recognition system’ means an AI system for the purpose of identifying or inferring emotions, thoughts or intentions of natural persons on the basis of their biometric or biometrics-based data;
2022/06/13
Committee: IMCOLIBE
Amendment 1044 #
Proposal for a regulation
Article 3 – paragraph 1 – point 35
(35) ‘biometric categorisation system’ means an AI system for the purpose of assigning natural persons to specific categories, such as sex, age, hair colour, eye colour, tattoos, ethnic origin or sexual or political orientation, or inferring their characteristics and attributes on the basis of their biometric or biometrics-based data;
2022/06/13
Committee: IMCOLIBE
Amendment 1052 #
Proposal for a regulation
Article 3 – paragraph 1 – point 36
(36) ‘remote biometric identification system’ means an AI system for the purpose of identifying natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database, and without prior knowledge of the user of the AI system whether the person will be present and can be identified, excluding verification/authentification systems whose sole purpose is to confirm that a specific natural person is the person he or she claims to be, and systems that are used to confirm the identity of a natural person for the sole purpose of having access to a service, a device or premises;
2022/06/13
Committee: IMCOLIBE
Amendment 1169 #
Proposal for a regulation
Article 5 – paragraph 1 – point a
(a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness with the objective to or the effect of materially distorting a person’s behaviour in a manner that causes or is reasonably likely to cause that person or another person physical or psychological harm;
2022/06/13
Committee: IMCOLIBE
Amendment 1172 #
Proposal for a regulation
Article 5 – paragraph 1 – point a a (new)
(a a) The placing on the market, putting into service or use of an AI system that deploys purposefully manipulative or deceptive techniques in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm, infringe on that person’s or another person’s fundamental rights, or contravene the Union values enshrined in Article 2 TEU;
2022/06/13
Committee: IMCOLIBE
Amendment 1181 #
Proposal for a regulation
Article 5 – paragraph 1 – point b
(b) the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of an individual, including characteristics of such individual’s known or predicted personality or social or economic situation, or of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
2022/06/13
Committee: IMCOLIBE
Amendment 1185 #
Proposal for a regulation
Article 5 – paragraph 1 – point b
(b) the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
2022/06/13
Committee: IMCOLIBE
Amendment 1233 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – introductory part
(d) the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless and in as far as such use is strictly necessary for one of the following objectives:
2022/06/13
Committee: IMCOLIBE
Amendment 1234 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – introductory part
(d) the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless and in as far as such use is strictly necessary for one of the following objectives:.
2022/06/13
Committee: IMCOLIBE
Amendment 1239 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – introductory part
(d) the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless and in as far as such use is strictly necessary for one of the following objectives:;
2022/06/13
Committee: IMCOLIBE
Amendment 1250 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point i
(i) the targeted search for specific potential victims of crime, including missing children;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1253 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point i
(i) the targeted search for specific potential victims of crime, including missing children;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1254 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point i
(i) the targeted search for specific potential victims of crime, including missing children;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1260 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point ii
(ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1261 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point ii
(ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1263 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point ii
(ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1269 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point iii
(iii) the detection, localisation, identification or prosecution of a perpetrator or suspect of a criminal offence referred to in Article 2(2) of Council Framework Decision 2002/584/JHA62 and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years, as determined by the law of that Member State. _________________ 62 Council Framework Decision 2002/584/JHA of 13 June 2002 on the European arrest warrant and the surrender procedures between Member States (OJ L 190, 18.7.2002, p. 1).deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1273 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point iii
(iii) the detection, localisation, identification or prosecution of a perpetrator or suspect of a criminal offence referred to in Article 2(2) of Council Framework Decision 2002/584/JHA62 and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years, as determined by the law of that Member State. _________________ 62 Council Framework Decision 2002/584/JHA of 13 June 2002 on the European arrest warrant and the surrender procedures between Member States (OJ L 190, 18.7.2002, p. 1).deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1274 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point iii
(iii) the detection, localisation, identification or prosecution of a perpetrator or suspect of a criminal offence referred to in Article 2(2) of Council Framework Decision 2002/584/JHA62 and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years, as determined by the law of that Member State. _________________ 62 Council Framework Decision 2002/584/JHA of 13 June 2002 on the European arrest warrant and the surrender procedures between Member States (OJ L 190, 18.7.2002, p. 1).deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1286 #
Proposal for a regulation
Article 5 – paragraph 1 – point d a (new)
(d a) the use of an AI system for the general monitoring, detection and interpretation of private content in interpersonal communication services, including all measures that would undermine end-to-end encryption.
2022/06/13
Committee: IMCOLIBE
Amendment 1290 #
Proposal for a regulation
Article 5 – paragraph 1 – point d a (new)
(d a) The use of predictive, profiling and risk assessment AI systems in law enforcement and criminal justice;
2022/06/13
Committee: IMCOLIBE
Amendment 1292 #
Proposal for a regulation
Article 5 – paragraph 1 – point d b (new)
(d b) The use of predictive, profiling and risk assessment AI systems by or on behalf of competent authorities in migration, asylum or border control management, to profile an individual or assess a risk, including a security risk, a risk of irregular immigration, or a health risk, posed by a natural person who intends to enter or has entered the territory of a Member State, on the basis of personal or sensitive data, known or predicted, except for the sole purpose of identifying specific care and support needs;
2022/06/13
Committee: IMCOLIBE
Amendment 1303 #
Proposal for a regulation
Article 5 – paragraph 1 – point d c (new)
(d c) the placing on the market, putting into service, or use of AI systems by law enforcement authorities or by competent authorities in migration, asylum and border control management, such as polygraphs and similar tools to detect deception, trustworthiness or related characteristics;
2022/06/13
Committee: IMCOLIBE
Amendment 1308 #
Proposal for a regulation
Article 5 – paragraph 1 – point d d (new)
(d d) the use of AI systems by or on behalf of competent authorities in migration, asylum and border control management, to forecast or predict individual or collective movement for the purpose of, or in any way reasonably foreseeably leading to, interdicting, curtailing or preventing migration or border crossings;
2022/06/13
Committee: IMCOLIBE
Amendment 1348 #
Proposal for a regulation
Article 5 – paragraph 2
2. The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement for any of the objectives referred to in paragraph 1 point d) shall take into account the following elements: (a) the nature of the situation giving rise to the possible use, in particular the seriousness, probability and scale of the harm caused in the absence of the use of the system; (b) the consequences of the use of the system for the rights and freedoms of all persons concerned, in particular the seriousness, probability and scale of those consequences. In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement for any of the objectives referred to in paragraph 1 point d) shall comply with necessary and proportionate safeguards and conditions in relation to the use, in particular as regards the temporal, geographic and personal limitations.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1353 #
Proposal for a regulation
Article 5 – paragraph 2
2. The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement for any of the objectives referred to in paragraph 1 point d) shall take into account the following elements: (a) the nature of the situation giving rise to the possible use, in particular the seriousness, probability and scale of the harm caused in the absence of the use of the system; (b) the consequences of the use of the system for the rights and freedoms of all persons concerned, in particular the seriousness, probability and scale of those consequences. In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement for any of the objectives referred to in paragraph 1 point d) shall comply with necessary and proportionate safeguards and conditions in relation to the use, in particular as regards the temporal, geographic and personal limitations.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1354 #
Proposal for a regulation
Article 5 – paragraph 2
2. The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement for any of the objectives referred to in paragraph 1 point d) shall take into account the following elements: (a) the nature of the situation giving rise to the possible use, in particular the seriousness, probability and scale of the harm caused in the absence of the use of the system; (b) the consequences of the use of the system for the rights and freedoms of all persons concerned, in particular the seriousness, probability and scale of those consequences. In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement for any of the objectives referred to in paragraph 1 point d) shall comply with necessary and proportionate safeguards and conditions in relation to the use, in particular as regards the temporal, geographic and personal limitations.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1356 #
Proposal for a regulation
Article 5 – paragraph 2 – point a
(a) the nature of the situation giving rise to the possible use, in particular the seriousness, probability and scale of the harm caused in the absence of the use of the system;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1357 #
Proposal for a regulation
Article 5 – paragraph 2 – point a
(a) the nature of the situation giving rise to the possible use, in particular the seriousness, probability and scale of the harm caused in the absence of the use of the system;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1358 #
Proposal for a regulation
Article 5 – paragraph 2 – point b
(b) the consequences of the use of the system for the rights and freedoms of all persons concerned, in particular the seriousness, probability and scale of those consequences.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1359 #
Proposal for a regulation
Article 5 – paragraph 2 – point b
(b) the consequences of the use of the system for the rights and freedoms of all persons concerned, in particular the seriousness, probability and scale of those consequences.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1361 #
Proposal for a regulation
Article 5 – paragraph 2 – subparagraph 1
In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement for any of the objectives referred to in paragraph 1 point d) shall comply with necessary and proportionate safeguards and conditions in relation to the use, in particular as regards the temporal, geographic and personal limitations.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1362 #
Proposal for a regulation
Article 5 – paragraph 2 – subparagraph 1
In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement for any of the objectives referred to in paragraph 1 point d) shall comply with necessary and proportionate safeguards and conditions in relation to the use, in particular as regards the temporal, geographic and personal limitations.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1364 #
Proposal for a regulation
Article 5 – paragraph 3
3. As regards paragraphs 1, point (d) and 2, each individual use for the purpose of law enforcement of a ‘real-time’ remote biometric identification system in publicly accessible spaces shall be subject to a prior authorisation granted by a judicial authority or by an independent administrative authority of the Member State in which the use is to take place, issued upon a reasoned request and in accordance with the detailed rules of national law referred to in paragraph 4. However, in a duly justified situation of urgency, the use of the system may be commenced without an authorisation and the authorisation may be requested only during or after the use. The competent judicial or administrative authority shall only grant the authorisation where it is satisfied, based on objective evidence or clear indications presented to it, that the use of the ‘real- time’ remote biometric identification system at issue is necessary for and proportionate to achieving one of the objectives specified in paragraph 1, point (d), as identified in the request. In deciding on the request, the competent judicial or administrative authority shall take into account the elements referred to in paragraph 2.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1367 #
Proposal for a regulation
Article 5 – paragraph 3
3. As regards paragraphs 1, point (d) and 2, each individual use for the purpose of law enforcement of a ‘real-time’ remote biometric identification system in publicly accessible spaces shall be subject to a prior authorisation granted by a judicial authority or by an independent administrative authority of the Member State in which the use is to take place, issued upon a reasoned request and in accordance with the detailed rules of national law referred to in paragraph 4. However, in a duly justified situation of urgency, the use of the system may be commenced without an authorisation and the authorisation may be requested only during or after the use. The competent judicial or administrative authority shall only grant the authorisation where it is satisfied, based on objective evidence or clear indications presented to it, that the use of the ‘real- time’ remote biometric identification system at issue is necessary for and proportionate to achieving one of the objectives specified in paragraph 1, point (d), as identified in the request. In deciding on the request, the competent judicial or administrative authority shall take into account the elements referred to in paragraph 2.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1371 #
Proposal for a regulation
Article 5 – paragraph 3
3. As regards paragraphs 1, point (d) and 2, each individual use for the purpose of law enforcement of a ‘real-time’ remote biometric identification system in publicly accessible spaces shall be subject to a prior authorisation granted by a judicial authority or by an independent administrative authority of the Member State in which the use is to take place, issued upon a reasoned request and in accordance with the detailed rules of national law referred to in paragraph 4. However, in a duly justified situation of urgency, the use of the system may be commenced without an authorisation and the authorisation may be requested only during or after the use. The competent judicial or administrative authority shall only grant the authorisation where it is satisfied, based on objective evidence or clear indications presented to it, that the use of the ‘real- time’ remote biometric identification system at issue is necessary for and proportionate to achieving one of the objectives specified in paragraph 1, point (d), as identified in the request. In deciding on the request, the competent judicial or administrative authority shall take into account the elements referred to in paragraph 2.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1375 #
Proposal for a regulation
Article 5 – paragraph 3 – subparagraph 1
The competent judicial or administrative authority shall only grant the authorisation where it is satisfied, based on objective evidence or clear indications presented to it, that the use of the ‘real- time’ remote biometric identification system at issue is necessary for and proportionate to achieving one of the objectives specified in paragraph 1, point (d), as identified in the request. In deciding on the request, the competent judicial or administrative authority shall take into account the elements referred to in paragraph 2.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1376 #
Proposal for a regulation
Article 5 – paragraph 3 – subparagraph 1
The competent judicial or administrative authority shall only grant the authorisation where it is satisfied, based on objective evidence or clear indications presented to it, that the use of the ‘real- time’ remote biometric identification system at issue is necessary for and proportionate to achieving one of the objectives specified in paragraph 1, point (d), as identified in the request. In deciding on the request, the competent judicial or administrative authority shall take into account the elements referred to in paragraph 2.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1381 #
Proposal for a regulation
Article 5 – paragraph 4
4. A Member State may decide to provide for the possibility to fully or partially authorise the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement within the limits and under the conditions listed in paragraphs 1, point (d), 2 and 3. That Member State shall lay down in its national law the necessary detailed rules for the request, issuance and exercise of, as well as supervision relating to, the authorisations referred to in paragraph 3. Those rules shall also specify in respect of which of the objectives listed in paragraph 1, point (d), including which of the criminal offences referred to in point (iii) thereof, the competent authorities may be authorised to use those systems for the purpose of law enforcement.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1384 #
Proposal for a regulation
Article 5 – paragraph 4
4. A Member State may decide to provide for the possibility to fully or partially authorise the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement within the limits and under the conditions listed in paragraphs 1, point (d), 2 and 3. That Member State shall lay down in its national law the necessary detailed rules for the request, issuance and exercise of, as well as supervision relating to, the authorisations referred to in paragraph 3. Those rules shall also specify in respect of which of the objectives listed in paragraph 1, point (d), including which of the criminal offences referred to in point (iii) thereof, the competent authorities may be authorised to use those systems for the purpose of law enforcement.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1405 #
Proposal for a regulation
Article 5 a (new)
Article 5 a Amendments to Article 5 The Commission is empowered to adopt delegated acts in accordance with Article 73 to update the list of AI systems and practices prohibited under Article 5 of the present regulation, according to the latest development in technology and to the assessment of increased or newly emerged risks to fundamental rights.
2022/06/13
Committee: IMCOLIBE
Amendment 1468 #
Proposal for a regulation
Article 7 – paragraph 1 – introductory part
1. The Commission is empowered to adopt delegated acts in accordance with Article 73 to update the list in Annex III by adding new area headings and high-risk AI systems where both of the following conditions are fulfilled:
2022/06/13
Committee: IMCOLIBE
Amendment 1476 #
Proposal for a regulation
Article 7 – paragraph 1 – point a
(a) the AI systems are intended to be used in any of the areas listed in points 1 to 8 of Annex III or in the newly identified area headings;
2022/06/13
Committee: IMCOLIBE
Amendment 1659 #
Proposal for a regulation
Article 9 – paragraph 8
8. When implementing the risk management system described in paragraphs 1 to 7, specific consideration shall be given to whether the high-risk AI system is likely to be accessed by or have an impact on children or natural persons suffering from disabilities that render them legally unable to give their consent.
2022/06/13
Committee: IMCOLIBE
Amendment 1909 #
Proposal for a regulation
Article 16 a (new)
Article 16 a Obligations of users of high-risk AI systems Users of high-risk AI systems shall conduct and publish a fundamental rights impact assessment, detailing specific information relating to the context of use of the high-risk AI system in question, including: (a) the affected persons, (b) intended purpose, (c) geographic and temporal scope, (d) assessment of the legality and fundamental rights impacts of the system, (e) compatibility with accessibility legislation, (f) potential direct and indirect impact on fundamental rights, (g) any specific risk of harm likely to impact marginalised persons or those at risk of discrimination, (h) the foreseeable impact of the use of the system on the environment, (i) any other negative impact on the public interest, (j) clear steps as to how the harms identified will be mitigated and how effective this mitigation is likely to be.
2022/06/13
Committee: IMCOLIBE
Amendment 2039 #
Proposal for a regulation
Article 29 – paragraph 1
1. Users of high-risk AI systems shall use such systems and implement human oversight in accordance with the instructions of use accompanying the systems, pursuant to paragraphs 2 and 5 of this article.
2022/06/13
Committee: IMCOLIBE
Amendment 2040 #
Proposal for a regulation
Article 29 – paragraph 1
1. Users of high-risk AI systems shall use such systems and implement human oversight in accordance with the instructions of use accompanying the systems, pursuant to paragraphs 2 and 5.
2022/06/13
Committee: IMCOLIBE
Amendment 2044 #
Proposal for a regulation
Article 29 – paragraph 1 a (new)
1 a. Users shall assign human oversight to natural persons who have the necessary competence, training and authority.
2022/06/13
Committee: IMCOLIBE
Amendment 2268 #
Proposal for a regulation
Article 52 – paragraph 2
2. Users of an emotion recognition system or a biometric categorisation system shall inform of the operation of the system the natural persons exposed thereto. This obligation shall not apply to AI systems used for biometric categorisation, which are permitted by law to detect, prevent and investigate criminal offences.
2022/06/13
Committee: IMCOLIBE
Amendment 2271 #
Proposal for a regulation
Article 52 – paragraph 3 – introductory part
3. Users of an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful (‘deep fake’), shall disclose, in an appropriate, clear and visible manner, that the content has been artificially generated or manipulated.
2022/06/13
Committee: IMCOLIBE
Amendment 2278 #
Proposal for a regulation
Article 52 – paragraph 3 – subparagraph 1
However, the first subparagraph shall not apply where the content is part of an obviously artistic, creative or fictional cinematographic work or it is necessary for the exercise of the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU, and subject to appropriate safeguards for the rights and freedoms of third parties.
2022/06/13
Committee: IMCOLIBE
Amendment 2287 #
Proposal for a regulation
Title IV a (new)
Rights of affected persons Article 52 a 1. Natural persons have the right not to be subject to non-compliant AI systems. The placing on the market, putting into service or use of a non-compliant AI system gives rise to the right of the affected natural persons subject to such non-compliant AI systems to seek and receive redress. 2. Natural persons have the right to be informed about the use and functioning of AI systems they have been or may be exposed to, particularly in the case of high-risk and other regulated AI systems, according to Article 52. 3. Natural persons and public interest organisations have the right to lodge a complaint before the relevant national supervisory authorities against a producer or user of non-compliant AI systems where they consider that their rights or the rights of the natural persons they represent under the present Regulation have been violated, and have the right to receive an effective remedy.
2022/06/13
Committee: IMCOLIBE
Amendment 2400 #
Proposal for a regulation
Article 56 – paragraph 1
1. A ‘European Artificial Intelligence Board’ (the ‘Board’) is established as a body of the Union and shall have legal personality.
2022/06/13
Committee: IMCOLIBE
Amendment 2423 #
Proposal for a regulation
Article 56 c (new)
Article 56 c
Accountability, transparency and independence
1. The AI Office shall be accountable to the European Parliament and to the Council in accordance with this Regulation.
2. The AI Office shall develop good administrative practices in order to ensure the highest possible level of transparency concerning its activities. Regulation (EC) No 1049/2001 shall apply to documents held by the AI Office.
3. The AI Office shall fulfil its tasks in complete independence.
2022/06/13
Committee: IMCOLIBE
Amendment 2731 #
Proposal for a regulation
Article 65 – paragraph 6 – point b b (new)
(b b) non-compliance with provisions set out in Article 52;
2022/06/13
Committee: IMCOLIBE
Amendment 2739 #
Proposal for a regulation
Article 66 – paragraph 1
1. Where, within three months of receipt of the notification referred to in Article 65(5), or 30 days in the case of non-compliance with the prohibition of the artificial intelligence practices referred to in Article 5, objections are raised by a Member State against a measure taken by another Member State, or where the Commission considers the measure to be contrary to Union law, the Commission shall without delay enter into consultation with the relevant Member State’s market surveillance authority and operator or operators and shall evaluate the national measure. On the basis of the results of that evaluation, the Commission shall decide whether the national measure is justified or not within 9 months, or 60 days in the case of non-compliance with the prohibition of the artificial intelligence practices referred to in Article 5, starting from the notification referred to in Article 65(5) and notify such decision to the Member State concerned. The Commission shall also inform all other Member States of such decision.
2022/06/13
Committee: IMCOLIBE
Amendment 2772 #
Proposal for a regulation
Article 68 a (new)
Article 68 a
Right to lodge a complaint with a supervisory authority
1. Without prejudice to any other administrative or judicial remedy, every natural or legal person shall have the right to lodge a complaint with a supervisory authority, in particular in the Member State of his or her habitual residence, place of work or place of the alleged infringement, if the natural or legal person considers that their health, safety or fundamental rights have been breached by an AI system falling within the scope of this Regulation.
2. Natural or legal persons shall have a right to be heard in the complaint handling procedure and in the context of any investigations conducted by the national supervisory authority as a result of their complaint.
3. The national supervisory authority with which the complaint has been lodged shall inform the complainants about the progress and outcome of their complaint. In particular, the national supervisory authority shall take all the necessary actions to follow up on the complaints it receives and, within three months of receipt of a complaint, give the complainant a preliminary response indicating the measures it intends to take and the next steps in the procedure, if any.
4. The national supervisory authority shall take a decision on the complaint and inform the complainant of the progress and the outcome of the complaint, including the possibility of a judicial remedy pursuant to Article 68b, without delay and no later than six months after the date on which the complaint was lodged.
2022/06/13
Committee: IMCOLIBE
Amendment 2779 #
Proposal for a regulation
Article 68 b (new)
Article 68 b
Right to an effective judicial remedy against a national supervisory authority
1. Without prejudice to any other administrative or non-judicial remedy, each natural or legal person shall have the right to an effective judicial remedy against a legally binding decision of a national supervisory authority concerning them.
2. Without prejudice to any other administrative or non-judicial remedy, each data subject shall have the right to an effective judicial remedy where the national supervisory authority does not handle a complaint, does not inform the complainant of the progress or preliminary outcome of the complaint lodged within three months pursuant to Article 68a(3), or does not comply with its obligation to reach a final decision on the complaint within six months pursuant to Article 68a(4) or with its obligations under Article 65.
3. Proceedings against a national supervisory authority shall be brought before the courts of the Member State where the national supervisory authority is established.
2022/06/13
Committee: IMCOLIBE
Amendment 2986 #
Proposal for a regulation
Article 84 – paragraph 6
6. In carrying out the evaluations and reviews referred to in paragraphs 1 to 4 the Commission shall take into account the positions and findings of the Board, of the European Parliament, of the Council, and of other relevant bodies or sources, including stakeholders, and in particular civil society.
2022/06/13
Committee: IMCOLIBE
Amendment 2993 #
Proposal for a regulation
Article 84 – paragraph 7
7. The Commission shall, if necessary, submit appropriate proposals to amend this Regulation, in particular taking into account developments in technology and new potential or realised risks to fundamental rights, and in the light of the state of progress in the information society.
2022/06/13
Committee: IMCOLIBE
Amendment 3066 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a a (new)
(a a) AI systems intended to be used to make inferences on the basis of biometric data, including emotion recognition systems, or biometrics-based data, including speech patterns, tone of voice, lip-reading and body language analysis, that produce legal effects or affect the rights and freedoms of natural persons.
2022/06/13
Committee: IMCOLIBE
Amendment 3108 #
Proposal for a regulation
Annex III – paragraph 1 – point 4 – point a
(a) AI systems intended to make autonomous decisions or materially influence decisions about recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, evaluating candidates in the course of interviews or tests;
2022/06/13
Committee: IMCOLIBE
Amendment 3118 #
Proposal for a regulation
Annex III – paragraph 1 – point 4 – point b
(b) AI intended to make autonomous decisions or materially influence decisions on promotion and termination of work-related contractual relationships, for task allocation and for monitoring and evaluating performance and behaviour of persons in such relationships.
2022/06/13
Committee: IMCOLIBE
Amendment 3164 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point b
(b) AI systems intended to be used by law enforcement authorities or on their behalf as polygraphs and similar tools or to detect the emotional state of a natural person;
2022/06/13
Committee: IMCOLIBE
Amendment 3166 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point c
(c) AI systems intended to be used by law enforcement authorities or on behalf of law enforcement authorities to detect deep fakes as referred to in Article 52(3);
2022/06/13
Committee: IMCOLIBE
Amendment 3168 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point c
(c) AI systems intended to be used by law enforcement authorities or on their behalf to detect deep fakes as referred to in Article 52(3);
2022/06/13
Committee: IMCOLIBE
Amendment 3169 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point d
(d) AI systems intended to be used by law enforcement authorities or on behalf of law enforcement authorities for evaluation of the reliability of evidence in the course of investigation or prosecution of criminal offences;
2022/06/13
Committee: IMCOLIBE
Amendment 3172 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point d
(d) AI systems intended to be used by law enforcement authorities or on their behalf for evaluation of the reliability of evidence in the course of investigation or prosecution of criminal offences;
2022/06/13
Committee: IMCOLIBE
Amendment 3177 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point e
(e) AI systems intended to be used by law enforcement authorities for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups; (deleted)
2022/06/13
Committee: IMCOLIBE
Amendment 3182 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point f
(f) AI systems intended to be used by law enforcement authorities or on behalf of law enforcement authorities for profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 in the course of detection, investigation or prosecution of criminal offences;
2022/06/13
Committee: IMCOLIBE
Amendment 3184 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point f
(f) AI systems intended to be used by law enforcement authorities or on their behalf for profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 in the course of detection, investigation or prosecution of criminal offences;
2022/06/13
Committee: IMCOLIBE
Amendment 3188 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point g
(g) AI systems intended to be used by law enforcement authorities or on their behalf for crime analytics regarding natural persons, allowing law enforcement authorities to search complex related and unrelated large data sets available in different data sources or in different data formats in order to identify unknown patterns or discover hidden relationships in the data.
2022/06/13
Committee: IMCOLIBE
Amendment 3195 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point a
(a) AI systems intended to be used by competent public authorities or on their behalf as polygraphs and similar tools or to detect the emotional state of a natural person;
2022/06/13
Committee: IMCOLIBE
Amendment 3196 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point a
(a) AI systems intended to be used by competent public authorities or on their behalf as polygraphs and similar tools or to detect the emotional state of a natural person;
2022/06/13
Committee: IMCOLIBE
Amendment 3203 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point b
(b) AI systems intended to be used by competent public authorities or by third parties acting on their behalf to assess a risk, including but not limited to a security risk, a risk of irregular immigration, or a health risk, posed by a natural person who intends to enter or has entered into the territory of a Member State;
2022/06/13
Committee: IMCOLIBE
Amendment 3204 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point b
(b) AI systems intended to be used by competent public authorities or on their behalf to assess a risk, including a security risk, a risk of irregular immigration, or a health risk, posed by a natural person who intends to enter or has entered into the territory of a Member State;
2022/06/13
Committee: IMCOLIBE
Amendment 3205 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point b
(b) AI systems intended to be used by competent public authorities or on their behalf to assess a risk, including a security risk, a risk of irregular immigration, or a health risk, posed by a natural person who intends to enter or has entered into the territory of a Member State;
2022/06/13
Committee: IMCOLIBE
Amendment 3207 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point c
(c) AI systems intended to be used by competent public authorities or on their behalf for the verification of the authenticity of travel documents and supporting documentation of natural persons and detect non-authentic documents by checking their security features;
2022/06/13
Committee: IMCOLIBE
Amendment 3208 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point c
(c) AI systems intended to be used by competent public authorities or on their behalf for the verification of the authenticity of travel documents and supporting documentation of natural persons and detect non-authentic documents by checking their security features;
2022/06/13
Committee: IMCOLIBE
Amendment 3211 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point d
(d) AI systems intended to assist competent public authorities for the examination and assessment of the veracity of evidence and claims in relation to applications for asylum, visa and residence permits and associated complaints with regard to the eligibility of the natural persons applying for a status.
2022/06/13
Committee: IMCOLIBE
Amendment 3213 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point d
(d) AI systems intended to be used by competent public authorities or on their behalf or to assist competent public authorities in the examination of applications for asylum, visa and residence permits and associated complaints with regard to the eligibility of the natural persons applying for a status.
2022/06/13
Committee: IMCOLIBE
Amendment 3214 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point d
(d) AI systems intended to assist competent public authorities or on their behalf for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the eligibility of the natural persons applying for a status.
2022/06/13
Committee: IMCOLIBE
Amendment 3220 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point d a (new)
(d a) AI systems intended to be used by or on behalf of competent authorities in migration, asylum and border control management for the forecasting or prediction of trends related to migration, movement and border crossings;
2022/06/13
Committee: IMCOLIBE
Amendment 3224 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point d b (new)
(d b) AI systems that are or may be used by or on behalf of competent authorities in law enforcement, migration, asylum and border control management for the biometric identification of natural persons;
2022/06/13
Committee: IMCOLIBE
Amendment 3226 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point d c (new)
(d c) AI systems intended to be used by or on behalf of competent authorities in migration, asylum and border control management to monitor, surveil or process data in the context of border management activities for the purpose of recognising or detecting objects and natural persons;
2022/06/13
Committee: IMCOLIBE
Amendment 3230 #
Proposal for a regulation
Annex III – paragraph 1 – point 8 – point a
(a) AI systems intended to be used by a judicial authority or administrative body or on their behalf or to assist a judicial authority or administrative body in researching and interpreting facts or the law and in applying the law to a concrete set of facts.
2022/06/13
Committee: IMCOLIBE
Amendment 3233 #
Proposal for a regulation
Annex III – paragraph 1 – point 8 – point a
(a) AI systems intended to be used by judicial authorities or on their behalf in interpreting facts or the law and for applying the law to a concrete set of facts.
2022/06/13
Committee: IMCOLIBE
Amendment 3235 #
Proposal for a regulation
Annex III – paragraph 1 – point 8 – point a a (new)
(a a) AI systems used by political parties, political candidates, public authorities, or on their behalf for influencing natural persons in the exercise of their vote in local, national, or European Parliament elections;
2022/06/13
Committee: IMCOLIBE
Amendment 3241 #
Proposal for a regulation
Annex III – paragraph 1 – point 8 a (new)
8 a. Others
(a) AI systems intended to be used for the delivery of online advertising to internet users;
2022/06/13
Committee: IMCOLIBE