42 Amendments of Peter POLLÁK related to 2021/0106(COD)
Amendment 64 #
Proposal for a regulation
Recital 3
(3) Artificial intelligence is a fast evolving family of technologies that can contribute to a wide array of economic and societal benefits across the entire spectrum of industries and social activities. By improving prediction, optimising operations and resource allocation, and personalising digital solutions available for individuals and organisations, the use of artificial intelligence can provide key competitive advantages to companies and support socially and environmentally beneficial outcomes, for example in healthcare, farming, education and training, media, mobility, infrastructure management, energy, transport and logistics, public services, security, justice, resource and energy efficiency, and climate change mitigation and adaptation.
Amendment 68 #
Proposal for a regulation
Recital 4
(4) At the same time, depending on the circumstances regarding its specific application and use, artificial intelligence may generate risks and cause harm to public interests, private data and rights that are protected by Union law. Such harm might be material or immaterial.
Amendment 70 #
Proposal for a regulation
Recital 5
(5) A Union legal framework laying down harmonised rules on artificial intelligence is therefore needed to foster the development, use and uptake of artificial intelligence in the internal market that at the same time meets a high level of protection of public interests, such as health and safety and the protection of fundamental rights, as recognised and protected by Union law. To achieve that objective, rules regulating the placing on the market and putting into service of certain AI systems should be laid down, thus ensuring the smooth functioning of the internal market and allowing those systems to benefit from the principle of free movement of goods and services. By laying down those rules, this Regulation supports the objective of the Union of being a global leader in the development of secure, trustworthy and ethical artificial intelligence, as stated by the European Council33, and it ensures the protection of ethical principles, as specifically requested by the European Parliament34, with a human-centric approach and in compliance with freedom of expression, freedom of speech, media freedom, pluralism and cultural diversity.
_________________
33 European Council, Special meeting of the European Council (1 and 2 October 2020) – Conclusions, EUCO 13/20, 2020, p. 6.
34 European Parliament resolution of 20 October 2020 with recommendations to the Commission on a framework of ethical aspects of artificial intelligence, robotics and related technologies, 2020/2012(INL).
Amendment 79 #
Proposal for a regulation
Recital 9
(9) For the purposes of this Regulation the notion of publicly accessible space should be understood as referring to any physical place that is accessible to the public, irrespective of whether the place in question is privately or publicly owned. Therefore, the notion does not cover places that are private in nature and normally not freely accessible for third parties, including law enforcement authorities, unless those parties have been specifically invited or authorised, such as homes, private clubs, offices, warehouses and factories. Online spaces are not covered either, as they are not physical spaces. However, the mere fact that certain conditions for accessing a particular space may apply, such as admission tickets or age restrictions, does not mean that the space is not publicly accessible within the meaning of this Regulation. Consequently, in addition to public spaces such as streets, relevant parts of government buildings and most transport infrastructure, spaces such as cinemas, theatres, shops, museums, monuments, cultural places, cultural institutions and shopping centres are normally also publicly accessible. Whether a given space is accessible to the public should however be determined on a case-by-case basis, having regard to the specificities of the individual situation at hand.
Amendment 114 #
Proposal for a regulation
Recital 35
(35) AI systems used in education or vocational training, notably for determining access or assigning persons to educational and vocational training institutions, for evaluating persons on tests as part of or as a precondition for their education, or for determining the course of study a student should follow, should be considered high-risk, since they may determine the educational and professional course of a person’s life and therefore affect their ability to secure their livelihood. When improperly designed and used, such systems may violate the right to education and training as well as the right not to be discriminated against, and may perpetuate historical patterns of discrimination. AI systems used to monitor students’ behaviour and emotions during tests at education and training institutions should be considered high-risk, since they also interfere with students’ rights to privacy and data protection. The use of AI to check for fraud in tests or exams, such as plagiarism, should not be considered high-risk.
Amendment 130 #
Proposal for a regulation
Recital 70
(70) Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems. In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use, or where the content evidently forms part of a creative, artistic or fictional cinematographic work. Moreover, natural persons should be notified when they are exposed to an emotion recognition system or a biometric categorisation system. Such information and notifications should be provided in accessible formats for persons with disabilities or other vulnerabilities. Further, users who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a person to be authentic should disclose in a clear manner that the content has been artificially created or manipulated, by labelling the artificial intelligence output accordingly and disclosing its artificial origin.
Amendment 177 #
Proposal for a regulation
Article 5 – paragraph 1 – point b
(b) the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of children or a specific group of persons due to their age, physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
Amendment 239 #
Proposal for a regulation
Article 52 – paragraph 3 – introductory part
3. Users of an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful (‘deep fake’), shall disclose in an appropriate, clear, repetitive and visible manner that the content has been artificially generated or manipulated.
Amendment 241 #
Proposal for a regulation
Article 52 – paragraph 3 – subparagraph 1
However, the first subparagraph shall not apply where the use is authorised by law to detect, prevent, investigate and prosecute criminal offences, or where the content forms part of an evidently artistic, creative or fictional cinematographic or analogous work, or where it is necessary for the exercise of the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU, and subject to appropriate safeguards for the rights and freedoms of third parties.
Amendment 413 #
Proposal for a regulation
Recital 14
(14) In order to introduce a proportionate and effective set of binding rules for AI systems, a clearly defined risk-based approach should be followed. That approach should tailor the type and content of such rules to the intensity and scope of the risks that AI systems can generate for individuals and society, rather than depend on the type of technology. It is therefore necessary to prohibit certain artificial intelligence practices, to lay down requirements for high-risk AI systems and obligations for the relevant operators, and to lay down transparency obligations for certain AI systems.
Amendment 441 #
Proposal for a regulation
Recital 17 a (new)
(17 a) AI systems used in law enforcement and criminal justice contexts based on predictive methods, profiling and risk assessment pose an unacceptable risk to fundamental rights and in particular to the right to non-discrimination, insofar as they contradict the fundamental right to be presumed innocent and are reflective of historical, systemic, institutional and societal discrimination and other discriminatory practices. These AI systems should therefore be prohibited;
Amendment 454 #
Proposal for a regulation
Recital 18
(18) The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement is considered particularly intrusive in the rights and freedoms of the concerned persons, to the extent that it may affect the private life of a large part of the population, evoke a feeling of constant surveillance and indirectly dissuade the exercise of the freedom of assembly and other fundamental rights. In addition, the immediacy of the impact and the limited opportunities for further checks or corrections in relation to the use of such systems operating in ‘real-time’ carry heightened risks for the rights and freedoms of the persons that are concerned by law enforcement activities.
Amendment 465 #
Proposal for a regulation
Recital 19
Amendment 474 #
Proposal for a regulation
Recital 20
Amendment 487 #
Amendment 495 #
Proposal for a regulation
Recital 22
Amendment 498 #
Proposal for a regulation
Recital 23
Amendment 508 #
Proposal for a regulation
Recital 24
Amendment 592 #
Proposal for a regulation
Recital 39 a (new)
(39 a) AI systems in migration, asylum and border control management should in no circumstances be used by Member States or European Union institutions as a means to circumvent their international obligations under the Convention of 28 July 1951 relating to the Status of Refugees, as amended by the Protocol of 31 January 1967, nor should they be used in any way to infringe on the principle of non-refoulement, or to deny safe and effective legal avenues into the territory of the Union, including the right to international protection;
Amendment 1239 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – introductory part
(d) the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless and in as far as such use is strictly necessary for one of the following objectives:
Amendment 1250 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point i
Amendment 1261 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point ii
Amendment 1269 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point iii
Amendment 1290 #
Proposal for a regulation
Article 5 – paragraph 1 – point d a (new)
(d a) The use of predictive, profiling and risk assessment AI systems in law enforcement and criminal justice;
Amendment 1292 #
Proposal for a regulation
Article 5 – paragraph 1 – point d b (new)
(d b) The use of predictive, profiling and risk assessment AI systems by or on behalf of competent authorities in migration, asylum or border control management, to profile an individual or assess a risk, including a security risk, a risk of irregular immigration, or a health risk, posed by a natural person who intends to enter or has entered the territory of a Member State, on the basis of personal or sensitive data, known or predicted, except for the sole purpose of identifying specific care and support needs;
Amendment 1303 #
Proposal for a regulation
Article 5 – paragraph 1 – point d c (new)
(d c) the placing on the market, putting into service, or use of AI systems by law enforcement authorities or by competent authorities in migration, asylum and border control management, such as polygraphs and similar tools to detect deception, trustworthiness or related characteristics;
Amendment 1308 #
Proposal for a regulation
Article 5 – paragraph 1 – point d d (new)
(d d) the use of AI systems by or on behalf of competent authorities in migration, asylum and border control management to forecast or predict individual or collective movement for the purpose of, or in any way reasonably foreseeably leading to, the interdiction, curtailment or prevention of migration or border crossings;
Amendment 1353 #
Proposal for a regulation
Article 5 – paragraph 2
Amendment 1371 #
Proposal for a regulation
Article 5 – paragraph 3
Amendment 1384 #
Proposal for a regulation
Article 5 – paragraph 4
Amendment 1405 #
Proposal for a regulation
Article 5 a (new)
Article 5 a
Amendments to Article 5
The Commission is empowered to adopt delegated acts in accordance with Article 73 to update the list of AI systems and practices prohibited under Article 5 of the present Regulation, according to the latest developments in technology and to the assessment of increased or newly emerged risks to fundamental rights.
Amendment 1468 #
Proposal for a regulation
Article 7 – paragraph 1 – introductory part
1. The Commission is empowered to adopt delegated acts in accordance with Article 73 to update the list in Annex III by adding new area headings and high-risk AI systems where both of the following conditions are fulfilled:
Amendment 1476 #
Proposal for a regulation
Article 7 – paragraph 1 – point a
(a) the AI systems are intended to be used in any of the areas listed in points 1 to 8 of Annex III or in the newly identified area headings;
Amendment 1909 #
Proposal for a regulation
Article 16 a (new)
Article 16 a
Obligations of users of high-risk AI systems
Users of high-risk AI systems shall conduct and publish a fundamental rights impact assessment, detailing specific information relating to the context of use of the high-risk AI system in question, including:
(a) the affected persons,
(b) intended purpose,
(c) geographic and temporal scope,
(d) assessment of the legality and fundamental rights impacts of the system,
(e) compatibility with accessibility legislation,
(f) potential direct and indirect impact on fundamental rights,
(g) any specific risk of harm likely to impact marginalised persons or those at risk of discrimination,
(h) the foreseeable impact of the use of the system on the environment,
(i) any other negative impact on the public interest,
(j) clear steps as to how the harms identified will be mitigated and how effective this mitigation is likely to be.
Amendment 2287 #
Proposal for a regulation
Title IV a (new)
Rights of affected persons
Article 52 a
1. Natural persons have the right not to be subject to non-compliant AI systems. The placing on the market, putting into service or use of a non-compliant AI system gives rise to the right of the affected natural persons subject to such non-compliant AI systems to seek and receive redress.
2. Natural persons have the right to be informed about the use and functioning of AI systems they have been or may be exposed to, particularly in the case of high-risk and other regulated AI systems, in accordance with Article 52.
3. Natural persons and public interest organisations have the right to lodge a complaint before the relevant national supervisory authorities against a producer or user of non-compliant AI systems where they consider that their rights, or the rights of the natural persons they represent, under the present Regulation have been violated, and have the right to receive an effective remedy.
Amendment 2986 #
Proposal for a regulation
Article 84 – paragraph 6
6. In carrying out the evaluations and reviews referred to in paragraphs 1 to 4 the Commission shall take into account the positions and findings of the Board, of the European Parliament, of the Council, and of other relevant bodies or sources, including stakeholders, and in particular civil society.
Amendment 2993 #
Proposal for a regulation
Article 84 – paragraph 7
7. The Commission shall, if necessary, submit appropriate proposals to amend this Regulation, in particular taking into account developments in technology and new potential or realised risks to fundamental rights, and in the light of the state of progress in the information society.
Amendment 3203 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point b
(b) AI systems intended to be used by competent public authorities or by third parties acting on their behalf to assess a risk, including but not limited to a security risk, a risk of irregular immigration, or a health risk, posed by a natural person who intends to enter or has entered into the territory of a Member State;
Amendment 3211 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point d
(d) AI systems intended to assist competent public authorities in the examination and assessment of the veracity of evidence and claims in relation to applications for asylum, visa and residence permits and associated complaints, with regard to the eligibility of the natural persons applying for a status.
Amendment 3220 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point d a (new)
(d a) AI systems intended to be used by or on behalf of competent authorities in migration, asylum and border control management for the forecasting or prediction of trends related to migration, movement and border crossings;
Amendment 3224 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point d b (new)
(d b) AI systems that are or may be used by or on behalf of competent authorities in law enforcement, migration, asylum and border control management for the biometric identification of natural persons;
Amendment 3226 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point d c (new)
(d c) AI systems intended to be used by or on behalf of competent authorities in migration, asylum and border control management to monitor, surveil or process data in the context of border management activities for the purpose of recognising or detecting objects and natural persons;