368 Amendments of Kateřina KONEČNÁ related to 2021/0106(COD)
Amendment 80 #
Proposal for a regulation
Recital 12 a (new)
(12 a) In order to ensure a minimum level of transparency on the ecological sustainability aspects of an AI system, providers and users should document, for any AI system, parameters including but not limited to resource consumption resulting from the design, data management and training, and from the underlying infrastructures of the AI system, as well as the methods to reduce such impact.
Amendment 83 #
Proposal for a regulation
Recital 13
(13) In order to ensure a consistent and high level of protection of public interests as regards health, safety and fundamental rights, common normative standards for all high-risk AI systems should be established. Those standards should be consistent with the Charter of fundamental rights of the European Union (the Charter) and should be non-discriminatory and in line with the Union’s international trade commitments.
Amendment 93 #
Proposal for a regulation
Recital 27
(27) High-risk AI systems should only be placed on the Union market or put into service if they comply with certain mandatory requirements. Those requirements should ensure that high-risk AI systems available in the Union or whose output is otherwise used in the Union do not pose unacceptable risks to important Union public interests as recognised and protected by Union law. AI systems identified as high-risk should be limited to those that have a significant harmful impact on the health, safety and fundamental rights of persons in the Union and such limitation minimises any potential restriction to international trade, if any.
Amendment 117 #
Proposal for a regulation
Article 1 – paragraph 1 – point c
(c) specific requirements for high-risk AI systems and obligations for operators of such systems;
Amendment 137 #
Proposal for a regulation
Article 3 – paragraph 1 – point 12
(12) ‘foreseeable use’ means the use that can reasonably be expected to be made of an AI system, including but not limited to the use for which the AI system is intended for consumers or the likely use by consumers under reasonably foreseeable conditions;
Amendment 163 #
Proposal for a regulation
Article 6 – paragraph 1 – point a
(a) the AI system is intended to be used as a safety component of a product, or is itself a product, the failure or malfunctioning of which endangers the health, safety or fundamental rights of persons or of property, covered by the Union harmonisation legislation listed in Annex II;
Amendment 166 #
Proposal for a regulation
Article 6 – paragraph 2 a (new)
2 a. In addition to the high-risk AI systems referred to in paragraphs 1 and 2, AI systems that have over 20 million EU citizens or 50% of any given Member State’s population as active monthly users, or whose users cumulatively have over 20 million customers or beneficiaries in the EU affected by them, shall be considered high-risk, unless they are placed on the market or put into service by a public authority.
Amendment 169 #
Proposal for a regulation
Article 6 – paragraph 2 b (new)
2 b. In addition to the high-risk AI systems referred to in paragraph 1, paragraph 2 and paragraph 3, AI systems that create foreseeable high risks when combined shall also be considered high-risk.
Amendment 171 #
Proposal for a regulation
Article 7 – paragraph 1 – introductory part
1. The Commission is empowered to adopt delegated acts in accordance with Article 73 to update the lists in Annexes II and III by adding high-risk AI systems.
Amendment 172 #
Proposal for a regulation
Article 7 – paragraph 1 – point a
deleted
Amendment 173 #
Proposal for a regulation
Article 7 – paragraph 1 – point b
deleted
Amendment 176 #
Proposal for a regulation
Article 7 – paragraph 2 – introductory part
2. When assessing for the purposes of paragraph 1 whether an AI system poses a risk of harm to the health and safety or a risk of adverse impact on fundamental rights or on the environment that is equivalent to or greater than the risk of harm posed by the high-risk AI systems already referred to in Annex III, the Commission shall take into account criteria including, but not limited to, the following:
Amendment 178 #
Proposal for a regulation
Article 7 – paragraph 2 – point a
(a) the intended purpose or reasonably foreseeable use of the AI system;
Amendment 179 #
Proposal for a regulation
Article 7 – paragraph 2 – point c
(c) the extent to which the use of an AI system has already caused harm to the health and safety or adverse impact on the fundamental rights or on the environment or has given rise to significant concerns in relation to the materialisation of such harm or adverse impact, as demonstrated by reports or documented allegations submitted to national competent authorities;
Amendment 181 #
Proposal for a regulation
Article 7 – paragraph 2 – point d
(d) the potential extent of such harm or such adverse impact, in particular in terms of its intensity and its ability to affect a plurality of persons or the environment;
Amendment 185 #
Proposal for a regulation
Article 8 – paragraph 2
2. The intended purpose of the high- risk AI system, the foreseeable uses and foreseeable misuses of AI systems within determinate uses and the risk management system referred to in Article 9 shall be taken into account when ensuring compliance with those requirements.
Amendment 193 #
Proposal for a regulation
Article 9 – paragraph 5
5. High-risk AI systems shall be tested for the purposes of identifying the most appropriate risk management measures. Testing shall ensure that high-risk AI systems perform consistently for their foreseeable use and that they are in compliance with the requirements set out in this Chapter.
Amendment 194 #
Proposal for a regulation
Article 9 – paragraph 6
6. Testing procedures shall be suitable to achieve the foreseeable use of the AI system and do not need to go beyond what is necessary to achieve that purpose.
Amendment 195 #
Proposal for a regulation
Article 9 – paragraph 7
7. The testing of the high-risk AI systems shall be performed, as appropriate, at any point in time throughout the development process, and, in any event, prior to the placing on the market or the putting into service. Testing shall be made against preliminarily defined metrics and probabilistic thresholds that are appropriate to the foreseeable use of the high-risk AI system.
Amendment 206 #
Proposal for a regulation
Article 10 – paragraph 4
4. Training, validation and testing data sets shall take into account, to the extent required by the foreseeable use, the characteristics or elements that are particular to the specific geographical, behavioural or functional setting within which the high-risk AI system is intended to be used.
Amendment 231 #
Proposal for a regulation
Article 29 – paragraph 6 a (new)
6 a. Users of high-risk AI systems who modify or extend the purpose for which the conformity of the AI system was originally assessed shall establish and document a post-market monitoring system (Art. 61) and shall undergo a new conformity assessment (Art. 43) involving a notified body.
Amendment 241 #
Proposal for a regulation
Article 43 – paragraph 4 – subparagraph 1 a (new)
A new conformity assessment is always required whenever the safety-related limits of continuously learning high-risk AI systems may be exceeded or may have an impact on health or safety.
Amendment 243 #
Proposal for a regulation
Article 52 – title
Transparency obligations for certain AI systems
Amendment 247 #
Proposal for a regulation
Article 52 – paragraph 3 a (new)
3 a. Providers of any AI system should document and make available upon request the parameters regarding the environmental impact, including but not limited to resource consumption resulting from the design, data management and training, and from the underlying infrastructures of the AI system, as well as the methods to reduce such impact.
Amendment 248 #
Proposal for a regulation
Article 52 – paragraph 4
4. Paragraphs 1, 2, 3 and 3 a shall not affect the requirements and obligations set out in Title III of this Regulation.
Amendment 274 #
Proposal for a regulation
Article 61 – paragraph 2
2. The post-market monitoring system shall actively and systematically collect, document and analyse relevant data provided by users or collected through other sources on the performance of high-risk AI systems throughout their lifetime, and allow the provider to evaluate the continuous compliance of AI systems with the requirements set out in Title III, Chapter 2. Post-market monitoring must include continuous analysis of the AI environment, including other devices, software, and other AI systems that will interact with the AI system.
Amendment 304 #
Proposal for a regulation
Annex III – paragraph 1 – point 2 – point a
(a) AI systems intended to be used as a component, the failure or malfunctioning of which endangers the health, safety or fundamental rights of persons or the safety of property, in the management, operation, generation and/or supply of the telecom, internet, and financial infrastructure, road, rail, air and water traffic, and the operation, management and/or supply of water, gas, heating, and electricity and energy (including nuclear power).
Amendment 318 #
Proposal for a regulation
Recital 1
(1) The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, marketing and use of artificial intelligence in conformity with Union values. This Regulation pursues a number of overriding reasons of public interest, such as a high level of protection of health, safety, environment and fundamental rights, as well as consumer protection, and it ensures the free movement of AI-based goods and services cross-border, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation.
Amendment 322 #
Proposal for a regulation
Recital 2
(2) Artificial intelligence systems (AI systems) can be easily deployed in multiple sectors of the economy and society, including cross border, and circulate throughout the Union. Certain Member States have already explored the adoption of national rules to ensure that artificial intelligence is safe and is developed and used in compliance with fundamental rights obligations. Differing national rules may lead to fragmentation of the internal market and decrease legal certainty for operators that develop or use AI systems. A consistent and high level of protection throughout the Union should therefore be ensured, while divergences hampering the free circulation of AI systems and related products and services within the internal market should be prevented, by laying down uniform obligations for operators and guaranteeing the uniform protection of overriding reasons of public interest and of rights of persons throughout the internal market based on Article 114 of the Treaty on the Functioning of the European Union (TFEU). To the extent that this Regulation contains specific rules on the protection of individuals with regard to the processing of personal data concerning restrictions of the use of AI systems for ‘real-time’ remote biometric identification in publicly accessible and online spaces for the purpose of law enforcement, it is appropriate to base this Regulation, in as far as those specific rules are concerned, on Article 16 of the TFEU. In light of those specific rules and the recourse to Article 16 TFEU, it is appropriate to consult the European Data Protection Board.
Amendment 330 #
Proposal for a regulation
Recital 3 a (new)
(3 a) To ensure that Artificial Intelligence leads to socially and environmentally beneficial outcomes, Member States should support such measures through allocating sufficient resources, including public funding, and giving priority access to regulatory sandboxes to projects led by civil society and social stakeholders. Such projects should be based on the principle of interdisciplinary cooperation between AI developers, experts in equality and non-discrimination, accessibility, and consumer, environmental, and digital rights, and the academic community.
Amendment 332 #
Proposal for a regulation
Recital 3 a (new)
(3 a) To ensure that Artificial Intelligence leads to socially and environmentally beneficial outcomes, Member States should support such measures through allocating sufficient resources, including public funding, and giving priority access to regulatory sandboxes to projects led by civil society and social stakeholders. Such projects should be based on the principle of interdisciplinary cooperation between AI developers, experts in equality and non-discrimination, accessibility, and consumer, environmental, and digital rights, and the academic community.
Amendment 333 #
Proposal for a regulation
Recital 3 b (new)
(3 b) Furthermore, in order for Member States to fight against climate change, to achieve climate neutrality and to meet the Sustainable Development Goals (SDGs), European companies should ensure the sustainable design of AI systems to reduce resource usage and energy consumption, thereby limiting the risks to the environment. AI systems have the potential to automatically provide businesses with detailed insight into their emissions, including value chains, and to forecast future emissions, thus helping to adjust and achieve the Union’s emission targets.
Amendment 336 #
Proposal for a regulation
Recital 4
(4) At the same time, depending on the circumstances regarding its specific application and use, artificial intelligence may generate risks and cause harm to public interests and rights that are protected by Union law. Such harm might be material or immaterial and might affect one or more persons, a group of persons or society as a whole, as well as the environment.
Amendment 340 #
Proposal for a regulation
Recital 4 a (new)
(4 a) The concept of decision autonomy for machines is, at its core, in conflict with fundamental notions of our societies, such as human dignity, autonomy, and the rights to private life and the protection of personal data. This Regulation should reconcile the potential benefits to society offered by AI with the primacy of humans over machines.
Amendment 351 #
Proposal for a regulation
Recital 5
(5) A Union legal framework laying down harmonised rules on artificial intelligence is therefore needed to foster the development, use and uptake of artificial intelligence in the internal market that at the same time meets a high level of protection of public interests, such as health and safety, the environment and the protection of fundamental rights and values, as recognised and protected by Union law. To achieve that objective, rules regulating the placing on the market and putting into service of certain AI systems should be laid down, thus ensuring the smooth functioning of the internal market and allowing those systems to benefit from the principle of free movement of goods and services. By laying down those rules, this Regulation supports the objective of the Union of being a global leader in the development of secure, trustworthy and ethical artificial intelligence, as stated by the European Council33 , and it ensures the protection of ethical principles, as specifically requested by the European Parliament34 . _________________ 33 European Council, Special meeting of the European Council (1 and 2 October 2020) – Conclusions, EUCO 13/20, 2020, p. 6. 34 European Parliament resolution of 20 October 2020 with recommendations to the Commission on a framework of ethical aspects of artificial intelligence, robotics and related technologies, 2020/2012(INL).
Amendment 367 #
Proposal for a regulation
Recital 7
(7) The notion of biometric data used in this Regulation is in line with and should be interpreted consistently with the notion of biometric data as defined in Article 4(14) of Regulation (EU) 2016/679 of the European Parliament and of the Council35 , Article 3(18) of Regulation (EU) 2018/1725 of the European Parliament and of the Council36 and Article 3(13) of Directive (EU) 2016/680 of the European Parliament and of the Council37 . An additional definition has been added for ‘biometrics-based data’ to cover physical, physiological or behavioural data that may not meet the criteria to be defined as biometric data (i.e. would not allow or confirm the unique identification of a natural person) but which may be used for purposes such as emotion recognition or biometric categorisation. The addition of this definition does not narrow the scope of, nor exclude anything from, the definition of biometric data, but rather provides for a comprehensive scope for additional forms of data which may be used for purposes such as biometric categorisation but which would not allow or confirm unique identification. _________________ 35 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (OJ L 119, 4.5.2016, p. 1). 36 Regulation (EU) 2018/1725 of the European Parliament and of the Council of 23 October 2018 on the protection of natural persons with regard to the processing of personal data by the Union institutions, bodies, offices and agencies and on the free movement of such data, and repealing Regulation (EC) No 45/2001 and Decision No 1247/2002/EC (OJ L 295, 21.11.2018, p. 
39) 37 Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA (Law Enforcement Directive) (OJ L 119, 4.5.2016, p. 89).
Amendment 368 #
Proposal for a regulation
Recital 7
(7) The notion of biometric data used in this Regulation is in line with and should be interpreted consistently with the notion of biometric data as defined in Article 4(14) of Regulation (EU) 2016/679 of the European Parliament and of the Council35 , Article 3(18) of Regulation (EU) 2018/1725 of the European Parliament and of the Council36 and Article 3(13) of Directive (EU) 2016/680 of the European Parliament and of the Council37 . An additional definition has been added for ‘biometrics-based data’ to cover physical, physiological or behavioural data that may not meet the criteria to be defined as biometric data (i.e. would not allow or confirm the unique identification of a natural person) but which may be used for purposes such as emotion recognition or biometric categorisation. The addition of this definition does not narrow the scope of, nor exclude anything from, the definition of biometric data, but rather provides for a comprehensive scope for additional forms of data which may be used for purposes such as biometric categorisation but which would not allow or confirm unique identification. _________________ 35 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (OJ L 119, 4.5.2016, p. 1). 36 Regulation (EU) 2018/1725 of the European Parliament and of the Council of 23 October 2018 on the protection of natural persons with regard to the processing of personal data by the Union institutions, bodies, offices and agencies and on the free movement of such data, and repealing Regulation (EC) No 45/2001 and Decision No 1247/2002/EC (OJ L 295, 21.11.2018, p. 
39) 37 Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA (Law Enforcement Directive) (OJ L 119, 4.5.2016, p. 89).
Amendment 384 #
Proposal for a regulation
Recital 9
(9) For the purposes of this Regulation the notion of publicly accessible space should be understood as referring to any physical place that is accessible to the public, irrespective of whether the place in question is privately or publicly owned. Therefore, the notion does not cover places that are private in nature and normally not freely accessible for third parties, including law enforcement authorities, unless those parties have been specifically invited or authorised, such as homes, private clubs, offices, warehouses and factories. However, the mere fact that certain conditions for accessing a particular space may apply, such as admission tickets or age restrictions, does not mean that the space is not publicly accessible within the meaning of this Regulation. Consequently, in addition to online and public spaces such as streets, relevant parts of government buildings and most transport infrastructure, spaces such as cinemas, theatres, shops and shopping centres are normally also publicly accessible. Whether a given space is accessible to the public should however be determined on a case-by-case basis, having regard to the specificities of the individual situation at hand.
Amendment 396 #
Proposal for a regulation
Recital 12
(12) This Regulation should also apply to Union institutions, offices, bodies and agencies when acting as a provider or user of an AI system. AI systems exclusively developed or used for military purposes should be excluded from the scope of this Regulation where that use falls under the exclusive remit of the Common Foreign and Security Policy regulated under Title V of the Treaty on the European Union (TEU). This Regulation should be without prejudice to the provisions regarding the liability of intermediary service providers set out in Directive 2000/31/EC of the European Parliament and of the Council [as amended by the Digital Services Act].
Amendment 403 #
Proposal for a regulation
Recital 12 a (new)
(12 a) In order to ensure a minimum level of transparency on the ecological sustainability aspects of an AI system, providers and users should document, for any AI system, parameters including but not limited to resource consumption resulting from the design, data management and training, and from the underlying infrastructures of the AI system, as well as the methods to reduce such impact.
Amendment 406 #
Proposal for a regulation
Recital 13
(13) In order to ensure a consistent and high level of protection of public interests as regards health, safety and fundamental rights, common normative standards for all high-risk AI systems should be established. Those standards should be consistent with the Charter of fundamental rights of the European Union (the Charter) and should be non-discriminatory and in line with the Union’s international trade commitments. In order to ensure a minimum level of transparency on the ecological sustainability aspects of an AI system, providers and users should document (i) parameters including, but not limited to, resource consumption resulting from the design, data management, training and from the underlying infrastructures of the AI system; as well as (ii) the methods to reduce such impact.
Amendment 407 #
Proposal for a regulation
Recital 13
(13) In order to ensure a consistent and high level of protection of public interests as regards health, safety, the environment, fundamental rights and values, common normative standards for all high-risk AI systems should be established. Those standards should be consistent with the Charter of fundamental rights of the European Union (the Charter), the European Green Deal (The Green Deal), the Joint Declaration on Digital Rights of the Union (the Declaration) and the Ethics Guidelines for Trustworthy Artificial Intelligence (AI) of the High-Level Expert Group on Artificial Intelligence (AI HLEG), and should be non-discriminatory and in line with the Union’s international trade commitments.
Amendment 418 #
Proposal for a regulation
Recital 15 a (new)
(15 a) As signatories to the United Nations Convention on the Rights of Persons with Disabilities (CRPD), the European Union and all Member States are legally obliged to protect persons with disabilities from discrimination and promote their equality (Article 5). They are also obliged to ensure that persons with disabilities have access, on an equal basis with others, to information and communications technologies and systems (Article 9). Finally, they are obliged to ensure respect for the privacy of persons with disabilities (Article 22).
Amendment 419 #
Proposal for a regulation
Recital 15 a (new)
(15 a) As signatories to the United Nations Convention on the Rights of Persons with Disabilities (CRPD), the European Union and all Member States are legally obliged to protect persons with disabilities from discrimination and promote their equality (Article 5). They are also obliged to ensure that persons with disabilities have access, on an equal basis with others, to information and communications technologies and systems (Article 9). Finally, they are obliged to ensure respect for the privacy of persons with disabilities (Article 22).
Amendment 422 #
Proposal for a regulation
Recital 15 b (new)
(15 b) Given the growing importance and use of AI systems, the strict application of universal design principles to all new technologies and services should ensure full, equal, and unrestricted access for everyone potentially affected by or using AI technologies, including persons with disabilities, in a way that takes full account of their inherent dignity and diversity. It is essential to ensure that providers of AI systems design them, and users use them, in accordance with the accessibility requirements set out in Directive (EU) 2019/882. Union law should be further developed, including through this Regulation, so that no one is left behind as a result of digital innovation.
Amendment 437 #
Proposal for a regulation
Recital 17
(17) AI systems providing social scoring of natural persons for general purpose by public authorities or on their behalf may lead to discriminatory outcomes and the exclusion of certain groups. They may violate the right to dignity and non-discrimination and the values of equality and justice. Such AI systems evaluate or classify the trustworthiness of natural persons based on their social behaviour in multiple contexts or known or predicted personal or personality characteristics. The social score obtained from such AI systems may lead to the detrimental or unfavourable treatment of natural persons or whole groups thereof in social contexts which are unrelated to the context in which the data was originally generated or collected, or to a detrimental treatment that is disproportionate or unjustified to the gravity of their social behaviour. Such AI systems should therefore be prohibited.
Amendment 452 #
Proposal for a regulation
Recital 18
(18) The use of AI systems for ‘real- time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement is considered particularly intrusive in the rights and freedoms of the concerned persons, to the extent that it may affect the private life of a large part of the population, evoke a feeling of constant surveillance and indirectly dissuade the exercise of the freedom of assembly and other fundamental rights. In addition, the immediacy of the impact and the limited opportunities for further checks or corrections in relation to the use of such systems operating in ‘real-time’ carry heightened risks for the rights and freedoms of the persons that are concerned by law enforcement activities. Such AI systems should be therefore prohibited.
Amendment 453 #
Proposal for a regulation
Recital 18
(18) The use of AI systems for ‘real- time’ remote biometric identification of natural persons in publicly accessible or online spaces for the purpose of law enforcement is considered particularly intrusive in the rights and freedoms of the concerned persons, to the extent that it may affect the private life of a large part of the population, evoke a feeling of constant surveillance and indirectly dissuade the exercise of the freedom of assembly and other fundamental rights. In addition, the immediacy of the impact and the limited opportunities for further checks or corrections in relation to the use of such systems operating in ‘real-time’ carry heightened risks for the rights and freedoms of the persons that are concerned by law enforcement activities.
Amendment 457 #
Proposal for a regulation
Recital 18 a (new)
(18 a) The notion of ‘at a distance’ in Remote Biometric Identification (RBI) means the use of systems as described in Article 3(36), at a distance great enough that the system has the capacity to scan multiple persons in its field of view (or the equivalent generalised scanning of online / virtual spaces), which would mean that the identification could happen without one or more of the data subjects’ knowledge. Because RBI relates to how a system is designed and installed, and not solely to whether or not data subjects have consented, this definition applies even when warning notices are placed in the location that is under the surveillance of the RBI system, and is not de facto annulled by pre-enrolment.
Amendment 458 #
Proposal for a regulation
Recital 18 a (new)
(18 a) The notion of ‘at a distance’ in Remote Biometric Identification (RBI) means the use of systems as described in Article 3(36), at a distance great enough that the system has the capacity to scan multiple persons in its field of view (or the equivalent generalised scanning of online / virtual spaces), which would mean that the identification could happen without one or more of the data subjects’ knowledge. Because RBI relates to how a system is designed and installed, and not solely to whether or not data subjects have consented, this definition applies even when warning notices are placed in the location that is under the surveillance of the RBI system, and is not de facto annulled by pre-enrolment.
Amendment 461 #
Proposal for a regulation
Recital 18 b (new)
Recital 18 b (new)
(18 b) ‘Biometric categorisation systems’ are defined as AI systems that assign natural persons to specific categories, or infer their characteristics or attributes. ‘Categorisation’ shall include any sorting of natural persons, whether into discrete categories (e.g. male/female, suspicious/not-suspicious), on a numerical scale (e.g. using the Fitzpatrick scale for skin type) or any other form of assigning labels or values to people. ‘Inferring an attribute or characteristic’ shall include any situation in which an AI system uses one type of data about a natural person (e.g. hair colour) to ascribe a different attribute or characteristic to that person (e.g. ethnic origin).
Amendment 469 #
Proposal for a regulation
Recital 19
Recital 19
(19) The use of those systems for the purpose of law enforcement should therefore be prohibited, except in three exhaustively listed and narrowly defined situations, where the use is strictly necessary to achieve a substantial public interest, the importance of which outweighs the risks. Those situations involve the search for potential victims of crime, including missing children; certain threats to the life or physical safety of natural persons or of a terrorist attack; and the detection, localisation, identification or prosecution of perpetrators or suspects of the criminal offences referred to in Council Framework Decision 2002/584/JHA38 if those criminal offences are punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years and as they are defined in the law of that Member State. Such threshold for the custodial sentence or detention order in accordance with national law contributes to ensure that the offence should be serious enough to potentially justify the use of ‘real-time’ remote biometric identification systems. Moreover, of the 32 criminal offences listed in the Council Framework Decision 2002/584/JHA, some are in practice likely to be more relevant than others, in that the recourse to ‘real-time’ remote biometric identification will foreseeably be necessary and proportionate to highly varying degrees for the practical pursuit of the detection, localisation, identification or prosecution of a perpetrator or suspect of the different criminal offences listed and having regard to the likely differences in the seriousness, probability and scale of the harm or possible negative consequences. _________________ 38 Council Framework Decision 2002/584/JHA of 13 June 2002 on the European arrest warrant and the surrender procedures between Member States (OJ L 190, 18.7.2002, p. 1).
Amendment 479 #
Proposal for a regulation
Recital 20
Recital 20
(20) In order to ensure that those systems are used in a responsible and proportionate manner, it is also important to establish that, in each of those three exhaustively listed and narrowly defined situations, certain elements should be taken into account, in particular as regards the nature of the situation giving rise to the request and the consequences of the use for the rights and freedoms of all persons concerned and the safeguards and conditions provided for with the use. In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement should be subject to appropriate limits in time and space, having regard in particular to the evidence or indications regarding the threats, the victims or perpetrator. The reference database of persons should be appropriate for each use case in each of the three situations mentioned above.
Amendment 480 #
Proposal for a regulation
Recital 20
Recital 20
(20) In order to ensure that those systems are used in a responsible and proportionate manner, it is also important to establish that, in each of those three exhaustively listed and narrowly defined situations, certain elements should be taken into account, in particular as regards the nature of the situation giving rise to the request and the consequences of the use for the rights and freedoms of all persons concerned and the safeguards and conditions provided for with the use. In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible or online spaces for the purpose of law enforcement should be subject to appropriate limits in time and space, having regard in particular to the evidence or indications regarding the threats, the victims or perpetrator. The reference database of persons should be appropriate for each use case in each of the three situations mentioned above.
Amendment 485 #
Proposal for a regulation
Recital 21
Recital 21
Amendment 488 #
Proposal for a regulation
Recital 21
Recital 21
(21) Each use of a ‘real-time’ remote biometric identification system in publicly accessible or online spaces for the purpose of law enforcement should be subject to an express and specific authorisation by a judicial authority or by an independent administrative authority of a Member State. Such authorisation should in principle be obtained prior to the use, except in duly justified situations of urgency, that is, situations where the need to use the systems in question is such as to make it effectively and objectively impossible to obtain an authorisation before commencing the use. In such situations of urgency, the use should be restricted to the absolute minimum necessary and be subject to appropriate safeguards and conditions, as determined in national law and specified in the context of each individual urgent use case by the law enforcement authority itself. In addition, the law enforcement authority should in such situations seek to obtain an authorisation as soon as possible, whilst providing the reasons for not having been able to request it earlier.
Amendment 503 #
Proposal for a regulation
Recital 23
Recital 23
(23) The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement necessarily involves the processing of biometric data. The rules of this Regulation that prohibit, subject to certain exceptions, such use, which are based on Article 16 TFEU, should apply as lex specialis in respect of the rules on the processing of biometric data contained in Article 10 of Directive (EU) 2016/680, thus regulating such use and the processing of biometric data involved in an exhaustive manner. Therefore, such use and processing should only be possible in as far as it is compatible with the framework set by this Regulation, without there being scope, outside that framework, for the competent authorities, where they act for purpose of law enforcement, to use such systems and process such data in connection thereto on the grounds listed in Article 10 of Directive (EU) 2016/680. In this context, this Regulation is not intended to provide the legal basis for the processing of personal data under Article 8 of Directive 2016/680. However, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for purposes other than law enforcement, including by competent authorities, should not be covered by the specific framework regarding such use for the purpose of law enforcement set by this Regulation. Such use for purposes other than law enforcement should therefore not be subject to the requirement of an authorisation under this Regulation and the applicable detailed rules of national law that may give effect to it. The lex specialis nature of the prohibition on RBI does not provide a legal basis for law enforcement uses of RBI, nor does it weaken existing protections of biometric data under the Data Protection Law Enforcement Directive (LED) or national implementations of the LED.
Amendment 504 #
Proposal for a regulation
Recital 23
Recital 23
(23) The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible or online spaces for the purpose of law enforcement necessarily involves the processing of biometric data. The rules of this Regulation that prohibit, subject to certain exceptions, such use, which are based on Article 16 TFEU, should apply as lex specialis in respect of the rules on the processing of biometric data contained in Article 10 of Directive (EU) 2016/680, thus regulating such use and the processing of biometric data involved in an exhaustive manner. Therefore, such use and processing should only be possible in as far as it is compatible with the framework set by this Regulation, without there being scope, outside that framework, for the competent authorities, where they act for purpose of law enforcement, to use such systems and process such data in connection thereto on the grounds listed in Article 10 of Directive (EU) 2016/680. In this context, this Regulation is not intended to provide the legal basis for the processing of personal data under Article 8 of Directive 2016/680. However, the use of ‘real-time’ remote biometric identification systems in publicly accessible or online spaces for purposes other than law enforcement, including by competent authorities, should not be covered by the specific framework regarding such use for the purpose of law enforcement set by this Regulation. Such use for purposes other than law enforcement should therefore not be subject to the requirement of an authorisation under this Regulation and the applicable detailed rules of national law that may give effect to it.
Amendment 507 #
Proposal for a regulation
Recital 23 a (new)
Recital 23 a (new)
(23 a) ‘Biometric categorisation systems’ are defined as AI systems that assign natural persons to specific categories, or infer their characteristics or attributes. ‘Categorisation’ shall include any sorting of natural persons, whether into discrete categories (e.g. male/female, suspicious/not-suspicious), on a numerical scale (e.g. using the Fitzpatrick scale for skin type) or any other form of assigning labels or values to people. ‘Inferring an attribute or characteristic’ shall include any situation in which an AI system uses one type of data about a natural person (e.g. hair colour) to ascribe a different attribute or characteristic to that person (e.g. ethnic origin).
Amendment 514 #
Proposal for a regulation
Recital 24
Recital 24
(24) Any processing of biometric data and other personal data involved in the use of AI systems for biometric identification, other than in connection to the use of ‘real-time’ remote biometric identification systems in publicly accessible or online spaces for the purpose of law enforcement as regulated by this Regulation, including where those systems are used by competent authorities in publicly accessible or online spaces for other purposes than law enforcement, should continue to comply with all requirements resulting from Article 9(1) of Regulation (EU) 2016/679, Article 10(1) of Regulation (EU) 2018/1725 and Article 10 of Directive (EU) 2016/680, as applicable.
Amendment 528 #
Proposal for a regulation
Recital 27
Recital 27
(27) High-risk AI systems should only be placed on the Union market or put into service if they comply with certain mandatory requirements. Those requirements should ensure that high-risk AI systems available in the Union or whose output is otherwise used in the Union do not pose unacceptable risks to important Union public interests as recognised and protected by Union law. AI systems identified as high-risk should be limited to those that have a significant harmful impact on the health, safety and fundamental rights of persons in the Union, but also on the environment, democracy and the rule of law in the Union, and such limitation minimises any potential restriction to international trade, if any.
Amendment 540 #
Proposal for a regulation
Recital 32
Recital 32
(32) As regards stand-alone AI systems, meaning high-risk AI systems other than those that are safety components of products, or which are themselves products, it is appropriate to classify them as high-risk if, in the light of their intended purpose or reasonably foreseeable uses, they pose a high risk of harm to the health and safety or the fundamental rights of persons, taking into account both the severity of the possible harm and its probability of occurrence and they are used in a number of specifically pre-defined areas specified in the Regulation. The identification of those systems is based on the same methodology and criteria envisaged also for any future amendments of the list of high-risk AI systems. (This amendment should apply throughout the text, i.e. any occurrence of "intended purpose" should be followed by "or reasonably foreseeable uses")
Amendment 542 #
Proposal for a regulation
Recital 32
Recital 32
(32) As regards stand-alone AI systems, meaning high-risk AI systems other than those that are safety components of products, or which are themselves products, it is appropriate to classify them as high-risk if, in the light of their foreseeable uses, they pose a high risk of harm to the health and safety or the fundamental rights of persons, taking into account both the severity of the possible harm and its probability of occurrence and they are used in a number of specifically pre-defined areas specified in the Regulation. The identification of those systems is based on the same methodology and criteria envisaged also for any future amendments of the list of high-risk AI systems.
Amendment 550 #
Proposal for a regulation
Recital 33
Recital 33
(33) Technical inaccuracies of AI systems intended for the remote biometric identification of natural persons can lead to biased results and entail discriminatory effects. This is particularly relevant when it comes to age, ethnicity, sex or disabilities. Therefore, ‘real-time’ and ‘post’ remote biometric identification systems should be classified as high-risk. In view of the risks that they pose, both types of remote biometric identification systems should be prohibited.
Amendment 556 #
Proposal for a regulation
Recital 35
Recital 35
(35) AI systems used in education or vocational training, notably for determining access or assigning persons to educational and vocational training institutions or to evaluate persons on tests as part of or as a precondition for their education should be considered high-risk, since they may determine the educational and professional course of a person’s life and therefore affect their ability to secure their livelihood. When improperly designed and used, such systems may violate the right to education and training as well as the right not to be discriminated against and perpetuate historical patterns of discrimination. Therefore, the use of AI systems by public authorities in the education of underage children shall be prohibited, in order to meet the requirement in this Regulation not to exploit the vulnerabilities of a group of persons due to their age.
Amendment 564 #
Proposal for a regulation
Recital 36
Recital 36
(36) AI systems used in employment, workers management and access to self-employment, notably, but not limited to, for the recruitment and selection of persons, for making decisions on promotion and termination and for task allocation, monitoring or evaluation of persons in work-related contractual relationships, should also be classified as high-risk, since those systems may appreciably impact future career prospects, livelihoods of these persons and workers’ rights. Relevant work-related contractual relationships should involve employees and persons providing services through platforms as referred to in the Commission Work Programme 2021. Such persons should in principle not be considered users within the meaning of this Regulation. Throughout the recruitment process and in the evaluation, promotion, or retention of persons in work-related contractual relationships, such systems may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. AI systems used to monitor the performance and behaviour of these persons may also impact their rights to data protection and privacy.
Amendment 567 #
Proposal for a regulation
Recital 36 b (new)
Recital 36 b (new)
(36 b) Given the significance of Artificial Intelligence impact assessments in view of the usage of Artificial Intelligence applications in the workplace, the EU will consider a corresponding directive with specific provisions for an impact assessment to ensure the protection of the rights and freedoms of workers affected by AI systems through collective agreements or national legislation.
Amendment 570 #
Proposal for a regulation
Recital 37
Recital 37
(37) Another area in which the use of AI systems deserves special consideration is the access to and enjoyment of certain essential private and public services and benefits necessary for people to fully participate in society or to improve one’s standard of living. In particular, AI systems used to evaluate the credit score or creditworthiness of natural persons should be prohibited, since they determine those persons’ access to financial resources or essential services such as housing, electricity, and telecommunication services. AI systems used for this purpose may lead to discrimination of persons or groups and perpetuate historical patterns of discrimination, for example based on racial or ethnic origins, disabilities, age, sexual orientation, or create new forms of discriminatory impacts. Considering the very limited scale of the impact and the available alternatives on the market, it is appropriate to exempt AI systems for the purpose of creditworthiness assessment and credit scoring when put into service by small-scale providers for their own use. Natural persons applying for or receiving public assistance benefits and services from public authorities are typically dependent on those benefits and services and in a vulnerable position in relation to the responsible authorities. If AI systems are used for determining whether such benefits and services should be denied, reduced, revoked or reclaimed by authorities, they may have a significant impact on persons’ livelihood and may infringe their fundamental rights, such as the right to social protection, non-discrimination, human dignity or an effective remedy. Those systems should therefore be classified as high-risk.
Nonetheless, this Regulation should not hamper the development and use of innovative approaches in the public administration, which would stand to benefit from a wider use of compliant and safe AI systems, provided that those systems do not entail a high risk to legal and natural persons. Finally, AI systems used to dispatch or establish priority in the dispatching of emergency first response services should also be classified as high-risk since they make decisions in very critical situations for the life and health of persons and their property.
Amendment 584 #
Proposal for a regulation
Recital 38
Recital 38
(38) Actions by law enforcement authorities involving certain uses of AI systems are characterised by a significant degree of power imbalance and may lead to surveillance, arrest or deprivation of a natural person’s liberty as well as other adverse impacts on fundamental rights guaranteed in the Charter. In particular, if the AI system is not trained with high quality data, does not meet adequate requirements in terms of its accuracy or robustness, or is not properly designed and tested before being put on the market or otherwise put into service, it may single out people in a discriminatory or otherwise incorrect or unjust manner. Furthermore, the exercise of important procedural fundamental rights, such as the right to an effective remedy and to a fair trial as well as the right of defence and the presumption of innocence, could be hampered, in particular, where such AI systems are not sufficiently transparent, explainable and documented, and where a redress procedure is not foreseen. It is therefore appropriate to prohibit some AI systems intended to be used in the law enforcement context where accuracy, reliability and transparency are particularly important to avoid adverse impacts, retain public trust and ensure accountability and effective redress, including the availability of redress-by-design mechanisms and procedures.
In view of the nature of the activities in question and the risks relating thereto, those prohibited systems should include in particular AI systems intended to be used by law enforcement authorities for individual risk assessments, polygraphs and similar tools or to detect the emotional state of a natural person, to detect ‘deep fakes’, for the evaluation of the reliability of evidence in criminal proceedings, for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons, or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups, for profiling in the course of detection, investigation or prosecution of criminal offences, as well as for crime analytics regarding natural persons. AI systems specifically intended to be used for administrative proceedings by tax and customs authorities should not be included in such a ban.
Amendment 586 #
Proposal for a regulation
Recital 39
Recital 39
(39) AI systems used in migration, asylum and border control management affect people who are often in particularly vulnerable position and who are dependent on the outcome of the actions of the competent public authorities. The accuracy, non-discriminatory nature and transparency of the AI systems used in those contexts are therefore particularly important to guarantee the respect of the fundamental rights of the affected persons, notably their rights to free movement, non-discrimination, protection of private life and personal data, international protection and good administration. It is therefore appropriate to classify as high-risk AI systems intended to be used by the competent public authorities charged with tasks in the fields of migration, asylum and border control management as polygraphs and similar tools or to detect the emotional state of a natural person; for assessing certain risks posed by natural persons entering the territory of a Member State or applying for visa or asylum; for verifying the authenticity of the relevant documents of natural persons; for assisting competent public authorities for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the objective to establish the eligibility of the natural persons applying for a status. AI systems in the area of migration, asylum and border control management covered by this Regulation should comply with the relevant procedural requirements set by the Directive 2013/32/EU of the European Parliament and of the Council49, the Regulation (EC) No 810/2009 of the European Parliament and of the Council50 and other relevant legislation. _________________ 49 Directive 2013/32/EU of the European Parliament and of the Council of 26 June 2013 on common procedures for granting and withdrawing international protection (OJ L 180, 29.6.2013, p. 60).
50 Regulation (EC) No 810/2009 of the European Parliament and of the Council of 13 July 2009 establishing a Community Code on Visas (Visa Code) (OJ L 243, 15.9.2009, p. 1).
Amendment 593 #
Proposal for a regulation
Recital 39 a (new)
Recital 39 a (new)
(39 a) AI systems in migration, asylum and border control management should in no circumstances be used by Member States or European Union institutions as a means to circumvent their international obligations under the Convention of 28 July 1951 relating to the Status of Refugees as amended by the Protocol of 31 January 1967, nor should they be used in any way to infringe on the principle of non-refoulement, or to deny safe and effective legal avenues into the territory of the Union, including the right to international protection;
Amendment 594 #
Proposal for a regulation
Recital 39 a (new)
Recital 39 a (new)
(39 a) AI systems in migration, asylum and border control management should in no circumstances be used by Member States or European Union institutions as a means to circumvent their international obligations under the Convention of 28 July 1951 relating to the Status of Refugees as amended by the Protocol of 31 January 1967, nor should they be used in any way to infringe on the principle of non-refoulement, or to deny safe and effective legal avenues into the territory of the Union, including the right to international protection;
Amendment 600 #
Proposal for a regulation
Recital 40 a (new)
Recital 40 a (new)
(40 a) Another area in which the use of AI systems deserves special consideration is the use for health-related purposes, including healthcare. Next to medical devices (as per Regulation (EU) 2017/745), other health-related AI systems also bring about risks which should be regulated. These include systems that influence individuals’ health outcomes but do not meet the criteria for a medical device, systems that influence population health outcomes or health equality, systems that impact the distribution of healthcare resources and systems used by pharmaceutical and medical technology companies in research and development, pharmacovigilance, market optimisation and pharmaceutical marketing. Bias and errors in health-related AI systems can have major and immediate consequences for individuals’ and populations’ health and wellbeing. Further, many systems will use sensitive and personal data, which needs to be justified, and about which patients need to be properly informed. What is more, systems that work on hospital, health system, or population level may have a major effect on societal health because they influence the distribution of healthcare resources and health policy design. For these reasons, there is a need for trustworthy AI in healthcare, meaning people must be able to trust that systems used in healthcare are scientifically, technically and clinically valid, safe and accountable, and safeguard individuals’ autonomy and privacy.
Amendment 616 #
Proposal for a regulation
Recital 42
Recital 42
(42) To mitigate the risks from high-risk AI systems placed or otherwise put into service on the Union market for users and affected persons, certain mandatory requirements should apply, taking into account the intended purpose or reasonably foreseeable use of the system and according to the risk management system to be established by the provider.
Amendment 617 #
Proposal for a regulation
Recital 42
Recital 42
(42) To mitigate the risks from high-risk AI systems placed or otherwise put into service on the Union market for users and affected persons, certain mandatory requirements should apply, taking into account the foreseeable uses of the system and according to the risk management system to be established by the provider.
Amendment 620 #
Proposal for a regulation
Recital 43
Recital 43
(43) Requirements should apply to high-risk AI systems as regards the quality of data sets used, technical documentation and record-keeping, transparency and the provision of information to users, human oversight, and robustness, accuracy and cybersecurity. Those requirements are necessary to effectively mitigate the risks for health, safety and fundamental rights, as applicable in the light of the intended purpose or reasonably foreseeable use of the system, and no other less trade restrictive measures are reasonably available, thus avoiding unjustified restrictions to trade.
Amendment 621 #
Proposal for a regulation
Recital 43
Recital 43
(43) Requirements should apply to high-risk AI systems as regards the quality of data sets used, technical documentation and record-keeping, transparency and the provision of information to users, human oversight, and robustness, accuracy and cybersecurity. Those requirements are necessary to effectively mitigate the risks for health, safety and fundamental rights, as applicable in the light of the foreseeable uses of the system, and no other less trade restrictive measures are reasonably available, thus avoiding unjustified restrictions to trade.
Amendment 626 #
Proposal for a regulation
Recital 44
Recital 44
(44) High data quality is essential for the performance of many AI systems, especially when techniques involving the training of models are used, with a view to ensure that the high-risk AI system performs as intended and safely and it does not become the source of discrimination prohibited by Union law. High quality training, validation and testing data sets require the implementation of appropriate data governance and management practices. Training, validation and testing data sets should be sufficiently relevant, representative and free of errors and complete in view of the foreseeable uses of the system. They should also have the appropriate statistical properties, including as regards the persons or groups of persons on which the high-risk AI system is intended to be used. In particular, training, validation and testing data sets should take into account, to the extent required in the light of their foreseeable uses, the features, characteristics or elements that are particular to the specific geographical, behavioural or functional setting or context within which the AI system is intended to be used. In order to protect the right of others from the discrimination that might result from the bias in AI systems, the providers should be able to process also special categories of personal data, as a matter of substantial public interest, in order to ensure the bias monitoring, detection and correction in relation to high-risk AI systems.
Amendment 627 #
Proposal for a regulation
Recital 44
Recital 44
(44) High data quality is essential for the performance of many AI systems, especially when techniques involving the training of models are used, with a view to ensure that the high-risk AI system performs as intended and safely and it does not become the source of discrimination prohibited by Union law. High quality training, validation and testing data sets require the implementation of appropriate data governance and management practices. Training, validation and testing data sets should be sufficiently relevant, representative and free of errors and complete in view of the intended purpose or reasonably foreseeable use of the system. They should also have the appropriate statistical properties, including as regards the persons or groups of persons on which the high-risk AI system is intended to be used. In particular, training, validation and testing data sets should take into account, to the extent required in the light of their intended purpose or reasonably foreseeable use, the features, characteristics or elements that are particular to the specific geographical, behavioural or functional setting or context within which the AI system is intended or foreseeable to be used. In order to protect the right of others from the discrimination that might result from the bias in AI systems, the providers should be able to process also special categories of personal data, as a matter of substantial public interest, in order to ensure the bias monitoring, detection and correction in relation to high-risk AI systems.
Amendment 651 #
Proposal for a regulation
Recital 51
(51) Cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts to alter their use, behaviour, performance or compromise their security properties by malicious third parties exploiting the system’s vulnerabilities. Cyberattacks against AI systems can leverage AI specific assets, such as training data sets (e.g. data poisoning) or trained models (e.g. adversarial attacks), or exploit vulnerabilities in the AI system’s digital assets or the underlying ICT infrastructure. To ensure a level of cybersecurity appropriate to the risks, suitable measures should therefore be taken by the providers of high-risk AI systems, as well as the notified bodies, competent national authorities and market surveillance authorities, also taking into account as appropriate the underlying ICT infrastructure.
Amendment 693 #
Proposal for a regulation
Recital 66
(66) In line with the commonly established notion of substantial modification for products regulated by Union harmonisation legislation, it is appropriate that an AI system undergoes a new conformity assessment whenever a change occurs which may affect the compliance of the system with this Regulation or when the intended purpose or reasonably foreseeable use of the system changes. In addition, as regards AI systems which continue to ‘learn’ after being placed on the market or put into service (i.e. they automatically adapt how functions are carried out), it is necessary to provide rules establishing that changes to the algorithm and its performance that have been pre-determined by the provider and assessed at the moment of the conformity assessment should not constitute a substantial modification.
Amendment 695 #
Proposal for a regulation
Recital 66
(66) In line with the commonly established notion of substantial modification for products regulated by Union harmonisation legislation, it is appropriate that an AI system undergoes a new conformity assessment whenever a change occurs which may affect the compliance of the system with this Regulation or when the foreseeable uses of the system change. In addition, as regards AI systems which continue to ‘learn’ after being placed on the market or put into service (i.e. they automatically adapt how functions are carried out), it is necessary to provide rules establishing that changes to the algorithm and its performance that have been pre-determined by the provider and assessed at the moment of the conformity assessment should not constitute a substantial modification.
Amendment 702 #
Proposal for a regulation
Recital 69
(69) In order to facilitate the work of the Commission and the Member States in the artificial intelligence field as well as to increase the transparency towards the public, providers and users of high-risk AI systems other than those related to products falling within the scope of relevant existing Union harmonisation legislation, should be required to register their high-risk AI system or the use thereof in an EU database, to be established and managed by the Commission. Certain AI systems listed in Article 52 (1b) and (2) and uses thereof shall be registered in the EU database. In order to facilitate this, users shall request information listed in Annex VIII point 2(g) from providers of AI systems. Any uses of AI systems by public authorities or on their behalf shall also be registered in the EU database. In order to facilitate this, public authorities shall request information listed in Annex VIII point 3(g) from providers of AI systems. The Commission should be the controller of that database, in accordance with Regulation (EU) 2018/1725 of the European Parliament and of the Council55 . In order to ensure the full functionality of the database, when deployed, the procedure for setting the database should include the elaboration of functional specifications by the Commission and an independent audit report. In order to maximise the availability and use of the database by the public, the database, including the information made available through it, should comply with requirements under the European Accessibility Act. _________________ 55 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (OJ L 119, 4.5.2016, p. 1).
Amendment 703 #
Proposal for a regulation
Recital 69
(69) In order to facilitate the work of the Commission and the Member States in the artificial intelligence field as well as to increase the transparency towards the public, providers and users of high-risk AI systems other than those related to products falling within the scope of relevant existing Union harmonisation legislation, should be required to register their high-risk AI system or the use thereof in an EU database, to be established and managed by the Commission. The Commission should be the controller of that database, in accordance with Regulation (EU) 2018/1725 of the European Parliament and of the Council55 . In order to ensure the full functionality of the database, when deployed, the procedure for setting the database should include the elaboration of functional specifications by the Commission and an independent audit report. In order to maximise the availability and use of the database by the public, the database, including the information made available through it, should comply with requirements under the European Accessibility Act. _________________ 55 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (OJ L 119, 4.5.2016, p. 1).
Amendment 718 #
Proposal for a regulation
Recital 71
(71) Artificial intelligence is a rapidly developing family of technologies that requires novel forms of regulatory oversight and a safe and fully controlled space for experimentation, while ensuring responsible innovation and integration of appropriate ethical safeguards and risk mitigation measures. To ensure a legal framework that is innovation-friendly, future-proof and resilient to disruption, national competent authorities from one or more Member States should be encouraged to establish artificial intelligence regulatory sandboxes to facilitate the development and testing of innovative AI systems under strict regulatory oversight before these systems are placed on the market or otherwise put into service. Regulatory sandboxes involving activities that may impact health, safety and fundamental rights, democracy and the rule of law or the environment should be developed in accordance with redress-by-design principles. Any significant risks identified during the development and testing of such systems should result in immediate mitigation and, failing that, in the suspension of the development and testing process until such mitigation takes place. The legal basis of such sandboxes should comply with the requirements established in the existing data protection framework and should be consistent with the Charter of fundamental rights of the European Union.
Amendment 725 #
Proposal for a regulation
Recital 72
(72) The objectives of the regulatory sandboxes should be to foster AI innovation by establishing a strictly controlled experimentation and testing environment in the development and pre-marketing phase with a view to ensuring compliance of the innovative AI systems with this Regulation and other relevant Union and Member States legislation, as well as with the Charter of Fundamental Rights of the European Union and the General Data Protection Regulation; to enhance legal certainty for innovators and the competent authorities’ oversight and understanding of the opportunities, emerging risks and the impacts of AI use, to provide safeguards needed to build trust and reliance on AI systems, to accelerate access to markets, including by removing barriers for the public sector, small and medium enterprises (SMEs) and start-ups; and to contribute to the development of ethical, socially responsible and environmentally sustainable AI systems. To ensure uniform implementation across the Union and economies of scale, it is appropriate to establish common rules for the regulatory sandboxes’ implementation and a framework for cooperation between the relevant authorities involved in the supervision of the sandboxes. This Regulation should provide the legal basis for the use of personal data collected for other purposes for developing certain AI systems in the public interest within the AI regulatory sandbox, in line with Article 6(4) of Regulation (EU) 2016/679, and Article 6 of Regulation (EU) 2018/1725, and without prejudice to Article 4(2) of Directive (EU) 2016/680. Participants in the sandbox should ensure appropriate safeguards and cooperate with the competent authorities, including by following their guidance and acting expeditiously and in good faith to mitigate any high risks to safety and fundamental rights that may arise during the development and experimentation in the sandbox.
The conduct of the participants in the sandbox should be taken into account when competent authorities decide whether to impose an administrative fine under Article 83(2) of Regulation 2016/679 and Article 57 of Directive 2016/680.
Amendment 736 #
Proposal for a regulation
Recital 74
(74) In order to minimise the risks to implementation resulting from lack of knowledge and expertise in the market as well as to facilitate compliance of providers and notified bodies with their obligations under this Regulation, the AI-on-demand platform, the European Digital Innovation Hubs and the Testing and Experimentation Facilities established by the Commission and the Member States at national or EU level, as well as ENISA, the EU Agency for Fundamental Rights, EIGE, and the European Data Protection Supervisor should constantly contribute to the implementation of this Regulation. Within their respective mission and fields of competence, they may provide in particular technical and scientific support to providers and notified bodies.
Amendment 766 #
(84 a) Union legislation on the protection of whistleblowers (Directive (EU) 2019/1937) has full application to academics, designers, developers, project contributors, auditors, product managers, engineers and economic operators acquiring information on breaches of Union law by a provider of an AI system or its AI system, even if they are not explicitly mentioned in Article 4(1)a-4(1)d of that Directive.
Amendment 768 #
Proposal for a regulation
Recital 84 b (new)
(84 b) Union legislation on consumer protection (notably Directives (EU) 2019/2161, 2005/29/EC, 2011/83/EU) applies to AI systems to the extent determined in these legislations, regardless of whether these systems are categorised as high-risk.
Amendment 781 #
Proposal for a regulation
Article 1 – paragraph -1 (new)
-1 The purpose of this Regulation is to ensure a high level of protection of health, safety, fundamental rights and the environment, from harmful effects of artificial intelligence systems ("AI systems") in the Union, while enhancing innovation.
Amendment 788 #
Proposal for a regulation
Article 1 – paragraph 1 – point a
(a) harmonised rules for the development, placing on the market, the putting into service and the use of artificial intelligence systems (‘AI systems’) in the Union;
Amendment 809 #
Proposal for a regulation
Article 1 – paragraph 1 a (new)
The purpose of this Regulation is to ensure protection of health, safety, fundamental rights and the environment, from harmful effects of artificial intelligence systems in the Union, while supporting innovation.
Amendment 810 #
Proposal for a regulation
Article 1 – paragraph 1 a (new)
These provisions shall apply to AI systems as a product, service or practice, or as part of a product, service or practice.
Amendment 812 #
Proposal for a regulation
Article 1 – paragraph 1 b (new)
This Regulation is based on the principle that it is for developers, importers, distributors and downstream users to ensure that they develop, place on the market or use artificial intelligence that does not adversely affect health, safety, fundamental rights, and the environment. Its provisions are underpinned by the precautionary principle.
Amendment 813 #
Proposal for a regulation
Article 1 – paragraph 1 b (new)
This Regulation is based on the principle that it is for developers, importers, distributors and downstream users to ensure that they develop, place on the market or use artificial intelligence that does not adversely affect health, safety, fundamental rights, or the environment. Its provisions are underpinned by the precautionary principle.
Amendment 814 #
Proposal for a regulation
Article 1 – paragraph 1 c (new)
Any processing of personal data for the purposes of this Regulation shall take place in accordance with Union legislation for the protection of personal data, in particular Regulation 2016/679, Directive 2016/680, Regulation 2018/1725 and Directive 2002/58.
Amendment 817 #
Proposal for a regulation
Article 2 – paragraph 1 – point a a (new)
(a a) providers of AI systems that have their main establishment in the EU;
Amendment 823 #
Proposal for a regulation
Article 2 – paragraph 1 – point b a (new)
(b a) natural persons affected by the use of AI systems;
Amendment 839 #
Proposal for a regulation
Article 2 – paragraph 1 a (new)
1 a. This Regulation shall also apply to Union institutions, offices and agencies where they develop, deploy or otherwise make use of AI systems.
Amendment 873 #
Proposal for a regulation
Article 2 – paragraph 3 a (new)
3 a. Any exemptions from the application of this Act to AI systems used exclusively by Member States for national security purposes will be without prejudice to the application of Union law to any activity carried out by the Union or by a Member State that is subject to Union law.
Amendment 880 #
Proposal for a regulation
Article 2 – paragraph 4
Amendment 885 #
Proposal for a regulation
Article 2 – paragraph 5 a (new)
5 a. The use of any AI system that is in line with this Regulation should also continue to comply with the Charter of Fundamental Rights of the European Union, secondary Union law and national law. This Regulation shall not provide the legal ground for unlawful AI development, deployment or use.
Amendment 886 #
Proposal for a regulation
Article 2 – paragraph 5 a (new)
5 a. An AI system or practice that is in line with this Regulation should also continue to comply with the Charter of Fundamental Rights of the European Union, existing and new secondary Union law and national law.
Amendment 894 #
Proposal for a regulation
Article 2 – paragraph 5 b (new)
5 b. Member States may adopt or maintain in force more stringent provisions, compatible with the Treaty, in the field covered by this Regulation, to ensure a higher level of protection of health, safety and fundamental rights.
Amendment 896 #
Proposal for a regulation
Article 2 – paragraph 5 b (new)
5 b. This Regulation shall be without prejudice to Regulation (EU) 2016/679.
Amendment 900 #
Proposal for a regulation
Article 2 – paragraph 5 d (new)
5 d. This Regulation shall be without prejudice to national labour law and practice, that is, any legal or contractual provision concerning employment conditions, working conditions, including health and safety at work and the relationship between employers and workers, including information, consultation and participation.
Amendment 901 #
Proposal for a regulation
Article 2 – paragraph 5 e (new)
5 e. This Regulation shall not in any way affect the exercise of fundamental rights as recognised in the Member States and at Union level, including the right or freedom to strike or to take other action covered by the specific industrial relations systems in Member States, in accordance with national law and/or practice. Nor does it affect the right to negotiate, to conclude and enforce collective agreements, or to take collective action in accordance with national law and/or practice.
Amendment 919 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1
(1) ‘artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with;
Amendment 943 #
Proposal for a regulation
Article 3 – paragraph 1 – point 4
(4) ‘user’ means any natural or legal person, data subject, public authority, agency or other body using an AI system under its authority and on its own responsibility, except where the AI system is used in the course of a personal non- professional activity;
Amendment 970 #
Proposal for a regulation
Article 3 – paragraph 1 – point 12 a (new)
(12 a) ‘foreseeable uses’ means uses that can reasonably be expected to be made of an AI system, including but not limited to the use for which the AI system is intended for consumers or the likely use by consumers under reasonably foreseeable conditions;
Amendment 971 #
Proposal for a regulation
Article 3 – paragraph 1 – point 12 a (new)
(12 a) 'reasonably foreseeable use' means the use of an AI system in a way that is or should be reasonably foreseeable;
Amendment 985 #
Proposal for a regulation
Article 3 – paragraph 1 – point 14
(14) ‘safety component of a product or system’ means a component of a product or of a system which fulfils a direct or indirect safety function for that product or system or the failure or malfunctioning of which endangers the health and safety of persons or property;
Amendment 1014 #
Proposal for a regulation
Article 3 – paragraph 1 – point 29
(29) ‘training data’ means data used for training an AI system to fit its learnable parameters, including the weights of a neural network;
Amendment 1016 #
Proposal for a regulation
Article 3 – paragraph 1 – point 30
(30) ‘validation data’ means data used for providing an evaluation of the trained AI system. The process evaluates whether the model is under-fitted or overfitted. The validation dataset should be a separate dataset from the training set for the evaluation to be unbiased. If there is only one available dataset, this is divided into two parts, a training set and a validation set. Both sets should still comply with Article 10(3) to ensure appropriate data governance and management practices.
Amendment 1020 #
Proposal for a regulation
Article 3 – paragraph 1 – point 31
(31) ‘testing data’ means data used for providing an independent evaluation of the trained and validated AI system in order to confirm the expected performance of that system before its placing on the market or putting into service. Similar to Article 3(30), the testing dataset should be a separate dataset from the training set and validation set. This set should also comply with Article 10(3) to ensure appropriate data governance and management practices.
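For readers unfamiliar with the dataset terminology in points (29)–(31), the following sketch (illustrative only, not part of the amendment text; function name and split ratios are hypothetical) shows how a single available dataset can be partitioned into the three disjoint sets the definitions describe, so that validation and testing data stay separate from training data and the evaluation remains unbiased:

```python
# Illustrative sketch: the disjoint training, validation and testing
# data sets referred to in Article 3, points (29)-(31).
import random

def split_dataset(records, train=0.7, validation=0.15, seed=0):
    """Shuffle records and partition them into disjoint training,
    validation and testing sets (testing receives the remainder)."""
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * train)
    n_val = int(n * validation)
    return (shuffled[:n_train],                 # training data (29)
            shuffled[n_train:n_train + n_val],  # validation data (30)
            shuffled[n_train + n_val:])         # testing data (31)

train_set, val_set, test_set = split_dataset(list(range(100)))
# The three sets are pairwise disjoint and jointly cover the dataset.
assert not set(train_set) & set(val_set)
assert not set(val_set) & set(test_set)
assert len(train_set) + len(val_set) + len(test_set) == 100
```

Under these assumed ratios, 100 records yield 70 training, 15 validation and 15 testing records; the separation is what makes the evaluation on validation and testing data independent of the fitting process.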
Amendment 1025 #
Proposal for a regulation
Article 3 – paragraph 1 – point 33 a (new)
(33 a) ‘biometrics-based data’ means data resulting from specific technical processing relating to physical, physiological or behavioural signals of a natural person which may or may not allow or confirm the unique identification of a natural person;
Amendment 1026 #
Proposal for a regulation
Article 3 – paragraph 1 – point 33 a (new)
(33 a) ‘biometrics-based data’ means data resulting from specific technical processing relating to physical, physiological or behavioural signals of a natural person which may or may not allow or confirm the unique identification of a natural person;
Amendment 1031 #
Proposal for a regulation
Article 3 – paragraph 1 – point 34
(34) ‘emotion recognition system’ means an AI system for the purpose of identifying or inferring emotions, thoughts, states of mind (such as ‘deception’, ‘trustworthiness’ or ‘truthfulness’) or intentions of natural persons on the basis of their biometric data or other biometrics-based data;
Amendment 1032 #
Proposal for a regulation
Article 3 – paragraph 1 – point 34
(34) ‘emotion recognition system’ means an AI system for the purpose of identifying or inferring emotions, thoughts, states of mind (such as ‘deception’, ‘trustworthiness’ or ‘truthfulness’) or intentions of natural persons on the basis of their biometric data or biometrics-based data;
Amendment 1043 #
Proposal for a regulation
Article 3 – paragraph 1 – point 35
(35) ‘biometric categorisation system’ means an AI system that uses biometric or biometrics-based data for the purpose of assigning natural persons to specific categories, such as sex, age, hair colour, eye colour, tattoos, ethnic origin or sexual or political orientation, or inferring their characteristics and attributes;
Amendment 1045 #
Proposal for a regulation
Article 3 – paragraph 1 – point 35
(35) ‘biometric categorisation system’ means an AI system for the purpose of assigning natural persons to specific categories, such as sex, age, hair colour, eye colour, tattoos, ethnic origin or sexual or political orientation, or inferring their characteristics and attributes on the basis of their biometric data or biometrics-based data;
Amendment 1054 #
(36) ‘remote biometric identification system’ means an AI system capable of identifying natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database, and without prior knowledge of the user of the AI system whether the person will be present and can be identified;
Amendment 1058 #
Proposal for a regulation
Article 3 – paragraph 1 – point 36
(36) ‘remote biometric identification system’ means an AI system capable of identifying natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database or data repository, and without prior knowledge of the user of the AI system whether the person will be present and can be identified;
Amendment 1072 #
Proposal for a regulation
Article 3 – paragraph 1 – point 40 – point a a (new)
(a a) any other authority competent for law enforcement, including courts and the judiciary;
Amendment 1074 #
Proposal for a regulation
Article 3 – paragraph 1 – point 41
(41) ‘law enforcement’ means i) activities carried out by law enforcement authorities for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security; and ii) activities carried out by any other authority that is part of the criminal justice system, including the judiciary;
Amendment 1086 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – point a
(a) the death of a person or serious damage to a person’s physical health, mental health or wellbeing, to property or the environment,
Amendment 1090 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – point a a (new)
(a a) a breach of fundamental rights defined by the Charter of Fundamental Rights of the European Union;
Amendment 1091 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – point a b (new)
(a b) systematic, mass or serious breach of other rights;
Amendment 1092 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – point a c (new)
(a c) damage to democracy, the rule of law or the environment;
Amendment 1096 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – point b a (new)
(b a) breach of obligations under Union law intended to protect personal data;
Amendment 1105 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 a (new)
Amendment 1106 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 a (new)
(44 a) ‘scientific research and development’ means any scientific development, experimentation, analysis, testing or validation carried out under controlled conditions.
Amendment 1108 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 b (new)
(44 b) ‘social scoring’ means the evaluation or categorisation of EU citizens based on their behaviour or (personality) characteristics, where one or more of the following conditions apply: (i) the information is not reasonably relevant for the evaluation or categorisation; (ii) the information is generated or collected in another domain than that of the evaluation or categorisation; (iii) the information is not necessary for or proportionate to the evaluation or categorisation; (iv) the information contains or reveals special categories of personal data.
Amendment 1109 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 b (new)
Amendment 1116 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 c (new)
(44 c) ‘affectee(s)’ mean(s) any natural or legal person or group of natural or legal persons affected by the use or outcomes of, or a combination of, AI system(s);
Amendment 1119 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 c (new)
(44 c) ‘child’ means any person under the age of 18.
Amendment 1120 #
(44 d) ‘artificial intelligence system with indeterminate uses’ means an artificial intelligence system without specific and limited provider-defined purposes;
Amendment 1122 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 e (new)
(44 e) 'deep fake' means generated or manipulated image, audio or video content produced by an AI system that appreciably resembles existing persons, objects, places or other entities or events and falsely appears to a person to be authentic or truthful;
Amendment 1125 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 f (new)
Amendment 1159 #
(a) the placing on the market, putting into service or use of an AI system deployed, aimed at, or used for manipulation, deception or distorting a person’s behaviour or exploiting a person’s characteristics, in a manner that causes, or is likely to cause, harm to: (i) that person’s, another person’s or group of persons’ fundamental rights, including their physical or psychological health and safety, and/or (ii) democracy, the rule of law, or society at large;
Amendment 1162 #
Proposal for a regulation
Article 5 – paragraph 1 – point a
(a) the placing on the market, putting into service or use of an AI system that deploys manipulative, including subliminal, techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm;
Amendment 1175 #
Proposal for a regulation
Article 5 – paragraph 1 – point b
(b) the placing on the market, putting into service or use of an AI system that exploits any of the characteristics of a specific group of persons due to their age, gender, ethnic origin, sexual orientation, disability, or any other biological, physical, physiological, behavioural or social characteristics, that results in a detrimental, unfavourable, or discriminatory treatment vis-à-vis persons without those characteristics, or that is used in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical, psychological or material harm;
Amendment 1187 #
Proposal for a regulation
Article 5 – paragraph 1 – point c – introductory part
(c) the placing on the market, putting into service or use of AI systems by public authorities, on behalf of public authorities or by private actors for the purpose of social scoring.
Amendment 1193 #
Proposal for a regulation
Article 5 – paragraph 1 – point c – introductory part
(c) the placing on the market, putting into service or use of AI systems by public authorities or on their behalf for the evaluation or classification of the trustworthiness of natural persons or groups thereof relating to their education, employment, housing, socio-economic situation, health, reliability, social behaviour, location or movements.
Amendment 1205 #
Proposal for a regulation
Article 5 – paragraph 1 – point c – point i
Amendment 1216 #
Proposal for a regulation
Article 5 – paragraph 1 – point c – point ii
Amendment 1238 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – introductory part
(d) the placing or making available on the market or putting into service of remote biometric identification systems that are or may be used in publicly accessible spaces, as well as online spaces, and the use of remote biometric identification systems in publicly accessible spaces;
Amendment 1276 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point iii
Article 5 – paragraph 1 – point d – point iii
Amendment 1278 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point iii
Article 5 – paragraph 1 – point d – point iii
Amendment 1283 #
Proposal for a regulation
Article 5 – paragraph 1 – point d a (new)
Article 5 – paragraph 1 – point d a (new)
(d a) the placing on the market, putting into service or use of: (i) AI systems intended to be used for the purpose of determining access or assigning natural persons to educational and vocational training institutions; (ii) AI systems intended to be used for the purpose of assessing students in educational and vocational training institutions; (iii) AI systems intended to be used for recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, evaluating candidates in the course of interviews or tests; (iv) AI systems intended to be used for making decisions on promotion and termination of work-related contractual relationships, for task allocation and for monitoring and evaluating performance and behaviour of persons in such relationships; (v) AI systems intended to be used by public authorities, private entities or on their behalf to evaluate the eligibility of natural persons for public assistance benefits and services, essential private services, as well as to grant, reduce, revoke, or reclaim such benefits and services; (vi) AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems put into service by small scale providers for their own use; (vii) AI systems intended to be used by competent authorities for migration, asylum and border control management to assess a risk, including a security risk, a risk of irregular immigration, or a health risk, posed by a natural person who intends to enter or has entered into the territory of a Member State; (viii) AI systems intended to be used by public authorities, including competent authorities for migration, asylum and border control management, as polygraphs and similar tools or to detect the emotional state of a natural person;
Amendment 1285 #
Proposal for a regulation
Article 5 – paragraph 1 – point d a (new)
Article 5 – paragraph 1 – point d a (new)
(d a) AI systems intended to be used by law enforcement authorities for making individual risk assessments of natural persons in order to assess the risk of a natural person for offending or reoffending or the risk for potential victims of criminal offences;
Amendment 1293 #
Proposal for a regulation
Article 5 – paragraph 1 – point d b (new)
Article 5 – paragraph 1 – point d b (new)
(d b) the placing on the market, putting into service or use of AI systems to infer emotions of a natural person, except for health or research purposes or other exceptional purposes, and subject to full regulatory review and with full and informed consent at all times.
Amendment 1294 #
Proposal for a regulation
Article 5 – paragraph 1 – point d b (new)
Article 5 – paragraph 1 – point d b (new)
(d b) AI systems intended to be used by law enforcement authorities or other competent public authorities as polygraphs and similar tools or to detect the emotional state of a natural person;
Amendment 1301 #
Proposal for a regulation
Article 5 – paragraph 1 – point d c (new)
Article 5 – paragraph 1 – point d c (new)
(d c) the use of AI systems by or on behalf of competent authorities in migration, asylum or border control management, to profile an individual or assess a risk, including a security risk, a risk of irregular immigration, or a health risk, posed by a natural person who intends to enter or has entered the territory of a Member State, on the basis of personal or sensitive data, known or predicted, except for the sole purpose of identifying specific care and support needs;
Amendment 1302 #
Proposal for a regulation
Article 5 – paragraph 1 – point d c (new)
Article 5 – paragraph 1 – point d c (new)
(d c) AI systems intended to be used by law enforcement authorities for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 or assessing personality traits and characteristics or past criminal behaviour of natural persons, groups, or locations;
Amendment 1309 #
Proposal for a regulation
Article 5 – paragraph 1 – point d d (new)
Article 5 – paragraph 1 – point d d (new)
Amendment 1310 #
Proposal for a regulation
Article 5 – paragraph 1 – point d d (new)
Article 5 – paragraph 1 – point d d (new)
(d d) AI systems intended to be used by law enforcement authorities for profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 in the course of detection, investigation or prosecution of criminal offences;
Amendment 1312 #
Proposal for a regulation
Article 5 – paragraph 1 – point d d (new)
Article 5 – paragraph 1 – point d d (new)
(d d) The use of private facial recognition or other private biometric databases for the purpose of law enforcement;
Amendment 1314 #
Proposal for a regulation
Article 5 – paragraph 1 – point d e (new)
Article 5 – paragraph 1 – point d e (new)
(d e) AI systems intended to be used for crime analytics regarding natural persons, allowing law enforcement authorities to search complex related and unrelated large data sets available in different data sources or in different data formats in order to identify unknown patterns or discover hidden relationships in the data.
Amendment 1320 #
Proposal for a regulation
Article 5 – paragraph 1 – point d f (new)
Article 5 – paragraph 1 – point d f (new)
(d f) The use of remote biometric identification in migration management, border surveillance and humanitarian aid.
Amendment 1321 #
Proposal for a regulation
Article 5 – paragraph 1 – point d f (new)
Article 5 – paragraph 1 – point d f (new)
Amendment 1324 #
Proposal for a regulation
Article 5 – paragraph 1 – point d g (new)
Article 5 – paragraph 1 – point d g (new)
(d g) the use of AI systems, by or on behalf of competent authorities in migration, asylum and border control management, to forecast or predict individual or collective movement for the purpose of, or in any way reasonably foreseeably leading to, the interdicting, curtailing or preventing migration or border crossings;
Amendment 1326 #
Proposal for a regulation
Article 5 – paragraph 1 – point d g (new)
Article 5 – paragraph 1 – point d g (new)
(d g) the use of biometric categorisation systems in publicly-accessible spaces, workplaces (including in hiring processes), and educational settings;
Amendment 1327 #
Proposal for a regulation
Article 5 – paragraph 1 – point d h (new)
Article 5 – paragraph 1 – point d h (new)
(d h) the placing on the market, putting into service or use of biometric categorisation systems, or other AI systems, that categorise natural persons according to sensitive or protected attributes or characteristics, or infer those attributes or characteristics, including: sex; gender and gender identity; race; ethnic origin; membership of a national minority; migration or citizenship status; political orientation; social origin or class; language or dialect; trade union membership; sexual orientation; religion or philosophical orientation; disability; or any other grounds on which discrimination is prohibited under Article 21 of the EU Charter of Fundamental Rights as well as under Article 9 of the General Data Protection Regulation
Amendment 1330 #
Proposal for a regulation
Article 5 – paragraph 1 – point d h (new)
Article 5 – paragraph 1 – point d h (new)
(d h) The use of private facial recognition or other private biometric databases for the purpose of law enforcement;
Amendment 1331 #
Proposal for a regulation
Article 5 – paragraph 1 – point d i (new)
Article 5 – paragraph 1 – point d i (new)
(d i) the use of AI systems by law enforcement authorities, criminal justice authorities, or other public authorities in conjunction with law enforcement and criminal justice authorities, to make predictions, profiles or risk assessments based on data analysis or profiling of natural persons (as referred to in Article 3(4) of Directive (EU) 2016/680), groups or locations, for the purpose of predicting the occurrence or reoccurrence of an actual or potential criminal offence(s) or other criminalised social behaviour.
Amendment 1333 #
Proposal for a regulation
Article 5 – paragraph 1 – point d i (new)
Article 5 – paragraph 1 – point d i (new)
(d i) The creation or expansion of facial recognition or other biometric databases through the untargeted or generalised scraping of biometric data from social media profiles or CCTV footage, or equivalent methods;
Amendment 1335 #
Proposal for a regulation
Article 5 – paragraph 1 – point d j (new)
Article 5 – paragraph 1 – point d j (new)
(d j) the use of AI systems, by or on behalf of competent authorities in migration, asylum and border control management, to forecast or predict individual or collective movement for the purpose of, or in any way reasonably foreseeably leading to, the interdicting, curtailing or preventing migration or border crossings;
Amendment 1337 #
Proposal for a regulation
Article 5 – paragraph 1 – point d j (new)
Article 5 – paragraph 1 – point d j (new)
(d j) the placing on the market, putting into service or use of ‘emotion recognition systems’;
Amendment 1338 #
Proposal for a regulation
Article 5 – paragraph 1 – point d k (new)
Article 5 – paragraph 1 – point d k (new)
(d k) The use of AI systems by law enforcement and criminal justice authorities to make predictions, profiles or risk assessments for the purpose of predicting crime.
Amendment 1339 #
Proposal for a regulation
Article 5 – paragraph 1 – point d k (new)
Article 5 – paragraph 1 – point d k (new)
Amendment 1341 #
Proposal for a regulation
Article 5 – paragraph 1 – point d l (new)
Article 5 – paragraph 1 – point d l (new)
(d l) the placing on the market, putting into service or use of: (i) AI systems intended to be used for the purpose of determining access or assigning natural persons to educational and vocational training institutions; (ii) AI systems intended to be used for the purpose of assessing students in educational and vocational training institutions; (iii) AI systems intended to be used for recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, evaluating candidates in the course of interviews or tests; (iv) AI systems intended to be used for making decisions on promotion and termination of work-related contractual relationships, for task allocation and for monitoring and evaluating performance and behaviour of persons in such relationships; (v) AI systems intended to be used by public authorities, private entities or on their behalf to evaluate the eligibility of natural persons for public assistance benefits and services, essential private services, as well as to grant, reduce, revoke, or reclaim such benefits and services; (vi) AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score;
Amendment 1374 #
Proposal for a regulation
Article 5 – paragraph 3 – introductory part
Article 5 – paragraph 3 – introductory part
3. As regards paragraphs 1, point (d) and 2, each individual use for the purpose of law enforcement of a ‘real-time’ remote biometric identification system in publicly accessible or online spaces shall be subject to a prior authorisation granted by a judicial authority or by an independent administrative authority of the Member State in which the use is to take place, issued upon a reasoned request and in accordance with the detailed rules of national law referred to in paragraph 4. However, in a duly justified situation of urgency, the use of the system may be commenced without an authorisation and the authorisation may be requested only during or after the use.
Amendment 1392 #
Proposal for a regulation
Article 5 – paragraph 4
Article 5 – paragraph 4
4. A Member State may decide to provide for the possibility to fully or partially authorise the use of ‘real-time’ remote biometric identification systems in publicly accessible or online spaces for the purpose of law enforcement within the limits and under the conditions listed in paragraphs 1, point (d), 2 and 3. That Member State shall lay down in its national law the necessary detailed rules for the request, issuance and exercise of, as well as supervision relating to, the authorisations referred to in paragraph 3. Those rules shall also specify in respect of which of the objectives listed in paragraph 1, point (d), including which of the criminal offences referred to in point (iii) thereof, the competent authorities may be authorised to use those systems for the purpose of law enforcement.
Amendment 1403 #
Proposal for a regulation
Article 5 a (new)
Article 5 a (new)
Amendment 1406 #
Proposal for a regulation
Article 5 b (new)
Article 5 b (new)
Amendment 1407 #
Proposal for a regulation
Title II a (new)
Title II a (new)
Amendment 1416 #
Proposal for a regulation
Article 6 – paragraph 1 – introductory part
Article 6 – paragraph 1 – introductory part
1. Irrespective of whether an AI system is placed on the market or put into service independently from the products referred to in points (a) and (b), that AI system shall be considered high-risk where one of the following conditions is fulfilled:
Amendment 1421 #
Proposal for a regulation
Article 6 – paragraph 1 – point a
Article 6 – paragraph 1 – point a
(a) the AI system is intended to be used as a safety component of a product, or is itself a product, the failure or malfunctioning of which endangers the health, safety or fundamental rights of persons;
Amendment 1430 #
Proposal for a regulation
Article 6 – paragraph 1 – point b
Article 6 – paragraph 1 – point b
(b) the product whose safety component as meant under (a) is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service or use of that product pursuant to the Union harmonisation legislation listed in Annex II.
Amendment 1432 #
(b a) the AI system is used by a public authority.
Amendment 1434 #
Proposal for a regulation
Article 6 – paragraph 2
Article 6 – paragraph 2
2. In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III identified as posing a risk to fundamental human rights as defined in the EU Charter of Fundamental Rights, in relation to a specific intended use, shall also be considered high-risk. Such risk is to be determined by completion of a Human Rights Impact Assessment by the user of the AI in relation to the specific use intended for the AI system, with records of such assessment retained for regulatory inspection. The provider shall apply a precautionary principle and, in case of uncertainty over the AI system's classification, shall consider the AI system high-risk.
Amendment 1448 #
Proposal for a regulation
Article 6 – paragraph 2 a (new)
Article 6 – paragraph 2 a (new)
2 a. In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall also be considered high-risk.
Amendment 1450 #
Proposal for a regulation
Article 6 – paragraph 2 b (new)
Article 6 – paragraph 2 b (new)
2 b. In addition to the high-risk AI systems referred to in paragraph 1, AI systems that have over 20 million EU citizens across the EU or 50% of any given Member State's population as active monthly users, or whose users have cumulatively over 20 million customers or beneficiaries in the EU affected by it, shall be considered high-risk, unless these are placed onto the market.
Amendment 1453 #
Proposal for a regulation
Article 6 – paragraph 2 c (new)
Article 6 – paragraph 2 c (new)
2 c. In addition to the high-risk AI systems referred to in paragraph 1, AI systems affecting employees in the employment relationship or in matters of training or further education shall be considered high risk.
Amendment 1454 #
Proposal for a regulation
Article 6 – paragraph 2 d (new)
Article 6 – paragraph 2 d (new)
2 d. In addition to the high-risk AI systems referred to in paragraph 1, AI systems likely to interact with children shall be considered high-risk.
Amendment 1455 #
Proposal for a regulation
Article 6 – paragraph 2 e (new)
Article 6 – paragraph 2 e (new)
2 e. In addition to the high-risk AI systems referred to in paragraph 1, an artificial intelligence system with indeterminate uses shall also be considered high risk.
Amendment 1461 #
1. The Commission is empowered to adopt delegated acts in accordance with Article 73 to update the list in Annex III by adding high-risk AI systems where the following condition is fulfilled: the AI systems pose a risk of harm to health and safety, or a risk of adverse impact on fundamental rights, that is, in respect of its severity or probability of occurrence, equivalent to or greater than the risk of harm or of adverse impact associated with the high-risk AI systems already referred to in Annex III. Where an AI system is not intended to be used in any of the areas listed in points 1 to 8 of Annex III, the Commission is empowered to update the list of areas in Annex III by including new areas or extending the scope of existing areas.
Amendment 1469 #
Proposal for a regulation
Article 7 – paragraph 1 – introductory part
Article 7 – paragraph 1 – introductory part
1. The Commission is empowered to adopt delegated acts in accordance with Article 73 to update the list in Annex III by adding high-risk AI systems.
Amendment 1470 #
Proposal for a regulation
Article 7 – paragraph 1 – introductory part
Article 7 – paragraph 1 – introductory part
1. The Commission is empowered to adopt delegated acts in accordance with Article 73 to update the list in Annex III by adding high-risk AI systems where either of the following conditions is fulfilled:
Amendment 1472 #
Proposal for a regulation
Article 7 – paragraph 1 – point a
Article 7 – paragraph 1 – point a
Amendment 1478 #
Proposal for a regulation
Article 7 – paragraph 1 – point b
Article 7 – paragraph 1 – point b
Amendment 1482 #
Proposal for a regulation
Article 7 – paragraph 1 – point b
Article 7 – paragraph 1 – point b
(b) the AI systems pose a risk of economic harm, negative societal impacts or harm to the environment, health and safety, or a risk of adverse impact on fundamental rights, democracy and the rule of law, that is, in respect of its severity and probability of occurrence, equivalent to or greater than the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III.
Amendment 1485 #
Proposal for a regulation
Article 7 – paragraph 1 – point b a (new)
Article 7 – paragraph 1 – point b a (new)
(b a) the AI systems pose a risk of harm to occupational health and safety, including psychosocial risks.
Amendment 1490 #
Proposal for a regulation
Article 7 – paragraph 2 – introductory part
Article 7 – paragraph 2 – introductory part
2. When assessing for the purposes of paragraph 1 whether an AI system poses a risk of harm to the health and safety or a risk of adverse impact on fundamental rights or on the environment, democracy and rule of law that is equivalent to or greater than the risk of harm posed by the high-risk AI systems already referred to in Annex III, the Commission shall consult social partners and civil society and take into account, including but not limited to, the following non-cumulative criteria:
Amendment 1494 #
Proposal for a regulation
Article 7 – paragraph 2 – point a
Article 7 – paragraph 2 – point a
(a) the intended purpose of the AI system, or the reasonably foreseeable consequences of its use;
Amendment 1510 #
Proposal for a regulation
Article 7 – paragraph 2 – point c
Article 7 – paragraph 2 – point c
(c) the extent to which the use of an AI system has already caused harm to the health and safety or adverse impact on the fundamental rights, democracy, rule of law and the environment, or has given rise to significant concerns in relation to the materialisation of such harm or adverse impact, as demonstrated by available reports or documented allegations submitted to national competent authorities;
Amendment 1513 #
(d) the potential extent of such harm or such adverse impact, in particular in terms of its intensity and its ability to affect a plurality of persons or the environment, or to affect a particular group of persons disproportionately;
Amendment 1528 #
Proposal for a regulation
Article 7 – paragraph 2 – point g
Article 7 – paragraph 2 – point g
(g) the extent to which the outcome produced with an AI system is not easily reversible, whereby outcomes having an impact on the health or safety of persons or on their fundamental rights shall not be considered as easily reversible;
Amendment 1540 #
Proposal for a regulation
Article 7 – paragraph 2 – point h – introductory part
Article 7 – paragraph 2 – point h – introductory part
(h) the extent to which existing Union legislation lacks:
Amendment 1541 #
Proposal for a regulation
Article 7 – paragraph 2 – point h – point i
Article 7 – paragraph 2 – point h – point i
(i) effective measures of redress, the availability of redress-by-design mechanisms and procedures in relation to the risks posed by an AI system, including claims for material and non-material damages;
Amendment 1543 #
Proposal for a regulation
Article 7 – paragraph 2 – point h a (new)
Article 7 – paragraph 2 – point h a (new)
(h a) The general capabilities and functionalities of the AI system independent of its foreseeable use;
Amendment 1544 #
Proposal for a regulation
Article 7 – paragraph 2 – point h b (new)
Article 7 – paragraph 2 – point h b (new)
(h b) The extent of the availability and use of demonstrated technical solutions and mechanisms for the control, reliability and corrigibility of the AI system;
Amendment 1545 #
Proposal for a regulation
Article 7 – paragraph 2 – point h c (new)
Article 7 – paragraph 2 – point h c (new)
(h c) The potential misuse and malicious use of the AI system and of the technology underpinning it.
Amendment 1564 #
Proposal for a regulation
Article 8 – paragraph 2
Article 8 – paragraph 2
2. The foreseeable uses, and foreseeable misuses of AI systems with indeterminate uses, of the high-risk AI system and the risk management system referred to in Article 9 shall be taken into account when ensuring compliance with those requirements.
Amendment 1568 #
Proposal for a regulation
Article 8 – paragraph 2
Article 8 – paragraph 2
2. The intended purpose or reasonably foreseeable use of the high-risk AI system and the risk management system referred to in Article 9 shall be taken into account when ensuring compliance with those requirements.
Amendment 1577 #
Proposal for a regulation
Article 9 – paragraph 1
Article 9 – paragraph 1
1. A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems, throughout the entire lifecycle of the AI system.
Amendment 1580 #
Proposal for a regulation
Article 9 – paragraph 2 – introductory part
Article 9 – paragraph 2 – introductory part
2. The risk management system shall consist of a continuous iterative process run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating, including when the high-risk AI system is subject to significant changes in its design or purpose. It shall comprise the following steps:
Amendment 1582 #
Proposal for a regulation
Article 9 – paragraph 2 – point a
Article 9 – paragraph 2 – point a
(a) identification and analysis of the known and the reasonably foreseeable risks that the high-risk AI system, and AI systems with indeterminate uses, can pose to: (i) the health or safety of natural persons; (ii) the legal rights or legal status of natural persons; (iii) the fundamental rights of natural persons; (iv) the equal access to services and opportunities of natural persons; (v) the Union values enshrined in Article 2 TEU; (vi) society at large and the environment.
Amendment 1593 #
Proposal for a regulation
Article 9 – paragraph 2 – point b
Article 9 – paragraph 2 – point b
(b) estimation and evaluation of the risks that may emerge when the high-risk AI system is used in accordance with its intended purpose or reasonably foreseeable use and under conditions of reasonably foreseeable misuse;
Amendment 1612 #
Proposal for a regulation
Article 9 – paragraph 4 – introductory part
Article 9 – paragraph 4 – introductory part
4. The risk management measures referred to in paragraph 2, point (d) shall be such that any residual risk associated with each hazard as well as the overall residual risk of the high-risk AI systems is judged acceptable, provided that the high- risk AI system is used in accordance with its intended purpose or reasonably foreseeable use or under conditions of reasonably foreseeable misuse. Those residual risks shall be communicated to the user.
Amendment 1619 #
Proposal for a regulation
Article 9 – paragraph 4 – subparagraph 1 – point a
Article 9 – paragraph 4 – subparagraph 1 – point a
(a) elimination or reduction of risks as far as possible through adequate design and development involving relevant domain and other experts and internal and external stakeholders, including but not limited to representative bodies and the social partners;
Amendment 1644 #
Proposal for a regulation
Article 9 – paragraph 5
Article 9 – paragraph 5
5. High-risk AI systems shall be tested for the purposes of identifying the most appropriate risk management measures. Testing shall ensure that high-risk AI systems perform consistently for their intended purpose or reasonably foreseeable use and they are in compliance with the requirements set out in this Chapter.
Amendment 1681 #
Proposal for a regulation
Article 10 – paragraph 2 – introductory part
Article 10 – paragraph 2 – introductory part
2. Training, validation and testing data sets as well as data that is collected, fed into, or used by the AI system, after deployment of the system and throughout its lifecycle shall be subject to appropriate data governance and management practices. Those practices shall concern in particular,
Amendment 1695 #
Proposal for a regulation
Article 10 – paragraph 2 – point d
Article 10 – paragraph 2 – point d
(d) the formulation of relevant, justified and reasonable assumptions, notably with respect to the information that the data are supposed to measure and represent;
Amendment 1737 #
Proposal for a regulation
Article 10 – paragraph 5
Article 10 – paragraph 5
Amendment 1739 #
Proposal for a regulation
Article 10 – paragraph 5
Article 10 – paragraph 5
5. To the extent that it is strictly necessary for the purposes of ensuring bias monitoring, detection and correction in relation to the high-risk AI systems, the providers of such systems may process special categories of personal data referred to in Article 9(1) of Regulation (EU) 2016/679, Article 10 of Directive (EU) 2016/680 and Article 10(1) of Regulation (EU) 2018/1725, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons, including technical limitations on the re-use and use of state-of-the-art security and privacy-preserving measures, such as pseudonymisation, or encryption where anonymisation may significantly affect the purpose pursued. This should also guarantee explainability of AI-driven recommendations or decisions.
Amendment 1770 #
Proposal for a regulation
Article 12 – paragraph 1
Article 12 – paragraph 1
1. All AI systems shall be designed and developed with capabilities enabling the automatic recording of events (‘logs’) while the AI system is operating. Those logging capabilities shall conform to recognised standards or common specifications.
Amendment 1781 #
Proposal for a regulation
Article 12 – paragraph 4 – introductory part
Article 12 – paragraph 4 – introductory part
4. For high-risk AI systems referred to in paragraph 1, point (a) of Annex III, the logging capabilities shall provide, at a minimum:
Amendment 1784 #
Proposal for a regulation
Article 12 – paragraph 4 a (new)
Article 12 – paragraph 4 a (new)
4 a. For high-risk self-learning AI systems, the logging of self-learning shall be maintained. The logging shall provide, at a minimum: (a) the input data used for self-learning; (b) the algorithms used to interpret the input data; (c) the results of self-learning.
Amendment 1785 #
Proposal for a regulation
Article 12 – paragraph 4 b (new)
Article 12 – paragraph 4 b (new)
4 b. Where a decision and/or a proposed decision is the outcome of an AI system, the logging shall cover information sufficiently comprehensive for further human manual review of the decision or proposal with no need to refer to the AI system itself. The logging shall provide, at a minimum: (a) the input data; (b) the reference database, if present; (c) the algorithms that could have been used; (d) the algorithms that were actually used; (e) the output data (decision and/or proposal); (f) a comprehensive account of how the input data resulted in the output data.
Amendment 1786 #
Proposal for a regulation
Article 12 – paragraph 4 c (new)
Article 12 – paragraph 4 c (new)
4 c. For all high-risk AI systems, including those mentioned in paragraphs 4–6 above, the logging shall provide, at a minimum: (a) log-in information (user, date, time, authentication type); (b) the input data; (c) the output data.
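Taken together, the minimum logging fields enumerated in paragraphs 4 a to 4 c amount to a record schema that a provider could implement directly. The following is a minimal illustrative sketch in Python of such a decision-log record; every field name, class name and example value here is a hypothetical assumption for illustration only, not terminology taken from the regulation:

```python
from __future__ import annotations

from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Any


@dataclass
class AccessInfo:
    """Log-in information in the sense of paragraph 4 c (a)."""
    user: str
    timestamp: str   # date and time of the session, ISO 8601
    auth_type: str   # authentication type, e.g. "password", "smartcard"


@dataclass
class DecisionLogRecord:
    """One log entry combining the minimums of paragraphs 4 b and 4 c."""
    access: AccessInfo
    input_data: dict[str, Any]                    # 4 b (a) / 4 c (b)
    output_data: dict[str, Any]                   # 4 b (e) / 4 c (c)
    reference_database: str | None = None         # 4 b (b), if present
    algorithms_available: list[str] = field(default_factory=list)  # 4 b (c)
    algorithms_used: list[str] = field(default_factory=list)       # 4 b (d)
    rationale: str = ""                           # 4 b (f): how input led to output


# Hypothetical usage: record one automated decision for later human review.
record = DecisionLogRecord(
    access=AccessInfo(
        user="caseworker-17",
        timestamp=datetime.now(timezone.utc).isoformat(),
        auth_type="smartcard",
    ),
    input_data={"application_id": "A-123", "score_inputs": [0.2, 0.7]},
    output_data={"decision": "refer to human review"},
    algorithms_used=["scoring-model-v2"],
    rationale="aggregate score below acceptance threshold",
)

# The record serialises to a plain dict, suitable for an append-only log store.
serialised = asdict(record)
```

The point of paragraph 4 b (f) is that the `rationale` field, together with the recorded inputs, outputs and algorithms, should allow a human reviewer to reconstruct the decision without querying the AI system itself.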
Amendment 1787 #
Proposal for a regulation
Article 12 – paragraph 4 d (new)
Article 12 – paragraph 4 d (new)
4 d. The Commission is empowered to adopt delegated acts in accordance with Article 73 to define further minimum logging requirements for AI systems or certain types thereof.
Amendment 1810 #
Proposal for a regulation
Article 13 a (new)
Article 13 a (new)
Article 13 a Transparency for affectees of AI systems 1) High-risk AI systems shall be designed, developed and used in such a way that an affectee can obtain an explanation from the developer and user for any decision taken or supported by a high-risk AI system that significantly affects the affectee; 2) Providers and users of high-risk AI systems shall provide access to the person or persons designated with the exercise of 'human oversight' as described in Art. 14 to discuss and to clarify the facts, circumstances and reasons having led to the decision by the AI system; 3) Providers and users of high-risk AI systems shall provide the affectee with a written statement of the reasons for any decision taken or supported by a high-risk AI system; 4) Where the affectee is not satisfied with the explanation or the written statement of reasons obtained, or considers that the decision referred to in paragraph (1) jeopardises their health, safety or fundamental rights, the provider or user, as the case may be, shall review that decision, upon reasonable request by the affectee. The provider or user, as the case may be, shall respond to such request by providing the affectee with a substantiated reply without undue delay and in any event within one week of receipt of the request.
Amendment 1816 #
Proposal for a regulation
Article 14 – paragraph 2
Article 14 – paragraph 2
2. Human oversight shall aim at preventing or minimising the risks to health, safety or fundamental rights that may emerge when AI systems that pose risks to health and safety or fundamental rights, or AI systems subjected to the transparency obligations ex Article 52, are used in accordance with their foreseeable uses or under conditions of reasonably foreseeable misuse, in particular when such risks persist notwithstanding the application of other requirements set out in this Chapter.
Amendment 1894 #
Proposal for a regulation
Article 16 – paragraph 1 – point e
(e) ensure that the high-risk AI system undergoes the relevant independent third party assessment procedure, prior to its placing on the market or putting into service;
Amendment 1896 #
Proposal for a regulation
Article 16 – paragraph 1 – point e
(e) ensure that the high-risk AI system undergoes the relevant conformity assessment procedure, prior to its placing on the market or putting into service or use;
Amendment 1905 #
Proposal for a regulation
Article 16 – paragraph 1 – point j a (new)
(j a) refrain from placing on the market or putting into service a high-risk AI system that: (i) is not in conformity with the requirements set out in Chapter 2 of this Title; or (ii) poses a risk of harm to health, safety or fundamental rights despite its conformity with the requirements set out in Chapter 2 of this Title.
Amendment 1907 #
Proposal for a regulation
Article 16 – paragraph 1 – point j b (new)
(j b) ensure that the individual to whom human oversight is assigned shall either be fully independent from the provider or user, or be adequately protected against negative consequences for their position within the organisation resulting from or related to their exercise of human oversight.
Amendment 1913 #
Proposal for a regulation
Article 17 – paragraph 1 – introductory part
1. Providers of high-risk AI systems shall put in place a quality management system, certified by an independent third party, that ensures compliance with this Regulation. That system shall be documented in a systematic and orderly manner in the form of written policies, procedures and instructions, and shall include at least the following aspects:
Amendment 1927 #
Proposal for a regulation
Article 17 – paragraph 1 – point f
(f) systems and procedures for data management, including data collection, data analysis, data labelling, data storage, data filtration, data mining, data aggregation, data retention and any other operation regarding the data that is performed before and for the purposes of the placing on the market or putting into service or use of high-risk AI systems;
Amendment 1943 #
Proposal for a regulation
Article 17 – paragraph 3 a (new)
3 a. High-risk AI systems shall make use of high-quality models that use relevant, justified and reasonable parameters and features and optimise for justified goals.
Amendment 1944 #
Proposal for a regulation
Article 17 – paragraph 3 b (new)
3 b. High-risk AI systems shall only be used in a different domain or environment where they are generalisable to such domain or environment.
Amendment 1949 #
Proposal for a regulation
Article 19 – title
Independent third party conformity assessment
Amendment 1950 #
Proposal for a regulation
Article 19 – paragraph 1
1. Providers of high-risk AI systems shall ensure that their systems undergo an independent third party conformity assessment procedure in accordance with Article 43 and Annex VII, prior to their placing on the market or putting into service. Where the compliance of the AI systems with the requirements set out in Chapter 2 of this Title has been demonstrated following that conformity assessment, the providers shall draw up an EU declaration of conformity in accordance with Article 48 and affix the CE marking of conformity in accordance with Article 49. The conformity assessment shall be publicly available.
Amendment 1952 #
Proposal for a regulation
Article 19 – paragraph 1
1. Providers of high-risk AI systems shall ensure that their systems undergo the relevant conformity assessment procedure in accordance with Article 43, prior to their placing on the market or putting into service or use. Where the compliance of the AI systems with the requirements set out in Chapter 2 of this Title has been demonstrated following that conformity assessment, the providers shall draw up an EU declaration of conformity in accordance with Article 48 and affix the CE marking of conformity in accordance with Article 49.
Amendment 1955 #
Proposal for a regulation
Article 20 – paragraph 1
1. Providers of high-risk AI systems shall keep the logs automatically generated by their high-risk AI systems, to the extent such logs are under their control by virtue of a contractual arrangement with the user or otherwise by law. The logs shall be kept for a period that is appropriate in the light of the intended purpose or reasonably foreseeable use of high-risk AI system and applicable legal obligations under Union or national law.
Amendment 2034 #
Proposal for a regulation
Article 28 a (new)
Amendment 2046 #
Proposal for a regulation
Article 29 – paragraph 2
2. The obligations in paragraph 1 are without prejudice to other user obligations under Union or national law and to the user’s discretion in organising its own resources and activities for the purpose of implementing the human oversight measures indicated by the provider. This Regulation does not conflict with the scope of Article 153 TFEU, which sets minimum requirements for Member States that may be exceeded.
Amendment 2057 #
Proposal for a regulation
Article 29 – paragraph 5 – introductory part
5. Users of high-risk AI systems shall keep the logs automatically generated by that high-risk AI system, to the extent such logs are under their control. The logs shall be kept for a period that is appropriate in the light of the intended purpose or reasonably foreseeable use of the high-risk AI system and applicable legal obligations under Union or national law.
Amendment 2071 #
Proposal for a regulation
Article 29 – paragraph 6 a (new)
6 a. Users of high-risk AI systems shall refrain from placing on the market or putting into service a high-risk AI system that: (i) is not in conformity with the requirements set out in Chapter 2 of this Title; or (ii) poses a risk of harm to health, safety or fundamental rights despite its conformity with the requirements set out in Chapter 2 of this Title.
Amendment 2073 #
Proposal for a regulation
Article 29 – paragraph 6 a (new)
6 a. Users of high-risk AI systems who modify or extend the purpose for which the conformity of the AI system was originally assessed shall establish and document a post-market monitoring system (Article 61) and shall undergo a new conformity assessment (Article 43) involving a notified body.
Amendment 2083 #
Proposal for a regulation
Article 29 a (new)
Article 29 a Obligation on users to define affected persons 1. Before putting into use a high-risk AI system as defined in Article 6(2), the user shall define categories of natural persons and groups likely to be affected by the use of the system.
Amendment 2084 #
Proposal for a regulation
Article 29 a (new)
Article 29 a A fiduciary duty for providers and users of high-risk AI systems Providers and users of high-risk AI systems have a fiduciary duty to act in the interest of the affectees.
Amendment 2085 #
Proposal for a regulation
Article 29 b (new)
Amendment 2132 #
Proposal for a regulation
Article 41 – paragraph 1
1. Where harmonised standards referred to in Article 40 do not exist or where the Commission considers that the relevant harmonised standards are insufficient or that there is a need to address specific safety or fundamental right concerns, the Commission may, by means of implementing acts, adopt common specifications in respect of the requirements set out in Chapter 2 of this Title. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 74(2). The Commission shall adopt common specifications setting out how risk management systems should give specific consideration to interaction with or impact on children.
Amendment 2151 #
Proposal for a regulation
Article 42
Amendment 2153 #
Proposal for a regulation
Article 42 – paragraph 1
1. Taking into account their foreseeable uses, high-risk AI systems that have been trained and tested on data concerning the specific geographical, behavioural and functional setting within which they are intended to be used shall be presumed to be in compliance with the requirement set out in Article 10(4).
Amendment 2157 #
Proposal for a regulation
Article 43 – paragraph 1 – introductory part
1. For high-risk AI systems listed in points 1, 3 and 4 of Annex III, where, in demonstrating the compliance of a high-risk AI system with the requirements set out in Chapter 2 of this Title, the provider has applied harmonised standards referred to in Article 40, or, where applicable, common specifications referred to in Article 41, the provider shall follow the conformity assessment procedure based on assessment of the quality management system and assessment of the technical documentation, with the involvement of a notified body, referred to in Annex VII.
Amendment 2163 #
Proposal for a regulation
Article 43 – paragraph 1 – point a
Amendment 2170 #
Proposal for a regulation
Article 43 – paragraph 1 – point b
(b) the conformity assessment procedure based on assessment of the quality management system and assessment of the technical documentation, together with documentation of analysis and achievement of the tests of strict necessity, proportionality and legality of the system, as well as any associated database or data repository on which it relies; with the involvement of a notified body, referred to in Annex VII, and with the involvement of the relevant national data protection authority.
Amendment 2180 #
Proposal for a regulation
Article 43 – paragraph 2
2. For high-risk AI systems referred to in points 2 to 8 of Annex III, providers shall follow the conformity assessment procedure based on internal control as referred to in Annex VI, which does not provide for the involvement of a notified body. For high-risk AI systems referred to in point 5(b) of Annex III, placed on the market or put into service by credit institutions regulated by Directive 2013/36/EU, the conformity assessment shall be carried out as part of the procedure referred to in Articles 97 to 101 of that Directive.
Amendment 2194 #
Proposal for a regulation
Article 43 – paragraph 4 – subparagraph 1
For high-risk AI systems that continue to learn after being placed on the market or put into service, changes to the high-risk AI system and its performance that have been pre-determined by the provider at the moment of the initial conformity assessment and are part of the information contained in the technical documentation referred to in point 2(f) of Annex IV shall not constitute a substantial modification. A new conformity assessment is always required whenever the safety-related limits of continuously learning high-risk AI systems may be exceeded or may have an impact on health or safety.
Amendment 2214 #
Proposal for a regulation
Article 47
Amendment 2246 #
Proposal for a regulation
Article 51 – paragraph 1
Before placing on the market or putting into service an AI system, the provider or, where applicable, the authorised representative shall register that system in the EU database referred to in Article 60.
Amendment 2251 #
Proposal for a regulation
Article 51 – paragraph 1 a (new)
Before using a high-risk AI system referred to in Article 6(2), the user or, where applicable, the authorised representative, shall register the uses of that system in the EU database referred to in Article 60. A new registration entry must be completed by the user for each new use of a high-risk AI system.
Amendment 2256 #
Proposal for a regulation
Article 51 – paragraph 1 b (new)
Before using an AI system, public authorities shall register the uses of that system in the EU database referred to in Article 60. A new registration entry must be completed by the user for each new use of an AI system.
Amendment 2283 #
Proposal for a regulation
Article 52 a (new)
Amendment 2290 #
Proposal for a regulation
Article 53 – paragraph 1
1. AI regulatory sandboxes established by the Commission in collaboration with one or more Member States' competent authorities or the European Data Protection Supervisor are considered high risk and shall provide a controlled environment that facilitates the development, testing and validation of innovative AI systems for a limited time before their placement on the market or putting into service pursuant to a specific plan. They shall operate in full compliance with the General Data Protection Regulation. This shall take place under the direct supervision and guidance of the Commission in collaboration with competent authorities with a view to identifying risks to health, safety and fundamental rights, testing mitigation measures for identified risks, demonstrating prevention of those risks and otherwise ensuring compliance with the requirements of this Regulation and, where relevant, other Union and Member States legislation supervised within the sandbox. AI regulatory sandboxes shall remain a technical solution, shall assess potential adverse effects and shall not be used in the employment context.
Amendment 2314 #
Proposal for a regulation
Article 53 – paragraph 3
3. The AI regulatory sandboxes shall not affect the supervisory and corrective powers of the competent authorities. Regulatory sandboxes involving activities that may impact health, safety and fundamental rights, democracy and the rule of law or the environment shall be developed in accordance with redress-by-design principles. Any significant risks identified during the development and testing of such systems shall result in immediate mitigation and, failing that, in the suspension of the development and testing process until such mitigation takes place.
Amendment 2338 #
Proposal for a regulation
Article 53 – paragraph 6 a (new)
6 a. The modalities referred to in Article 53(6) shall ensure at least the following:
(a) participants in the regulatory sandboxing system, in particular small-scale providers, are granted access to pre-deployment services, such as preliminary registration of AI systems, insurance, compliance and R&D support services, and to all the other relevant elements of the Union's AI ecosystem and other Digital Single Market initiatives such as testing and experimentation facilities, digital hubs, centres of excellence and EU benchmarking capabilities, and to other value-adding services such as standardisation and certification, community social platforms and contact databases, tenders and grant-making portals and lists of potential investors;
(b) foreign providers, in particular small-scale providers, are eligible to take part in the regulatory sandboxes to incubate and refine their products in compliance with this Regulation;
(c) individuals such as researchers, entrepreneurs, innovators and other pre-market idea owners are eligible to take part in the regulatory sandboxes to incubate and refine their products in compliance with this Regulation;
(d) there be as little fragmentation as possible of the regulatory sandboxes across Member States, notably through the development of a single interface and contact point at EU level to interact with the regulatory sandbox ecosystem and through the Commission facilitating the creation of transnational and EU-wide regulatory sandboxes.
Amendment 2388 #
Proposal for a regulation
Article 55 a (new)
Article 55 a
Promoting research and development of AI in support of socially and environmentally beneficial outcomes led by civil society
1. Member States shall promote research and development of AI solutions which support socially and environmentally beneficial outcomes, including but not limited to the development of AI-based solutions to increase accessibility for persons with disabilities, tackle socio-economic inequalities, and meet sustainability and environmental targets, by:
(a) providing relevant projects with priority access to the AI regulatory sandboxes to the extent that they fulfil the eligibility conditions;
(b) earmarking public funding, including from relevant EU funds, for AI research and development in support of socially and environmentally beneficial outcomes;
(c) organising specific awareness-raising activities about the application of this Regulation, the availability of and application procedures for dedicated funding, tailored to the needs of those projects;
(d) where appropriate, establishing accessible dedicated channels for communication with projects to provide guidance and respond to queries about the implementation of this Regulation.
2. Member States shall ensure that, where conformity assessment is required under Article 43, the cost of such assessment is covered by public funds, including EU funds, available for AI research and development.
3. Without prejudice to paragraph 1(a), Member States shall ensure that relevant projects are led by civil society and social stakeholders that set the project priorities, goals and outcomes.
Amendment 2390 #
Proposal for a regulation
Article 55 b (new)
Article 55 b Right not to be subject to non-compliant AI systems Natural persons shall have the right not to be subject to AI systems that: (a) pose an unacceptable risk pursuant to Article 5, or (b) otherwise do not comply with the requirements of this Regulation.
Amendment 2391 #
Proposal for a regulation
Article 55 c (new)
Article 55 c Right to information about the use and functioning of AI systems 1. Natural persons shall have the right to be informed that they have been exposed to high-risk AI systems as defined in Article 6, and other AI systems as defined in Article 52. 2. Natural persons shall have the right to be provided, upon request, with an explanation for decisions producing legal effects or otherwise affecting them, or outcomes related to them, taken by or with the assistance of systems within the scope of this Regulation, pursuant to Article 52 paragraph (3b). 3. The information outlined in paragraphs 1 and 2 shall be provided in a clear, easily understandable and intelligible way, in a manner that is accessible for persons with disabilities.
Amendment 2396 #
Proposal for a regulation
Article 56 – title
Amendment 2402 #
Proposal for a regulation
Article 56 – paragraph 1 a (new)
1 a. The Board shall be independent in the fulfilment of its tasks. It shall have legal personality.
Amendment 2403 #
Proposal for a regulation
Article 56 – paragraph 1 b (new)
1 b. The Board shall ensure the consistent application of this Regulation.
Amendment 2406 #
Proposal for a regulation
Article 56 – paragraph 2 – introductory part
2. The Board shall provide advice and assistance to the Commission and the national authorities in order to:
Amendment 2411 #
Proposal for a regulation
Article 56 – paragraph 2 – point c a (new)
(c a) carry out annual reviews and analyses of the complaints sent to, and findings by, national competent authorities, of the serious incidents and malfunctioning reports referred to in Article 62, and of the new registrations in the EU database referred to in Article 60, in order to identify trends and potential emerging issues threatening the future health and safety and fundamental rights of citizens that are not adequately addressed by this Regulation; carry out biannual horizon scanning and foresight exercises to extrapolate the impact these trends and emerging issues can have on the Union; and annually publish recommendations to the Commission, including but not limited to recommendations on the categorisation of prohibited practices, high-risk systems, and codes of conduct for AI systems that are not classified as high-risk.
Amendment 2417 #
Proposal for a regulation
Article 56 – paragraph 2 – point c b (new)
(c b) represent and defend the interests of broad civil society, including the social partners.
Amendment 2418 #
Proposal for a regulation
Article 56 – paragraph 2 – point c c (new)
(c c) launch an evaluation procedure for an AI system.
Amendment 2419 #
Proposal for a regulation
Article 56 – paragraph 2 a (new)
2 a. The Board shall have a sufficient number of competent personnel at its disposal to assist it in the proper performance of its tasks.
Amendment 2420 #
Proposal for a regulation
Article 56 – paragraph 2 b (new)
2 b. The Board shall be organised and operated so as to safeguard the independence, objectivity and impartiality of its activities. The Board shall document and implement a structure and procedures to safeguard impartiality and to promote and apply the principles of impartiality throughout its activities.
Amendment 2432 #
Proposal for a regulation
Article 57 – paragraph 1
1. The Board shall be composed of the national supervisory authorities, who shall be represented by the head or equivalent high-level official of that authority, the European Data Protection Supervisor, the EU Agency for Fundamental Rights, ENISA, EIGE and the social partners, as well as representatives of civil society. Other national authorities may be invited to the meetings, where the issues discussed are of relevance for them.
Amendment 2435 #
Proposal for a regulation
Article 57 – paragraph 1
1. The Board shall be composed of the national supervisory authorities, who shall be represented by the head or equivalent high-level official of that authority, and the European Data Protection Supervisor and the Fundamental Rights Agency. Other national authorities or EU agencies may be invited to the meetings, where the issues discussed are of relevance for them.
Amendment 2438 #
Proposal for a regulation
Article 57 – paragraph 1 a (new)
1 a. The Commission shall have the right to participate in the activities and meetings of the Board without voting rights. The Commission shall designate a representative. The Chair of the Board shall communicate to the Commission the activities of the Board.
Amendment 2444 #
Proposal for a regulation
Article 57 – paragraph 2
2. The Board shall adopt its rules of procedure by a simple majority of its members, following the consent of the Commission. The rules of procedure shall also contain the operational aspects related to the execution of the Board’s tasks as listed in Article 58. The Board may establish sub-groups as appropriate for the purpose of examining specific questions.
Amendment 2448 #
Proposal for a regulation
Article 57 – paragraph 2 a (new)
2 a. The Board may establish sub-groups as appropriate for the purpose of examining specific questions. The Board shall establish a permanent sub-group for the purpose of examining the question of the proper governance of general purpose AI systems. The Board shall also establish a permanent sub-group for the purpose of examining the question of the proper governance of research and development activities on the topic of AI and to inform the development of the governance framework.
Amendment 2451 #
Proposal for a regulation
Article 57 – paragraph 3
3. The Board shall be chaired by the Commission. The Commission shall convene the meetings and prepare the agenda in accordance with the tasks of the Board pursuant to this Regulation and with its rules of procedure. The Commission shall provide administrative and analytical support for the activities of the Board pursuant to this Regulation.
Amendment 2459 #
Proposal for a regulation
Article 57 – paragraph 3 a (new)
3 a. The Board shall elect a chair and two deputy chairs from amongst its members by simple majority.
Amendment 2461 #
Proposal for a regulation
Article 57 – paragraph 3 b (new)
3 b. The term of office of the Chair and of the deputy chairs shall be five years and be renewable once.
Amendment 2465 #
Proposal for a regulation
Article 57 – paragraph 4
4. The Board may invite external experts and observers to attend its meetings and may hold exchanges with interested third parties to inform its activities to an appropriate extent. To that end the Commission may facilitate exchanges between the Board and other Union bodies, offices, agencies and specialised bodies. The composition of the specialised body shall ensure fair representation of consumer organisations, civil society organisations and academics specialised on AI. Its meetings and their minutes shall be published online.
Amendment 2488 #
Proposal for a regulation
Article 58 – paragraph 1 – introductory part
When ensuring the consistent application of this Regulation, the Board shall in particular:
Amendment 2517 #
Proposal for a regulation
Article 58 – paragraph 1 – point c a (new)
Amendment 2520 #
Proposal for a regulation
Article 58 – paragraph 1 – point c b (new)
(c b) provide guidance in relation to governing research and development activities for creating new or improving existing AI systems, and the alignment of these activities with the objectives of this Regulation.
Amendment 2524 #
Proposal for a regulation
Article 58 – paragraph 1 – point c c (new)
(c c) provide statutory guidance in relation to children's rights, applicable law and minimum standards for the evaluation of automated decision-making systems to meet the objectives of this Regulation pertaining to children, and investigate the design goals, data inputs, model selection, implementation and outcomes of such systems.
Amendment 2568 #
Proposal for a regulation
Article 59 – paragraph 3
3. Member States shall inform the Board and the Commission of their designation or designations and, where applicable, the reasons for designating more than one authority.
Amendment 2580 #
Proposal for a regulation
Article 59 – paragraph 5
5. Member States shall report to the Board and the Commission on an annual basis on the status of the financial and human resources of the national competent authorities with an assessment of their adequacy. The Commission shall transmit that information to the Board for discussion and possible recommendations.
Amendment 2594 #
Proposal for a regulation
Article 59 – paragraph 8
8. The European Data Protection Supervisor shall act as the competent authority for the supervision of Union institutions, agencies and bodies.
Amendment 2608 #
Proposal for a regulation
Title VII
EU DATABASE FOR STAND-ALONE HIGH-RISK AI SYSTEMS
Amendment 2610 #
Proposal for a regulation
Article 60 – title
EU database for stand-alone high-risk, general purpose and certain AI systems, uses thereof, and uses of AI systems by public authorities
Amendment 2612 #
Proposal for a regulation
Article 60 – title
EU database for stand-alone high-risk AI systems
Amendment 2614 #
Proposal for a regulation
Article 60 – paragraph 1
1. The Commission shall, in collaboration with the Member States, set up and maintain an EU database containing information referred to in paragraph 2 concerning AI systems which are registered in accordance with Article 51 and general purpose AI systems, in accordance with Article xx:
a. high-risk AI systems referred to in Article 6(2) which are registered in accordance with Article 51(1);
b. any AI systems referred to in Article 52 paragraphs 1b and 2 which are registered in accordance with Article 51(1);
c. any uses of high-risk AI systems referred to in Article 6(2) which are registered in accordance with Article 51(2);
d. any uses of AI systems referred to in Article 52 paragraphs 1b and 2 which are registered in accordance with Article 51(2);
e. any uses of AI systems by or on behalf of public authorities registered in accordance with Article 51(3).
Amendment 2620 #
Proposal for a regulation
Article 60 – paragraph 2
2. The following information shall be included in the EU database:
(a) For registrations according to paragraphs 1(a) and 1(b), the data listed in Annex VIII point 1 shall be entered into the EU database by the providers. The Commission shall provide them with technical and administrative support.
(b) For registrations according to paragraphs 1(c), 1(d) and 1(e), the data listed in Annex VIII point 2 shall be entered into the EU database by the users.
Amendment 2624 #
Proposal for a regulation
Article 60 – paragraph 3
3. The EU database and the information contained in it shall be freely available to the public, comply with the accessibility requirements of Annex I to Directive 2019/882, and be user-friendly, navigable and machine-readable, containing structured digital data based on a standardised protocol.
Amendment 2626 #
Proposal for a regulation
Article 60 – paragraph 3 a (new)
3 a. Users should register deployments of high-risk AI systems in the EU database before putting them into use. Users should include information in the database including, but not limited to, the identity of the provider and the user, the context and purpose of the deployment, the designation of impacted persons, and the results of the impact assessment.
Amendment 2628 #
Proposal for a regulation
Article 60 – paragraph 4
4. The EU database shall contain personal data only insofar as necessary for collecting and processing information in accordance with this Regulation. That information shall include the names and contact details of natural persons who are responsible for registering the system and have the legal authority to represent the provider, or the user.
Amendment 2632 #
Proposal for a regulation
Article 60 – paragraph 5
5. The Commission shall be the controller of the EU database. It shall also provide providers and users with adequate technical and administrative support, in particular in relation to registrations according to paragraph 1(e).
Amendment 2637 #
Proposal for a regulation
Article 60 – paragraph 5 a (new)
5 a. The database shall comply with the accessibility requirements of Annex I to Directive 2019/882.
Amendment 2638 #
Article 60 a
Systemic transparency and monitoring of societal implications
1. The Commission shall, in collaboration with the Member States, set up and maintain a relational database of digital and AI systems that interact with high-risk or general purpose AI systems or with AI systems with transparency obligations. Among others, the relational database shall include digital and AI systems whose input directly or indirectly comes from a high-risk or general purpose AI system or whose output directly or indirectly is taken as input by a high-risk or general purpose AI system.
2. For each entry in the EU database referred to in Article 60, the provider shall enter the upstream and downstream digital and AI systems into the relational database, as well as, to the extent possible, the digital and AI systems upstream of the upstream AI systems and the digital and AI systems downstream of the downstream AI systems.
3. The European AI Board and the Commission shall regularly assess the relational map to facilitate incident response and to identify AI systems (‘Societally Significant AI systems’) whose output is used as input into many downstream digital and AI systems.
4. The European AI Board and the Commission shall develop a Code of Conduct for Societally Significant AI Systems.
Amendment 2640 #
Proposal for a regulation
Article 61 – paragraph 2
2. The post-market monitoring system shall actively and systematically collect, document and analyse relevant data provided by users or collected through other sources on the performance of high- risk AI systems throughout their lifetime, and allow the provider to evaluate the continuous compliance of AI systems with the requirements set out in Title III, Chapter 2. Post-market monitoring must include continuous analysis of the AI environment, including other devices, software, and other AI systems that will interact with the AI system.
Amendment 2703 #
Proposal for a regulation
Article 64 a (new)
Article 64 a
Market surveillance authorities
1. Market surveillance authorities shall, at a minimum, have the power to:
(a) carry out unannounced on-site and remote inspections of AI systems;
(b) acquire samples related to AI systems, including through remote inspections, to reverse-engineer the AI systems and to acquire evidence to identify non-compliance.
2. Member States may authorise their market surveillance authorities to reclaim from the relevant operator the totality of the costs of their activities with respect to instances of non-compliance.
3. The costs referred to in paragraph 2 of this Article may include the costs of carrying out testing, computation, hardware and storage, and the costs of activities relating to AI systems that are found to be non-compliant and are subject to corrective action prior to their placing on the market.
Amendment 2706 #
Proposal for a regulation
Article 65 – paragraph 1
1. AI systems presenting a risk shall be understood as a product presenting a risk as defined in Article 3, point 19 of Regulation (EU) 2019/1020 insofar as risks to health or safety in general, including safety in the workplace, protection of consumers, the environment, or to the protection of fundamental rights of persons are concerned, including autonomy of choice, access to goods and services, unfair discrimination and economic harm, privacy and data protection, as well as societal risks.
Amendment 2711 #
Proposal for a regulation
Article 65 – paragraph 1 a (new)
1 a. When AI systems are likely to interact with or impact on children, the precautionary principle shall apply.
Amendment 2713 #
Proposal for a regulation
Article 65 – paragraph 2 – introductory part
2. Where the market surveillance authority of a Member State has sufficient reasons to consider that an AI system presents a risk as referred to in paragraph 1, they shall carry out an evaluation of the AI system concerned in respect of its compliance with all the requirements and obligations laid down in this Regulation. When risks to the protection of fundamental rights are present, the market surveillance authority shall also inform the relevant national public authorities, Board or bodies referred to in Article 64(3). Where there is sufficient reason to consider that an AI system exploits the vulnerabilities of children or violates their rights intentionally or unintentionally, the market surveillance authority shall have the duty to investigate the design goals, data inputs, model selection, implementation and outcomes of the AI system, and the burden of proof shall be on the operator or operators of that system to demonstrate compliance with the provisions of this Regulation. The relevant operators shall cooperate as necessary with the market surveillance authorities and the other national public authorities or bodies referred to in Article 64(3), including by providing access to personnel, documents, internal communications, code, data samples and on-platform testing as necessary.
Amendment 2716 #
Proposal for a regulation
Article 65 – paragraph 2 – subparagraph 1
Where, in the course of its evaluation, the market surveillance authority finds that the AI system does not comply with the requirements and obligations laid down in this Regulation, it shall without delay require the relevant operator to take all appropriate corrective actions to bring the AI system into compliance, to withdraw the AI system from the market, or to recall it within a reasonable period, commensurate with the nature of the risk, as it may prescribe. The corrective action can also be applied to AI systems in other products or services judged to be similar in their objectives, design or impact.
Amendment 2740 #
Proposal for a regulation
Article 66 – paragraph 1
1. Where, within three months of receipt of the notification referred to in Article 65(5), objections are raised by the European Parliament or a Member State against a measure taken by another Member State, or where the Commission considers the measure to be contrary to Union law, or has sufficient reasons to believe that an AI system presents a risk or affects consumers in more than one Member State, the Commission shall without delay enter into consultation with the relevant Member State and operator or operators and shall evaluate the national measure. On the basis of the results of that evaluation, the Commission shall decide whether the national measure is justified or not within 9 months from the notification referred to in Article 65(5) and notify such decision to the Member State concerned.
Amendment 2743 #
Proposal for a regulation
Article 66 – paragraph 3
3. Where the national measure is considered justified and the non-compliance of the AI system is attributed to shortcomings in the harmonised standards or common specifications referred to in Articles 40 and 41 of this Regulation, the Commission shall apply the procedure provided for in Article 11 of Regulation (EU) No 1025/2012. The Commission shall also have the possibility to suggest alternative measures to the Member State concerned.
Amendment 2776 #
Proposal for a regulation
Article 68 a (new)
Article 68 a
Right to lodge a complaint with a supervisory authority
1. Citizens have a right not to be subjected to prohibited AI systems.
2. Citizens have a right not to be subjected to high-risk AI systems that fail to meet the requirements for high-risk systems.
3. Without prejudice to any other administrative or judicial remedy, every citizen shall have the right to lodge a complaint with a supervisory authority, in particular in the Member State of his or her habitual residence, place of work or place of the alleged infringement, if the citizen considers that he or she has been subjected to an AI system that infringes this Regulation.
4. The supervisory authority with which the complaint has been lodged shall inform the complainant on the progress and the outcome of the complaint.
5. Without prejudice to any other administrative or non-judicial remedy, each natural or legal person shall have the right to an effective judicial remedy against a legally binding decision
Amendment 2813 #
Proposal for a regulation
Article 70 b (new)
Article 70 b
Right to removal and injunction
1. If an AI system infringes this Regulation, each natural or legal person affected by that AI system may require the user of the system to stop the use and to remove the infringement.
2. If further infringements by an AI system are to be feared, each affected natural or legal person may seek a prohibitory injunction.
Amendment 2835 #
Proposal for a regulation
Article 71 – paragraph 3 – introductory part
3. The following infringements shall be subject to administrative fines of up to 30 000 000 EUR or, if the offender is a company, up to 10 % of its total worldwide annual turnover for the preceding financial year, whichever is higher:
Amendment 2850 #
Proposal for a regulation
Article 71 – paragraph 4
4. The non-compliance of the AI system with any requirements or obligations under this Regulation, other than those laid down in Articles 5 and 10, shall be subject to administrative fines of up to 20 000 000 EUR or, if the offender is a company, up to 7 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.
Amendment 2856 #
Proposal for a regulation
Article 71 – paragraph 5
5. The supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request shall be subject to administrative fines of up to 10 000 000 EUR or, if the offender is a company, up to 4 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.
Amendment 2884 #
1. The European Data Protection Supervisor may impose administrative fines on Union institutions, agencies and bodies developing, deploying or operating AI systems. When deciding whether to impose an administrative fine and deciding on the amount of the administrative fine in each individual case, all relevant circumstances of the specific situation shall be taken into account and due regard shall be given to the following:
Amendment 2918 #
Proposal for a regulation
Article 73 – paragraph 2
2. The delegation of power referred to in Article 4, Article 5a, Article 7(1), Article 11(3), Article 43(5) and (6), Article 48(5) and Article 52a shall be conferred on the Commission for an indeterminate period of time from [entering into force of the Regulation].
Amendment 2922 #
Proposal for a regulation
Article 73 – paragraph 3
3. The delegation of power referred to in Article 4, Article 5a, Article 7(1), Article 11(3), Article 43(5) and (6), Article 48(5) and Article 52a may be revoked at any time by a joint decision of the European Parliament and the Council. A decision of revocation shall put an end to the delegation of power specified in that decision. It shall take effect the day following that of its publication in the Official Journal of the European Union or at a later date specified therein. It shall not affect the validity of any delegated acts already in force.
Amendment 2929 #
Proposal for a regulation
Article 73 – paragraph 4
4. In preparation of a delegated act, the Commission shall notify it simultaneously to the European Parliament and to the Council.
Amendment 2930 #
Proposal for a regulation
Article 73 – paragraph 5
5. Any delegated act adopted pursuant to Article 4, Article 5a, Article 7(1), Article 11(3), Article 43(5) and (6), Article 48(5) and Article 52a shall enter into force only if no objection has been expressed by either the European Parliament or the Council within a period of three months of notification of that act to the European Parliament and the Council or if, before the expiry of that period, the European Parliament and the Council have both informed the Commission that they will not object. That period shall be extended by three months at the initiative of the European Parliament or of the Council.
Amendment 2947 #
Proposal for a regulation
Article 83 – paragraph 1 – introductory part
1. This Regulation shall not apply to the AI systems which are components of the large-scale IT systems established by the legal acts listed in Annex IX that have been placed on the market or put into service before [12 months after the date of application of this Regulation referred to in Article 85(2)], and the requirements laid down in this Regulation shall be taken into account in the evaluation of each large-scale IT system established by the legal acts listed in Annex IX.
Amendment 2951 #
Proposal for a regulation
Article 83 – paragraph 1 – subparagraph 1
Amendment 2956 #
Proposal for a regulation
Article 83 – paragraph 2
2. This Regulation shall apply to the high-risk AI systems, other than the ones referred to in paragraph 1, that have been placed on the market or put into service before [date of application of this Regulation referred to in Article 85(2)], only if, from that date, those systems are subject to significant changes in their design or intended purpose.
Amendment 2964 #
Proposal for a regulation
Article 84 – paragraph 1
1. The Commission shall assess the need for amendment of the list in Annex III, including the extension of existing area headings or addition of new area headings; Article 5’s list of prohibited AI practices; and Article 52’s list of AI systems requiring additional transparency measures, once a year following the entry into force of this Regulation.
Amendment 2984 #
Proposal for a regulation
Article 84 – paragraph 6
6. In carrying out the evaluations and reviews referred to in paragraphs 1 to 4 the Commission shall take into account the positions and findings of the Board, of the European Parliament, of the Council, and of equality bodies and other relevant bodies or sources, and shall consult relevant external stakeholders, in particular those potentially affected by the AI system, as well as stakeholders from academia and civil society.
Amendment 2991 #
Proposal for a regulation
Article 84 – paragraph 7
7. The Commission shall, if necessary, submit appropriate proposals to amend this Regulation, in particular taking into account developments in technology, the effect of AI systems on health and safety, fundamental rights, equality, and accessibility for persons with disabilities, and in the light of the state of progress in the information society.
Amendment 2996 #
Proposal for a regulation
Article 84 – paragraph 7 a (new)
7 a. To guide the evaluations and reviews referred to in paragraphs 1 to 4, the Board shall undertake to develop an objective and participative methodology for the evaluation of risk level based on the criteria outlined in the relevant articles and inclusion of new systems in: the list in Annex III, including the extension of existing area headings or addition of new area headings; Article 5’s list of prohibited AI practices; and Article 52’s list of AI systems requiring additional transparency measures.
Amendment 3052 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – introductory part
1. AI systems which use biometric or biometrics-based data:
Amendment 3064 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a
(a) AI systems that are or may be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons;
Amendment 3068 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a a (new)
(a a) AI systems that are or may be used for the biometric identification of natural persons in publicly accessible spaces, as well as in workplaces, in educational settings and in border surveillance, or in the provision of public or essential services;
Amendment 3076 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a b (new)
(a b) AI systems that are or may be used for biometric verification in publicly accessible spaces, as well as in workplaces and in educational settings;
Amendment 3081 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a c (new)
(a c) AI systems that are or may be used to diagnose or support diagnosis of medical conditions or medical emergencies on the basis of biometric or biometrics-based data;
Amendment 3082 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a c (new)
(a c) AI systems that are or may be used for categorisation on the basis of biometric or biometrics-based data;
Amendment 3084 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a d (new)
(a d) AI systems that are or may be used for the detection of a person’s presence, in workplaces, in educational settings, and in border surveillance, including in the virtual / online version of these spaces, on the basis of their biometric or biometrics-based data;
Amendment 3086 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a e (new)
(a e) AI systems that are or may be used for monitoring compliance with health and safety measures or inferring alertness / attentiveness for safety purposes, on the basis of biometric or biometrics-based data;
Amendment 3087 #
Proposal for a regulation
Annex III – paragraph 1 – point 2 – introductory part
2. Management, operation, generation and supply of critical infrastructure, technology and energy:
Amendment 3104 #
Proposal for a regulation
Annex III – paragraph 1 – point 3 – point b a (new)
(b a) AI systems intended to be used for the optimization of individual learning processes based on a student's learning data.
Amendment 3116 #
(b) AI intended to be used for making decisions affecting the initiation, establishment, implementation and termination of an employment relationship, including AI systems intended to support collective legal and regulatory matters, particularly for task allocation and for monitoring and evaluating performance and behavior of persons in such relationships.
Amendment 3122 #
Proposal for a regulation
Annex III – paragraph 1 – point 5 – point a
(a) AI systems intended to be used by or on behalf of (semi-)public authorities or private parties to evaluate or predict the lawful use by, or the eligibility of, natural persons, including the self-employed and micro-enterprises, for public assistance, benefits and services and essential private services including but not limited to housing, electricity, heating/cooling, finance, insurance and internet, as well as to grant, reduce, revoke, or reclaim such benefits and services or set payment obligations related to these services;
Amendment 3127 #
Proposal for a regulation
Annex III – paragraph 1 – point 5 – point b
Amendment 3132 #
Proposal for a regulation
Annex III – paragraph 1 – point 5 – point b
(b) AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems put into service by small scale providers for their own use;
Amendment 3151 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point a
Amendment 3152 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point a
Amendment 3159 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point b
Amendment 3161 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point b
Amendment 3174 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point e
Amendment 3175 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point e
Amendment 3181 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point f
Amendment 3185 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point g
Amendment 3191 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point a
Amendment 3192 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point a
Amendment 3198 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point b
Amendment 3202 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point b
(b) AI systems intended to be used by competent public authorities, or by third parties acting on their behalf, to assess a risk, including but not limited to a security risk, a risk of irregular immigration, or a health risk, posed by a natural person who intends to enter or has entered into the territory of a Member State;
Amendment 3212 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point d
(d) AI systems intended to assist competent public authorities for the examination and assessment of the veracity of evidence and claims in relation to applications for asylum, visa and residence permits and associated complaints with regard to the eligibility of the natural persons applying for a status.
Amendment 3219 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point d a (new)
Amendment 3225 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point d b (new)
(d b) AI systems that are or may be used by or on behalf of competent authorities in law enforcement, migration, asylum and border control management for the biometric identification of natural persons;
Amendment 3227 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point d c (new)
(d c) AI systems intended to be used by, or on behalf of, competent authorities in migration, asylum and border control management to monitor, surveil or process data in the context of border management activities for the purpose of recognizing or detecting objects and natural persons;
Amendment 3245 #
Proposal for a regulation
Annex IV – paragraph 1 – point 1 – point a
(a) its intended purpose or reasonably foreseeable use, the person(s) developing the system, the date and the version of the system;
Amendment 3270 #
Proposal for a regulation
Annex IV – paragraph 1 – point 2 – point g
(g) the validation and testing procedures used, including information about the validation and testing data used and their main characteristics; metrics used to measure performance, robustness, cybersecurity and compliance with other relevant requirements set out in Title III, Chapter 2, as well as potentially discriminatory impacts; test logs and all test reports dated and signed by the responsible persons, including with regard to pre-determined changes as referred to under point (f).
Amendment 3272 #
Proposal for a regulation
Annex IV – paragraph 1 – point 3
3. Detailed information about the monitoring, functioning and control of the AI system, in particular with regard to: its capabilities and limitations in performance, including the degrees of accuracy for specific persons or groups of persons on which the system is intended to be used and the overall expected level of accuracy in relation to its intended purpose or reasonably foreseeable use; the foreseeable unintended outcomes and sources of risks to health and safety, fundamental rights and discrimination in view of the intended purpose or reasonably foreseeable use of the AI system; the human oversight measures needed in accordance with Article 14, including the technical measures put in place to facilitate the interpretation of the outputs of AI systems by the users; specifications on input data, as appropriate;
Amendment 3282 #
Proposal for a regulation
Annex IV – paragraph 1 – point 8 a (new)
8 a. Without prejudice to Article 9(2), a detailed description of the economic and social implications and potential risks for health, and in particular mental health, safety and fundamental rights arising from the hypothetical widespread usage of the AI system or of similar systems in society, with reference to past incidents that occurred using similar systems and associated mitigating measures.
Amendment 3287 #
Proposal for a regulation
Annex VII – point 4 – point 4.7
4.7. Any change to the AI system that could affect the compliance of the AI system with the requirements or its intended purpose or reasonably foreseeable use shall be approved by the notified body which issued the EU technical documentation assessment certificate. The provider shall inform such notified body of its intention to introduce any of the above-mentioned changes or if it becomes otherwise aware of the occurrence of such changes. The intended changes shall be assessed by the notified body which shall decide whether those changes require a new conformity assessment in accordance with Article 43(4) or whether they could be addressed by means of a supplement to the EU technical documentation assessment certificate. In the latter case, the notified body shall assess the changes, notify the provider of its decision and, where the changes are approved, issue to the provider a supplement to the EU technical documentation assessment certificate.
Amendment 3288 #
Proposal for a regulation
Annex VIII – title
INFORMATION TO BE SUBMITTED UPON THE REGISTRATION OF HIGH-RISK AI SYSTEMS AND OF CERTAIN AI SYSTEMS, USES THEREOF, AND USES OF AI SYSTEMS BY PUBLIC AUTHORITIES IN ACCORDANCE WITH ARTICLE 51
Amendment 3290 #
Proposal for a regulation
Annex VIII – paragraph 1
The following information shall be provided and thereafter kept up to date by the provider with regard to high-risk AI systems referred to in Article 6(2) and to any AI system referred to in Article 52(1)(b) and (2) to be registered in accordance with Article 51(1).
Amendment 3293 #
Proposal for a regulation
Annex VIII – paragraph 1 a (new)
Amendment 3295 #
Proposal for a regulation
Annex VIII – paragraph 1 b (new)
The following information shall be provided and thereafter kept up to date by the user with regard to uses of AI systems by public authorities to be registered in accordance with Article 51(3).
(a) Name, address and contact details of the user;
(b) Where submission of information is carried out by another person on behalf of the user, the name, address and contact details of that person;
(c) Name, address and contact details of the authorised representative, where applicable;
(d) For high-risk AI systems, URL of the entry of the AI system in the EU database by its provider, or, for non-high-risk systems, AI system trade name and any additional unambiguous reference allowing identification and traceability of the AI system;
(e) Description of the intended purpose of the intended use of the AI system;
(f) Description of the context and the geographical and temporal scope of application of the intended use of the AI system;
(g) Basic explanation of the design specifications of the system, namely the general logic of the AI system and of the algorithms; the key design choices, including the rationale and assumptions made, also with regard to categories of persons or groups of persons on which the system is intended to be used; the main classification choices; and what the system is designed to optimise for and the relevance of the different parameters;
(h) Designation of persons foreseeably impacted by the intended use of the AI system;
(i) If available, results of any impact assessment or due diligence process regarding the use of the AI system that the user has conducted;
(j) Assessment of the foreseeable impact on the environment, including but not limited to energy consumption, resulting from the use of the AI system over its entire lifecycle, and of the methods to reduce such impact;
(k) A description of how the relevant accessibility requirements set out in Annex I to Directive 2019/882 are met by the use of the AI system.
Amendment 3300 #
Proposal for a regulation
Annex VIII – point 5
5. Description of the intended purpose or reasonably foreseeable use of the AI system;
Amendment 3307 #
Proposal for a regulation
Annex VIII – point 11
11. Electronic instructions for use as listed in Article 13(3) and basic explanation of the general logic and key design as listed in Annex IV point 2(b) and of optimization choices as listed in Annex IV point 3.
Amendment 3308 #
Proposal for a regulation
Annex VIII – point 11 a (new)
11 a. Assessment of the environmental impact, including but not limited to resource consumption, resulting from the design, data management and training, and underlying infrastructures of the AI system, and of the methods to reduce such impact;
Amendment 3309 #
Proposal for a regulation
Annex VIII – point 11 b (new)
11 b. A description of how the system meets the relevant accessibility requirements of Annex I to Directive 2019/882.