
497 Amendments of Tsvetelina PENKOVA related to 2021/0106(COD)

Amendment 227 #
Proposal for a regulation
Recital 66
(66) In line with the commonly established notion of substantial modification for products regulated by Union harmonisation legislation, it is appropriate that an AI system undergoes a new conformity assessment whenever a change occurs which may affect the compliance of the system with this Regulation or when the intended purpose of the system changes. In addition, as regards AI systems which continue to ‘learn’ after being placed on the market or put into service (i.e. they automatically adapt how functions are carried out), it is necessary to provide rules establishing that changes to the algorithm and its performance that have been pre-determined by the provider and assessed at the moment of the conformity assessment should not constitute a substantial modification. The conformity assessment requirements as defined by this Regulation shall not apply to firmware and software updates developed by the product manufacturer.
2022/03/31
Committee: ITRE
Amendment 262 #
Proposal for a regulation
Article 2 – paragraph 5 a (new)
5a. This Regulation shall not apply to AI systems, including their output, specifically developed and put into service for the sole purpose of scientific research and development in the context of academic R&D projects. The Commission may adopt delegated acts clarifying further exemptions.
2022/03/31
Committee: ITRE
Amendment 282 #
Proposal for a regulation
Article 3 – paragraph 1 – point 2
(2) ‘provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed, or places that system on the market or puts it into service under its own name or trademark, whether for payment or free of charge;
2022/03/31
Committee: ITRE
Amendment 284 #
Proposal for a regulation
Article 3 – paragraph 1 – point 4 a (new)
(4a) ‘end-user’ means any natural person who, in the context of an employment or contractual agreement with the user, uses or deploys the AI system under the authority of the user;
2022/03/31
Committee: ITRE
Amendment 291 #
Proposal for a regulation
Article 3 – paragraph 1 – point 14 a (new)
(14a) ‘information security component of a product or system’ means a component of a product or a system which has been specifically designed to fulfil a security function for that product or system against cyber incidents, disruptions and/or attacks;
2022/03/31
Committee: ITRE
Amendment 292 #
Proposal for a regulation
Article 3 – paragraph 1 – point 14 b (new)
(14b) ‘information security product or system’ means a product or a system which has been specifically designed to fulfil a security function against cyber incidents, disruptions and/or attacks;
2022/03/31
Committee: ITRE
Amendment 314 #
Proposal for a regulation
Recital 1
(1) The purpose of this Regulation is to ensure a high level of protection of fundamental rights, health, safety and the environment, as well as the Union values enshrined in Article 2 of the Treaty on European Union (TEU), from harmful effects of the use of artificial intelligence systems in the Union while enhancing innovation and improving the functioning of the internal market. This Regulation lays down a uniform legal framework in particular for the development, the placing on the market, the putting into service and the use of artificial intelligence in conformity with Union values and it ensures the free movement of AI-based goods and services cross-border, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation.
2022/06/13
Committee: IMCOLIBE
Amendment 316 #
Proposal for a regulation
Recital 1
(1) The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, the placing on the market, the putting into service and the use of artificial intelligence in conformity with Union values. This Regulation pursues a number of overriding reasons of public interest, such as a high level of protection of health, safety, fundamental rights, the environment and the Union values enshrined in Article 2 of the Treaty on European Union (TEU), and it ensures the free movement of AI-based goods and services cross-border, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation.
2022/06/13
Committee: IMCOLIBE
Amendment 324 #
Proposal for a regulation
Recital 2
(2) Artificial intelligence systems (AI systems) can be easily deployed in multiple sectors of the economy and society, including cross border, and circulate throughout the Union. Certain Member States have already explored the adoption of national rules to ensure that artificial intelligence is safe and is developed and used in compliance with fundamental rights obligations. Differing national rules may lead to fragmentation of the internal market and decrease legal certainty for operators that develop or use AI systems. A consistent and high level of protection throughout the Union should therefore be ensured, while divergences hampering the free circulation of AI systems and related products and services within the internal market should be prevented, by laying down uniform obligations for operators and guaranteeing the uniform protection of overriding reasons of public interest and of rights of persons throughout the internal market based on Article 114 of the Treaty on the Functioning of the European Union (TFEU). As AI systems rely on the processing of large volumes of data, including personal data, it is appropriate to base this Regulation on Article 16 of the TFEU, which enshrines the right of everyone to the protection of personal data concerning them and provides for the adoption of rules on the protection of individuals with regard to the processing of personal data. In light of the recourse to Article 16 TFEU, it is appropriate to consult the European Data Protection Board.
2022/06/13
Committee: IMCOLIBE
Amendment 325 #
Proposal for a regulation
Recital 2 a (new)
(2 a) However, in line with Article 114(2) TFEU, this Regulation does not affect the rights and interests of employed persons. This Regulation should therefore not affect Community law on social policy and national labour law and practice, that is, any legal and contractual provision concerning employment conditions, working conditions, including health and safety at work and the relationship between employers and workers, including information, consultation and participation. This Regulation should not affect the exercise of fundamental rights as recognised in the Member States and at Union level, including the right or freedom to strike or to take other action covered by the specific industrial relations systems in Member States, in accordance with national law and/or practice. Nor should it affect concertation practices, the right to negotiate, to conclude and enforce collective agreements or to take collective action in accordance with national law and/or practice. It should in any case not prevent the Commission from proposing specific legislation on the rights and freedoms of workers affected by AI systems.
2022/06/13
Committee: IMCOLIBE
Amendment 327 #
Proposal for a regulation
Recital 2 a (new)
(2 a) This Regulation should not affect the restrictions, prohibitions or enforcement that apply where an artificial intelligence practice infringes another EU law, including the EU acquis on data protection, privacy or the confidentiality of communications, on non-discrimination, on consumer protection or on competition.
2022/06/13
Committee: IMCOLIBE
Amendment 335 #
Proposal for a regulation
Recital 4
(4) At the same time, depending on the circumstances regarding its specific application and use, as well as the level of technological development, artificial intelligence may generate risks and cause harm to public interests and rights that are protected by Union law. Such harm might be material or immaterial, including physical, psychological, societal or economic harm.
2022/06/13
Committee: IMCOLIBE
Amendment 338 #
Proposal for a regulation
Recital 4 a (new)
(4 a) In order to ensure the dual green and digital transition, secure the technological resilience of the EU, reduce the carbon footprint of artificial intelligence and achieve the objectives of the new European Green Deal, this Regulation should contribute to the promotion of green and sustainable artificial intelligence and to the consideration of the environmental impact of AI systems throughout their lifecycle. Sustainability should be at the core of the European artificial intelligence framework to guarantee that the development of artificial intelligence is compatible with the sustainable development of environmental resources for current and future generations, at all stages of the lifecycle of artificial intelligence products; sustainability of artificial intelligence should encompass sustainable data sources, data centres, resource use, power supplies and infrastructure;
2022/06/13
Committee: IMCOLIBE
Amendment 341 #
Proposal for a regulation
Recital 4 a (new)
(4 a) Given the major impact that artificial intelligence can have on society and the need to build trust, it is vital for artificial intelligence systems to respect the principles of fairness, accountability, transparency, privacy and security, and social benefit.
2022/06/13
Committee: IMCOLIBE
Amendment 342 #
Proposal for a regulation
Recital 4 b (new)
(4 b) Despite the high potential of solutions to the environmental and climate crisis offered by artificial intelligence, the design, training and execution of algorithms imply a high energy consumption and, consequently, high levels of carbon emissions. Artificial intelligence technologies and data centres have a high carbon footprint due to increased computational energy consumption, and high energy costs due to the volume of data stored and the amount of heat and electric and electronic waste generated, thus resulting in increased pollution. These environmental and carbon footprints are expected to increase over time as the volume of data transferred and stored and the increasing development of artificial intelligence applications will continue to grow exponentially in the years to come. It is therefore important to minimise the climate and environmental footprint of artificial intelligence and related technologies and that AI systems and associated machinery are designed sustainably to reduce resource usage and energy consumption, thereby limiting the risks to the environment.
2022/06/13
Committee: IMCOLIBE
Amendment 343 #
Proposal for a regulation
Recital 4 c (new)
(4 c) To promote the sustainable development of AI systems and in particular to prioritise the need for sustainable, energy efficient data centres, requirements for efficient heating and cooling of data centres should be consistent with the long-term climate and environmental standards and priorities of the Union and comply with the principle of 'do no significant harm' within the meaning of Article 17 of Regulation (EU) 2020/852 on the establishment of a framework to facilitate sustainable investment, and should be fully decarbonised by January 2050. In this regard, Member States and telecommunications providers should collect and publish information relating to the energy performance and environmental footprint of artificial intelligence technologies and data centres, including information on the energy efficiency of algorithms, to establish a sustainability indicator for artificial intelligence technologies. A European code of conduct for data centre energy efficiency can establish key sustainability indicators to measure four basic dimensions of a sustainable data centre, namely, how efficiently it uses energy, the proportion of energy generated from renewable energy sources, the reuse of any waste and heat, and the usage of fresh water.
2022/06/13
Committee: IMCOLIBE
Amendment 345 #
Proposal for a regulation
Recital 5
(5) A Union legal framework laying down harmonised rules on artificial intelligence is therefore needed to foster the development, use and uptake of artificial intelligence in the internal market that at the same time meets a high level of protection of public interests, such as health and safety, the protection of fundamental rights, as recognised and protected by Union law, the environment and the Union values enshrined in Article 2 TEU. To achieve that objective, rules regulating the development, the placing on the market, the putting into service and the use of certain AI systems should be laid down, thus ensuring the smooth functioning of the internal market and allowing those systems to benefit from the principle of free movement of goods and services. By laying down those rules, this Regulation supports the objective of the Union of being a global leader in the development of secure, trustworthy and ethical artificial intelligence, as stated by the European Council33 , and it ensures the protection of ethical principles, as specifically requested by the European Parliament34 . _________________ 33 European Council, Special meeting of the European Council (1 and 2 October 2020) – Conclusions, EUCO 13/20, 2020, p. 6. 34 European Parliament resolution of 20 October 2020 with recommendations to the Commission on a framework of ethical aspects of artificial intelligence, robotics and related technologies, 2020/2012(INL).
2022/06/13
Committee: IMCOLIBE
Amendment 350 #
Proposal for a regulation
Recital 5
(5) A Union legal framework laying down harmonised rules on artificial intelligence is therefore needed to foster the development, use and uptake of artificial intelligence in the internal market that at the same time meets a high level of protection of public interests, such as the protection of fundamental rights, health and safety, as recognised and protected by Union law. To achieve that objective, rules regulating the development, the placing on the market, the putting into service and the use of certain AI systems should be laid down, thus ensuring the smooth functioning of the internal market and allowing those systems to benefit from the principle of free movement of goods and services. By laying down those rules, this Regulation supports the objective of the Union of being a global leader in the development of secure, trustworthy and ethical artificial intelligence, as stated by the European Council33 , and it ensures the protection of ethical principles, as specifically requested by the European Parliament34 . _________________ 33 European Council, Special meeting of the European Council (1 and 2 October 2020) – Conclusions, EUCO 13/20, 2020, p. 6. 34 European Parliament resolution of 20 October 2020 with recommendations to the Commission on a framework of ethical aspects of artificial intelligence, robotics and related technologies, 2020/2012(INL).
2022/06/13
Committee: IMCOLIBE
Amendment 359 #
Proposal for a regulation
Recital 6
(6) The notion of AI system should be clearly defined to ensure legal certainty, while providing the flexibility to accommodate future technological developments. The definition should be based on the key functional characteristics of the software, in particular the ability, for a given set of human-defined objectives, to perceive, reason and act on machine and/or human-based inputs, to generate outputs such as content, hypotheses, predictions, recommendations, or decisions which influence the environment with which the system interacts, be it in a physical or digital dimension. AI systems can be designed to operate with varying levels of autonomy and be used on a stand-alone basis or as a component of a product, irrespective of whether the system is physically integrated into the product (embedded) or serves the functionality of the product without being integrated therein (non-embedded). The definition of AI system should be complemented by a list of specific techniques and approaches used for its development, which should be kept up-to-date in the light of market and technological developments through the adoption of delegated acts by the Commission to amend that list.
2022/06/13
Committee: IMCOLIBE
Amendment 360 #
Proposal for a regulation
Recital 6
(6) The notion of AI system should be clearly defined to ensure legal certainty, while providing the flexibility to accommodate future technological developments. The definition should be based on the key functional characteristics of the software, in particular the ability, for a given set of human-defined objectives, to generate outputs such as content, predictions, recommendations, or decisions which influence the environment with which the system interacts, be it in a physical or digital dimension. AI systems can be designed to operate with varying levels of autonomy and be used on a stand-alone basis or as a component of a product, irrespective of whether the system is physically integrated into the product (embedded) or serves the functionality of the product without being integrated therein (non-embedded). The definition of AI system should be complemented by a list of specific techniques and approaches used for its development, which should be kept up-to-date in the light of market and technological developments through the adoption of delegated acts by the Commission to amend that list. AI systems can be developed through various techniques using learning, reasoning or modelling, such as: machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning; logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems; statistical approaches, Bayesian estimation, search and optimization methods.
2022/06/13
Committee: IMCOLIBE
Amendment 369 #
Proposal for a regulation
Recital 7
(7) The notion of biometric data used in this Regulation is the same as that defined in Article 4(14) of Regulation (EU) 2016/679 of the European Parliament and of the Council35 , Article 3(18) of Regulation (EU) 2018/1725 of the European Parliament and of the Council36 and Article 3(13) of Directive (EU) 2016/680 of the European Parliament and of the Council37 . _________________ 35 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (OJ L 119, 4.5.2016, p. 1). 36 Regulation (EU) 2018/1725 of the European Parliament and of the Council of 23 October 2018 on the protection of natural persons with regard to the processing of personal data by the Union institutions, bodies, offices and agencies and on the free movement of such data, and repealing Regulation (EC) No 45/2001 and Decision No 1247/2002/EC (OJ L 295, 21.11.2018, p. 39) 37 Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA (Law Enforcement Directive) (OJ L 119, 4.5.2016, p. 89).
2022/06/13
Committee: IMCOLIBE
Amendment 371 #
Proposal for a regulation
Recital 8
(8) The notion of remote biometric identification system as used in this Regulation should be defined functionally, as an AI system intended for the identification of natural persons at a distance, performing automated recognition of physical, physiological, behavioural, and psychological human features, for the purpose of identification of natural persons through the comparison of a person’s biometric data with the biometric data contained in a reference database, irrespectively of the particular technology, processes or types of biometric data used.
2022/06/13
Committee: IMCOLIBE
Amendment 372 #
Proposal for a regulation
Recital 8
(8) The notion of remote biometric identification system as used in this Regulation should be defined functionally, as an AI system intended for the identification of natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database, and without prior knowledge whether the targeted person will be present and can be identified, irrespectively of the particular technology, processes or types of biometric data used. Considering their different characteristics and manners in which they are used, as well as the different risks involved, a distinction should be made between ‘real-time’ and ‘post’ remote biometric identification systems. In the case of ‘real-time’ systems, the capturing of the biometric data, the comparison and the identification occur all instantaneously, near-instantaneously or in any event without a significant delay. In this regard, there should be no scope for circumventing the rules of this Regulation on the ‘real-time’ use of the AI systems in question by providing for minor delays. ‘Real-time’ systems involve the use of ‘live’ or ‘near-live’ material, such as video footage, generated by a camera or other device with similar functionality. In the case of ‘post’ systems, in contrast, the biometric data have already been captured and the comparison and identification occur only after a significant delay. This involves material, such as pictures or video footage generated by closed circuit television cameras or private devices, which has been generated before the use of the system in respect of the natural persons concerned.
2022/06/13
Committee: IMCOLIBE
Amendment 379 #
Proposal for a regulation
Article 8 – paragraph 1
1. High-risk AI systems shall comply with the requirements established in this Chapter, taking into account the generally acknowledged state of the art and industry standards, including as reflected in relevant harmonised standards or common specifications.
2022/03/31
Committee: ITRE
Amendment 382 #
Proposal for a regulation
Recital 9
(9) For the purposes of this Regulation the notion of publicly accessible space should be understood as referring to any physical place that is accessible to the public, irrespective of whether the place in question is privately or publicly owned. Therefore, the notion does not cover places that are private in nature and normally not freely accessible for third parties, including law enforcement authorities, unless those parties have been specifically invited or authorised, such as homes, private clubs, offices, warehouses and factories. Online spaces are not covered either, as they are not physical spaces. However, the mere fact that certain conditions for accessing a particular space may apply, such as admission tickets or age restrictions, does not mean that the space is not publicly accessible within the meaning of this Regulation. Consequently, in addition to public spaces such as streets, relevant parts of government buildings and most transport infrastructure, spaces such as cinemas, theatres, shops and shopping centres are normally also publicly accessible. Whether a given space is accessible to the public should however be determined on a case-by-case basis, having regard to the specificities of the individual situation at hand.
2022/06/13
Committee: IMCOLIBE
Amendment 383 #
Proposal for a regulation
Recital 9
(9) For the purposes of this Regulation the notion of publicly accessible space should be understood as referring to any physical place that is accessible to the public, irrespective of whether the place in question is privately or publicly owned. Therefore, the notion does not cover places that are private in nature and normally not freely accessible for third parties, including law enforcement authorities, unless those parties have been specifically invited or authorised, such as homes, private clubs, offices, warehouses and factories. Online spaces are not covered either, as they are not physical spaces. However, the mere fact that certain conditions for accessing a particular space may apply, such as admission tickets or age restrictions, does not mean that the space is not publicly accessible within the meaning of this Regulation. Consequently, in addition to public spaces such as streets, relevant parts of government buildings and most transport infrastructure, spaces such as cinemas, theatres, shops and shopping centres are normally also publicly accessible. Whether a given space is accessible to the public should however be determined on a case-by-case basis, having regard to the specificities of the individual situation at hand.
2022/06/13
Committee: IMCOLIBE
Amendment 390 #
Proposal for a regulation
Recital 11
(11) In light of their digital nature, certain AI systems should fall within the scope of this Regulation even when they are neither placed on the market, nor put into service, nor used in the Union. This is the case for example of an operator established in the Union that contracts certain services to an operator established outside the Union in relation to an activity to be performed by an AI system that would qualify as high-risk and whose effects impact natural persons located in the Union. In those circumstances, the AI system used by the operator outside the Union could process data lawfully collected in and transferred from the Union, and provide to the contracting operator in the Union the output of that AI system resulting from that processing, without that AI system being placed on the market, put into service or used in the Union. To prevent the circumvention of this Regulation and to ensure an effective protection of natural persons located in the Union, this Regulation should also apply to providers and users of AI systems that are established in a third country, to the extent the output produced by those systems is used in the Union. Nonetheless, to take into account existing arrangements and special needs for cooperation with foreign partners with whom information and evidence is exchanged, this Regulation should not apply to public authorities of a third country and international organisations when acting in the framework of international agreements concluded at national or European level for law enforcement and judicial cooperation with the Union or with its Member States. Such agreements have been concluded bilaterally between Member States and third countries or between the European Union, Europol and other EU agencies and third countries and international organisations.
2022/06/13
Committee: IMCOLIBE
Amendment 391 #
Proposal for a regulation
Recital 11
(11) In light of their digital nature, certain AI systems should fall within the scope of this Regulation even when they are neither placed on the market, nor put into service, nor used in the Union. This is the case for example of an operator established in the Union that contracts certain services to an operator established outside the Union in relation to an activity to be performed by an AI system that would qualify as high-risk and whose effects impact natural persons located in the Union. In those circumstances, the AI system used by the operator outside the Union could process data lawfully collected in and transferred from the Union, and provide to the contracting operator in the Union the output of that AI system resulting from that processing, without that AI system being placed on the market, put into service or used in the Union. To prevent the circumvention of this Regulation and to ensure an effective protection of natural persons located in the Union, this Regulation should also apply to providers and users of AI systems that are established in a third country, to the extent the output produced by those systems is used in the Union or it affects natural persons within the Union. Nonetheless, to take into account existing arrangements and special needs for cooperation with foreign partners with whom information and evidence is exchanged, this Regulation should not apply to public authorities of a third country and international organisations when acting in the framework of international agreements concluded at national or European level for law enforcement and judicial cooperation with the Union or with its Member States. Such agreements have been concluded bilaterally between Member States and third countries or between the European Union, Europol and other EU agencies and third countries and international organisations.
2022/06/13
Committee: IMCOLIBE
Amendment 395 #
Proposal for a regulation
Recital 12
(12) This Regulation should also apply to Union institutions, offices, bodies and agencies when acting as a provider or user of an AI system. AI systems exclusively developed or used for military purposes should be excluded from the scope of this Regulation where that use falls under the exclusive remit of the Common Foreign and Security Policy regulated under Title V of the Treaty on the European Union (TEU). This Regulation should be without prejudice to the provisions regarding the liability of intermediary service providers set out in Directive 2000/31/EC of the European Parliament and of the Council [as amended by the Digital Services Act].
2022/06/13
Committee: IMCOLIBE
Amendment 401 #
Proposal for a regulation
Article 10 – paragraph 1 a (new)
1a. Providers of high-risk AI systems that utilise data collected and/or managed by third parties may rely on representations from those third parties with regard to quality criteria referred to in paragraph 2, points (a), (b) and (c).
2022/03/31
Committee: ITRE
Amendment 402 #
Proposal for a regulation
Recital 12 a (new)
(12 a) AI systems developed or used exclusively for military purposes should be excluded from the scope of this Regulation where that use falls under the exclusive remit of the Common Foreign and Security Policy regulated under Title V TEU. However, AI systems which are developed or used for military purposes but can also be used for civil purposes, falling under the definition of “dual use items” pursuant to Regulation (EU) 2021/821 of the European Parliament and of the Council1a, should fall into the scope of this Regulation.
_________________
1a Regulation (EU) 2021/821 of the European Parliament and of the Council of 20 May 2021 setting up a Union regime for the control of exports, brokering, technical assistance, transit and transfer of dual-use items (OJ L 206, 11.6.2021, p. 1).
2022/06/13
Committee: IMCOLIBE
Amendment 405 #
Proposal for a regulation
Recital 12 b (new)
(12 b) This Regulation should not affect the provisions aimed at improving working conditions in platform work set out in Directive 2021/762/EC.
2022/06/13
Committee: IMCOLIBE
Amendment 409 #
Proposal for a regulation
Recital 13
(13) In order to ensure a consistent and high level of protection of public interests as regards health, safety and fundamental rights, the environment and the Union values enshrined in Article 2 TEU, common normative standards for all high-risk AI systems should be established. Those standards should be consistent with the Charter of fundamental rights of the European Union (the Charter) and should be non-discriminatory and in line with the Union’s international trade commitments.
2022/06/13
Committee: IMCOLIBE
Amendment 414 #
Proposal for a regulation
Recital 14
(14) In order to introduce a proportionate and effective set of binding rules for AI systems, a clearly defined risk-based approach should be followed. That approach should tailor the type and content of such rules to the intensity and scope of the risks that AI systems can generate. It is therefore necessary to prohibit certain unacceptable artificial intelligence practices, to lay down requirements for high-risk AI systems and obligations for the relevant operators, and to lay down transparency obligations for certain AI systems.
2022/06/13
Committee: IMCOLIBE
Amendment 421 #
Proposal for a regulation
Recital 15 a (new)
(15 a) As signatories to the United Nations Convention on the Rights of Persons with Disabilities (UNCRPD), the European Union and all Member States should protect persons with disabilities from discrimination and promote their equality, ensure that persons with disabilities have access, on an equal basis with others, to information and communications technologies and systems and ensure respect for privacy of persons with disabilities.
2022/06/13
Committee: IMCOLIBE
Amendment 423 #
Proposal for a regulation
Article 10 – paragraph 3
3. Training, validation and testing data sets shall be, to the best extent possible, relevant, representative, free of errors and complete. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used. These characteristics of the data sets may be met at the level of individual data sets or a combination thereof.
2022/03/31
Committee: ITRE
Amendment 427 #
Proposal for a regulation
Recital 16
(16) The placing on the market, putting into service or use of certain AI systems with the effect or likely effect of distorting human behaviour, whereby material or non-material harm, including physical, psychological or economic harm, is likely to occur, should be forbidden. This limitation should be understood to include neuro-technologies assisted by AI systems that are used to monitor, use, or influence neural data gathered through brain-computer interfaces. Such AI systems deploy subliminal components individuals cannot perceive or exploit vulnerabilities of children and people due to their age, physical or mental incapacities. They do so with the effect of materially distorting the behaviour of a person and in a manner that causes or is likely to cause harm to that or another person. The intention may not be presumed if the distortion of human behaviour results from factors external to the AI system which are outside of the control of the provider or the user. Research for legitimate purposes in relation to such AI systems should not be stifled by the prohibition, if such research does not amount to use of the AI system in human-machine relations that exposes natural persons to harm and such research is carried out in accordance with recognised ethical standards for scientific research.
2022/06/13
Committee: IMCOLIBE
Amendment 433 #
Proposal for a regulation
Recital 17
(17) AI systems providing social scoring of natural persons for general purpose by private or public authorities or on their behalf may lead to discriminatory outcomes and the exclusion of certain groups. They may violate the right to dignity and non-discrimination and the values of equality and justice. Such AI systems evaluate or classify the trustworthiness of natural persons based on their social behaviour in multiple contexts or known or predicted personal or personality characteristics. The social score obtained from such AI systems may lead to the detrimental or unfavourable treatment of natural persons or whole groups thereof in social contexts, which are unrelated to the context in which the data was originally generated or collected or to a detrimental treatment that is disproportionate or unjustified to the gravity of their social behaviour. Such AI systems should be therefore prohibited.
2022/06/13
Committee: IMCOLIBE
Amendment 436 #
Proposal for a regulation
Recital 17
(17) AI systems providing social scoring of natural persons for general purpose by public authorities or on their behalf may lead to discriminatory outcomes and the exclusion of certain groups. They may violate the right to dignity and non-discrimination and the values of equality and justice. Such AI systems evaluate or classify the trustworthiness of natural persons based on their social behaviour in multiple contexts or known or predicted personal or personality characteristics. The social score obtained from such AI systems may lead to the detrimental or unfavourable treatment of natural persons or whole groups thereof in social contexts, which are unrelated to the context in which the data was originally generated or collected or to a detrimental treatment that is disproportionate or unjustified to the gravity of their social behaviour. Such AI systems should be therefore prohibited.
2022/06/13
Committee: IMCOLIBE
Amendment 440 #
Proposal for a regulation
Recital 17 a (new)
(17 a) AI systems used by law enforcement authorities or on their behalf to make predictions, profiles or risk assessments based on data analysis or profiling of natural groups or locations, for the purpose of predicting the occurrence or reoccurrence of an actual or potential criminal offence(s) or other criminalised social behaviour, hold a particular risk of discrimination against certain persons or groups of persons, as they violate human dignity as well as the key legal principle of presumption of innocence. Such AI systems should therefore be prohibited.
2022/06/13
Committee: IMCOLIBE
Amendment 442 #
Proposal for a regulation
Recital 17 a (new)
(17 a) AI systems used by law enforcement authorities or on their behalf to predict the probability of a natural person to offend or to reoffend, based on profiling and individual or place-based risk-assessment hold a particular risk of discrimination against certain persons or groups of persons, as they violate human dignity as well as the key legal principle of presumption of innocence. Such AI systems should therefore be prohibited.
2022/06/13
Committee: IMCOLIBE
Amendment 447 #
Proposal for a regulation
Recital 18
(18) The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly or privately accessible spaces, as well as online spaces, for the purpose of law enforcement is considered particularly intrusive in the rights and freedoms of the concerned persons, to the extent that it may affect the private life of a large part of the population, evoke a feeling of constant surveillance and indirectly dissuade the exercise of the freedom of assembly and other fundamental rights. Technical inaccuracies of AI systems intended for the remote biometric identification of natural persons can lead to biased results and entail discriminatory effects. This is particularly relevant when it comes to age, ethnicity, sex or disabilities. In addition, whether such systems are used in ‘real-time’ or post factum, there is little difference on the impact and the heightened risks for the rights and freedoms of the persons that are concerned by law enforcement activities. The placing or making available on the market, the putting into service or use of those systems should therefore be prohibited.
2022/06/13
Committee: IMCOLIBE
Amendment 448 #
Proposal for a regulation
Recital 18
(18) The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly or privately accessible spaces for the purpose of law enforcement is particularly intrusive in the rights and freedoms of the concerned persons, to the extent that it may affect the private life of a large part of the population, evoke a feeling of constant surveillance and indirectly dissuade the exercise of the freedom of assembly and other fundamental rights. Such systems should therefore be prohibited.
2022/06/13
Committee: IMCOLIBE
Amendment 456 #
Proposal for a regulation
Recital 18 a (new)
(18 a) Despite progress regarding biometric identification technologies, the accuracy of the results still varies across technologies and depends on contextual factors. Even the relatively well-established fingerprint identification applications face challenges, in particular at the stage of the collection of biometric data (related to, for example, subject's age). The reliability of face recognition technologies in 'real world' settings is highly dependent on the quality of the images captured and on the quality of the algorithms used for biometric matching. During enrolment, poor quality images taken at e-gates or through a CCTV camera under variable environmental conditions may result in less accurate results. As in the case of automated fingerprint identification, changes in a person's physical characteristics over time may also affect the accuracy of facial recognition technologies. Research has found a considerable degradation in performance for face recognition algorithms on children as compared to the performance obtained on adults. In light of this, the placing or making available on the market, the putting into service or use of remote biometric identification systems should be prohibited.
2022/06/13
Committee: IMCOLIBE
Amendment 460 #
Proposal for a regulation
Recital 18 b (new)
(18 b) There are serious concerns about the scientific basis of AI systems aiming to detect emotions from facial expressions. Facial expressions and perceptions thereof vary considerably across cultures and situations, and even within a single person. Among the key shortcomings of such technologies are the limited reliability (emotion categories are neither reliably expressed through, nor unequivocally associated with, a common set of facial movements), the lack of specificity (facial expressions do not perfectly match emotion categories) and the limited generalisability (the effects of context and culture are not sufficiently considered). Reliability issues may also arise when deploying the system in real-life situations, for example, when dealing with subjects who actively seek (and train themselves) to fool the system. Therefore, the placing on the market, putting into service, or use of AI systems intended to be used as polygraphs and similar tools to detect the emotional state, trustworthiness or related characteristics of a natural person, should be prohibited.
2022/06/13
Committee: IMCOLIBE
Amendment 461 #
Proposal for a regulation
Article 15 – paragraph 3 – introductory part
3. Providers of high-risk AI systems shall take appropriate technical and organizational measures to ensure that high-risk AI systems are resilient as regards errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, in particular due to their interaction with natural persons or other systems, consistent with industry best practices.
2022/03/31
Committee: ITRE
Amendment 463 #
Proposal for a regulation
Recital 19
(19) The use of those systems for the purpose of law enforcement should therefore be prohibited, except in three exhaustively listed and narrowly defined situations, where the use is strictly necessary to achieve a substantial public interest, the importance of which outweighs the risks. Those situations involve the search for potential victims of crime, including missing children; certain threats to the life or physical safety of natural persons or of a terrorist attack; and the detection, localisation, identification or prosecution of perpetrators or suspects of the criminal offences referred to in Council Framework Decision 2002/584/JHA38 if those criminal offences are punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years and as they are defined in the law of that Member State. Such threshold for the custodial sentence or detention order in accordance with national law contributes to ensure that the offence should be serious enough to potentially justify the use of ‘real-time’ remote biometric identification systems. Moreover, of the 32 criminal offences listed in the Council Framework Decision 2002/584/JHA, some are in practice likely to be more relevant than others, in that the recourse to ‘real-time’ remote biometric identification will foreseeably be necessary and proportionate to highly varying degrees for the practical pursuit of the detection, localisation, identification or prosecution of a perpetrator or suspect of the different criminal offences listed and having regard to the likely differences in the seriousness, probability and scale of the harm or possible negative consequences.
_________________
38 Council Framework Decision 2002/584/JHA of 13 June 2002 on the European arrest warrant and the surrender procedures between Member States (OJ L 190, 18.7.2002, p. 1).
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 466 #
Proposal for a regulation
Recital 19
(19) The use of those systems for the purpose of law enforcement should therefore be prohibited, except in three exhaustively listed and narrowly defined situations, where the use is strictly necessary to achieve a substantial public interest, the importance of which outweighs the risks. Those situations involve the search for potential victims of crime, including missing children; certain threats to the life or physical safety of natural persons or of a terrorist attack; and the detection, localisation, identification or prosecution of perpetrators or suspects of the criminal offences referred to in Council Framework Decision 2002/584/JHA38 if those criminal offences are punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years and as they are defined in the law of that Member State. Such threshold for the custodial sentence or detention order in accordance with national law contributes to ensure that the offence should be serious enough to potentially justify the use of ‘real-time’ remote biometric identification systems. Moreover, of the 32 criminal offences listed in the Council Framework Decision 2002/584/JHA, some are in practice likely to be more relevant than others, in that the recourse to ‘real-time’ remote biometric identification will foreseeably be necessary and proportionate to highly varying degrees for the practical pursuit of the detection, localisation, identification or prosecution of a perpetrator or suspect of the different criminal offences listed and having regard to the likely differences in the seriousness, probability and scale of the harm or possible negative consequences.
_________________
38 Council Framework Decision 2002/584/JHA of 13 June 2002 on the European arrest warrant and the surrender procedures between Member States (OJ L 190, 18.7.2002, p. 1).
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 472 #
Proposal for a regulation
Recital 20
(20) In order to ensure that those systems are used in a responsible and proportionate manner, it is also important to establish that, in each of those three exhaustively listed and narrowly defined situations, certain elements should be taken into account, in particular as regards the nature of the situation giving rise to the request and the consequences of the use for the rights and freedoms of all persons concerned and the safeguards and conditions provided for with the use. In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement should be subject to appropriate limits in time and space, having regard in particular to the evidence or indications regarding the threats, the victims or perpetrator. The reference database of persons should be appropriate for each use case in each of the three situations mentioned above.
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 475 #
Proposal for a regulation
Recital 20
(20) In order to ensure that those systems are used in a responsible and proportionate manner, it is also important to establish that, in each of those three exhaustively listed and narrowly defined situations, certain elements should be taken into account, in particular as regards the nature of the situation giving rise to the request and the consequences of the use for the rights and freedoms of all persons concerned and the safeguards and conditions provided for with the use. In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement should be subject to appropriate limits in time and space, having regard in particular to the evidence or indications regarding the threats, the victims or perpetrator. The reference database of persons should be appropriate for each use case in each of the three situations mentioned above.
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 482 #
Proposal for a regulation
Recital 21
(21) Each use of a ‘real-time’ remote biometric identification system in publicly accessible spaces for the purpose of law enforcement should be subject to an express and specific authorisation by a judicial authority or by an independent administrative authority of a Member State. Such authorisation should in principle be obtained prior to the use, except in duly justified situations of urgency, that is, situations where the need to use the systems in question is such as to make it effectively and objectively impossible to obtain an authorisation before commencing the use. In such situations of urgency, the use should be restricted to the absolute minimum necessary and be subject to appropriate safeguards and conditions, as determined in national law and specified in the context of each individual urgent use case by the law enforcement authority itself. In addition, the law enforcement authority should in such situations seek to obtain an authorisation as soon as possible, whilst providing the reasons for not having been able to request it earlier.
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 491 #
Proposal for a regulation
Recital 22
(22) Furthermore, it is appropriate to provide, within the exhaustive framework set by this Regulation that such use in the territory of a Member State in accordance with this Regulation should only be possible where and in as far as the Member State in question has decided to expressly provide for the possibility to authorise such use in its detailed rules of national law. Consequently, Member States remain free under this Regulation not to provide for such a possibility at all or to only provide for such a possibility in respect of some of the objectives capable of justifying authorised use identified in this Regulation.
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 493 #
Proposal for a regulation
Recital 22
(22) Furthermore, it is appropriate to provide, within the exhaustive framework set by this Regulation that such use in the territory of a Member State in accordance with this Regulation should only be possible where and in as far as the Member State in question has decided to expressly provide for the possibility to authorise such use in its detailed rules of national law. Consequently, Member States remain free under this Regulation not to provide for such a possibility at all or to only provide for such a possibility in respect of some of the objectives capable of justifying authorised use identified in this Regulation.
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 496 #
Proposal for a regulation
Article 29 a (new)
Article 29 a Jurisdiction and territoriality Providers as defined in point 2 of Article 3 and within the meaning of Article 28, paragraph 1, shall be deemed to be under the jurisdiction of the Member State in which they have their main establishment in the Union.
2022/03/31
Committee: ITRE
Amendment 500 #
Proposal for a regulation
Recital 23
(23) The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement necessarily involves the processing of biometric data. The rules of this Regulation that prohibit, subject to certain exceptions, such use, which are based on Article 16 TFEU, should apply as lex specialis in respect of the rules on the processing of biometric data contained in Article 10 of Directive (EU) 2016/680, thus regulating such use and the processing of biometric data involved in an exhaustive manner. Therefore, such use and processing should only be possible in as far as it is compatible with the framework set by this Regulation, without there being scope, outside that framework, for the competent authorities, where they act for purpose of law enforcement, to use such systems and process such data in connection thereto on the grounds listed in Article 10 of Directive (EU) 2016/680. In this context, this Regulation is not intended to provide the legal basis for the processing of personal data under Article 8 of Directive 2016/680. However, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for purposes other than law enforcement, including by competent authorities, should not be covered by the specific framework regarding such use for the purpose of law enforcement set by this Regulation. Such use for purposes other than law enforcement should therefore not be subject to the requirement of an authorisation under this Regulation and the applicable detailed rules of national law that may give effect to it.
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 501 #
Proposal for a regulation
Article 38 – paragraph 2 a (new)
2a. Where a competent authority of a Member State requires obtaining an EU declaration of conformity of a provider which has its main establishment in another Member State, that request shall be made through the competent authority of the Member State where the provider has its main establishment. The information shall be transmitted by the provider in an official language of the Member State where it has its main establishment. The Commission is empowered to adopt delegated acts in accordance with this paragraph to further define the modalities for issuing and handling such requests.
2022/03/31
Committee: ITRE
Amendment 501 #
Proposal for a regulation
Recital 23
(23) The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement necessarily involves the processing of biometric data. The rules of this Regulation that prohibit, subject to certain exceptions, such use, which are based on Article 16 TFEU, should apply as lex specialis in respect of the rules on the processing of biometric data contained in Article 10 of Directive (EU) 2016/680, thus regulating such use and the processing of biometric data involved in an exhaustive manner. Therefore, such use and processing should only be possible in as far as it is compatible with the framework set by this Regulation, without there being scope, outside that framework, for the competent authorities, where they act for purpose of law enforcement, to use such systems and process such data in connection thereto on the grounds listed in Article 10 of Directive (EU) 2016/680. In this context, this Regulation is not intended to provide the legal basis for the processing of personal data under Article 8 of Directive 2016/680. However, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for purposes other than law enforcement, including by competent authorities, should not be covered by the specific framework regarding such use for the purpose of law enforcement set by this Regulation. Such use for purposes other than law enforcement should therefore not be subject to the requirement of an authorisation under this Regulation and the applicable detailed rules of national law that may give effect to it.
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 509 #
Proposal for a regulation
Recital 24
(24) Any processing of biometric data and other personal data involved in the use of AI systems for biometric identification, other than in connection to the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement as regulated by this Regulation, including where those systems are used by competent authorities in publicly accessible spaces for other purposes than law enforcement, should continue to comply with all requirements resulting from Article 9(1) of Regulation (EU) 2016/679, Article 10(1) of Regulation (EU) 2018/1725 and Article 10 of Directive (EU) 2016/680, as applicable.
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 510 #
Proposal for a regulation
Recital 24
(24) Any processing of biometric data and other personal data involved in the use of AI systems for biometric identification, other than in connection to the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement as regulated by this Regulation, including where those systems are used by competent authorities in publicly accessible spaces for other purposes than law enforcement, should continue to comply with all requirements resulting from Article 9(1) of Regulation (EU) 2016/679, Article 10(1) of Regulation (EU) 2018/1725 and Article 10 of Directive (EU) 2016/680, as applicable.
2022/06/13
Committee: IMCOLIBE
Amendment 516 #
Proposal for a regulation
Recital 25
(25) In accordance with Article 6a of Protocol No 21 on the position of the United Kingdom and Ireland in respect of the area of freedom, security and justice, as annexed to the TEU and to the TFEU, Ireland is not bound by the rules laid down in Article 5(1), point (d), (2) and (3) of this Regulation adopted on the basis of Article 16 of the TFEU which relate to the processing of personal data by the Member States when carrying out activities falling within the scope of Chapter 4 or Chapter 5 of Title V of Part Three of the TFEU, where Ireland is not bound by the rules governing the forms of judicial cooperation in criminal matters or police cooperation which require compliance with the provisions laid down on the basis of Article 16 of the TFEU.
2022/06/13
Committee: IMCOLIBE
Amendment 517 #
Proposal for a regulation
Recital 26
(26) In accordance with Articles 2 and 2a of Protocol No 22 on the position of Denmark, annexed to the TEU and TFEU, Denmark is not bound by rules laid down in Article 5(1), point (d), (2) and (3) of this Regulation adopted on the basis of Article 16 of the TFEU, or subject to their application, which relate to the processing of personal data by the Member States when carrying out activities falling within the scope of Chapter 4 or Chapter 5 of Title V of Part Three of the TFEU.
2022/06/13
Committee: IMCOLIBE
Amendment 518 #
Proposal for a regulation
Recital 26 a (new)
(26 a) AI systems capable of reading facial expressions to infer emotional states hold no scientific basis, while at the same time running a high risk of inaccuracy, in particular for certain groups of individuals whose facial traits are not easily readable by such systems, as several examples have shown. Therefore, due to the particular risk of discrimination, these systems should be prohibited.
2022/06/13
Committee: IMCOLIBE
Amendment 519 #
Proposal for a regulation
Article 48 – paragraph 1
1. The provider shall draw up a written EU declaration of conformity for each AI system and keep it at the disposal of the national competent authorities for 10 years after the AI system has been placed on the market or put into service. The EU declaration of conformity shall identify the AI system for which it has been drawn up. A copy of the EU declaration of conformity shall be given to the relevant national competent authority in the Member State of main establishment of the provider, upon the competent authority’s request.
2022/03/31
Committee: ITRE
Amendment 521 #
Proposal for a regulation
Article 48 – paragraph 2
2. The EU declaration of conformity shall state that the high-risk AI system in question meets the requirements set out in Chapter 2 of this Title. The EU declaration of conformity shall contain the information set out in Annex V and shall be presented in an official Union language of the Member State in which the provider of the high-risk AI system has its main establishment.
2022/03/31
Committee: ITRE
Amendment 524 #
Proposal for a regulation
Recital 27
(27) High-risk AI systems should only be placed on the Union market or put into service if they comply with certain mandatory requirements. Those requirements should ensure that high-risk AI systems available in the Union or whose output is otherwise used in the Union do not pose unacceptable risks to important Union public interests as recognised and protected by Union law and do not breach the Union values enshrined in Article 2 TEU or the principles applicable to all AI systems as per this Regulation. AI systems identified as high-risk should be limited to those that have a significant harmful impact on the fundamental rights of persons in the Union, their health and safety, and such limitation minimises any potential restriction to international trade, if any.
2022/06/13
Committee: IMCOLIBE
Amendment 525 #
Proposal for a regulation
Recital 27
(27) High-risk AI systems should only be placed on the Union market or put into service or used if they comply with certain mandatory requirements. Those requirements should ensure that high-risk AI systems available in the Union or whose output is otherwise used in the Union do not pose unacceptable risks to important Union public interests as recognised and protected by Union law and do not contravene the Union values enshrined in Article 2 TEU. AI systems identified as high-risk should be limited to those that have a significant harmful impact on the health, safety and the fundamental rights of persons in the Union or the environment and such limitation minimises any potential restriction to international trade, if any.
2022/06/13
Committee: IMCOLIBE
Amendment 543 #
Proposal for a regulation
Recital 32 a (new)
(32 a) In the light of the nature and complexity of the value chain for AI systems, it is essential to consider the foreseeable high risks they can create when combined. Particular attention should be paid to the foreseeable uses and reasonably foreseeable misuses of AI systems with indeterminate uses.
2022/06/13
Committee: IMCOLIBE
Amendment 544 #
Proposal for a regulation
Recital 33
(33) Technical inaccuracies of AI systems intended for the remote biometric identification of natural persons can lead to biased results and entail discriminatory effects. This is particularly relevant when it comes to age, ethnicity, sex or disabilities. Therefore, ‘real-time’ and ‘post’ remote biometric identification systems should be classified as high-risk. In view of the risks that they pose, both types of remote biometric identification systems should be subject to specific requirements on logging capabilities and human oversight.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 545 #
Proposal for a regulation
Recital 33
(33) Technical inaccuracies of AI systems intended for the remote biometric identification of natural persons can lead to biased results and entail discriminatory effects. This is particularly relevant when it comes to age, ethnicity, sex or disabilities. Therefore, ‘real-time’ and ‘post’ remote biometric identification systems should be classified as high-risk. In view of the risks that they pose, both types of remote biometric identification systems should be subject to specific requirements on logging capabilities and human oversight.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 559 #
Proposal for a regulation
Recital 35
(35) AI systems used in education or vocational training, notably for determining access or assigning persons to educational and vocational training institutions or to evaluate or monitor persons on tests as part of or as a precondition for their education should be considered high-risk, since they may determine the educational and professional course of a person’s life and therefore affect their ability to secure their livelihood. When improperly designed and used, such systems may violate the right to education and training as well as the right not to be discriminated against and perpetuate historical patterns of discrimination.
2022/06/13
Committee: IMCOLIBE
Amendment 569 #
Proposal for a regulation
Recital 37
(37) Another area in which the use of AI systems deserves special consideration is the access to and enjoyment of certain essential private and public services and benefits necessary for people to fully participate in society or to improve one’s standard of living. In particular, AI systems used to evaluate the credit score or creditworthiness of natural persons should be classified as high-risk AI systems, since they determine those persons’ access to financial resources or essential services such as housing, electricity, and telecommunication services. AI systems used for this purpose may lead to discrimination of persons or groups and perpetuate historical patterns of discrimination, for example based on racial or ethnic origins, disabilities, age, sexual orientation, or create new forms of discriminatory impacts. Considering the very limited scale of the impact and the available alternatives on the market, it is appropriate to exempt AI systems for the purpose of creditworthiness assessment and credit scoring when put into service by small-scale providers for their own use. Natural persons applying for or receiving public assistance benefits and services from public authorities are typically dependent on those benefits and services and in a vulnerable position in relation to the responsible authorities. If AI systems are used for determining whether such benefits and services should be denied, reduced, revoked or reclaimed by authorities, they may have a significant impact on persons’ livelihood and may infringe their fundamental rights, such as the right to social protection, non- discrimination, human dignity or an effective remedy. Those systems should therefore be classified as high-risk. 
Nonetheless, this Regulation should not hamper the development and use of innovative approaches in the public administration, which would stand to benefit from a wider use of compliant and safe AI systems, provided that those systems do not entail a high risk to legal and natural persons. Finally, AI systems used to dispatch or establish priority in the dispatching of emergency first response services should also be classified as high- risk since they make decisions in very critical situations for the life and health of persons and their property.
2022/06/13
Committee: IMCOLIBE
Amendment 577 #
Proposal for a regulation
Recital 37 a (new)
(37 a) Given the speed at which AI applications are being developed around the world, it is not feasible to compile an exhaustive listing of applications that should be prohibited or considered high- risk. What is needed is a clear and coherent governance model guaranteeing both the fundamental rights of individuals and legal clarity for operators, considering the continuous evolution of technology. Nevertheless, given the role and responsibility of police and judicial authorities, and the impact of decisions they take for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, the use of AI applications has to be categorised as high-risk in instances where there is the potential to significantly affect the lives of individuals.
2022/06/13
Committee: IMCOLIBE
Amendment 579 #
Proposal for a regulation
Recital 38
(38) Actions by law enforcement authorities involving certain uses of AI systems are characterised by a significant degree of power imbalance and may lead to surveillance, arrest or deprivation of a natural person’s liberty as well as other adverse impacts on fundamental rights guaranteed in the Charter. In particular, if the AI system is not trained with high quality data, does not meet adequate requirements in terms of its performance, including its accuracy or robustness, or is not properly designed and tested before being put on the market or otherwise put into service, it may single out people in a discriminatory or otherwise incorrect or unjust manner. Furthermore, the exercise of important procedural fundamental rights, such as the right to an effective remedy and to a fair trial as well as the right of defence and the presumption of innocence, could be hampered, in particular, where such AI systems are not sufficiently transparent, explainable and documented. It is therefore appropriate to classify as high-risk a number of AI systems intended to be used in the law enforcement context where accuracy, reliability and transparency is particularly important to avoid adverse impacts, retain public trust and ensure accountability and effective redress. 
In view of the nature of the activities in question and the risks relating thereto, those high-risk AI systems should include in particular AI systems intended to be used by law enforcement authorities for individual risk assessments, polygraphs and similar tools or to detect the emotional state of natural person, to detect ‘deep fakes’, for the evaluation of the reliability of evidence in criminal proceedings, for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons, or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups, for profiling in the course of detection, investigation or prosecution of criminal offences, as well as for crime analytics regarding natural persons. AI systems specifically intended to be used for administrative proceedings by tax and customs authorities should not be considered high-risk AI systems used by law enforcement authorities for the purposes of prevention, detection, investigation and prosecution of criminal offences.
2022/06/13
Committee: IMCOLIBE
Amendment 581 #
Proposal for a regulation
Recital 38
(38) Actions by law enforcement authorities involving certain uses of AI systems are characterised by a significant degree of power imbalance and may lead to surveillance, arrest or deprivation of a natural person’s liberty as well as other adverse impacts on fundamental rights guaranteed in the Charter. In particular, if the AI system is not trained with high quality data, does not meet adequate requirements in terms of its accuracy or robustness, or is not properly designed and tested before being put on the market or otherwise put into service, it may single out people in a discriminatory or otherwise incorrect or unjust manner. Furthermore, the exercise of important procedural fundamental rights, such as the right to an effective remedy and to a fair trial as well as the right of defence and the presumption of innocence, could be hampered, in particular, where such AI systems are not sufficiently transparent, explainable and documented. It is therefore appropriate to classify as high-risk a number of AI systems intended to be used in the law enforcement context where accuracy, reliability and transparency is particularly important to avoid adverse impacts, retain public trust and ensure accountability and effective redress. 
In view of the nature of the activities in question and the risks relating thereto, those high-risk AI systems should include in particular AI systems intended to be used by law enforcement authorities for individual risk assessments, polygraphs and similar tools or to detect the emotional state of natural person, to detect ‘deep fakes’, for the evaluation of the reliability of evidence in criminal proceedings, for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons, or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups, for profiling in the course of detection, investigation or prosecution of criminal offenceon their behalf to detect ‘deep fakes’, for the evaluation of the reliability of evidence in criminal proceedings, as well as for crime analytics regarding natural persons. AI systems specifically intended to be used for administrative proceedings by tax and customs authorities should not be considered high-risk AI systems used by law enforcement authorities for the purposes of prevention, detection, investigation and prosecution of criminal offences.
2022/06/13
Committee: IMCOLIBE
Amendment 585 #
Proposal for a regulation
Recital 38 a (new)
(38 a) The use of AI tools by law enforcement and judicial authorities should not become a factor of inequality, social fracture or exclusion. The impact of the use of AI tools on the defence rights of suspects should not be ignored, notably the difficulty in obtaining meaningful information on their functioning and the consequent difficulty in challenging their results in court, in particular by individuals under investigation.
2022/06/13
Committee: IMCOLIBE
Amendment 587 #
Proposal for a regulation
Recital 39
(39) AI systems used in migration, asylum and border control management affect people who are often in particularly vulnerable position and who are dependent on the outcome of the actions of the competent public authorities. The accuracy, non-discriminatory nature and transparency of the AI systems used in those contexts are therefore particularly important to guarantee the respect of the fundamental rights of the affected persons, notably their rights to free movement, non- discrimination, protection of private life and personal data, international protection and good administration. It is therefore appropriate to classify as high-risk AI systems intended to be used by the competent public authorities charged with tasks in the fields of migration, asylum and border control management as polygraphs and similar tools or to detect the emotional state of a natural person; for assessing certain risks posed by natural persons entering the territory of a Member State or applying for visa or asylum; for verifying the authenticity of the relevant documents of natural persons; for assisting competent public authorities for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the objective to establish the eligibility of the natural persons applying for a status. AI systems in the area of migration, asylum and border control management covered by this Regulation should comply with the relevant procedural requirements set by the Directive 2013/32/EU of the European Parliament and of the Council49 , the Regulation (EC) No 810/2009 of the European Parliament and of the Council50 and other relevant legislation. _________________ 49 Directive 2013/32/EU of the European Parliament and of the Council of 26 June 2013 on common procedures for granting and withdrawing international protection (OJ L 180, 29.6.2013, p. 60). 
50 Regulation (EC) No 810/2009 of the European Parliament and of the Council of 13 July 2009 establishing a Community Code on Visas (Visa Code) (OJ L 243, 15.9.2009, p. 1).
2022/06/13
Committee: IMCOLIBE
Amendment 588 #
Proposal for a regulation
Recital 39
(39) AI systems used in migration, asylum and border control management affect people who are often in particularly vulnerable position and who are dependent on the outcome of the actions of the competent public authorities. The accuracy, non-discriminatory nature and transparency of the AI systems used in those contexts are therefore particularly important to guarantee the respect of the fundamental rights of the affected persons, notably their rights to free movement, non- discrimination, protection of private life and personal data, international protection and good administration. It is therefore appropriate to classify as high-risk AI systems intended to be used by the competent public authorities charged with tasks in the fields of migration, asylum and border control management as polygraphs and similar tools or to detect the emotional state of a natural person; for assessing certain risks posed by natural persons entering the territory of a Member State or applying for visa or asylum; for verifying the authenticity of the relevant documents of natural persons; for assisting competent public authorities for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the objective to establish the eligibility of the natural persons applying for a status.; for verifying the authenticity of the relevant documents of natural persons; AI systems in the area of migration, asylum and border control management covered by this Regulation should comply with the relevant procedural requirements set by the Directive 2013/32/EU of the European Parliament and of the Council49 , the Regulation (EC) No 810/2009 of the European Parliament and of the Council50 and other relevant legislation. _________________ 49 Directive 2013/32/EU of the European Parliament and of the Council of 26 June 2013 on common procedures for granting and withdrawing international protection (OJ L 180, 29.6.2013, p. 60). 
50 Regulation (EC) No 810/2009 of the European Parliament and of the Council of 13 July 2009 establishing a Community Code on Visas (Visa Code) (OJ L 243, 15.9.2009, p. 1).
2022/06/13
Committee: IMCOLIBE
Amendment 601 #
Proposal for a regulation
Recital 40 a (new)
(40 a) Certain AI systems should at the same time be subject to transparency requirements and be classified as high- risk AI systems, given their potential to deceive and cause both individual and societal harm. In particular, AI systems that generate deep fakes representing existing persons have the potential to both manipulate the natural persons that are exposed to those deep fakes and harm the persons they are representing or misrepresenting, while AI systems that, based on limited human input, generate complex text such as news articles, opinion articles, novels, scripts and scientific articles have the potential to manipulate, to deceive, or to expose natural persons to built-in biases or inaccuracies. These should not include AI systems intended to translate text, or cases where the content forms part of an evidently artistic, creative or fictional cinematographic and analogous work.
2022/06/13
Committee: IMCOLIBE
Amendment 608 #
Proposal for a regulation
Recital 41
(41) The fact that an AI system is classified as high risk under this Regulation should not be interpreted as indicating that the use of the system is necessarily lawful under other acts of Union law or under national law compatible with Union law, such as on the protection of personal data, on the use of polygraphs and similar tools or other systems to detect the emotional state of natural persons. Any such use should continue to occur solely in accordance with the applicable requirements resulting from the Charter and from the applicable acts of secondary Union law and national law. This Regulation should not be understood as providing for the legal ground for processing of personal data, including special categories of personal data, where relevant.
2022/06/13
Committee: IMCOLIBE
Amendment 609 #
Proposal for a regulation
Recital 41
(41) The fact that an AI system is classified as high risk under this Regulation should not be interpreted as indicating that the use of the system is necessarily lawful under other acts of Union law or under national law compatible with Union law, such as on the protection of personal data, on the use of polygraphs and similar tools or other systems to detect the emotional state of natural persons. Any such use should continue to occur solely in accordance with the applicable requirements resulting from the Charter and from the applicable acts of secondary Union law and national law. This Regulation should not be understood as providing for the legal ground for processing of personal data, including special categories of personal data, where relevant.
2022/06/13
Committee: IMCOLIBE
Amendment 617 #
Proposal for a regulation
Article 64 – paragraph 1
1. Access to data and documentation in the context of their activities, the market surveillance authorities shall be granted full access to the training, validation and testing datasets used by the provider, including through application programming interfaces (‘API’) or other appropriate technical means and tools enabling remote access.
2022/03/31
Committee: ITRE
Amendment 618 #
Proposal for a regulation
Article 64 – paragraph 2
2. Where necessary to assess the conformity of the high-risk AI system with the requirements set out in Title III, Chapter 2 and upon a reasoned request, the market surveillance authorities shall be granted access to the source code, of the AI systemr if impossible, all related data sets used to train or place the AI system on the market.
2022/03/31
Committee: ITRE
Amendment 619 #
Proposal for a regulation
Recital 43
(43) Requirements should apply to high- risk AI systems as regards the quality of data sets used, technical documentation and record-keeping, transparency and the provision of information to users, human oversight, and robustness, accuracy and cybersecurity. Those requirements are necessary to effectively mitigate the risks for health, safety and, fundamental rights, the environment and the Union values enshrined in Article 2 TEU, as applicable in the light of the intended purpose or reasonably foreseeable use of the system, and no other less trade restrictive measures are reasonably available, thus avoiding unjustified restrictions to trade.
2022/06/13
Committee: IMCOLIBE
Amendment 629 #
Proposal for a regulation
Recital 44
(44) High data quality is essential for the performance of many AI systems, especially when techniques involving the training of models are used, with a view to ensure that the high-risk AI system performs as intended and safely and it does not become the source of discrimination prohibited by Union law. High quality training, validation and testing data sets require the implementation of appropriate data governance and management practices. Training datasets, and where applicable, validation and testing data sets should be sufficiently relevant, representative and free of errors and complete in view of the intended purpose of the system, including the labels, shall be relevant, representative, up-to-date, and to the best extent possible, free of errors and complete. They should also have the appropriate statistical properties, including as regards the persons or groups of persons on which the high-risk AI system is intended to be used. In particular, training, validation and testing data sets should take into account, to the extent required in the light of their intended purpoby the intended purpose, the foreseeable uses and reasonably foreseeable misuses of AI systems with indeterminate uses, the features, characteristics or elements that are particular to the specific geographical, behavioural or functional setting or context within which the AI system is intended to be used. In order to protect the right of others from the discrimination that might result from the bias in AI systems, the providers shouldbe able to process also special categories of personal data, as a matter of substantial public interest, in order to ensure the bias monitoring, detection and correction in relation to high-risk AI systems.
2022/06/13
Committee: IMCOLIBE
Amendment 630 #
Proposal for a regulation
Article 83 – paragraph 2
2. This Regulation shall apply to the high-risk AI systems, other than the ones referred to in paragraph 1, that have been placed on the market or put into service before [date of application of this Regulation referred to in Article 85(2)], only if, from that date, those systems are subject to significant changesubstantial modifications as defined in Article 3(23) in their design or intended purpose.
2022/03/31
Committee: ITRE
Amendment 631 #
Proposal for a regulation
Recital 44 a (new)
(44 a) Biases can be inherent in underlying datasets, especially when historical data is being used, introduced by the developers of the algorithms, or generated when the systems are implemented in real world settings. Any result provided by an AI system is necessarily influenced by the quality of the data used, and such inherent biases are inclined to gradually increase and thereby perpetuate and amplify existing discrimination, in particular for persons belonging to certain ethnic groups or racialised communities.
2022/06/13
Committee: IMCOLIBE
Amendment 638 #
Proposal for a regulation
Recital 47 a (new)
(47 a) It is vital to ensure that the development, deployment and use of AI systems for the judiciary and law enforcement comply with fundamental rights and are trusted by citizens, and to ensure that results generated by AI algorithms can be rendered intelligible to users and to those subject to these systems, and that there is transparency on the source data and on how the system arrived at a certain conclusion. To this aim, law enforcement or judiciary authorities in the Union should use only such AI systems whose algorithms and logic are auditable and accessible at least to the police and the judiciary, as well as to independent auditors, to allow for their evaluation, auditing and vetting, and such systems should not be closed or labelled as proprietary by the vendors.
2022/06/13
Committee: IMCOLIBE
Amendment 644 #
Proposal for a regulation
Recital 48 a (new)
(48 a) In order to protect natural persons that are developers or users of AI systems against retaliation from their employers and colleagues, and to prevent misconduct or breaches of this Regulation and other relevant Union law, they should have the right to rely on the whistleblower protections set out in Directive (EU) 2019/1937 of the European Parliament and of the Council.
2022/06/13
Committee: IMCOLIBE
Amendment 661 #
Proposal for a regulation
Recital 56
(56) To enable enforcement of this Regulation and create a level-playing field for operators, and taking into account the different forms of making available of digital products, it is important to ensure that, under all circumstances, a person established in the Union can provide authorities with all the necessary information on the compliance of an AI system. Therefore, prior to making their AI systems available in the Unionplacing any AI system on the Union market, putting it into service or using it, where an importer cannot be identified, provideoperators established outside the Union shallould, by written mandate, appoint an authorised representative established in the Union. legal representative established in the Union. The legal representative should act on behalf of the operator and may be addressed by any competent authorities for the purpose of this Regulation. The designation of such a legal representative does not affect the responsibility or liability of the operator under this Regulation. Such a legal representative should perform its tasks according to the mandate received from the operator, including cooperating with the national supervisory authorities with regard to any action taken to ensure compliance with this Regulation. The designated legal representative should be subject to enforcement proceedings in the event of non-compliance by the operator.
2022/06/13
Committee: IMCOLIBE
Amendment 666 #
Proposal for a regulation
Recital 58 a (new)
(58 a) Whilst risks related to AI systems can arise from the way such systems are designed, risks can also stem from how such AI systems are used. Users of high-risk AI systems therefore play a critical role in ensuring that fundamental rights are protected, complementing the obligations of the provider when developing the AI system. Users are best placed to understand how the high-risk AI system will be used concretely and can therefore identify potential risks that were not foreseen in the development phase, thanks to a more precise knowledge of the context of use and of the people or groups of people likely to be affected, including marginalised and vulnerable groups. In order to efficiently ensure that fundamental rights are protected, the user of high-risk AI systems should therefore carry out a fundamental rights impact assessment on how it intends to use such AI systems, prior to putting them into use. The impact assessment should be accompanied by a detailed plan describing the measures or tools that will help mitigate the risks to fundamental rights identified. When performing this impact assessment, the user should notify the national supervisory authority, the market surveillance authority as well as relevant stakeholders. It should also involve representatives of groups of persons likely to be affected by the AI system in order to collect relevant information which is deemed necessary to perform the impact assessment.
2022/06/13
Committee: IMCOLIBE
Amendment 668 #
Proposal for a regulation
Recital 58 a (new)
(58 a) Risks for people affected by AI systems often arise from uses of an AI system in a specific context and with respect to a specific group of people, and might not always be foreseeable for the provider. Therefore, prior to putting a high-risk AI system into use, the user should conduct an assessment of the system’s impact, on fundamental rights in particular, within the context of use, and publish the results.
2022/06/13
Committee: IMCOLIBE
Amendment 681 #
Proposal for a regulation
Recital 64
(64) Given the more extensive experience of professional pre-market certifiers in the field of product safety and the different nature of risks involved, it is appropriate to limit, at least in an initial phase of application of this Regulation, the scope of application of third-party conformity assessment for high-risk AI systems other than those related to products. Therefore, the conformity assessment of such systems should be carried out as a general rule by the provider under its own responsibility, with the only exception of AI systems intended to be used for the remote biometric identification of persons, for which the involvement of a notified body in the conformity assessment should be foreseen, to the extent they are not prohibited.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 689 #
Proposal for a regulation
Recital 65
(65) In order to carry out third-party conformity assessment for AI systems intended to be used for the remote biometric identification of personsany of the use cases listed in Annex III, notified bodies should be designated under this Regulation by the national competent authorities, provided they are compliant with a set of requirements, notably on independence, competence and absence of conflicts of interests.
2022/06/13
Committee: IMCOLIBE
Amendment 701 #
Proposal for a regulation
Recital 68
(68) Under certain conditions, rapid availability of innovative technologies may be crucial for health and safety of persons and for society as a whole. It is thus appropriate that under exceptional reasons of public security or protection of life and health of natural persons and the protection of industrial and commercial property, Member States could authorise the placing on the market or putting into service of AI systems which have not undergone a conformity assessment.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 704 #
Proposal for a regulation
Recital 69
(69) In order to facilitate the work of the Commission and the Member States in the artificial intelligence field as well as to increase the transparency towards the public, both providers and users of high- risk AI systems other than those related to products falling within the scope of relevant existing Union harmonisation legislation, should be required to register their high-risk AI system in a EU database, to be established and managed by the Commission. Users who are public authorities or European Union institutions, bodies, offices and agencies or users acting on their behalf should also register in the EU database before putting into service or using any AI system. The Commission should be the controller of that database, in accordance with Regulation (EU) 2018/1725 of the European Parliament and of the Council55 . In order to ensure the full functionality of the database, when deployed, the procedure for setting the database should include the elaboration of functional specifications by the Commission and an independent audit report. _________________ 55 Regulation (EU) 2018/1725 of the European Parliament and of the Council of 23 October 2018 on the protection of natural persons with regard to the processing of personal data by the Union institutions, bodies, offices and agencies and on the free movement of such data, and repealing Regulation (EC) No 45/2001 and Decision No 1247/2002/EC (OJ L 295, 21.11.2018, p. 39).
2022/06/13
Committee: IMCOLIBE
Amendment 712 #
Proposal for a regulation
Recital 70
(70) Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems. In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. Moreover, natural persons should be notified when they are exposed to an emotion recognition system or a biometric categorisation system. Such information and notifications should be provided in accessible formats for persons with disabilities. Further, users, who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a person to be authentic, should disclose that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin.
2022/06/13
Committee: IMCOLIBE
Amendment 714 #
Proposal for a regulation
Recital 70
(70) Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems. In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. Moreover, natural persons should be notified when they are exposed to an emotion recognition system or a biometric categorisation system. Such information and notifications should be provided in accessible formats for persons with disabilities. Further, users, who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a person to be authentic, should disclose that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin.
2022/06/13
Committee: IMCOLIBE
Amendment 722 #
Proposal for a regulation
Recital 71
(71) Artificial intelligence is a rapidly developing family of technologies that requires novel forms of regulatory oversight and a safe space for experimentation, while ensuring responsible innovation and integration of appropriate safeguards and risk mitigation measures. To ensure a legal framework that safeguards fundamental rights, including the values enshrined in Article 2 TEU, and is innovation-friendly, future-proof and resilient to disruption, national competentsupervisory authorities from one or more Member States should be encouraged tocould establish artificial intelligence regulatory sandboxes to facilitate the development and testing of innovative AI systems under strict regulatory oversight before these systems are placed on the market or otherwise put into service.
2022/06/13
Committee: IMCOLIBE
Amendment 726 #
Proposal for a regulation
Recital 72
(72) The objectives of the regulatory sandboxes should be to foster AI innovation by establishing a controlled experimentation and testing environment in the development and pre-marketing phase with a view to ensuring compliance of the innovative AI systems with this Regulation and other relevant Union and Member States legislation; to enhance legal certainty for innovators and the competent authorities’ oversight and understanding of the opportunities, emerging risks and the impacts of AI use, and to accelerate access to markets, including by removing barriers for small and medium enterprises (SMEs) and start-ups. To ensure uniform implementation across the Union and economies of scale, it is appropriate to establish common rules for the regulatory sandboxes’ implementation and a framework for cooperation between the relevant authorities involved in the supervision of the sandboxes. This Regulation should provide the legal basis for the use of personal data collected for other purposes for developing certain AI systems in the public interest within the AI regulatory sandbox, in line with Article 6(4) of Regulation (EU) 2016/679, and Article 6 of Regulation (EU) 2018/1725, and without prejudice to Article 4(2) of Directive (EU) 2016/680. Participants in the sandbox should ensure appropriate safeguards and cooperate with the competent authorities, including by following their guidance and acting expeditiously and in good faith to mitigate any high-risks to safety and fundamental rights that may arise during the development and experimentation in the sandbox. The conduct of the participants in the sandbox should be taken into account when competent authorities decide whether to impose an administrative fine under Article 83(2) of Regulation 2016/679 and Article 57 of Directive 2016/680.
2022/06/13
Committee: IMCOLIBE
Amendment 727 #
Proposal for a regulation
Recital 72
(72) The objectives of the regulatory sandboxes should be to foster AI innovation, while safeguarding fundamental rights and the values enshrined in Article 2 TFEU, by establishing a controlled experimentation and testing environment in the development and pre-marketing phase with a view to ensuring compliance of the innovative AI systems with this Regulation and other relevant Union and Member States legislation; to enhance legal certainty for innovators and the competentnational supervisory authorities’ oversight and understanding of the opportunities, emerging risks and the impacts of AI use, and to accelerate access to markets, including by removing barriers for small and medium enterprises (SMEs) and start- ups. To ensure uniform implementation across the Union and economies of scale, it is appropriate to establish common rules for the regulatory sandboxes’ implementation and a framework for cooperation between the relevant authorities involved in the supervision of the sandboxes. This Regulation should provide the legal basis for the use of personal data collected for other purposes for developing certain AI systems in the public interest within the AI regulatory sandbox, in line with Article 6(4) of Regulation (EU) 2016/679, and Article 6 of Regulation (EU) 2018/1725, and without prejudice to Article 4(2) of Directive (EU) 2016/680national supervisory authorities involved in the supervision of the sandboxes. Participants in the sandbox should ensure appropriate safeguards and cooperate with the competentnational supervisory authorities, including by following their guidance and acting expeditiously and in good faith to mitigate any high-risks to safety and fundamental rights that may arise during the development and experimentation in the sandbox. 
The conduct of the participants in the sandbox should be taken into account when competent authorities decide whether to impose an administrative fine under Article 83(2) of Regulation 2016/679 and Article 57 of Directive 2016/680.
2022/06/13
Committee: IMCOLIBE
Amendment 728 #
Proposal for a regulation
Recital 72 a (new)
(72 a) To ensure that Artificial Intelligence leads to socially and environmentally beneficial outcomes, Member States should support and promote research and development of AI in support of socially and environmentally beneficial outcomes by allocating sufficient resources, including public and Union funding, and giving priority access to regulatory sandboxes to projects led by civil society. Such projects should be based on the principle of interdisciplinary cooperation between AI developers, experts on inequality and non- discrimination, accessibility, consumer, environmental, and digital rights, as well as academics.
2022/06/13
Committee: IMCOLIBE
Amendment 738 #
Proposal for a regulation
Recital 76
(76) In order to facilitate a smooth, effective and harmconsisedtent implementation of this Regulation an independent European Artificial Intelligence Board should be established. The Board should be responsible for a number of advisory tasks, including issuing opinions, recommendations, advice or guidance on matters related to the implementation of this Regulation, including on technical specifications or existing standards regarding the requirements established in this Regulation and providing advice to and assisting the Commission on specific questions related to artificial intelligence, including on possible amendments of the annexes, in particular the annex listing high-risk AI systems. To contribute to the effective and harmonised enforcement of this Regulation, the Board should also be able to adopt binding decisions for the settlement of cases involving two or more Member States in which the national supervisory authorities are in disagreement or when it is not clear who the lead national supervisory authority is. The Board should also be able to adopt a binding decision in those cases when a national supervisory authority of a Member State finds that although an AI system is in compliance with this Regulation, it presents a risk to the compliance with obligations under Union or national law intended to protect fundamental rights, the principles of Article 4a, the values as enshrined in Article 2 TEU, the environment, or to other aspects of public interest protection.
2022/06/13
Committee: IMCOLIBE
Amendment 745 #
Proposal for a regulation
Recital 77
(77) Each Member States should a key role in the application and enforcement of this Regulation. In this respect, each Member State should designate one or more national competent authorities for the purpose of supervising the application andestablish or designate a single national supervisory authority to act as the lead authority and be responsible for ensuring the effective coordination between the national competent authorities regarding the implementation of this Regulation. In order to inct should also reprease organisation efficiency on the side of Member States and to set an official point of contact vis-à-vis the public and other counterparts at Member Stant its Member State on the Board. Each national supervisory authority should act with complete aind Union levels, in each Member State one national authority should be designated as national supervisory authorityependence in performing its tasks and exercising its powers in accordance with this Regulation.
2022/06/13
Committee: IMCOLIBE
Amendment 746 #
Proposal for a regulation
Recital 77 a (new)
(77 a) The national supervisory authorities should monitor the application of the provisions pursuant to this Regulation and contribute to its consistent application throughout the Union. For that purpose, the national supervisory authorities should cooperate with each other, with the market surveillance authorities and with the Commission, without the need for any agreement between Member States on the provision of mutual assistance or on such cooperation.
2022/06/13
Committee: IMCOLIBE
Amendment 756 #
Proposal for a regulation
Recital 80 a (new)
(80 a) Where the national market surveillance authority has not taken measures against an infringement to this Regulation, the Commission should be in possession of all the necessary resources, in terms of staffing, expertise, and financial means, for the performance of its tasks instead of the national market surveillance authority under this Regulation. In order to ensure the availability of the resources necessary for the adequate investigation and enforcement measures that the Commission could undertake under this Regulation, the Commission should charge fees on national market surveillance authorities, the level of which should be established on a case-by-case basis. The overall amount of fees charged should be established on the basis of the overall amount of the costs incurred by the Commission to exercise its investigation and enforcement powers under this Regulation. Such an amount should include costs relating to the exercise of the specific powers and tasks connected to Chapter 4 of Title VIII of this Regulation. The external assigned revenues resulting from the fees could be used to finance additional human resources, such as contractual agents and seconded national experts, and other expenditure related to the fulfilment of these tasks entrusted to the Commission by this Regulation.
2022/06/13
Committee: IMCOLIBE
Amendment 764 #
Proposal for a regulation
Recital 84 a (new)
(84 a) In order to strengthen and harmonise administrative penalties for infringements of this Regulation, each national supervisory authority should have the power to impose administrative fines. This Regulation should indicate infringements and the upper limit for setting the related administrative fines, which should be determined by the national supervisory authority in each individual case, taking into account all relevant circumstances of the specific situation, with due regard in particular to the nature, gravity and duration of the infringement and of its consequences and the measures taken to ensure compliance with the obligations under this Regulation and to prevent or mitigate the consequences of the infringement.
2022/06/13
Committee: IMCOLIBE
Amendment 765 #
Proposal for a regulation
Recital 84 a (new)
(84 a) An affected person should also have the right to mandate a not-for-profit body, organisation or association that has been properly constituted in accordance with the law of a Member State, to lodge the complaint on their behalf. To this end, Directive 2020/1828/EC on Representative Actions for the Protection of the Collective Interests of Consumers should be amended to include this Regulation among the provisions of Union law falling under its scope.
2022/06/13
Committee: IMCOLIBE
Amendment 767 #
Proposal for a regulation
Recital 84 b (new)
(84 b) Natural persons, affected by an AI system falling within the scope of this Regulation, should have the right to lodge a complaint against the providers or users of such AI system with a national supervisory authority, if they consider that their fundamental rights, health or safety have been breached. An affected person should also have the right to mandate a not-for-profit body, organisation or association that has been properly constituted in accordance with the law of a Member State, to lodge the complaint on their behalf.
2022/06/13
Committee: IMCOLIBE
Amendment 772 #
Proposal for a regulation
Recital 85
(85) In order to ensure that the regulatory framework can be adapted where necessary, the power to adopt acts in accordance with Article 290 TFEU should be delegated to the Commission to amend the techniques and approaches referred to in Annex I to define AI systems, the Union harmonisation legislation listed in Annex II, the high-risk AI systems listed in Annex III, the provisions regarding technical documentation listed in Annex IV, the content of the EU declaration of conformity in Annex V, and the provisions regarding the conformity assessment procedures in Annex VI and VII and the provisions establishing the high-risk AI systems to which the conformity assessment procedure based on assessment of the quality management system and assessment of the technical documentation should apply. It is of particular importance that the Commission carry out appropriate consultations during its preparatory work, including at expert level, and that those consultations be conducted in accordance with the principles laid down in the Interinstitutional Agreement of 13 April 2016 on Better Law-Making58 . In particular, to ensure equal participation in the preparation of delegated acts, the European Parliament and the Council receive all documents at the same time as Member States’ experts, and their experts systematically have access to meetings of Commission expert groups dealing with the preparation of delegated acts. _________________ 58 OJ L 123, 12.5.2016, p. 1.
2022/06/13
Committee: IMCOLIBE
Amendment 780 #
Proposal for a regulation
Article 1 – paragraph -1 (new)
-1 The purpose of this Regulation is to ensure a high level of protection of health, safety, fundamental rights, the environment and the Union values enshrined in Article 2 TEU from harmful effects of artificial intelligence systems in the Union while promoting innovation.
2022/06/13
Committee: IMCOLIBE
Amendment 784 #
Proposal for a regulation
Article 1 – paragraph 1 – introductory part
The purpose of this Regulation is to ensure a high level of protection of fundamental rights, health, safety and the environment from harmful effects of the use of artificial intelligence systems in the Union while enhancing innovation. This Regulation lays down:
2022/06/13
Committee: IMCOLIBE
Amendment 789 #
Proposal for a regulation
Article 1 – paragraph 1 – point a
(a) harmonised rules for the development, placing on the market, the putting into service and the use of artificial intelligence systems (‘AI systems’) in the Union;
2022/06/13
Committee: IMCOLIBE
Amendment 790 #
Proposal for a regulation
Article 1 – paragraph 1 – point a a (new)
(a a) principles applicable to all AI systems;
2022/06/13
Committee: IMCOLIBE
Amendment 792 #
Proposal for a regulation
Article 1 – paragraph 1 – point c a (new)
(c a) harmonised rules on high-risk AI systems to ensure a high level of trustworthiness and protection of fundamental rights, health and safety, the Union values enshrined in Article 2 TEU and the environment;
2022/06/13
Committee: IMCOLIBE
Amendment 793 #
Proposal for a regulation
Article 1 – paragraph 1 – point c a (new)
(c a) harmonised rules on high-risk AI systems to ensure a high level of trustworthiness and protection of fundamental rights, health and safety
2022/06/13
Committee: IMCOLIBE
Amendment 794 #
Proposal for a regulation
Article 1 – paragraph 1 – point d
(d) harmonised transparency rules for AI systems intended to interact with natural persons, emotion recognition systems and biometric categorisation systems, and AI systems used to generate or manipulate image, audio or video content;
2022/06/13
Committee: IMCOLIBE
Amendment 811 #
Proposal for a regulation
Article 1 – paragraph 1 a (new)
This Regulation shall be applied taking due account of the precautionary principle.
2022/06/13
Committee: IMCOLIBE
Amendment 816 #
Proposal for a regulation
Article 2 – paragraph 1 – point a
(a) provideoperators placing on the market or putting into service AI systems in the Union, irrespective of whether those provideoperators are established within the Union or in a third country;
2022/06/13
Committee: IMCOLIBE
Amendment 821 #
Proposal for a regulation
Article 2 – paragraph 1 – point b
(b) users of AI systems that are located within the Union;
2022/06/13
Committee: IMCOLIBE
Amendment 825 #
Proposal for a regulation
Article 2 – paragraph 1 – point c
(c) providers and users of AI systems that are located in a third country, where the output produced by the system is used in the Union or affects natural persons within the Union;
2022/06/13
Committee: IMCOLIBE
Amendment 831 #
Proposal for a regulation
Article 2 – paragraph 1 – point c a (new)
(c a) natural persons, affected by the use of an AI system, who are in the Union;
2022/06/13
Committee: IMCOLIBE
Amendment 832 #
Proposal for a regulation
Article 2 – paragraph 1 – point c a (new)
(c a) natural persons, affected by the use of an AI system, who are in the Union;
2022/06/13
Committee: IMCOLIBE
Amendment 835 #
Proposal for a regulation
Article 2 – paragraph 1 – point c b (new)
(c b) providers placing on the market or putting into service AI systems outside the Union where the provider is located within the Union;
2022/06/13
Committee: IMCOLIBE
Amendment 838 #
Proposal for a regulation
Article 2 – paragraph 1 a (new)
1 a. providers placing on the market or putting into service AI systems in a third country where the provider or distributor of such AI systems originates from the Union;
2022/06/13
Committee: IMCOLIBE
Amendment 840 #
Proposal for a regulation
Article 2 – paragraph 1 a (new)
1 a. This Regulation shall apply to Union institutions, offices, bodies and agencies when acting as an operator of an AI system.
2022/06/13
Committee: IMCOLIBE
Amendment 867 #
Proposal for a regulation
Article 2 – paragraph 3
3. This Regulation shall not apply to AI systems developed or used exclusively for military purposes. However, this Regulation shall apply to AI systems which are developed or used as dual-use items, as defined in Article 2, point (1) of Regulation (EU) 2021/821 of the European Parliament and of the Council1a. _________________ 1a Regulation (EU) 2021/821 of the European Parliament and of the Council of 20 May 2021 setting up a Union regime for the control of exports, brokering, technical assistance, transit and transfer of dual-use items (OJ L 206, 11.6.2021, p. 1).
2022/06/13
Committee: IMCOLIBE
Amendment 872 #
Proposal for a regulation
Article 2 – paragraph 3 a (new)
3 a. Union law on the protection of personal data, privacy and the confidentiality of communications applies to personal data processed in connection with the rights and obligations laid down in this Regulation. This Regulation shall not affect Regulations (EU) 2016/679, (EU) 2018/1725 or Directives 2002/58/EC and (EU) 2016/680.
2022/06/13
Committee: IMCOLIBE
Amendment 876 #
Proposal for a regulation
Article 2 – paragraph 3 a (new)
3 a. This Regulation shall apply to Union institutions, offices, bodies and agencies when acting as an operator of an AI system.
2022/06/13
Committee: IMCOLIBE
Amendment 878 #
Proposal for a regulation
Article 2 – paragraph 4
4. This Regulation shall not apply to public authorities in a third country nor to international organisations falling within the scope of this Regulation pursuant to paragraph 1, where those authorities or organisations use AI systems in the framework of international agreements for law enforcement and judicial cooperation with the Union or with one or more Member States.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 882 #
Proposal for a regulation
Article 2 – paragraph 4
4. This Regulation shall not apply to public authorities in a third country nor to international organisations falling within the scope of this Regulation pursuant to paragraph 1, where those authorities or organisations use AI systems in the framework of international agreements for law enforcement and judicial cooperation with the Union or with one or more Member States.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 890 #
Proposal for a regulation
Article 2 – paragraph 5 a (new)
5 a. This Regulation shall not affect community law on social policy.
2022/06/13
Committee: IMCOLIBE
Amendment 891 #
Proposal for a regulation
Article 2 – paragraph 5 b (new)
5 b. This Regulation shall not affect national labour law and practice or collective agreements, and it shall not preclude national legislation to ensure the protection of workers’ rights in respect of the use of AI systems by employers, including where this implies introducing more stringent obligations than those laid down in this Regulation.
2022/06/13
Committee: IMCOLIBE
Amendment 897 #
Proposal for a regulation
Article 2 – paragraph 5 c (new)
5 c. This Regulation is without prejudice to the rules laid down by other Union legal acts regulating other aspects of AI systems as well as the national rules aimed at enforcing or, as the case may be, implementing these acts, in particular Union law on consumer protection and product safety, including Regulation (EU) 2017/2394, Regulation (EU) 2019/1020, Directive 2001/95/EC on general product safety and Directive 2013/11/EU.
2022/06/13
Committee: IMCOLIBE
Amendment 920 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1
(1) ‘artificial intelligence system’ (AI system) means software that is developed with can perceive, learn, reasone or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives,del based on machine and/or human based inputs, to generate outputs such as content, hypotheses, predictions, recommendations, or decisions influencing the real or virtual environments they interact with;
2022/06/13
Committee: IMCOLIBE
Amendment 921 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1
(1) 'artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listcan for example perceive, learn, reason or model based ion Annex I and can, for a given set of human-defined objectives,machine and/or human based inputs, to generate outputs such as content, hypotheses, predictions, recommendations, or decisions influencing the real or virtual environments they interact with;
2022/06/13
Committee: IMCOLIBE
Amendment 953 #
Proposal for a regulation
Article 3 – paragraph 1 – point 5
(5) ‘authorisedlegal representative’ means any natural or legal person established in the Union who has received a written mandate from a provider of an AI system to, respectively, perform and carry out on its behalf any of the obligations and procedures established by this Regulation;
2022/06/13
Committee: IMCOLIBE
Amendment 959 #
Proposal for a regulation
Article 3 – paragraph 1 – point 8
(8) ‘operator’ means the provider, the user, the authorisedlegal representative, the importer and the distributor;
2022/06/13
Committee: IMCOLIBE
Amendment 960 #
Proposal for a regulation
Article 3 – paragraph 1 – point 8 a (new)
(8 a) ‘affected person’ means any natural person or a group of persons who are subject to or affected by an AI system;
2022/06/13
Committee: IMCOLIBE
Amendment 961 #
Proposal for a regulation
Article 3 – paragraph 1 – point 8 a (new)
(8 a) ‘affected person’ means any natural person or group of persons who are subject to or affected by an AI system;
2022/06/13
Committee: IMCOLIBE
Amendment 978 #
Proposal for a regulation
Article 3 – paragraph 1 – point 13
(13) ‘reasonably foreseeable misuse’ means the use of an AI system in a way that is not in accordance with its intended purpose, but which may result from reasonably foreseeable human behaviour or interaction with other systems, and with other AI systems;
2022/06/13
Committee: IMCOLIBE
Amendment 983 #
Proposal for a regulation
Article 3 – paragraph 1 – point 14
(14) ‘safety component of a product or system’ means a component of a product or of a system which fulfils a safety or security function for that product or system or the failure or malfunctioning of which endangers the fundamental rights, health andor safety of persons, or propertywhich damages property or the environment;
2022/06/13
Committee: IMCOLIBE
Amendment 984 #
Proposal for a regulation
Article 3 – paragraph 1 – point 14
(14) ‘safety component of a product or system’ means a component of a product or of a system which fulfils a safety or security function for that product or system or the failure or malfunctioning of which endangers the health and, safety of persons or property, fundamental rights of persons or which damages property, or the environment;
2022/06/13
Committee: IMCOLIBE
Amendment 999 #
Proposal for a regulation
Article 3 – paragraph 1 – point 20
(20) ‘conformity assessment’ means the process of verifydemonstrating whether the requirements set out in Title III, Chapter 2 of this Regulation relating to an AI system have been fulfilled;
2022/06/13
Committee: IMCOLIBE
Amendment 1019 #
Proposal for a regulation
Article 3 – paragraph 1 – point 30
(30) ‘validation data’ means data used for providing an evaluation of the trained AI system and for tuning its non-learnable parameters and its learning process, among other things, in order to prevent underfitting or overfitting; whereas the validation dataset can beis a separate dataset or part of the training dataset, either as a fixed or variable split;
2022/06/13
Committee: IMCOLIBE
Amendment 1022 #
Proposal for a regulation
Article 3 – paragraph 1 – point 33
(33) ‘biometric data’ means personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic dataas defined in Article 4, point (14) of Regulation (EU) 2016/679;
2022/06/13
Committee: IMCOLIBE
Amendment 1029 #
Proposal for a regulation
Article 3 – paragraph 1 – point 33 a (new)
(33 a) ‘special categories of personal data’ means the categories of personal data referred to in Article 9(1) of Regulation (EU) 2016/679;
2022/06/13
Committee: IMCOLIBE
Amendment 1030 #
Proposal for a regulation
Article 3 – paragraph 1 – point 33 b (new)
(33 b) ‘biometric identification’ means the use of AI-systems for the purpose of the automated recognition of physical, physiological, behavioural, and psychological human features such as the face, eye movement, facial expressions, body shape, voice, speech, gait, posture, heart rate, blood pressure, odour, keystrokes, psychological reactions (anger, distress, grief, etc.) for the purpose of verification of an individual’s identity by comparing biometric data of that individual to stored biometric data of individuals in a database (one-to-many identification);
2022/06/13
Committee: IMCOLIBE
Amendment 1036 #
Proposal for a regulation
Article 3 – paragraph 1 – point 34
(34) ‘emotion recognition system’ means an AI system for the purpose of identifying or inferring emotions or intentions of natural personthoughts, states of mind or intentions of individuals or groups on the basis of their biometric and biometric-based data;
2022/06/13
Committee: IMCOLIBE
Amendment 1039 #
Proposal for a regulation
Article 3 – paragraph 1 – point 35
(35) ‘biometric categorisation system’ means an AI system for the purpose of assigning natural persons to specific categories, such as gender, sex, age, hair colour, eye colour, tattoos, ethnic origin or sexual or political orientation, on the basis of their biometric data; social origin, health, mental or physical ability, behavioural or personality traits, language, religion, or membership of a national minority, or sexual or political orientation, on the basis of their biometric or biometric-based data, or which can be reasonably inferred from such data.
2022/06/13
Committee: IMCOLIBE
Amendment 1040 #
Proposal for a regulation
Article 3 – paragraph 1 – point 35
(35) ‘biometric categorisation system’ means an AI system for the purpose of assigning natural persons to specific categories, such as gender, sex, age, hair colour, eye colour, tattoos, ethnic origin or sexual or political orientation, on the basis of their biometric social origin, health, mental or physical ability,behavioural or personality traits, language, religion, or membership of a national minority, or sexual or political orientation, on the basis of their biometric or biometric-based data, or which can be reasonably inferred from such data;
2022/06/13
Committee: IMCOLIBE
Amendment 1057 #
Proposal for a regulation
Article 3 – paragraph 1 – point 36
(36) ‘remote biometric identification system’ means an AI system for the purpose of identifying natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database, and without prior knowledge of the user of the AI system whether the person will be present and can be identified;
2022/06/13
Committee: IMCOLIBE
Amendment 1076 #
Proposal for a regulation
Article 3 – paragraph 1 – point 41
(41) ‘law enforcement’ means activities carried out by law enforcement authorities solely for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security;
2022/06/13
Committee: IMCOLIBE
Amendment 1077 #
Proposal for a regulation
Article 3 – paragraph 1 – point 42
(42) ‘national supervisory authority’ means thean independent public authority to which a Member State assigns the responsibility for the implementation and application of this Regulation, for coordinating the activities entrusted to that Member State, for acting as the single contact point for the Commission, and for representing the Member State at the European Artificial Intelligence Board;
2022/06/13
Committee: IMCOLIBE
Amendment 1080 #
Proposal for a regulation
Article 3 – paragraph 1 – point 43
(43) ‘national competent authority’ means the national supervisory authority, the notifying authority and the market surveillance authority;
2022/06/13
Committee: IMCOLIBE
Amendment 1089 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – point a
(a) the death of a person or serious damage to a person’s health, to property or the environment,
2022/06/13
Committee: IMCOLIBE
Amendment 1095 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – point b a (new)
(b a) a breach of obligations under Union law intended to protect fundamental rights;
2022/06/13
Committee: IMCOLIBE
Amendment 1098 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 a (new)
(44 a) ‘AI systems presenting a risk’ means an AI system having the potential to affect adversely fundamental rights, health and safety of persons in general, including in the workplace, protection of consumers, the environment, public security, the values enshrined in Article 2 TEU and other public interests, that are protected by the applicable Union harmonisation legislation, to a degree which goes beyond that considered reasonable and acceptable in relation to its intended purpose or under the normal or reasonably foreseeable conditions of use of the system concerned, including the duration of use and, where applicable, its putting into service, installation and maintenance requirements.
2022/06/13
Committee: IMCOLIBE
Amendment 1107 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 a (new)
(44 a) 'near miss' means any incident that, if the circumstances were slightly different, would have resulted in a 'serious incident';
2022/06/13
Committee: IMCOLIBE
Amendment 1112 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 b (new)
(44 b) ‘artificial intelligence system with indeterminate uses’ means an artificial intelligence system without specific and limited provider-defined purposes;
2022/06/13
Committee: IMCOLIBE
Amendment 1114 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 b (new)
(44 b) ‘child’ means any person below the age of 18 years.
2022/06/13
Committee: IMCOLIBE
Amendment 1117 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 c (new)
(44 c) ‘profiling’ means any form of automated processing of personal data as defined in point (4) of Article 4 of Regulation (EU) 2016/679;
2022/06/13
Committee: IMCOLIBE
Amendment 1131 #
Proposal for a regulation
Article 4
Amendments to Annex I The Commission is empowered to adopt delegated acts in accordance with Article 73 to amend the list of techniques and approaches listed in Annex I, in order to update that list to market and technological developments on the basis of characteristics that are similar to the techniques and approaches listed therein.rticle 4 deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1132 #
Proposal for a regulation
Article 4
Amendments to Annex I The Commission is empowered to adopt delegated acts in accordance with Article 73 to amend the list of techniques and approaches listed in Annex I, in order to update that list to market and technological developments on the basis of characteristics that are similar to the techniques and approaches listed therein.rticle 4 deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1143 #
Proposal for a regulation
Article 4 a (new)
Article 4 a Principles applicable to all AI systems All operators of AI systems shall respect the following principles: 1. Operators of AI systems shall respect fundamental rights and the Union values, as enshrined in Article 2 TEU, throughout the AI system lifecycle. To ensure this, operators shall implement mechanisms and safeguards that are appropriate to the context and consistent with the state of the art (‘fairness’) 2. Operators shall be accountable for the proper functioning of AI systems and for the respect of the fairness principle, based on their roles, the context, and consistent with the state of the art. Operators shall ensure the proper functioning, throughout their lifecycle, of the AI systems that they design, develop, operate or use, in accordance with their role and applicable regulatory framework, and by demonstrating this through their actions and decision-making processes (‘accountability’) 3. Operators shall commit to transparency and responsible disclosure regarding AI systems. To this end, they shall provide meaningful information, appropriate to the context, and consistent with the state of the art: (a) to foster a general understanding of AI systems, (b) to make affected persons aware that they are interacting with an AI system and an explanation thereof, (c) to make affected persons aware about their rights conferred in this Regulation, (d) to enable those affected by an AI system to understand the outcome, and (e) to enable those adversely affected by an AI system to challenge its outcome based on plain and easy-to-understand information on the factors, and the logic that served as the basis for the prediction, recommendation or decision (‘transparency and explainability’). 4. Operators shall ensure that AI systems are robust, secure and safe throughout their entire lifecycle so that, in conditions of normal use, foreseeable use or misuse, or other adverse conditions, they function appropriately and do not pose unreasonable risk.
Operators shall ensure, based on their roles and the context, traceability including in relation to datasets, processes and decisions made during the AI system lifecycle, to enable the analysis of the outcomes of the AI system and responses to inquiry, appropriate to the context and consistent with the state of the art. Operators shall, based on their roles, the context, and their ability to act, apply a systematic risk management approach to each phase of the AI system lifecycle on a continuous basis to address the risks related to AI systems, including privacy, protection of personal data, digital security, safety and bias (‘privacy and security’) 5. Operators shall proactively engage in pursuit of beneficial outcomes for people, societies and the planet, such as advancing inclusion, reducing economic, social, gender and other inequalities, and protecting natural environments, therefore invigorating inclusive growth, sustainable development and well-being (‘social benefit’).
2022/06/13
Committee: IMCOLIBE
Amendment 1145 #
Proposal for a regulation
Article 4 a (new)
Article 4 a Principles applicable to all AI systems All operators of AI systems shall respect the following principles: 1. Operators of AI systems shall respect fundamental rights and the Union values, as enshrined in Article 2 TEU, throughout the AI system lifecycle. To ensure this, operators shall implement mechanisms and safeguards that are appropriate to the context and consistent with the state of the art (‘fairness’) 2. Operators shall be accountable for the proper functioning of AI systems and for the respect of the fairness principle, based on their roles, the context, and consistent with the state of the art. Operators shall ensure the proper functioning, throughout their lifecycle, of the AI systems that they design, develop, operate or deploy, in accordance with their role and applicable regulatory framework, and by demonstrating this through their actions and decision-making processes (‘accountability’) 3. Operators shall commit to transparency and responsible disclosure regarding AI systems. To this end, they shall provide meaningful information, appropriate to the context, and consistent with the state of the art: (a) to foster a general understanding of AI systems, (b) to make affected persons aware that they are interacting with an AI system and an explanation thereof, (c) to enable those affected by an AI system to understand the outcome, and (d) to enable those adversely affected by an AI system to challenge its outcome based on plain and easy-to-understand information on the factors, and the logic that served as the basis for the prediction, recommendation or decision (‘transparency and explainability’) 4. Operators shall ensure that AI systems are robust, secure and safe throughout their entire lifecycle so that, in conditions of normal use, foreseeable use or misuse, or other adverse conditions, they function appropriately and do not pose unreasonable risk.
Operators shall ensure, based on their roles and the context, traceability including in relation to datasets, processes and decisions made during the AI system lifecycle, to enable the analysis of the outcomes of the AI system and responses to inquiry, appropriate to the context and consistent with the state of the art. Operators shall, based on their roles, the context, and their ability to act, apply a systematic risk management approach to each phase of the AI system lifecycle on a continuous basis to address the risks related to AI systems, including privacy, protection of personal data, digital security, safety and bias (‘privacy and security’) 5. Operators shall proactively engage in pursuit of beneficial outcomes for people, societies and the planet, such as advancing inclusion, reducing economic, social, gender and other inequalities, and protecting natural environments, therefore invigorating inclusive growth, sustainable development and well-being (‘social benefit’).
2022/06/13
Committee: IMCOLIBE
Amendment 1148 #
Proposal for a regulation
Article 4 b (new)
Article 4 b Accessibility Requirements for providers and users of AI systems 1. Providers of AI systems shall ensure that their systems are accessible in accordance with the accessibility requirements set out in Section I, Section II, Section VI, and Section VII of Annex I of Directive (EU) 2019/882 prior to those systems being placed on the market or put into service. 2. Users of AI systems shall use such systems in accordance with the accessibility requirements set out in Section III, Section IV, Section VI, and Section VII of Annex I of Directive (EU) 2019/882. 3. Users of AI systems shall prepare the necessary information in accordance with Annex V of Directive (EU) 2019/882. Without prejudice to Annex VIII of this Regulation, the information shall be made available to the public in an accessible manner for persons with disabilities and be kept for as long as the AI system is in use. 4. Without prejudice to the rights of affected persons to information about the use and functioning of AI systems, transparency obligations for providers and users of AI, obligations to ensure consistent and meaningful public transparency under this Regulation, providers and users of AI systems shall ensure that information, forms and measures provided pursuant to this Regulation are made available in such a manner that they are easy to find, easy to understand, and accessible in accordance with Annex I to Directive 2019/882. 5. Users of AI systems shall ensure that procedures are in place so that the use of AI systems remains in conformity with the applicable accessibility requirements. Changes in the characteristics of the use, changes in applicable accessibility requirements and changes in the harmonised standards or in technical specifications by reference to which use of an AI system is declared to meet the accessibility requirements shall be adequately taken into account by the user. 6. 
In the case of non-conformity, users of AI systems shall take the corrective measures necessary to conform with the applicable accessibility requirements. When necessary, and at the request of the user, the provider of the AI system in question shall cooperate with the user to bring the use of the AI system into compliance with applicable accessibility requirements. 7. Furthermore, where the use of an AI system is not compliant with applicable accessibility requirements, the user shall immediately inform the competent national authorities of the Member States in which the system is being used, to that effect, giving details, in particular, of the non-compliance and of any corrective measures taken. They shall cooperate with the authority, at the request of that authority, on any action taken to bring the use of the AI system into compliance with applicable accessibility requirements. 8. AI systems and the use thereof, which are in conformity with harmonised technical standards or parts thereof derived from Directive (EU) 2019/882 the references of which have been published in the Official Journal of the European Union, shall be presumed to be in conformity with the accessibility requirements of this Regulation in so far as those standards or parts thereof cover those requirements. 9. AI systems and the use thereof, which are in conformity with the technical specifications or parts thereof adopted for the Directive (EU) 2019/882 shall be presumed to be in conformity with the accessibility requirements of this Regulation in so far as those technical specifications or parts thereof cover those requirements.
2022/06/13
Committee: IMCOLIBE
Amendment 1151 #
Proposal for a regulation
Article 4 b (new)
Article 4 b A right to explanation of individual decision-making 1. A decision which is taken by the user on the basis of the output from an AI system and which produces legal effects on an affected person, or which similarly significantly affects that person, shall be accompanied by a meaningful explanation of: (a) the role of the AI system in the decision-making process; (b) the logic involved, the main parameters of the decision-making, and their relative weight; and (c) the input data relating to the affected person and each of the main parameters on the basis of which the decision was made. For information on input data under point c) to be meaningful, it must include an easily understandable description of inferences drawn from other data, if it is the inference that relates to the main parameter. 2. For the purpose of Paragraph 1, it shall be prohibited for the law enforcement authorities or the judiciary in the Union to use AI systems that are considered closed or labelled as proprietary by the providers or the distributors; 3. The explanation within the meaning of paragraph 1 shall be provided at the time when the decision is communicated to the affected person.
2022/06/13
Committee: IMCOLIBE
Amendment 1153 #
Proposal for a regulation
Article 4 c (new)
Article 4 c Right to receive an explanation of individual decision-making 1. A decision which is taken by the user on the basis of the output from an AI system and which produces legal effects on an affected person, or which similarly significantly affects that person, shall be accompanied by a meaningful explanation of (a) the role of the AI system in the decision-making process; (b) the logic involved, the main parameters of the decision-making, and their relative weight; and (c) the input data relating to the affected person and each of the main parameters on the basis of which the decision was made. For information on input data under point c) to be meaningful, it must include an easily understandable description of inferences drawn from other data, if it is the inference that relates to the main parameter. 2. For the purpose of Paragraph 1, it shall be prohibited for the law enforcement authorities or the judiciary in the Union to use AI systems that are considered closed or labelled as proprietary by the providers or the distributors; 3. The explanation within the meaning of paragraph 1 shall be provided at the time when the decision is communicated to the affected person.
2022/06/13
Committee: IMCOLIBE
Amendment 1154 #
Proposal for a regulation
Article 4 d (new)
Article 4 d Right not to be subject to non-compliant AI systems Natural persons shall have the right not to be subject to AI systems that: (a) pose an unacceptable risk pursuant to Article 5, or (b) otherwise do not comply with the requirements of this Regulation.
2022/06/13
Committee: IMCOLIBE
Amendment 1157 #
Proposal for a regulation
Article 5 – paragraph 1 – point a
(a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviourtechniques with the effect or likely effect of materially distorting a person’s behaviour by appreciably impairing the persons’ ability to make an informed decision, thereby causing the person to take a decision that they would not have taken otherwise, in a manner that causes or is likely to cause that person or another person, or a group of persons material or non-material harm, including physical or, psychological or economic harm;
2022/06/13
Committee: IMCOLIBE
Amendment 1160 #
Proposal for a regulation
Article 5 – paragraph 1 – point a
(a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviourtechniques with the effect or the likely effect of materially distorting the behaviour of a person by impairing their ability to make an autonomous decision, thereby causing them to take a decision that they would not have taken otherwise, in a manner that causes or is likely to cause that person or another persons material or non-material harm, including physical or, psychological or economic harm;
2022/06/13
Committee: IMCOLIBE
Amendment 1173 #
Proposal for a regulation
Article 5 – paragraph 1 – point a a (new)
(a a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques.
2022/06/13
Committee: IMCOLIBE
Amendment 1176 #
Proposal for a regulation
Article 5 – paragraph 1 – point b
(b) the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities ofor may be reasonably foreseen to exploit vulnerabilities of children or characteristics of a person or a specific group of persons due to their age, physical or mental disability, in order togender, sexual orientation, ethnicity, race, origin, and religion or social or economic situation, with the effect or likely effect of materially distorting the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person material or non-material harm, including physical or, psychological or economic harm;
2022/06/13
Committee: IMCOLIBE
Amendment 1177 #
Proposal for a regulation
Article 5 – paragraph 1 – point b
(b) the placing on the market, putting into service or use of an AI system that exploits any ofor may be reasonably foreseen to exploit the vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order tosex, gender, sexual orientation, ethnic or social origin, race, religion or belief, or social or economic situation, with the effect or the likely effect of materially distorting the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another persons material or non-material harm, including physical or, psychological or economic harm;
2022/06/13
Committee: IMCOLIBE
Amendment 1191 #
Proposal for a regulation
Article 5 – paragraph 1 – point c – introductory part
(c) the placing on the market, putting into service or use of AI systems by public authorities or on their behalf for the evaluation or classification of the trustworthiness of natural persons over a certain period of time basfor the scoring, evaluation or classification of natural persons or groups related ton their social behaviour or known or predicted personal or personality characteristics, with the social score leading to either or both of the following:education, employment, housing, socioeconomic situation, health, reliability, social behaviour, location or movements;
2022/06/13
Committee: IMCOLIBE
Amendment 1201 #
Proposal for a regulation
Article 5 – paragraph 1 – point c – point i
(i) detrimental or unfavourable treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1202 #
Proposal for a regulation
Article 5 – paragraph 1 – point c – point i
(i) detrimental or unfavourable treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1212 #
Proposal for a regulation
Article 5 – paragraph 1 – point c – point ii
(ii) detrimental or unfavourable treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behaviour or its gravity;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1217 #
Proposal for a regulation
Article 5 – paragraph 1 – point c – point ii
(ii) detrimental or unfavourable treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behaviour or its gravity;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1222 #
Proposal for a regulation
Article 5 – paragraph 1 – point c a (new)
(c a) the placing on the market, putting into service or use of an AI system for making individual or place-based risk assessments of natural persons in order to assess the risk of a natural person for offending or reoffending or for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of a natural person or on assessing personality traits and characteristics or past criminal behaviour of natural persons or groups of natural persons;
2022/06/13
Committee: IMCOLIBE
Amendment 1225 #
Proposal for a regulation
Article 5 – paragraph 1 – point c a (new)
(c a) the placing on the market, putting into service, or use of AI systems intended to be used as polygraphs and similar tools to detect the emotional state, trustworthiness or related characteristics of a natural person;
2022/06/13
Committee: IMCOLIBE
Amendment 1237 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – introductory part
(d) the use of ‘real-time’ remote biometric identification systems in publicly or privately accessible spaces for the purpose of law enforcement, unless and in as far as such use is strictly necessary for one of the following objectives:, both online and offline.
2022/06/13
Committee: IMCOLIBE
Amendment 1244 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – introductory part
(d) the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless and in as far as such use is strictly necessary for one of the following objectivplacing or making available on the market, the putting into service or use of remote biometric identification systems that are or maybe used in publicly or privately accessible spaces, as well as online spaces:;
2022/06/13
Committee: IMCOLIBE
Amendment 1246 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point i
(i) the targeted search for specific potential victims of crime, including missing children;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1252 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point i
(i) the targeted search for specific potential victims of crime, including missing children;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1257 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point ii
(ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1265 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point ii
(ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1272 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point iii
(iii) the detection, localisation, identification or prosecution of a perpetrator or suspect of a criminal offence referred to in Article 2(2) of Council Framework Decision 2002/584/JHA62 and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years, as determined by the law of that Member State. _________________ 62 Council Framework Decision 2002/584/JHA of 13 June 2002 on the European arrest warrant and the surrender procedures between Member States (OJ L 190, 18.7.2002, p. 1).deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1277 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point iii
(iii) the detection, localisation, identification or prosecution of a perpetrator or suspect of a criminal offence referred to in Article 2(2) of Council Framework Decision 2002/584/JHA62 and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years, as determined by the law of that Member State. _________________ 62 Council Framework Decision 2002/584/JHA of 13 June 2002 on the European arrest warrant and the surrender procedures between Member States (OJ L 190, 18.7.2002, p. 1).deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1288 #
Proposal for a regulation
Article 5 – paragraph 1 – point d a (new)
(d a) the creation or expansion of biometric databases through the untargeted or generalised scraping of biometric data from social media profiles or CCTV footage, or equivalent methods;
2022/06/13
Committee: IMCOLIBE
Amendment 1298 #
Proposal for a regulation
Article 5 – paragraph 1 – point d b (new)
(d b) the use of remote biometric categorisation systems in publicly accessible spaces;
2022/06/13
Committee: IMCOLIBE
Amendment 1300 #
Proposal for a regulation
Article 5 – paragraph 1 – point d c (new)
(d c) the placing on the market, putting into service or use of biometric categorisation systems, or other AI systems, that categorise natural persons according to sensitive or protected attributes or characteristics, or infer those attributes or characteristics. Sensitive attributes or characteristics include, but are not limited to: (i) Gender & gender identity (ii) Race (iii) Ethnic origin (iv) Migration or citizenship status (v) Political orientation (vi) Sexual orientation (vii) Religion (viii) Disability (ix) Or any other grounds on which discrimination is prohibited under Article 21 of the EU Charter of Fundamental Rights as well as under Article 9 of the Regulation (EU) 2016/679;
2022/06/13
Committee: IMCOLIBE
Amendment 1307 #
Proposal for a regulation
Article 5 – paragraph 1 – point d d (new)
(d d) the placing on the market, putting into service or use of an AI system for making predictions, profiles or risk assessments based on data analysis or profiling of natural persons, groups or locations, for the purpose of predicting the occurrence or reoccurrence of an actual or potential criminal offence(s) or other criminalised social behaviour;
2022/06/13
Committee: IMCOLIBE
Amendment 1316 #
Proposal for a regulation
Article 5 – paragraph 1 – point d e (new)
(d e) the use of private facial recognition or other private biometric databases for the purpose of law enforcement;
2022/06/13
Committee: IMCOLIBE
Amendment 1319 #
Proposal for a regulation
Article 5 – paragraph 1 – point d f (new)
(d f) the placing on the market, putting into service, or use of AI systems that are aimed at automating judicial or similarly intrusive binding decisions by state actors;
2022/06/13
Committee: IMCOLIBE
Amendment 1322 #
Proposal for a regulation
Article 5 – paragraph 1 – point d g (new)
(d g) the placing on the market, putting into service or the use of AI systems by or on behalf of competent authorities in migration, asylum or border control management, to profile an individual or assess a risk, including a security risk, a risk of irregular immigration, or a health risk, posed by a natural person who intends to enter or has entered the territory of a Member State, on the basis of personal or sensitive data, known or predicted, except for the sole purpose of identifying specific care and support needs;
2022/06/13
Committee: IMCOLIBE
Amendment 1328 #
Proposal for a regulation
Article 5 – paragraph 1 – point d h (new)
(d h) the placing on the market, putting into service or the use of AI systems, by or on behalf of competent authorities in migration, asylum and border control management, to forecast or predict individual or collective movement for the purpose of, or in any way reasonably foreseeably leading to, the prohibiting, curtailing or preventing migration or border crossings;
2022/06/13
Committee: IMCOLIBE
Amendment 1332 #
Proposal for a regulation
Article 5 – paragraph 1 – point d i (new)
(d i) the placing on the market, putting into service or the use of AI systems intended to assist competent authorities for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the eligibility of the natural persons applying for a status;
2022/06/13
Committee: IMCOLIBE
Amendment 1350 #
Proposal for a regulation
Article 5 – paragraph 2
2. The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement for any of the objectives referred to in paragraph 1 point d) shall take into account the following elements: (a) the nature of the situation giving rise to the possible use, in particular the seriousness, probability and scale of the harm caused in the absence of the use of the system; (b) the consequences of the use of the system for the rights and freedoms of all persons concerned, in particular the seriousness, probability and scale of those consequences. In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement for any of the objectives referred to in paragraph 1 point d) shall comply with necessary and proportionate safeguards and conditions in relation to the use, in particular as regards the temporal, geographic and personal limitations.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1351 #
Proposal for a regulation
Article 5 – paragraph 2
2. The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement for any of the objectives referred to in paragraph 1 point d) shall take into account the following elements: (a) the nature of the situation giving rise to the possible use, in particular the seriousness, probability and scale of the harm caused in the absence of the use of the system; (b) the consequences of the use of the system for the rights and freedoms of all persons concerned, in particular the seriousness, probability and scale of those consequences. In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement for any of the objectives referred to in paragraph 1 point d) shall comply with necessary and proportionate safeguards and conditions in relation to the use, in particular as regards the temporal, geographic and personal limitations.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1368 #
Proposal for a regulation
Article 5 – paragraph 3
3. As regards paragraphs 1, point (d) and 2, each individual use for the purpose of law enforcement of a ‘real-time’ remote biometric identification system in publicly accessible spaces shall be subject to a prior authorisation granted by a judicial authority or by an independent administrative authority of the Member State in which the use is to take place, issued upon a reasoned request and in accordance with the detailed rules of national law referred to in paragraph 4. However, in a duly justified situation of urgency, the use of the system may be commenced without an authorisation and the authorisation may be requested only during or after the use. The competent judicial or administrative authority shall only grant the authorisation where it is satisfied, based on objective evidence or clear indications presented to it, that the use of the ‘real- time’ remote biometric identification system at issue is necessary for and proportionate to achieving one of the objectives specified in paragraph 1, point (d), as identified in the request. In deciding on the request, the competent judicial or administrative authority shall take into account the elements referred to in paragraph 2.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1370 #
Proposal for a regulation
Article 5 – paragraph 3
3. As regards paragraphs 1, point (d) and 2, each individual use for the purpose of law enforcement of a ‘real-time’ remote biometric identification system in publicly accessible spaces shall be subject to a prior authorisation granted by a judicial authority or by an independent administrative authority of the Member State in which the use is to take place, issued upon a reasoned request and in accordance with the detailed rules of national law referred to in paragraph 4. However, in a duly justified situation of urgency, the use of the system may be commenced without an authorisation and the authorisation may be requested only during or after the use. The competent judicial or administrative authority shall only grant the authorisation where it is satisfied, based on objective evidence or clear indications presented to it, that the use of the ‘real-time’ remote biometric identification system at issue is necessary for and proportionate to achieving one of the objectives specified in paragraph 1, point (d), as identified in the request. In deciding on the request, the competent judicial or administrative authority shall take into account the elements referred to in paragraph 2.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1380 #
Proposal for a regulation
Article 5 – paragraph 3 – subparagraph 1 – point 1 (new)
(1) The placing on the market, putting into service or use of biometric categorisation systems, or other AI systems, that categorise natural persons or groups of persons according to sensitive or protected attributes or characteristics, or infer those attributes or characteristics. Sensitive attributes or characteristics include, but are not limited to: gender and gender identity, race, ethnic origin, migration or citizenship status, political orientation, sexual orientation, religion, disability or any other grounds on which discrimination is prohibited under Article 21 of the EU Charter of Fundamental Rights as well as under Article 9 of the Regulation (EU) 2016/679.
2022/06/13
Committee: IMCOLIBE
Amendment 1382 #
Proposal for a regulation
Article 5 – paragraph 4
4. A Member State may decide to provide for the possibility to fully or partially authorise the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement within the limits and under the conditions listed in paragraphs 1, point (d), 2 and 3. That Member State shall lay down in its national law the necessary detailed rules for the request, issuance and exercise of, as well as supervision relating to, the authorisations referred to in paragraph 3. Those rules shall also specify in respect of which of the objectives listed in paragraph 1, point (d), including which of the criminal offences referred to in point (iii) thereof, the competent authorities may be authorised to use those systems for the purpose of law enforcement.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1386 #
Proposal for a regulation
Article 5 – paragraph 4
4. A Member State may decide to provide for the possibility to fully or partially authorise the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement within the limits and under the conditions listed in paragraphs 1, point (d), 2 and 3. That Member State shall lay down in its national law the necessary detailed rules for the request, issuance and exercise of, as well as supervision relating to, the authorisations referred to in paragraph 3. Those rules shall also specify in respect of which of the objectives listed in paragraph 1, point (d), including which of the criminal offences referred to in point (iii) thereof, the competent authorities may be authorised to use those systems for the purpose of law enforcement.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1394 #
Proposal for a regulation
Article 5 – paragraph 4 a (new)
4 a. This Article shall not affect the restrictions, prohibitions or enforcement that apply where an artificial intelligence practice infringes another EU law, including EU acquis on data protection, privacy, or the confidentiality of communications, on non discrimination, consumer protection or on competition.
2022/06/13
Committee: IMCOLIBE
Amendment 1395 #
Proposal for a regulation
Article 5 – paragraph 4 a (new)
4 a. The placing on the market, putting into service or use of AI systems intended to be used as polygraphs, emotion recognition systems or similar tools to detect the emotional state, trustworthiness or related characteristics of a natural person.
2022/06/13
Committee: IMCOLIBE
Amendment 1398 #
Proposal for a regulation
Article 5 – paragraph 4 b (new)
4 b. Member States may, by law or collective agreements, decide to prohibit or to limit the use of AI systems to ensure the protection of the rights of workers in the employment context, in particular for the purposes of the recruitment, the performance of the contract of employment, including discharge obligations laid down by law or by collective agreements, management, planning and organization of work, equality and diversity at the workplace, health and safety at work, protection of employers or customers' property and for the purposes of the exercise and enjoyment, on an individual or collective basis, of rights and benefits related to employment, and for the purpose of the termination of the employment relationship.
2022/06/13
Committee: IMCOLIBE
Amendment 1399 #
Proposal for a regulation
Article 5 – paragraph 4 c (new)
4 c. the placing on the market, putting into service or the use of AI systems by or on behalf of competent authorities in migration, asylum or border control management, to profile an individual or assess a risk, including a security risk, a risk of irregular immigration, or a health risk, posed by a natural person who intends to enter or has entered the territory of a Member State, on the basis of personal or sensitive data, known or predicted, except for the sole purpose of identifying specific care and support needs;
2022/06/13
Committee: IMCOLIBE
Amendment 1400 #
Proposal for a regulation
Article 5 – paragraph 4 d (new)
4 d. the placing on the market, putting into service or use of AI systems by competent authorities or on their behalf in migration, asylum and border control management, to forecast or predict individual or collective movement for the purpose of, or in any way reasonably foreseeably leading to, the prohibiting, curtailing or preventing migration or border crossings;
2022/06/13
Committee: IMCOLIBE
Amendment 1401 #
Proposal for a regulation
Article 5 – paragraph 4 e (new)
4 e. the placing on the market, putting into service or the use of AI systems intended to assist competent authorities for the examination of application for asylum, visa and residence permits and associated complaints with regard to the eligibility of the natural persons applying for a status;
2022/06/13
Committee: IMCOLIBE
Amendment 1402 #
Proposal for a regulation
Article 5 – paragraph 4 f (new)
4 f. the placing on the market, putting into service, or use of an AI system for the specific technical processing of brain or brain-generated data in order to access, infer, influence, or manipulate a person's thoughts, emotions, memories, intentions, beliefs, or other mental states against that person's will or in a manner that causes or is likely to cause that person or another person physical or psychological harm;
2022/06/13
Committee: IMCOLIBE
Amendment 1413 #
Proposal for a regulation
Article 6 – paragraph -1 (new)
-1. AI systems referred to in Annex III shall be considered high-risk for the purposes of this Regulation.
2022/06/13
Committee: IMCOLIBE
Amendment 1433 #
Proposal for a regulation
Article 6 – paragraph 2
2. In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall also be considered high-risk.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1447 #
Proposal for a regulation
Article 6 – paragraph 2 a (new)
2 a. An artificial intelligence system with indeterminate uses shall also be considered high-risk if so identified per Article 9, paragraph 2, point (a).
2022/06/13
Committee: IMCOLIBE
Amendment 1452 #
Proposal for a regulation
Article 6 – paragraph 2 b (new)
2 b. In addition to the high-risk AI systems referred to in paragraph 1 and paragraph 2, AI systems that create foreseeable high-risks when combined shall also be considered high-risk.
2022/06/13
Committee: IMCOLIBE
Amendment 1462 #
Proposal for a regulation
Article 7 – paragraph 1 – introductory part
1. The Commission is empowered to adopt delegated acts in accordance with Article 73 to update the list in Annex III by addingAnnex III, including by adding new areas of high-risk AI systems, where both of the following conditions are fulfilled: a type of AI system poses a risk of harm to the health and safety, a risk of adverse impact on fundamental rights, on climate change mitigation and adaptation, the environment, or a risk of contravention of the Union values enshrined in Article 2 TEU, and that risk is, in respect of its severity and probability of occurrence, equivalent to or greater than the risk of harm or of adverse impact posed by the high-risk AI systems in use in the areas listed in Annex III.
2022/06/13
Committee: IMCOLIBE
Amendment 1463 #
Proposal for a regulation
Article 7 – paragraph 1 – introductory part
1. The Commission is empowered to adopt delegated acts in accordance with Article 73 to update or amend the list in Annex III by adding areas of high-risk AI systems where both of the following conditions are fulfilled:the AI systems pose a risk of harm to the health and safety, or a risk of adverse impact on fundamental rights, a risk of breach of the Union values enshrined in Article 2 TEU or a risk of adverse impact on the society and the environment.
2022/06/13
Committee: IMCOLIBE
Amendment 1473 #
Proposal for a regulation
Article 7 – paragraph 1 – point a
(a) the AI systems are intended to be used in any of the areas listed in points 1 to 8 of Annex III;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1475 #
Proposal for a regulation
Article 7 – paragraph 1 – point a
(a) the AI systems are intended to be used in any of the areas listed in points 1 to 8 of Annex III;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1479 #
Proposal for a regulation
Article 7 – paragraph 1 – point b
(b) the AI systems pose a risk of harm to the health and safety, or a risk of adverse impact on fundamental rights, that is, in respect of its severity and probability of occurrence, equivalent to or greater than the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1481 #
Proposal for a regulation
Article 7 – paragraph 1 – point b
(b) the AI systems pose a risk of harm to the health and safety, or a risk of adverse impact on fundamental rights, that is, in respect of its severity and probability of occurrence, equivalent to or greater than the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1488 #
2. When assessing for the purposes of paragraph 1 whether an AI system poses a risk of harm to the health and safety or a risk of adverse impact on fundamental rights that is equivalent to or greater than the risk of harm posed by the high-risk AI systems already referred to in Annex III, the Commission shall take into account the following criteria:
2022/06/13
Committee: IMCOLIBE
Amendment 1499 #
Proposal for a regulation
Article 7 – paragraph 2 – point b
(b) the extent to which an AI system has been used or is likely to be used, including its reasonably foreseeable misuse;
2022/06/13
Committee: IMCOLIBE
Amendment 1502 #
Proposal for a regulation
Article 7 – paragraph 2 – point b a (new)
(b a) the type and nature of the data processed and used by the AI system;
2022/06/13
Committee: IMCOLIBE
Amendment 1504 #
Proposal for a regulation
Article 7 – paragraph 2 – point b b (new)
(b b) the extent to which the AI system respects the principles of Article 4a;
2022/06/13
Committee: IMCOLIBE
Amendment 1506 #
Proposal for a regulation
Article 7 – paragraph 2 – point c
(c) the extent to which the use of an AI system has already caused harm to natural persons, has breached the Union values enshrined in Article 2 TEU, has caused harm to the health and safety or has had an adverse impact on the fundamental rights, on the environment or the society or has given rise to significant concerns in relation to the materialisation of such harm or adverse impact, as demonstrated by reports or documented allegations submitted to national competent authoritiesthe national supervisory authority, to the national competent authorities, to the Commission, to the Board, to the EDPS or to the European Union Agency for Fundamental Rights (FRA);
2022/06/13
Committee: IMCOLIBE
Amendment 1507 #
Proposal for a regulation
Article 7 – paragraph 2 – point c
(c) the extent to which the use of an AI system has already caused harm to natural persons, has contravened the Union values enshrined in Article 2 TEU, has caused harm to the health and safety or has had an adverse impact on the fundamental rights, on the environment or society, or has given rise to significant concerns in relation to the materialisation of such harm or adverse impact, as demonstrated by reports or documented allegations submitted to national competent authorities, to the Commission, to the Board, to the EDPS or to the European Union Agency for Fundamental Rights (FRA);
2022/06/13
Committee: IMCOLIBE
Amendment 1525 #
Proposal for a regulation
Article 7 – paragraph 2 – point g
(g) the extent to which the outcome produced with an AI system is easily reversible, whereby outcomes having an impact on the health or safety of personsfundamental rights of persons, the environment or the society, the health or safety of persons, or on the Union values enshrined in Article 2 TEU, shall not be considered as easily reversible;
2022/06/13
Committee: IMCOLIBE
Amendment 1526 #
Proposal for a regulation
Article 7 – paragraph 2 – point g
(g) the extent to which the outcome produced with an AI system is easily reversible, whereby outcomes having an impact on the health or safety of persons, the fundamental rights of persons, the environment or society, or on the Union values enshrined in Article 2 TEU shall not be considered as easily reversible;
2022/06/13
Committee: IMCOLIBE
Amendment 1547 #
Proposal for a regulation
Article 7 – paragraph 2 a (new)
2 a. When carrying out the assessment referred to in paragraph 1 the Commission shall consult, where relevant, representatives of groups on which an AI system has an impact, stakeholders, independent experts and civil society organisations. The Commission shall organise public consultations in this regard.
2022/06/13
Committee: IMCOLIBE
Amendment 1563 #
Proposal for a regulation
Article 8 – paragraph 2
2. The intended purpose of the high-risk AI system, the foreseeable uses and foreseeable misuses of AI systems with indeterminate uses and the risk management system referred to in Article 9 shall be taken into account when ensuring compliance with those requirements.
2022/06/13
Committee: IMCOLIBE
Amendment 1566 #
Proposal for a regulation
Article 8 – paragraph 2
2. The intended purpose, reasonably foreseeable uses and foreseeable misuses of the high-risk AI system and the risk management system referred to in Article 9 shall be taken into account when ensuring compliance with those requirements.
2022/06/13
Committee: IMCOLIBE
Amendment 1583 #
Proposal for a regulation
Article 9 – paragraph 2 – point a
(a) identification and analysis of the known and the reasonably foreseeable risks associated with each high-risk AI system;that the high-risk AI system, and AI systems with indeterminate uses, can pose to: (i) the health or safety of natural persons; (ii) the legal rights or legal status of natural persons; (iii) the fundamental rights; (iv) the equal access to services and opportunities of natural persons; (v) the Union values enshrined in Article 2 TEU.
2022/06/13
Committee: IMCOLIBE
Amendment 1590 #
(a a) evaluation of how the principles of Article 4a are adhered to;
2022/06/13
Committee: IMCOLIBE
Amendment 1594 #
Proposal for a regulation
Article 9 – paragraph 2 – point b
(b) estimation and evaluation of the risks that may emerge when the high-risk AI system is used in accordance with its intended purpose or reasonably foreseeable use and under conditions of reasonably foreseeable misuse;
2022/06/13
Committee: IMCOLIBE
Amendment 1615 #
4. The risk management measures referred to in paragraph 2, point (d) shall be such that any residual risk associated with each hazard as well as the overall residual risk of the high-risk AI systems is judged acceptable, provided that the high- risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable use or misuse. Those residual risks shall be communicated to the user.
2022/06/13
Committee: IMCOLIBE
Amendment 1633 #
Proposal for a regulation
Article 9 – paragraph 4 – subparagraph 2
In eliminating or reducing risks related to the use of the high-risk AI system, due consideration shall be given to the technical knowledge, experience, education, training to be expected by the user and the environment in which the system is intended or reasonably foreseeable to be used.
2022/06/13
Committee: IMCOLIBE
Amendment 1678 #
Proposal for a regulation
Article 10 – paragraph 1 a (new)
1 a. Validation datasets shall be separate datasets from both the testing and the training datasets, in order for the evaluation to be unbiased. If only one dataset is available, it shall be divided into three parts: a training set, a validation set, and a testing set. Each set shall comply with paragraph 3 of this Article.
2022/06/13
Committee: IMCOLIBE
Amendment 1679 #
Proposal for a regulation
Article 10 – paragraph 1 a (new)
1 a. Techniques such as unsupervised learning and reinforcement learning, that do not use validation and testing data sets, shall be developed on the basis of training data sets that meet the quality criteria referred to in paragraphs 2 to 5.
2022/06/13
Committee: IMCOLIBE
Amendment 1680 #
Proposal for a regulation
Article 10 – paragraph 1 b (new)
1 b. Techniques such as unsupervised learning and reinforcement learning, that do not use validation and testing datasets, shall be developed on the basis of training datasets that meet the quality criteria referred to in paragraphs 2 to 4.
2022/06/13
Committee: IMCOLIBE
Amendment 1686 #
Proposal for a regulation
Article 10 – paragraph 2 – introductory part
2. Training, validation and testing data sets shall be subject to appropriate data governance and management practices for the entire lifecycle of data processing. Those practices shall concern in particular,
2022/06/13
Committee: IMCOLIBE
Amendment 1689 #
Proposal for a regulation
Article 10 – paragraph 2 – point a
(a) the relevant design choices;
2022/06/13
Committee: IMCOLIBE
Amendment 1690 #
Proposal for a regulation
Article 10 – paragraph 2 – point b
(b) data collection processes;
2022/06/13
Committee: IMCOLIBE
Amendment 1694 #
Proposal for a regulation
Article 10 – paragraph 2 – point c
(c) relevant data preparation processing operations, such as annotation, labelling, cleaning, enrichment and aggregation;
2022/06/13
Committee: IMCOLIBE
Amendment 1701 #
Proposal for a regulation
Article 10 – paragraph 2 – point f
(f) examination in view of possible biases, especially where data outputs are used as an input for future operations (‘feedback loops’);
2022/06/13
Committee: IMCOLIBE
Amendment 1718 #
Proposal for a regulation
Article 10 – paragraph 3
3. Training datasets, and where applicable, validation and testing data sets, including the labels, shall be relevant, representative, up-to-date, and to the best extent possible, free of errors and complete. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used. These characteristics of the data sets mayshall be met at the level of each individual data sets or a combination thereof.
2022/06/13
Committee: IMCOLIBE
Amendment 1719 #
3. Training, validation and testing data sets shall be relevant, representative, up-to-date, and to the best extent possible, taking into account the state of the art, free of errors and be as complete as possible. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used. These characteristics of the data sets mayshall be met at the level of each individual data sets or a combination thereof.
2022/06/13
Committee: IMCOLIBE
Amendment 1729 #
Proposal for a regulation
Article 10 – paragraph 4
4. Training, validation and testing dData sets shall take into account, to the extent required by the intended purpose, the foreseeable uses and reasonably foreseeable misuses of AI systems with indeterminate uses, the characteristics or elements that are particular to the specific geographical, behavioural or functional setting within which the high-risk AI system is intended to be used.
2022/06/13
Committee: IMCOLIBE
Amendment 1732 #
Proposal for a regulation
Article 10 – paragraph 4
4. Training, validation and testing dData sets shall take into account, to the extent required by the intended purpose, the reasonably foreseeable uses and misuses of AI systems, the characteristics or elements that are particular to the specific geographical, cultural, behavioural or functional setting within which the high-risk AI system is intended to be used.
2022/06/13
Committee: IMCOLIBE
Amendment 1736 #
Proposal for a regulation
Article 10 – paragraph 5
5. To the extent that it is strictly necessary for the purposes of ensuring bias monitoring, detection and correction in relation to the high-risk AI systems, the providers of such systems may process special categories of personal data referred to in Article 9(1) of Regulation (EU) 2016/679, Article 10 of Directive (EU) 2016/680 and Article 10(1) of Regulation (EU) 2018/1725, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons, including technical limitations on the re-use and use of state-of-the-art security and privacy-preserving measures, such as pseudonymisation, or encryption where anonymisation may significantly affect the purpose pursued.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1755 #
Proposal for a regulation
Article 11 – paragraph 1 – subparagraph 1
The technical documentation shall be drawn up in such a way to demonstrate that the high-risk AI system complies with the requirements set out in this Chapter and provide the national supervisory authority, the national competent authorities and notified bodies with all the necessary information to assess the compliance of the AI system with those requirements. It shall contain, at a minimum, the elements set out in Annex IV.
2022/06/13
Committee: IMCOLIBE
Amendment 1764 #
Proposal for a regulation
Article 11 – paragraph 3 a (new)
3 a. Providers that are credit institutions regulated by Directive 2013/36/EU shall maintain the technical documentation as part of the documentation concerning internal governance, arrangements, processes and mechanisms pursuant to Article 74 of that Directive.
2022/06/13
Committee: IMCOLIBE
Amendment 1773 #
Proposal for a regulation
Article 12 – paragraph 2
2. The logging capabilities shall ensure a level of traceability of the AI system’s functioning throughout its lifecycle that is appropriate to the intended purpose or reasonably foreseeable use of the system.
2022/06/13
Committee: IMCOLIBE
Amendment 1782 #
Proposal for a regulation
Article 12 – paragraph 4 – point a
(a) recording of the period of each use of the system (start date and time and end date and time of each use);
2022/06/13
Committee: IMCOLIBE
Amendment 1783 #
Proposal for a regulation
Article 12 – paragraph 4 – point c
(c) the input data for which the search has led to a match;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1799 #
Proposal for a regulation
Article 13 – paragraph 3 – point b – point ii
(ii) the performance metrics and its appropriateness, including the level of accuracy, robustness and cybersecurity referred to in Article 15 against which the high-risk AI system has been tested and validated and which can be expected, and any known and foreseeable circumstances that may have an impact on that expected level of accuracyperformance, robustness and cybersecurity;
2022/06/13
Committee: IMCOLIBE
Amendment 1805 #
Proposal for a regulation
Article 13 – paragraph 3 – point b – point v
(v) when appropriate, specifications for the input data, or any other relevant information in terms of the training, validation and testing data sets used, taking into account the intended purposedata sets used, including their limitation and assumptions, taking into account the intended purpose, the foreseeable and reasonably foreseeable misuses of the AI system.
2022/06/13
Committee: IMCOLIBE
Amendment 1849 #
Proposal for a regulation
Article 15 – paragraph 1
1. High-risk AI systems shall be designed and developed in such a way that they achieve, in the light of their intended purpose, an appropriate level of accuracythe foreseeable uses and reasonably foreseeable misuses, an appropriate level of performance (such as accuracy, reliability and true positive rate), robustness and cybersecurity, and perform consistently in those respects throughout their lifecycle.
2022/06/13
Committee: IMCOLIBE
Amendment 1854 #
Proposal for a regulation
Article 15 – paragraph 2
2. The performance metrics and its appropriateness, including the levels of accuracy and the relevant accuracy metrics of high-risk AI systems shall be declared in the accompanying instructions of use.
2022/06/13
Committee: IMCOLIBE
Amendment 1873 #
Proposal for a regulation
Article 15 a (new)
Article 15 a Sustainable AI systems reporting 1. Providers of high-risk AI systems shall make publicly available information on the energy consumption of the AI system, in particular its carbon footprint with regard to the development of hardware, computational resources, as well as algorithm design and training, testing and validating processes of the high-risk AI systems. The provider shall include this information in the technical documentation referred to in Article 11. 2. The Commission shall develop, by means of an implementing act, a standardised document to facilitate the disclosure of information on the energy used in the training and execution of AI systems and their carbon intensity.
2022/06/13
Committee: IMCOLIBE
Amendment 1882 #
Proposal for a regulation
Article 16 – paragraph 1 – point a a (new)
(a a) indicate their name, registered trade name or registered trade mark, and their address on the high-risk AI system or, where that is not possible, on its packaging or its accompanying documentation, as appropriate;
2022/06/13
Committee: IMCOLIBE
Amendment 1883 #
Proposal for a regulation
Article 16 – paragraph 1 – point a a (new)
(a a) ensure that the performance of their high-risk AI system is measured appropriately, including its level of accuracy, robustness and cybersecurity;
2022/06/13
Committee: IMCOLIBE
Amendment 1886 #
Proposal for a regulation
Article 16 – paragraph 1 – point a b (new)
(a b) provide specifications for the input data, or any other relevant information in terms of the data sets used, including their limitation and assumptions, taking into account of the intended purpose and the foreseeable and reasonably foreseeable misuses of the AI system;
2022/06/13
Committee: IMCOLIBE
Amendment 1903 #
Proposal for a regulation
Article 16 – paragraph 1 – point j
(j) upon request of a national supervisory authority or a national competent authority, demonstrate the conformity of the high-risk AI system with the requirements set out in Chapter 2 of this Title.
2022/06/13
Committee: IMCOLIBE
Amendment 1931 #
Proposal for a regulation
Article 17 – paragraph 1 – point i
(i) procedures related to the reporting of serious incidents and of malfunctioning, including near misses, in accordance with Article 62;
2022/06/13
Committee: IMCOLIBE
Amendment 1945 #
Proposal for a regulation
Article 18
Obligation to draw up technical documentation 1. Providers of high-risk AI systems shall draw up the technical documentation referred to in Article 11 in accordance with Annex IV. 2. Providers that are credit institutions regulated by Directive 2013/36/EU shall maintain the technical documentation as part of the documentation concerning internal governance, arrangements, processes and mechanisms pursuant to Article 74 of that Directive.Article 18 deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1962 #
Proposal for a regulation
Article 21 – paragraph 1 a (new)
In the cases referred to in paragraph 1, providers shall immediately inform the distributors of the high-risk AI system and, where applicable, the legal representative, importers and users accordingly. They shall also immediately inform the national supervisory authority and the national competent authorities of the Member States where they made the AI system available or put it into service, and where applicable, the notified body of the non-compliance and of any corrective actions taken.
2022/06/13
Committee: IMCOLIBE
Amendment 1963 #
Proposal for a regulation
Article 22 – paragraph 1
Where the high-risk AI system presents a risk within the meaning of Article 65(1) and that risk is known toby the provider of the system, thate provider shall immediately inform the national supervisory authority and the national competent authorities of the Member States in which it made the system available and, where applicable, the user, the notified body that issued a certificate for the high-risk AI system, in particular of the non-compliance and of any corrective actions taken. Where applicable, the provider shall also inform the users of the high-risk AI system.
2022/06/13
Committee: IMCOLIBE
Amendment 1971 #
Proposal for a regulation
Article 23 – paragraph 1
Providers and, where applicable, users of high-risk AI systems shall, upon request by a national competent authority, provide that authoritysupervisory authority or a national competent authority or, where applicable, by the Board or the Commission, provide them with all the information and documentation necessary to demonstrate the conformity of the high-risk AI system with the requirements set out in Chapter 2 of this Title, in an official Union language determined by the Member State concerned. Upon a reasoned request from a national competent authority, providers shall also give that authority access to the logs automatically generated by the high-risk AI system, to the extent such logs are under their control by virtue of a contractual arrangement with the user or otherwise by law.
2022/06/13
Committee: IMCOLIBE
Amendment 1975 #
Proposal for a regulation
Article 23 – paragraph 1 a (new)
Upon a reasoned request by a national supervisory authority or a national competent authority or, where applicable, by the Board or the Commission, providers and, where applicable, users shall also give them access to the logs automatically generated by the high-risk AI system, to the extent such logs are under their control by virtue of a contractual arrangement with the user or otherwise by law.
2022/06/13
Committee: IMCOLIBE
Amendment 1980 #
Proposal for a regulation
Article 25
Authorised representatives 1. Prior to making their systems available on the Union market, where an importer cannot be identified, providers established outside the Union shall, by written mandate, appoint an authorised representative which is established in the Union. 2. The authorised representative shall perform the tasks specified in the mandate received from the provider. The mandate shall empower the authorised representative to carry out the following tasks: (a) keep a copy of the EU declaration of conformity and the technical documentation at the disposal of the national competent authorities and national authorities referred to in Article 63(7); (b) provide a national competent authority, upon a reasoned request, with all the information and documentation necessary to demonstrate the conformity of a high-risk AI system with the requirements set out in Chapter 2 of this Title, including access to the logs automatically generated by the high-risk AI system to the extent such logs are under the control of the provider by virtue of a contractual arrangement with the user or otherwise by law; (c) cooperate with competent national authorities, upon a reasoned request, on any action the latter takes in relation to the high-risk AI system.Article 25 deleted
2022/06/13
Committee: IMCOLIBE
Amendment 2004 #
Proposal for a regulation
Article 26 – paragraph 5
5. Importers shall provide the national supervisory authority and the national competent authorities, upon a reasoned request, with all the necessary information and documentation to demonstrate the conformity of a high-risk AI system with the requirements set out in Chapter 2 of this Title in a language which can be easily understood by that national competent authorityem, including access to the logs automatically generated by the high-risk AI system to the extent such logs are under the control of the provider by virtue of a contractual arrangement with the user or otherwise by law. They shall also cooperate with those authorities on any action the national supervisory authority and the national competent authority takes in relation to that system.
2022/06/13
Committee: IMCOLIBE
Amendment 2036 #
Proposal for a regulation
Article 29 – paragraph -1 (new)
-1. Users of high-risk AI systems shall ensure that natural persons assigned to ensure or entrusted with human oversight for high-risk AI systems are competent, properly qualified and trained, free from external influence and neither seek nor take instructions from anybody. They shall have the necessary resources in order to ensure the effective supervision of the system in accordance with Article 14.
2022/06/13
Committee: IMCOLIBE
Amendment 2056 #
Proposal for a regulation
Article 29 – paragraph 4 – introductory part
4. Users shall monitor the operation of the high-risk AI system on the basis of the instructions of use. When they have reasons to consider that the use in accordance with the instructions of use may result in the AI system presenting a risk within the meaning of Article 65(1) they shall immediately inform the provider or distributor and suspend the use of the system. They shall also immediately inform the provider or distributor when they have identified any serious incident or any malfunctioning, including near misses, within the meaning of Article 62 and interrupt the use of the AI system. In case the user is not able to reach the provider, Article 62 shall apply mutatis mutandis.
2022/06/13
Committee: IMCOLIBE
Amendment 2060 #
Prior to putting into service or use an AI system at the workplace, users shall consult workers representatives, inform the affected employees that they will be subject to the system and obtain their consent.
2022/06/13
Committee: IMCOLIBE
Amendment 2063 #
Proposal for a regulation
Article 29 – paragraph 5 a (new)
5 a. Users of high-risk AI systems shall comply with the registration obligations referred to in Article 51.
2022/06/13
Committee: IMCOLIBE
Amendment 2072 #
Proposal for a regulation
Article 29 – paragraph 6 a (new)
6 a. Users of high-risk AI systems referred to in Annex III that make decisions or assist in making decisions related to an affected person, shall inform them that they are subject to the use of the high-risk AI system. This information shall include the type of the AI system used, its intended purpose and the type of decisions it makes.
2022/06/13
Committee: IMCOLIBE
Amendment 2078 #
Proposal for a regulation
Article 29 a (new)
Article 29 a Fundamental rights impact assessment for a high-risk AI system 1. Prior to putting a high-risk AI system into use, as defined in Article 6(2), the user shall conduct an assessment of the system’s impact in the context of use. This assessment shall consist of, but not be limited to, the following elements: (a) a clear outline of the intended purpose for which the system will be used; (b) a clear outline of the intended geographic and temporal scope of the system’s use; (c) verification that the use of the system is compliant with Union and national law; (d) categories of natural persons and groups likely to be affected by the use of the system; (e) the foreseeable direct and indirect impact on fundamental rights of putting the high-risk AI system into use; (f) any specific risk of harm likely to impact marginalised persons or vulnerable groups; (g) the foreseeable impact of the use of the system on the environment, including, but not limited to, energy consumption; (h) any other negative impact on the protection of the values enshrined in Article 2 TEU; (i) in the case of public authorities, any other impact on democracy, rule of law and allocation of public funds; and (j) a detailed plan on how the risk of harm or the negative direct and indirect impact on fundamental rights identified will be mitigated. 2. If a detailed plan to mitigate the risks outlined in the course of the assessment in paragraph 1 cannot be identified, the user shall refrain from putting the high-risk AI system into use and inform the provider, the national supervisory authority and market surveillance authority without undue delay. Market surveillance authorities or, where relevant, national supervisory authorities, pursuant to their capacity under Articles 65, 67 and 67a, shall take this information into account when investigating systems which present a risk at national level. 3. The obligations as per paragraph 1 apply for each new deployment of the high-risk AI system. 4. 
In the course of the impact assessment, the user shall notify the national supervisory authority, the market surveillance authority and the relevant stakeholders. and involve representatives of the foreseeable persons or groups of persons affected by the high-risk AI system, as identified in paragraph 1, including but not limited to: equality bodies, consumer protection agencies, social partners and data protection agencies, with a view to receiving input into the impact assessment. The user must allow a period of six weeks for bodies to respond. 5. The user shall publish the results of the impact assessment as part of the registration of use pursuant to their obligation under Article 51(2). 6. Where the user is already required to carry out a data protection impact assessment pursuant to Article 29(6), the impact assessment outlined in paragraph 1 shall be conducted in conjunction to the data protection impact assessment.
2022/06/13
Committee: IMCOLIBE
Amendment 2079 #
Proposal for a regulation
Article 29 a (new)
Article 29 a Fundamental rights impact assessment for high-risk AI systems 1. Prior to putting a high-risk AI system as defined in Article 6(2) into use, users shall conduct an assessment of the systems’ impact in the specific context of use. This assessment shall include, at a minimum, the following elements: (a) a clear outline of the intended purpose for which the system will be used; (b) a clear outline of the intended geographic and temporal scope of the system’s use; (ba) categories of natural persons and groups likely to be affected by the use of the system; (c) verification that the use of the system is compliant with relevant Union and national law, and with fundamental rights law; (d) the foreseeable direct or indirect impact on fundamental rights of putting the high-risk AI system into use; (e) any specific risk of harm likely to impact marginalised persons or vulnerable groups; (f) the foreseeable impact of the use of the system on the environment including, but not limited to, energy consumption; (g) any other negative impact on the protection of the values enshrined in Article 2 TEU; (h) in the case of public authorities, any other impact on democracy, rule of law and allocation of public funds; and (i) a detailed plan as to how the harms and the negative direct or indirect impact on fundamental rights identified will be mitigated. 2. If a detailed plan to mitigate the risks outlined in the course of the assessment outlined in paragraph 1 cannot be identified, the user shall refrain from putting the high-risk AI system into use and inform the provider and the relevant national competent authorities without undue delay. Market surveillance authorities, pursuant to Articles 65 and 67, shall take this information into account when investigating systems which present a risk at national level. 3. The obligation outlined under paragraph 1 applies for each new use of the high-risk AI system. 4. 
In the course of the impact assessment, the user shall notify relevant national competent authorities and relevant stakeholders and involve representatives of the persons or groups of persons that are reasonably foreseeable to be affected by the high-risk AI system, as identified in paragraph 1, including but not limited to: equality bodies, consumer protection agencies, social partners and data protection agencies, with a view to receiving input into the impact assessment. The user must allow a period of six weeks for bodies to respond. 5. The user that is a public authority shall publish the results of the impact assessment as part of the registration of use pursuant to their obligation under Article 51(2).
2022/06/13
Committee: IMCOLIBE
Amendment 2092 #
Proposal for a regulation
Article 30 – paragraph 8
8. Notifying authorities shall make sure that conformity assessments are carried out in a proportionate and timely manner, avoiding unnecessary burdens for providers and that notified bodies perform their activities taking due account of the size of an undertaking, the sector in which it operates, its structure and the degree of complexity of the AI system in question.
2022/06/13
Committee: IMCOLIBE
Amendment 2094 #
Proposal for a regulation
Article 31 – paragraph 3
3. Where the conformity assessment body concerned cannot provide an accreditation certificate, it shall provide the notifying authority with all the documentary evidence necessary for the verification, recognition and regular monitoring of its compliance with the requirements laid down in Article 33. For notified bodies which are designated under any other Union harmonisation legislation, all documents and certificates linked to those designations may be used to support their designation procedure under this Regulation, as appropriate.
2022/06/13
Committee: IMCOLIBE
Amendment 2096 #
Proposal for a regulation
Article 32 – paragraph 3
3. The notification referred to in paragraph 2 shall include full details of the conformity assessment activities, the conformity assessment module or modules and the artificial intelligence technologies concerned, as well as the relevant attestation of competence.
2022/06/13
Committee: IMCOLIBE
Amendment 2098 #
Proposal for a regulation
Article 32 – paragraph 4
4. The conformity assessment body concerned may perform the activities of a notified body only where no objections are raised by the Commission or the other Member States. within onetwo weeks of the validation of the notification where it includes an accreditation certificate referred to in Article 31(2), or within two months of athe notification where it includes documentary evidence referred to in Article 31(3).
2022/06/13
Committee: IMCOLIBE
Amendment 2100 #
Proposal for a regulation
Article 32 – paragraph 4 a (new)
4 a. Where objections are raised, the Commission shall without delay enter into consultation with the relevant Member States and the conformity assessment body. In view thereof, the Commission shall decide whether the authorisation is justified or not. The Commission shall address its decision to the Member State concerned and the relevant conformity assessment body.
2022/06/13
Committee: IMCOLIBE
Amendment 2104 #
Proposal for a regulation
Article 33 – paragraph 4
4. Notified bodies shall be independent of the provider of a high-risk AI system in relation to which it performs conformity assessment activities. Notified bodies shall also be independent of any other operator having an economic interest in the high-risk AI system that is assessed, as well as of any competitors of the provider. This shall not preclude the use of assessed AI systems that are necessary for the operations of the conformity assessment body or the use of such systems for personal purposes.
2022/06/13
Committee: IMCOLIBE
Amendment 2110 #
Proposal for a regulation
Article 36 – paragraph 1
1. Where a notifying authority has suspicions or has been informed that a notified body no longer meets the requirements laid down in Article 33, or that it is failing to fulfil its obligations, that authority shall without delay investigate the matter with the utmost diligence. In that context, it shall inform the notified body concerned about the objections raised and give it the possibility to make its views known. If the notifying authority comes to the conclusion that the notified body investigation no longer meets the requirements laid down in Article 33 or that it is failing to fulfil its obligations, it shall restrict, suspend or withdraw the notification as appropriate, depending on the seriousness of the failure. It shall also immediately inform the Commission and the other Member States accordingly.
2022/06/13
Committee: IMCOLIBE
Amendment 2112 #
Proposal for a regulation
Article 37 – paragraph 3
3. The Commission shall ensure that all confidentialsensitive information obtained in the course of its investigations pursuant to this Article is treated confidentially.
2022/06/13
Committee: IMCOLIBE
Amendment 2119 #
Proposal for a regulation
Article 39 – paragraph 1
Conformity assessment bodies established under the law of a third country with which the Union has concluded an agreement in this respect may be authorised to carry out the activities of notified bodies under this Regulation.
2022/06/13
Committee: IMCOLIBE
Amendment 2127 #
Proposal for a regulation
Article 40 – paragraph 1 a (new)
When AI systems are intended to be deployed at the workplace, harmonised standards shall be limited to technical specifications and procedures.
2022/06/13
Committee: IMCOLIBE
Amendment 2159 #
Proposal for a regulation
Article 43 – paragraph 1 – introductory part
1. For high-risk AI systems listed in point 1 of Annex III, where, in demonstrating the compliance of a high- risk AI system with the requirements set out in Chapter 2 of this Title, the provider has not applied harmonised standards referred to in Article 40, or, where applicable, common specifications referred to in Article 41, the provider shall follow one of the following procedures:the conformity assessment procedure based on assessment of the quality management system and assessment of the technical documentation, with the involvement of a notified body, referred to in Annex VII.
2022/06/13
Committee: IMCOLIBE
Amendment 2164 #
Proposal for a regulation
Article 43 – paragraph 1 – point a
(a) the conformity assessment procedure based on internal control referred to in Annex VI;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 2168 #
Proposal for a regulation
Article 43 – paragraph 1 – point b
(b) the conformity assessment procedure based on assessment of the quality management system and assessment of the technical documentation, with the involvement of a notified body, referred to in Annex VII.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 2173 #
Proposal for a regulation
Article 43 – paragraph 1 – subparagraph 1
Where, in demonstrating the compliance of a high-risk AI system with the requirements set out in Chapter 2 of this Title, the provider has not applied or has applied only in part harmonised standards referred to in Article 40, or where such harmonised standards do not exist and common specifications referred to in Article 41 are not available, the provider shall follow the conformity assessment procedure set out in Annex VII.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 2176 #
Proposal for a regulation
Article 43 – paragraph 1 – subparagraph 2
For the purpose of the conformity assessment procedure referred to in Annex VII, the provider may choose any of the notified bodies. However, when the system is intended to be put into service by law enforcement, immigration or asylum authorities as well as EU institutions, bodies or agencies, the market surveillance authority referred to in Article 63(5) or (6), as applicable, shall act as a notified body.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 2178 #
Proposal for a regulation
Article 43 – paragraph 1 a (new)
1 a. Without prejudice to paragraph 1, if the provider has applied harmonised standard referred to in Article 40, or where applicable, common specifications referred to in Article 41, it shall follow the conformity assessment procedure based on internal control referred to in Annex VI.
2022/06/13
Committee: IMCOLIBE
Amendment 2179 #
Proposal for a regulation
Article 43 – paragraph 1 b (new)
1 b. In the following cases, the compliance of the high-risk AI system with requirements laid down in Chapter 2 of this Title shall be assessed following the conformity assessment procedure based on the assessment of the quality management system and the assessment of the technical documentation, with the involvement of a notified body, referred to in Annex VII: (a) where harmonised standards, the reference number of which has been published in the Official Journal of the European Union, covering all relevant safety requirements for the AI system, do not exist; (b) where the harmonised standards referred to in point (a) exist but the manufacturer has not applied them or has applied them only in part; (c) where one or more of the harmonised standards referred to in point (a) has been published with a restriction; (d) when the provider considers that the nature, design, construction or purpose of the AI system necessitate third party verification.
2022/06/13
Committee: IMCOLIBE
Amendment 2182 #
Proposal for a regulation
Article 43 – paragraph 2
2. For high-risk AI systems referred to in points 2 to 8 of Annex III, providers shall follow the conformity assessment procedure based on internal control as referred to in Annex VI, which does not provide for the involvement of a notified body. For high-risk AI systems referred to in point 5(b) of Annex III, placed on the market or put into service by credit institutions regulated by Directive 2013/36/EU, the conformity assessment shall be carried out as part of the procedure referred to in Articles 97 to 101 of that Directive.
2022/06/13
Committee: IMCOLIBE
Amendment 2197 #
Proposal for a regulation
Article 43 – paragraph 4 a (new)
4 a. The specific interests and needs of the small-scale providers shall be taken into account when setting the fees for third-party conformity assessment under this Article, reducing those fees proportionately to their size and market size.
2022/06/13
Committee: IMCOLIBE
Amendment 2205 #
Proposal for a regulation
Article 43 – paragraph 6
6. The Commission is empowered to adopt delegated acts to amend paragraphs 1 and 2 in order to subject high-risk AI systems referred to in points 2 to 8 of Annex III to the conformity assessment procedure referred to in Annex VII or parts thereof. The Commission shall adopt such delegated acts taking into account the effectiveness of the conformity assessment procedure based on internal control referred to in Annex VI in preventing or minimising the risks to health and safety and protection of fundamental rights posed by such systems as well as the availability of adequate capacities and resources among notified bodies.
2022/06/13
Committee: IMCOLIBE
Amendment 2215 #
2022/06/13
Committee: IMCOLIBE
Amendment 2224 #
Proposal for a regulation
Article 48 – paragraph 1
1. The provider shall draw up a written EU declaration of conformity for each high-risk AI system and keep it at the disposal of the national competent authorities for 10 years after the AI system has been placed on the market or put into service. The EU declaration of conformity shall identify the AI system for which it has been drawn upsupervisory authority and the national competent authorities after the high-risk AI system has been placed on the market or put into service for the entire lifecycle of the high- risk AI system. A copy of the EU declaration of conformity shall be given to the national supervisory authority and the relevant national competent authorities upon request.
2022/06/13
Committee: IMCOLIBE
Amendment 2226 #
Proposal for a regulation
Article 48 – paragraph 2
2. The EU declaration of conformity shall state that the high-risk AI system in question meets the requirements set out in Chapter 2 of this Title, including the requirements related to the respect of the Union data protection acquis. The EU declaration of conformity shall contain the information set out in Annex V and shall be translated into an official Union language or languages required by the Member State(s) in which the high-risk AI system is placed on the market or made available.
2022/06/13
Committee: IMCOLIBE
Amendment 2230 #
Proposal for a regulation
Article 49 – paragraph 1
1. The CE marking shall be affixed visibly, legibly and indelibly for high-risk AI systems before the high-risk AI system is placed on the market. Where that is not possible or not warranted on account of the nature of the high-risk AI system, it shall be affixed to the packaging or to the accompanying documentation, as appropriate. It may be followed by a pictogram or any other marking indicating a special risk or use.
2022/06/13
Committee: IMCOLIBE
Amendment 2236 #
Proposal for a regulation
Article 49 – paragraph 3 a (new)
3 a. Where high-risk AI systems are subject to other Union legislation which also provides for the affixing of the CE marking, the CE marking shall indicate that the high-risk AI system also fulfils the requirements of that other legislation.
2022/06/13
Committee: IMCOLIBE
Amendment 2238 #
Proposal for a regulation
Article 50 – paragraph 1 – introductory part
The provider shall, for the entire lifecycle of the AI system or for a period ending 10 years after the AI system has been placed on the market or put into service, whichever is the longest, keep at the disposal of the national competent authorities:
2022/06/13
Committee: IMCOLIBE
Amendment 2242 #
Proposal for a regulation
Article 50 – paragraph 1 – introductory part
The provider shall, for a period ending 10 years after the AI system has been placed on the market or put into service, keep at the disposal ofthe entire lifecycle of the AI system, keep at the disposal of the national supervisory authority and the national competent authorities:
2022/06/13
Committee: IMCOLIBE
Amendment 2248 #
Proposal for a regulation
Article 51 – paragraph 1
Before placing on the market or putting into service a high-risk AI system referred to in Article 6(2), the provider or, where applicable, the authorised representative shall register that system in the EU database referred to in Article 60, in accordance with Article 60(2).
2022/06/13
Committee: IMCOLIBE
Amendment 2255 #
Proposal for a regulation
Article 51 – paragraph 1 a (new)
Before putting into service or using a high-risk AI system in accordance with Article 6(2), the user shall register in the EU database referred to in Article 60.
2022/06/13
Committee: IMCOLIBE
Amendment 2258 #
Proposal for a regulation
Article 51 a (new)
Article 51 a Legal representative 1. Where an operator pursuant to Article 2 is established outside the Union, they shall designate, in writing, a legal representative in the Union. 2. The legal representative shall reside or be established in one of the Member States where the activities pursuant to Article 2, paragraphs 1 and 1a, are taking place. 3. The operator shall provide its legal representative with the necessary powers and resources to comply with its tasks under this Regulation and to cooperate with the competent authorities. 4. The legal representative shall, where appropriate, also carry out the following compliance tasks: (a) keep a copy of the EU declaration of conformity and the technical documentation at the disposal of the national supervisory authority and the national competent authorities and national authorities referred to in Article 63(7); (b) provide a national supervisory authority or a national competent authority, upon a reasoned request, with all the information and documentation necessary to demonstrate the conformity of a high-risk AI system with the requirements set out in Chapter 2 of this Title, including access to the logs automatically generated by the high-risk AI system to the extent such logs are under the control of the provider by virtue of a contractual arrangement with the user or otherwise by law; (c) cooperate with the national supervisory authority or the national competent authorities, upon a reasoned request, on any action the latter takes in relation to the high-risk AI system; (d) where applicable, comply with the registration obligations as referred into Article 51. 5. The legal representative shall be mandated to be addressed, in addition to or instead of the operator, by, in particular, national supervisory authority or the national competent authorities and affected persons, on all issues related to ensuring compliance with this Regulation. 6. 
The legal representative may be held liable for infringements of this Regulation, without prejudice to any liability of or legal actions against the operator, user or provider.
2022/06/13
Committee: IMCOLIBE
Amendment 2262 #
Proposal for a regulation
Article 52 – paragraph 1
1. Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. This obligation shall not apply to AI systems authorised by law to detect, prevent, investigate and prosecute criminal offences, unless those systems are available for the public to report a criminal offence.
2022/06/13
Committee: IMCOLIBE
Amendment 2265 #
Proposal for a regulation
Article 52 – paragraph 2
2. Users of an emotion recognition system or a biometric categorisation system shall inform of the operation of the system the natural persons exposed thereto. This obligation shall not apply to AI systems used for biometric categorisation, which are permitted by law to detect, prevent and investigate criminal offences.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 2274 #
Proposal for a regulation
Article 52 – paragraph 3 – subparagraph 1
However, the first subparagraph shall not apply where the use is authorised by law to detect, prevent, investigate and prosecute criminal offences or it is necessary for the exercise of the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU, and subject to appropriate safeguards for the rights and freedoms of third parties.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 2296 #
Proposal for a regulation
Article 53 – paragraph 1
1. AI regulatory sandboxes 1. established by one or more Member States competentNational supervisory authorities or the European Data Protection Supervisor may establish AI regulatory sandboxes that shall provide a controlled environment that facilitatesing the development, testing and validation of innovative AI systems for a limited time before their placement on the market or putting into service pursuant to a specific plan. This shall take place under the direct supervision and guidance by the competent authorities with a view to ensuring compliance with the requirements of this Regulation and, where relevant, other Union and Member States legislation supervised within the sandbox.
2022/06/13
Committee: IMCOLIBE
Amendment 2301 #
Proposal for a regulation
Article 53 – paragraph 1 a (new)
1 a. National supervisory authorities may establish AI regulatory sandboxes jointly with other national supervisory authorities.
2022/06/13
Committee: IMCOLIBE
Amendment 2307 #
Proposal for a regulation
Article 53 – paragraph 2
2. Member StatesThe national supervisory authority shall ensure that to the extent the innovative AI systems involve the processing of personal data or otherwise fall under the supervisory remit of other national authorities or competent authorities providing or supporting access to data, the national data protection authorities and those other national, the national data protection authorities are associated to the operation of the AI regulatory sandbox.
2022/06/13
Committee: IMCOLIBE
Amendment 2316 #
Proposal for a regulation
Article 53 – paragraph 3
3. The AI regulatory sandboxes shall not affect the supervisory and corrective powers of the competent authorities. Any significant risks to fundamental rights, health and, safety and fundamental rightsor the environment identified during the development and testing of such systems shall result in immediate mitigation and, failing that, in the suspension ofand adequate mitigation. Where such mitigation proves to be ineffective, the development and testing process shall be suspended without delay until such mitigation takes place.
2022/06/13
Committee: IMCOLIBE
Amendment 2317 #
Proposal for a regulation
Article 53 – paragraph 3
3. The AI regulatory sandboxes shall not affect the supervisory and corrective powers of the competent authorities. Any significant risks to health and safety and, fundamental rights and the environment identified during the development and testing of such systems shall result in immediate mitigation and, failing that, in the suspension ofand adequate mitigation. Where such mitigation proves to be ineffective, the development and testing process shall be suspended without delay until such mitigation takes place.
2022/06/13
Committee: IMCOLIBE
Amendment 2328 #
Proposal for a regulation
Article 53 – paragraph 5
5. Member States’ competentThe national supervisory authoritiesy that haves established the AI regulatory sandboxes shall coordinate their activities and cooperate within the framework of the European Artificial Intelligence Board. They shall submit annual reports to the Board and the Commission on the results ofrom the implementation of those schemes, including good practices, incidents, lessons learnt and recommendations on their setup and, where relevant, on the application of this Regulation and other Union legislation supervised within the sandbox. Those reports or abstracts thereof shall be made available to the public in order to further enable innovation in the Union.
2022/06/13
Committee: IMCOLIBE
Amendment 2345 #
Proposal for a regulation
Article 54
[...]deleted
2022/06/13
Committee: IMCOLIBE
Amendment 2369 #
Proposal for a regulation
Article 54 a (new)
Article 54 a Promotion of AI research and development in support of socially and environmentally beneficial outcomes 1. Member States shall promote research and development of AI solutions which support socially and environmentally beneficial outcomes, including but not limited to development of AI-based solutions to increase accessibility for persons with disabilities, tackle socio- economic inequalities, and meet sustainability and environmental targets, by: (a) providing relevant projects with priority access to the AI regulatory sandboxes to the extent that they fulfil the eligibility conditions; (b) earmarking public funding, including from relevant EU funds, for AI research and development in support of socially and environmentally beneficial outcomes; (c) organising specific awareness raising activities about the application of this Regulation, the availability of and application procedures for dedicated funding, tailored to the needs of those projects; (d) where appropriate, establishing accessible dedicated channels, including within the sandboxes, for communication with projects to provide guidance and respond to queries about the implementation of this Regulation. 2. Without prejudice to Article 55 a (new)1(a), Member States shall ensure that relevant projects are led by civil society and social stakeholders that set the project priorities, goals, and outcomes.
2022/06/13
Committee: IMCOLIBE
Amendment 2373 #
Proposal for a regulation
Article 55 – paragraph 1 – introductory part
1. Member StatesThe national supervisory authority shall undertake the following actions:
2022/06/13
Committee: IMCOLIBE
Amendment 2395 #
Proposal for a regulation
Article 56 – title
Establishment of the European Artificial Intelligence Board
2022/06/13
Committee: IMCOLIBE
Amendment 2399 #
Proposal for a regulation
Article 56 – paragraph 1
1. An independent ‘European Artificial Intelligence Board’ (the ‘Board’) is hereby established as a body of the Union and shall have legal personality.
2022/06/13
Committee: IMCOLIBE
Amendment 2401 #
Proposal for a regulation
Article 56 – paragraph 1 a (new)
1 a. The Board shall monitor and ensure the effective and consistent application, and contribute to the effective and consistent enforcement, of this Regulation throughout the Union, including with regard to cases involving two or more Member States as set out in Article 59b.
2022/06/13
Committee: IMCOLIBE
Amendment 2413 #
Proposal for a regulation
Article 56 – paragraph 2 – point c a (new)
(c a) contribute to the effective cooperation with the competent authorities of third countries and with international organisations.
2022/06/13
Committee: IMCOLIBE
Amendment 2428 #
Proposal for a regulation
Article 57 – title
Structure and independence of the Board
2022/06/13
Committee: IMCOLIBE
Amendment 2437 #
Proposal for a regulation
Article 57 – paragraph 1
1. The Board shall be composed of the national supervisory authorities, who shall be represented by the head or equivalent high-level official of that authority, and the European Data Protection Supervisor and the FRA. Other national authorities may be invited to the meetings, where the issues discussed are of relevance for them.
2022/06/13
Committee: IMCOLIBE
Amendment 2440 #
Proposal for a regulation
Article 57 – paragraph 1 a (new)
1 a. The Board shall be represented by its Chair.
2022/06/13
Committee: IMCOLIBE
Amendment 2441 #
1 b. The Board shall act independently when performing its tasks or exercising its powers pursuant to Articles 58.
2022/06/13
Committee: IMCOLIBE
Amendment 2442 #
Proposal for a regulation
Article 57 – paragraph 1 c (new)
1 c. The Board shall take decisions by a simple majority of its voting members, unless otherwise provided for in this Regulation. Each national supervisory authority and the EDPS shall have one vote.
2022/06/13
Committee: IMCOLIBE
Amendment 2443 #
Proposal for a regulation
Article 57 – paragraph 2
2. The Board shall adopt its rules of procedure by a simple two-thirds majority of its members, following the consent of the Commission. The rules of procedure shall also contain the operational aspects related to the execution of the Board’s tasks as listed in Article 58. The Board may establish sub-groups as appropriate for the purpose of examining specific questionvoting members and organise its own operational arrangements.
2022/06/13
Committee: IMCOLIBE
Amendment 2449 #
Proposal for a regulation
Article 57 – paragraph 2 a (new)
2 a. The Board may establish sub-groups as appropriate for the purpose of examining specific questions. In any case, the Board shall establish the following permanent sub-groups: a) for the purpose of examining the question of the proper governance of AI systems with indeterminate use; b) for the purpose of examining the question of the proper governance of research and development activities on the topic of AI.
2022/06/13
Committee: IMCOLIBE
Amendment 2450 #
Proposal for a regulation
Article 57 – paragraph 2 b (new)
2 b. The Board shall elect a Chair and two deputy Chairs from among its voting members by simple majority. The term of office of the Chair and of the deputy Chairs shall be three years, renewable once.
2022/06/13
Committee: IMCOLIBE
Amendment 2454 #
Proposal for a regulation
Article 57 – paragraph 3
3. The BoardChair shall be chaired by the Commission. The Commission shall have the following tasks: - convene the meetings of the Board and prepare theits agenda in acc; - ensure the timely perfordmance withof the tasks of the Board pursuant to this Regulation and with its rules of procedure. The Commission shall provide administrative and analytical support for the activities of the Board pursuant to this Regulation; - notify Member States and the Commission of any recommendations adopted by the Board.
2022/06/13
Committee: IMCOLIBE
Amendment 2458 #
Proposal for a regulation
Article 57 – paragraph 3 a (new)
3 a. The secretariat of the Board shall have the necessary human and financial resources to be able to perform its tasks pursuant to this Regulation.
2022/06/13
Committee: IMCOLIBE
Amendment 2460 #
Proposal for a regulation
Article 57 – paragraph 3 b (new)
3 b. The Commission shall provide administrative and analytical support for the activities of the Board pursuant to this Regulation.
2022/06/13
Committee: IMCOLIBE
Amendment 2466 #
Proposal for a regulation
Article 57 – paragraph 4
4. The Board may invite external experts and observers to attend its meetings and may hold exchanges winational authorities, such as national equality bodies, to its meetings, where the interested thirssues discussed parties to inform its activities to ane of relevance for them. The Board may also invite, where appropriate, extent. To that end the Commission may facilitate exchanges between the Board and other Union bodies, offices, agencies and advisory groupsrnal experts, and observers and interested third parties, including stakeholders, such as those referred to in Article 58, paragraph 1c, to attend its meetings and may hold exchanges with them.
2022/06/13
Committee: IMCOLIBE
Amendment 2469 #
Proposal for a regulation
Article 57 – paragraph 4 a (new)
4 a. The Board shall cooperate with Union institutions, bodies, offices, agencies and advisory groups and shall make the results of that cooperation publicly available.
2022/06/13
Committee: IMCOLIBE
Amendment 2485 #
Proposal for a regulation
Article 58 – paragraph 1 – introductory part
When providing advice and assistance to the Commission and the national supervisory authorities in the context of Article 56(2), the Board shall in particular:
2022/06/13
Committee: IMCOLIBE
Amendment 2489 #
Proposal for a regulation
Article 58 – paragraph 1 – point -a (new)
(-a) issue opinions, recommendations or written contributions with a view to ensuring the consistent implementation of this Regulation;
2022/06/13
Committee: IMCOLIBE
Amendment 2490 #
Proposal for a regulation
Article 58 – paragraph 1 – point -a a (new)
(-a a) examine, on its own initiative or on request of one of its members, any question covering the application of this Regulation and issue guidelines, recommendations and best practices with a view to ensuring the consistent implementation of this Regulation;
2022/06/13
Committee: IMCOLIBE
Amendment 2492 #
Proposal for a regulation
Article 58 – paragraph 1 – point a
(a) collect and share expertise and best practices among Member Statesin implementation of this Regulation;
2022/06/13
Committee: IMCOLIBE
Amendment 2500 #
Proposal for a regulation
Article 58 – paragraph 1 – point b
(b) contribute to uniform administrative practices in the Member States, including for the functioning of the regulatory sandboxes, as referred to in Article 53;
2022/06/13
Committee: IMCOLIBE
Amendment 2503 #
Proposal for a regulation
Article 58 – paragraph 1 – point c – introductory part
(c) issue opinions, recommendations or written contributions on matters related to the implementation of this Regulation, in consultation with relevant stakeholders, in particular
2022/06/13
Committee: IMCOLIBE
Amendment 2504 #
Proposal for a regulation
Article 58 – paragraph 1 – point c – introductory part
(c) issue opinions, recommendations or written contributions on matters related to the implementation of this Regulation, after consulting relevant stakeholders, in particular
2022/06/13
Committee: IMCOLIBE
Amendment 2514 #
Proposal for a regulation
Article 58 – paragraph 1 – point c a (new)
(c a) encourage, facilitate and support the drawing up of codes of conduct intended to foster the voluntary application to AI systems of those codes of conduct in close cooperation with relevant stakeholders in accordance with Article 69;
2022/06/13
Committee: IMCOLIBE
Amendment 2518 #
Proposal for a regulation
Article 58 – paragraph 1 – point c b (new)
(c b) cooperate with the European Data Protection Board and with the FRA to receive guidance in relation to the respect of fundamental rights, in particular the right to non-discrimination and to equal treatment, the right to privacy, confidentiality of communications and the protection of personal data;
2022/06/13
Committee: IMCOLIBE
Amendment 2527 #
Proposal for a regulation
Article 58 – paragraph 1 – point c c (new)
(c c) promote public awareness and understanding of the benefits, risks, rules and safeguards and rights in relation to the use of AI systems;
2022/06/13
Committee: IMCOLIBE
Amendment 2530 #
Proposal for a regulation
Article 58 – paragraph 1 – point c d (new)
(c d) promote the cooperation and effective bilateral and multilateral exchange of information and best practices between the national supervisory authorities;
2022/06/13
Committee: IMCOLIBE
Amendment 2532 #
Proposal for a regulation
Article 58 – paragraph 1 – point c e (new)
(c e) promote common training programmes and facilitate personnel exchanges between the national supervisory authorities and, where appropriate, with the national supervisory authorities of third countries or with international organisations;
2022/06/13
Committee: IMCOLIBE
Amendment 2537 #
Proposal for a regulation
Article 58 – paragraph 1 – point c f (new)
(c f) advise the Commission on the possible amendment of the Annexes by means of delegated act in accordance with Article 73, in particular the annex listing high-risk AI systems;
2022/06/13
Committee: IMCOLIBE
Amendment 2542 #
Proposal for a regulation
Article 58 – paragraph 1 – point c g (new)
(c g) ensure that the national supervisory authorities actively cooperate in the implementation of this Regulation;
2022/06/13
Committee: IMCOLIBE
Amendment 2550 #
Proposal for a regulation
Article 58 – paragraph 1 a (new)
When acting in the context of Article 59c on cases involving two or more Member States, the Board shall adopt binding decisions for national supervisory authorities.
2022/06/13
Committee: IMCOLIBE
Amendment 2551 #
Proposal for a regulation
Article 58 – paragraph 1 b (new)
The Board shall organise consultations with stakeholders twice a year. Such stakeholders shall include representatives from industry, start-ups and SMEs, civil society organisations such as NGOs, consumer associations, the social partners and academia, to assess the evolution of trends in technology, issues related to the implementation and the effectiveness of this Regulation, and regulatory gaps or loopholes observed in practice.
2022/06/13
Committee: IMCOLIBE
Amendment 2556 #
Proposal for a regulation
Title VI – Chapter 2 – title
2 nNational competent authorities and national supervisory authorities
2022/06/13
Committee: IMCOLIBE
Amendment 2561 #
Proposal for a regulation
Article 59 – paragraph 2
2. Each Member State shall designate a national supervisory authority among the national competent authorities. The national supervisory authority shall act as notifying authority and market surveillance authority unless a Member State has organisational and administrative reasons to designate more than one authority.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 2569 #
Proposal for a regulation
Article 59 – paragraph 4
4. Member States shall ensure that the national competent authorities are provided with adequate technical, financial and human resources, premises and infrastructure necessary to fulfil their tasks under this Regulation. In particular, national competent authorities shall have a sufficient number of personnel permanently available whose competences and expertise shall include an in-depth understanding of artificial intelligence technologies, data and data computing, personal data protection, fundamental rights, health and safety risks and knowledge of existing standards and legal requirements. Member States shall assess and update competence and resource requirements referred to in this paragraph on an annual basis.
2022/06/13
Committee: IMCOLIBE
Amendment 2583 #
Proposal for a regulation
Article 59 – paragraph 6
6. The Commission and the Board shall facilitate the exchange of experience between national competent authorities.
2022/06/13
Committee: IMCOLIBE
Amendment 2588 #
Proposal for a regulation
Article 59 – paragraph 7
7. National competent authorities may provide guidance and advice on the implementation of this Regulation, including to small-scale providers. Whenever national competent authorities intend to provide guidance and advice with regard to an AI system in areas covered by other Union legislation, the guidance shall be drafted in consultation with the competent national authorities under that Union legislation shall be consulted, as appropriate. Member States may also establish one central contact point for communication with operators, as appropriate.
2022/06/13
Committee: IMCOLIBE
Amendment 2595 #
Proposal for a regulation
Article 59 a (new)
Article 59 a Independent national supervisory authority 1. Each Member State shall establish or designate a single national supervisory authority within 3 months after the entering into force of this Regulation. 2. The national supervisory authority shall act as the lead authority and be responsible for ensuring the effective coordination between the national competent authorities regarding the implementation of this Regulation. It shall represent its Member State on the Board, in accordance with Article 57. 3. Each national supervisory authority shall act with complete independence in performing its tasks and exercising its powers in accordance with this Regulation. 4. The members of each national supervisory authority shall, in the performance of their tasks and exercise of their powers in accordance with this Regulation, remain free from external influence, whether direct or indirect, and shall neither seek nor take instructions from any other body. 5. Members of each national supervisory authority shall refrain from any action incompatible with their duties and shall not, during their term of office, engage in any incompatible occupation, whether gainful or not. 6. Each Member State shall ensure that each national supervisory authority is provided with the human, technical and financial resources, premises and infrastructure necessary for the effective performance of its tasks and exercise of its powers, including those to be carried out in the context of mutual assistance, cooperation and participation in the Board. 7. Each Member State shall ensure that each national supervisory authority chooses and has its own staff which shall be subject to the exclusive direction of the member or members of the supervisory authority concerned. 8. Each Member State shall ensure that each national supervisory authority is subject to financial control which does not affect its independence and that it has separate, public annual budgets, which may be part of the overall state or national budget. 9. Each member of the national supervisory authority shall have the qualifications, experience and skills, in particular an in-depth understanding of artificial intelligence technologies, data and data computing, personal data protection, fundamental rights, health and safety risks and knowledge of existing standards and legal requirements, to perform their duties and exercise their powers. 10. The duties of a member of the national supervisory authority shall end in the event of the expiry of the term of office, resignation or compulsory retirement, in accordance with the law of the Member State concerned. 11. A member of the national supervisory authority shall be dismissed only in cases of serious misconduct or if the member no longer fulfils the conditions required for the performance of the duties. 12. Member States shall make publicly available and communicate to the Commission and the Board, the national supervisory authority designation, and information on how it can be contacted, by [three months after the entry into force of this Regulation]. 13. For the purposes of the consistent application of the Regulation and for reasons of necessary cooperation with the market surveillance authorities, each national supervisory authority shall have at least one staff member from the market surveillance authority posted as a liaison officer to the national supervisory authority.
2022/06/13
Committee: IMCOLIBE
Amendment 2600 #
Proposal for a regulation
Article 59 b (new)
Article 59 b Tasks of the national supervisory authority 1. Without prejudice to other tasks set out under this Regulation, each national supervisory authority shall on the territory of its Member State: (a) monitor and enforce the application of this Regulation, in particular as to the upholding of the principles of Article 4a, fundamental rights of individuals and the Union values, as enshrined in Article 2 TEU; (b) promote public awareness and understanding of the risks, rules, safeguards and rights in relation to use of AI systems; (c) promote the awareness of operators of their obligations under this Regulation; (d) monitor operators’ data governance and management practices, in particular in relation to training, validation and testing datasets; (e) upon request, provide information to affected persons concerning the exercise of their rights under this Regulation and, if appropriate, cooperate with the supervisory authorities in other Member States to that end; (f) handle complaints lodged by an affected person, organisation or association in accordance with Articles 68a and 68b, and investigate, to the extent appropriate, the subject matter of the complaint and inform the complainant of the progress and the outcome of the investigation within a reasonable period, in particular if further investigation or coordination with another national supervisory authority or national competent authority is necessary; (g) assist small-scale providers and users in accordance with Article 55; (h) cooperate with, including by sharing information and providing mutual assistance to, other national supervisory authorities and national competent authorities with a view to ensuring the consistency of application and enforcement of this Regulation; (i) conduct investigations on the application of this Regulation, including on the basis of information received from another national supervisory authority, national competent authority or other public authority; (j) cooperate with other competent authorities in their fields of competence, as necessary; (k) monitor relevant developments, insofar as they have an impact on the protection of fundamental rights and the values enshrined in Article 2 TEU, in particular the development of technologies and commercial practices; (l) contribute to the activities of the Board. 2. National supervisory authorities may establish regulatory sandboxes in accordance with Article 53. 3. Each national supervisory authority shall facilitate the submission of complaints referred to in point (f) of paragraph 1 by measures such as a complaint submission form which can also be completed electronically, without excluding other means of communication. 4. The performance of the tasks of each national supervisory authority shall be free of charge for the affected person.
2022/06/13
Committee: IMCOLIBE
Amendment 2601 #
Proposal for a regulation
Article 59 c (new)
Article 59 c Cooperation and consistency In order to contribute to the consistent application of this Regulation throughout the Union, the national supervisory authorities shall cooperate with each other and, where relevant, with the market surveillance authorities and the Commission, in order to reach consensus.
2022/06/13
Committee: IMCOLIBE
Amendment 2602 #
Proposal for a regulation
Article 59 d (new)
Article 59 d Cooperation mechanism in cases involving two or more Member States 1. Each national supervisory authority shall perform its tasks and powers conferred to it in accordance with this Regulation, on the territory of its own Member State. 2. In the event of a case involving two or more national supervisory authorities, the national supervisory authority of the Member State where the provider or the user of the concerned AI system is established, or where the legal representative resides, shall be considered to be the lead national supervisory authority. 3. In case it is not clear which national supervisory authority should act as the lead authority pursuant to paragraph 2, the Board shall issue a binding decision according to Article 59e. 4. In cases referred to in paragraph 2, the relevant national supervisory authorities shall cooperate and exchange all relevant information in due time. 5. The national supervisory authorities shall, where appropriate, conduct joint operations, including joint investigations, in which members or staff of the national supervisory authorities of other Member States are involved. 6. In case of a serious disagreement between two or more national supervisory authorities, the national supervisory authorities shall notify the Board and communicate without delay all relevant information related to the case to the Board for a binding decision.
2022/06/13
Committee: IMCOLIBE
Amendment 2603 #
Proposal for a regulation
Article 59 e (new)
Article 59 e Binding decisions by the Board 1. In order to ensure the correct and consistent application of this Regulation in individual cases, the Board shall adopt a binding decision in the following cases: (a) where there are conflicting views on which of the national supervisory authorities concerned would be the lead authority pursuant to Article 59c; (b) where, in a case referred to in Article 59c(4), there is a serious disagreement between national supervisory authorities concerned regarding a matter involving two or more Member States; (c) where, in a case referred to in Article 67a, a national supervisory authority of a Member State finds that although an AI system is in compliance with this Regulation, it presents a risk to the compliance with obligations under Union or national law intended to protect fundamental rights, the principles of Article 4a, the values as enshrined in Article 2 TEU, the environment, or to other aspects of public interest protection. 2. The decisions referred to in paragraph 1, point (a) shall be adopted within one week from the referral of the subject-matter, by a two-thirds majority of the members of the Board. 3. The decisions referred to in paragraph 1, points (b) and (c) shall be adopted within one month from the referral of the subject-matter, by a two-thirds majority of the members of the Board. That period may be extended by a further month on account of the complexity of the subject-matter. The decision referred to in paragraph 1, points (b) and (c) shall be reasoned and addressed to the lead national supervisory authority and all the national supervisory authorities concerned and be binding on them. 4. Where the Board has been unable to adopt a decision within the periods referred to in paragraph 3, it shall adopt its decision within two weeks following the expiration of the second month referred to in paragraph 2 by a simple majority of the members of the Board. Where the members of the Board are split, the decision shall be adopted by the vote of its Chair. 5. The national supervisory authorities concerned shall not adopt a decision on the subject matter submitted to the Board under paragraph 1, points (b) and (c) during the periods referred to in paragraphs 3 and 4. 6. The Chair of the Board shall notify, without undue delay, the decision referred to in paragraph 1 to the national supervisory authorities concerned. It shall also inform the Commission thereof. The decision shall be published on the website of the Board without delay after the national supervisory authorities have been notified.
2022/06/13
Committee: IMCOLIBE
Amendment 2609 #
Proposal for a regulation
Title VII
EU DATABASE FOR STAND-ALONE HIGH-RISK AI SYSTEMS
2022/06/13
Committee: IMCOLIBE
Amendment 2613 #
Proposal for a regulation
Article 60 – title
EU database for stand-alone high-risk AI systems
2022/06/13
Committee: IMCOLIBE
Amendment 2617 #
Proposal for a regulation
Article 60 – paragraph 1
1. The Commission shall, in collaboration with the Member States, set up and maintain a EU database containing information referred to in paragraph 2 and 2a concerning high-risk AI systems referred to in Article 6(2) which are registered in accordance with Article 51, as well as users of any AI systems by public authorities and Union institutions, bodies, offices or agencies.
2022/06/13
Committee: IMCOLIBE
Amendment 2622 #
Proposal for a regulation
Article 60 – paragraph 2 a (new)
2 a. The data listed in Annex VIII, point (2), shall be entered into the EU database by the users, including those who are or who act on behalf of public authorities or Union institutions, bodies, offices or agencies. The Commission shall provide them with technical and administrative support.
2022/06/13
Committee: IMCOLIBE
Amendment 2625 #
Proposal for a regulation
Article 60 – paragraph 3
3. Information contained in the EU database shall be accessible to the public, user-friendly and machine-readable.
2022/06/13
Committee: IMCOLIBE
Amendment 2629 #
Proposal for a regulation
Article 60 – paragraph 4
4. The EU database shall contain personal data only insofar as necessary for collecting and processing information in accordance with this Regulation. That information shall include the names and contact details of natural persons who are responsible for registering the system and have the legal authority to represent the provider or the user.
2022/06/13
Committee: IMCOLIBE
Amendment 2633 #
Proposal for a regulation
Article 60 – paragraph 5
5. The Commission shall be the controller of the EU database. It shall also ensure to providers adequate technical and administrative support.
2022/06/13
Committee: IMCOLIBE
Amendment 2641 #
Proposal for a regulation
Article 61 – paragraph 2
2. The post-market monitoring system shall actively and systematically collect, document and analyse relevant data provided by users or collected through other sources on the performance of high- risk AI systems throughout their lifetime, and allow the provider to evaluate the continuous compliance of AI systems with the requirements set out in Title III, Chapter 2. Post-market monitoring must include continuous analysis of the AI environment, including other devices, software, and other AI systems that will interact with the AI system.
2022/06/13
Committee: IMCOLIBE
Amendment 2651 #
Proposal for a regulation
Article 62 – paragraph 1 – introductory part
1. Providers and, where users have identified a serious incident or malfunctioning, users of high-risk AI systems placed on the Union market shall report any serious incident or any malfunctioning of those systems which constitutes a breach of obligations under Union law intended to protect fundamental rights to the market surveillance authorities of the Member States where that incident or breach occurred and to the affected persons and, where the incident or breach occurs or is likely to occur in at least two Member States, to the Commission.
2022/06/13
Committee: IMCOLIBE
Amendment 2652 #
Proposal for a regulation
Article 62 – paragraph 1 – introductory part
1. Providers of high-riskand, where users have identified a serious incident or malfunctioning, users of AI systems placed on the Union market shall report any serious incident or any malfunctioning, including near misses, of those systems which constitutes a breach of obligations under Union law intended to protect fundamental rights toto the national supervisory authorities and the market surveillance authorities of the Member States where that incident or breach occurred and, where relevant, to the Commission and to the affected persons.
2022/06/13
Committee: IMCOLIBE
Amendment 2660 #
Proposal for a regulation
Article 62 – paragraph 1 – subparagraph 1
Such notification shall be made immediately after the provider has established a causal link between the AI system and the incident or malfunctioning or the reasonable likelihood of such a linkwhen an AI system is involved in an incident or malfunctioning, including near misses, and, in any event, not later than 15 day72 hours after the providers or, where applicable, the user becomes aware of the serious incident or of the malfunctioning.
2022/06/13
Committee: IMCOLIBE
Amendment 2667 #
Proposal for a regulation
Article 62 – paragraph 2 a (new)
2 a. The market surveillance authorities shall take appropriate measures within 7 days from the date on which they received the notification referred to in paragraph 1. Where the infringement takes place or is likely to take place in other Member States, the market surveillance authority shall notify the Commission, the Board and the relevant national competent authorities of these Member States.
2022/06/13
Committee: IMCOLIBE
Amendment 2670 #
Proposal for a regulation
Article 62 – paragraph 3
3. For high-risk AI systems referred to in point 5(b) of Annex III which are placed on the market or put into service by providers that are credit institutions regulated by Directive 2013/36/EU and for high-risk AI systems which are safety components of devices, or are themselves devices, covered by Regulation (EU) 2017/745 and Regulation (EU) 2017/746, the notification of serious incidents or malfunctioning for the purposes of this Regulation shall be limited to those that that constitute a breach of obligations under Union law intended to protect fundamental rights and the environment.
2022/06/13
Committee: IMCOLIBE
Amendment 2677 #
Proposal for a regulation
Article 63 – paragraph 5
5. For AI systems listed in point 1(a) in so far as the systemsthat are used for law enforcement purposes, points 6 and 7 of Annex III, Member States shall designate as market surveillance authorities for the purposes of this Regulation either the competent data protection supervisory authorities under Directive (EU) 2016/680, or Regulation 2016/679 or the national competent authorities supervising the activities of the law enforcement, immigration or asylum authorities putting into service or using those systems.
2022/06/13
Committee: IMCOLIBE
Amendment 2678 #
Proposal for a regulation
Article 63 – paragraph 5
5. For AI systems listed in point 1(a) in so far as the systems are used for law enforcement purposes, points 6 and 7 of Annex III, Member States shall designate as market surveillance authorities for the purposes of this Regulation either the competent data protection supervisory authorities under Directive (EU) 2016/680, or Regulation 2016/679 or the national competent authorities supervising the activities of the law enforcement, immigration or asylum authorities putting into service or using those systems.
2022/06/13
Committee: IMCOLIBE
Amendment 2682 #
Proposal for a regulation
Article 64 – paragraph 1
1. Access to data and documentation iIn the context of their activities, the national supervisory authorities, the market surveillance authorities, or the Commission, shall be granted full access to the training data sets, and where applicable, validation and testing datasets used by the provider or, where relevant, the user, including through application programming interfaces (‘API’) or other appropriate technical means and tools enabling remote access.
2022/06/13
Committee: IMCOLIBE
Amendment 2690 #
Proposal for a regulation
Article 64 – paragraph 2
2. Where necessary to assess the conformity of the high-risk AI system with the requirements set out in Title III, Chapter 2 and upon a reasoned request, the market surveillance authoritiesnational supervisory authority, the market surveillance authorities or, where applicable, the Commission shall be granted access to the source code of the AI system.
2022/06/13
Committee: IMCOLIBE
Amendment 2697 #
Proposal for a regulation
Article 64 – paragraph 3
3. National public authorities or bodies, which supervise or enforce the respect of obligations under Union law protecting fundamental rights in relation to the use of high-risk AI systems referred to in Annex III shall have the power to request and access any documentation created or maintained under this Regulation when access to that documentation is necessary for the fulfilment of the competences under their mandate within the limits of their jurisdiction. The relevant public authority or body shall inform the market surveillance authority of the Member State concerned of any such request.
2022/06/13
Committee: IMCOLIBE
Amendment 2698 #
Proposal for a regulation
Article 64 – paragraph 4
4. By 3 months after the entering into force of this Regulation, each Member State shall identify the public authorities or bodies referred to in paragraph 3 and make a list publicly available on the website of the national supervisory authority. Member States shall notify the list to the Commission and all other Member States and keep the list up to date.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 2700 #
Proposal for a regulation
Article 64 – paragraph 5
5. Where the documentation referred to in paragraph 3 is insufficient to ascertain whether a breach of obligations under Union law intended to protect fundamental rights has occurred, the public authority or body referred to paragraph 3 may make a reasoned request to the market surveillance authoritynational supervisory authority, the market surveillance authority, or where applicable the Commission, to organise testing of the high- risk AI system through technical means. The market surveillance authoritynational supervisory authority, the market surveillance authority or where applicable the Commission shall organise the testing with the close involvement of the requesting public authority or body within reasonable time following the request.
2022/06/13
Committee: IMCOLIBE
Amendment 2704 #
Proposal for a regulation
Article 65 – title
Procedure for dealing with AI systems presenting a risk at national level
2022/06/13
Committee: IMCOLIBE
Amendment 2705 #
Proposal for a regulation
Article 65 – paragraph 1
1. AI systems presenting a risk shall be understood as a product presenting a risk defined in Article 3, point 19 of Regulation (EU) 2019/1020 insofar as risks to the health or safety or to the protection of fundamental rights of persmeans an AI system having the potential to affect adversely fundamental rights, health and safety of persons in general, including in the workplace, protection of consumers, the environment, public security, the values enshrined in Article 2 TEU and other public interests, that are protected by the applicable Union harmonisation legislation, to a degree which goes beyond that considered reasonable and acceptable in relation to its intended purpose or under the normal or reasonably foreseeable conditions of use of the system concerned, including the duration of use and, where applicable, its putting into service, installations are concernednd maintenance requirements.
2022/06/13
Committee: IMCOLIBE
Amendment 2708 #
Proposal for a regulation
Article 65 – paragraph 1
1. AI systems presenting a risk shall be understood as a product presenting a risk defined in Article 3, point 19 of Regulation (EU) 2019/1020 insofar as risks toAI systems having the potential to affect adversely the fundamental rights of persons, their health or safety or to, as well as AI systems having the protecntion of fundamental rights of persons are concernedal to breach the principles defined in Art. 4a or the Union values as enshrined in Article 2 TEU.
2022/06/13
Committee: IMCOLIBE
Amendment 2714 #
Proposal for a regulation
Article 65 – paragraph 2 – introductory part
2. Where the market surveillance authority of a Member State has sufficient reasons to consider that an AI system presents a risk as referred to in paragraph 1to the health and safety of persons, they shall carry out an evaluation of the AI system concerned in respect of its compliance with all the requirements and obligations laid down in this Regulation. When risks to the protection of fundamental rights are present, the market surveillance authority shall also inform the relevant national public authorities or bodies referred to in Article 64(3). The relevant operators shall cooperate as necessary with the market surveillance authorities and the other national public authorities or bodies referred to in Article 64(3).
2022/06/13
Committee: IMCOLIBE
Amendment 2717 #
Proposal for a regulation
Article 65 – paragraph 2 – subparagraph 1
Where, in the course of that evaluation, the market surveillance authority or, where relevant, the national public authority referred to in Article 64(3) finds that the AI system does not comply with the requirements and obligations laid down in this Regulation, it shall without delay require the relevant operator to take all appropriate corrective actions to bring the AI system into compliance, to withdraw the AI system from the market, or to recall it within a reasonable period, commensurate with the nature of the risk, as it may prescribe, and in any case no later than 15 working days.
2022/06/13
Committee: IMCOLIBE
Amendment 2719 #
Proposal for a regulation
Article 65 – paragraph 2 a (new)
2 a. Where the national supervisory authority has sufficient reasons to consider that an AI system presents a risk to the protection of fundamental rights, the principles as defined in Art 4a or the Union values, as enshrined in Article 2 TEU, they shall carry out an evaluation of the AI system concerned in respect of its compliance with all the requirements and obligations laid down in this Regulation.
2022/06/13
Committee: IMCOLIBE
Amendment 2720 #
Proposal for a regulation
Article 65 – paragraph 2 b (new)
2 b. Where, in the course of that evaluation, the market surveillance authority or, where relevant, the national supervisory authority finds that the AI system does not comply with the requirements and obligations laid down in this Regulation, it shall without delay require the relevant operator to take all appropriate corrective actions to bring the AI system into compliance, to withdraw the AI system from the market, or to recall it within a reasonable period, commensurate with the nature of the risk, as it may prescribe, and in any case no later than 15 working days. The market surveillance authority shall inform the relevant notified body accordingly. Article 18 of Regulation (EU) 2019/1020 shall apply to the measures referred to in the first subparagraph.
2022/06/13
Committee: IMCOLIBE
Amendment 2721 #
Proposal for a regulation
Article 65 – paragraph 3
3. Where the market surveillance authority or, where relevant, the national supervisory authority, considers that non- compliance is not restricted to its national territory, it shall inform the Board, the Commission and the other Member States’ competent authorities of the results of the evaluation and of the actions which it has required the operator to take.
2022/06/13
Committee: IMCOLIBE
Amendment 2724 #
Proposal for a regulation
Article 65 – paragraph 5
5. Where the operator of an AI system does not take adequate corrective action within the period referred to in paragraph 2b, the market surveillance authority or, where relevant, the national supervisory authority, shall take all appropriate provisional measures to prohibit or restrict the AI system's being made available on its national market or put into service, to withdraw the productAI system from that market or to recall it. That authority shall immediately inform the Commission and, the oBoard and ther Member States, without delay’ market surveillance authorities, of those measures.
2022/06/13
Committee: IMCOLIBE
Amendment 2725 #
Proposal for a regulation
Article 65 – paragraph 5
5. Where the operator of an AI system does not take adequate corrective action within the period referred to in paragraph 2, the market surveillance authority shall take all appropriate provisional measures to prohibit or restrict the AI system's being made available on its national market or put into service, to withdraw the productAI system from that market or to recall it. That authority shall immediately inform the Commission, the Board and the other Member States, without delay, of those measures.
2022/06/13
Committee: IMCOLIBE
Amendment 2728 #
Proposal for a regulation
Article 65 – paragraph 6 – point a
(a) a failure of the AI system to meet requirements set out in Title III, Chapter 2and obligations set out in this Regulation;
2022/06/13
Committee: IMCOLIBE
Amendment 2733 #
Proposal for a regulation
Article 65 – paragraph 7
7. The market surveillance authorities of the Member States other than the market surveillancer, where applicable, the national supervisory authorityies of the other Member State initiating the procedures shall without delay inform the Commission and the other Member States, the Board and the authority initiating the procedure of any measures adopted and of any additional information at their disposal relating to the non-compliance of the AI system concerned, and, in the event of disagreement with the notified national measure, of their objections.
2022/06/13
Committee: IMCOLIBE
Amendment 2736 #
Proposal for a regulation
Article 65 – paragraph 8
8. Where, within three months of receipt of the information referred to in paragraph 5, no objection has been raised by either a Member Statemarket surveillance authority, a national supervisory authority, or the Commission in respect of a provisional measure taken by a Member Statemarket surveillance authority or a national supervisory authority , that measure shall be deemed justified. This is without prejudice to the procedural rights of the concerned operator in accordance with Article 18 of Regulation (EU) 2019/1020.
2022/06/13
Committee: IMCOLIBE
Amendment 2745 #
Proposal for a regulation
Article 66 a (new)
Article 66 a Requests for Commission intervention 1. Where market surveillance authorities have reasons to suspect that an infringement of this Regulation by a provider or a user of a high-risk AI system is liable to compromise the health or safety or the fundamental rights of affected persons, the environment or the Union values enshrined in Article 2 TEU, amounts to a widespread infringement or a widespread infringement with a Union dimension, or affects or is likely to affect at least 45 million citizens in the Union, the market surveillance authority may request the Commission to take the necessary investigatory and enforcement measures to ensure compliance with this Regulation. Such request shall set out the reasons for the Commission to intervene. 2. Prior to requesting the Commission to intervene, the market surveillance authority shall notify the Board, which shall issue within 7 days a non-binding opinion on the request for the Commission to intervene. The market surveillance authority shall take into account the non-binding opinion of the Board before sending its request to the Commission.
2022/06/13
Committee: IMCOLIBE
Amendment 2747 #
Proposal for a regulation
Article 67 – title
Compliant AI systems which present a risk to the health and safety
2022/06/13
Committee: IMCOLIBE
Amendment 2749 #
Proposal for a regulation
Article 67 – paragraph 1
1. Where, having performed an evaluation under Article 65, in full cooperation with the relevant national public authority referred to in Article 64(3),the market surveillance authority of a Member State finds that although an AI system is in compliance with this Regulation, it presents a risk to the health or safety of persons, to the compliance with obligations under Union or national law intended to protect fundamental rights, environment, European values as enshrined in Article 2 TEU or to other aspects of public interest protection, it shall require the relevant operator to take all appropriate measures to ensure that the AI system concerned, when placed on the market or put into service, no longer presents that risk, to withdraw the AI system from the market or to recall it within a reasonable period, commensurate with the nature of the risk, as it may prescribe.
2022/06/13
Committee: IMCOLIBE
Amendment 2750 #
Proposal for a regulation
Article 67 – paragraph 1
1. Where, having performed an evaluation under Article 65, the market surveillance authority of a Member State finds that although an AI system is in compliance with this Regulation, it presents a risk to the health or safety of persons, to the compliance with obligations under Union or national law intended to protect fundamental rights or to other aspects of public interest protection, it shall require the relevant operator to take all appropriate measures to ensure that the AI system concerned, when placed on the market or put into service, no longer presents that risk, to withdraw the AI system from the market or to recall it within a reasonable period, commensurate with the nature of the risk, as it may prescribe.
2022/06/13
Committee: IMCOLIBE
Amendment 2754 #
Proposal for a regulation
Article 67 – paragraph 3
3. The Member Statemarket surveillance authority shall immediately inform the Commission, the Board and the other Member States’ market surveillance authorities. That information shall include all available details, in particular the data necessary for the identification of the AI system concerned, the origin and the supply chain of the AI system, the nature of the risk involved and the nature and duration of the national measures taken.
2022/06/13
Committee: IMCOLIBE
Amendment 2759 #
Proposal for a regulation
Article 67 – paragraph 4
4. The Commission shall without delay enter into consultation with the Member Statmarket surveillance authorities and the relevant operator and shall evaluate the national measures taken. On the basis of the results of that evaluation, the Commission shall decide whether the measure is justified or not and, where necessary, propose appropriate measures.
2022/06/13
Committee: IMCOLIBE
Amendment 2761 #
Proposal for a regulation
Article 67 – paragraph 5
5. The Commission shall address its decision to the Member Statemarket surveillance authorities and communicate it to them and to the relevant operators.
2022/06/13
Committee: IMCOLIBE
Amendment 2765 #
Proposal for a regulation
Article 67 a (new)
Article 67 a Compliant AI systems which present a risk to the fundamental rights 1. Where, having performed an evaluation under Article 65, the national supervisory authority of a Member State finds that although an AI system is in compliance with this Regulation, it presents a risk to the compliance with obligations under Union or national law intended to protect fundamental rights, the principles of Article 4a, the values as enshrined in Article 2 TEU, the environment, or to other aspects of public interest protection, it shall require the relevant operator to take all appropriate measures to ensure that the AI system concerned, when placed on the market or put into service, no longer presents that risk, to withdraw the AI system from the market or to recall it within a reasonable period, commensurate with the nature of the risk, as it may prescribe. 2. The provider or other relevant operators shall ensure that corrective action is taken in respect of all the AI systems concerned that they have made available on the market throughout the Union within the timeline prescribed by the national supervisory authority of the Member State referred to in paragraph 1. 3. The national supervisory authority shall immediately inform the Board, the Commission and the market surveillance authority. That information shall include all available details, in particular the data necessary for the identification of the AI system concerned, the origin and the supply chain of the AI system, the nature of the risk involved and the nature and duration of the national measures taken. 4. The Board shall without delay enter into consultation with the relevant operator and shall evaluate the national measures taken. On the basis of the results of that evaluation, the Board shall decide whether the measure is justified or not and, where necessary, propose appropriate measures. 5. The Board shall address its decision to the national supervisory authority and to the relevant operators.
2022/06/13
Committee: IMCOLIBE
Amendment 2773 #
Proposal for a regulation
Article 68 a (new)
Article 68 a Right to lodge a complaint 1. Affected persons, affected by an AI system falling within the scope of this Regulation, shall have the right to lodge a complaint against the providers or users of such AI system, with the national supervisory authority of the Member State where they have their habitual place of residence or place of work or where the alleged infringement took place, if they consider that their fundamental rights, health or safety have been breached. 2. Affected persons shall have a right to be heard in the complaint handling procedure and in the context of any investigations conducted by the national supervisory authority as a result of their complaint. 3. The national supervisory authority with which the complaint has been lodged shall inform the complainants about the progress and outcome of their complaint. In particular, the national supervisory authority shall take all the necessary actions to follow up on the complaints it receives and, within three months of the reception of a complaint, give the complainant a preliminary response indicating the measures it intends to take and the next steps in the procedure, if any. 4. The national supervisory authority shall take a decision on the complaint, without delay and no later than six months after the date on which the complaint was lodged.
2022/06/13
Committee: IMCOLIBE
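The complaint-handling timeline proposed in the Article 68a amendment above amounts to two deadlines counted in calendar months from the lodging date: a preliminary response within three months and a decision within six. A minimal sketch of that computation (the helper names are hypothetical, for illustration only; stdlib `datetime` has no month arithmetic, so a small month-adder is included):

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add calendar months, clamping the day to the end of the target month."""
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    # clamp day (e.g. 30 Nov + 3 months -> 28 Feb in a non-leap year)
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30,
                     31, 31, 30, 31, 30, 31][month - 1]
    return date(year, month, min(d.day, days_in_month))

def complaint_deadlines(lodged: date) -> dict:
    """Deadlines under the proposed Article 68a: preliminary response within
    three months and a final decision within six months of lodging."""
    return {
        "preliminary_response_by": add_months(lodged, 3),
        "decision_by": add_months(lodged, 6),
    }
```

For a complaint lodged on 15 January 2022, the sketch yields a preliminary-response deadline of 15 April 2022 and a decision deadline of 15 July 2022.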
Amendment 2775 #
Proposal for a regulation
Article 68 a (new)
Article 68 a Commission fees 1. The Commission shall charge fees to market surveillance authorities when the Commission initiates proceedings in accordance with Article 68a(1)(c). 2. The overall amount of the fee shall cover the estimated costs the Commission incurs in relation to proceedings carried out under this Regulation, in particular costs related to the investigation and enforcement measures pursuant to Chapter 4 of Title VIII. 3. The Commission shall lay down in a delegated act, adopted pursuant to Article 73, the detailed methodology and procedures for: (a) the determination of the estimated costs referred to in paragraph 2 and the necessary payment modalities. 4. The fees charged pursuant to paragraph 1 shall constitute external assigned revenue in accordance with Article 21(5) of Regulation (EU, Euratom) No 2018/1046 of the European Parliament and of the Council. 5. The Commission shall report annually to the European Parliament and to the Council on the overall amount of the costs incurred for the fulfilment of the tasks under this Regulation and the total amount of the fees charged in the preceding year.
2022/06/13
Committee: IMCOLIBE
Amendment 2778 #
Proposal for a regulation
Article 68 b (new)
Article 68 b Representation of affected persons 1. An affected person shall have the right to mandate a not-for-profit body, organisation or association that has been properly constituted in accordance with the law of a Member State, has statutory objectives which are in the public interest, and is active in the field of the protection of rights and freedoms of affected persons, with regard to the protection of their fundamental rights, to lodge the complaint on their behalf, to exercise the rights referred to in Article 68a on his or her behalf, and to exercise the right to receive compensation referred to in Articles 70a and 71 on his or her behalf. 2. Any body, organisation or association referred to in paragraph 1 of this Article, independently of an affected person’s mandate, has the right to lodge, in that Member State, a complaint with the national supervisory authority which is competent pursuant to Article 68a, if it considers that the rights of affected persons under this Regulation have been infringed as a result of them being subject to AI systems.
2022/06/13
Committee: IMCOLIBE
Amendment 2781 #
Proposal for a regulation
Article 68 b (new)
Article 68 b Representation of affected persons or groups of persons 1. Without prejudice to Directive 2020/1828/EC, the person or groups of persons harmed by AI systems shall have the right to mandate a not-for-profit body, organisation or association which has been properly constituted in accordance with the law of a Member State, has statutory objectives which are in the public interest, and is active in the field of the protection of rights and freedoms impacted by AI to lodge the complaint on his, her or their behalf, and to exercise the rights referred to in this Regulation on his, her or their behalf. 2. Without prejudice to Directive 2020/1828/EC, the body, organisation or association referred to in paragraph 1 shall have the right to exercise the rights established in this Regulation independently of a mandate by a person or groups of persons if it considers that a provider or a user has infringed any of the rights or obligations set out in this Regulation.
2022/06/13
Committee: IMCOLIBE
Amendment 2783 #
Proposal for a regulation
Article 68 c (new)
Article 68 c Amendment to Directive 2020/1828/EC on Representative Actions for the Protection of the Collective Interests of Consumers The following is added to Annex I of Directive 2020/1828/EC on Representative actions for the protection of the collective interests of consumers: “Regulation xxxx/xxxx of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts”.
2022/06/13
Committee: IMCOLIBE
Amendment 2785 #
Proposal for a regulation
Article 68 d (new)
Article 68 d Reporting of breaches and protection of reporting persons Directive (EU) 2019/1937 of the European Parliament and of the Council shall apply to the reporting of breaches of this Regulation and the protection of persons reporting such breaches.
2022/06/13
Committee: IMCOLIBE
Amendment 2797 #
Proposal for a regulation
Article 70 – paragraph 1 – introductory part
1. National supervisory authorities, national competent authorities and notified bodies involved in the application of this Regulation shall respect the confidentiality of information and data obtained in carrying out their tasks and activities in such a manner as to protect, in particular:
2022/06/13
Committee: IMCOLIBE
Amendment 2806 #
Proposal for a regulation
Article 70 – paragraph 1 a (new)
1 a. The Commission, the Board, national supervisory authorities, national competent authorities and notified bodies involved in the application of this Regulation shall put in place adequate cybersecurity and organisational measures to protect the security and confidentiality of the information and data obtained in carrying out their tasks and activities.
2022/06/13
Committee: IMCOLIBE
Amendment 2809 #
Proposal for a regulation
Article 70 – paragraph 2 – introductory part
2. Without prejudice to paragraphs 1 and 1a, information exchanged on a confidential basis betweenamong the national competentsupervisory authorities and between, national competent authorities and the Commission shall not be disclosed without the prior consultation of the originating national competent authority and the user when high-risk AI systems referred to in points 1, 6 and 7 of Annex III are used by law enforcement, immigration or asylum authorities, when such disclosure would jeopardise public and national security interests.
2022/06/13
Committee: IMCOLIBE
Amendment 2812 #
Proposal for a regulation
Article 70 a (new)
Article 70 a Administrative fines 1. Each national supervisory authority shall ensure that the imposition of administrative fines pursuant to this Article in respect of infringements of this Regulation shall in each individual case be effective, proportionate and dissuasive. 2. When deciding whether to impose an administrative fine and deciding on the amount of the administrative fine in each individual case due regard shall be given to the following: (a) the nature, gravity and duration of the infringement taking into account the nature, scope or purpose of the processing concerned as well as, where appropriate, the number of affected persons and the level of harm suffered by them; (b) the intentional or negligent character of the infringement; (c) any action taken by the operator to mitigate the harm suffered by the users or the affected persons; (d) the degree of responsibility of the operator taking into account the technical and organisational measures implemented by them; (e) any relevant previous infringements by the operator; (f) the degree of cooperation with the national supervisory authority, in order to remedy the infringement and mitigate the possible adverse effects of the infringement, including compliance with any of the measures previously ordered by the national supervisory authority with regard to the same subject matter; (g) the manner in which the infringement became known to the national supervisory authority, in particular whether, and if so to what extent, the operator notified the infringement; (h) adherence to approved codes of conduct or approved certification mechanisms; and (i) any other aggravating or mitigating factor applicable to the circumstances of the case, such as financial benefits gained, or losses avoided, directly or indirectly, from the infringement. 3. If an operator, intentionally or negligently, infringes several provisions of this Regulation, the total amount of the administrative fine shall not exceed the amount specified for the gravest infringement. 4. The non-compliance of the AI system with the prohibition of the artificial intelligence practices referred to in Article 5 shall be subject to administrative fines of up to 50 000 000 EUR or, if the offender is a company, up to 10% of its total worldwide annual turnover for the preceding financial year, whichever is higher. 5. The non-compliance of the AI system with the requirements laid down in Article 10 shall be subject to administrative fines of up to 40 000 000 EUR or, if the offender is a company, up to 8 % of its total worldwide annual turnover for the preceding financial year, whichever is higher. 6. The non-compliance of the AI system with any requirements or obligations under this Regulation, other than those laid down in Articles 5 and 10, shall be subject to administrative fines of up to 30 000 000 EUR or, if the offender is a company, up to 6 % of its total worldwide annual turnover for the preceding financial year, whichever is higher. 7. The supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request shall be subject to administrative fines of up to 20 000 000 EUR or, if the offender is a company, up to 4 % of its total worldwide annual turnover for the preceding financial year, whichever is higher. 8. Without prejudice to the corrective powers of national supervisory authorities, each Member State may lay down the rules on whether and to what extent administrative fines may be imposed on public authorities and bodies established in that Member State. 9. The exercise by the national supervisory authority of its powers under this Article shall be subject to appropriate procedural safeguards in accordance with Union and Member State law, including effective judicial remedy and due process. 10. Where the legal system of the Member State does not provide for administrative fines, this Article may be applied in such a manner that the fine is initiated by the national supervisory authority and imposed by competent national courts, while ensuring that those legal remedies are effective and have an equivalent effect to the administrative fines imposed by national supervisory authorities. In any event, the fines imposed shall be effective, proportionate and dissuasive. Those Member States shall notify to the Commission the provisions of their laws which they adopt pursuant to this paragraph by [3 months after entry into force] and, without delay, any subsequent amendment law or amendment affecting them.
2022/06/13
Committee: IMCOLIBE
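The fine ceilings in the proposed Article 70a above all follow the same pattern: a fixed amount or, for companies, a percentage of total worldwide annual turnover for the preceding financial year, whichever is higher. A minimal sketch of that computation (the tier names and function below are hypothetical, for illustration only, and are not part of the amendment text):

```python
# Ceilings per infringement tier: (fixed cap in EUR, share of worldwide turnover)
FINE_TIERS = {
    "article_5_prohibited_practices": (50_000_000, 0.10),  # paragraph 4
    "article_10_data_governance":     (40_000_000, 0.08),  # paragraph 5
    "other_requirements":             (30_000_000, 0.06),  # paragraph 6
    "incorrect_information":          (20_000_000, 0.04),  # paragraph 7
}

def max_fine(tier: str, is_company: bool, worldwide_turnover: float = 0.0) -> float:
    """Ceiling of the administrative fine for a given infringement tier.

    For companies the ceiling is the higher of the fixed amount and the
    percentage of total worldwide annual turnover for the preceding
    financial year; otherwise only the fixed amount applies.
    """
    fixed_cap, pct = FINE_TIERS[tier]
    if is_company:
        return max(fixed_cap, pct * worldwide_turnover)
    return fixed_cap
```

For example, a company with a EUR 1 billion worldwide turnover infringing Article 5 would face a ceiling of max(50 000 000, 10% × 1 000 000 000) = EUR 100 000 000.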
Amendment 2819 #
Proposal for a regulation
Article 71 – paragraph 1
1. In compliance with the terms and conditions laid down in this Regulation, Member States shall lay down the rules on penalties, including administrative fines, applicable to infringements of this Regulation, in particular for infringements which are not subject to administrative fines pursuant to Article 70a, and shall take all measures necessary to ensure that they are properly and effectively implemented. The penalties provided for shall be effective, proportionate, and dissuasive. They shall take into particular account the interests of small-scale providers and start-ups and their economic viability.
2022/06/13
Committee: IMCOLIBE
Amendment 2824 #
Proposal for a regulation
Article 71 – paragraph 2
2. The Member States shall notify the Commission[by 3 months following the date of entry into force of this Regulation] the Commission and the Board of those rules and of those measures and shall notify it, without delay, of any subsequent amendment affecting them.
2022/06/13
Committee: IMCOLIBE
Amendment 2825 #
Proposal for a regulation
Article 71 – paragraph 2
2. TWithin [three months following the entry into force of this Regulation], the Member States shall notify the Commission of those rules and of those measures and shall notify it, without delay, of any subsequent amendment affecting them.
2022/06/13
Committee: IMCOLIBE
Amendment 2828 #
Proposal for a regulation
Article 71 – paragraph 2 a (new)
2 a. The non-compliance of the AI system with the prohibition of the practices referred to in Article 5 shall be subject to administrative fines of up to 50 000 000 EUR or, if the offender is a company, up to 10% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
2022/06/13
Committee: IMCOLIBE
Amendment 2829 #
Proposal for a regulation
Article 71 – paragraph 3
3. The following infringements shall be subject to administrative fines of up to 30 000 000 EUR or, if the offender is a company, up to 6 % of its total worldwide annual turnover for the preceding financial year, whichever is higher: (a) non-compliance with the prohibition of the artificial intelligence practices referred to in Article 5; (b) non-compliance of the AI system with the requirements laid down in Article 10.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 2832 #
Proposal for a regulation
Article 71 – paragraph 3 – introductory part
3. The following infringementsnon-compliance of the AI system with the requirements laid down in Article 10 shall be subject to administrative fines of up to 340 000 000 EUR or, if the offender is a company, up to 68 % of its total worldwide annual turnover for the preceding financial year, whichever is higher: .
2022/06/13
Committee: IMCOLIBE
Amendment 2836 #
Proposal for a regulation
Article 71 – paragraph 3 – point a
(a) non-compliance with the prohibition of the artificial intelligence practices referred to in Article 5;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 2844 #
Proposal for a regulation
Article 71 – paragraph 3 – point b
(b) non-compliance of the AI system with the requirements laid down in Article 10.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 2846 #
Proposal for a regulation
Article 71 – paragraph 4
4. The non-compliance of the AI system with any requirements or obligations under this Regulation, other than those laid down in Articles 5 and 10, shall be subject to administrative fines of up to 20 000 000 EUR or, if the offender is a company, up to 4 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 2849 #
Proposal for a regulation
Article 71 – paragraph 4
4. The non-compliance of the AI system with any requirements or obligations under this Regulation, other than those laid down in Articles 5 and 10, shall be subject to administrative fines of up to 230 000 000 EUR or, if the offender is a company, up to 46 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.
2022/06/13
Committee: IMCOLIBE
Amendment 2854 #
Proposal for a regulation
Article 71 – paragraph 5
5. The supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request shall be subject to administrative fines of up to 10 000 000 EUR or, if the offender is a company, up to 2 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 2858 #
Proposal for a regulation
Article 71 – paragraph 5
5. The supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request shall be subject to administrative fines of up to 120 000 000 EUR or, if the offender is a company, up to 24 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.
2022/06/13
Committee: IMCOLIBE
Amendment 2861 #
Proposal for a regulation
Article 71 – paragraph 6
6. When deciding on the amount of the administrative fine in each individual case, all relevant circumstances of the specific situation shall be taken into account and due regard shall be given to the following: (a) the nature, gravity and duration of the infringement and of its consequences; (b) whether administrative fines have been already applied by other market surveillance authorities to the same operator for the same infringement. (c) the size and market share of the operator committing the infringement;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 2865 #
Proposal for a regulation
Article 71 – paragraph 6 – point c
(c) the size and market share of the operator committing the infringement;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 2875 #
Proposal for a regulation
Article 71 – paragraph 7
7. Each Member State shall lay down rules on whether and to what extent administrative fines may be imposed on public authorities and bodies established in that Member State.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 2877 #
Proposal for a regulation
Article 71 – paragraph 8
8. Depending on the legal system of the Member States, the rules on administrative fines may be applied in such a manner that the fines are imposed by competent national courts of other bodies as applicable in those Member States. The application of such rules in those Member States shall have an equivalent effect.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 2885 #
Proposal for a regulation
Article 72 – paragraph 1 – point a
(a) the nature, gravity and duration of the infringement and of its consequences, including to affected persons;
2022/06/13
Committee: IMCOLIBE
Amendment 2886 #
Proposal for a regulation
Article 72 – paragraph 1 – point a a (new)
(a a) any action taken by the Union institution, agency or body to mitigate the harm;
2022/06/13
Committee: IMCOLIBE
Amendment 2892 #
Proposal for a regulation
Article 72 – paragraph 1 – point c a (new)
(c a) the manner in which the infringement became known to the European Data Protection Supervisor, in particular whether, and if so, to what extent, the Union institution, agency or body notified the infringement.
2022/06/13
Committee: IMCOLIBE
Amendment 2894 #
Proposal for a regulation
Article 72 – paragraph 2 – introductory part
2. The following infringementsnon-compliance with the prohibition of the artificial intelligence practices referred to in Article 5 shall be subject to administrative fines of up to 1 000 000 EUR; 2a. The non-compliance of the AI system with the requirements laid down in Article 10 shall be subject to administrative fines of up to 5700 000 EUR: .
2022/06/13
Committee: IMCOLIBE
Amendment 2896 #
Proposal for a regulation
Article 72 – paragraph 2 – introductory part
2. The following infringementsnon-compliance with the prohibition of the artificial intelligence practices referred to in Article 5 shall be subject to administrative fines of up to 51 000 000 EUR:
2022/06/13
Committee: IMCOLIBE
Amendment 2899 #
Proposal for a regulation
Article 72 – paragraph 2 – point a
(a) non-compliance with the prohibition of the artificial intelligence practices referred to in Article 5;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 2900 #
Proposal for a regulation
Article 72 – paragraph 2 – point a
(a) non-compliance with the prohibition of the artificial intelligence practices referred to in Article 5;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 2902 #
Proposal for a regulation
Article 72 – paragraph 2 – point b
(b) non-compliance of the AI system with the requirements laid down in Article 10.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 2903 #
Proposal for a regulation
Article 72 – paragraph 2 – point b
(b) non-compliance of the AI system with the requirements laid down in Article 10.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 2905 #
Proposal for a regulation
Article 72 – paragraph 2 a (new)
2 a. The non-compliance of the AI system with the requirements laid down in Article 10 shall be subject to administrative fines of up to 700 000 EUR.
2022/06/13
Committee: IMCOLIBE
Amendment 2909 #
Proposal for a regulation
Article 72 – paragraph 3
3. The non-compliance of the AI system with any requirements or obligations under this Regulation, other than those laid down in Articles 5 and 10, shall be subject to administrative fines of up to 2500 000 EUR.
2022/06/13
Committee: IMCOLIBE
Amendment 2911 #
Proposal for a regulation
Article 72 – paragraph 3
3. The non-compliance of the AI system with any requirements or obligations under this Regulation, other than those laid down in Articles 5 and 10, shall be subject to administrative fines of up to 2500 000 EUR.
2022/06/13
Committee: IMCOLIBE
Amendment 2912 #
Proposal for a regulation
Article 72 – paragraph 5
5. The rights of defence of the parties concerned shall be fully respected in the proceedings. They shall be entitled to have access to the European Data Protection Supervisor’s file, subject to the legitimate interest of individuals or undertakings in the protection of their personal data or business secrets.
2022/06/13
Committee: IMCOLIBE
Amendment 2913 #
Proposal for a regulation
Article 72 – paragraph 5
5. The rights of defence of the parties concerned shall be fully respected in the proceedings. They shall be entitled to have access to the European Data Protection Supervisor’s file, subject to the legitimate interest of individuals or undertakings in the protection of their personal data or business secrets.
2022/06/13
Committee: IMCOLIBE
Amendment 2915 #
Proposal for a regulation
Article 72 – paragraph 6
6. Funds collected by imposition of fines in this Article shall be the income ofcontribute to the general budget of the Union.
2022/06/13
Committee: IMCOLIBE
Amendment 2916 #
Proposal for a regulation
Article 72 – paragraph 6 a (new)
6 a. The European Data Protection Supervisor shall, on an annual basis, notify the Board of the fines it has imposed pursuant to this Article.
2022/06/13
Committee: IMCOLIBE
Amendment 2917 #
Proposal for a regulation
Article 73 – paragraph 2
2. The delegation of power referred to in Article 4, Article 7(1), Article 11(3), Article 43(5) and (6, Article 48(5) and Article 48(5)68a shall be conferred on the Commission for an indeterminate period of time from [entering into force of the Regulation].
2022/06/13
Committee: IMCOLIBE
Amendment 2921 #
Proposal for a regulation
Article 73 – paragraph 3
3. The delegation of power referred to in Article 4, Article 7(1), Article 11(3), Article 43(5) and (6, Article 48(5) and Article 48(5)68a may be revoked at any time by the European Parliament or by the Council. A decision of revocation shall put an end to the delegation of power specified in that decision. It shall take effect the day following that of its publication in the Official Journal of the European Union or at a later date specified therein. It shall not affect the validity of any delegated acts already in force.
2022/06/13
Committee: IMCOLIBE
Amendment 2932 #
Proposal for a regulation
Article 73 – paragraph 5
5. Any delegated act adopted pursuant to Article 4, Article 7(1), Article 11(3), Article 43(5) and (6) and, Article 48(5) and 68d shall enter into force only if no objection has been expressed by either the European Parliament or the Council within a period of three months of notification of that act to the European Parliament and the Council or if, before the expiry of that period, the European Parliament and the Council have both informed the Commission that they will not object. That period shall be extended by three months at the initiative of the European Parliament or of the Council.
2022/06/13
Committee: IMCOLIBE
Amendment 2943 #
Proposal for a regulation
Article 83
AI systems already placed on the market or put into service 1. This Regulation shall not apply to the AI systems which are components of the large-scale IT systems established by the legal acts listed in Annex IX that have been placed on the market or put into service before [12 months after the date of application of this Regulation referred to in Article 85(2)], unless the replacement or amendment of those legal acts leads to a significant change in the design or intended purpose of the AI system or AI systems concerned. The requirements laid down in this Regulation shall be taken into account, where applicable, in the evaluation of each large-scale IT systems established by the legal acts listed in Annex IX to be undertaken as provided for in those respective acts. 2. This Regulation shall apply to the high-risk AI systems, other than the ones referred to in paragraph 1, that have been placed on the market or put into service before [date of application of this Regulation referred to in Article 85(2)], only if, from that date, those systems are subject to significant changes in their design or intended purpose. Article 83 deleted
2022/06/13
Committee: IMCOLIBE
Amendment 2953 #
Proposal for a regulation
Article 83 – paragraph 1 – subparagraph 1
The requirements laid down in this Regulation shall be taken into account, where applicable,apply in the evaluation of each large-scale IT systems established by the legal acts listed in Annex IX to be undertaken as provided for in those respective acts.
2022/06/13
Committee: IMCOLIBE
Amendment 2960 #
Proposal for a regulation
Article 83 – paragraph 2
2. This Regulation shall apply to the high-risk AI systems, other than the ones referred to in paragraph 1, that have been placed on the market or put into service before [date of application of this Regulation referred to in Article 85(2)], only if, from that date, those systems are subject to significant changes in their design or intended purpose.
2022/06/13
Committee: IMCOLIBE
Amendment 2965 #
Proposal for a regulation
Article 84 – paragraph 1
1. The Commission shall assess the need for amendment of the list in Annex III, including the extension of existing area headings or addition of new area headings, the list of prohibited practices in Article 5, and the list of AI systems requiring additional transparency measures, once a year following the entry into force of this Regulation.
2022/06/13
Committee: IMCOLIBE
Amendment 2971 #
Proposal for a regulation
Article 84 – paragraph 1
1. The Commission shall assess the need for amendment of the list in Annex III once a yearannually following the entry into force of this Regulation and following a recommendation of the Board.
2022/06/13
Committee: IMCOLIBE
Amendment 2975 #
Proposal for a regulation
Article 84 – paragraph 3 – point b
(b) the state of penalties, and notably administrative fines as referred to in Articles 71(1),0a and 71 applied by national supervisory authorities and Member States to infringements of the provisions of this Regulation.
2022/06/13
Committee: IMCOLIBE
Amendment 2985 #
Proposal for a regulation
Article 84 – paragraph 6
6. In carrying out the evaluations and reviews referred to in paragraphs 1 to 4 the Commission shall take into account the positions and findings of the Board, of the European Parliament, of the Council, and of equality bodies and other relevant bodies or sources, and shall consult relevant external stakeholders, in particular those potentially affected by the AI system, as well as stakeholders from academia and civil society.
2022/06/13
Committee: IMCOLIBE
Amendment 2990 #
Proposal for a regulation
Article 84 – paragraph 7
7. The Commission shall, if necessary, submit appropriate proposals to amend this Regulation, in particular taking into account developments in technology, the effect of AI systems on health and safety, fundamental rights, the environment, equality, and accessibility for persons with disabilities, and in the light of the state of progress in the information society.
2022/06/13
Committee: IMCOLIBE
Amendment 2997 #
Proposal for a regulation
Article 84 – paragraph 7 a (new)
7 a. To guide the evaluations and reviews referred to in paragraphs 1 to 4, the Board shall undertake to develop an objective and participative methodology for the evaluation of risk level based on the criteria outlined in the relevant articles and inclusion of new systems in: the list in Annex III, including the extension of existing area headings or addition of new area headings; the list of prohibited practices in Article 5; and the list of AI systems requiring additional transparency measures.
2022/06/13
Committee: IMCOLIBE
Amendment 3010 #
Proposal for a regulation
Annex I
ARTIFICIAL INTELLIGENCE TECHNIQUES AND APPROACHESreferred to in Article 3, point 1 (a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning; (b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems; (c) Statistical approaches, Bayesian estimation, search and optimization methods.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 3012 #
Proposal for a regulation
Annex I
ARTIFICIAL INTELLIGENCE TECHNIQUES AND APPROACHESreferred to in Article 3, point 1 (a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning; (b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems; (c) Statistical approaches, Bayesian estimation, search and optimization methods.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 3042 #
Proposal for a regulation
Annex III – title
INDICATIVE LIST OF HIGH-RISK AI SYSTEMS REFERRED TO IN ARTICLE 6(2)
2022/06/13
Committee: IMCOLIBE
Amendment 3053 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – introductory part
1. Biometric identification and categorisation of natural personsAI systems which use biometric or biometrics-based data:
2022/06/13
Committee: IMCOLIBE
Amendment 3058 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a
(a) AI systems intended to be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 3060 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a
(a) AI systems intended tothat are or may be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons, including in workplaces, in educational settings and in border surveillance, or for the provision of public or essential services;
2022/06/13
Committee: IMCOLIBE
Amendment 3067 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a a (new)
(a a) AI systems that are or may be used for the detection of a person’s presence, in workplaces, in educational settings, and in border surveillance, including in the virtual / online version of these spaces, on the basis of their biometric or biometrics-based data;
2022/06/13
Committee: IMCOLIBE
Amendment 3075 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a b (new)
(a b) AI systems that are or may be used for monitoring compliance with health and safety measures or inferring alertness /attentiveness for safety purposes, on the basis of biometric or biometrics-based data;
2022/06/13
Committee: IMCOLIBE
Amendment 3080 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a c (new)
(a c) AI systems that are or may be used to diagnose or support diagnosis of medical conditions or medical emergencies on the basis of biometric or biometrics-based data;
2022/06/13
Committee: IMCOLIBE
Amendment 3099 #
Proposal for a regulation
Annex III – paragraph 1 – point 3 – point b
(b) AI systems intended to be used for the purpose of assessing students in educational and vocational training institutions and for assessing participants in tests commonly required for admission to educational institutions. or monitoring of students during exams, for determining learning objectives, and for allocating personalised learning tasks to students;
2022/06/13
Committee: IMCOLIBE
Amendment 3115 #
Proposal for a regulation
Annex III – paragraph 1 – point 4 – point b
(b) AI intended to be used for making decisions on promotion and termination of work-related contractual relationships,affecting the initiation, establishment, implementation, promotion and termination of an employment relationship, including AI systems intended to support collective legal and regulatory matters, particularly for task allocation and for monitoring and evaluating performance and behaviour of persons or in matters of training or further education in such relationships.
2022/06/13
Committee: IMCOLIBE
Amendment 3149 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point a
(a) AI systems intended to be used by law enforcement authorities for making individual risk assessments of natural persons in order to assess the risk of a natural person for offending or reoffending or the risk for potential victims of criminal offences;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 3157 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point b
(b) AI systems intended to be used by law enforcement authorities as polygraphs and similar tools or to detect the emotional state of a natural person;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 3160 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point b
(b) AI systems intended to be used by law enforcement authorities as polygraphs and similar tools or to detect the emotional state of a natural person;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 3165 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point c
(c) AI systems intended to be used by law enforcement authorities or on their behalf to detect deep fakes as referred to in article 52(3) and in point 8a(a) and (b) of this Annex;
2022/06/13
Committee: IMCOLIBE
Amendment 3170 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point d
(d) AI systems intended to be used by law enforcement authorities or on their behalf for evaluation of the reliability of evidence in the course of investigation or prosecution of criminal offences;
2022/06/13
Committee: IMCOLIBE
Amendment 3178 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point e
(e) AI systems intended to be used by law enforcement authorities for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 3193 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point a
(a) AI systems intended to be used by competent public authorities as polygraphs and similar tools or to detect the emotional state of a natural person;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 3194 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point a
(a) AI systems intended to be used by competent public authorities as polygraphs and similar tools or to detect the emotional state of a natural person;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 3197 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point b
(b) AI systems intended to be used by competent public authorities to assess a risk, including a security risk, a risk of irregular immigration, or a health risk, posed by a natural person who intends to enter or has entered into the territory of a Member State;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 3200 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point b
(b) AI systems intended to be used by competent public authorities to assess a risk, including a security risk, a risk of irregular immigration, or a health risk, posed by a natural person who intends to enter or has entered into the territory of a Member State;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 3209 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point d
(d) AI systems intended to assist competent public authorities for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the eligibility of the natural persons applying for a status.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 3210 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point d
(d) AI systems intended to assist competent public authorities for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the eligibility of the natural persons applying for a status.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 3238 #
Proposal for a regulation
Annex III – paragraph 1 – point 8 a (new)
8 a. Other applications: (a) AI systems intended to be used to generate, on the basis of limited human input, complex text content that would falsely appear to a person to be human- generated and authentic, such as news articles, opinion articles, novels, scripts, and scientific articles, except where the content forms part of an evidently artistic, creative or fictional and analogous work; (b) AI systems intended to be used to generate or manipulate audio or video content that appreciably resembles existing natural persons, in a manner that significantly distorts or fabricates the original situation, meaning, content, or context and would falsely appear to a person to be authentic, except where the content forms part of an evidently artistic, creative or fictional cinematographic and analogous work.
2022/06/13
Committee: IMCOLIBE
Amendment 3244 #
Proposal for a regulation
Annex IV – paragraph 1 – point 1 – point a
(a) its intended purpose or reasonably foreseeable use, the person/s developing the system, the date and the version of the system;
2022/06/13
Committee: IMCOLIBE
Amendment 3246 #
Proposal for a regulation
Annex IV – paragraph 1 – point 1 – point a
(a) its intended purpose or reasonably foreseeable use, the person/s developing the system, the date and the version of the system;
2022/06/13
Committee: IMCOLIBE
Amendment 3251 #
Proposal for a regulation
Annex IV – paragraph 1 – point 1 – point b
(b) how the AI system interacts or can be used to interact with hardware or software, including other AI systems, that isare not part of the AI system itself, where applicable;
2022/06/13
Committee: IMCOLIBE
Amendment 3269 #
Proposal for a regulation
Annex IV – paragraph 1 – point 2 – point g
(g) the validation and testing procedures used, including information about the validation and testing data used and their main characteristics; metrics used to measure accuracyperformance, robustness, cybersecurity and compliance with other relevant requirements set out in Title III, Chapter 2 as well as potentially discriminatory impacts; test logs and all test reports dated and signed by the responsible persons, including with regard to pre-determined changes as referred to under point (f).
2022/06/13
Committee: IMCOLIBE
Amendment 3273 #
Proposal for a regulation
Annex IV – paragraph 1 – point 3 a (new)
3 a. A description of the appropriateness of the performance metrics for the specific AI system;
2022/06/13
Committee: IMCOLIBE
Amendment 3274 #
Proposal for a regulation
Annex IV – paragraph 1 – point 3 a (new)
3 a. A description of the appropriateness of the performance metrics for the specific AI system.
2022/06/13
Committee: IMCOLIBE
Amendment 3275 #
Proposal for a regulation
Annex IV – paragraph 1 – point 3 b (new)
3 b. Detailed information about the carbon footprint and the energy efficiency of the AI system, in particular with regard to the development of hardware, computational resources, as well as algorithm design and training processes;
2022/06/13
Committee: IMCOLIBE
Amendment 3276 #
Proposal for a regulation
Annex IV – paragraph 1 – point 3 c (new)
3 c. Information about the computational resources required for the functioning of the AI system and its expected energy consumption during its use;
2022/06/13
Committee: IMCOLIBE
Amendment 3289 #
Proposal for a regulation
Annex VIII – title
INFORMATION TO BE SUBMITTED UPON THE REGISTRATION OF HIGH- RISK AI SYSTEMS IN ACCORDANCE WITH ARTICLE 5160
2022/06/13
Committee: IMCOLIBE
Amendment 3292 #
Proposal for a regulation
Annex VIII – paragraph 1
The following information shall be provided and thereafter kept up to date with regard to high-risk AI systems to be registered in accordance with Article 5160.
2022/06/13
Committee: IMCOLIBE
Amendment 3298 #
Proposal for a regulation
Annex VIII – point 3
3. Name, address and contact details of the authorisedlegal representative, where applicable;
2022/06/13
Committee: IMCOLIBE
Amendment 3310 #
Proposal for a regulation
Annex VIII – point 12 a (new)
12 a. The list of users of the AI systems
2022/06/13
Committee: IMCOLIBE