Activities of Nicola BEER related to 2021/0106(COD)
Shadow opinions (1)
OPINION on the proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts
Amendments (313)
Amendment 274 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1
(1) ‘artificial intelligence system’ (AI system) means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments; AI systems can be designed to operate with varying levels of autonomy and can be developed with one or more of the techniques and approaches listed in Annex I;
Amendment 311 #
Proposal for a regulation
Citation 5 a (new)
Having regard to the opinion of the European Central Bank,
Amendment 320 #
Proposal for a regulation
Article 4 – paragraph 1
The Commission is empowered to adopt delegated acts in accordance with Article 73 to amend the list of techniques and approaches listed in Annex I within the scope of the definition of an AI system as provided for in Article 3(1), in order to update that list to market and technological developments on the basis of characteristics that are similar to the techniques and approaches listed therein.
Amendment 348 #
Proposal for a regulation
Recital 5
(5) A Union legal framework laying down harmonised rules on artificial intelligence is therefore needed to foster the development, use and uptake of artificial intelligence in the internal market that at the same time meets a high level of protection of public interests, such as health and safety and the protection of fundamental rights, as recognised and protected by Union law. To achieve that objective, rules regulating the placing on the market and putting into service of certain AI systems should be laid down, thus ensuring the smooth functioning of the internal market and allowing those systems to benefit from the principle of free movement of goods and services. By laying down those rules as well as measures in support of innovation with a particular focus on SMEs and start-ups, this Regulation supports the objective of the Union of being a global leader in the development of secure, trustworthy and ethical artificial intelligence, as stated by the European Council33 , and it ensures the protection of ethical principles, as specifically requested by the European Parliament34 . _________________ 33 European Council, Special meeting of the European Council (1 and 2 October 2020) – Conclusions, EUCO 13/20, 2020, p. 6. 34 European Parliament resolution of 20 October 2020 with recommendations to the Commission on a framework of ethical aspects of artificial intelligence, robotics and related technologies, 2020/2012(INL).
Amendment 358 #
Proposal for a regulation
Recital 6
(6) The notion of AI system should be clearly defined to ensure legal certainty, while providing the flexibility to accommodate future technological developments. Therefore, the term AI system should be defined in line with internationally accepted definitions. The definition should be based on the key functional characteristics of AI systems, in particular the ability, for a given set of human-defined objectives, to generate outputs such as content, predictions, recommendations, or decisions which influence the environment with which the system interacts, be it in a physical or digital environment. AI systems can be designed to operate with varying levels of autonomy and be used on a stand-alone basis or as a component of a product, irrespective of whether the system is physically integrated into the product (embedded) or serves the functionality of the product without being integrated therein (non-embedded). The definition of AI system should be complemented by a list of specific techniques and approaches used for its development, which should be kept up-to-date in the light of market and technological developments through the adoption of delegated acts by the Commission to amend that list. In order to ensure alignment of definitions on an international level, the European Commission should engage in a dialogue with international organisations such as the Organisation for Economic Cooperation and Development (OECD), should their definitions of the term ‘AI system’ be adjusted.
Amendment 374 #
Proposal for a regulation
Recital 8
(8) The notion of remote biometric identification system as used in this Regulation should be defined functionally, as an AI system intended for the identification of natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database, and without prior knowledge whether the targeted person will be present and can be identified, irrespectively of the particular technology, processes or types of biometric data used. Considering their different characteristics and manners in which they are used, as well as the different risks involved, a distinction should be made between ‘real-time’ and ‘post’ remote biometric identification systems. In the case of ‘real-time’ systems, the capturing of the biometric data, the comparison and the identification occur all instantaneously, near-instantaneously or in any event without a significant delay. In this regard, there should be no scope for circumventing the rules of this Regulation on the ‘real-time’ use of the AI systems in question by providing for minor delays. ‘Real-time’ systems involve the use of ‘live’ or ‘near-live’ material, such as video footage, generated by a camera or other device with similar functionality. In the case of ‘post’ systems, in contrast, the biometric data have already been captured and the comparison and identification occur only after a significant delay. This involves material, such as pictures or video footage generated by closed circuit television cameras or private devices, which has been generated before the use of the system in respect of the natural persons concerned. The notion of remote biometric identification system shall not include verification or authentification systems whose sole purpose is to confirm that a specific natural person is the person he or she claims to be, and systems that are used to confirm the identity of a natural person for the sole purpose of having access to a service, a device or premises.
Amendment 399 #
Proposal for a regulation
Article 10 – paragraph 1
1. High-risk AI systems which make use of techniques involving the training of models with data shall be developed on the basis of training, assessment, validation and testing data sets that meet the quality criteria referred to in paragraphs 2 to 5.
Amendment 399 #
Proposal for a regulation
Recital 12 a (new)
(12 a) This Regulation should not undermine research and development activity and should respect freedom of science. It is therefore necessary to exclude from its scope AI systems specifically developed and put into service for the sole purpose of scientific research and development and to ensure that the Regulation does not otherwise affect scientific research and development activity on AI systems. As regards product oriented research activity by providers, the provisions of this Regulation should apply insofar as such research leads to or entails placing of an AI system on the market or putting it into service. Under all circumstances, any research and development activity should be carried out in accordance with recognised ethical standards for scientific research.
Amendment 402 #
Proposal for a regulation
Article 10 – paragraph 1 a (new)
1a. The common practices standards for a high-risk AI system assessment shall be developed by the European Artificial Intelligence Board.
Amendment 404 #
Proposal for a regulation
Recital 12 b (new)
(12 b) Given the complexity of the value chain for AI systems, it is essential to clarify the role of persons who may contribute to the development of AI systems covered by this Regulation, without being providers and thus being obliged to comply with the obligations and requirements established herein. It is necessary to clarify that general purpose AI systems - understood as AI systems that are able to perform generally applicable functions such as image/speech recognition, audio/video generation, pattern detection, question answering, translation etc. - should not be considered as having an intended purpose within the meaning of this Regulation, unless those systems have been adapted to a specific intended purpose that falls within the scope of this Regulation. Initial providers of general purpose AI systems should therefore only have to comply with the provisions on accuracy, robustness and cybersecurity as laid down in Art. 15 of this Regulation. If a person adapts a general purpose AI application to a specific intended purpose and places it on the market or puts it into service, it shall be considered the provider and be subject to the obligations laid down in this Regulation. The initial provider of a general purpose AI application shall, after placing it on the market or putting it to service, and without compromising its own intellectual property rights or trade secrets, provide the new provider with all essential, relevant and reasonably expected information that is necessary to comply with the obligations set out in this Regulation.
Amendment 405 #
Proposal for a regulation
Article 10 – paragraph 2 – introductory part
2. Training, assessment, validation and testing data sets shall be subject to appropriate data governance and management practices. Those practices shall concern in particular, the following elements:
Amendment 415 #
Proposal for a regulation
Article 10 – paragraph 2 – point f
(f) examination in view of possible biases that are likely to affect health and safety of persons or lead to discrimination prohibited by Union law;
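Editorial illustration (not part of the amendment text): the bias examination point (f) calls for can be made concrete as a simple comparison of positive-label rates across groups in a training set. The sketch below is a minimal, assumption-laden example; the column names ("group", "label"), the groups themselves and the 0.05 disparity tolerance are all hypothetical.

```python
# Minimal sketch of a bias examination on a training data set: compare
# positive-label rates across a protected attribute. Column names, groups
# and the 0.05 tolerance are hypothetical assumptions, not anything
# prescribed by the amendment or the Regulation.
from collections import defaultdict

def label_rate_by_group(rows):
    """Share of positive labels per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for row in rows:
        totals[row["group"]] += 1
        positives[row["group"]] += row["label"]
    return {g: positives[g] / totals[g] for g in totals}

def disparity(rates):
    """Largest gap in positive-label rates between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

data = [
    {"group": "A", "label": 1}, {"group": "A", "label": 0},
    {"group": "B", "label": 0}, {"group": "B", "label": 0},
]
rates = label_rate_by_group(data)
if disparity(rates) > 0.05:  # hypothetical tolerance
    print("Disparity to examine:", rates)
```

A real examination would go well beyond label rates (sampling, proxy variables, per-group error rates); this only illustrates the kind of measurement the data-governance practice implies.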
Amendment 430 #
Proposal for a regulation
Recital 16
(16) The placing on the market, putting into service or use of certain AI systems with the objective to or the effect of distorting human behaviour, whereby physical or psychological harms are reasonably likely to occur, should be forbidden. Such AI systems deploy subliminal components individuals cannot perceive or exploit vulnerabilities of specific groups of persons due to their age, disabilities, social or economic situation. They do so with the intention to materially distort the behaviour of a person and in a manner that causes or is likely to cause harm to that or another person. The intention may not be presumed if the distortion of human behaviour results from factors external to the AI system which are outside of the control of the provider or the user. Research for legitimate purposes in relation to such AI systems should not be stifled by the prohibition, if such research does not amount to use of the AI system in human-machine relations that exposes natural persons to harm and such research is carried out in accordance with recognised ethical standards for scientific research.
Amendment 443 #
Proposal for a regulation
Recital 17 a (new)
(17 a) AI systems used by law enforcement authorities or on their behalf to predict the probability of a natural person to offend or to reoffend, based on profiling and individual risk-assessment hold a particular risk of discrimination against certain persons or groups of persons, as they violate human dignity as well as the key legal principle of presumption of innocence. Such AI systems should therefore be prohibited.
Amendment 449 #
Proposal for a regulation
Article 2 – paragraph 2 a (new)
2a. This Regulation shall not apply to any research and development activity regarding AI systems in so far as such activity does not lead to or entail placing an AI system on the market or putting it into service.
Amendment 450 #
Proposal for a regulation
Recital 18
(18) The use of AI systems for ‘real- time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement is considered particularly intrusive in the rights and freedoms of the concerned persons, to the extent that it may affect the private life of a large part of the population, evoke a feeling of constant surveillance and indirectly dissuade the exercise of the freedom of assembly and other fundamental rights. In addition, the immediacy of the impact and the limited opportunities for further checks or corrections in relation to the use of such systems operating in ‘real-time’ carry heightened risks for the rights and freedoms of the persons that are concerned by law enforcement activities. The use of those systems in publicly accessible places should therefore be prohibited.
Amendment 458 #
Proposal for a regulation
Article 15 – paragraph 1 a (new)
1a. The definition of "appropriate level" in terms of cybersecurity shall be provided by the European Union Agency for Cybersecurity (ENISA) in line with Article 42(2).
Amendment 459 #
Proposal for a regulation
Article 15 – paragraph 2
2. The levels of accuracy and the relevant accuracy metrics of high-risk AI systems shall be declared in the accompanying instructions of use. The European Artificial Intelligence Board shall define a common methodology for the definition and communication of these metrics, also referred to in Article 9(7).
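Editorial illustration (not part of the amendment text): the paragraph requires declared accuracy metrics while leaving the common methodology to the European Artificial Intelligence Board. As a minimal sketch of what computing declarable metrics could look like, under the assumption of a binary classifier and self-chosen metrics:

```python
# Sketch only: accuracy metrics a provider might declare in the instructions
# of use. Metric selection and output format are assumptions; the amendment
# leaves the common methodology to the European AI Board.
def confusion_counts(y_true, y_pred):
    """Count true/false positives and negatives for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def declared_metrics(y_true, y_pred):
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "false_positive_rate": fp / (fp + tn) if (fp + tn) else 0.0,
        "false_negative_rate": fn / (fn + tp) if (fn + tp) else 0.0,
    }

print(declared_metrics([1, 0, 1, 0], [1, 0, 0, 0]))
```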
Amendment 462 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1
(1) ‘artificial intelligence system’ (AI system) means software that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments; AI systems can be designed to operate with varying levels of autonomy and can be developed with one or more of the techniques and approaches listed in Annex I;
Amendment 463 #
Proposal for a regulation
Article 15 – paragraph 3 – subparagraph 1
The robustness of high-risk AI systems may be achieved through technical redundancy solutions, which may include backup or fail-safe plans by the provider, or where appropriate by the users of the product, with input from the user where considered necessary.
Amendment 464 #
Proposal for a regulation
Recital 19
Amendment 465 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1 a (new)
(1a) 'autonomy' means that to some degree an AI system operates by interpreting certain input and by using a set of pre-determined objectives, without being limited to such instructions, despite the system’s behaviour being constrained by, and targeted at, fulfilling the goal it was given and other relevant design choices made by its developer;
Amendment 466 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1 b (new)
Amendment 468 #
Proposal for a regulation
Article 3 – paragraph 1 – point 3
Amendment 477 #
Proposal for a regulation
Recital 20
Amendment 486 #
Proposal for a regulation
Recital 21
Amendment 494 #
Proposal for a regulation
Recital 22
Amendment 497 #
Proposal for a regulation
Recital 23
Amendment 498 #
Proposal for a regulation
Article 3 a (new)
Article 3 a
General Purpose AI
1. General purpose AI applications shall not be considered as having an intended purpose within the meaning of this Regulation unless those systems have been adapted to a specific intended purpose that falls within the scope of this Regulation.
2. Any natural or legal person that adapts a general purpose AI application to a specific intended purpose and places it on the market or puts it into service shall be considered the provider and be subject to the obligations laid down in this Regulation.
3. The initial provider of a general purpose AI application shall, after placing it on the market or putting it to service and without compromising its own intellectual property rights or trade secrets, provide the new provider referred to in paragraph 2 with all essential, relevant and reasonably expected information that is necessary to comply with the obligations set out in this Regulation.
4. The initial provider of a general purpose AI application shall only be responsible for the accuracy of the provided information towards the natural or legal person that adapts the general purpose AI application to a specific intended purpose.
Amendment 510 #
Proposal for a regulation
Article 42 – paragraph 1
1. Taking into account their intended purpose and based on the risk evaluation, high-risk AI systems that have been trained and tested on data concerning the specific geographical, behavioural and functional setting within which they are intended to be used shall be presumed to be in compliance with the requirement set out in Article 10(4).
Amendment 511 #
Proposal for a regulation
Recital 24
(24) Any processing of biometric data and other personal data involved in the use of AI systems for biometric identification, other than in connection to the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement as regulated by this Regulation, including where those systems are used by competent authorities in publicly accessible spaces for other purposes than law enforcement, should continue to comply with all requirements resulting from Article 9(1) of Regulation (EU) 2016/679, Article 10(1) of Regulation (EU) 2018/1725 and Article 10 of Directive (EU) 2016/680, as applicable.
Amendment 515 #
Proposal for a regulation
Recital 24 a (new)
Amendment 534 #
Proposal for a regulation
Recital 30
(30) As regards AI systems that are safety components of products, or which are themselves products, falling within the scope of certain Union harmonisation legislation, it is appropriate to classify them as high-risk under this Regulation if the product in question undergoes the conformity assessment procedure in order to ensure compliance with essential safety requirements with a third-party conformity assessment body pursuant to that relevant Union harmonisation legislation. In particular, such products are machinery, toys, lifts, equipment and protective systems intended for use in potentially explosive atmospheres, radio equipment, pressure equipment, recreational craft equipment, cableway installations, appliances burning gaseous fuels, medical devices, and in vitro diagnostic medical devices.
Amendment 546 #
Proposal for a regulation
Recital 33
(33) Technical inaccuracies of AI systems intended for the remote biometric identification of natural persons can lead to biased results and entail discriminatory effects. This is particularly relevant when it comes to age, ethnicity, sex or disabilities. Therefore, ‘real-time’ and ‘post’ remote biometric identification systems should be classified as high-risk, except for verification or authentification systems whose sole purpose is to confirm that a specific natural person is the person he or she claims to be, and systems that are used to confirm the identity of a natural person for the sole purpose of having access to a service, a device or premises. In view of the risks that they pose, both types of remote biometric identification systems should be subject to specific requirements on logging capabilities and human oversight.
Amendment 563 #
Proposal for a regulation
Recital 36
(36) AI systems used for making autonomous decisions or materially influencing decisions in employment, workers management and access to self- employment, notably for the recruitment and selection of persons, for making decisions on promotion and termination and for task allocation, monitoring or evaluation of persons in work-related contractual relationships, should also be classified as high-risk, since those systems may appreciably impact future career prospects and livelihoods of these persons. Relevant work-related contractual relationships should involve employees and persons providing services through platforms as referred to in the Commission Work Programme 2021. Such persons should in principle not be considered users within the meaning of this Regulation. Throughout the recruitment process and in the evaluation, promotion, or retention of persons in work-related contractual relationships, such systems may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. AI systems used to monitor the performance and behaviour of these persons may also impact their rights to data protection and privacy.
Amendment 573 #
Proposal for a regulation
Article 55 – paragraph 1 – point a
(a) provide SME providers, including start-ups, with priority access to the AI regulatory sandboxes to the extent that they fulfil the eligibility conditions;
Amendment 575 #
Proposal for a regulation
Article 55 – paragraph 1 – point b
(b) organise specific awareness raising and enhanced digital skills development activities about the application of this Regulation tailored to the needs of SME providers, including start-ups, and users;
Amendment 576 #
Proposal for a regulation
Recital 37
(37) Another area in which the use of AI systems deserves special consideration is the access to and enjoyment of certain essential private and public services and benefits necessary for people to fully participate in society or to improve one’s standard of living. In particular, AI systems used to evaluate the credit score or creditworthiness of natural persons should be classified as high-risk AI systems, since they determine those persons’ access to financial resources or essential services such as housing, electricity, and telecommunication services. AI systems used for this purpose may lead to discrimination of persons or groups and perpetuate historical patterns of discrimination, for example based on racial or ethnic origins, disabilities, age, sexual orientation, or create new forms of discriminatory impacts. Considering the very limited scale of the impact and the available alternatives on the market, it is appropriate to exempt AI systems for the purpose of creditworthiness assessment and credit scoring when put into service by SMEs and start-ups for their own use. Natural persons applying for or receiving public assistance benefits and services from public authorities are typically dependent on those benefits and services and in a vulnerable position in relation to the responsible authorities. If AI systems are used for determining whether such benefits and services should be denied, reduced, revoked or reclaimed by authorities, they may have a significant impact on persons’ livelihood and may infringe their fundamental rights, such as the right to social protection, non-discrimination, human dignity or an effective remedy. Those systems should therefore be classified as high-risk. Nonetheless, this Regulation should not hamper the development and use of innovative approaches in the public administration, which would stand to benefit from a wider use of compliant and safe AI systems, provided that those systems do not entail a high risk to legal and natural persons. Finally, AI systems used to dispatch or establish priority in the dispatching of emergency first response services should also be classified as high-risk since they make decisions in very critical situations for the life and health of persons and their property.
Amendment 577 #
Proposal for a regulation
Article 55 – paragraph 1 – point c
(c) where appropriate, establish a dedicated channel for communication with SME providers, including start-ups, users and other innovators to provide guidance and respond to queries about the implementation of this Regulation.
Amendment 579 #
Proposal for a regulation
Article 55 – paragraph 1 a (new)
Amendment 580 #
Proposal for a regulation
Article 55 – paragraph 2
2. The specific interests and needs of SME providers, including start-ups, shall be taken into account when setting the fees for conformity assessment under Article 43, reducing those fees proportionately to their development stage, size and market size.
Amendment 582 #
Proposal for a regulation
Article 55 – paragraph 2 a (new)
2a. The Commission shall regularly assess the certification and compliance costs for SMEs, including start-ups, through consultations with the SME providers, start-ups and users.
Amendment 582 #
Proposal for a regulation
Recital 38
(38) Actions by law enforcement authorities involving certain uses of AI systems are characterised by a significant degree of power imbalance and may lead to surveillance, arrest or deprivation of a natural person’s liberty as well as other adverse impacts on fundamental rights guaranteed in the Charter. In particular, if the AI system is not trained with high quality data, does not meet adequate requirements in terms of its accuracy or robustness, or is not properly designed and tested before being put on the market or otherwise put into service, it may single out people in a discriminatory or otherwise incorrect or unjust manner. Furthermore, the exercise of important procedural fundamental rights, such as the right to an effective remedy and to a fair trial as well as the right of defence and the presumption of innocence, could be hampered, in particular, where such AI systems are not sufficiently transparent, explainable and documented. It is therefore appropriate to classify as high-risk a number of AI systems intended to be used in the law enforcement context where accuracy, reliability and transparency is particularly important to avoid adverse impacts, retain public trust and ensure accountability and effective redress. In view of the nature of the activities in question and the risks relating thereto, those high-risk AI systems should include in particular AI systems intended to be used by law enforcement authorities for individual risk assessments, polygraphs and similar tools or to detect the emotional state of a natural person, to detect ‘deep fakes’, for the evaluation of the reliability of evidence in criminal proceedings, for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons, or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups, for profiling in the course of detection, investigation or prosecution of criminal offences, as well as for crime analytics regarding natural persons. AI systems specifically intended to be used for administrative proceedings by tax and customs authorities should not be considered high-risk AI systems used by law enforcement authorities for the purposes of prevention, detection, investigation and prosecution of criminal offences.
Amendment 599 #
Proposal for a regulation
Recital 40
(40) Certain AI systems intended for the administration of justice and democratic processes should be classified as high-risk, considering their potentially significant impact on democracy, rule of law, individual freedoms as well as the right to an effective remedy and to a fair trial. In particular, to address the risks of potential biases, errors and opacity, it is appropriate to qualify as high-risk AI systems intended to assist judicial authorities in researching and interpreting facts or the law and for applying the law to a concrete set of facts. Such qualification should not extend, however, to AI systems intended for purely ancillary administrative activities that do not affect the actual administration of justice in individual cases, such as anonymisation or pseudonymisation of judicial decisions, documents or data, communication between personnel, administrative tasks or allocation of resources.
Amendment 662 #
Proposal for a regulation
Recital 56
(56) To enable enforcement of this Regulation and create a level-playing field for operators, and taking into account the different forms of making available of digital products, it is important to ensure that, under all circumstances, a person established in the Union can provide authorities with all the necessary information on the compliance of an AI system. Therefore, prior to making their AI systems available in the Union, where an importer cannot be identified, providers established outside the Union shall, by written mandate, appoint an authorised representative established in the Union.
Amendment 674 #
Proposal for a regulation
Recital 61
(61) Standardisation should play a key role to provide technical solutions to providers to ensure compliance with this Regulation. Compliance with harmonised standards as defined in Regulation (EU) No 1025/2012 of the European Parliament and of the Council54 should be a means for providers to demonstrate conformity with the requirements of this Regulation. However, the Commission could adopt common technical specifications in areas where no harmonised standards exist and are not expected to be published within a reasonable period or where they are insufficient, only after consulting the Artificial Intelligence Board, the European standardisation organisations as well as the relevant stakeholders. The Commission should duly justify why it decided not to use harmonised standards. _________________ 54 Regulation (EU) No 1025/2012 of the European Parliament and of the Council of 25 October 2012 on European standardisation, amending Council Directives 89/686/EEC and 93/15/EEC and Directives 94/9/EC, 94/25/EC, 95/16/EC, 97/23/EC, 98/34/EC, 2004/22/EC, 2007/23/EC, 2009/23/EC and 2009/105/EC of the European Parliament and of the Council and repealing Council Decision 87/95/EEC and Decision No 1673/2006/EC of the European Parliament and of the Council (OJ L 316, 14.11.2012, p. 12).
Amendment 683 #
Proposal for a regulation
Recital 64
(64) Given the more extensive experience of professional pre-market certifiers in the field of product safety and the different nature of risks involved, it is appropriate to limit, at least in an initial phase of application of this Regulation, the scope of application of third-party conformity assessment for high-risk AI systems other than those related to products. Therefore, the conformity assessment of such systems should be carried out as a general rule by the provider under its own responsibility, with the only exception of AI systems intended to be used for the remote biometric identification of persons and AI systems intended to be used to make inferences on the basis of biometric data that produce legal effects or affect the rights and freedoms of natural persons. For those types of AI systems the involvement of a notified body in the conformity assessment should be foreseen, to the extent they are not prohibited.
Amendment 713 #
Proposal for a regulation
Recital 70
(70) Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems. In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use or where the content is part of an obviously artistic, creative or fictional cinematographic work. Moreover, natural persons should be notified when they are exposed to an emotion recognition system or a biometric categorisation system. Such information and notifications should be provided in accessible formats for persons with disabilities. Further, users, who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a person to be authentic, should disclose, in an appropriate, clear and visible manner, that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin.
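Editorial illustration (not part of the amendment text): the disclosure duty at the end of the recital amounts to attaching a clear label to generated or manipulated content. A minimal sketch of a machine-readable label follows; the field names and format are invented for illustration, as the recital prescribes none.

```python
# Hypothetical sketch of labelling AI-generated media with a disclosure of
# its artificial origin. Field names and format are assumptions; the recital
# requires disclosure but prescribes no particular representation.
import json

def label_as_artificial(content_bytes: bytes, generator_name: str) -> dict:
    """Attach a human- and machine-readable disclosure to generated content."""
    return {
        "disclosure": "This content has been artificially created or manipulated.",
        "generator": generator_name,
        "content_length_bytes": len(content_bytes),
    }

print(json.dumps(label_as_artificial(b"<video bytes>", "example-model"), indent=2))
```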
Amendment 733 #
Proposal for a regulation
Recital 73
(73) In order to promote and protect innovation, it is important that the interests of start-ups and SME providers and users of AI systems are taken into particular account. To this objective, Member States should develop initiatives, which are targeted at those operators, including on awareness raising and information communication. Moreover, the specific interests and needs of SMEs and start-ups shall be taken into account when Notified Bodies set conformity assessment fees. Translation costs related to mandatory documentation and communication with authorities may constitute a significant cost for providers and other operators, notably those of a smaller scale. Member States should possibly ensure that one of the languages determined and accepted by them for relevant providers’ documentation and for communication with operators is one which is broadly understood by the largest possible number of cross-border users.
Amendment 741 #
Proposal for a regulation
Recital 76
(76) In order to facilitate a smooth, effective and harmonised implementation of this Regulation a European Artificial Intelligence Board should be established as a body of the Union and should have legal personality. The Board should be responsible for a number of advisory tasks, including issuing opinions, recommendations, advice or guidance on matters related to the implementation of this Regulation, including on technical specifications or existing standards regarding the requirements established in this Regulation and providing advice to and assisting the Commission and the national competent authorities on specific questions related to artificial intelligence.
Amendment 773 #
Proposal for a regulation
Article 55 – title
55 Measures for SMEs, start-ups and users
Amendment 774 #
Proposal for a regulation
Article 55 – paragraph 1 – point a
(a) provide SMEs and start-ups with priority access to the AI regulatory sandboxes to the extent that they fulfil the eligibility conditions;
Amendment 775 #
Proposal for a regulation
Article 55 – paragraph 1 – point b
(b) organise specific awareness raising activities about the application of this Regulation tailored to the needs of SMEs and start-ups;
Amendment 776 #
Proposal for a regulation
Article 55 – paragraph 1 – point c
(c) where appropriate, establish a dedicated channel for communication with SMEs, start-ups and other innovators to provide guidance and respond to queries about the implementation of this Regulation.
Amendment 777 #
Proposal for a regulation
Article 55 – paragraph 1 – point c a (new)
(ca) support SMEs’ increased participation in the standardisation development process;
Amendment 778 #
Proposal for a regulation
Article 55 – paragraph 2
2. The specific interests and needs of SMEs and start-ups shall be taken into account when setting the fees for conformity assessment under Article 43, reducing those fees proportionately to their size and market size.
Amendment 796 #
Proposal for a regulation
Article 1 – paragraph 1 – point d
(d) harmonised transparency rules for certain AI systems intended to interact with natural persons, emotion recognition systems and biometric categorisation systems, and AI systems used to generate or manipulate image, audio or video content;
Amendment 797 #
Proposal for a regulation
Article 1 – paragraph 1 – point e
(e) rules on market monitoring, market surveillance and governance;
Amendment 802 #
Proposal for a regulation
Article 1 – paragraph 1 – point e a (new)
(e a) measures in support of innovation with a particular focus on SMEs and start-ups, including the setting up of regulatory sandboxes and the reduction of regulatory burdens.
Amendment 820 #
Proposal for a regulation
Article 2 – paragraph 1 – point b
(b) users of AI systems who are established within the Union;
Amendment 827 #
Proposal for a regulation
Article 2 – paragraph 1 – point c
(c) providers and users of AI systems who are established in a third country, where the output produced by the system is used in the Union;
Amendment 829 #
Proposal for a regulation
Article 59 – paragraph 7
7. National competent authorities may provide guidance and advice on the implementation of this Regulation, including to SMEs and start-ups. Whenever national competent authorities intend to provide guidance and advice with regard to an AI system in areas covered by other Union legislation, the competent national authorities under that Union legislation shall be consulted, as appropriate. Member States may also establish one central contact point for communication with operators.
Amendment 833 #
Proposal for a regulation
Article 2 – paragraph 1 – point c a (new)
(c a) importers and distributors of AI systems;
Amendment 834 #
Proposal for a regulation
Article 2 – paragraph 1 – point c b (new)
(c b) product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark;
Amendment 837 #
Proposal for a regulation
Article 2 – paragraph 1 – point c c (new)
(c c) authorised representatives of providers, which are established in the Union.
Amendment 844 #
Proposal for a regulation
Article 2 – paragraph 2 – introductory part
2. For AI systems classified as high-risk AI in accordance with Article 6 related to products covered by Union harmonisation legislation listed in Annex II, section B, only Article 84 of this Regulation shall apply.
Amendment 845 #
Proposal for a regulation
Article 2 – paragraph 2 – point a
Amendment 847 #
Proposal for a regulation
Article 2 – paragraph 2 – point b
Amendment 849 #
Proposal for a regulation
Article 2 – paragraph 2 – point c
Amendment 851 #
Proposal for a regulation
Article 2 – paragraph 2 – point d
Amendment 853 #
Proposal for a regulation
Article 2 – paragraph 2 – point e
Amendment 856 #
Proposal for a regulation
Article 2 – paragraph 2 – point f
Amendment 857 #
Proposal for a regulation
Article 2 – paragraph 2 – point g
Amendment 860 #
Proposal for a regulation
Article 2 – paragraph 2 – point h
Amendment 861 #
Proposal for a regulation
Article 2 – paragraph 2 a (new)
2 a. This Regulation shall not apply to AI systems, including their output, specifically developed and put into service for the sole purpose of scientific research and development.
Amendment 863 #
Proposal for a regulation
Article 2 – paragraph 2 b (new)
2 b. This Regulation shall not apply to any research and development activity regarding AI systems in so far as such activity does not lead to or entail placing an AI system on the market or putting it into service.
Amendment 875 #
Proposal for a regulation
Article 69 – paragraph 4
4. The Commission and the Board shall take into account the specific interests and needs of SMEs and start-ups when encouraging and facilitating the drawing up of codes of conduct.
Amendment 912 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1
(1) ‘artificial intelligence system’ (AI system) means software that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing real or virtual environments; AI systems can be designed to operate with varying levels of autonomy and can be developed with one or more of the techniques and approaches listed in Annex I;
Amendment 923 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1 a (new)
Amendment 926 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1 b (new)
(1 b) 'general purpose AI system’ means an AI system that is able to perform generally applicable functions for multiple potential purposes, such as image or speech recognition, audio or video generation, pattern detection, question answering, and translation, is largely customizable and often open source software;
Amendment 930 #
Proposal for a regulation
Article 3 – paragraph 1 – point 2
(2) ‘developer’ means a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed, and places it on the market or puts it into service under its own name or trademark, whether for payment or free of charge, or that adapts general purpose AI systems to a specific intended purpose;
Amendment 937 #
Proposal for a regulation
Article 3 – paragraph 1 – point 3
Amendment 939 #
Proposal for a regulation
Article 3 – paragraph 1 – point 3 a (new)
(3 a) ‘risk’ means the combination of the probability of occurrence of a harm and the severity of that harm;
Amendment 940 #
Proposal for a regulation
Article 3 – paragraph 1 – point 3 b (new)
(3 b) ‘significant harm‘ means a material harm to a person's life, health and safety or fundamental rights or entities or society at large whose severity is exceptional. The severity is in particular exceptional when the harm is hardly reversible, the outcome has a material adverse impact on health or safety of a person or the impacted person is dependent on the outcome;
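Editorial illustration (not part of the amendment text): the ‘risk’ definition in Amendment 939 combines the probability of occurrence of a harm with its severity. One conventional way to combine the two is a qualitative risk matrix; the ordinal scales and cut-offs below are hypothetical assumptions, not anything the amendments prescribe.

```python
# Sketch of combining probability and severity into a risk level, per the
# 'risk' definition (probability of occurrence x severity of harm). The
# ordinal scales and thresholds are hypothetical assumptions.
PROBABILITY = {"rare": 1, "possible": 2, "likely": 3}
SEVERITY = {"minor": 1, "material": 2, "exceptional": 3}

def risk_level(probability: str, severity: str) -> str:
    """Map a (probability, severity) pair onto a coarse risk level."""
    score = PROBABILITY[probability] * SEVERITY[severity]
    if score >= 6:
        return "high"
    return "elevated" if score >= 3 else "low"

print(risk_level("likely", "exceptional"))  # -> high
```

Note how the "exceptional" severity rung echoes the ‘significant harm’ definition above, where exceptional severity is what pushes a harm into the significant category.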
Amendment 947 #
Proposal for a regulation
Article 3 – paragraph 1 – point 4
(4) ‘deployer’ means any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity;
Amendment 1002 #
Proposal for a regulation
Article 3 – paragraph 1 – point 23
(23) ‘substantial modification’ means a change to the AI system following its placing on the market or putting into service, which is not foreseen or planned by the provider and as a result of which the compliance of the AI system with the requirements set out in Title III, Chapter 2 of this Regulation is affected or which results in a modification to the intended purpose for which the AI system has been assessed. A substantial modification is given if the remaining risk is increased by the modification of the AI system under the application of all necessary protective measures;
Amendment 1009 #
Proposal for a regulation
Article 3 – paragraph 1 – point 24
(24) ‘CE marking of conformity’ (CE marking) means a physical or digital marking by which a provider indicates that an AI system or a product with an embedded AI system is in conformity with the requirements set out in Title III, Chapter 2 of this Regulation and other applicable Union legislation harmonising the conditions for the marketing of products (‘Union harmonisation legislation’) providing for its affixing;
Amendment 1037 #
Proposal for a regulation
Article 3 – paragraph 1 – point 34
(34) ‘emotion recognition system’ means an AI system for the purpose of identifying or inferring emotions, thoughts or intentions of natural persons on the basis of their biometric or biometrics-based data;
Amendment 1044 #
Proposal for a regulation
Article 3 – paragraph 1 – point 35
(35) ‘biometric categorisation system’ means an AI system for the purpose of assigning natural persons to specific categories, such as sex, age, hair colour, eye colour, tattoos, ethnic origin or sexual or political orientation, or inferring their characteristics and attributes on the basis of their biometric or biometrics-based data;
Amendment 1052 #
Proposal for a regulation
Article 3 – paragraph 1 – point 36
(36) ‘remote biometric identification system’ means an AI system for the purpose of identifying natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database, and without prior knowledge of the user of the AI system whether the person will be present and can be identified, excluding verification/authentification systems whose sole purpose is to confirm that a specific natural person is the person he or she claims to be, and systems that are used to confirm the identity of a natural person for the sole purpose of having access to a service, a device or premises;
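Editorial illustration (not part of the amendment text): the definition turns on the difference between 1:N identification against a reference database and the excluded 1:1 verification of a claimed identity. The sketch below marks that distinction; the embeddings, the cosine similarity measure and the 0.8 threshold are hypothetical assumptions.

```python
# Sketch of the 1:N identification vs. 1:1 verification distinction drawn by
# the definition. Embeddings, cosine similarity and the 0.8 threshold are
# hypothetical assumptions for illustration only.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms

def identify(probe, reference_db, threshold=0.8):
    """1:N search of a whole reference database (remote biometric identification)."""
    best = max(reference_db, key=lambda entry: cosine(probe, entry["vec"]))
    return best["id"] if cosine(probe, best["vec"]) >= threshold else None

def verify(probe, claimed_template, threshold=0.8):
    """1:1 check of a claimed identity (excluded from the definition)."""
    return cosine(probe, claimed_template) >= threshold

db = [{"id": "p1", "vec": [0.9, 0.1]}, {"id": "p2", "vec": [0.1, 0.9]}]
print(identify([0.88, 0.12], db))        # 1:N -> p1
print(verify([0.88, 0.12], [0.9, 0.1]))  # 1:1 -> True
```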
Amendment 1103 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 a (new)
(44 a) ‘regulatory sandbox’ means a facility that provides a controlled environment that facilitates the safe development, testing and validation of innovative AI systems for a limited time before their placement on the market or putting into service pursuant to a specific plan;
Amendment 1111 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 b (new)
(44 b) ‘deep fake’ means an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful.
Amendment 1129 #
Proposal for a regulation
Article 3 a (new)
Amendment 1136 #
Proposal for a regulation
Article 4 – paragraph 1
The Commission is empowered to adopt delegated acts in accordance with Article 73, after an adequate and transparent consultation process involving the relevant stakeholders, to amend the list of techniques and approaches listed in Annex I within the scope of the definition of an AI system as provided for in Article 3(1), in order to update that list to market and technological developments on the basis of transparent characteristics that are similar to the techniques and approaches listed therein. Providers and users of AI systems should be given 24 months to comply with any amendment to Annex I.
Amendment 1169 #
Proposal for a regulation
Article 5 – paragraph 1 – point a
(a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness with the objective to or the effect of materially distorting a person’s behaviour in a manner that causes or is reasonably likely to cause that person or another person physical or psychological harm;
Amendment 1181 #
Proposal for a regulation
Article 5 – paragraph 1 – point b
(b) the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of an individual, including characteristics of such individual’s known or predicted personality or social or economic situation, or of a specific group of persons due to their age or disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
Amendment 1223 #
Proposal for a regulation
Article 5 – paragraph 1 – point c a (new)
(c a) the placing on the market, putting into service or use of an AI system for making individual risk assessments of natural persons in order to assess the risk of a natural person for offending or reoffending or for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of a natural person or on assessing personality traits and characteristics or past criminal behaviour of natural persons or groups of natural persons;
Amendment 1234 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – introductory part
(d) the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement.
Amendment 1254 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point i
Amendment 1260 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point ii
Amendment 1274 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point iii
Amendment 1286 #
Proposal for a regulation
Article 5 – paragraph 1 – point d a (new)
(d a) the use of an AI system for the general monitoring, detection and interpretation of private content in interpersonal communication services, including all measures that would undermine end-to-end encryption.
Amendment 1354 #
Proposal for a regulation
Article 5 – paragraph 2
Amendment 1356 #
Proposal for a regulation
Article 5 – paragraph 2 – point a
Amendment 1358 #
Proposal for a regulation
Article 5 – paragraph 2 – point b
Amendment 1361 #
Proposal for a regulation
Article 5 – paragraph 2 – subparagraph 1
Amendment 1367 #
Proposal for a regulation
Article 5 – paragraph 3
Amendment 1375 #
Proposal for a regulation
Article 5 – paragraph 3 – subparagraph 1
Amendment 1387 #
Proposal for a regulation
Article 5 – paragraph 4
Amendment 1423 #
Proposal for a regulation
Article 6 – paragraph 1 – point a
(a) the AI system is intended to be used as a main safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex II;
Amendment 1429 #
Proposal for a regulation
Article 6 – paragraph 1 – point b
(b) the product whose main safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment in order to ensure compliance with essential safety requirements with a view to the placing on the market or putting into service of that product pursuant to the Union harmonisation legislation listed in Annex II.
Amendment 1437 #
Proposal for a regulation
Article 6 – paragraph 2
2. In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall also be considered high-risk in the meaning of this regulation, if they will be deployed in a critical area referred to in Annex III and an individual assessment of the specific application carried out in accordance with Art. 6a showed that a significant harm is likely to arise.
Amendment 1456 #
Proposal for a regulation
Article 6 a (new)
Amendment 1466 #
Proposal for a regulation
Article 7 – paragraph 1 – introductory part
1. The Commission is empowered to adopt delegated acts in accordance with Article 73, after an adequate and transparent consultation process involving the relevant stakeholders, to update the list in Annex III by withdrawing areas from that list or by adding critical areas. For additions, both of the following conditions need to be fulfilled:
Amendment 1503 #
Proposal for a regulation
Article 7 – paragraph 2 – point b a (new)
(b a) the extent to which the AI system acts autonomously;
Amendment 1520 #
Proposal for a regulation
Article 7 – paragraph 2 – point e a (new)
(e a) the potential misuse and malicious use of the AI system and of the technology underpinning it;
Amendment 1531 #
Proposal for a regulation
Article 7 – paragraph 2 – point g a (new)
(g a) magnitude and likelihood of benefit of the deployment of the AI system for individuals, groups, or society at large;
Amendment 1538 #
Proposal for a regulation
Article 7 – paragraph 2 – point h – introductory part
(h) the extent to which existing Union legislation, in particular the GDPR, provides for:
Amendment 1549 #
Proposal for a regulation
Article 7 – paragraph 2 a (new)
2 a. The Commission shall provide a transitional period of at least 24 months following each update of Annex III.
Amendment 1555 #
Proposal for a regulation
Article 8 – paragraph 1
1. High-risk AI systems shall comply with the requirements established in this Chapter, taking into account the generally acknowledged state of the art, including as reflected in relevant harmonised standards or common specifications.
Amendment 1575 #
Proposal for a regulation
Article 9 – paragraph 1
1. A risk management system shall be established, implemented, documented and maintained in appropriate relation to high-risk AI systems and the risks identified in the risk assessment referred to in Art. 6a.
Amendment 1587 #
Proposal for a regulation
Article 9 – paragraph 2 – point a
(a) identification and analysis of the known and foreseeable risks most likely to occur to health, safety and fundamental rights in view of the intended purpose of the high-risk AI system;
Amendment 1591 #
Proposal for a regulation
Article 9 – paragraph 2 – point b
Amendment 1598 #
Proposal for a regulation
Article 9 – paragraph 2 – point c
(c) evaluation of new arising significant risks based on the analysis of data gathered from the post-market monitoring system referred to in Article 61;
Amendment 1601 #
Proposal for a regulation
Article 9 – paragraph 2 – point d
Article 9 – paragraph 2 – point d
(d) adoption of appropriate and targeted risk management measures to address identified significant risks in accordance with the provisions of the following paragraphs.
Amendment 1602 #
Proposal for a regulation
Article 9 – paragraph 2 a (new)
Article 9 – paragraph 2 a (new)
2 a. The risks referred to in paragraph 2 shall concern only those which may be reasonably mitigated or eliminated through the development or design of the high-risk AI system, or the provision of adequate technical information.
Amendment 1605 #
Proposal for a regulation
Article 9 – paragraph 3
Article 9 – paragraph 3
3. The risk management measures referred to in paragraph 2, point (d) shall give due consideration to the effects and possible interactions resulting from the combined application of the requirements set out in this Chapter 2. They shall take into account the generally acknowledged state of the art, including as reflected in relevant harmonised standards or common specifications, with a view to minimising risks more effectively while achieving an appropriate balance in implementing the measures to fulfil those requirements.
Amendment 1609 #
Proposal for a regulation
Article 9 – paragraph 4 – introductory part
Article 9 – paragraph 4 – introductory part
4. The risk management measures referred to in paragraph 2, point (d) shall be such that any residual significant risk associated with each hazard as well as the overall residual risk of the high-risk AI systems is reasonably judged to be acceptable, having regard to the benefits that the high-risk AI system is reasonably expected to deliver and provided that the high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse. Those residual significant risks shall be communicated to the user.
Amendment 1621 #
Proposal for a regulation
Article 9 – paragraph 4 – subparagraph 1 – point a
Article 9 – paragraph 4 – subparagraph 1 – point a
(a) elimination or reduction of identified and evaluated risks as far as economically and technologically feasible through adequate design and development of the high-risk AI system;
Amendment 1624 #
Proposal for a regulation
Article 9 – paragraph 4 – subparagraph 1 – point b
Article 9 – paragraph 4 – subparagraph 1 – point b
(b) where appropriate, implementation of adequate mitigation and control measures in relation to significant risks that cannot be eliminated;
Amendment 1627 #
Proposal for a regulation
Article 9 – paragraph 4 – subparagraph 1 – point c
Article 9 – paragraph 4 – subparagraph 1 – point c
(c) provision of adequate information pursuant to Article 13, in particular as regards the risks referred to in paragraph 2, point (b) of this Article, and, where appropriate, training to users.
Amendment 1639 #
Proposal for a regulation
Article 9 – paragraph 5
Article 9 – paragraph 5
5. High-risk AI systems shall be evaluated for the purposes of identifying the most appropriate and targeted risk management measures and weighing any such measures against the potential benefits and intended goals of the system. Evaluations shall ensure that high-risk AI systems perform consistently for their intended purpose and that they are in compliance with the relevant requirements set out in this Chapter.
Amendment 1653 #
Proposal for a regulation
Article 9 – paragraph 7
Article 9 – paragraph 7
7. The testing of the high-risk AI systems shall be performed, as appropriate, at any point in time throughout the development process, and, in any event, prior to the placing on the market or the putting into service. Testing shall be made against preliminarily defined metrics and probabilistic thresholds that are appropriate to the intended purpose of the high-risk AI system.
Amendment 1669 #
Proposal for a regulation
Article 9 – paragraph 9
Article 9 – paragraph 9
9. For providers and AI systems already covered by Union law that requires them to establish a specific risk management system, the aspects described in paragraphs 1 to 8 shall be part of the risk management procedures established pursuant to that Union law or deemed to be covered as part of it.
Amendment 1673 #
Proposal for a regulation
Article 10 – paragraph 1
Article 10 – paragraph 1
1. High-risk AI systems which make use of techniques involving the training of models with data shall be, as far as this can be reasonably expected and is feasible from a technical and economic point of view, developed on the basis of training, validation and testing data sets that meet the quality criteria referred to in paragraphs 2 to 5.
Amendment 1683 #
Proposal for a regulation
Article 10 – paragraph 2 – introductory part
Article 10 – paragraph 2 – introductory part
2. Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the context of use as well as the intended purpose of the AI system. Those practices shall concern in particular,
Amendment 1693 #
Proposal for a regulation
Article 10 – paragraph 2 – point c
Article 10 – paragraph 2 – point c
(c) relevant data preparation processing operations, such as annotation, labelling, cleaning, enrichment and aggregation;
Amendment 1702 #
Proposal for a regulation
Article 10 – paragraph 2 – point f
Article 10 – paragraph 2 – point f
(f) examination in view of possible biases that are likely to affect the output of the AI system;
Amendment 1707 #
Proposal for a regulation
Article 10 – paragraph 2 – point g
Article 10 – paragraph 2 – point g
(g) the identification of significant data gaps or shortcomings, and how those gaps and shortcomings can be addressed.
Amendment 1715 #
Proposal for a regulation
Article 10 – paragraph 3
Article 10 – paragraph 3
3. High-risk AI systems shall be designed and developed with best efforts to ensure that training, validation and testing data sets are relevant, representative and, to the best extent possible, free of errors and complete in accordance with industry standards. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used. These characteristics of the data sets may be met at the level of individual data sets or a combination thereof.
Amendment 1742 #
Proposal for a regulation
Article 10 – paragraph 6
Article 10 – paragraph 6
6. For the development of high-risk AI systems not using techniques involving the training of models, paragraphs 2 to 5 shall apply only to the testing data sets.
Amendment 1753 #
Proposal for a regulation
Article 11 – paragraph 1 – subparagraph 1
Article 11 – paragraph 1 – subparagraph 1
The technical documentation shall be drawn up in such a way to demonstrate that the high-risk AI system complies with the requirements set out in this Chapter and provide national competent authorities and notified bodies with all the necessary information to assess the compliance of the AI system with those requirements. It shall contain, at a minimum, the elements set out in Annex IV or, in the case of SMEs and start-ups, any equivalent documentation meeting the same objectives, subject to approval of the competent authority.
Amendment 1778 #
Proposal for a regulation
Article 12 – paragraph 4
Article 12 – paragraph 4
Amendment 1790 #
Proposal for a regulation
Article 13 – paragraph 1
Article 13 – paragraph 1
1. High-risk AI systems shall be designed and developed in such a way to ensure that their operation is sufficiently transparent to enable users to interpret the system’s output and use it appropriately. An appropriate type and degree of transparency shall be ensured, with a view to achieving compliance with the relevant obligations of the user and of the provider set out in Chapter 3 of this Title. Transparency shall thereby mean that, to the extent that can be reasonably expected and is feasible in technical terms, the AI system’s output is interpretable by the user and the user is able to understand the general functionality of the AI system and its use of data.
Amendment 1793 #
Proposal for a regulation
Article 13 – paragraph 2
Article 13 – paragraph 2
2. High-risk AI systems shall be accompanied by instructions for use in an appropriate digital format or otherwise that include concise, complete, correct and clear information that helps support informed decision-making by users and is relevant, accessible and comprehensible to users.
Amendment 1801 #
Proposal for a regulation
Article 13 – paragraph 3 – point b – point iii
Article 13 – paragraph 3 – point b – point iii
Amendment 1808 #
Proposal for a regulation
Article 13 – paragraph 3 – point e a (new)
Article 13 – paragraph 3 – point e a (new)
(e a) a description of the mechanisms included within the AI system that allow users to properly collect, store and interpret the logs in accordance with Article 12(1).
Amendment 1812 #
Proposal for a regulation
Article 14 – paragraph 1
Article 14 – paragraph 1
1. Where proportionate to the risks associated with the high-risk system and where technical safeguards are not sufficient, high-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which the AI system is in use.
Amendment 1818 #
Proposal for a regulation
Article 14 – paragraph 2
Article 14 – paragraph 2
2. Human oversight shall aim at preventing or minimising the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, in particular when such risks persist notwithstanding the application of other requirements set out in this Chapter.
Amendment 1830 #
Proposal for a regulation
Article 14 – paragraph 4 – introductory part
Article 14 – paragraph 4 – introductory part
4. For the purpose of implementing paragraphs 1 to 3, the high-risk AI system shall be provided to the user in such a way that the individuals to whom human oversight is assigned are enabled, as appropriate and proportionate to the circumstances and in accordance with industry standards:
Amendment 1832 #
Proposal for a regulation
Article 14 – paragraph 4 – point a
Article 14 – paragraph 4 – point a
(a) to be aware of and sufficiently understand the capacities and limitations of the high-risk AI system and be able to duly monitor its operation, so that signs of anomalies, dysfunctions and unexpected performance can be detected and addressed as soon as possible;
Amendment 1833 #
Proposal for a regulation
Article 14 – paragraph 4 – point b
Article 14 – paragraph 4 – point b
(b) remain aware of the possible tendency of automatically relying or over- relying on the output produced by a high- risk AI system (‘automation bias’), in particular for high-risk AI systems used to provide information or recommendations for decisions to be taken by natural persons;
Amendment 1836 #
Proposal for a regulation
Article 14 – paragraph 4 – point c
Article 14 – paragraph 4 – point c
(c) be able to correctly interpret the high-risk AI system’s output, taking into account for example the interpretation tools and methods available;
Amendment 1838 #
Proposal for a regulation
Article 14 – paragraph 4 – point d
Article 14 – paragraph 4 – point d
(d) to be able to decide, in any particular situation, not to use the high-risk AI system or otherwise disregard, override or reverse the output of the high-risk AI system;
Amendment 1841 #
Proposal for a regulation
Article 14 – paragraph 4 – point e
Article 14 – paragraph 4 – point e
(e) to be able to intervene on the operation of the high-risk AI system, halt or interrupt the system where reasonable and technically feasible and except if the human interference increases the risks or would negatively impact the performance in consideration of the generally acknowledged state of the art.
Amendment 1844 #
Proposal for a regulation
Article 14 – paragraph 5
Article 14 – paragraph 5
5. For high-risk AI systems referred to in point 1(a) of Annex III, the measures referred to in paragraph 3 shall be such as to ensure that, in addition, no action or decision is taken by the user on the basis of the identification resulting from the system unless this has been verified and confirmed by at least two natural persons separately.
Amendment 1850 #
Proposal for a regulation
Article 15 – paragraph 1
Article 15 – paragraph 1
1. High-risk AI systems shall be designed and developed in such a way that they achieve, in the light of their intended purpose and to the extent that can be reasonably expected and is in accordance with relevant industry standards, an appropriate level of accuracy, robustness and cybersecurity, and perform consistently in those respects throughout their lifecycle.
Amendment 1856 #
Proposal for a regulation
Article 15 – paragraph 2
Article 15 – paragraph 2
2. The range of expected performance and the operational factors that affect that performance shall be declared in the accompanying instructions of use.
Amendment 1858 #
Proposal for a regulation
Article 15 – paragraph 3 – introductory part
Article 15 – paragraph 3 – introductory part
3. High-risk AI systems shall be designed and developed with safety and security-by-design mechanisms so that they achieve, in the light of their intended purpose, an appropriate level of cyber resilience as regards errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, in particular due to their interaction with natural persons or other systems.
Amendment 1863 #
Proposal for a regulation
Article 15 – paragraph 3 – subparagraph 2
Article 15 – paragraph 3 – subparagraph 2
High-risk AI systems that continue to learn after being placed on the market or put into service shall be developed in such a way to ensure that possibly biased outputs influencing input for future operations (‘feedback loops’) are duly addressed with appropriate mitigation measures.
Amendment 1867 #
Proposal for a regulation
Article 15 – paragraph 4 – subparagraph 1
Article 15 – paragraph 4 – subparagraph 1
The technical solutions and organisational measures designed to uphold the cybersecurity of high-risk AI systems shall be appropriate to the relevant circumstances and the risks.
Amendment 1887 #
Proposal for a regulation
Article 16 – paragraph 1 – point c
Article 16 – paragraph 1 – point c
(c) draw up the technical documentation of the high-risk AI system referred to in Article 18;
Amendment 1891 #
Proposal for a regulation
Article 16 – paragraph 1 – point d
Article 16 – paragraph 1 – point d
(d) when under their control, keep the logs automatically generated by their high- risk AI systems as referred to in Article 20;
Amendment 1893 #
Proposal for a regulation
Article 16 – paragraph 1 – point e
Article 16 – paragraph 1 – point e
(e) ensure that the high-risk AI system undergoes the relevant conformity assessment procedure as referred to in Article 43, prior to its placing on the market or putting into service;
Amendment 1899 #
Proposal for a regulation
Article 16 – paragraph 1 – point g
Article 16 – paragraph 1 – point g
(g) take the necessary corrective actions as referred to in Article 21, if the high-risk AI system is not in conformity with the requirements set out in Chapter 2 of this Title;
Amendment 1902 #
Proposal for a regulation
Article 16 – paragraph 1 – point j
Article 16 – paragraph 1 – point j
(j) upon reasoned request of a national competent authority, provide the relevant information and documentation to demonstrate the conformity of the high-risk AI system with the requirements set out in Chapter 2 of this Title.
Amendment 1914 #
Proposal for a regulation
Article 17 – paragraph 1 – introductory part
Article 17 – paragraph 1 – introductory part
1. Providers of high-risk AI systems shall put a quality management system in place that ensures compliance with this Regulation. That system shall be documented in a systematic and orderly manner in the form of written policies, procedures or instructions, and shall include at least the following aspects:
Amendment 1916 #
Proposal for a regulation
Article 17 – paragraph 1 – point a
Article 17 – paragraph 1 – point a
Amendment 1921 #
Proposal for a regulation
Article 17 – paragraph 1 – point e
Article 17 – paragraph 1 – point e
Amendment 1934 #
Proposal for a regulation
Article 17 – paragraph 1 – point j
Article 17 – paragraph 1 – point j
(j) the handling of communication with national competent authorities, competent authorities, including sectoral ones, providing or supporting the access to data, notified bodies, other operators, customers or other interested parties;
Amendment 1935 #
Proposal for a regulation
Article 17 – paragraph 1 – point k
Article 17 – paragraph 1 – point k
Amendment 1956 #
Proposal for a regulation
Article 20 – paragraph 1
Article 20 – paragraph 1
1. Providers of high-risk AI systems shall keep the logs automatically generated by their high-risk AI systems, to the extent such logs are under their control by virtue of a contractual arrangement with the user or otherwise by law. The logs shall be kept for a period that is appropriate in the light of industry standards, the intended purpose of the high-risk AI system and applicable legal obligations under Union or national law.
Amendment 1965 #
Proposal for a regulation
Article 22 – paragraph 1
Article 22 – paragraph 1
Where the high-risk AI system presents a risk within the meaning of Article 65(1) and that risk is known to the provider of the system, that provider shall immediately inform the market surveillance authorities of the Member States in which it made the system available and, where applicable, the notified body that issued a certificate for the high-risk AI system, in particular of the non-compliance and of any corrective actions taken.
Amendment 1969 #
Proposal for a regulation
Article 23 – paragraph 1
Article 23 – paragraph 1
Providers of high-risk AI systems shall, upon request by a national competent authority, provide that authority with all the information and documentation necessary to demonstrate the conformity of the high-risk AI system with the requirements set out in Chapter 2 of this Title, in an official Union language determined by the Member State concerned. Upon a reasoned request from a national competent authority, providers shall also give that authority access to the logs automatically generated by the high-risk AI system, to the extent such logs are under their control by virtue of a contractual arrangement with the user or otherwise by law. Any information submitted in accordance with the provisions of this Article shall be considered by the national competent authority a trade secret of the company that is submitting such information and kept strictly confidential.
Amendment 1977 #
Proposal for a regulation
Article 23 a (new)
Article 23 a (new)
Article 23 a
Conditions for other persons to be subject to the obligations of a provider
1. Concerning high-risk AI systems, any natural or legal person shall be considered a provider for the purposes of this Regulation and shall be subject to the obligations of the provider under Article 16, in any of the following circumstances:
(a) they put their name or trademark on a high-risk AI system already placed on the market or put into service, without prejudice to contractual arrangements stipulating that the obligations are allocated otherwise;
(b) they make a substantial modification to or modify the intended purpose of a high-risk AI system already placed on the market or put into service;
(c) they modify the intended purpose of a non-high-risk AI system already placed on the market or put into service, in a way which makes the modified system a high-risk AI system;
(d) they fulfil the conditions referred to in Article 3a(2).
2. Where the circumstances referred to in paragraph 1 occur, the provider that initially placed the high-risk AI system on the market or put it into service shall no longer be considered a provider for the purposes of this Regulation. The initial provider subject to the previous sentence shall, upon request and without compromising its own intellectual property rights or trade secrets, provide the new provider referred to in points (a), (b) or (c) of paragraph 1 with all essential, relevant and reasonably expected information that is necessary to comply with the obligations set out in this Regulation.
3. For high-risk AI systems that are safety components of products to which the legal acts listed in Annex II, section A apply, the manufacturer of those products shall be considered the provider of the high-risk AI system and shall be subject to the obligations referred to in Article 16 under either of the following scenarios:
(i) the high-risk AI system is placed on the market together with the product under the name or trademark of the product manufacturer; or
(ii) the high-risk AI system is put into service under the name or trademark of the product manufacturer after the product has been placed on the market.
4. Third parties involved in the sale and the supply of software, including general purpose application programming interfaces (API), software tools and components, providers who develop and train AI systems on behalf of a deploying company in accordance with their instructions, or providers of network services shall not be considered providers for the purposes of this Regulation.
Amendment 1978 #
Proposal for a regulation
Article 24
Article 24
Amendment 1981 #
Proposal for a regulation
Article 25 – paragraph 1
Article 25 – paragraph 1
1. Prior to making their systems available on the Union market, where an importer cannot be identified, providers established outside the Union shall, by written mandate, appoint an authorised representative which is established in the Union.
Amendment 1991 #
Proposal for a regulation
Article 25 – paragraph 2 – point c
Article 25 – paragraph 2 – point c
(c) cooperate with competent national authorities, upon a reasoned request, on any action the latter takes to reduce and mitigate the risks posed by a high-risk AI system covered by the authorised representative’s mandate.
Amendment 2011 #
Proposal for a regulation
Article 27 – paragraph 2
Article 27 – paragraph 2
2. Where a distributor considers or has reason to consider that a high-risk AI system is not in conformity with the requirements set out in Chapter 2 of this Title, it shall not make the high-risk AI system available on the market until that system has been brought into conformity with those requirements. Furthermore, where the system presents a risk within the meaning of Article 65(1), the distributor shall inform the provider or the importer of the system as well as the market surveillance authorities, as applicable, to that effect.
Amendment 2015 #
Proposal for a regulation
Article 27 – paragraph 4
Article 27 – paragraph 4
4. A distributor that considers or has reason to consider that a high-risk AI system which it has made available on the market is not in conformity with the requirements set out in Chapter 2 of this Title shall take the corrective actions necessary to bring that system into conformity with those requirements, to withdraw it or recall it or shall ensure that the provider, the importer or any relevant operator, as appropriate, takes those corrective actions. Where the high-risk AI system presents a risk within the meaning of Article 65(1), the distributor shall immediately inform the provider or the importer of the system as well as the national competent authorities of the Member States in which it has made the product available to that effect, giving details, in particular, of the non-compliance and of any corrective actions taken.
Amendment 2018 #
Proposal for a regulation
Article 27 – paragraph 5
Article 27 – paragraph 5
5. Upon a reasoned request from a national competent authority, distributors of high-risk AI systems shall provide that authority with all the information and documentation necessary to demonstrate the conformity of a high-risk system with the requirements set out in Chapter 2 of this Title. Distributors shall also cooperate with that national competent authority regarding its activities pursuant to paragraphs 1 to 4.
Amendment 2024 #
Proposal for a regulation
Article 28
Article 28
Amendment 2039 #
Proposal for a regulation
Article 29 – paragraph 1
Article 29 – paragraph 1
1. Users of high-risk AI systems shall use such systems and implement human oversight in accordance with the instructions of use accompanying the systems, pursuant to paragraphs 2 and 5 of this article.
Amendment 2044 #
Proposal for a regulation
Article 29 – paragraph 1 a (new)
Article 29 – paragraph 1 a (new)
1 a. Users shall assign human oversight to natural persons who have the necessary competence, training and authority.
Amendment 2049 #
Proposal for a regulation
Article 29 – paragraph 3
Article 29 – paragraph 3
3. Without prejudice to paragraph 1, to the extent the user exercises control over the input data, that user shall ensure that input data is relevant in view of the intended purpose of the high-risk AI system. To the extent the user exercises control over the high-risk AI system, that user shall also ensure that relevant and appropriate robustness and cybersecurity measures are in place and are regularly adjusted or updated.
Amendment 2054 #
Proposal for a regulation
Article 29 – paragraph 4 – introductory part
Article 29 – paragraph 4 – introductory part
4. Users shall monitor the operation of the high-risk AI system on the basis of the instructions of use and, when relevant, inform providers in accordance with Article 61. To the extent the user exercises control over the high-risk AI system, the user shall also establish a risk management system in line with Article 9 but limited to the potential adverse effects of using the high-risk AI system and the respective mitigation measures. When they have reasons to consider that the use in accordance with the instructions of use may result in the AI system presenting a risk within the meaning of Article 65(1) they shall inform the provider or distributor and suspend the use of the system. They shall also inform the provider or distributor when they have identified any serious incident or any malfunctioning within the meaning of Article 62 and interrupt the use of the AI system. In case the user is not able to reach the provider, Article 62 shall apply mutatis mutandis.
Amendment 2059 #
Proposal for a regulation
Article 29 – paragraph 5 – introductory part
Article 29 – paragraph 5 – introductory part
5. Users of high-risk AI systems shall keep the logs automatically generated by that high-risk AI system, to the extent such logs are under their control. The logs shall be kept for a period that is appropriate in the light of industry standards, the intended purpose of the high-risk AI system and applicable legal obligations under Union or national law.
Amendment 2064 #
Proposal for a regulation
Article 29 – paragraph 6
Article 29 – paragraph 6
6. Users of high-risk AI systems shall use the information provided under Article 13 to comply with their obligation to carry out a data protection impact assessment under Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680, where applicable, and may revert in part to those data protection impact assessments for fulfilling the obligations set out in this Article.
Amendment 2068 #
Proposal for a regulation
Article 29 – paragraph 6 a (new)
Article 29 – paragraph 6 a (new)
6 a. Where a user of a high risk AI system is obliged pursuant to Regulation (EU) 2016/679 to provide information regarding the use of automated decision making procedures, the user shall not be obliged to provide information on how the AI system reached a specific result. When fulfilling the information obligations under Regulation (EU) 2016/679, the user shall not be obliged to provide information beyond the information he or she received from the provider under Article 13 of this Regulation.
Amendment 2076 #
Proposal for a regulation
Article 29 – paragraph 6 b (new)
Article 29 – paragraph 6 b (new)
6 b. The obligations established by this Article shall not apply to users who use the AI system in the course of a personal non-professional activity.
Amendment 2133 #
Proposal for a regulation
Article 41 – paragraph 1
Article 41 – paragraph 1
1. Where harmonised standards referred to in Article 40 do not exist and are not expected to be published within a reasonable period or where the Commission considers that the relevant harmonised standards are insufficient or that there is a need to address specific safety or fundamental right concerns, the Commission may, by means of implementing acts, adopt common specifications in respect of the requirements set out in Chapter 2 of this Title. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 74(2).
Amendment 2138 #
Proposal for a regulation
Article 41 – paragraph 1 a (new)
Article 41 – paragraph 1 a (new)
1 a. When deciding to draft and adopt common specifications, the Commission shall consult the Board, the European standardisation organisations as well as the relevant stakeholders, and duly justify why it decided not to use harmonised standards. The abovementioned organisations shall be regularly consulted while the Commission is in the process of drafting the common specifications.
Amendment 2141 #
Proposal for a regulation
Article 41 – paragraph 2
Article 41 – paragraph 2
2. The Commission, when preparing the common specifications referred to in paragraph 1, shall gather the views of stakeholders, including SMEs and start-ups, relevant bodies or expert groups established under relevant sectorial Union law.
Amendment 2149 #
Proposal for a regulation
Article 41 – paragraph 4
Article 41 – paragraph 4
4. Where providers of high-risk AI systems do not comply with the common specifications referred to in paragraph 1, they shall duly justify that they have adopted technical solutions that are at least equivalent thereto.
Amendment 2150 #
Proposal for a regulation
Article 41 – paragraph 4 a (new)
Article 41 – paragraph 4 a (new)
4 a. If harmonised standards referred to in Article 40 are developed and the references to them are published in the Official Journal of the European Union in accordance with Regulation (EU) No 1025/2012 in the future, the relevant common specifications shall no longer apply.
Amendment 2191 #
Proposal for a regulation
Article 43 – paragraph 4 – introductory part
Article 43 – paragraph 4 – introductory part
4. High-risk AI systems that have already been subject to a conformity assessment procedure shall undergo a new conformity assessment procedure whenever they are substantially modified, if the modified system is intended to be further distributed or continues to be used by the current user.
Amendment 2193 #
Proposal for a regulation
Article 43 – paragraph 4 – subparagraph 1
Article 43 – paragraph 4 – subparagraph 1
For high-risk AI systems that continue to learn after being placed on the market or put into service, changes to the high-risk AI system and its performance that have been pre-determined by the provider at the moment of the initial conformity assessment and are part of the information contained in the technical documentation referred to in point 2(f) of Annex IV, shall not constitute a substantial modification. The same should apply to updates of the AI system for security reasons in general and to protect against evolving threats of manipulation of the system as long as the update does not include significant changes to the functionality of the system.
Amendment 2201 #
Proposal for a regulation
Article 43 – paragraph 5
Article 43 – paragraph 5
5. After consulting the AI Board referred to in Article 56 and after providing substantial evidence, followed by thorough consultation and the involvement of the affected stakeholders, the Commission is empowered to adopt delegated acts in accordance with Article 73 for the purpose of updating Annexes VI and VII in order to introduce elements of the conformity assessment procedures that become necessary in light of technical progress.
Amendment 2208 #
Proposal for a regulation
Article 43 – paragraph 6
Article 43 – paragraph 6
Amendment 2232 #
Proposal for a regulation
Article 49 – paragraph 1
Article 49 – paragraph 1
1. The physical CE marking shall be affixed visibly, legibly and indelibly for high-risk AI systems. Where that is not possible or not warranted on account of the nature of the high-risk AI system, it shall be affixed to the packaging or to the accompanying documentation, as appropriate.
Amendment 2234 #
Proposal for a regulation
Article 49 – paragraph 1 a (new)
Article 49 – paragraph 1 a (new)
1 a. A digital CE marking may be used instead of or additionally to the physical marking if it can be accessed via the display of the product or via a machine- readable code or other electronic means.
Amendment 2241 #
Proposal for a regulation
Article 50 – paragraph 1 – introductory part
Article 50 – paragraph 1 – introductory part
The provider shall, for a period ending 5 years after the AI system has been placed on the market or put into service, keep at the disposal of the national competent authorities:
Amendment 2244 #
Proposal for a regulation
Article 51 – paragraph 1
Article 51 – paragraph 1
Before placing on the market or putting into service a high-risk AI system referred to in Article 6(2) and Article 6a, the provider or, where applicable, the authorised representative shall register that system in the EU database referred to in Article 60.
Amendment 2252 #
Proposal for a regulation
Article 51 – paragraph 1 a (new)
Article 51 – paragraph 1 a (new)
Before putting into service or using a high-risk AI system in one of the areas listed in Annex III, users who are public authorities or Union institutions, bodies, offices or agencies or users acting on their behalf shall register in the EU database referred to in Article 60.
Amendment 2268 #
Proposal for a regulation
Article 52 – paragraph 2
Article 52 – paragraph 2
2. Users of an emotion recognition system or a biometric categorisation system shall inform of the operation of the system the natural persons exposed thereto. This obligation shall not apply to AI systems used for biometric categorisation, which are permitted by law to detect, prevent and investigate criminal offences.
Amendment 2271 #
Proposal for a regulation
Article 52 – paragraph 3 – introductory part
Article 52 – paragraph 3 – introductory part
3. Users of an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful (‘deep fake’), shall disclose, in an appropriate, clear and visible manner, that the content has been artificially generated or manipulated.
Amendment 2278 #
Proposal for a regulation
Article 52 – paragraph 3 – subparagraph 1
Article 52 – paragraph 3 – subparagraph 1
However, the first subparagraph shall not apply where the content is part of an obviously artistic, creative or fictional cinematographic work or it is necessary for the exercise of the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU, and subject to appropriate safeguards for the rights and freedoms of third parties.
Amendment 2294 #
Proposal for a regulation
Article 53 – paragraph 1
Article 53 – paragraph 1
1. AI regulatory sandboxes established by the European Commission, one or more Member States, or other competent entities shall provide a controlled environment that facilitates the development, testing and validation of innovative AI systems for a limited time before their placement on the market or putting into service pursuant to a specific plan. This shall take place in collaboration with and guidance by the European Commission or the competent authorities in order to identify risks to health and safety and fundamental rights, test mitigation measures for identified risks, demonstrate prevention of these risks and otherwise ensure compliance with the requirements of this Regulation and, where relevant, other Union and Member States legislation supervised within the sandbox.
Amendment 2309 #
Proposal for a regulation
Article 53 – paragraph 2
Article 53 – paragraph 2
2. The European Commission in collaboration with Member States shall ensure that to the extent the innovative AI systems involve the processing of personal data or otherwise fall under the supervisory remit of other national authorities or competent authorities providing or supporting access to data, the national data protection authorities and those other national authorities are associated to the operation of the AI regulatory sandbox.
Amendment 2329 #
Proposal for a regulation
Article 53 – paragraph 5
Article 53 – paragraph 5
5. The European Commission, Member States’ competent authorities and other entities that have established AI regulatory sandboxes shall coordinate their activities and cooperate within the framework of the Commission’s AI Regulatory Sandboxing programme. The European Commission shall submit annual reports to the European Artificial Intelligence Board on the results from the implementation of those schemes, including good practices, lessons learnt and recommendations on their setup and, where relevant, on the application of this Regulation and other Union legislation supervised within the sandbox.
Amendment 2340 #
Proposal for a regulation
Article 53 – paragraph 6 a (new)
Article 53 – paragraph 6 a (new)
6 a. The Commission shall establish an EU AI Regulatory Sandboxing Programme whose modalities referred to in Article 53(6) shall cover the elements set out in Annex IXa. The Commission shall proactively coordinate with national, regional and also local authorities, as relevant.
Amendment 2372 #
Proposal for a regulation
Article 55 – title
Article 55 – title
Measures for SMEs, start-ups and users
Amendment 2375 #
Proposal for a regulation
Article 55 – paragraph 1 – point a
Article 55 – paragraph 1 – point a
(a) provide SMEs and start-ups with priority access to the AI regulatory sandboxes to the extent that they fulfil the eligibility conditions;
Amendment 2377 #
Proposal for a regulation
Article 55 – paragraph 1 – point b
Article 55 – paragraph 1 – point b
(b) organise specific awareness raising activities about the application of this Regulation tailored to the needs of SMEs, start-ups and users;
Amendment 2379 #
(c) where appropriate, establish a dedicated channel for communication with SMEs, start-ups, users and other innovators to provide guidance and respond to queries about the implementation of this Regulation.
Amendment 2381 #
Proposal for a regulation
Article 55 – paragraph 1 – point c a (new)
Article 55 – paragraph 1 – point c a (new)
(c a) support SMEs’ increased participation in the standardisation development process;
Amendment 2387 #
Proposal for a regulation
Article 55 – paragraph 2
Article 55 – paragraph 2
2. The specific interests and needs of SMEs and start-ups shall be taken into account when setting the fees for conformity assessment under Article 43, reducing those fees proportionately to their size and market size.
Amendment 2389 #
Proposal for a regulation
Article 55 a (new)
Article 55 a (new)
Article 55 a
Promoting research and development of AI in support of socially and environmentally beneficial outcomes
Member States shall promote research and development of AI solutions which support socially and environmentally beneficial outcomes, including but not limited to development of AI-based solutions to increase accessibility for persons with disabilities, tackle socio-economic inequalities, and meet sustainability and environmental targets, by:
(a) providing relevant projects with priority access to the AI regulatory sandboxes to the extent that they fulfil the eligibility conditions;
(b) earmarking public funding, including from relevant EU funds, for AI research and development in support of socially and environmentally beneficial outcomes;
(c) organising specific awareness raising activities about the application of this Regulation, the availability of and application procedures for dedicated funding, tailored to the needs of those projects;
(d) where appropriate, establishing accessible dedicated channels for communication with projects to provide guidance and respond to queries about the implementation of this Regulation.
Amendment 2400 #
Proposal for a regulation
Article 56 – paragraph 1
Article 56 – paragraph 1
1. A ‘European Artificial Intelligence Board’ (the ‘Board’) is established as a body of the Union and shall have legal personality.
Amendment 2405 #
Proposal for a regulation
Article 56 – paragraph 2 – introductory part
Article 56 – paragraph 2 – introductory part
2. The Board shall provide advice and assistance to the Commission and to the national supervisory authorities in order to:
Amendment 2408 #
Proposal for a regulation
Article 56 – paragraph 2 – point b
Article 56 – paragraph 2 – point b
(b) provide guidance and analysis to the Commission and the national supervisory authorities and other competent authorities on emerging issues across the internal market with regard to matters covered by this Regulation;
Amendment 2410 #
Proposal for a regulation
Article 56 – paragraph 2 – point c
Article 56 – paragraph 2 – point c
(c) contribute to the effective and consistent application of this Regulation and assist the national supervisory authorities and the Commission in that regard.
Amendment 2414 #
(c a) contribute to the effective cooperation with the competent authorities of third countries and with international organisations.
Amendment 2431 #
Proposal for a regulation
Article 57 – paragraph 1
Article 57 – paragraph 1
1. The Board shall be composed of the national supervisory authorities, who shall be represented by the head or equivalent high-level official of that authority, the European Data Protection Supervisor as well as the EU Agency for Fundamental Rights, the EU Agency for Cybersecurity, the Joint Research Centre, the European Committee for Standardization, the European Committee for Electrotechnical Standardization, and the European Telecommunications Standards Institute, each with one representative. Other national authorities may be invited to the meetings, where the issues discussed are of relevance for them.
Amendment 2439 #
Proposal for a regulation
Article 57 – paragraph 1 a (new)
Article 57 – paragraph 1 a (new)
1 a. The Board shall act independently when performing its tasks or exercising its powers.
Amendment 2464 #
Proposal for a regulation
Article 57 – paragraph 4
Article 57 – paragraph 4
4. The Board may invite external experts and observers to attend its meetings and may hold exchanges with interested third parties to inform its activities, and hold consultations with relevant stakeholders and ensure appropriate participation. The Commission may facilitate exchanges between the Board and other Union bodies, offices, agencies and advisory groups.
Amendment 2484 #
Proposal for a regulation
Article 58 – paragraph 1 – introductory part
Article 58 – paragraph 1 – introductory part
When providing advice and assistance to the Commission and to the national supervisory authorities in the context of Article 56(2), the Board shall in particular:
Amendment 2498 #
Proposal for a regulation
Article 58 – paragraph 1 – point b
Article 58 – paragraph 1 – point b
(b) contribute to uniform administrative practices in the Member States, including for the assessment, establishing, managing, with the meaning of fostering cooperation and guaranteeing consistency among regulatory sandboxes, and functioning of regulatory sandboxes referred to in Article 53, Article 54 and Annex IXa;
Amendment 2513 #
Proposal for a regulation
Article 58 – paragraph 1 – point c a (new)
Article 58 – paragraph 1 – point c a (new)
(c a) carry out annual reviews and analyses of the complaints sent to and findings made by national competent authorities, of the serious incident reports referred to in Article 62, and of the new registrations in the EU Database referred to in Article 60 to identify trends and potential emerging issues threatening the future health and safety and fundamental rights of citizens that are not adequately addressed by this Regulation;
Amendment 2521 #
Proposal for a regulation
Article 58 – paragraph 1 – point c b (new)
Article 58 – paragraph 1 – point c b (new)
(c b) coordinate among national competent authorities; issue guidelines, recommendations and best practices with a view to ensuring the consistent implementation of this Regulation;
Amendment 2526 #
Proposal for a regulation
Article 58 – paragraph 1 – point c c (new)
Article 58 – paragraph 1 – point c c (new)
(c c) promote the cooperation and effective bilateral and multilateral exchange of information and best practices between the national supervisory authorities;
Amendment 2529 #
Proposal for a regulation
Article 58 – paragraph 1 – point c d (new)
Article 58 – paragraph 1 – point c d (new)
Amendment 2533 #
Proposal for a regulation
Article 58 – paragraph 1 – point c e (new)
Article 58 – paragraph 1 – point c e (new)
(c e) carry out biannual horizon scanning and foresight exercises to extrapolate the impact the trends and emerging issues can have on the Union;
Amendment 2539 #
Proposal for a regulation
Article 58 – paragraph 1 – point c f (new)
Article 58 – paragraph 1 – point c f (new)
(c f) promote public awareness and understanding of the benefits, rules and safeguards and rights in relation to the use of AI systems.
Amendment 2572 #
Proposal for a regulation
Article 59 – paragraph 4
Article 59 – paragraph 4
4. Member States shall ensure that national competent authorities are provided with adequate financial, technical and human resources to fulfil their tasks under this Regulation. In particular, national competent authorities shall have a sufficient number of personnel permanently available whose competences and expertise shall include an in-depth understanding of artificial intelligence technologies, data and data computing, fundamental rights, health and safety risks and knowledge of existing standards and legal requirements.
Amendment 2584 #
Proposal for a regulation
Article 59 – paragraph 6
Article 59 – paragraph 6
6. The Commission and the Board shall facilitate the exchange of experience between national competent authorities.
Amendment 2589 #
Proposal for a regulation
Article 59 – paragraph 7
Article 59 – paragraph 7
7. National competent authorities may provide guidance and advice on the implementation of this Regulation, including to SMEs and start-ups. Whenever national competent authorities intend to provide guidance and advice with regard to an AI system in areas covered by other Union legislation, the competent national authorities under that Union legislation shall be consulted, as appropriate. Member States shall also establish one central contact point for communication with operators and other stakeholders.
Amendment 2593 #
Proposal for a regulation
Article 59 – paragraph 8
Article 59 – paragraph 8
8. When Union institutions, agencies and bodies fall within the scope of this Regulation, the European Data Protection Supervisor shall act as the competent authority for their supervision and coordination.
Amendment 2615 #
Proposal for a regulation
Article 60 – paragraph 1
Article 60 – paragraph 1
1. The Commission shall, in collaboration with the Member States, set up and maintain an EU database containing information referred to in paragraph 2 concerning high-risk AI systems in one of the areas listed in Annex III which are registered in accordance with Article 51 and their uses by public authorities and Union institutions, bodies, offices or agencies or on their behalf.
Amendment 2627 #
Proposal for a regulation
Article 60 – paragraph 4
Article 60 – paragraph 4
4. The EU database shall contain personal data only insofar as necessary for collecting and processing information in accordance with this Regulation. That information shall include the names and contact details of natural persons who are responsible for registering the system and have the legal authority to represent the provider or the user, if the user is a public authority or a Union institution, body, office or agency or a user acting on their behalf.
Amendment 2643 #
Proposal for a regulation
Article 61 – paragraph 2
Article 61 – paragraph 2
2. The post-market monitoring system shall actively and systematically collect, document and analyse relevant data provided by users or collected through other sources, to the extent such data are readily accessible to the provider and taking into account the limits resulting from data protection, copyright and competition law, on the performance of high-risk AI systems throughout their lifetime, and allow the provider to evaluate the continuous compliance of AI systems with the requirements set out in Title III, Chapter 2.
Amendment 2648 #
Proposal for a regulation
Article 61 – paragraph 3
Article 61 – paragraph 3
3. The post-market monitoring system shall be based on a post-market monitoring plan. The post-market monitoring plan shall be part of the technical documentation referred to in Annex IV. The Commission shall adopt an implementing act laying down detailed provisions establishing a template for the post-market monitoring plan and the list of elements to be included in the plan by ... [12 months following the entry into force of this Regulation].
Amendment 2655 #
Proposal for a regulation
Article 62 – paragraph 1 – introductory part
Article 62 – paragraph 1 – introductory part
1. Providers and, where applicable, users of high-risk AI systems placed on the Union market shall report any serious incident or any malfunctioning of those systems which constitutes a breach of obligations under Union law intended to protect fundamental rights to the market surveillance authorities of the Member States where that incident or breach occurred.
Amendment 2657 #
Proposal for a regulation
Article 62 – paragraph 1 – subparagraph 1
Article 62 – paragraph 1 – subparagraph 1
Such notification shall be made without undue delay after the provider has established a causal link between the AI system and the incident or malfunctioning or the reasonable likelihood of such a link, and, in any event, not later than 72 hours after the provider becomes aware of the serious incident or of the malfunctioning.
Amendment 2664 #
Proposal for a regulation
Article 62 – paragraph 1 – subparagraph 1 a (new)
Article 62 – paragraph 1 – subparagraph 1 a (new)
No report under this Article is required if the serious incident also leads to reporting requirements under other laws. In that case, the authorities competent under those laws shall forward the received report to the national competent authority.
Amendment 2668 #
Proposal for a regulation
Article 62 – paragraph 2 a (new)
Article 62 – paragraph 2 a (new)
2 a. Upon establishing a causal link between the AI system and the serious incident or malfunctioning or the reasonable likelihood of such a link, providers shall take appropriate corrective actions pursuant to Article 21.
Amendment 2673 #
Proposal for a regulation
Article 62 – paragraph 3 a (new)
Article 62 – paragraph 3 a (new)
3 a. National supervisory authorities shall on an annual basis notify the Board of the serious incidents and malfunctioning reported to them in accordance with this Article.
Amendment 2674 #
Proposal for a regulation
Article 63 – paragraph 2
Article 63 – paragraph 2
2. The national supervisory authority shall report annually to the Commission the outcomes of relevant market surveillance activities. The national supervisory authority shall report, without delay, to the Commission and relevant national competition authorities any information identified in the course of market surveillance activities that may be of potential interest for the application of Union law on competition rules.
Amendment 2676 #
Proposal for a regulation
Article 63 – paragraph 3 a (new)
Article 63 – paragraph 3 a (new)
3 a. The procedures referred to in Articles 65, 66, 67 and 68 of this Regulation shall not apply to AI systems related to products, to which legal acts listed in Annex II, section A apply, when such legal acts already provide for procedures having the same objective. In such a case, these sectoral procedures shall apply instead.
Amendment 2679 #
Proposal for a regulation
Article 64 – paragraph 1
Article 64 – paragraph 1
1. Without prejudice to powers provided under Regulation (EU) 2019/1020, and where relevant and limited to what is necessary to fulfil their tasks, market surveillance authorities may request access to data and documentation that are strictly necessary for the purpose of their request, including, where appropriate and subject to security safeguards, through application programming interfaces (‘API’) or other appropriate technical means and tools enabling remote access.
Amendment 2689 #
Proposal for a regulation
Article 64 – paragraph 2
Article 64 – paragraph 2
2. Market surveillance authorities shall be granted access to the source code of the high-risk AI system upon a reasoned request and only when the following cumulative conditions are fulfilled:
(a) access to the source code is necessary to assess the conformity of a high-risk AI system with the requirements set out in Title III, Chapter 2; and
(b) testing/auditing procedures and verifications based on the data and documentation provided by the provider have been exhausted or proved insufficient.
Amendment 2709 #
Proposal for a regulation
Article 65 – paragraph 1
Article 65 – paragraph 1
1. AI systems presenting a risk shall be understood as a product presenting a risk defined in Article 3, point 19 of Regulation (EU) 2019/1020 insofar as risks to the health or safety or to the protection of fundamental rights of persons are concerned.
Amendment 2715 #
Proposal for a regulation
Article 65 – paragraph 2 – introductory part
Article 65 – paragraph 2 – introductory part
2. Where the market surveillance authority of a Member State has sufficient reasons to consider that an AI system presents a risk as referred to in paragraph 1, they shall carry out an evaluation of the AI system concerned in respect of its compliance with all the requirements and obligations laid down in this Regulation. When risks to the protection of fundamental rights are present, the market surveillance authority shall also inform the relevant national public authorities or bodies referred to in Article 64(3). The relevant operators shall cooperate as necessary with the market surveillance authorities and the other national public authorities or bodies referred to in Article 64(3).
Amendment 2722 #
Proposal for a regulation
Article 65 – paragraph 3
Article 65 – paragraph 3
3. Where the market surveillance authority considers that non-compliance is not restricted to its national territory, it shall inform the Commission and the other Member States without undue delay of the results of the evaluation and of the actions which it has required the operator to take.
Amendment 2726 #
Proposal for a regulation
Article 65 – paragraph 5
Article 65 – paragraph 5
5. Where the operator of an AI system does not take adequate corrective action within the period referred to in paragraph 2, the market surveillance authority shall take all appropriate provisional measures to prohibit or restrict the AI system being made available on its national market, to withdraw the product from that market or to recall it. That authority shall notify the Commission and the other Member States, without delay, of those measures.
Amendment 2727 #
Proposal for a regulation
Article 65 – paragraph 6 – introductory part
Article 65 – paragraph 6 – introductory part
6. The notification referred to in paragraph 5 shall include all available details, in particular the information necessary for the identification of the non-compliant AI system, the origin of the AI system, the nature of the non-compliance alleged and the risk involved, the nature and duration of the national measures taken and the arguments put forward by the relevant operator. In particular, the market surveillance authorities shall indicate whether the non-compliance is due to one or more of the following:
Amendment 2729 #
Proposal for a regulation
Article 65 – paragraph 6 – point a
Article 65 – paragraph 6 – point a
(a) a failure of the high-risk AI system to meet requirements set out in Title III, Chapter 2;
Amendment 2730 #
Proposal for a regulation
Article 65 – paragraph 6 – point b a (new)
Article 65 – paragraph 6 – point b a (new)
(b a) non-compliance with the prohibition of the artificial intelligence practices referred to in Article 5;
Amendment 2731 #
Proposal for a regulation
Article 65 – paragraph 6 – point b b (new)
Article 65 – paragraph 6 – point b b (new)
(b b) non-compliance with provisions set out in Article 52;
Amendment 2735 #
Proposal for a regulation
Article 65 – paragraph 8
Article 65 – paragraph 8
8. Where, within three months of receipt of the notification referred to in paragraph 5, no objection has been raised by either a Member State or the Commission in respect of a provisional measure taken by a Member State, that measure shall be deemed justified. This is without prejudice to the procedural rights of the concerned operator in accordance with Article 18 of Regulation (EU) 2019/1020. The period referred to in the first sentence of this paragraph shall be reduced to 30 days in the case of non-compliance with the prohibition of the artificial intelligence practices referred to in Article 5.
Amendment 2737 #
Proposal for a regulation
Article 65 – paragraph 9
Article 65 – paragraph 9
9. The market surveillance authorities of all Member States shall ensure that appropriate restrictive measures are taken in respect of the AI system concerned, such as withdrawal of the AI system from their market, without delay.
Amendment 2739 #
Proposal for a regulation
Article 66 – paragraph 1
Article 66 – paragraph 1
1. Where, within three months of receipt of the notification referred to in Article 65(5), or 30 days in the case of non-compliance with the prohibition of the artificial intelligence practices referred to in Article 5, objections are raised by a Member State against a measure taken by another Member State, or where the Commission considers the measure to be contrary to Union law, the Commission shall without delay enter into consultation with the relevant Member State’s market surveillance authority and operator or operators and shall evaluate the national measure. On the basis of the results of that evaluation, the Commission shall decide whether the national measure is justified or not within 9 months, or 60 days in the case of non-compliance with the prohibition of the artificial intelligence practices referred to in Article 5, starting from the notification referred to in Article 65(5) and notify such decision to the Member State concerned. The Commission shall also inform all other Member States of such decision.
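Taken together with Article 65(8) above, the amendment sets two pairs of periods: three months (or 30 days for prohibited practices) for objections, and 9 months (or 60 days) for the Commission's decision. A small sketch of that date arithmetic, assuming the widely used python-dateutil package; the function names are mine:

```python
from datetime import date
from dateutil.relativedelta import relativedelta

def objection_deadline(notified: date, article5_case: bool) -> date:
    # Article 65(8): three months, reduced to 30 days where a prohibited
    # practice under Article 5 is at stake.
    delta = relativedelta(days=30) if article5_case else relativedelta(months=3)
    return notified + delta

def commission_decision_deadline(notified: date, article5_case: bool) -> date:
    # Article 66(1): 9 months, reduced to 60 days for Article 5 cases.
    delta = relativedelta(days=60) if article5_case else relativedelta(months=9)
    return notified + delta

notified = date(2023, 1, 15)
print(objection_deadline(notified, article5_case=False))           # 2023-04-15
print(commission_decision_deadline(notified, article5_case=True))  # 2023-03-16
```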
Amendment 2751 #
Proposal for a regulation
Article 67 – paragraph 1
Article 67 – paragraph 1
1. Where, having performed an evaluation under Article 65, the market surveillance authority of a Member State finds that although an AI system is in compliance with this Regulation, it presents a risk to the health or safety of persons or to fundamental rights, it shall require the relevant operator to take all appropriate measures to ensure that the AI system concerned, when placed on the market or put into service, no longer presents that risk, to withdraw the AI system from the market or to recall it within a reasonable period, commensurate with the nature of the risk, as it may prescribe.
Amendment 2758 #
Proposal for a regulation
Article 67 – paragraph 4
Article 67 – paragraph 4
4. The Commission shall without delay enter into consultation with the Member States concerned and the relevant operator and shall evaluate the national measures taken. On the basis of the results of that evaluation, the Commission shall decide whether the measure is justified or not and, where necessary, propose appropriate measures.
Amendment 2762 #
Proposal for a regulation
Article 67 – paragraph 5
Article 67 – paragraph 5
5. The Commission shall address its decision to the Member States concerned and shall inform all other Member States.
Amendment 2769 #
Proposal for a regulation
Article 68 – paragraph 2
Article 68 – paragraph 2
2. Where the non-compliance referred to in paragraph 1 persists, the Member State concerned shall take all appropriate and proportionate measures to restrict or prohibit the high-risk AI system being made available on the market or ensure that it is recalled or withdrawn from the market.
Amendment 2772 #
Proposal for a regulation
Article 68 a (new)
Article 68 a (new)
Amendment 2779 #
Proposal for a regulation
Article 68 b (new)
Article 68 b (new)
Article 68 b
Right to an effective judicial remedy against a national supervisory authority
1. Without prejudice to any other administrative or non-judicial remedy, each natural or legal person shall have the right to an effective judicial remedy against a legally binding decision of a national supervisory authority concerning them.
2. Without prejudice to any other administrative or non-judicial remedy, each data subject shall have the right to an effective judicial remedy where the national supervisory authority does not handle a complaint, does not inform the complainant of the progress or preliminary outcome of the complaint lodged within three months pursuant to Article 68a(3), or does not comply with its obligation to reach a final decision on the complaint within six months pursuant to Article 68a(4) or with its obligations under Article 65.
3. Proceedings against a national supervisory authority shall be brought before the courts of the Member State where the national supervisory authority is established.
Amendment 2787 #
Proposal for a regulation
Article 69 – paragraph 1
Article 69 – paragraph 1
1. The Commission and the Board shall encourage and facilitate the drawing up of codes of conduct intended to foster the voluntary application to AI systems other than high-risk AI systems of the requirements set out in Title III, Chapter 2 on the basis of technical specifications and solutions that are appropriate means of ensuring compliance with such requirements in light of the intended purpose of the systems.
Amendment 2793 #
Proposal for a regulation
Article 69 – paragraph 4
Article 69 – paragraph 4
4. The Commission and the Board shall take into account the specific interests and needs of SMEs and start-ups when encouraging and facilitating the drawing up of codes of conduct.
Amendment 2796 #
Proposal for a regulation
Article 70 – paragraph 1 – introductory part
Article 70 – paragraph 1 – introductory part
1. National competent authorities, notified bodies, the Commission, the Board, and any other natural or legal person involved in the application of this Regulation shall, in accordance with Union or national law, put appropriate technical and organisational measures in place to ensure the confidentiality of information and data obtained in carrying out their tasks and activities in such a manner as to protect, in particular:
Amendment 2803 #
Proposal for a regulation
Article 70 – paragraph 1 – point c a (new)
Article 70 – paragraph 1 – point c a (new)
Amendment 2821 #
Proposal for a regulation
Article 71 – paragraph 1
Article 71 – paragraph 1
1. In compliance with the terms and conditions laid down in this Regulation, Member States shall lay down the rules on penalties, including administrative fines, applicable to infringements of this Regulation and shall take all measures necessary to ensure that they are properly and effectively implemented. The penalties provided for shall be effective, proportionate and dissuasive. They shall take into particular account the size and interests of SMEs and start-ups and their economic viability.
Amendment 2827 #
Proposal for a regulation
Article 71 – paragraph 2
Article 71 – paragraph 2
2. The Member States shall without delay notify the Commission of those rules and of those measures and shall notify it, without delay, of any subsequent amendment affecting them.
Amendment 2830 #
Proposal for a regulation
Article 71 – paragraph 3 – introductory part
Article 71 – paragraph 3 – introductory part
3. Non-compliance with the prohibition of the artificial intelligence practices referred to in Article 5 shall be subject to administrative fines of up to 20 000 000 EUR or, if the offender is a company, up to 4 % of its total worldwide annual turnover for the preceding financial year, and in the case of SMEs and start-ups, up to 3 % of its worldwide annual turnover for the preceding financial year, whichever is higher.
Amendment 2838 #
Proposal for a regulation
Article 71 – paragraph 3 – point a
Article 71 – paragraph 3 – point a
Amendment 2840 #
Proposal for a regulation
Article 71 – paragraph 3 – point b
Article 71 – paragraph 3 – point b
Amendment 2848 #
Proposal for a regulation
Article 71 – paragraph 4
Article 71 – paragraph 4
4. The grossly negligent non-compliance by the provider or the user of the AI system with any requirements or obligations under this Regulation, other than those laid down in Articles 5 and 10, shall be subject to administrative fines of up to 10 000 000 EUR or, if the offender is a company, up to 2 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.
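As a worked illustration of the "whichever is higher" mechanics in the amended paragraphs 3 and 4, here is a short sketch; the function name and parameters are mine, and the SME branch simply follows the literal wording of paragraph 3 above.

```python
def fine_cap(turnover_eur: float, is_company: bool,
             is_sme: bool, article5_infringement: bool) -> float:
    """Upper bound of the administrative fine under the amended Article 71.

    Paragraph 3 (Article 5 infringements): 20 000 000 EUR or 4 % of total
    worldwide annual turnover (3 % for SMEs and start-ups), whichever is higher.
    Paragraph 4 (grossly negligent non-compliance with other obligations):
    10 000 000 EUR or 2 % of turnover, whichever is higher.
    """
    if article5_infringement:
        flat, rate = 20_000_000, (0.03 if is_sme else 0.04)
    else:
        flat, rate = 10_000_000, 0.02
    if not is_company:
        return flat  # the turnover-based cap only applies to companies
    return max(flat, rate * turnover_eur)

# A company with 1 bn EUR turnover: 4 % = 40 000 000 EUR > 20 000 000 EUR.
print(fine_cap(1_000_000_000, is_company=True, is_sme=False,
               article5_infringement=True))  # 40000000.0
```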
Amendment 2864 #
Proposal for a regulation
Article 71 – paragraph 6 – point b
Article 71 – paragraph 6 – point b
(b) whether administrative fines have already been applied by other market surveillance authorities of one or more Member States to the same operator for the same infringement;
Amendment 2866 #
Proposal for a regulation
Article 71 – paragraph 6 – point c
Article 71 – paragraph 6 – point c
(c) the size, the annual turnover and market share of the operator committing the infringement;
Amendment 2881 #
Proposal for a regulation
Article 71 – paragraph 8 a (new)
Article 71 – paragraph 8 a (new)
Amendment 2962 #
Proposal for a regulation
Article 83 – paragraph 2
Article 83 – paragraph 2
2. This Regulation shall apply to the high-risk AI systems, other than the ones referred to in paragraph 1, that have been placed on the market or put into service before [date of application of this Regulation referred to in Article 85(2)], only if, from that date, those systems are subject to a substantial modification in their design or intended purpose as defined in Article 3(23).
Amendment 2968 #
Proposal for a regulation
Article 84 – paragraph 1
Article 84 – paragraph 1
1. The Commission shall assess the need for amendment of the list in Annex III once a year following the entry into force of this Regulation. The findings of that assessment shall be presented to the European Parliament and the Council.
Amendment 2972 #
Proposal for a regulation
Article 84 – paragraph 1 a (new)
Article 84 – paragraph 1 a (new)
Amendment 2974 #
Proposal for a regulation
Article 84 – paragraph 3 – point a
Article 84 – paragraph 3 – point a
(a) the status of the financial, technical and human resources of the national competent authorities in order to effectively perform the tasks assigned to them under this Regulation;
Amendment 3001 #
Proposal for a regulation
Article 85 – paragraph 2
Article 85 – paragraph 2
2. This Regulation shall apply from [48 months following the entering into force of the Regulation].
Amendment 3007 #
Proposal for a regulation
Article 85 – paragraph 3 a (new)
Article 85 – paragraph 3 a (new)
3 a. Member States shall not, until ... [24 months after the date of application of this Regulation], impede the making available of AI systems and products which were placed on the market in conformity with Union harmonisation legislation before [the date of application of this Regulation].
Amendment 3008 #
Proposal for a regulation
Article 85 – paragraph 3 b (new)
Article 85 – paragraph 3 b (new)
Amendment 3017 #
Proposal for a regulation
Annex I – point b
Annex I – point b
Amendment 3024 #
Proposal for a regulation
Annex I – point c
Annex I – point c
Amendment 3045 #
Proposal for a regulation
Annex III – title
Annex III – title
Amendment 3059 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a
Annex III – paragraph 1 – point 1 – point a
(a) AI systems intended to be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons, excluding verification/authentication systems whose sole purpose is to confirm that a specific natural person is the person he or she claims to be, and systems that are used to confirm the identity of a natural person for the sole purpose of having access to a service, a device or premises;
Amendment 3066 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a a (new)
Annex III – paragraph 1 – point 1 – point a a (new)
(a a) AI systems intended to be used to make inferences on the basis of biometric data, including emotion recognition systems, or biometrics-based data, including speech patterns, tone of voice, lip-reading and body language analysis, that produce legal effects or affect the rights and freedoms of natural persons.
Amendment 3102 #
Proposal for a regulation
Annex III – paragraph 1 – point 3 – point b
Annex III – paragraph 1 – point 3 – point b
(b) AI systems intended to be used for the purpose of assessing students in educational and vocational training institutions and for assessing participants in tests commonly required for admission to those institutions.
Amendment 3108 #
Proposal for a regulation
Annex III – paragraph 1 – point 4 – point a
Annex III – paragraph 1 – point 4 – point a
(a) AI systems intended to make autonomous decisions or materially influence decisions about recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, evaluating candidates in the course of interviews or tests;
Amendment 3118 #
Proposal for a regulation
Annex III – paragraph 1 – point 4 – point b
Annex III – paragraph 1 – point 4 – point b
(b) AI intended to make autonomous decisions or materially influence decisions on promotion and termination of work-related contractual relationships, for task allocation and for monitoring and evaluating performance and behaviour of persons in such relationships.
Amendment 3136 #
Proposal for a regulation
Annex III – paragraph 1 – point 5 – point b
Annex III – paragraph 1 – point 5 – point b
(b) AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems put into service by SMEs and start-ups for their own use;
Amendment 3154 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point a
Annex III – paragraph 1 – point 6 – point a
(a) AI systems intended to be used by law enforcement authorities or on their behalf for making individual risk assessments of natural persons in order to assess the risk of a natural person for offending or reoffending or the risk for potential victims of criminal offences;
Amendment 3164 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point b
Annex III – paragraph 1 – point 6 – point b
(b) AI systems intended to be used by law enforcement authorities or on their behalf as polygraphs and similar tools or to detect the emotional state of a natural person;
Amendment 3168 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point c
Annex III – paragraph 1 – point 6 – point c
(c) AI systems intended to be used by law enforcement authorities or on their behalf to detect deep fakes as referred to in Article 52(3);
Amendment 3172 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point d
Annex III – paragraph 1 – point 6 – point d
(d) AI systems intended to be used by law enforcement authorities or on their behalf for evaluation of the reliability of evidence in the course of investigation or prosecution of criminal offences;
Amendment 3177 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point e
Annex III – paragraph 1 – point 6 – point e
Amendment 3184 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point f
Annex III – paragraph 1 – point 6 – point f
(f) AI systems intended to be used by law enforcement authorities or on their behalf for profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 in the course of detection, investigation or prosecution of criminal offences;
Amendment 3188 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point g
Annex III – paragraph 1 – point 6 – point g
(g) AI systems intended to be used by law enforcement authorities or on their behalf for crime analytics regarding natural persons, allowing law enforcement authorities to search complex related and unrelated large data sets available in different data sources or in different data formats in order to identify unknown patterns or discover hidden relationships in the data.
Amendment 3195 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point a
Annex III – paragraph 1 – point 7 – point a
(a) AI systems intended to be used by competent public authorities or on their behalf as polygraphs and similar tools or to detect the emotional state of a natural person;
Amendment 3205 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point b
Annex III – paragraph 1 – point 7 – point b
(b) AI systems intended to be used by competent public authorities or on their behalf to assess a risk, including a security risk, a risk of irregular immigration, or a health risk, posed by a natural person who intends to enter or has entered into the territory of a Member State;
Amendment 3207 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point c
Annex III – paragraph 1 – point 7 – point c
(c) AI systems intended to be used by competent public authorities or on their behalf for the verification of the authenticity of travel documents and supporting documentation of natural persons and for the detection of non-authentic documents by checking their security features;
Amendment 3214 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point d
Annex III – paragraph 1 – point 7 – point d
(d) AI systems intended to assist competent public authorities, or to be used on their behalf, in the examination of applications for asylum, visa and residence permits and associated complaints with regard to the eligibility of the natural persons applying for a status.
Amendment 3216 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point d
Annex III – paragraph 1 – point 7 – point d
(d) AI systems intended to be used by competent public authorities for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the eligibility of the natural persons applying for a status.
Amendment 3233 #
Proposal for a regulation
Annex III – paragraph 1 – point 8 – point a
Annex III – paragraph 1 – point 8 – point a
(a) AI systems intended to be used by judicial authorities or on their behalf in interpreting facts or the law and for applying the law to a concrete set of facts.
Amendment 3279 #
Proposal for a regulation
Annex IV – paragraph 1 – point 5
Annex IV – paragraph 1 – point 5
5. A description of relevant changes made by providers to the system through its lifecycle;
Amendment 3281 #
Proposal for a regulation
Annex IV – paragraph 1 – point 6
Annex IV – paragraph 1 – point 6
6. A list of the harmonised standards applied in full or in part, the references of which have been published in the Official Journal of the European Union; where no such harmonised standards have been applied, a detailed description of the solutions adopted to meet the requirements set out in Title III, Chapter 2, including a list of common specifications or other relevant standards and technical specifications applied;
Amendment 3312 #
Proposal for a regulation
Annex IX a (new)
Annex IX a (new)