
182 Amendments of Hélène LAPORTE related to 2021/0106(COD)

Amendment 317 #
Proposal for a regulation
Recital 1
(1) The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, marketing and use of artificial intelligence in conformity with Union values. This Regulation pursues a number of overriding reasons of public interest, such as a high level of protection of health, safety and fundamental rights, and it ensures the free movement of AI-based goods and services cross-border, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation and without prejudice to stricter national legislation governing the protection of fundamental rights.
2022/06/13
Committee: IMCOLIBE
Amendment 323 #
Proposal for a regulation
Recital 2
(2) Artificial intelligence systems (AI systems) can be easily deployed in multiple sectors of the economy and society, including cross border, and circulate throughout the Union. Certain Member States have already explored the adoption of national rules to ensure that artificial intelligence is safe and is developed and used in compliance with fundamental rights obligations. Differing national rules may lead to fragmentation of the internal market and decrease legal certainty for operators that develop or use AI systems. A minimum, consistent and high level of protection throughout the Union should therefore be ensured, while divergences hampering the free circulation of AI systems and related products and services within the internal market should be prevented, by laying down uniform obligations for operators and guaranteeing the uniform protection of overriding reasons of public interest and of rights of persons throughout the internal market based on Article 114 of the Treaty on the Functioning of the European Union (TFEU). To the extent that this Regulation contains specific rules on the protection of individuals with regard to the processing of personal data concerning restrictions of the use of AI systems for ‘real-time’ remote biometric identification in publicly accessible spaces for the purpose of law enforcement, it is appropriate to base this Regulation, in as far as those specific rules are concerned, on Article 16 of the TFEU. In light of those specific rules and the recourse to Article 16 TFEU, it is appropriate to consult the European Data Protection Board.
2022/06/13
Committee: IMCOLIBE
Amendment 361 #
Proposal for a regulation
Recital 6
(6) The notion of AI system should be clearly defined to ensure legal certainty, while providing the flexibility to accommodate future technological developments. The definition should be based on the key functional characteristics of the software, in particular the ability, for a given set of objectives or parameters which have human control at their origin, to generate outputs such as content, predictions, recommendations, or decisions which influence the environment with which the system interacts, be it in a physical or digital dimension. AI systems can be designed to operate with varying levels of autonomy and be used on a stand-alone basis or as a component of a product, irrespective of whether the system is physically integrated into the product (embedded) or serve the functionality of the product without being integrated therein (non-embedded). The definition of AI system should be complemented by a list of specific techniques and approaches used for its development, which should be kept up-to-date in the light of market and technological developments through the adoption of delegated acts by the Commission to amend that list. These delegated acts should consist only of additions to the list of techniques used.
2022/06/13
Committee: IMCOLIBE
Amendment 381 #
Proposal for a regulation
Recital 9
(9) For the purposes of this Regulation the notion of publicly accessible space should be understood as referring to any physical place that is accessible to the public, irrespective of whether the place in question is privately or publicly owned. Therefore, the notion does not cover places that are private in nature and normally not freely accessible for third parties, including law enforcement authorities, unless those parties have been specifically invited or authorised, such as homes, private clubs, offices, warehouses and factories. Online spaces are not covered either, as they are not physical spaces. However, the mere fact that certain conditions for accessing a particular space may apply, such as admission tickets or age restrictions, does not mean that the space is not publicly accessible within the meaning of this Regulation. Consequently, in addition to public spaces such as streets, relevant parts of government buildings and most transport infrastructure, spaces such as cinemas, theatres, shops and shopping centres are normally also publicly accessible. Whether a given space is accessible to the public should however be determined on a case-by-case basis by the competent judicial or administrative authority, having regard to the specificities of the individual situation at hand.
2022/06/13
Committee: IMCOLIBE
Amendment 389 #
Proposal for a regulation
Recital 11
(11) In light of their digital nature, certain AI systems should fall within the scope of this Regulation even when they are neither placed on the market, nor put into service, nor used in the Union. This is the case for example of an operator established in the Union that contracts certain services to an operator established outside the Union in relation to an activity to be performed by an AI system that would qualify as high-risk and whose effects impact natural persons located in the Union. In those circumstances, the AI system used by the operator outside the Union could process data lawfully collected in and transferred from the Union, and provide to the contracting operator in the Union the output of that AI system resulting from that processing, without that AI system being placed on the market, put into service or used in the Union. To prevent the circumvention of this Regulation and to ensure an effective protection of natural persons located in the Union, this Regulation should also apply to providers and users of AI systems that are established in a third country, to the extent the output produced by those systems is used in the Union. Nonetheless, to take into account existing arrangements and special needs for cooperation with foreign partners with whom information and evidence is exchanged, this Regulation should not apply to public authorities of a third country and international organisations when acting in the framework of international agreements concluded at national or European level for law enforcement and judicial cooperation with the Union or with its Member States. Such agreements have been concluded bilaterally between Member States and third countries or between the European Union, Europol and other EU agencies and third countries and international organisations.
2022/06/13
Committee: IMCOLIBE
Amendment 398 #
Proposal for a regulation
Recital 12
(12) This Regulation should also apply to the institutions, bodies, offices and agencies of the Union. AI systems exclusively developed or used for military purposes should be excluded from the scope of this Regulation where that use falls under the exclusive remit of the Common Foreign and Security Policy regulated under Title V of the Treaty on the European Union (TEU). This Regulation should be without prejudice to the provisions regarding the liability of intermediary service providers set out in Directive 2000/31/EC of the European Parliament and of the Council [as amended by the Digital Services Act].
2022/06/13
Committee: IMCOLIBE
Amendment 410 #
Proposal for a regulation
Recital 13
(13) In order to ensure a consistent and high level of protection of public interests as regards health, safety and fundamental rights, minimum common normative standards for all high-risk AI systems should be established. Those standards should be consistent with the Charter of fundamental rights of the European Union (the Charter) and should be non-discriminatory and in line with the Union’s international trade commitments.
2022/06/13
Committee: IMCOLIBE
Amendment 412 #
Proposal for a regulation
Recital 14
(14) In order to introduce a proportionate and effective set of binding rules for AI systems, a clearly defined risk-based approach should be followed. That approach should tailor the type and content of such rules to the intensity and scope of the risks that AI systems can generate. It is therefore necessary to prohibit certain artificial intelligence practices, to lay down requirements for high-risk AI systems and obligations for the relevant operators, and to lay down transparency obligations for certain AI systems. It is also necessary to establish the criteria and conditions which determine the category to which an AI system belongs.
2022/06/13
Committee: IMCOLIBE
Amendment 416 #
Proposal for a regulation
Recital 15
(15) Aside from the many beneficial uses of artificial intelligence, that technology can also be misused and provide novel and powerful tools for manipulative, exploitative and social control practices. Such practices are particularly harmful and should be prohibited because they contradict the values of respect for human dignity, freedom, equality, democracy and the rule of law, which are protected values under EU law, and Union fundamental rights, including the right to non-discrimination, data protection and privacy and the rights of the child.
2022/06/13
Committee: IMCOLIBE
Amendment 426 #
Proposal for a regulation
Recital 16
(16) The placing on the market, putting into service or use of certain AI systems intended to distort human behaviour, whereby physical or psychological harms are likely to occur, should be forbidden. Such AI systems deploy subliminal components individuals cannot perceive or exploit vulnerabilities of people such as children or people who are vulnerable due to their age, physical or mental incapacities, or other traits. Research for legitimate purposes in relation to such AI systems should not be stifled by the prohibition, if such research does not amount to use of the AI system in human-machine relations with uninformed or non-consenting third parties that exposes natural persons to harm and such research is carried out in accordance with recognised ethical standards for scientific research.
2022/06/13
Committee: IMCOLIBE
Amendment 432 #
Proposal for a regulation
Recital 17
(17) AI systems providing social scoring of natural persons for general purpose by public authorities or on their behalf are, by definition, discriminatory. They may violate the right to dignity and non-discrimination and the values of equality and justice. Such AI systems evaluate or classify the trustworthiness of natural persons based on their social behaviour in multiple contexts or known or predicted personal or personality characteristics. The social score obtained from such AI systems leads to the detrimental or unfavourable treatment of natural persons or whole groups thereof in social contexts, which are unrelated to the context in which the data was originally generated or collected or to a detrimental treatment that is disproportionate or unjustified to the gravity of their social behaviour. Such AI systems should be therefore prohibited.
2022/06/13
Committee: IMCOLIBE
Amendment 455 #
Proposal for a regulation
Recital 18
(18) The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement is considered particularly intrusive in the rights and freedoms of the concerned persons, to the extent that it affects the private life of a large part of the population, constitutes constant surveillance and indirectly dissuades the exercise of the freedom of assembly and other fundamental rights. In addition, the immediacy of the impact and the limited opportunities for further checks or corrections in relation to the use of such systems operating in ‘real-time’ carry heightened risks for the rights and freedoms of the persons that are concerned by law enforcement activities.
2022/06/13
Committee: IMCOLIBE
Amendment 471 #
Proposal for a regulation
Recital 19
(19) The use of those systems for the purpose of law enforcement should therefore be prohibited, except in three exhaustively listed and narrowly defined situations, where the use is ad hoc and strictly necessary to achieve a substantial public interest, the importance of which outweighs the risks. Those situations involve the search for potential victims of crime, including missing children; certain threats to the life or physical safety of natural persons or of a terrorist attack; and the detection, localisation, identification or prosecution of perpetrators or suspects of the criminal offences if they are punishable by a custodial sentence or a detention order for a maximum period of at least ten years in the Member State concerned. Such threshold for the custodial sentence or detention order in accordance with national law contributes to ensure that the offence should be serious enough to potentially justify the use of ‘real-time’ remote biometric identification systems. The nature of the offences deemed sufficiently serious to justify a penalty up to this threshold is a matter for the national legislation of each Member State in accordance with its own criminal law.
2022/06/13
Committee: IMCOLIBE
Amendment 505 #
Proposal for a regulation
Recital 23
(23) The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement necessarily involves the processing of biometric data. The rules of this Regulation that prohibit, subject to certain exceptions, such use, which are based on Article 16 TFEU, should apply as lex specialis in respect of the rules on the processing of biometric data contained in Article 10 of Directive (EU) 2016/680, thus regulating such use and the processing of biometric data involved in an exhaustive manner. Therefore, such use and processing should only be possible in as far as it is compatible with the framework set by this Regulation, without there being scope, outside that framework, for the competent authorities, where they act for purpose of law enforcement, to use such systems and process such data in connection thereto on the grounds listed in Article 10 of Directive (EU) 2016/680. In this context, this Regulation is not intended to provide the legal basis for the processing of personal data under Article 8 of Directive 2016/680. The use of biometric identification systems, including ‘real-time’ remote biometric identification systems, in publicly accessible spaces for purposes other than law enforcement, including by competent authorities, should not be covered by the specific framework regarding such use for the purpose of law enforcement set by this Regulation, with the exception of customs formalities and individual authentication.
2022/06/13
Committee: IMCOLIBE
Amendment 526 #
Proposal for a regulation
Recital 27
(27) High-risk AI systems should only be placed on the Union market or put into service if they comply with certain mandatory requirements. Those requirements should ensure that high-risk AI systems available in the Union or whose output is otherwise used in the Union do not pose unacceptable risks to important Union public interests as recognised and protected by Union law. AI systems identified as high-risk should be limited to those that have a significant harmful impact on the health, safety and fundamental rights of persons in the Union, as well as the public order and national security of the Member States, and such limitation minimises any potential restriction to international trade, if any.
2022/06/13
Committee: IMCOLIBE
Amendment 536 #
Proposal for a regulation
Recital 31
(31) The classification of an AI system as high-risk pursuant to this Regulation should not necessarily mean that the product whose safety component is the AI system, or the AI system itself as a product, is considered ‘high-risk’ under the criteria established in the relevant Union harmonisation legislation that applies to the product. This is notably the case for Regulation (EU) 2017/745 of the European Parliament and of the Council47 and Regulation (EU) 2017/746 of the European Parliament and of the Council48, where a third-party conformity assessment is provided for medium-risk and high-risk products. However, the classification of an AI system as high risk for the sole purpose of this Regulation will apply to all products which use that AI system or which are themselves AI systems, irrespective of their classification under the sector-specific harmonisation legislation of the Union under which they are otherwise covered. _________________ 47 Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC (OJ L 117, 5.5.2017, p. 1). 48 Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro diagnostic medical devices and repealing Directive 98/79/EC and Commission Decision 2010/227/EU (OJ L 117, 5.5.2017, p. 176).
2022/06/13
Committee: IMCOLIBE
Amendment 555 #
Proposal for a regulation
Recital 34
(34) It is appropriate to classify as high-risk the AI systems intended to be used as safety components in the management and operation of critical infrastructure such as road traffic or the supply of water, gas, heating and electricity, since their failure or malfunctioning may put at risk the life and health of persons at large scale and lead to appreciable disruptions in the ordinary conduct of social and economic activities.
2022/06/13
Committee: IMCOLIBE
Amendment 558 #
Proposal for a regulation
Recital 35
(35) AI systems used in education or vocational training, notably for determining access or assigning persons to educational and vocational training institutions or to evaluate persons on tests as part of or as a precondition for their education should be considered high-risk, since they may determine the educational and professional course of a person’s life and therefore affect their ability to secure their livelihood. When improperly designed and used, such systems may violate the right to education and training as well as the right not to be discriminated against and perpetuate historical patterns of discrimination.
2022/06/13
Committee: IMCOLIBE
Amendment 562 #
Proposal for a regulation
Recital 36
(36) AI systems used in employment, workers management and access to self-employment, notably for the recruitment and selection of persons, for making decisions on promotion and termination and for task allocation, monitoring or evaluation of persons in work-related contractual relationships, should also be classified as high-risk, in so far as such use does not correspond to practices prohibited by this Regulation, since those systems may appreciably impact future career prospects and livelihoods of these persons. Relevant work-related contractual relationships should involve employees and persons providing services through platforms as referred to in the Commission Work Programme 2021. Such persons should in principle not be considered users within the meaning of this Regulation. Throughout the recruitment process and in the evaluation, promotion, or retention of persons in work-related contractual relationships, such systems may lead to discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. AI systems used to monitor the performance and behaviour of these persons may also impact their rights to data protection and privacy.
2022/06/13
Committee: IMCOLIBE
Amendment 575 #
Proposal for a regulation
Recital 37
(37) Another area in which the use of AI systems deserves special consideration is the access to and enjoyment of certain essential private and public services and benefits necessary for people to fully participate in society or to improve one’s standard of living. In particular, AI systems used to evaluate the credit score or creditworthiness of natural persons should be classified as high-risk AI systems, in so far as such use does not correspond to practices prohibited by this Regulation, since they determine those persons’ access to financial resources or essential services such as housing, electricity, and telecommunication services. AI systems used for this purpose may lead to discrimination of persons or groups and perpetuate historical patterns of discrimination, for example based on racial or ethnic origins, disabilities, age, sexual orientation, or create new forms of discriminatory impacts. Considering the very limited scale of the impact and the available alternatives on the market, it is appropriate to exempt AI systems for the purpose of creditworthiness assessment and credit scoring when put into service by small-scale providers for their own use. Natural persons applying for or receiving public assistance benefits and services from public authorities are typically dependent on those benefits and services and in a vulnerable position in relation to the responsible authorities. If AI systems are used for determining whether such benefits and services should be denied, reduced, revoked or reclaimed by authorities, they will have a significant impact on persons’ livelihood and will infringe their fundamental rights, such as the right to social protection, non-discrimination, human dignity or an effective remedy. Those systems should therefore be classified as high-risk. Nonetheless, this Regulation should allow for experimentation in the public administration, in a regulatory sandbox, with innovative approaches which would stand to benefit from a wider use of compliant and safe AI systems, in accordance with the established rules. Finally, AI systems used to dispatch or establish priority in the dispatching of emergency first response services should be prohibited as they make decisions in very critical situations for the life and health of persons and their property, and such ethical choices should not be given over to computer systems.
2022/06/13
Committee: IMCOLIBE
Amendment 580 #
Proposal for a regulation
Recital 38
(38) Actions by law enforcement authorities involving certain uses of AI systems are characterised by a significant degree of power imbalance and may lead to surveillance, arrest or deprivation of a natural person’s liberty as well as other adverse impacts on fundamental rights guaranteed in the Charter. In particular, if the AI system is not trained with high quality data, does not meet adequate requirements in terms of its accuracy or robustness, or is not properly designed and tested before being put on the market or otherwise put into service, it may single out people in a discriminatory or otherwise incorrect or unjust manner. Furthermore, the exercise of important procedural fundamental rights, such as the right to an effective remedy and to a fair trial as well as the right of defence and the presumption of innocence, could be hampered, in particular, where such AI systems are not sufficiently transparent, explainable and documented. AI systems intended to assess or rank the reliability of natural persons, to identify natural persons based on biometric data, to serve as polygraphs or similar tools, to detect the emotional state of natural persons, to predict the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons or to assess personality traits of natural persons or groups for profiling in the course of detection, investigation or prosecution of criminal offences, shall be prohibited except in the three specific cases provided for in this Regulation. AI systems other than the aforementioned and intended to be used in a law enforcement context where accuracy, reliability and transparency is particularly important shall be classed as high-risk AI systems to avoid adverse impacts, retain public trust and ensure accountability and effective redress. In view of the nature of the activities in question and the risks relating thereto, those high-risk AI systems should include in particular AI systems intended to be used by law enforcement authorities to detect ‘deep fakes’, for the evaluation of the reliability of evidence in criminal proceedings, or assessing characteristics or past criminal behaviour of natural persons or groups, as well as for crime analytics regarding natural persons. AI systems specifically intended to be used for administrative proceedings by tax and customs authorities should not be considered high-risk AI systems used by law enforcement authorities for the purposes of prevention, detection, investigation and prosecution of criminal offences.
2022/06/13
Committee: IMCOLIBE
Amendment 590 #
Proposal for a regulation
Recital 39
(39) AI systems used in migration, asylum and border control management affect people who are sometimes in a vulnerable position and who are dependent on the outcome of the actions of the competent public authorities. The accuracy, non-discriminatory nature and transparency of the AI systems used in those contexts are therefore particularly important to guarantee the respect of the fundamental rights of the affected persons, notably, and where applicable, their rights to free movement, non-discrimination, protection of private life and personal data, international protection and good administration. It is therefore appropriate to classify as high-risk AI systems intended to be used by the competent public authorities charged with tasks in the fields of migration, asylum and border control management as polygraphs and similar tools or to detect the emotional state of a natural person; for assessing certain risks posed by natural persons entering the territory of a Member State or applying for visa or asylum; for verifying the authenticity of the relevant documents of natural persons; for assisting competent public authorities for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the objective to establish the eligibility of the natural persons applying for a status. AI systems in the area of migration, asylum and border control management covered by this Regulation should comply with the relevant procedural requirements set by the Directive 2013/32/EU of the European Parliament and of the Council49, the Regulation (EC) No 810/2009 of the European Parliament and of the Council50 and other relevant legislation. _________________ 49 Directive 2013/32/EU of the European Parliament and of the Council of 26 June 2013 on common procedures for granting and withdrawing international protection (OJ L 180, 29.6.2013, p. 60). 50 Regulation (EC) No 810/2009 of the European Parliament and of the Council of 13 July 2009 establishing a Community Code on Visas (Visa Code) (OJ L 243, 15.9.2009, p. 1).
2022/06/13
Committee: IMCOLIBE
Amendment 598 #
Proposal for a regulation
Recital 40
(40) Certain AI systems intended for the administration of justice and democratic processes should be prohibited, considering their potentially significant impact on democracy, rule of law, individual freedoms as well as the right to an effective remedy and to a fair trial. In particular, to address the risks of potential biases, errors and opacity, it is appropriate to prohibit the use of AI systems intended to assist judicial authorities in researching and interpreting facts and the law and in applying the law to a concrete set of facts. Such qualification should not extend, however, to AI systems intended for purely ancillary administrative activities that do not affect the actual administration of justice in individual cases, such as anonymisation or pseudonymisation of judicial decisions, documents or data, communication between personnel, administrative tasks or allocation of resources.
2022/06/13
Committee: IMCOLIBE
Amendment 624 #
Proposal for a regulation
Recital 44
(44) High data quality is essential for the performance of many AI systems, especially when techniques involving the training of models are used, with a view to ensure that the high-risk AI system performs as intended and safely and it does not become the source of discrimination prohibited by Union law. High quality training, validation and testing data sets require the implementation of appropriate data governance and management practices. Training, validation and testing data sets should be sufficiently relevant, representative and free of errors and complete in view of the intended purpose of the system. They should also have the appropriate statistical properties, including as regards the persons or groups of persons on which the high-risk AI system is intended to be used. In particular, training, validation and testing data sets should take into account, to the extent required in the light of their intended purpose, the features, characteristics or elements that are particular to the specific geographical, behavioural or functional setting or context within which the AI system is intended to be used. In order to protect the right of others from the discrimination that might result from the bias in AI systems, the providers should be able to process also special categories of personal data, as a matter of substantial public interest, in order to ensure the bias monitoring, detection and correction in relation to high-risk AI systems.
2022/06/13
Committee: IMCOLIBE
Amendment 632 #
Proposal for a regulation
Recital 45
(45) For the development of high-risk AI systems, certain actors, such as providers, notified bodies and other relevant entities, such as digital innovation hubs, testing experimentation facilities and researchers, should be able to access and use high quality datasets within their respective fields of activities which are related to this Regulation. European common data spaces established by the Commission, developed and operated by European actors and which do not transfer any data outside the territory or legal jurisdiction of the European Union, and the facilitation of data sharing between businesses and with government in the public interest will be instrumental to provide trustful, accountable and non-discriminatory access to high quality data for the training, validation and testing of AI systems. For example, in health, the European health data space will facilitate non-discriminatory access to health data and the training of artificial intelligence algorithms on those datasets, in a privacy-preserving, secure, timely, transparent and trustworthy manner, and with an appropriate institutional governance. Relevant competent authorities, including sectoral ones, providing or supporting the access to data may also support the provision of high-quality data for the training, validation and testing of AI systems.
2022/06/13
Committee: IMCOLIBE
Amendment 642 #
Proposal for a regulation
Recital 48
(48) High-risk AI systems should be designed and developed in such a way that natural persons can actually oversee their functioning. For this purpose, appropriate human oversight measures should be identified by the provider of the system before its placing on the market or putting into service. In particular, where appropriate, such measures should guarantee that the system is subject to in-built operational constraints that cannot be overridden by the system itself, that it cannot make decisions without approval by the human operator, that it is responsive to the human operator, and that the natural persons to whom human oversight has been assigned have the necessary competence, training and authority to carry out that role.
2022/06/13
Committee: IMCOLIBE
Amendment 646 #
Proposal for a regulation
Recital 49
(49) High-risk AI systems should perform consistently throughout their lifecycle and meet an appropriate level of accuracy, robustness and cybersecurity in accordance with the generally acknowledged state of the art. The level of accuracy and accuracy metrics should be defined by standards or common technical specifications and communicated to the users. The European Commission should be able to decide on such standards or common technical specifications or to adopt existing ones developed by third parties such as suppliers, stakeholders or standardisation bodies.
2022/06/13
Committee: IMCOLIBE
Amendment 654 #
Proposal for a regulation
Recital 51
(51) Cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts to alter their use, behaviour, performance or compromise their security properties by malicious third parties exploiting the system’s vulnerabilities. Cyberattacks against AI systems can leverage AI specific assets, such as training data sets (e.g. data poisoning) or trained models (e.g. adversarial attacks), or exploit vulnerabilities in the AI system’s digital assets or the underlying ICT infrastructure. To ensure a level of cybersecurity appropriate to the risks, suitable measures should therefore be taken by the providers of high-risk AI systems, also taking into account as appropriate the underlying ICT infrastructure.
2022/06/13
Committee: IMCOLIBE
Amendment 655 #
Proposal for a regulation
Recital 53
(53) It is appropriate that a specific natural or legal person, defined as the provider, takes the responsibility for the placing on the market or putting into service of a high-risk AI system, regardless of whether that natural or legal person is the person who designed or developed the system, without prejudice to the right of a provider to take action against the manufacturer of that system.
2022/06/13
Committee: IMCOLIBE
Amendment 664 #
Proposal for a regulation
Recital 58
(58) Given the nature of AI systems and the risks to safety and fundamental rights possibly associated with their use, including as regards the need to ensure proper monitoring of the performance of an AI system in a real-life setting, it is appropriate to set specific responsibilities for users. Users should in particular use high-risk AI systems for the purpose for which they were intended and in accordance with the instructions of use; to that end, high-risk AI systems should structurally limit, to the greatest extent possible, the technical possibility for a user to use these AI systems in another way. Certain other obligations should be provided for with regard to monitoring of the functioning of the AI systems and with regard to record-keeping, as appropriate.
2022/06/13
Committee: IMCOLIBE
Amendment 669 #
Proposal for a regulation
Recital 59
(59) It is appropriate to envisage that the user of the AI system should be the natural or legal person, public authority, agency or other body under whose authority the AI system is operated except where the use is made in the course of a personal non-professional activity.
2022/06/13
Committee: IMCOLIBE
Amendment 673 #
Proposal for a regulation
Recital 61
(61) Standardisation should play a key role to provide technical solutions to providers to ensure compliance with this Regulation, in particular as regards the levels and metrics of accuracy and robustness for high-risk AI systems. The Commission should be able to adopt common technical specifications in areas where no harmonised standards exist or where they are insufficient. The Commission should also be able to adopt standards or common technical specifications developed by third parties such as suppliers, stakeholders or standardisation bodies. Compliance with the common technical specifications adopted by the Commission should be a means for suppliers to demonstrate compliance with the requirements of this Regulation. Compliance with other harmonised standards set out in Regulation (EU) No 1025/2012 of the European Parliament and of the Council54 should also help to demonstrate suppliers’ compliance with the requirements of this Regulation, without having the same probative value as the common technical specifications adopted by the Commission. _________________ 54 Regulation (EU) No 1025/2012 of the European Parliament and of the Council of 25 October 2012 on European standardisation, amending Council Directives 89/686/EEC and 93/15/EEC and Directives 94/9/EC, 94/25/EC, 95/16/EC, 97/23/EC, 98/34/EC, 2004/22/EC, 2007/23/EC, 2009/23/EC and 2009/105/EC of the European Parliament and of the Council and repealing Council Decision 87/95/EEC and Decision No 1673/2006/EC of the European Parliament and of the Council (OJ L 316, 14.11.2012, p. 12).
2022/06/13
Committee: IMCOLIBE
Amendment 680 #
Proposal for a regulation
Recital 63
(63) It is appropriate that, in order to minimise the burden on operators and avoid any possible duplication, for high- risk AI systems related to products which are covered by existing Union harmonisation legislation following the New Legislative Framework approach, the compliance of those AI systems with the requirements of this Regulation should be assessed as part of the conformity assessment already foreseen under that legislation. The applicability of the requirements of this Regulation should thus not affect the specific logic, methodology or general structure of conformity assessment under the relevant specific New Legislative Framework legislation. This approach is fully reflected in the interplay between this Regulation and the [Machinery Regulation]. While safety risks of AI systems ensuring safety functions in machinery are addressed by the requirements of this Regulation, certain specific requirements in the [Machinery Regulation] will ensure the safe integration of the AI system into the overall machinery, so as not to compromise the safety of the machinery as a whole. The [Machinery Regulation] applies the same definition of AI system as this Regulation. However, should this Regulation and another legislative act of the European Union both cover the same product or component of a product and provide diverging definitions or impose different safety requirements, the applicable text shall be the one with the definition or safety requirements offering the best protection for people, Member States, society and fundamental rights.
2022/06/13
Committee: IMCOLIBE
Amendment 682 #
Proposal for a regulation
Recital 64
(64) Given the more extensive experience of professional pre-market certifiers in the field of product safety and the different nature of risks involved, it is appropriate to allow them to carry out a conformity assessment for AI systems, including high-risk AI systems, as qualified bodies, to the extent that these systems are not prohibited.
2022/06/13
Committee: IMCOLIBE
Amendment 692 #
Proposal for a regulation
Recital 66
(66) In line with the commonly established notion of substantial modification for products regulated by Union harmonisation legislation, it is appropriate that an AI system undergoes a new conformity assessment whenever a change occurs which may affect the compliance of the system with this Regulation or when the intended purpose of the system changes. In addition, as regards AI systems which continue to ‘learn’ after being placed on the market or put into service (i.e. they automatically adapt how functions are carried out), it is necessary to provide rules establishing that changes to the algorithm and its performance that constitute substantial modifications are subject to new conformity assessments, including in cases where the substantial modifications have been pre-determined by the provider and assessed at the moment of the initial conformity assessment.
2022/06/13
Committee: IMCOLIBE
Amendment 715 #
Proposal for a regulation
Recital 70
(70) Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems. In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. Moreover, natural persons should be notified when they are exposed to an emotion recognition system or a biometric categorisation system. Such information and notifications should be provided in accessible formats for persons with disabilities. Further, AI systems used to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a person to be authentic should systematically contain an indication on the content generated that the content has been artificially created or manipulated, and users who use such AI systems or reuse the content generated should not be allowed to remove or conceal that indication.
2022/06/13
Committee: IMCOLIBE
Amendment 731 #
Proposal for a regulation
Recital 73
(73) In order to promote and protect innovation, it is important that the interests of small-scale providers and users of AI systems are taken into particular account. To this objective, Member States should develop initiatives, which are targeted at those operators, including on awareness raising and information communication. Moreover, the specific interests and needs of small-scale providers shall be taken into account when Notified Bodies set conformity assessment fees. Translation costs related to mandatory documentation and communication with authorities may constitute a significant cost for providers and other operators, notably those of a smaller scale. Member States should possibly ensure that one of the languages determined and accepted by them for relevant providers’ documentation and for communication with operators is one which is broadly understood by the largest possible number of cross-border users.
2022/06/13
Committee: IMCOLIBE
Amendment 747 #
Proposal for a regulation
Recital 78
(78) In order to ensure that providers of high-risk AI systems can take into account the experience on the use of high-risk AI systems for improving their systems and the design and development process or can take any possible corrective action in a timely manner, all providers should have a post-market monitoring system in place. In view of the sensitive nature of high-risk AI systems, this post-market monitoring system should not be able to automatically send data or error reports to the supplier via the AI system. This system is also key to ensure that the possible risks emerging from AI systems which continue to ‘learn’ after being placed on the market or put into service can be more efficiently and timely addressed. In this context, providers should also be required to have a system in place to report to the relevant authorities any serious incidents or any breaches to national and Union law protecting fundamental rights resulting from the use of their AI systems.
2022/06/13
Committee: IMCOLIBE
Amendment 769 #
Proposal for a regulation
Recital 85
(85) In order to ensure that the regulatory framework can be adapted where necessary, the power to adopt acts in accordance with Article 290 TFEU should be delegated to the Commission to amend the techniques and approaches referred to in Annex I to define AI systems, the Union harmonisation legislation listed in Annex II, the high-risk AI systems listed in Annex III, the provisions regarding technical documentation listed in Annex IV, the content of the EU declaration of conformity in Annex V, the provisions regarding the conformity assessment procedures in Annex VI and VII and the provisions establishing the high-risk AI systems to which the conformity assessment procedure based on assessment of the quality management system and assessment of the technical documentation should apply. As the purpose of delegating that power is to allow this Regulation to be adapted to technical advancements, the Commission should only be able to adopt such delegated acts to include non-restrictive additions or clarifications in the lists in those Annexes, whereas deletions, restrictive clarifications or amendments to the definitions of the items in those Annexes should only result from the adoption of amending regulations. It is of particular importance that the Commission carry out appropriate consultations during its preparatory work, including at expert level, and that those consultations be conducted in accordance with the principles laid down in the Interinstitutional Agreement of 13 April 2016 on Better Law-Making58. In particular, to ensure equal participation in the preparation of delegated acts, the European Parliament and the Council receive all documents at the same time as Member States’ experts, and their experts systematically have access to meetings of Commission expert groups dealing with the preparation of delegated acts. _________________ 58 OJ L 123, 12.5.2016, p. 1.
2022/06/13
Committee: IMCOLIBE
Amendment 785 #
Proposal for a regulation
Article 1 – paragraph 1 – point a
(a) harmonised minimum rules for the development of human-centric AI in the Union through the placing on the market, the putting into service and the use of artificial intelligence systems (‘AI systems’) in the Union;
2022/06/13
Committee: IMCOLIBE
Amendment 877 #
Proposal for a regulation
Article 2 – paragraph 4
4. This Regulation shall not apply to public authorities in a third country nor to international organisations falling within the scope of this Regulation pursuant to paragraph 1, where those authorities or organisations use AI systems in the framework of international agreements for law enforcement and judicial cooperation with the Union or with one or more Member States.
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 915 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1
(1) ‘artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of objectives or parameters subject to human command, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with;
2022/06/13
Committee: IMCOLIBE
Amendment 922 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1 a (new)
(1a) ‘human-centric AI’ means an approach which strives to ensure that human values are central to the development, deployment, use and monitoring of AI systems, by ensuring respect for fundamental rights, including those set out in the Treaties of the European Union and the Charter of Fundamental Rights of the European Union, all of which are united by reference to a common foundation rooted in respect for human dignity, in which every human being enjoys a unique and inalienable moral status, which also entails consideration of the natural environment and of other living beings that are part of the human ecosystem, as well as a sustainable approach enabling the flourishing of future generations;
2022/06/13
Committee: IMCOLIBE
Amendment 941 #
Proposal for a regulation
Article 3 – paragraph 1 – point 4
(4) ‘user’ means any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity;
2022/06/13
Committee: IMCOLIBE
Amendment 996 #
Proposal for a regulation
Article 3 – paragraph 1 – point 18 a (new)
(18a) ‘lifecycle of AI’ means the process of developing, deploying and using an AI system, including the research, design, data supply, training, limited-scale deployment, implementation and withdrawal stages;
2022/06/13
Committee: IMCOLIBE
Amendment 1006 #
Proposal for a regulation
Article 3 – paragraph 1 – point 23
(23) ‘substantial modification’ means a change, including a change based on ‘learning’, to the AI system following its placing on the market or putting into service which affects the compliance of the AI system with the requirements set out in Title III, Chapter 2 of this Regulation or results in a modification to the intended purpose for which the AI system has been assessed;
2022/06/13
Committee: IMCOLIBE
Amendment 1011 #
Proposal for a regulation
Article 3 – paragraph 1 – point 25
(25) ‘post-market monitoring’ means all activities carried out by providers of AI systems to proactively collect and review experience gained from the use of AI systems they place on the market or put into service for the purpose of identifying any need to immediately apply any necessary corrective or preventive actions, whereby such activities may not consist in the AI system automatically sending data or error reports to the provider;
2022/06/13
Committee: IMCOLIBE
Amendment 1013 #
Proposal for a regulation
Article 3 – paragraph 1 – point 28 a (new)
(28a) ‘sandbox’, in connection with the development of AI systems, means an isolated operating and experimental environment enabling certain actions to be carried out using an AI system while protecting the user from any harm resulting from computer bias, damage or compromise;
2022/06/13
Committee: IMCOLIBE
Amendment 1024 #
Proposal for a regulation
Article 3 – paragraph 1 – point 33
(33) ‘biometric data’ means personal data resulting from specific technical processing relating to the physical or physiological characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data;
2022/06/13
Committee: IMCOLIBE
Amendment 1034 #
Proposal for a regulation
Article 3 – paragraph 1 – point 34
(34) ‘emotion recognition system’ means an AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data or behavioural data or by means of biological or brain implants;
2022/06/13
Committee: IMCOLIBE
Amendment 1042 #
Proposal for a regulation
Article 3 – paragraph 1 – point 35
(35) ‘biometric categorisation system’ means an AI system for the purpose of assigning natural persons to specific categories, such as sex, age, hair colour, eye colour, tattoos, health, ethnic origin or sexual or political orientation, on the basis of their biometric data;
2022/06/13
Committee: IMCOLIBE
Amendment 1060 #
Proposal for a regulation
Article 3 – paragraph 1 – point 36
(36) ‘remote biometric identification system’ means an AI system for the purpose, after a unique process, of identifying natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database, and without prior knowledge of the user of the AI system whether the person will be present and can be identified;
2022/06/13
Committee: IMCOLIBE
Amendment 1066 #
Proposal for a regulation
Article 3 – paragraph 1 – point 38
(38) ‘‘post’ remote biometric identification system’ means a remote biometric identification system other than a ‘real-time’ remote biometric identification system, regardless of whether the acquired data is hosted in a separate system prior to the comparison and identification;
2022/06/13
Committee: IMCOLIBE
Amendment 1070 #
Proposal for a regulation
Article 3 – paragraph 1 – point 40 – introductory part
(40) ‘law enforcement authority’ means: any public authority competent for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security;
2022/06/13
Committee: IMCOLIBE
Amendment 1071 #
Proposal for a regulation
Article 3 – paragraph 1 – point 40 – point a
(a) any public authority competent for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security; or
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1073 #
Proposal for a regulation
Article 3 – paragraph 1 – point 40 – point b
(b) any other body or entity entrusted by Member State law to exercise public authority and public powers for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security;
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1082 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – introductory part
(44) ‘serious incident’ means any incident or malfunctioning that directly or indirectly leads, might have led or might lead to any of the following:
2022/06/13
Committee: IMCOLIBE
Amendment 1087 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – point a
(a) the death of a person or serious damage to a person’s health or wealth, to property or the environment,
2022/06/13
Committee: IMCOLIBE
Amendment 1093 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – point b
(b) a serious and irreversible disruption of the management and operation of critical infrastructure,
2022/06/13
Committee: IMCOLIBE
Amendment 1094 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – point b a (new)
(ba) a breach of obligations under national law or Union law intended to protect fundamental rights.
2022/06/13
Committee: IMCOLIBE
Amendment 1102 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 a (new)
(44a) ‘bias’ means any inclination or prejudice towards or against a person, object or position, whether voluntary or involuntary, that may arise as a result of the design, data supply, interactions, personalisation or configuration of an AI system;
2022/06/13
Committee: IMCOLIBE
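As an illustrative aside on how a bias of the kind defined here might be detected in a system's outputs — a common fairness metric chosen by the editor, not one the amendment prescribes, with invented data — one can compare per-group selection rates:

def selection_rate(outcomes):
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_by_group):
    """Absolute gap between the highest and lowest per-group selection rates."""
    rates = [selection_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical binary decisions (1 = favourable outcome) recorded per group.
decisions = {"group_x": [1, 1, 0, 1], "group_y": [0, 1, 0, 0]}
print(demographic_parity_gap(decisions))  # -> 0.5, a large gap worth investigating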
Amendment 1113 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 b (new)
(44b) ‘auditability’ means the ability of an AI system to undergo an assessment of the system’s algorithms, data and design processes;
2022/06/13
Committee: IMCOLIBE
Amendment 1115 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 c (new)
(44c) ‘reproducibility’ means the ability of an AI system to exhibit the same behaviour when an experiment is repeated under the same conditions;
2022/06/13
Committee: IMCOLIBE
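A minimal sketch of the property this definition names — the same experiment, repeated under the same conditions, exhibiting the same behaviour — assuming, optimistically, that the only source of nondeterminism is an explicit random seed (real AI systems must also pin data, library versions and hardware nondeterminism):

import random

def run_experiment(seed: int) -> list:
    """Stand-in for any stochastic pipeline step (sampling, initialisation, ...)."""
    rng = random.Random(seed)        # all randomness flows from one explicit seed
    return [round(rng.random(), 6) for _ in range(5)]

first = run_experiment(seed=42)
second = run_experiment(seed=42)     # same conditions: same seed, same code
assert first == second               # identical behaviour on repetition
print("reproducible:", first == second)  # -> reproducible: True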
Amendment 1139 #
Proposal for a regulation
Article 4 – paragraph 1
The Commission is empowered to adopt delegated acts in accordance with Article 73 to amend the list of techniques and approaches listed in Annex I, in order to update that list to market and technological developments by means of additions or non-restrictive precisions on the basis of characteristics that are similar to the techniques and approaches listed therein.
2022/06/13
Committee: IMCOLIBE
Amendment 1142 #
Proposal for a regulation
Article 4 – paragraph 1 a (new)
The techniques and approaches listed in Annex I may only be amended by an amending regulation if the amendment concerns a withdrawal, a restrictive precision or a change in the definition of those techniques and approaches.
2022/06/13
Committee: IMCOLIBE
Amendment 1161 #
Proposal for a regulation
Article 5 – paragraph 1 – point a
(a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm;
2022/06/13
Committee: IMCOLIBE
Amendment 1179 #
Proposal for a regulation
Article 5 – paragraph 1 – point b
(b) the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of a person or a specific group of persons, such as age or physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
2022/06/13
Committee: IMCOLIBE
Amendment 1188 #
Proposal for a regulation
Article 5 – paragraph 1 – point c – introductory part
(c) the placing on the market, putting into service or use of AI systems by public authorities or on their behalf for the evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social behaviour or known or predicted personal or personality characteristics;
2022/06/13
Committee: IMCOLIBE
Amendment 1198 #
Proposal for a regulation
Article 5 – paragraph 1 – point c – point i
(i) detrimental or unfavourable treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1210 #
Proposal for a regulation
Article 5 – paragraph 1 – point c – point ii
(ii) detrimental or unfavourable treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behaviour or its gravity;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1224 #
Proposal for a regulation
Article 5 – paragraph 1 – point c a (new)
(c a) the placing on the market, putting into service or use of an AI system that takes decisions to dispatch or set priorities for dispatching emergency response services on which the lives of those rescued depend;
2022/06/13
Committee: IMCOLIBE
Amendment 1226 #
Proposal for a regulation
Article 5 – paragraph 1 – point c b (new)
(c b) the placing on the market, putting into service or use of an AI system that performs individual risk assessments, serves as polygraphs or similar tools, or analyses the emotional state of natural persons, or predicts the occurrence or repetition of an actual or potential criminal offence on the basis of profiling of natural persons or groups, or which assesses the personality traits of natural persons or groups for profiling purposes in the context of detection, investigation or prosecution of criminal offences;
2022/06/13
Committee: IMCOLIBE
Amendment 1227 #
Proposal for a regulation
Article 5 – paragraph 1 – point c c (new)
(c c) the placing on the market, putting into service or use of an AI system for the administration of justice and for democratic processes, which helps judicial authorities to investigate and interpret facts and the law, and to apply the law to a specific set of facts, with the exception of purely ancillary administrative activities which have no impact on the actual administration of justice in individual cases;
2022/06/13
Committee: IMCOLIBE
Amendment 1228 #
Proposal for a regulation
Article 5 – paragraph 1 – point c d (new)
(c d) the placing on the market, putting into service or use of an AI system that performs genomic, physiological, psychological or behavioural analyses of a natural person for the purpose of profiling that natural person;
2022/06/13
Committee: IMCOLIBE
Amendment 1229 #
Proposal for a regulation
Article 5 – paragraph 1 – point c e (new)
(c e) the placing on the market, putting into service or use of an AI system that may affect the cognitive integrity or personality of a natural person, with or without the support of physical implants;
2022/06/13
Committee: IMCOLIBE
Amendment 1231 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – introductory part
(d) the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces, except those strictly used for individual authentication of access to protected spaces or systems, those used for the execution of administrative procedures by tax and customs authorities, and by law enforcement authorities if and in as far as such use is strictly necessary for one of the following objectives:
2022/06/13
Committee: IMCOLIBE
Amendment 1281 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point iii
(iii) the detection, localisation, identification or prosecution of a perpetrator or suspect of a criminal offence referred to in Article 2(2) of Council Framework Decision 2002/584/JHA62 and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least threen years, as determined by the law of that Member State. _________________ 62 Council Framework Decision 2002/584/JHA of 13 June 2002 on the European arrest warrant and the surrender procedures between Member States (OJ L 190, 18.7.2002, p. 1).
2022/06/13
Committee: IMCOLIBE
Amendment 1378 #
Proposal for a regulation
Article 5 – paragraph 3 – subparagraph 1
The competent judicial or administrative authority shall only grant the authorisation where it is satisfied, based on objective evidence or clear indications presented to it, that the use of the ‘real-time’ remote biometric identification system at issue is necessary for and proportionate to achieving one of the objectives specified in paragraph 1, point (d), as identified in the request. In deciding on the request, the competent judicial or administrative authority shall take into account the elements referred to in paragraph 2. It shall grant the authorisation for a limited period and scope. Any renewal or amendment of the authorisation shall be subject to the submission of a new request to the competent judicial or administrative authority.
2022/06/13
Committee: IMCOLIBE
Amendment 1465 #
Proposal for a regulation
Article 7 – paragraph 1 – introductory part
1. The Commission is empowered to adopt delegated acts in accordance with Article 73 to update the list set out in Annex III by adding fields of high-risk AI systems where they present a risk of harm to health and safety or a risk of a negative impact on fundamental rights which, taking into account its severity and likelihood of occurrence, is equivalent to or higher than the risk of harm or negative impact of high-risk AI systems already listed in Annex III.
2022/06/13
Committee: IMCOLIBE
Amendment 1471 #
Proposal for a regulation
Article 7 – paragraph 1 – point a
(a) the AI systems are intended to be used in any of the areas listed in points 1 to 8 of Annex III;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1477 #
Proposal for a regulation
Article 7 – paragraph 1 – point b
(b) the AI systems pose a risk of harm to the health and safety, or a risk of adverse impact on fundamental rights, that is, in respect of its severity and probability of occurrence, equivalent to or greater than the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1487 #
Proposal for a regulation
Article 7 – paragraph 2 – introductory part
2. When assessing an AI system for the purposes of paragraph 1, the Commission shall take into account the following criteria:
2022/06/13
Committee: IMCOLIBE
Amendment 1546 #
Proposal for a regulation
Article 7 – paragraph 2 a (new)
2a. When assessing an AI system for the purposes of paragraph 1, the Commission shall consult, where appropriate, national and European authorities and bodies, representatives of the groups concerned by that system, industry professionals, independent experts and civil society organisations. The Commission shall organise public consultations in this regard.
2022/06/13
Committee: IMCOLIBE
Amendment 1551 #
Proposal for a regulation
Article 7 – paragraph 2 b (new)
2b. The Commission shall publish a detailed report on the assessment referred to in paragraph 2.
2022/06/13
Committee: IMCOLIBE
Amendment 1552 #
Proposal for a regulation
Article 7 – paragraph 2 c (new)
2c. The Commission shall consult the Board before adopting delegated acts pursuant to paragraph 1.
2022/06/13
Committee: IMCOLIBE
Amendment 1589 #
Proposal for a regulation
Article 9 – paragraph 2 – point a a (new)
(aa) identification of the risks, damage and harm actually caused by the high-risk AI system in the past, whether these are the result of use of the high-risk AI system for its intended purpose or of another use;
2022/06/13
Committee: IMCOLIBE
Amendment 1599 #
Proposal for a regulation
Article 9 – paragraph 2 – point c a (new)
(ca) sandbox experimentation on the functioning of the AI systems;
2022/06/13
Committee: IMCOLIBE
Amendment 1604 #
Proposal for a regulation
Article 9 – paragraph 3
3. The risk management measures referred to in paragraph 2, point (d) shall give due consideration to the effects and possible interactions resulting from the combined application of the requirements set out in this Chapter 2. They shall take into account the generally acknowledged state of the art, including as reflected in the common technical specifications adopted by the Commission or in relevant harmonised standards or common specifications.
2022/06/13
Committee: IMCOLIBE
Amendment 1608 #
Proposal for a regulation
Article 9 – paragraph 4 – introductory part
4. The risk management measures referred to in paragraph 2, point (d) shall be such that any residual risk associated with each hazard, as well as the overall residual risk of the high-risk AI systems, is:
2022/06/13
Committee: IMCOLIBE
Amendment 1636 #
Proposal for a regulation
Article 9 – paragraph 4 – point a (new)
(a) technically and structurally minimised by the high-risk AI system;
2022/06/13
Committee: IMCOLIBE
Amendment 1637 #
Proposal for a regulation
Article 9 – paragraph 4 – point b (new)
(b) deemed acceptable, provided that the high-risk AI system is used for its intended purpose or under conditions of reasonably foreseeable misuse.
2022/06/13
Committee: IMCOLIBE
Amendment 1638 #
Proposal for a regulation
Article 9 – paragraph 4 a (new)
4a. Those residual risks shall be communicated to the user.
2022/06/13
Committee: IMCOLIBE
Amendment 1646 #
Proposal for a regulation
Article 9 – paragraph 6
6. Testing procedures shall be suitable to achieve the intended purpose of the AI system and do not need to go beyond what is necessary to achieve that purpose.
2022/06/13
Committee: IMCOLIBE
Amendment 1649 #
Proposal for a regulation
Article 9 – paragraph 6 – subparagraph 1 (new)
They shall test:
2022/06/13
Committee: IMCOLIBE
Amendment 1650 #
Proposal for a regulation
Article 9 – paragraph 6 – point a (new)
(a) the ability of the high-risk AI system to generate an accurate and robust result;
2022/06/13
Committee: IMCOLIBE
Amendment 1651 #
Proposal for a regulation
Article 9 – paragraph 6 – point b (new)
(b) the trustworthiness of the high-risk AI system and its ability to actually generate a result such as that expected in accordance with its intended purpose;
2022/06/13
Committee: IMCOLIBE
Amendment 1652 #
Proposal for a regulation
Article 9 – paragraph 6 – point c (new)
(c) the structural and technical capacity of the high-risk AI system to ensure it cannot be used for purposes other than its intended purpose.
2022/06/13
Committee: IMCOLIBE
Amendment 1654 #
Proposal for a regulation
Article 9 – paragraph 7
7. The testing of the high-risk AI systems shall be performed, as appropriate, at any point in time throughout the development process, and, in any event, prior to the placing on the market or the putting into service. Testing shall be made against metrics and probabilistic thresholds that are preliminarily defined according to common standards or technical specifications and appropriate to the intended purpose of the high-risk AI system.
2022/06/13
Committee: IMCOLIBE
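To make the mechanism concrete — the metric, threshold value and test data below are invented for illustration, not taken from any standard or from the amendment — testing against a preliminarily defined metric and threshold reduces to a pass/fail gate fixed before the test is run:

def accuracy(predictions, labels):
    return sum(p == l for p, l in zip(predictions, labels)) / len(labels)

# The metric and its threshold are fixed in advance of testing, not after.
THRESHOLDS = {"accuracy": 0.90}

def pre_market_test(predictions, labels):
    results = {"accuracy": accuracy(predictions, labels)}
    failures = {m: v for m, v in results.items() if v < THRESHOLDS[m]}
    return results, failures

results, failures = pre_market_test([1, 0, 1, 1, 0, 1, 1, 1, 0, 1],
                                    [1, 0, 1, 1, 0, 1, 1, 1, 1, 1])
print(results)                                          # {'accuracy': 0.9}
print("pass" if not failures else f"fail: {failures}")  # -> pass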
Amendment 1687 #
Proposal for a regulation
Article 10 – paragraph 2 – point a
(a) the relevant design choices, including the extent to which the functioning of the algorithms can be audited and reproduced;
2022/06/13
Committee: IMCOLIBE
Amendment 1734 #
Proposal for a regulation
Article 10 – paragraph 5
5. To the extent that it is strictly necessary for the purposes of ensuring bias monitoring, detection and correction in relation to the high-risk AI systems, the providers of such systems may process special categories of personal data referred to in Article 9(1) of Regulation (EU) 2016/679, Article 10 of Directive (EU) 2016/680 and Article 10(1) of Regulation (EU) 2018/1725, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons, including technical limitations on the re-use and use of state-of-the-art security and privacy-preserving measures, such as pseudonymisation, or encryption where anonymisation may significantly affect the purpose pursued.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1741 #
Proposal for a regulation
Article 10 – paragraph 5 a (new)
5a. The dissemination of data by an AI system to other AI systems, whether or not they are of the same origin and whether or not they are installed on the same medium, shall be checked by the provider and may be retracted if necessary.
2022/06/13
Committee: IMCOLIBE
Amendment 1763 #
Proposal for a regulation
Article 11 – paragraph 3
3. The Commission is empowered to adopt delegated acts in accordance with Article 73 to add to Annex IV where necessary to ensure that, in the light of technical progress, the technical documentation provides all the necessary information to assess the compliance of the system with the requirements set out in this Chapter.
2022/06/13
Committee: IMCOLIBE
Amendment 1765 #
Proposal for a regulation
Article 12 – paragraph 1
1. High-risk AI systems shall be designed and developed with capabilities enabling the automatic recording of events (‘logs’) while the high-risk AI systems are operating. Those logging capabilities shall conform to recognised standards or common specifications. Where possible, these capabilities shall be local ones and the logs shall be stored on the medium employed by the user of the AI system.
2022/06/13
Committee: IMCOLIBE
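A minimal sketch of the local, automatic event recording this paragraph envisages — the file name and event schema are hypothetical, and Python's standard logging module merely stands in for whatever recognised standard would govern a real system — with logs written to a file on the user's own medium:

import json, logging

# One append-only log file on the user's own storage medium.
logging.basicConfig(filename="ai_system_events.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

def log_event(event_type: str, payload: dict) -> None:
    """Automatically append one structured event record while the system operates."""
    logging.info(json.dumps({"event": event_type, **payload}))

log_event("inference", {"input_id": "req-001", "output_class": "approve"})
log_event("override",  {"input_id": "req-001", "by": "human_reviewer"})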
Amendment 1800 #
Proposal for a regulation
Article 13 – paragraph 3 – point b – point ii
(ii) the level of accuracy, robustness and cybersecurity referred to in Article 15 against which the high-risk AI system has been tested and validated before being placed on the market and which can be expected, and any known and foreseeable circumstances that may have an impact on that expected level of accuracy, robustness and cybersecurity;
2022/06/13
Committee: IMCOLIBE
Amendment 1815 #
Proposal for a regulation
Article 14 – paragraph 2
2. Human oversight shall aim at preventing or minimising the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, provided that those risks, if they persist notwithstanding the application of other requirements set out in this Chapter, do not result in a requirement for the high-risk AI system to be recalled or withdrawn.
2022/06/13
Committee: IMCOLIBE
Amendment 1864 #
Proposal for a regulation
Article 15 – paragraph 3 – subparagraph 2 a (new)
It shall be possible for the user, the provider, the national competent authority or authorities and the Commission, as appropriate, to audit and reproduce the functioning of the high-risk AI systems.
2022/06/13
Committee: IMCOLIBE
Amendment 1892 #
Proposal for a regulation
Article 16 – paragraph 1 – point e
(e) ensure that the high-risk AI system undergoes the relevant conformity assessment procedure, prior to its placing on the market or putting into service, and ensure it is periodically reviewed;
2022/06/13
Committee: IMCOLIBE
Amendment 1898 #
Proposal for a regulation
Article 16 – paragraph 1 – point g
(g) take the necessary corrective actions, if the high-risk AI system is not in conformity with the requirements set out in Chapter 2 of this Title, before the high-risk AI system concerned is placed on the market, made available on the market or put into service, or before a high-risk AI system that has been withdrawn or recalled is placed on the market, made available on the market or put into service once again;
2022/06/13
Committee: IMCOLIBE
Amendment 1951 #
Proposal for a regulation
Article 19 – paragraph 1
1. Providers of high-risk AI systems shall ensure that their systems undergo the relevant conformity assessment procedure in accordance with Article 43 before they are placed on the market, made available on the market or put into service. Where the compliance of the AI systems with the requirements set out in Chapter 2 of this Title has been demonstrated following that conformity assessment, the providers shall draw up an EU declaration of conformity in accordance with Article 48 and affix the CE marking of conformity in accordance with Article 49.
2022/06/13
Committee: IMCOLIBE
Amendment 1954 #
Proposal for a regulation
Article 20 – paragraph 1
1. Providers of high-risk AI systems shall guarantee the storage of the logs automatically generated by their high-risk AI systems, where possible on the media employed by users, to the extent such logs are under their control by virtue of a contractual arrangement with the user or otherwise by law. The logs shall be kept for a period that is appropriate in the light of the intended purpose of the high-risk AI system and applicable legal obligations under Union or national law.
2022/06/13
Committee: IMCOLIBE
Amendment 1961 #
Proposal for a regulation
Article 21 – paragraph 1
Providers of high-risk AI systems which consider or have reason to consider that a high-risk AI system which they have placed on the market or put into service is not in conformity with this Regulation shall immediately withdraw or recall the system, as appropriate, so as to bring it into conformity. They shall inform the distributors of the high-risk AI system in question and, where applicable, the authorised representative and importers accordingly.
2022/06/13
Committee: IMCOLIBE
Amendment 1983 #
Proposal for a regulation
Article 25 – paragraph 1 a (new)
1a. As of the time they are appointed, authorised representatives must be able to correspond and exchange technical information with the national authorities, and to carry out the duties required of them under this Regulation, in the official languages of all the Member States.
2022/06/13
Committee: IMCOLIBE
Amendment 1985 #
Proposal for a regulation
Article 25 – paragraph 2 – point a
(a) carry out or commission the conformity assessment referred to in Article 43;
2022/06/13
Committee: IMCOLIBE
Amendment 1987 #
Proposal for a regulation
Article 25 – paragraph 2 – point b
(b) keep a copy of the EU declaration of conformity and the technical documentation at the disposal of the national competent authorities and national authorities referred to in Article 63(7);
2022/06/13
Committee: IMCOLIBE
Amendment 1990 #
Proposal for a regulation
Article 25 – paragraph 2 – point c
(c) provide a national competent authority, upon a reasoned request, with all the information and documentation necessary to demonstrate the conformity of a high-risk AI system with the requirements set out in Chapter 2 of this Title, including access to the logs automatically generated by the high-risk AI system to the extent such logs are under the control of the provider by virtue of a contractual arrangement with the user or otherwise by law;
2022/06/13
Committee: IMCOLIBE
Amendment 1994 #
Proposal for a regulation
Article 25 – paragraph 2 – point c a (new)
(ca) cooperate with competent national authorities, upon a reasoned request, on any action the latter takes in relation to the high-risk AI system.
2022/06/13
Committee: IMCOLIBE
Amendment 1997 #
Proposal for a regulation
Article 26 – paragraph 1 – point a
(a) the appropriate conformity assessment procedure has been carried out by the provider of that AI system following its import and prior to its deployment;
2022/06/13
Committee: IMCOLIBE
Amendment 2007 #
Proposal for a regulation
Article 26 – paragraph 5
5. Importers shall provide national competent authorities, upon a reasoned request, with all necessary information and documentation to demonstrate the conformity of a high-risk AI system with the requirements set out in Chapter 2 of this Title in an official language of that national competent authority, including access to the logs automatically generated by the high-risk AI system to the extent such logs are under the control of the provider by virtue of a contractual arrangement with the user or otherwise by law. They shall also cooperate with those authorities on any action the national competent authority takes in relation to that system.
2022/06/13
Committee: IMCOLIBE
Amendment 2016 #
Proposal for a regulation
Article 27 – paragraph 4
4. A distributor that considers or has reason to consider that a high-risk AI system which it has made available on the market is not in conformity with the requirements set out in Chapter 2 of this Title shall withdraw or recall that system in order to bring it into conformity with those requirements, or shall ensure that the provider, the importer or any relevant operator, as appropriate, takes those corrective actions. Where the high-risk AI system presents a risk within the meaning of Article 65(1), the distributor shall immediately inform the national competent authorities of the Member States in which it has made the product available to that effect, giving details, in particular, of the non-compliance and of any corrective actions taken.
2022/06/13
Committee: IMCOLIBE
Amendment 2029 #
Proposal for a regulation
Article 28 – paragraph 1 – point b a (new)
(ba) they have placed on the market or put into service a high-risk AI system which they have substantially modified by their own means;
2022/06/13
Committee: IMCOLIBE
Amendment 2091 #
Proposal for a regulation
Article 30 – paragraph 8
8. Notifying authorities shall make sure that conformity assessments are carried out in a proportionate manner, avoiding unnecessary burdens for providers and that notified bodies perform their activities taking due account of the size of an undertaking, the sector in which it operates, its structure and the degree of complexity of and risk posed by the AI system in question.
2022/06/13
Committee: IMCOLIBE
Amendment 2099 #
Proposal for a regulation
Article 32 – paragraph 4
4. The conformity assessment body concerned may begin to perform the activities of a notified body only where no objections are raised by the Commission or the other Member States within one month of a notification.
2022/06/13
Committee: IMCOLIBE
Amendment 2106 #
Proposal for a regulation
Article 33 – paragraph 7
7. Notified bodies shall have procedures for the performance of activities which take due account of the size of an undertaking, the sector in which it operates, its structure, and the degree of complexity of and risk posed by the AI system in question.
2022/06/13
Committee: IMCOLIBE
Amendment 2108 #
Proposal for a regulation
Article 34 – paragraph 3
3. Activities may be subcontracted or carried out by a subsidiary only with the agreement of the provider and the notifying authority.
2022/06/13
Committee: IMCOLIBE
Amendment 2114 #
Proposal for a regulation
Article 37 – paragraph 4
4. Where the Commission ascertains that a notified body does not meet or no longer meets the requirements laid down in Article 33, it shall adopt a reasoned decision requesting the notifying Member State to take the necessary corrective measures, including withdrawal of notification if necessary. That request shall be adopted in accordance with the examination procedure referred to in Article 74(2).
2022/06/13
Committee: IMCOLIBE
Amendment 2117 #
Proposal for a regulation
Article 39
Conformity assessment bodies established under the law of a third country with which the Union has concluded an agreement may be authorised to carry out the activities of notified bodies under this Regulation. Article 39 deleted
2022/06/13
Committee: IMCOLIBE
Amendment 2123 #
Proposal for a regulation
Article 40 – paragraph 1
High-risk AI systems shall be in conformity with harmonised standards or parts thereof the references of which have been published in the Official Journal of the European Union, and shall be presumed to be in conformity with the requirements set out in Chapter 2 of this Title, to the extent those standards cover those requirements.
2022/06/13
Committee: IMCOLIBE
Amendment 2145 #
Proposal for a regulation
Article 41 – paragraph 3
3. High-risk AI systems which are in conformity with the common specifications referred to in paragraph 1 shall be presumed to be in conformity with the requirements set out in Chapter 2 of this Title, to the extent those common specifications cover those requirements.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 2147 #
Proposal for a regulation
Article 41 – paragraph 4
4. Where providers do not comply with the common specifications referred to in paragraph 1, they shall duly justify that they have adopted technical solutions that are at least equivalent thereto.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 2158 #
Proposal for a regulation
Article 43 – paragraph 1 – introductory part
1. For high-risk AI systems listed in point 1 of Annex III, where, in demonstrating the compliance of a high-risk AI system with the requirements set out in Chapter 2 of this Title, the provider has applied harmonised standards referred to in Article 40, or, where applicable, common specifications referred to in Article 41, the provider shall follow the conformity assessment procedure based on assessment of the quality management system and assessment of the technical documentation, with the involvement of a notified body, referred to in Annex VII.
2022/06/13
Committee: IMCOLIBE
Amendment 2162 #
Proposal for a regulation
Article 43 – paragraph 1 – point a
(a) the conformity assessment procedure based on internal control referred to in Annex VI;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 2167 #
Proposal for a regulation
Article 43 – paragraph 1 – point b
(b) the conformity assessment procedure based on assessment of the quality management system and assessment of the technical documentation, with the involvement of a notified body, referred to in Annex VII.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 2184 #
Proposal for a regulation
Article 43 – paragraph 2
2. For high-risk AI systems referred to in points 2 to 8 of Annex III, providers shall follow the conformity assessment procedure based on assessment of the quality management system and assessment of the technical documentation, with the involvement of a notified body, referred to in Annex VII. For high-risk AI systems referred to in point 5(b) of Annex III, placed on the market or put into service by credit institutions regulated by Directive 2013/36/EU, the conformity assessment shall be carried out as part of the procedure referred to in Articles 97 to 101 of that Directive.
2022/06/13
Committee: IMCOLIBE
Amendment 2187 #
Proposal for a regulation
Article 43 – paragraph 3 a (new)
3a. High-risk AI systems shall periodically be subject to a conformity assessment review procedure.
2022/06/13
Committee: IMCOLIBE
Amendment 2195 #
Proposal for a regulation
Article 43 – paragraph 4 – subparagraph 1
For high-risk AI systems that continue to learn after being placed on the market or put into service, changes to the high-risk AI system and its performance shall constitute a substantial modification, including if they have been pre-determined by the provider at the moment of the initial conformity assessment and are part of the information contained in the technical documentation referred to in point 2(f) of Annex IV.
2022/06/13
Committee: IMCOLIBE
Amendment 2202 #
Proposal for a regulation
Article 43 – paragraph 5
5. The Commission is empowered to adopt delegated acts in accordance with Article 73 for the purpose of updating Annex VII in order to introduce elements of the conformity assessment procedures that become necessary in light of technical progress.
2022/06/13
Committee: IMCOLIBE
Amendment 2203 #
Proposal for a regulation
Article 43 – paragraph 6
6. The Commission is empowered to adopt delegated acts to amend paragraphs 1 and 2 in order to subject high-risk AI systems referred to in points 2 to 8 of Annex III to the conformity assessment procedure referred to in Annex VII or parts thereof. The Commission shall adopt such delegated acts taking into account the effectiveness of the conformity assessment procedure based on internal control referred to in Annex VI in preventing or minimizing the risks to health and safety and protection of fundamental rights posed by such systems as well as the availability of adequate capacities and resources among notified bodies.
2022/06/13
Committee: IMCOLIBE
Amendment 2209 #
Proposal for a regulation
Article 44 – paragraph 1
1. Certificates issued by notified bodies in accordance with Annex VII shall be drawn up in the official Union language of the Member State in which the notified body is established or in an official Union language otherwise acceptable to the notified body.
2022/06/13
Committee: IMCOLIBE
Amendment 2211 #
Proposal for a regulation
Article 46 – paragraph 2 – introductory part
2. Each notified body shall inform the other notified bodies and the notifying authority of:
2022/06/13
Committee: IMCOLIBE
Amendment 2231 #
Proposal for a regulation
Article 49 – paragraph 1
1. The CE marking shall be affixed visibly, legibly and indelibly for high-risk AI systems before they are placed on the market, made available on the market or put into service. Where that is not possible or not warranted on account of the nature of the high-risk AI system, it shall be affixed to the packaging or to the accompanying documentation, as appropriate.
2022/06/13
Committee: IMCOLIBE
Amendment 2354 #
Proposal for a regulation
Article 54 – paragraph 1 – point a a (new)
(aa) natural persons whose personal data are used for the development and testing of certain innovative AI systems in the sandbox shall be informed of the collection and use of their data and shall have given their consent thereto;
2022/06/13
Committee: IMCOLIBE
Amendment 2433 #
Proposal for a regulation
Article 57 – paragraph 1
1. The Board shall be composed of the national supervisory authorities, who shall be represented by the head or equivalent high-level official of that authority, and the European Data Protection Supervisor and the national data protection bodies. Other national authorities may be invited to the meetings, where the issues discussed are of relevance for them.
2022/06/13
Committee: IMCOLIBE
Amendment 2446 #
Proposal for a regulation
Article 57 – paragraph 2
2. The Board shall adopt its rules of procedure by a two-thirds majority of its members, following the consent of the Commission. The rules of procedure shall also contain the operational aspects related to the execution of the Board’s tasks as listed in Article 58. The Board may establish sub-groups as appropriate for the purpose of examining specific questions.
2022/06/13
Committee: IMCOLIBE
Amendment 2452 #
Proposal for a regulation
Article 57 – paragraph 3
3. The Board shall be chaired by the national supervisory authority of the Member State holding the Presidency of the Council of the European Union. The latter shall convene the meetings and prepare the agenda in accordance with the tasks of the Board pursuant to this Regulation and with its rules of procedure. The Commission shall provide administrative and analytical support for the activities of the Board pursuant to this Regulation.
2022/06/13
Committee: IMCOLIBE
Amendment 2565 #
Proposal for a regulation
Article 59 – paragraph 2
2. Each Member State shall designate one or more national supervisory authorities among the national competent authorities. The national supervisory authority or authorities shall act as notifying authorities and market surveillance authorities.
2022/06/13
Committee: IMCOLIBE
Amendment 2567 #
Proposal for a regulation
Article 59 – paragraph 3
3. Member States shall inform the Commission of their designation or designations and, where applicable, the reasons for designating more than one authority.
2022/06/13
Committee: IMCOLIBE
Amendment 2579 #
Proposal for a regulation
Article 59 – paragraph 5
5. Member States shall report to the Commission on an annual basis on the status of the financial and human resources of the national competent authorities with an assessment of their adequacy. The Commission shall transmit that information to the Board for discussion and possible recommendations.
2022/06/13
Committee: IMCOLIBE
Amendment 2645 #
Proposal for a regulation
Article 61 – paragraph 2
2. The post-market monitoring system shall actively and systematically collect, document and analyse relevant data provided by users or collected through other sources, not including the automated transmission of data, on the performance of high- risk AI systems throughout their lifetime, and allow the provider to evaluate the continuous compliance of AI systems with the requirements set out in Title III, Chapter 2.
2022/06/13
Committee: IMCOLIBE
Amendment 2647 #
Proposal for a regulation
Article 61 – paragraph 3
3. The post-market monitoring system shall be based on a post-market monitoring plan. The post-market monitoring plan shall be part of the technical documentation referred to in Annex IV. The Commission shall adopt an implementing act laying down detailed provisions establishing a template for the post-market monitoring plan and the list of elements to be included in the plan. These provisions shall not provide for the automated and systematic transmission of data.
2022/06/13
Committee: IMCOLIBE
Amendment 2659 #
Proposal for a regulation
Article 62 – paragraph 1 – subparagraph 1
Such notification shall be made immediately after the provider has established a causal link between the AI system and the incident or malfunctioning or the reasonable likelihood of such a link, and, in any event, not later than 72 hours after the provider becomes aware of the serious incident or of the malfunctioning.
2022/06/13
Committee: IMCOLIBE
Amendment 2707 #
Proposal for a regulation
Article 65 – paragraph 1
1. AI systems presenting a risk shall be understood as a product presenting a risk defined in Article 3, point 19 of Regulation (EU) 2019/1020 insofar as risks to the health or safety or to the protection of fundamental rights of persons, or of public order or the national security of the Member States are concerned.
2022/06/13
Committee: IMCOLIBE
Amendment 2718 #
Proposal for a regulation
Article 65 – paragraph 2 – subparagraph 1
Where, in the course of that evaluation, the market surveillance authority finds that the AI system does not comply with the requirements and obligations laid down in this Regulation, it shall without delay require the relevant operator, within a reasonable period, commensurate with the nature of the risk, and which it may prescribe, to withdraw the AI system from the market, or to recall it to bring it into compliance.
2022/06/13
Committee: IMCOLIBE
Amendment 2732 #
Proposal for a regulation
Article 65 – paragraph 7
7. The market surveillance authorities of the Member States other than the market surveillance authority of the Member State initiating the procedure shall without delay inform the Commission and the other Member States of any measures adopted and of any additional information at their disposal relating to the non-compliance of the AI system concerned, and, in the event of disagreement with the notified national measure, of their objections.
2022/06/13
Committee: IMCOLIBE
Amendment 2734 #
Proposal for a regulation
Article 65 – paragraph 8
8. Where, within three months of receipt of the information referred to in paragraph 5, no objection has been raised by either a Member State or the Commission in respect of a provisional measure taken by a Member State, that measure shall be deemed justified. This is without prejudice to the procedural rights of the concerned operator in accordance with Article 18 of Regulation (EU) 2019/1020.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 2738 #
Proposal for a regulation
Article 66
Union safeguard procedure 1. Where, within three months of receipt of the notification referred to in Article 65(5), objections are raised by a Member State against a measure taken by another Member State, or where the Commission considers the measure to be contrary to Union law, the Commission shall without delay enter into consultation with the relevant Member State and operator or operators and shall evaluate the national measure. On the basis of the results of that evaluation, the Commission shall decide whether the national measure is justified or not within 9 months from the notification referred to in Article 65(5) and notify such decision to the Member State concerned. 2. If the national measure is considered justified, all Member States shall take the measures necessary to ensure that the non-compliant AI system is withdrawn from their market, and shall inform the Commission accordingly. If the national measure is considered unjustified, the Member State concerned shall withdraw the measure. 3. Where the national measure is considered justified and the non- compliance of the AI system is attributed to shortcomings in the harmonised standards or common specifications referred to in Articles 40 and 41 of this Regulation, the Commission shall apply the procedure provided for in Article 11 of Regulation (EU) No 1025/2012.Article 66 deleted
2022/06/13
Committee: IMCOLIBE
Amendment 2757 #
Proposal for a regulation
Article 67 – paragraph 4
4. The Commission shall without delay enter into consultation with the Member States and the relevant operator and shall evaluate the national measures taken. On the basis of the results of that evaluation, the Commission shall decide whether the measure is justified or not and, where necessary, propose appropriate measures.
2022/06/13
Committee: IMCOLIBE
Amendment 2767 #
Proposal for a regulation
Article 68 – paragraph 2
2. Where the non-compliance referred to in paragraph 1 persists for longer than one week following receipt of the relevant notice, the Member State concerned shall take all appropriate measures to restrict or prohibit the high-risk AI system being made available on the market or ensure that it is recalled or withdrawn from the market, imposing, where necessary, the penalties laid down in national law.
2022/06/13
Committee: IMCOLIBE
Amendment 2789 #
Proposal for a regulation
Article 69 – paragraph 2
2. The Commission and the Board shall encourage and facilitate the drawing up of codes of conduct intended to foster the voluntary application to AI systems of requirements related for example to environmental sustainability, accessibility for persons with a disability, and stakeholders participation in the design and development of the AI systems and diversity of development teams on the basis of clear objectives and key performance indicators to measure the achievement of those objectives.
2022/06/13
Committee: IMCOLIBE
Amendment 2814 #
Proposal for a regulation
Article 71 – paragraph 1
1. In compliance with the terms and conditions laid down in this Regulation, Member States shall lay down the rules on penalties, including administrative fines, applicable to infringements of this Regulation and shall take all measures necessary to ensure that they are properly and effectively implemented. The penalties provided for shall be effective, proportionate, and dissuasive. They shall take into particular account the interests of small-scale providers and start-ups and their economic viability, as well as the extent to which the infringement was intentionally committed and the extent of the harm sustained.
2022/06/13
Committee: IMCOLIBE
Amendment 2834 #
Proposal for a regulation
Article 71 – paragraph 3 – introductory part
3. The following infringements shall be subject to administrative fines of up to 1 000 000 000 EUR or, if the offender is a company, up to 10 % of its total worldwide annual turnover for the preceding financial year, whichever is higher:
2022/06/13
Committee: IMCOLIBE
Amendment 2889 #
Proposal for a regulation
Article 72 – paragraph 1 – point b
(b) the cooperation with the European Data Protection Supervisor in order to remedy the infringement and mitigate the possible adverse effects of the infringement, including compliance with any of the measures previously ordered by the European Data Protection Supervisor against the Union institution or agency or body concerned with regard to the same subject matter;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 2897 #
Proposal for a regulation
Article 72 – paragraph 2 – introductory part
2. The following infringements shall be subject to administrative fines of up to 30 000 000 EUR:
2022/06/13
Committee: IMCOLIBE
Amendment 2907 #
Proposal for a regulation
Article 72 – paragraph 3
3. The non-compliance of the AI system with any requirements or obligations under this Regulation, other than those laid down in Articles 5 and 10, shall be subject to administrative fines of up to 50 000 000 EUR.
2022/06/13
Committee: IMCOLIBE
Amendment 2914 #
Proposal for a regulation
Article 72 – paragraph 6
6. Funds collected by imposition of fines in this Article shall be the income of the general budget of the Union.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 2944 #
Proposal for a regulation
Article 83 – paragraph 1 – introductory part
1. This Regulation shall not apply to the AI systems which are components of the large-scale IT systems established by the legal acts listed in Annex IX that have been placed on the market or put into service before [12 months after the date of application of this Regulation referred to in Article 85(2)], unless the replacement or amendment of those legal acts leads to a significant change in the design or intended purpose of the AI system or AI systems concerned.
2022/06/13
Committee: IMCOLIBE
Amendment 2958 #
Proposal for a regulation
Article 83 – paragraph 2
2. This Regulation shall apply to the high-risk AI systems, other than the ones referred to in paragraph 1, that have been placed on the market or put into service before [date of application of this Regulation referred to in Article 85(2)], only if, from that date, those systems are subject to significant changes in their design or intended purpose.
2022/06/13
Committee: IMCOLIBE
Amendment 3026 #
Proposal for a regulation
Annex I – point c a (new)
(c a) Approaches based on neural network imitation and neuro-robotic networks;
2022/06/13
Committee: IMCOLIBE
Amendment 3027 #
Proposal for a regulation
Annex I – point c b (new)
(c b) Machine learning tasks on graphs for repetition tasks or pattern recognition;
2022/06/13
Committee: IMCOLIBE
Amendment 3028 #
Proposal for a regulation
Annex I – point c c (new)
(c c) Natural language processing techniques, including emotion detection and recognition systems, using interactions between human language and computer language;
2022/06/13
Committee: IMCOLIBE
Amendment 3029 #
Proposal for a regulation
Annex I – point c d (new)
(c d) Computer vision for pattern recognition, including graphical analysis or digital signature identification;
2022/06/13
Committee: IMCOLIBE
Amendment 3030 #
Proposal for a regulation
Annex I – point c e (new)
(c e) Interactive systems related to mechatronics, robotics and automation systems.
2022/06/13
Committee: IMCOLIBE
Amendment 3036 #
Proposal for a regulation
Annex II – Part A – point 12 a (new)
12a. [REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL on a Single Market For Digital Services (Digital Services Act) and amending Directive 2000/31/EC]
2022/06/13
Committee: IMCOLIBE
Amendment 3037 #
Proposal for a regulation
Annex II – Part A – point 12 b (new)
12b. [REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL on contestable and fair markets in the digital sector (Digital Markets Act)].
2022/06/13
Committee: IMCOLIBE
Amendment 3061 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a
(a) AI systems intended to be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons, within the strict limits of the exemption from the general prohibition on their use laid down in Article 5;
2022/06/13
Committee: IMCOLIBE
Amendment 3070 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a a (new)
(a a) AI systems intended to be used by autonomous devices, drones or vehicles to transport or collect natural persons;
2022/06/13
Committee: IMCOLIBE
Amendment 3141 #
Proposal for a regulation
Annex III – paragraph 1 – point 5 – point c
(c) AI systems intended to be used, without taking any decisions on the matter, to dispatch, or to establish priority in the dispatching of emergency first response services, including by firefighters and medical aid.
2022/06/13
Committee: IMCOLIBE
Amendment 3148 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point a
(a) AI systems intended to be used by law enforcement authorities for making individual risk assessments of natural persons in order to assess the risk of a natural person for offending or reoffending or the risk for potential victims of criminal offences;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 3156 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point b
(b) AI systems intended to be used by law enforcement authorities as polygraphs and similar tools or to detect the emotional state of a natural person;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 3173 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point e
(e) AI systems intended to be used by law enforcement authorities for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 3180 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point f
(f) AI systems intended to be used by law enforcement authorities for profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 in the course of detection, investigation or prosecution of criminal offences;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 3228 #
Proposal for a regulation
Annex III – paragraph 1 – point 8
8. Administration of justice and democratic processes: (a) AI systems intended to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 3283 #
Proposal for a regulation
Annex VI
CONFORMITY ASSESSMENT PROCEDURE BASED ON INTERNAL CONTROL 1. The conformity assessment procedure based on internal control is the conformity assessment procedure based on points 2 to 4. 2. The provider verifies that the established quality management system is in compliance with the requirements of Article 17. 3. The provider examines the information contained in the technical documentation in order to assess the compliance of the AI system with the relevant essential requirements set out in Title III, Chapter 2. 4. The provider also verifies that the design and development process of the AI system and its post-market monitoring as referred to in Article 61 is consistent with the technical documentation.deleted
2022/06/13
Committee: IMCOLIBE