
Activities of Francesca DONATO related to 2021/0106(COD)

Plenary speeches (1)

Artificial Intelligence Act (debate)
2023/06/13
Dossiers: 2021/0106(COD)

Amendments (81)

Amendment 128 #
Proposal for a regulation
Recital 1
(1) The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, marketing and use of artificial intelligence in conformity with Union values and the Charter of Fundamental Rights of the European Union. This Regulation pursues a number of overriding reasons of public interest, such as a high level of protection of health, safety and fundamental rights, and it ensures the free movement of AI-based goods and services cross-border, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation.
2022/03/31
Committee: ITRE
Amendment 134 #
Proposal for a regulation
Recital 2
(2) Artificial intelligence systems (AI systems) can be easily deployed in multiple sectors of the economy and society, including cross border, and circulate throughout the Union. Certain Member States have already explored the adoption of national rules to ensure that artificial intelligence is safe and is developed and used in compliance with fundamental rights obligations. Differing national rules may lead to fragmentation of the internal market and decrease legal certainty for operators that develop or use AI systems. A consistent and high level of protection throughout the Union should therefore be ensured, while divergences hampering the free circulation of AI systems and related products and services within the internal market should be prevented or reduced, by laying down minimum obligations for operators and guaranteeing the organic and consistent protection of overriding reasons of public interest and of rights of persons throughout the internal market based on Article 114 of the Treaty on the Functioning of the European Union (TFEU). To the extent that this Regulation contains specific rules on the protection of individuals with regard to the processing of personal data concerning restrictions of the use of AI systems for ‘real-time’ remote biometric identification in publicly accessible spaces for the purpose of law enforcement, it is appropriate to base this Regulation, in as far as those specific rules are concerned, on Article 16 of the TFEU. In light of those specific rules and the recourse to Article 16 TFEU, it is appropriate to consult the European Data Protection Board.
2022/03/31
Committee: ITRE
Amendment 136 #
Proposal for a regulation
Recital 3
(3) Artificial intelligence is a fast evolving family of technologies that can contribute to a wide array of economic and societal benefits across the entire spectrum of industries and social activities. The use of artificial intelligence can provide key competitive advantages to companies and support socially and environmentally beneficial outcomes, for example in healthcare, farming, education and training, infrastructure management, energy, transport and logistics, public services, security, justice, resource and energy efficiency, and climate change mitigation and adaptation.
2022/03/31
Committee: ITRE
Amendment 142 #
Proposal for a regulation
Recital 4
(4) At the same time, depending on the circumstances regarding its specific application and use, artificial intelligence may generate serious risks and cause harm to public interests and rights that are protected by Union law. Such harm might be material or immaterial.
2022/03/31
Committee: ITRE
Amendment 148 #
Proposal for a regulation
Recital 12
(12) This Regulation should also apply to Union institutions, offices, bodies and agencies when acting as a provider or user of an AI system. AI systems exclusively developed or used for military purposes should also be included in the scope of this Regulation where that use falls under the exclusive remit of the Common Foreign and Security Policy regulated under Title V of the Treaty on the European Union (TEU). This Regulation should be without prejudice to the provisions regarding the liability of intermediary service providers set out in Directive 2000/31/EC of the European Parliament and of the Council [as amended by the Digital Services Act].
2022/03/31
Committee: ITRE
Amendment 155 #
Proposal for a regulation
Recital 13
(13) In order to ensure a consistent and high level of protection of public interests as regards health, safety and fundamental rights, common normative standards for all high-risk AI systems should be established to restrict or prevent the use or marketing of systems known to be high-risk. Those standards should be consistent with the Charter of fundamental rights of the European Union (the Charter) and should be non-discriminatory and in line with the Union’s international trade commitments.
2022/03/31
Committee: ITRE
Amendment 157 #
Proposal for a regulation
Recital 13
(13) In order to ensure a consistent and high level of protection of public interests as regards health, safety and fundamental rights, common normative standards for all high-risk AI systems should be established. Those standards should be consistent with the Charter of fundamental rights of the European Union (the Charter) and should be non-discriminatory and in line with the Union’s international trade commitments.
2022/03/31
Committee: ITRE
Amendment 160 #
Proposal for a regulation
Recital 14
(14) In order to introduce a proportionate and effective set of binding rules for AI systems, a clearly defined risk-based approach should be followed. That approach should tailor the type and content of such rules to the intensity and scope of the risks that AI systems can generate. It is therefore necessary to prohibit certain artificial intelligence practices, including those defined as ‘high-risk’, and to lay down requirements for medium/low-risk AI systems and obligations for the relevant operators, and to lay down transparency obligations for certain AI systems.
2022/03/31
Committee: ITRE
Amendment 164 #
Proposal for a regulation
Recital 15
(15) Aside from the many beneficial uses of artificial intelligence, that technology can also be misused and provide novel and powerful tools for manipulative, exploitative and social control practices. Such practices are particularly harmful and absolutely must be prohibited because they contradict Union values of respect for human dignity, freedom, equality, democracy and the rule of law and Union fundamental rights, including the right to non-discrimination, data protection and privacy and the rights of the child.
2022/03/31
Committee: ITRE
Amendment 165 #
Proposal for a regulation
Recital 16
(16) The placing on the market, putting into service or use of certain AI systems intended to distort human behaviour, whereby physical or psychological harms are likely to occur, should be forbidden. Such AI systems deploy subliminal components individuals cannot perceive or exploit vulnerabilities of children and people due to their age, physical or mental incapacities. They do so with the intention to materially distort the behaviour of a person and in a manner that causes or is likely to cause harm to that or another person. The intention may not be presumed if the distortion of human behaviour results from factors external to the AI system which are outside of the control of the provider or the user. Research for legitimate purposes in relation to such AI systems should not be stifled by the prohibition, if such research does not amount to use of the AI system in human-machine relations that exposes natural persons to harm and such research is carried out in accordance with recognised ethical standards for scientific research.
2022/03/31
Committee: ITRE
Amendment 172 #
Proposal for a regulation
Recital 17
(17) AI systems providing social scoring of natural persons for general purpose by public authorities or on their behalf may lead to discriminatory outcomes and the exclusion of certain groups. They may violate the right to dignity and non-discrimination and the values of equality and justice. Such AI systems evaluate or classify the trustworthiness of natural persons based on their social behaviour in multiple contexts or known or predicted personal or personality characteristics. The social score obtained from such AI systems may lead to the detrimental or unfavourable treatment of natural persons or whole groups thereof in social contexts, which are unrelated to the context in which the data was originally generated or collected or to a detrimental treatment that is disproportionate or unjustified to the gravity of their social behaviour. It is therefore extremely important for such AI systems to be prohibited.
2022/03/31
Committee: ITRE
Amendment 179 #
Proposal for a regulation
Recital 19
(19) The use of those systems for the purpose of law enforcement must therefore be prohibited as a matter of principle and without any general exceptions. Only in exceptional cases and on the basis of decisions taken by the judicial authority competent on the matter and in the territory of one of the Member States, within the scope of the following three exhaustively listed and narrowly defined situations, may the use of such systems be permitted to the extent and for the time period strictly necessary to achieve an extremely substantial public interest, the importance of which is considered by the relevant judicial authority to prevail over the risks. Those situations involve the search for potential victims of crime, including missing children; certain threats to the life or physical safety of natural persons or of a terrorist attack; and the detection, localisation, identification or prosecution of perpetrators or suspects of the criminal offences referred to in Council Framework Decision 2002/584/JHA38 if those criminal offences are punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least ten years and as they are defined in the law of that Member State. Such threshold for the custodial sentence or detention order in accordance with national law contributes to ensure that the offence should be serious enough to potentially justify the use of ‘real-time’ remote biometric identification systems. Moreover, of the 32 criminal offences listed in the Council Framework Decision 2002/584/JHA, some are in practice likely to be more relevant than others, in that the recourse to ‘real-time’ remote biometric identification will foreseeably be necessary and proportionate to highly varying degrees for the practical pursuit of the detection, localisation, identification or prosecution of a perpetrator or suspect of the different criminal offences listed and having regard to the likely differences in the seriousness, probability and scale of the harm or possible negative consequences.
_________________
38 Council Framework Decision 2002/584/JHA of 13 June 2002 on the European arrest warrant and the surrender procedures between Member States (OJ L 190, 18.7.2002, p. 1).
2022/03/31
Committee: ITRE
Amendment 180 #
Proposal for a regulation
Recital 20
(20) In order to ensure that those systems are used in a responsible and proportionate manner, it is also important to establish that, in each of those three exhaustively listed and narrowly defined situations, certain elements should be taken into account, in particular as regards the nature of the situation giving rise to the request and the consequences of the use for the rights and freedoms of all persons concerned and the safeguards and conditions provided for with the use. In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement should be subject to appropriate limits in time and space, having regard in particular to the evidence or indications regarding the threats, the victims or perpetrator. The reference database of persons should be appropriate for each use case in each of the three situations mentioned above.
deleted
2022/03/31
Committee: ITRE
Amendment 185 #
Proposal for a regulation
Recital 21
(21) Each use of a ‘real-time’ remote biometric identification system in publicly accessible spaces for the purpose of law enforcement must be subject to an express and specific authorisation by an independent judicial authority of a Member State. Such authorisation absolutely must be obtained prior to the use.
2022/03/31
Committee: ITRE
Amendment 188 #
Proposal for a regulation
Recital 23
(23) The use of AI systems for ‘real- time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement necessarily involves the processing of biometric data. The rules of this Regulation that prohibit, subject to certain exceptions, such use, which are based on Article 16 TFEU, should apply as lex specialis in respect of the rules on the processing of biometric data contained in Article 10 of Directive (EU) 2016/680, thus regulating such use and the processing of biometric data involved in an exhaustive manner. Therefore, such use and processing should only be possible in as far as it is compatible with the framework set by this Regulation, without there being scope, outside that framework, for the competent authorities, where they act for purpose of law enforcement, to use such systems and process such data in connection thereto on the grounds listed in Article 10 of Directive (EU) 2016/680. In this context, this Regulation is not intended to provide the legal basis for the processing of personal data under Article 8 of Directive 2016/680. However, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for purposes other than law enforcement, including by competent authorities, should not be covered by the specific framework regarding such use for the purpose of law enforcement set by this Regulation. Such use for purposes other than law enforcement should therefore not be subject to the requirement of an authorisation under this Regulation and the applicable detailed rules of national law that may give effect to it.
2022/03/31
Committee: ITRE
Amendment 190 #
Proposal for a regulation
Recital 27
(27) High-risk AI systems should only be placed on the Union market or put into service if they comply with certain mandatory requirements. Those requirements should ensure that high-risk AI systems available in the Union or whose output is otherwise used in the Union do not pose unacceptable risks to important Union public interests as recognised and protected by Union law. AI systems identified as high-risk should be limited to those that have a significant harmful impact on the health, safety and fundamental rights of persons in the Union and such limitation minimises any potential restriction to international trade, if any.
2022/03/31
Committee: ITRE
Amendment 193 #
Proposal for a regulation
Recital 28
(28) AI systems could produce adverse outcomes to health and safety of persons, in particular when such systems operate as components of products. Consistently with the objectives of Union harmonisation legislation to facilitate the free movement of products in the internal market and to ensure that only safe and otherwise compliant products find their way into the market, it is important that the safety risks that may be generated by a product as a whole due to its digital components, including AI systems, are duly prevented and mitigated. For instance, increasingly autonomous robots, whether in the context of manufacturing or personal assistance and care, should be able to safely operate and perform their functions in complex environments and always under close supervision by human intelligence, with the ability to stop any of their actions quickly, if necessary. Similarly, in the health sector where the stakes for life and health are particularly high, increasingly sophisticated diagnostics systems and systems supporting human decisions should be reliable and accurate and never totally independent of human control. The extent of the adverse impact caused by the AI system on the fundamental rights protected by the Charter is of particular relevance when classifying an AI system as high-risk or medium/low-risk. Those rights include the right to human dignity, respect for private and family life, protection of personal data, freedom of expression and information, freedom of assembly and of association, and non-discrimination, consumer protection, workers’ rights, rights of persons with disabilities, right to an effective remedy and to a fair trial, right of defence and the presumption of innocence, right to good administration. In addition to those rights, it is important to highlight that children have specific rights as enshrined in Article 24 of the EU Charter and in the United Nations Convention on the Rights of the Child (further elaborated in the UNCRC General Comment No. 25 as regards the digital environment), both of which require consideration of the children’s vulnerabilities and provision of such protection and care as necessary for their well-being. The fundamental right to a high level of environmental protection enshrined in the Charter and implemented in Union policies should also be considered when assessing the severity of the harm that an AI system can cause, including in relation to the health and safety of persons.
2022/03/31
Committee: ITRE
Amendment 197 #
Proposal for a regulation
Recital 30
(30) As regards AI systems that are safety components of products, or which are themselves products, falling within the scope of certain Union harmonisation legislation, it is also appropriate to classify them as high-risk under this Regulation if the product in question undergoes the conformity assessment procedure with a third-party conformity assessment body pursuant to that relevant Union harmonisation legislation. Examples of such products are machinery, toys, lifts, equipment and protective systems intended for use in potentially explosive atmospheres, radio equipment, pressure equipment, recreational craft equipment, cableway installations, appliances burning gaseous fuels, medical devices, and in vitro diagnostic medical devices.
2022/03/31
Committee: ITRE
Amendment 198 #
Proposal for a regulation
Recital 32
(32) As regards stand-alone AI systems, meaning high-risk AI systems other than those that are safety components of products, or which are themselves products, it is appropriate to classify them as high-risk if, in the light of their intended purpose, they pose a high risk of harm to the health and safety or the fundamental rights of persons, taking into account both the severity of the possible harm and the possibility that it may occur, and they are used in a number of specifically pre-defined areas specified in the Regulation. The identification of those systems is based on the same methodology and criteria envisaged also for any future amendments of the list of high-risk AI systems.
2022/03/31
Committee: ITRE
Amendment 203 #
Proposal for a regulation
Recital 37
(37) Another area in which the use of AI systems deserves special consideration is the access to and enjoyment of certain essential private and public services and benefits necessary for people to fully participate in society or to improve one’s standard of living. In particular, AI systems used to evaluate the credit score or creditworthiness of natural persons should be classified as high-risk AI systems, since they determine those persons’ access to financial resources or essential services such as housing, electricity, and telecommunication services. AI systems used for this purpose may lead to discrimination of persons or groups and perpetuate historical patterns of discrimination, for example based on racial or ethnic origins, disabilities, age, sexual orientation, or create new forms of discriminatory impacts. Considering the very limited scale of the impact and the available alternatives on the market, it is appropriate to exempt AI systems for the purpose of creditworthiness assessment and credit scoring when put into service by small-scale providers for their own use. Natural persons applying for or receiving public assistance benefits and services from public authorities are typically dependent on those benefits and services and in a vulnerable position in relation to the responsible authorities. If AI systems are used for determining whether such benefits and services should be denied, reduced, revoked or reclaimed by authorities, they may have a significant impact on persons’ livelihood and may infringe their fundamental rights, such as the right to social protection, non-discrimination, human dignity or an effective remedy. Those systems should therefore be classified as high-risk. Nonetheless, this Regulation should not hamper the development and use of innovative approaches in the public administration, which would stand to benefit from a wider use of compliant and safe AI systems, provided that those systems do not entail a high risk to legal and natural persons. Finally, AI systems used to dispatch or establish priority in the dispatching of emergency first response services should also be classified as high-risk since they make decisions in very critical situations for the life and health of persons and their property.
2022/03/31
Committee: ITRE
Amendment 204 #
Proposal for a regulation
Recital 37
(37) Another area in which the use of AI systems deserves special consideration is the access to and enjoyment of certain essential private and public services and benefits necessary for people to fully participate in society or to improve one’s standard of living. In particular, AI systems used to evaluate the credit score or creditworthiness of natural persons should be classified as high-risk AI systems, since they determine those persons’ access to financial resources or essential services such as housing, electricity, and telecommunication services. AI systems used for this purpose may lead to discrimination of persons or groups and perpetuate historical patterns of discrimination, for example based on racial or ethnic origins, disabilities, age, sexual orientation, political orientation or personal opinions, or create new forms of discriminatory impacts. Natural persons applying for or receiving public assistance benefits and services from public authorities are typically dependent on those benefits and services and in a vulnerable position in relation to the responsible authorities. If AI systems are used for determining whether such benefits and services should be denied, reduced, revoked or reclaimed by authorities, they may have a significant impact on persons’ livelihood and may infringe their fundamental rights, such as the right to social protection, non-discrimination, human dignity or an effective remedy. Those systems should therefore be classified as high-risk. Nonetheless, this Regulation should not hamper the development and use of innovative approaches in the public administration, which would stand to benefit from a wider use of compliant and safe AI systems, provided that those systems do not entail a high risk to legal and natural persons. Finally, AI systems used to dispatch or establish priority in the dispatching of emergency first response services should also be classified as high-risk since they make decisions in very critical situations for the life and health of persons and their property.
2022/03/31
Committee: ITRE
Amendment 206 #
Proposal for a regulation
Recital 38
(38) Actions by law enforcement authorities involving certain uses of AI systems are characterised by a significant degree of power imbalance and may lead to surveillance, arrest or deprivation of a natural person’s liberty as well as other adverse impacts on fundamental rights guaranteed in the Charter. In particular, if the AI system is not trained with high quality data, does not meet adequate requirements in terms of its accuracy or robustness, or is not properly designed and tested before being put on the market or otherwise put into service, it may single out people in a discriminatory or otherwise incorrect or unjust manner. Furthermore, the exercise of important procedural fundamental rights, such as the right to an effective remedy and to a fair trial as well as the right of defence and the presumption of innocence, could be hampered, in particular, where such AI systems are not sufficiently transparent, explainable and documented. It is therefore appropriate to classify as high-risk a number of AI systems intended to be used in the law enforcement context where accuracy, reliability and transparency is particularly important to avoid adverse impacts, retain public trust and ensure accountability and effective redress. In view of the nature of the activities in question and the risks relating thereto, those high-risk AI systems should include in particular AI systems intended to be used by law enforcement authorities for individual risk assessments, polygraphs and similar tools or to detect the emotional state of a natural person, to detect ‘deep fakes’, for the evaluation of the reliability of evidence in criminal proceedings, for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons, or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups, for profiling in the course of detection, investigation or prosecution of criminal offences, as well as for crime analytics regarding natural persons. AI systems specifically intended to be used for administrative proceedings by tax and customs authorities should not be considered high-risk AI systems used by law enforcement authorities for the purposes of prevention, detection, investigation and prosecution of criminal offences.
2022/03/31
Committee: ITRE
Amendment 209 #
Proposal for a regulation
Recital 42
(42) To eliminate the risks from high-risk AI systems placed or otherwise put into service on the Union market for users and affected persons, the use of these systems must be prohibited, and only systems known to be medium/low-risk must be permitted to be placed on the market, applying to the latter certain mandatory requirements, taking into account the intended purpose of the use of the system and according to the risk management system to be established by the provider.
2022/03/31
Committee: ITRE
Amendment 228 #
Proposal for a regulation
Recital 68
(68) Under certain conditions, rapid availability of innovative technologies may be crucial for health and safety of persons and for society as a whole. It is thus appropriate that under exceptional reasons of public security or protection of life and health of natural persons and the protection of industrial and commercial property, Member States could authorise the placing on the market or putting into service of AI systems which have not undergone a conformity assessment.
deleted
2022/03/31
Committee: ITRE
Amendment 232 #
Proposal for a regulation
Recital 70
(70) Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. The use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems. In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. Moreover, natural persons should be notified when they are exposed to an emotion recognition system or a biometric categorisation system. Such information and notifications should be provided in accessible formats for persons with disabilities. Further, users, who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a person to be authentic, should disclose that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin.
2022/03/31
Committee: ITRE
Amendment 238 #
Proposal for a regulation
Recital 71
(71) Artificial intelligence is a rapidly developing family of technologies that requires novel forms of regulatory oversight and a safe space for experimentation, while ensuring responsible innovation and integration of appropriate safeguards and risk mitigation measures. To ensure a legal framework that is innovation-friendly, future-proof and resilient to disruption, national competent authorities from one or more Member States should be advised to establish artificial intelligence regulatory sandboxes to facilitate the development and testing of innovative AI systems under strict regulatory oversight before these systems are placed on the market or otherwise put into service.
2022/03/31
Committee: ITRE
Amendment 252 #
Proposal for a regulation
Recital 81
(81) The development of AI systems other than high-risk AI systems in accordance with the requirements of this Regulation may lead to a larger uptake of trustworthy artificial intelligence in the Union. Providers of non-high-risk AI systems should nonetheless create codes of conduct intended to foster the voluntary application of the mandatory requirements applicable to high-risk AI systems. Providers should also be encouraged to apply on a voluntary basis additional requirements related, for example, to environmental sustainability, accessibility to persons with disability, stakeholders’ participation in the design and development of AI systems, and diversity of the development teams. The Commission may develop initiatives, including of a sectorial nature, to facilitate the lowering of technical barriers hindering cross-border exchange of data for AI development, including on data access infrastructure, semantic and technical interoperability of different types of data.
2022/03/31
Committee: ITRE
Amendment 257 #
Proposal for a regulation
Article 1 – paragraph 1 – point c
(c) specific requirements for high-risk and non-high-risk AI systems and obligations for operators of such systems;
2022/03/31
Committee: ITRE
Amendment 261 #
Proposal for a regulation
Article 2 – paragraph 3
3. This Regulation shall not apply to AI systems developed or used exclusively for military purposes.
deleted
2022/03/31
Committee: ITRE
Amendment 276 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1
(1) ‘artificial intelligence system’ (AI system) means an automated system or software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with;
2022/03/31
Committee: ITRE
Amendment 319 #
Proposal for a regulation
Article 4 – paragraph 1
The Commission is empowered to adopt delegated acts in accordance with Article 73 to amend the list of techniques and approaches listed in Annex I, in order to update that list to market and technological developments on the basis of characteristics that are similar to the techniques and approaches listed therein.
2022/03/31
Committee: ITRE
Amendment 344 #
Proposal for a regulation
Article 5 – paragraph 2 – point b
(b) the consequences of the use of the system for the rights and freedoms of all persons concerned, irrespective of the level of seriousness, probability or scale of those consequences.
2022/03/31
Committee: ITRE
Amendment 347 #
Proposal for a regulation
Article 5 – paragraph 3 – introductory part
3. As regards paragraphs 1, point (d) and 2, each individual use for the purpose of law enforcement of a ‘real-time’ remote biometric identification system in publicly accessible spaces shall be subject to a prior authorisation granted by a judicial authority or by an independent administrative authority of the Member State in which the use is to take place, issued upon a reasoned request and in accordance with the detailed rules of national law referred to in paragraph 4. However, in a duly justified situation of urgency, the use of the system may be commenced without an authorisation and the authorisation may be requested only during or after the use.
2022/03/31
Committee: ITRE
Amendment 372 #
Proposal for a regulation
Article 7 – paragraph 2 – point g
(g) the extent to which the outcome produced with an AI system is not easily reversible or remedied, whereby outcomes having an impact on the health or safety of persons shall not be considered as easily reversible or remedied;
2022/03/31
Committee: ITRE
Amendment 386 #
Proposal for a regulation
Article 9 – paragraph 2 – point a
(a) identification and analysis of the known or any foreseeable risks associated with each high-risk AI system;
2022/03/31
Committee: ITRE
Amendment 393 #
Proposal for a regulation
Article 9 – paragraph 4 – introductory part
4. The risk management measures referred to in paragraph 2, point (d) shall be such that any residual risk associated with each hazard as well as the overall residual risk of the high-risk AI systems is judged limited and acceptable by the user, provided that the high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse. Those residual risks shall be communicated to the user.
2022/03/31
Committee: ITRE
Amendment 430 #
Proposal for a regulation
Article 11 – paragraph 3
3. The Commission is empowered to adopt delegated acts in accordance with Article 73 to amend Annex IV where necessary to ensure that, in the light of technical progress, the technical documentation provides all the necessary information to assess the compliance of the system with the requirements set out in this Chapter.
2022/03/31
Committee: ITRE
Amendment 431 #
Proposal for a regulation
Article 12 – paragraph 1
1. High-risk AI systems shall be designed and developed with capabilities offering the technical possibility of automatically recording events (‘logs’) while the high-risk AI system is operating. Those logging capabilities shall conform to recognised standards or common specifications.
2022/03/31
Committee: ITRE
Amendment 433 #
Proposal for a regulation
Article 13 – paragraph 2
2. All AI systems, including high-risk AI systems, shall be accompanied by instructions for use in an appropriate digital format or otherwise that include concise, complete, correct and clear information that is relevant, accessible and comprehensible to users.
2022/03/31
Committee: ITRE
Amendment 434 #
Proposal for a regulation
Article 13 – paragraph 3 – point b – point ii
(ii) the level of accuracy, robustness and cybersecurity referred to in Article 15, where applicable, against which the high-risk AI system has been tested and validated and which can be expected, and any known and foreseeable circumstances that may have an impact on that expected level of accuracy, robustness and cybersecurity;
2022/03/31
Committee: ITRE
Amendment 435 #
Proposal for a regulation
Article 13 – paragraph 3 – point b – point iii
(iii) any known or foreseeable circumstance, related to the use of the high-risk AI system in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, which may lead to risks to the health and safety or fundamental rights;
2022/03/31
Committee: ITRE
Amendment 440 #
Proposal for a regulation
Article 14 – paragraph 1
1. High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can always be effectively overseen by natural persons during the period in which the AI system is in use.
2022/03/31
Committee: ITRE
Amendment 442 #
Proposal for a regulation
Article 14 – paragraph 2
2. Human oversight shall aim at preventing or minimising the risks to health, safety or fundamental rights that may emerge when an AI system, especially a high-risk AI system, is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, in particular when such risks persist notwithstanding the application of other requirements set out in this Chapter.
2022/03/31
Committee: ITRE
Amendment 449 #
Proposal for a regulation
Article 14 – paragraph 4 – point b
(b) remain vigilant and aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system (‘automation bias’), in particular for high-risk AI systems used to provide information or recommendations for decisions to be taken by natural persons;
2022/03/31
Committee: ITRE
Amendment 451 #
Proposal for a regulation
Article 14 – paragraph 4 – point d
(d) be able to decide, in all cases, not to use the high-risk AI system or otherwise disregard, override or reverse the output of the high-risk AI system;
2022/03/31
Committee: ITRE
Amendment 456 #
Proposal for a regulation
Article 15 – paragraph 1
1. All high-risk AI systems shall be designed and developed in such a way that they achieve, in the light of their intended purpose, an appropriate level of accuracy, robustness and cybersecurity, and perform consistently in those respects throughout their lifecycle.
2022/03/31
Committee: ITRE
Amendment 462 #
Proposal for a regulation
Article 15 – paragraph 3 – introductory part
3. All AI systems shall be resilient as regards errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, in particular due to their interaction with natural persons or other systems.
2022/03/31
Committee: ITRE
Amendment 465 #
Proposal for a regulation
Article 15 – paragraph 4 – introductory part
4. All AI systems, especially high-risk AI systems, shall be resilient as regards attempts by unauthorised third parties to alter their use or performance by exploiting the system vulnerabilities.
2022/03/31
Committee: ITRE
Amendment 467 #
Proposal for a regulation
Article 15 – paragraph 4 – subparagraph 1
The technical solutions aimed at ensuring the cybersecurity of all AI systems shall be appropriate to the relevant circumstances and the risks.
2022/03/31
Committee: ITRE
Amendment 470 #
Proposal for a regulation
Article 15 – paragraph 4 – subparagraph 2
The technical solutions to address AI specific vulnerabilities shall include, where appropriate, measures to prevent and control for every possible attack, including attacks trying to manipulate the training dataset (‘data poisoning’), inputs designed to cause the model to make a mistake (‘adversarial examples’), or model flaws.
2022/03/31
Committee: ITRE
Amendment 472 #
Proposal for a regulation
Article 16 – paragraph 1 – introductory part
Providers of AI systems, and high-risk AI systems in particular, shall:
2022/03/31
Committee: ITRE
Amendment 473 #
Proposal for a regulation
Article 16 – paragraph 1 – point a
(a) ensure that their high-risk AI systems are compliant with the requirements set out in Chapter 2 of this Title;
2022/03/31
Committee: ITRE
Amendment 474 #
Proposal for a regulation
Article 16 – paragraph 1 – point d
(d) when under their control, keep the logs automatically generated by their high-risk AI systems;
2022/03/31
Committee: ITRE
Amendment 475 #
Proposal for a regulation
Article 16 – paragraph 1 – point j
(j) upon request of a national competent authority, demonstrate the conformity of the high-risk AI system with the requirements set out in Chapter 2 of this Title.
2022/03/31
Committee: ITRE
Amendment 478 #
Proposal for a regulation
Article 20 – paragraph 1
1. Providers of high-risk AI systems shall keep the logs automatically generated by their high-risk AI systems, to the extent such logs are under their control by virtue of a contractual arrangement with the user or otherwise by law. The logs shall be kept for a period that is appropriate in the light of the intended purpose of the high-risk AI system and applicable legal obligations under Union or national law.
2022/03/31
Committee: ITRE
Amendment 480 #
Proposal for a regulation
Article 20 – paragraph 2
2. Providers that are credit institutions regulated by Directive 2013/36/EU shall maintain the logs automatically generated by their high-risk AI systems as part of the documentation under Article 74 of that Directive.
2022/03/31
Committee: ITRE
Amendment 481 #
Proposal for a regulation
Article 21 – paragraph 1
All providers of AI systems, and high-risk AI systems in particular, which consider or have reason to consider that an AI system which they have placed on the market or put into service is not in conformity with this Regulation shall immediately take the necessary corrective actions to bring that system into conformity, to withdraw it or to recall it, as appropriate. They shall inform the distributors of the high-risk AI system in question and, where applicable, the authorised representative and importers accordingly.
2022/03/31
Committee: ITRE
Amendment 482 #
Proposal for a regulation
Article 22 – paragraph 1
Where the high-risk AI system presents a risk within the meaning of Article 65(1) and that risk is known to the provider of the system, that provider shall immediately inform the national competent authorities of the Member States in which it made the system available and, where applicable, the notified body that issued a certificate for the high-risk AI system, in particular of the non-compliance and of any corrective actions taken.
2022/03/31
Committee: ITRE
Amendment 485 #
Proposal for a regulation
Article 23 – paragraph 1
All providers of AI systems, especially high-risk AI systems, shall, upon request by a national competent authority, provide that authority with all the information and documentation necessary to demonstrate the conformity of the high-risk AI system with the requirements set out in Chapter 2 of this Title, in an official Union language determined by the Member State concerned. Upon a reasoned request from a national competent authority, providers shall also give that authority access to the logs automatically generated by the high-risk AI system, to the extent such logs are under their control by virtue of a contractual arrangement with the user or otherwise by law.
2022/03/31
Committee: ITRE
Amendment 486 #
Proposal for a regulation
Article 26 – paragraph 2
2. Where an importer considers or has reason to consider that an AI system is not in conformity with this Regulation, it shall not place that system on the market until that AI system has been brought into conformity. Where the high-risk AI system presents a risk within the meaning of Article 65(1), the importer shall inform the provider of the AI system and the market surveillance authorities to that effect.
2022/03/31
Committee: ITRE
Amendment 487 #
Proposal for a regulation
Article 26 – paragraph 3
3. Importers shall indicate their name, registered trade name or registered trade mark, and the address at which they can be contacted on the high-risk AI system or, where that is not possible, on its packaging or its accompanying documentation, as applicable.
2022/03/31
Committee: ITRE
Amendment 488 #
Proposal for a regulation
Article 26 – paragraph 4
4. Importers shall ensure that, while an AI system is under their responsibility, where applicable, storage or transport conditions do not jeopardise its compliance with the requirements set out in Chapter 2 of this Title.
2022/03/31
Committee: ITRE
Amendment 489 #
Proposal for a regulation
Article 26 – paragraph 5
5. Importers shall provide national competent authorities, upon a reasoned request, with all necessary information and documentation to demonstrate the conformity of an AI system with the requirements set out in Chapter 2 of this Title in a language which can be easily understood by that national competent authority, including access to the logs automatically generated by the high-risk AI system to the extent such logs are under the control of the provider by virtue of a contractual arrangement with the user or otherwise by law. They shall also cooperate with those authorities on any action national competent authority takes in relation to that system.
2022/03/31
Committee: ITRE
Amendment 490 #
Proposal for a regulation
Article 27 – paragraph 2
2. Where a distributor considers or has reason to consider that an AI system is not in conformity with the requirements set out in Chapter 2 of this Title, it shall not make the high-risk AI system available on the market until that system has been brought into conformity with those requirements. Furthermore, where the system presents a risk within the meaning of Article 65(1), the distributor shall inform the provider or the importer of the system, as applicable, to that effect.
2022/03/31
Committee: ITRE
Amendment 491 #
Proposal for a regulation
Article 27 – paragraph 3
3. Distributors shall ensure that, while an AI system is under their responsibility, where applicable, storage or transport conditions do not jeopardise the compliance of the system with the requirements set out in Chapter 2 of this Title.
2022/03/31
Committee: ITRE
Amendment 492 #
Proposal for a regulation
Article 27 – paragraph 4
4. A distributor that considers or has reason to consider that an AI system which it has made available on the market is not in conformity with the requirements set out in Chapter 2 of this Title shall take the corrective actions necessary to bring that system into conformity with those requirements, to withdraw it or recall it or shall ensure that the provider, the importer or any relevant operator, as appropriate, takes those corrective actions. Where the high-risk AI system presents a risk within the meaning of Article 65(1), the distributor shall immediately inform the national competent authorities of the Member States in which it has made the product available to that effect, giving details, in particular, of the non-compliance and of any corrective actions taken.
2022/03/31
Committee: ITRE
Amendment 502 #
Proposal for a regulation
Article 40 – paragraph 1
High-risk AI systems which are in conformity with harmonised standards or parts thereof the references of which have been published in the Official Journal of the European Union shall be presumed to be in conformity with the requirements set out in Chapter 2 of this Title, to the extent those standards cover those requirements.
2022/03/31
Committee: ITRE
Amendment 509 #
Proposal for a regulation
Article 41 – paragraph 3
3. High-risk AI systems which are in conformity with the common specifications referred to in paragraph 1 shall be presumed to be in conformity with the requirements set out in Chapter 2 of this Title, to the extent those common specifications cover those requirements.
2022/03/31
Committee: ITRE
Amendment 518 #
Proposal for a regulation
Article 47
[...]
deleted
2022/03/31
Committee: ITRE
Amendment 520 #
Proposal for a regulation
Article 48 – paragraph 2
2. The EU declaration of conformity shall state that the high-risk AI system in question meets the requirements set out in Chapter 2 of this Title. The EU declaration of conformity shall contain the information set out in Annex V and shall be translated into an official Union language or languages required by the Member State(s) in which the high-risk AI system is made available.
2022/03/31
Committee: ITRE
Amendment 530 #
Proposal for a regulation
Article 52 – paragraph 1
1. Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. This obligation shall not apply to AI systems authorised by law to detect, prevent, investigate and prosecute criminal offences, unless those systems are available for the public to report a criminal offence.
2022/03/31
Committee: ITRE
Amendment 536 #
Proposal for a regulation
Article 52 – paragraph 2
2. Users of an emotion recognition system or a biometric categorisation system shall inform of the operation of the system the natural persons exposed thereto. This obligation shall not apply to AI systems used for biometric categorisation, which are permitted by law to detect, prevent and investigate criminal offences.
2022/03/31
Committee: ITRE
Amendment 540 #
Proposal for a regulation
Article 52 – paragraph 3 – subparagraph 1
However, the first subparagraph shall not apply where the use is authorised by law to detect, prevent, investigate and prosecute criminal offences or it is necessary for the exercise of the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU, and subject to appropriate safeguards for the rights and freedoms of third parties.
deleted
2022/03/31
Committee: ITRE
Amendment 562 #
Proposal for a regulation
Article 53 – paragraph 3
3. The AI regulatory sandboxes shall not affect the supervisory and corrective powers of the competent authorities. Any significant risks to health and safety and fundamental rights identified during the development and testing of such systems shall result in immediate mitigation or closure of the sandbox and, failing that, in the suspension of the development and testing process until such mitigation takes place.
2022/03/31
Committee: ITRE
Amendment 570 #
Proposal for a regulation
Article 55 – paragraph 1 – introductory part
1. Member States may undertake the following actions:
2022/03/31
Committee: ITRE
Amendment 613 #
Proposal for a regulation
Article 61 – paragraph 1
1. Providers shall establish and document a post-market monitoring system in a manner that is proportionate to the nature of the artificial intelligence technologies and the risks of the high-risk AI system.
2022/03/31
Committee: ITRE
Amendment 614 #
Proposal for a regulation
Article 61 – paragraph 2
2. The post-market monitoring system shall actively and systematically collect, document and analyse relevant data provided by users or collected through other sources on the performance of high-risk AI systems throughout their lifetime, and allow the provider to evaluate the continuous compliance of AI systems with the requirements set out in Title III, Chapter 2.
2022/03/31
Committee: ITRE
Amendment 616 #
Proposal for a regulation
Article 62 – paragraph 1 – introductory part
1. Providers of high-risk AI systems placed on the Union market shall report any serious incident or any malfunctioning of those systems which constitutes a breach of obligations under Union law intended to protect fundamental rights to the market surveillance authorities of the Member States where that incident or breach occurred.
2022/03/31
Committee: ITRE
Amendment 627 #
Proposal for a regulation
Article 72 – paragraph 2 – introductory part
2. The following infringements shall be subject to administrative fines of up to 1 000 000 EUR:
2022/03/31
Committee: ITRE
Amendment 628 #
Proposal for a regulation
Article 72 – paragraph 3
3. The non-compliance of the AI system with any requirements or obligations under this Regulation, other than those laid down in Articles 5 and 10, shall be subject to administrative fines of up to 500 000 EUR.
2022/03/31
Committee: ITRE
Amendment 629 #
Proposal for a regulation
Article 83 – paragraph 1 – introductory part
1. This Regulation shall not apply to the AI systems which are components of the large-scale IT systems established by the legal acts listed in Annex IX that have been placed on the market or put into service before [3 months after the date of application of this Regulation referred to in Article 85(2)], unless the replacement or amendment of those legal acts leads to a significant change in the design or intended purpose of the AI system or AI systems concerned.
2022/03/31
Committee: ITRE