
467 Amendments of Elena KOUNTOURA related to 2021/0106(COD)

Amendment 129 #
Proposal for a regulation
Recital 1
(1) The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, marketing and use of artificial intelligence in conformity with Union values. This Regulation pursues a number of overriding reasons of public interest, such as a high level of protection of health, safety, environment and fundamental rights, as well as consumer protection, and it ensures the free movement of AI-based goods and services cross-border, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation.
2022/03/31
Committee: ITRE
Amendment 140 #
Proposal for a regulation
Recital 3 a (new)
(3a) Furthermore, in order for Member States to fight against climate change, to achieve climate-neutrality and to meet the Sustainable Development Goals (SDGs), European companies should ensure the sustainable design of AI systems to reduce resource usage and energy consumption, thereby limiting the risks to the environment. AI systems have the potential to automatically provide businesses with detailed insight into their emissions, including value chains, and forecast future emissions, thus helping to adjust and achieve the Union's emission targets.
2022/03/31
Committee: ITRE
Amendment 141 #
Proposal for a regulation
Recital 4
(4) At the same time, depending on the circumstances regarding its specific application and use, artificial intelligence may generate risks and cause harm to public interests and rights that are protected by Union law. Such harm might be material or immaterial and might affect one or more persons, a group of persons or society as a whole, as well as the environment.
2022/03/31
Committee: ITRE
Amendment 143 #
Proposal for a regulation
Recital 5
(5) A Union legal framework laying down harmonised rules on artificial intelligence is therefore needed to foster the development, use and uptake of artificial intelligence in the internal market that at the same time meets a high level of protection of public interests, such as health and safety, the environment and the protection of fundamental rights and values, as recognised and protected by Union law. To achieve that objective, rules regulating the placing on the market and putting into service of certain AI systems should be laid down, thus ensuring the smooth functioning of the internal market and allowing those systems to benefit from the principle of free movement of goods and services. By laying down those rules, this Regulation supports the objective of the Union of being a global leader in the development of secure, trustworthy and ethical artificial intelligence, as stated by the European Council33 , and it ensures the protection of ethical principles, as specifically requested by the European Parliament34 . _________________ 33 European Council, Special meeting of the European Council (1 and 2 October 2020) – Conclusions, EUCO 13/20, 2020, p. 6. 34 European Parliament resolution of 20 October 2020 with recommendations to the Commission on a framework of ethical aspects of artificial intelligence, robotics and related technologies, 2020/2012(INL).
2022/03/31
Committee: ITRE
Amendment 147 #
Proposal for a regulation
Recital 6
(6) The notion of AI system should be clearly defined to ensure legal certainty, while providing the flexibility to accommodate future technological developments. The definition should be based on the key functional characteristics of the software (and possibly also hardware), in particular the ability, for a given set of human-defined objectives, to generate outputs such as content, predictions, recommendations, or decisions which influence the environment with which the system interacts, be it in a physical or digital dimension. AI systems can be designed to operate with varying levels of autonomy and be used on a stand-alone basis or as a component of a product, irrespective of whether the system is physically integrated into the product (embedded) or serves the functionality of the product without being integrated therein (non-embedded). The definition of AI system should be complemented by a list of specific techniques and approaches used for its development, which should be kept up-to-date in the light of market and technological developments through the adoption of delegated acts by the Commission to amend that list.
2022/03/31
Committee: ITRE
Amendment 150 #
Proposal for a regulation
Recital 12 a (new)
(12a) In order to ensure a minimum level of transparency on the ecological sustainability aspects of an AI system, providers and users should document, for any AI system, parameters including but not limited to resource consumption resulting from the design, data management and training and from the underlying infrastructures of the AI system, as well as the methods to reduce such impact.
2022/03/31
Committee: ITRE
Amendment 153 #
Proposal for a regulation
Recital 13
(13) In order to ensure a consistent and high level of protection of public interests as regards health, safety, the environment and fundamental rights and values, common normative standards for all high-risk AI systems should be established. Those standards should be consistent with the Charter of fundamental rights of the European Union (the Charter), the European Green Deal (The Green Deal), the Joint Declaration on Digital Rights of the Union (the Declaration) and the Ethics Guidelines for Trustworthy Artificial Intelligence (AI) of the High-Level Expert Group on Artificial Intelligence (AI HLEG), and should be non-discriminatory and in line with the Union’s international trade commitments.
2022/03/31
Committee: ITRE
Amendment 167 #
Proposal for a regulation
Recital 16
(16) The placing on the market, putting into service or use of certain AI systems intended to distort human behaviour, whereby physical or psychological harms are likely to occur, should be forbidden. Such AI systems deploy subliminal components individuals cannot perceive or exploit vulnerabilities of children and people due to their age, physical or mental incapacities. They do so by materially distorting the behaviour of a person and in a manner that causes or is likely to cause harm to that or another person. The intention may not be presumed if the distortion of human behaviour results from factors external to the AI system which are outside of the control of the provider or the user. Research for legitimate purposes in relation to such AI systems should not be stifled by the prohibition, if such research does not amount to use of the AI system in human-machine relations that exposes natural persons to harm and such research is carried out in accordance with recognised ethical standards for scientific research.
2022/03/31
Committee: ITRE
Amendment 173 #
Proposal for a regulation
Recital 17
(17) AI systems providing social scoring of natural persons for general purpose by public authorities or on their behalf may lead to discriminatory outcomes and the exclusion of certain groups. They may violate the right to dignity and non- discrimination and the values of equality and justice. Such AI systems evaluate or classify the trustworthiness of natural persons based on their social behaviour in multiple contexts or known or predicted personal or personality characteristics. The social score obtained from such AI systems may lead to the detrimental or unfavourable treatment of natural persons or whole groups thereof in social contexts, which are unrelated to the context in which the data was originally generated or collected or to a detrimental treatment that is disproportionate or unjustified to the gravity of their social behaviour. Such AI systems should be therefore prohibited.
2022/03/31
Committee: ITRE
Amendment 175 #
Proposal for a regulation
Recital 18
(18) The use of AI systems for ‘real- time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement is considered particularly intrusive in the rights and freedoms of the concerned persons, to the extent that it may affect the private life of a large part of the population, evoke a feeling of constant surveillance and indirectly dissuade the exercise of the freedom of assembly and other fundamental rights. In addition, the immediacy of the impact and the limited opportunities for further checks or corrections in relation to the use of such systems operating in ‘real-time’ carry heightened risks for the rights and freedoms of the persons that are concerned by law enforcement activities. Such AI systems should be therefore prohibited.
2022/03/31
Committee: ITRE
Amendment 178 #
Proposal for a regulation
Recital 19
(19) The use of those systems for the purpose of law enforcement should therefore be prohibited, except in three exhaustively listed and narrowly defined situations, where the use is strictly necessary to achieve a substantial public interest, the importance of which outweighs the risks. Those situations involve the search for potential victims of crime, including missing children; certain threats to the life or physical safety of natural persons or of a terrorist attack; and the detection, localisation, identification or prosecution of perpetrators or suspects of the criminal offences referred to in Council Framework Decision 2002/584/JHA38 if those criminal offences are punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years and as they are defined in the law of that Member State. Such threshold for the custodial sentence or detention order in accordance with national law contributes to ensure that the offence should be serious enough to potentially justify the use of ‘real-time’ remote biometric identification systems. Moreover, of the 32 criminal offences listed in the Council Framework Decision 2002/584/JHA, some are in practice likely to be more relevant than others, in that the recourse to ‘real-time’ remote biometric identification will foreseeably be necessary and proportionate to highly varying degrees for the practical pursuit of the detection, localisation, identification or prosecution of a perpetrator or suspect of the different criminal offences listed and having regard to the likely differences in the seriousness, probability and scale of the harm or possible negative consequences. _________________ 38 Council Framework Decision 2002/584/JHA of 13 June 2002 on the European arrest warrant and the surrender procedures between Member States (OJ L 190, 18.7.2002, p. 1).
2022/03/31
Committee: ITRE
Amendment 182 #
Proposal for a regulation
Recital 20
(20) In order to ensure that those systems are used in a responsible and proportionate manner, it is also important to establish that, in each of those three exhaustively listed and narrowly defined situations, certain elements should be taken into account, in particular as regards the nature of the situation giving rise to the request and the consequences of the use for the rights and freedoms of all persons concerned and the safeguards and conditions provided for with the use. In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement should be subject to appropriate limits in time and space, having regard in particular to the evidence or indications regarding the threats, the victims or perpetrator. The reference database of persons should be appropriate for each use case in each of the three situations mentioned above.
2022/03/31
Committee: ITRE
Amendment 183 #
Proposal for a regulation
Recital 21
(21) Each use of a ‘real-time’ remote biometric identification system in publicly accessible spaces for the purpose of law enforcement should be subject to an express and specific authorisation by a judicial authority or by an independent administrative authority of a Member State. Such authorisation should in principle be obtained prior to the use, except in duly justified situations of urgency, that is, situations where the need to use the systems in question is such as to make it effectively and objectively impossible to obtain an authorisation before commencing the use. In such situations of urgency, the use should be restricted to the absolute minimum necessary and be subject to appropriate safeguards and conditions, as determined in national law and specified in the context of each individual urgent use case by the law enforcement authority itself. In addition, the law enforcement authority should in such situations seek to obtain an authorisation as soon as possible, whilst providing the reasons for not having been able to request it earlier.
deleted
2022/03/31
Committee: ITRE
Amendment 192 #
Proposal for a regulation
Recital 27
(27) High-risk AI systems should only be placed on the Union market or put into service if they comply with certain mandatory requirements. Those requirements should ensure that high-risk AI systems available in the Union or whose output is otherwise used in the Union do not pose unacceptable risks to important Union public interests as recognised and protected by Union law. AI systems identified as high-risk should be limited to those that have a significant harmful impact not only on the health, safety and fundamental rights of persons in the Union, but also on the environment, democracy and the rule of law in the Union.
2022/03/31
Committee: ITRE
Amendment 202 #
Proposal for a regulation
Recital 36
(36) AI systems used in employment, workers management and access to self-employment, notably for the recruitment and selection of persons, for making decisions on promotion and termination and for task allocation, monitoring or evaluation of persons in work-related contractual relationships, should be prohibited, since those systems may appreciably impact the health, safety and security rules applicable in their work and at their workplaces and future career prospects and livelihoods of these persons. Relevant work-related contractual relationships should involve employees and persons providing services through platforms as referred to in the Commission Work Programme 2021. Such persons should in principle not be considered users within the meaning of this Regulation. Throughout the recruitment process and in the evaluation, promotion, or retention of persons in work-related contractual relationships, such systems may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. AI systems used to monitor the performance and behaviour of these persons should also be prohibited, since they may impact their rights to data protection and privacy.
2022/03/31
Committee: ITRE
Amendment 205 #
Proposal for a regulation
Recital 37
(37) Another area in which the use of AI systems deserves special consideration is the access to and enjoyment of certain essential private and public services and benefits necessary for people to fully participate in society or to improve one’s standard of living. In particular, AI systems used to evaluate the credit score or creditworthiness of natural persons should be classified as high-risk AI systems, since they determine those persons’ access to financial resources or essential services such as housing, electricity, and telecommunication services. AI systems used for this purpose may lead to discrimination of persons or groups and perpetuate historical patterns of discrimination, for example based on racial or ethnic origins, disabilities, age, sexual orientation, or create new forms of discriminatory impacts. Considering the very limited scale of the impact and the available alternatives on the market, it is appropriate to exempt AI systems for the purpose of creditworthiness assessment and credit scoring when put into service by small-scale providers for their own use. Natural persons applying for or receiving public assistance benefits and services from public authorities are typically dependent on those benefits and services and in a vulnerable position in relation to the responsible authorities. If AI systems are used for determining whether such benefits and services should be denied, reduced, revoked or reclaimed by authorities, they may have a significant impact on persons’ livelihood and may infringe their fundamental rights, such as the right to social protection, non-discrimination, human dignity or an effective remedy. Those systems should therefore be prohibited. Nonetheless, this Regulation should not hamper the development and use of innovative approaches in the public administration, which would stand to benefit from a wider use of compliant and safe AI systems, provided that those systems do not entail an unacceptable risk to legal and natural persons. Finally, AI systems used to dispatch or establish priority in the dispatching of emergency first response services should also be classified as high-risk since they make decisions in very critical situations for the life and health of persons and their property.
2022/03/31
Committee: ITRE
Amendment 207 #
Proposal for a regulation
Recital 38
(38) Actions by law enforcement authorities involving certain uses of AI systems are characterised by a significant degree of power imbalance and may lead to surveillance, arrest or deprivation of a natural person’s liberty as well as other adverse impacts on fundamental rights guaranteed in the Charter. In particular, if the AI system is not trained with high quality data, does not meet adequate requirements in terms of its accuracy or robustness, or is not properly designed and tested before being put on the market or otherwise put into service, it may single out people in a discriminatory or otherwise incorrect or unjust manner. Furthermore, the exercise of important procedural fundamental rights, such as the right to an effective remedy and to a fair trial as well as the right of defence and the presumption of innocence, could be hampered, in particular, where such AI systems are not sufficiently transparent, explainable and documented and where a redress procedure is not foreseen. It is therefore appropriate to prohibit some AI systems intended to be used in the law enforcement context where accuracy, reliability and transparency is particularly important to avoid adverse impacts, retain public trust and ensure accountability and effective redress, including the availability of redress-by-design mechanisms and procedures. In view of the nature of the activities in question and the risks relating thereto, those prohibited systems should include in particular AI systems intended to be used by law enforcement authorities for individual risk assessments, polygraphs and similar tools or to detect the emotional state of a natural person, to detect ‘deep fakes’, for the evaluation of the reliability of evidence in criminal proceedings, for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons, or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups, for profiling in the course of detection, investigation or prosecution of criminal offences, as well as for crime analytics regarding natural persons. AI systems specifically intended to be used for administrative proceedings by tax and customs authorities should not be included in such a ban.
2022/03/31
Committee: ITRE
Amendment 208 #
Proposal for a regulation
Recital 39
(39) AI systems used in migration, asylum and border control management affect people who are often in a particularly vulnerable position and who are dependent on the outcome of the actions of the competent public authorities. The accuracy, non-discriminatory nature and transparency of the AI systems used in those contexts are therefore particularly important to guarantee the respect of the fundamental rights of the affected persons, notably their rights to free movement, non-discrimination, protection of private life and personal data, international protection and good administration. It is therefore appropriate to prohibit AI systems intended to be used by the competent public authorities charged with tasks in the fields of migration, asylum and border control management as polygraphs and similar tools or to detect the emotional state of a natural person; for assessing certain risks posed by natural persons entering the territory of a Member State or applying for visa or asylum; for verifying the authenticity of the relevant documents of natural persons; for assisting competent public authorities for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the objective to establish the eligibility of the natural persons applying for a status. Other AI systems in the area of migration, asylum and border control management covered by this Regulation should comply with the relevant procedural requirements set by the Directive 2013/32/EU of the European Parliament and of the Council49 , the Regulation (EC) No 810/2009 of the European Parliament and of the Council50 and other relevant legislation. _________________ 49 Directive 2013/32/EU of the European Parliament and of the Council of 26 June 2013 on common procedures for granting and withdrawing international protection (OJ L 180, 29.6.2013, p. 60). 50 Regulation (EC) No 810/2009 of the European Parliament and of the Council of 13 July 2009 establishing a Community Code on Visas (Visa Code) (OJ L 243, 15.9.2009, p. 1).
2022/03/31
Committee: ITRE
Amendment 220 #
Proposal for a regulation
Recital 49
(49) High-risk AI systems should perform consistently throughout their lifecycle and meet an appropriate level of accuracy, robustness and cybersecurity in accordance with the generally acknowledged state of the art. The level of accuracy and accuracy metrics should be communicated in a clear, transparent, easily understandable and intelligible way to the users.
2022/03/31
Committee: ITRE
Amendment 223 #
Proposal for a regulation
Recital 51
(51) Cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts to alter their use, behaviour, performance or compromise their security properties by malicious third parties exploiting the system’s vulnerabilities. Cyberattacks against AI systems can leverage AI specific assets, such as training data sets (e.g. data poisoning) or trained models (e.g. adversarial attacks), or exploit vulnerabilities in the AI system’s digital assets or the underlying ICT infrastructure. To ensure a level of cybersecurity appropriate to the risks, suitable measures should therefore be taken by the providers of high-risk AI systems, as well as the notified bodies, competent national authorities and market surveillance authorities, also taking into account as appropriate the underlying ICT infrastructure.
2022/03/31
Committee: ITRE
Amendment 230 #
Proposal for a regulation
Recital 69
(69) In order to facilitate the work of the Commission and the Member States in the artificial intelligence field as well as to increase the transparency towards the public, providers and users of high-risk AI systems other than those related to products falling within the scope of relevant existing Union harmonisation legislation, should be required to register their high-risk AI system in an EU database, to be established and managed by the Commission. Certain AI systems listed in Article 52(1b) and (2) and uses thereof shall be registered in the EU database. In order to facilitate this, users shall request information listed in Annex VIII point 2(g) from providers of AI systems. Any uses of AI systems by public authorities or on their behalf shall also be registered in the EU database. In order to facilitate this, public authorities shall request information listed in Annex VIII point 3(g) from providers of AI systems. The Commission should be the controller of that database, in accordance with Regulation (EU) 2018/1725 of the European Parliament and of the Council55 . In order to ensure the full functionality of the database, when deployed, the procedure for setting the database should include the elaboration of functional specifications by the Commission and an independent audit report. In order to maximise the availability and use of the database by the public, the database, including the information made available through it, should comply with requirements under the European Accessibility Act. _________________ 55 Regulation (EU) 2018/1725 of the European Parliament and of the Council of 23 October 2018 on the protection of natural persons with regard to the processing of personal data by the Union institutions, bodies, offices and agencies and on the free movement of such data, and repealing Regulation (EC) No 45/2001 and Decision No 1247/2002/EC (OJ L 295, 21.11.2018, p. 39).
2022/03/31
Committee: ITRE
Amendment 231 #
Proposal for a regulation
Recital 70
(70) Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems. In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use or where the content forms part of an evidently creative, artistic or fictional cinematographic or analogous work. Moreover, natural persons should be notified when they are exposed to an emotion recognition system or a biometric categorisation system. Such information and notifications should be provided in accessible formats for persons with disabilities. Further, users, who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a person to be authentic, should disclose in an appropriate, clear and transparent manner that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin.
2022/03/31
Committee: ITRE
Amendment 235 #
Proposal for a regulation
Recital 71
(71) Artificial intelligence is a rapidly developing family of technologies that requires novel forms of regulatory oversight and a safe and fully controlled space for experimentation, while ensuring responsible innovation and integration of appropriate ethical safeguards and risk mitigation measures. To ensure a legal framework that is innovation-friendly, future-proof and resilient to disruption, national competent authorities from one or more Member States should be encouraged to establish artificial intelligence regulatory sandboxes to facilitate the development and testing of innovative AI systems under strict regulatory oversight before these systems are placed on the market or otherwise put into service. Regulatory sandboxes involving activities that may impact health, safety and fundamental rights, democracy and rule of law or the environment shall be developed in accordance with redress-by-design principles. Any significant risks identified during the development and testing of such systems shall result in immediate mitigation and, failing that, in the suspension of the development and testing process until such mitigation takes place. The legal basis of such sandboxes should comply with the requirements established in the existing data protection framework and should be consistent with the Charter of fundamental rights of the European Union.
2022/03/31
Committee: ITRE
Amendment 240 #
Proposal for a regulation
Recital 72
(72) The objectives of the regulatory sandboxes should be to foster AI innovation by establishing a strictly controlled experimentation and testing environment in the development and pre- marketing phase with a view to ensuring compliance of the innovative AI systems with this Regulation and other relevant Union and Member States legislation, as well as with the Charter of fundamental rights of the European Union and the General Data Protection Regulation; to enhance legal certainty for innovators and the competent authorities’ oversight and understanding of the opportunities, emerging risks and the impacts of AI use, to provide safeguards needed to build trust and reliance on AI systems and to accelerate access to markets, including by removing barriers for the public sector, small and medium enterprises (SMEs) and start-ups; to contribute to the development of ethical, socially responsible and environmentally sustainable AI systems. To ensure uniform implementation across the Union and economies of scale, it is appropriate to establish common rules for the regulatory sandboxes’ implementation and a framework for cooperation between the relevant authorities involved in the supervision of the sandboxes. This Regulation should provide the legal basis for the use of personal data collected for other purposes for developing certain AI systems in the public interest within the AI regulatory sandbox, in line with Article 6(4) of Regulation (EU) 2016/679, and Article 6 of Regulation (EU) 2018/1725, and without prejudice to Article 4(2) of Directive (EU) 2016/680. Participants in the sandbox should ensure appropriate safeguards and cooperate with the competent authorities, including by following their guidance and acting expeditiously and in good faith to mitigate any high risks to safety and fundamental rights that may arise during the development and experimentation in the sandbox. The conduct of the participants in the sandbox should be taken into account when competent authorities decide whether to impose an administrative fine under Article 83(2) of Regulation 2016/679 and Article 57 of Directive 2016/680.
2022/03/31
Committee: ITRE
Amendment 247 #
Proposal for a regulation
Recital 74
(74) In order to minimise the risks to implementation resulting from lack of knowledge and expertise in the market as well as to facilitate compliance of providers and notified bodies with their obligations under this Regulation, the AI-on demand platform, the European Digital Innovation Hubs and the Testing and Experimentation Facilities established by the Commission and the Member States at national or EU level, as well as ENISA, the EU Agency for Fundamental Rights, EIGE, and the European Data Protection Supervisor, should constantly contribute to the implementation of this Regulation. Within their respective mission and fields of competence, they may provide in particular technical and scientific support to providers and notified bodies.
2022/03/31
Committee: ITRE
Amendment 255 #
Proposal for a regulation
Article 1 – paragraph 1 – point a
(a) harmonised rules for the placing on the market, the putting into service and the use of human-centric and trustworthy artificial intelligence systems (‘AI systems’) in the Union;
2022/03/31
Committee: ITRE
Amendment 270 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1
(1) 'artificial intelligence system' (AI system) means software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal; AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behaviour by analysing how the environment is affected by their previous actions; AI systems can be developed with one or more of the techniques and approaches listed in Annex I;
2022/03/31
Committee: ITRE
Amendment 297 #
Proposal for a regulation
Article 3 – paragraph 1 – point 35
(35) ‘biometric categorisation system’ means an AI system for the purpose of assigning natural persons to specific categories, such as sex, age, hair colour, eye colour, tattoos, health, personal traits, ethnic origin or sexual or political orientation, on the basis of their biometric data;
2022/03/31
Committee: ITRE
Amendment 299 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – introductory part
(44) ‘serious incident’ means any incident or malfunctioning of an AI system that directly or indirectly leads, might have led or might lead to any of the following:
2022/03/31
Committee: ITRE
Amendment 302 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – point a
(a) the death of a person or serious damage to a person’s fundamental rights, health, safety or property, to democracy, the rule of law or the environment,
2022/03/31
Committee: ITRE
Amendment 305 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – point b a (new)
(ba) breach of obligations under Union law intended to protect fundamental rights;
2022/03/31
Committee: ITRE
Amendment 306 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – point b b (new)
(bb) breach of obligations under Union law intended to protect personal data;
2022/03/31
Committee: ITRE
Amendment 308 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – point b c (new)
(bc) serious damage to the environment;
2022/03/31
Committee: ITRE
Amendment 312 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 a (new)
(44a) ‘deep fake’ means generated or manipulated image, audio or video content produced by an AI system that appreciably resembles existing persons, objects, places or other entities or events and falsely appears to a person to be authentic or truthful;
2022/03/31
Committee: ITRE
Amendment 314 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 b (new)
(44b) ‘personal data’ means data as defined in point (1) of Article 4 of Regulation (EU) 2016/679;
2022/03/31
Committee: ITRE
Amendment 316 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 c (new)
(44c) ‘non-personal data’ means data other than personal data as defined in point (1) of Article 4 of Regulation (EU) 2016/679;
2022/03/31
Committee: ITRE
Amendment 317 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 d (new)
(44d) ‘public interest organisation’ means a not-for-profit body, organisation or association which has been properly constituted in accordance with the law of a Member State and has statutory objectives which are in the public interest;
2022/03/31
Committee: ITRE
Amendment 318 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 e (new)
(44e) ‘redress by design’ means technical mechanisms and/or operational procedures, established from the design phase, in order to be able to effectively detect, audit and rectify the consequences and implications of wrong predictions by an AI system and improve it.
2022/03/31
Committee: ITRE
Amendment 318 #
Proposal for a regulation
Recital 1
(1) The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, marketing and use of artificial intelligence in conformity with Union values. This Regulation pursues a number of overriding reasons of public interest, such as a high level of protection of health, safety, environment and fundamental rights, as well as consumer protection, and it ensures the free movement of AI-based goods and services cross-border, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation.
2022/06/13
Committee: IMCOLIBE
Amendment 322 #
Proposal for a regulation
Recital 2
(2) Artificial intelligence systems (AI systems) can be easily deployed in multiple sectors of the economy and society, including cross border, and circulate throughout the Union. Certain Member States have already explored the adoption of national rules to ensure that artificial intelligence is safe and is developed and used in compliance with fundamental rights obligations. Differing national rules may lead to fragmentation of the internal market and decrease legal certainty for operators that develop or use AI systems. A consistent and high level of protection throughout the Union should therefore be ensured, while divergences hampering the free circulation of AI systems and related products and services within the internal market should be prevented, by laying down uniform obligations for operators and guaranteeing the uniform protection of overriding reasons of public interest and of rights of persons throughout the internal market based on Article 114 of the Treaty on the Functioning of the European Union (TFEU). To the extent that this Regulation contains specific rules on the protection of individuals with regard to the processing of personal data concerning restrictions of the use of AI systems for ‘real-time’ remote biometric identification in publicly accessible and online spaces for the purpose of law enforcement, it is appropriate to base this Regulation, in as far as those specific rules are concerned, on Article 16 of the TFEU. In light of those specific rules and the recourse to Article 16 TFEU, it is appropriate to consult the European Data Protection Board.
2022/06/13
Committee: IMCOLIBE
Amendment 330 #
Proposal for a regulation
Recital 3 a (new)
(3 a) To ensure that Artificial Intelligence leads to socially and environmentally beneficial outcomes, Member States should support such measures through allocating sufficient resources, including public funding, and giving priority access to regulatory sandboxes to projects led by civil society and social stakeholders. Such projects should be based on the principle of interdisciplinary cooperation between AI developers, experts in equality and non- discrimination, accessibility, and consumer, environmental, and digital rights, and the academic community.
2022/06/13
Committee: IMCOLIBE
Amendment 332 #
Proposal for a regulation
Recital 3 a (new)
(3 a) To ensure that Artificial Intelligence leads to socially and environmentally beneficial outcomes, Member States should support such measures through allocating sufficient resources, including public funding, and giving priority access to regulatory sandboxes to projects led by civil society and social stakeholders. Such projects should be based on the principle of interdisciplinary cooperation between AI developers, experts in equality and non- discrimination, accessibility, and consumer, environmental, and digital rights, and the academic community.
2022/06/13
Committee: IMCOLIBE
Amendment 333 #
Proposal for a regulation
Article 5 – paragraph 1 – point d a (new)
(da) AI systems intended to be used for recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, evaluating candidates in the course of interviews or tests;
2022/03/31
Committee: ITRE
Amendment 333 #
Proposal for a regulation
Recital 3 b (new)
(3 b) Furthermore, in order for Member States to fight against climate change, to achieve climate-neutrality and to meet the Sustainable Development Goals (SDGs), European companies should ensure the sustainable design of AI systems to reduce resource usage and energy consumption, thereby limiting the risks to the environment. AI systems have the potential to automatically provide businesses with detailed insight into their emissions, including value chains, and forecast future emissions, thus helping to adjust and achieve the Union's emission targets.
2022/06/13
Committee: IMCOLIBE
Amendment 335 #
Proposal for a regulation
Article 5 – paragraph 1 – point d b (new)
(db) AI systems intended to be used by law enforcement authorities for making individual risk assessments of natural persons in order to assess the risk of a natural person for offending or reoffending or the risk for potential victims of criminal offences;
2022/03/31
Committee: ITRE
Amendment 336 #
Proposal for a regulation
Recital 4
(4) At the same time, depending on the circumstances regarding its specific application and use, artificial intelligence may generate risks and cause harm to public interests and rights that are protected by Union law. Such harm might be material or immaterial and might affect one or more persons, a group of persons or society as a whole, as well as the environment.
2022/06/13
Committee: IMCOLIBE
Amendment 337 #
Proposal for a regulation
Article 5 – paragraph 1 – point d c (new)
(dc) AI systems intended to be used by law enforcement authorities as polygraphs and similar tools or to detect the emotional state of a natural person;
2022/03/31
Committee: ITRE
Amendment 338 #
Proposal for a regulation
Article 5 – paragraph 1 – point d d (new)
(dd) AI systems intended to be used by law enforcement authorities for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups;
2022/03/31
Committee: ITRE
Amendment 339 #
Proposal for a regulation
Article 5 – paragraph 1 – point d e (new)
(de) AI systems intended to assist competent public authorities for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the eligibility of the natural persons applying for a status;
2022/03/31
Committee: ITRE
Amendment 340 #
Proposal for a regulation
Article 5 – paragraph 1 – point d f (new)
(df) AI intended to be used for making decisions on promotion and termination of work-related contractual relationships, for task allocation and for monitoring and evaluating performance and behaviour of persons in such relationships;
2022/03/31
Committee: ITRE
Amendment 340 #
Proposal for a regulation
Recital 4 a (new)
(4 a) The concept of decision autonomy for machines is at its core in conflict with fundamental notions of our societies, such as human dignity, autonomy, and the rights to private life and the protection of personal data. This Regulation should reconcile the potential benefits to society offered by AI with the primacy of humans over machines;
2022/06/13
Committee: IMCOLIBE
Amendment 351 #
Proposal for a regulation
Recital 5
(5) A Union legal framework laying down harmonised rules on artificial intelligence is therefore needed to foster the development, use and uptake of artificial intelligence in the internal market that at the same time meets a high level of protection of public interests, such as health and safety, the environment and the protection of fundamental rights and values, as recognised and protected by Union law. To achieve that objective, rules regulating the placing on the market and putting into service of certain AI systems should be laid down, thus ensuring the smooth functioning of the internal market and allowing those systems to benefit from the principle of free movement of goods and services. By laying down those rules, this Regulation supports the objective of the Union of being a global leader in the development of secure, trustworthy and ethical artificial intelligence, as stated by the European Council33 , and it ensures the protection of ethical principles, as specifically requested by the European Parliament34 . _________________ 33 European Council, Special meeting of the European Council (1 and 2 October 2020) – Conclusions, EUCO 13/20, 2020, p. 6. 34 European Parliament resolution of 20 October 2020 with recommendations to the Commission on a framework of ethical aspects of artificial intelligence, robotics and related technologies, 2020/2012(INL).
2022/06/13
Committee: IMCOLIBE
Amendment 356 #
Proposal for a regulation
Article 6 – paragraph 2
2. In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall also be considered high-risk. The provider shall apply the precautionary principle and, in case of uncertainty over the AI system's classification, shall consider the AI system high-risk.
2022/03/31
Committee: ITRE
Amendment 359 #
Proposal for a regulation
Article 7 – paragraph 1 – introductory part
1. The Commission is empowered to adopt delegated acts in accordance with Article 73, following an adequate consultation with all relevant stakeholders, including the European Artificial Intelligence Board, the EU Agency for Fundamental Rights, and the European Data Protection Supervisor, to update the list in Annex III, by adding high-risk AI systems where both of the following conditions are fulfilled:
2022/03/31
Committee: ITRE
Amendment 360 #
Proposal for a regulation
Article 7 – paragraph 1 – point a
(a) the AI systems are intended to be used in any of the areas listed in points 1 to 8 of Annex III;
2022/03/31
Committee: ITRE
Amendment 361 #
Proposal for a regulation
Article 7 – paragraph 1 – point b
(b) the AI systems pose a risk of economic harm, negative societal impacts or harm to the environment, health and safety, or a risk of adverse impact on fundamental rights, democracy and the rule of law, that is, in respect of its severity and probability of occurrence, equivalent to or greater than the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III.
2022/03/31
Committee: ITRE
Amendment 365 #
Proposal for a regulation
Article 7 – paragraph 2 – introductory part
2. When assessing for the purposes of paragraph 1 whether an AI system poses a risk of harm to the health and safety or a risk of adverse impact on fundamental rights or on the environment, democracy and rule of law, that is equivalent to or greater than the risk of harm posed by the high-risk AI systems already referred to in Annex III, the Commission shall take into account the following criteria:
2022/03/31
Committee: ITRE
Amendment 367 #
Proposal for a regulation
Article 7 – paragraph 2 – point c
(c) the extent to which the use of an AI system has already caused harm to the health and safety or adverse impact on the fundamental rights, democracy, rule of law and the environment or has given rise to significant concerns in relation to the materialisation of such harm or adverse impact, as demonstrated by reports or documented allegations submitted to national competent authorities;
2022/03/31
Committee: ITRE
Amendment 367 #
Proposal for a regulation
Recital 7
(7) The notion of biometric data used in this Regulation is in line with and should be interpreted consistently with the notion of biometric data as defined in Article 4(14) of Regulation (EU) 2016/679 of the European Parliament and of the Council35 , Article 3(18) of Regulation (EU) 2018/1725 of the European Parliament and of the Council36 and Article 3(13) of Directive (EU) 2016/680 of the European Parliament and of the Council37 . An additional definition has been added for ‘biometrics-based data’ to cover physical, physiological or behavioural data that may not meet the criteria to be defined as biometric data (i.e. would not allow or confirm the unique identification of a natural person) but which may be used for purposes such as emotion recognition or biometric categorisation. The addition of this definition does not narrow the scope of, nor exclude anything from, the definition of biometric data, but rather provides for a comprehensive scope for additional forms of data which may be used for purposes such as biometric categorisation but which would not allow or confirm unique identification. _________________ 35 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (OJ L 119, 4.5.2016, p. 1). 36 Regulation (EU) 2018/1725 of the European Parliament and of the Council of 23 October 2018 on the protection of natural persons with regard to the processing of personal data by the Union institutions, bodies, offices and agencies and on the free movement of such data, and repealing Regulation (EC) No 45/2001 and Decision No 1247/2002/EC (OJ L 295, 21.11.2018, p. 39) 37 Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA (Law Enforcement Directive) (OJ L 119, 4.5.2016, p. 89).
2022/06/13
Committee: IMCOLIBE
Amendment 368 #
Proposal for a regulation
Recital 7
(7) The notion of biometric data used in this Regulation is in line with and should be interpreted consistently with the notion of biometric data as defined in Article 4(14) of Regulation (EU) 2016/679 of the European Parliament and of the Council35 , Article 3(18) of Regulation (EU) 2018/1725 of the European Parliament and of the Council36 and Article 3(13) of Directive (EU) 2016/680 of the European Parliament and of the Council37 . An additional definition has been added for ‘biometrics-based data’ to cover physical, physiological or behavioural data that may not meet the criteria to be defined as biometric data (i.e. would not allow or confirm the unique identification of a natural person) but which may be used for purposes such as emotion recognition or biometric categorisation. The addition of this definition does not narrow the scope of, nor exclude anything from, the definition of biometric data, but rather provides for a comprehensive scope for additional forms of data which may be used for purposes such as biometric categorisation but which would not allow or confirm unique identification. _________________ 35 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (OJ L 119, 4.5.2016, p. 1). 36 Regulation (EU) 2018/1725 of the European Parliament and of the Council of 23 October 2018 on the protection of natural persons with regard to the processing of personal data by the Union institutions, bodies, offices and agencies and on the free movement of such data, and repealing Regulation (EC) No 45/2001 and Decision No 1247/2002/EC (OJ L 295, 21.11.2018, p. 39) 37 Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA (Law Enforcement Directive) (OJ L 119, 4.5.2016, p. 89).
2022/06/13
Committee: IMCOLIBE
Amendment 369 #
Proposal for a regulation
Article 7 – paragraph 2 – point d
(d) the potential extent of such harm or such adverse impact, in particular in terms of its intensity and its ability to affect a plurality of persons or the environment;
2022/03/31
Committee: ITRE
Amendment 374 #
Proposal for a regulation
Article 7 – paragraph 2 – point h – point i
(i) effective measures of redress, the availability of redress-by-design mechanisms and procedures in relation to the risks posed by an AI system, including claims for material and non-material damages;
2022/03/31
Committee: ITRE
Amendment 375 #
Proposal for a regulation
Article 7 – paragraph 2 – point h a (new)
(h a) the general capabilities and functions of the AI system, regardless of its purpose;
2022/03/31
Committee: ITRE
Amendment 378 #
Proposal for a regulation
Article 7 – paragraph 2 – point h b (new)
(h b) the potential misuse and malicious use of an AI system and the technology that underpins it.
2022/03/31
Committee: ITRE
Amendment 384 #
Proposal for a regulation
Article 9 – paragraph 2 – introductory part
2. The risk management system shall consist of a continuous iterative process run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating, including when the high-risk AI system is subject to significant changes in its design or purpose. It shall comprise the following steps:
2022/03/31
Committee: ITRE
Amendment 384 #
Proposal for a regulation
Recital 9
(9) For the purposes of this Regulation the notion of publicly accessible space should be understood as referring to any physical place that is accessible to the public, irrespective of whether the place in question is privately or publicly owned. Therefore, the notion does not cover places that are private in nature and normally not freely accessible for third parties, including law enforcement authorities, unless those parties have been specifically invited or authorised, such as homes, private clubs, offices, warehouses and factories. However, the mere fact that certain conditions for accessing a particular space may apply, such as admission tickets or age restrictions, does not mean that the space is not publicly accessible within the meaning of this Regulation. Consequently, in addition to online and public spaces such as streets, relevant parts of government buildings and most transport infrastructure, spaces such as cinemas, theatres, shops and shopping centres are normally also publicly accessible. Whether a given space is accessible to the public should however be determined on a case-by-case basis, having regard to the specificities of the individual situation at hand.
2022/06/13
Committee: IMCOLIBE
Amendment 385 #
Proposal for a regulation
Article 9 – paragraph 2 – point a
(a) identification and analysis of the known and foreseeable risks associated with each high-risk AI system, in particular the risks that a high-risk AI system will: (i) affect a person’s legal rights or legal status; (ii) affect a person’s access to credit, education, employment, healthcare, housing, insurance, or social welfare benefits or services, or the terms on which these are provided; (iii) undermine a person's safety; (iv) result in significant physical or psychological harm to a person; (v) restrict, infringe, or undermine the ability to realise a person’s fundamental rights; (vi) breach obligations under Union law intended to protect personal data; (vii) result in serious damage to the environment; (viii) fail to achieve a high level of cybersecurity;
2022/03/31
Committee: ITRE
Amendment 388 #
Proposal for a regulation
Article 9 – paragraph 2 – point d
(d) adoption of effective risk management measures in accordance with the provisions of the following paragraphs.
2022/03/31
Committee: ITRE
Amendment 391 #
Proposal for a regulation
Article 9 – paragraph 4 – introductory part
4. The risk management measures referred to in paragraph 2, point (d) shall be such that any residual risk associated with each hazard as well as the overall residual risk of the high-risk AI systems is judged acceptable, provided that the high- risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse. Those residual risks shall be communicated to the user in a clear, easily understandable and intelligible way.
2022/03/31
Committee: ITRE
Amendment 395 #
Proposal for a regulation
Article 9 – paragraph 8
8. When implementing the risk management system described in paragraphs 1 to 7, specific consideration shall be given to whether the high-risk AI system is likely to be accessed by or have an impact on children, the elderly, refugees or other vulnerable groups.
2022/03/31
Committee: ITRE
Amendment 403 #
Proposal for a regulation
Recital 12 a (new)
(12 a) In order to ensure a minimum level of transparency on the ecological sustainability aspects of an AI system, providers and users should document parameters including but not limited to resource consumption, resulting from the design, data management and training, the underlying infrastructures of the AI system, and of the methods to reduce such impact for any AI system.
2022/06/13
Committee: IMCOLIBE
Amendment 404 #
Proposal for a regulation
Article 10 – paragraph 2 – introductory part
2. Training, validation and testing data sets shall be subject to appropriate data governance and management practices for the entire lifecycle of data processing. Those practices shall concern in particular,
2022/03/31
Committee: ITRE
Amendment 406 #
Proposal for a regulation
Recital 13
(13) In order to ensure a consistent and high level of protection of public interests as regards health, safety and fundamental rights, common normative standards for all high-risk AI systems should be established. Those standards should be consistent with the Charter of fundamental rights of the European Union (the Charter) and should be non-discriminatory and in line with the Union’s international trade commitments. In order to ensure a minimum level of transparency on the ecological sustainability aspects of an AI system, providers and users should document (i) parameters including, but not limited to, resource consumption resulting from the design, data management, training and from the underlying infrastructures of the AI system; as well as (ii) the methods to reduce such impact.
2022/06/13
Committee: IMCOLIBE
Amendment 407 #
Proposal for a regulation
Recital 13
(13) In order to ensure a consistent and high level of protection of public interests as regards health, safety, the environment and fundamental rights and values, common normative standards for all high-risk AI systems should be established. Those standards should be consistent with the Charter of fundamental rights of the European Union (the Charter), the European Green Deal (The Green Deal), the Joint Declaration on Digital Rights of the Union (the Declaration) and the Ethics Guidelines for Trustworthy Artificial Intelligence (AI) of the High-Level Expert Group on Artificial Intelligence (AI HLEG), and should be non-discriminatory and in line with the Union’s international trade commitments.
2022/06/13
Committee: IMCOLIBE
Amendment 409 #
Proposal for a regulation
Article 10 – paragraph 2 – point f
(f) examination in view of possible biases, including where data outputs are used as an input for future operations (‘feedback loops’) that are likely to affect health, fundamental rights and safety of persons or lead to discrimination prohibited by Union law;
2022/03/31
Committee: ITRE
Amendment 417 #
Proposal for a regulation
Article 10 – paragraph 2 – point g a (new)
(ga) the purpose and the environment in which the system is to be used.
2022/03/31
Committee: ITRE
Amendment 418 #
Proposal for a regulation
Recital 15 a (new)
(15 a) As signatories to the United Nations Convention on the Rights of Persons with Disabilities (CRPD), the European Union and all Member States are legally obliged to protect persons with disabilities from discrimination and promote their equality (Article 5). They are also obliged to ensure that persons with disabilities have access, on an equal basis with others, to information and communications technologies and systems (Article 9). Finally, they are obliged to ensure respect for privacy of persons with disabilities (Article 22).
2022/06/13
Committee: IMCOLIBE
Amendment 419 #
Proposal for a regulation
Recital 15 a (new)
(15 a) As signatories to the United Nations Convention on the Rights of Persons with Disabilities (CRPD), the European Union and all Member States are legally obliged to protect persons with disabilities from discrimination and promote their equality (Article 5). They are also obliged to ensure that persons with disabilities have access, on an equal basis with others, to information and communications technologies and systems (Article 9). Finally, they are obliged to ensure respect for privacy of persons with disabilities (Article 22).
2022/06/13
Committee: IMCOLIBE
Amendment 422 #
Proposal for a regulation
Recital 15 b (new)
(15 b) Given the growing importance and use of AI systems, the strict application of universal design principles to all new technologies and services should ensure full, equal, and unrestricted access for everyone potentially affected by or using AI technologies, including persons with disabilities, in a way that takes full account of their inherent dignity and diversity. It is essential to ensure that providers of AI systems design them, and users use them, in accordance with the accessibility requirements set out in Directive (EU) 2019/882. Union law should be further developed, including through this Regulation, so that no one is left behind as a result of digital innovation.
2022/06/13
Committee: IMCOLIBE
Amendment 424 #
Proposal for a regulation
Article 10 – paragraph 3
3. Training, validation and testing data sets shall be relevant, representative, and to the best extent possible free of errors and complete. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used. These characteristics of the data sets may be met at the level of individual data sets or a combination thereof.
2022/03/31
Committee: ITRE
Amendment 437 #
Proposal for a regulation
Recital 17
(17) AI systems providing social scoring of natural persons for general purpose by public authorities or on their behalf may lead to discriminatory outcomes and the exclusion of certain groups. They may violate the right to dignity and non- discrimination and the values of equality and justice. Such AI systems evaluate or classify the trustworthiness of natural persons based on their social behaviour in multiple contexts or known or predicted personal or personality characteristics. The social score obtained from such AI systems may lead to the detrimental or unfavourable treatment of natural persons or whole groups thereof in social contexts, which are unrelated to the context in which the data was originally generated or collected or to a detrimental treatment that is disproportionate or unjustified to the gravity of their social behaviour. Such AI systems should be therefore prohibited.
2022/06/13
Committee: IMCOLIBE
Amendment 452 #
Proposal for a regulation
Recital 18
(18) The use of AI systems for ‘real- time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement is considered particularly intrusive in the rights and freedoms of the concerned persons, to the extent that it may affect the private life of a large part of the population, evoke a feeling of constant surveillance and indirectly dissuade the exercise of the freedom of assembly and other fundamental rights. In addition, the immediacy of the impact and the limited opportunities for further checks or corrections in relation to the use of such systems operating in ‘real-time’ carry heightened risks for the rights and freedoms of the persons that are concerned by law enforcement activities. Such AI systems should be therefore prohibited.
2022/06/13
Committee: IMCOLIBE
Amendment 453 #
Proposal for a regulation
Recital 18
(18) The use of AI systems for ‘real- time’ remote biometric identification of natural persons in publicly accessible or online spaces for the purpose of law enforcement is considered particularly intrusive in the rights and freedoms of the concerned persons, to the extent that it may affect the private life of a large part of the population, evoke a feeling of constant surveillance and indirectly dissuade the exercise of the freedom of assembly and other fundamental rights. In addition, the immediacy of the impact and the limited opportunities for further checks or corrections in relation to the use of such systems operating in ‘real-time’ carry heightened risks for the rights and freedoms of the persons that are concerned by law enforcement activities.
2022/06/13
Committee: IMCOLIBE
Amendment 457 #
Proposal for a regulation
Article 15 – paragraph 1
1. High-risk AI systems shall be designed and developed in such a way that they achieve, in the light of their intended purpose, a high level of accuracy, robustness and cybersecurity, and perform consistently in those respects throughout their lifecycle.
2022/03/31
Committee: ITRE
Amendment 457 #
Proposal for a regulation
Recital 18 a (new)
(18 a) The notion of ‘at a distance’ in Remote Biometric Identification (RBI) means the use of systems as described in Article 3(36), at a distance great enough that the system has the capacity to scan multiple persons in its field of view (or the equivalent generalised scanning of online / virtual spaces), which would mean that the identification could happen without one or more of the data subjects’ knowledge. Because RBI relates to how a system is designed and installed, and not solely to whether or not data subjects have consented, this definition applies even when warning notices are placed in the location that is under the surveillance of the RBI system, and is not de facto annulled by pre-enrolment.
2022/06/13
Committee: IMCOLIBE
Amendment 458 #
Proposal for a regulation
Recital 18 a (new)
(18 a) The notion of ‘at a distance’ in Remote Biometric Identification (RBI) means the use of systems as described in Article 3(36), at a distance great enough that the system has the capacity to scan multiple persons in its field of view (or the equivalent generalised scanning of online / virtual spaces), which would mean that the identification could happen without one or more of the data subjects’ knowledge. Because RBI relates to how a system is designed and installed, and not solely to whether or not data subjects have consented, this definition applies even when warning notices are placed in the location that is under the surveillance of the RBI system, and is not de facto annulled by pre-enrolment.
2022/06/13
Committee: IMCOLIBE
Amendment 461 #
Proposal for a regulation
Recital 18 b (new)
(18 b) ‘Biometric categorisation systems’ are defined as AI systems that assign natural persons to specific categories, or infer their characteristics or attributes. ‘Categorisation’ shall include any sorting of natural persons, whether into discrete categories (e.g. male/female, suspicious/not-suspicious), on a numerical scale (e.g. using the Fitzpatrick scale for skin type) or any other form of assigning labels or values to people. ‘Inferring an attribute or characteristic’ shall include any situation in which an AI system uses one type of data about a natural person (e.g. hair colour) to ascribe a different attribute or characteristic to that person (e.g. ethnic origin).
2022/06/13
Committee: IMCOLIBE
Amendment 469 #
Proposal for a regulation
Recital 19
(19) The use of those systems for the purpose of law enforcement should therefore be prohibited, except in three exhaustively listed and narrowly defined situations, where the use is strictly necessary to achieve a substantial public interest, the importance of which outweighs the risks. Those situations involve the search for potential victims of crime, including missing children; certain threats to the life or physical safety of natural persons or of a terrorist attack; and the detection, localisation, identification or prosecution of perpetrators or suspects of the criminal offences referred to in Council Framework Decision 2002/584/JHA38 if those criminal offences are punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years and as they are defined in the law of that Member State. Such threshold for the custodial sentence or detention order in accordance with national law contributes to ensure that the offence should be serious enough to potentially justify the use of ‘real-time’ remote biometric identification systems. Moreover, of the 32 criminal offences listed in the Council Framework Decision 2002/584/JHA, some are in practice likely to be more relevant than others, in that the recourse to ‘real-time’ remote biometric identification will foreseeably be necessary and proportionate to highly varying degrees for the practical pursuit of the detection, localisation, identification or prosecution of a perpetrator or suspect of the different criminal offences listed and having regard to the likely differences in the seriousness, probability and scale of the harm or possible negative consequences. _________________ 38 Council Framework Decision 2002/584/JHA of 13 June 2002 on the European arrest warrant and the surrender procedures between Member States (OJ L 190, 18.7.2002, p. 1).
2022/06/13
Committee: IMCOLIBE
Amendment 479 #
Proposal for a regulation
Recital 20
(20) In order to ensure that those systems are used in a responsible and proportionate manner, it is also important to establish that, in each of those three exhaustively listed and narrowly defined situations, certain elements should be taken into account, in particular as regards the nature of the situation giving rise to the request and the consequences of the use for the rights and freedoms of all persons concerned and the safeguards and conditions provided for with the use. In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement should be subject to appropriate limits in time and space, having regard in particular to the evidence or indications regarding the threats, the victims or perpetrator. The reference database of persons should be appropriate for each use case in each of the three situations mentioned above.
2022/06/13
Committee: IMCOLIBE
Amendment 480 #
Proposal for a regulation
Recital 20
(20) In order to ensure that those systems are used in a responsible and proportionate manner, it is also important to establish that, in each of those three exhaustively listed and narrowly defined situations, certain elements should be taken into account, in particular as regards the nature of the situation giving rise to the request and the consequences of the use for the rights and freedoms of all persons concerned and the safeguards and conditions provided for with the use. In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible or online spaces for the purpose of law enforcement should be subject to appropriate limits in time and space, having regard in particular to the evidence or indications regarding the threats, the victims or perpetrator. The reference database of persons should be appropriate for each use case in each of the three situations mentioned above.
2022/06/13
Committee: IMCOLIBE
Amendment 485 #
Proposal for a regulation
Recital 21
(21) Each use of a ‘real-time’ remote biometric identification system in publicly accessible spaces for the purpose of law enforcement should be subject to an express and specific authorisation by a judicial authority or by an independent administrative authority of a Member State. Such authorisation should in principle be obtained prior to the use, except in duly justified situations of urgency, that is, situations where the need to use the systems in question is such as to make it effectively and objectively impossible to obtain an authorisation before commencing the use. In such situations of urgency, the use should be restricted to the absolute minimum necessary and be subject to appropriate safeguards and conditions, as determined in national law and specified in the context of each individual urgent use case by the law enforcement authority itself. In addition, the law enforcement authority should in such situations seek to obtain an authorisation as soon as possible, whilst providing the reasons for not having been able to request it earlier.
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 488 #
Proposal for a regulation
Recital 21
(21) Each use of a ‘real-time’ remote biometric identification system in publicly accessible or online spaces for the purpose of law enforcement should be subject to an express and specific authorisation by a judicial authority or by an independent administrative authority of a Member State. Such authorisation should in principle be obtained prior to the use, except in duly justified situations of urgency, that is, situations where the need to use the systems in question is such as to make it effectively and objectively impossible to obtain an authorisation before commencing the use. In such situations of urgency, the use should be restricted to the absolute minimum necessary and be subject to appropriate safeguards and conditions, as determined in national law and specified in the context of each individual urgent use case by the law enforcement authority itself. In addition, the law enforcement authority should in such situations seek to obtain an authorisation as soon as possible, whilst providing the reasons for not having been able to request it earlier.
2022/06/13
Committee: IMCOLIBE
Amendment 497 #
Proposal for a regulation
Article 29 a (new)
Article 29 a
Obligation on users to define affected persons
Before putting into use a high-risk AI system as defined in Article 6(2), the user shall define categories of natural persons and groups likely to be affected by the use of the system.
2022/03/31
Committee: ITRE
Amendment 499 #
Proposal for a regulation
Article 29 b (new)
Article 29 b
Fundamental rights impact assessments for high-risk AI systems
1. Users of high-risk AI systems shall conduct an assessment of the systems’ impact in the context of use before putting the system into use. This assessment shall include, but is not limited to, the following:
a. a clear outline of the intended purpose for which the system will be used;
b. a clear outline of the intended geographic and temporal scope of the system’s use;
c. verification of the legality of the system in accordance with Union and national law, fundamental rights law, Union accessibility legislation, and the extent to which the system is in compliance with this Regulation;
d. the likely impact on fundamental rights of the high-risk AI system, including any indirect impacts or consequences of the system’s use;
e. any specific risk of harm likely to impact marginalised persons or those groups at risk of discrimination, or increase existing societal inequalities;
f. the foreseeable impact of the use of the system on the environment, including but not limited to energy consumption;
g. any other negative impact on the public interest; and
h. clear steps as to how the harms identified will be mitigated, and how effective this mitigation is likely to be.
2. If adequate steps to mitigate the risks outlined in the course of the assessment in paragraph 1 cannot be identified, the system shall not be put into use. Market surveillance authorities, pursuant to their capacity under Articles 65 and 67, may take this information into account when investigating systems which present a risk at national level.
3. The obligation outlined under paragraph 1 applies for each new deployment of the high-risk AI system.
4. In the course of the impact assessment, the user shall notify relevant national authorities and all relevant stakeholders with a view to receiving input into the impact assessment.
5. Where, following the impact assessment process, the user decides to put the high-risk AI system into use, the user shall be required to publish the results of the impact assessment as part of the registration of use pursuant to their obligation under Article 51(2).
6. Where the user is already required to carry out a data protection impact assessment under Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680, the impact assessment outlined in paragraph 1 shall be conducted in conjunction with the data protection impact assessment and be published as an addendum.
7. Users of high-risk AI systems shall use the information provided under Article 13 to comply with their obligation under paragraph 1.
8. The obligations on users in paragraph 1 are without prejudice to the obligations on users of all high-risk AI systems as outlined in Article 29.
2022/03/31
Committee: ITRE
Amendment 503 #
Proposal for a regulation
Recital 23
(23) The use of AI systems for ‘real- time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement necessarily involves the processing of biometric data. The rules of this Regulation that prohibit, subject to certain exceptions, such use, which are based on Article 16 TFEU, should apply as lex specialis in respect of the rules on the processing of biometric data contained in Article 10 of Directive (EU) 2016/680, thus regulating such use and the processing of biometric data involved in an exhaustive manner. Therefore, such use and processing should only be possible in as far as it is compatible with the framework set by this Regulation, without there being scope, outside that framework, for the competent authorities, where they act for purpose of law enforcement, to use such systems and process such data in connection thereto on the grounds listed in Article 10 of Directive (EU) 2016/680. In this context, this Regulation is not intended to provide the legal basis for the processing of personal data under Article 8 of Directive 2016/680. However, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for purposes other than law enforcement, including by competent authorities, should not be covered by the specific framework regarding such use for the purpose of law enforcement set by this Regulation. Such use for purposes other than law enforcement should therefore not be subject to the requirement of an authorisation under this Regulation and the applicable detailed rules of national law that may give effect to it. The lex specialis nature of the prohibition on RBI does not provide a legal basis for law enforcement uses of RBI, nor does it weaken existing protections of biometric data under the Data Protection Law Enforcement Directive (LED) or national implementations of the LED.
2022/06/13
Committee: IMCOLIBE
Amendment 504 #
Proposal for a regulation
Recital 23
(23) The use of AI systems for ‘real- time’ remote biometric identification of natural persons in publicly accessible or online spaces for the purpose of law enforcement necessarily involves the processing of biometric data. The rules of this Regulation that prohibit, subject to certain exceptions, such use, which are based on Article 16 TFEU, should apply as lex specialis in respect of the rules on the processing of biometric data contained in Article 10 of Directive (EU) 2016/680, thus regulating such use and the processing of biometric data involved in an exhaustive manner. Therefore, such use and processing should only be possible in as far as it is compatible with the framework set by this Regulation, without there being scope, outside that framework, for the competent authorities, where they act for purpose of law enforcement, to use such systems and process such data in connection thereto on the grounds listed in Article 10 of Directive (EU) 2016/680. In this context, this Regulation is not intended to provide the legal basis for the processing of personal data under Article 8 of Directive 2016/680. However, the use of ‘real-time’ remote biometric identification systems in publicly accessible or online spaces for purposes other than law enforcement, including by competent authorities, should not be covered by the specific framework regarding such use for the purpose of law enforcement set by this Regulation. Such use for purposes other than law enforcement should therefore not be subject to the requirement of an authorisation under this Regulation and the applicable detailed rules of national law that may give effect to it.
2022/06/13
Committee: IMCOLIBE
Amendment 507 #
Proposal for a regulation
Recital 23 a (new)
(23 a) ‘Biometric categorisation systems’ are defined as AI systems that assign natural persons to specific categories, or infer their characteristics or attributes. ‘Categorisation’ shall include any sorting of natural persons, whether into discrete categories (e.g. male/female, suspicious/not-suspicious), on a numerical scale (e.g. using the Fitzpatrick scale for skin type) or any other form of assigning labels or values to people. ‘Inferring an attribute or characteristic’ shall include any situation in which an AI system uses one type of data about a natural person (e.g. hair colour) to ascribe a different attribute or characteristic to that person (e.g. ethnic origin).
2022/06/13
Committee: IMCOLIBE
Amendment 512 #
Proposal for a regulation
Article 43 – paragraph 1 – introductory part
1. For high-risk AI systems listed in point 1 of Annex III, where, in demonstrating the compliance of a high- risk AI system with the requirements set out in Chapter 2 of this Title, the provider has applied harmonised standards referred to in Article 40, or, where applicable, common specifications referred to in Article 41, the provider shall follow one of the following procedures:
2022/03/31
Committee: ITRE
Amendment 513 #
Proposal for a regulation
Article 43 – paragraph 1 – point a
(a) the conformity assessment procedure based on internal control referred to in Annex VI;
deleted
2022/03/31
Committee: ITRE
Amendment 514 #
Proposal for a regulation
Article 43 – paragraph 2
2. For high-risk AI systems referred to in points 2 to 8 of Annex III, providers shall follow the conformity assessment procedure based on internal control as referred to in Annex VI, which does not provide for the involvement of a notified body. For high-risk AI systems referred to in point 5(b) of Annex III, placed on the market or put into service by credit institutions regulated by Directive 2013/36/EU, the conformity assessment shall be carried out as part of the procedure referred to in Articles 97 to 101 of that Directive.
2022/03/31
Committee: ITRE
Amendment 514 #
Proposal for a regulation
Recital 24
(24) Any processing of biometric data and other personal data involved in the use of AI systems for biometric identification, other than in connection to the use of ‘real- time’ remote biometric identification systems in publicly accessible or online spaces for the purpose of law enforcement as regulated by this Regulation, including where those systems are used by competent authorities in publicly accessible or online spaces for other purposes than law enforcement, should continue to comply with all requirements resulting from Article 9(1) of Regulation (EU) 2016/679, Article 10(1) of Regulation (EU) 2018/1725 and Article 10 of Directive (EU) 2016/680, as applicable.
2022/06/13
Committee: IMCOLIBE
Amendment 515 #
Where the legal acts listed in Annex II, section A, enable the manufacturer of the product to opt out from a third-party conformity assessment, provided that that manufacturer has applied all harmonised standards covering all the relevant requirements, that manufacturer may make use of that option only if he has also applied harmonised standards or, where applicable, common specifications referred to in Article 41, covering the requirements set out in Chapter 2 of this Title.
deleted
2022/03/31
Committee: ITRE
Amendment 516 #
Proposal for a regulation
Article 43 – paragraph 6
6. The Commission is empowered to adopt delegated acts to amend paragraphs 1 and 2 in order to subject high-risk AI systems referred to in points 2 to 8 of Annex III to the conformity assessment procedure referred to in Annex VII or parts thereof. The Commission shall adopt such delegated acts taking into account the effectiveness of the conformity assessment procedure based on internal control referred to in Annex VI in preventing or minimizing the risks to health and safety and protection of fundamental rights posed by such systems as well as the availability of adequate capacities and resources among notified bodies.
2022/03/31
Committee: ITRE
Amendment 522 #
Proposal for a regulation
Article 51 – paragraph 1
1. Before placing on the market or putting into service an AI system referred to in the following paragraphs, the provider or, where applicable, the authorised representative shall register that system in the EU database referred to in Article 60.
2022/03/31
Committee: ITRE
Amendment 524 #
Proposal for a regulation
Article 51 – paragraph 1 – point a (new)
(a) a high-risk AI system referred to in Article 6(2);
2022/03/31
Committee: ITRE
Amendment 525 #
Proposal for a regulation
Article 51 – paragraph 1 – point b (new)
(b) any AI system referred to in Article 52, paragraphs 1b and 2;
2022/03/31
Committee: ITRE
Amendment 526 #
Proposal for a regulation
Article 51 – paragraph 1 a (new)
2. Before using an AI system referred to in the following paragraphs the user or, where applicable, the authorised representative shall register the uses of that system in the EU database referred to in Article 60. A new registration entry must be completed by the user for each use of any of these AI systems:
a. high-risk AI systems referred to in Article 6 paragraph 2;
b. any AI system referred to in Article 52 paragraphs 1b and 2.
2022/03/31
Committee: ITRE
Amendment 528 #
Proposal for a regulation
Recital 27
(27) High-risk AI systems should only be placed on the Union market or put into service if they comply with certain mandatory requirements. Those requirements should ensure that high-risk AI systems available in the Union or whose output is otherwise used in the Union do not pose unacceptable risks to important Union public interests as recognised and protected by Union law. AI systems identified as high-risk should be limited to those that have a significant harmful impact on the health, safety and fundamental rights of persons in the Union, but also on the environment, democracy and the rule of law in the Union.
2022/06/13
Committee: IMCOLIBE
Amendment 529 #
Proposal for a regulation
Article 51 – paragraph 1 b (new)
3. Before using an AI system, public authorities shall register the uses of that system in the EU database referred to in Article 60. A new registration entry must be completed by the user for each use of an AI system.
2022/03/31
Committee: ITRE
Amendment 531 #
Proposal for a regulation
Article 52 – paragraph 1
1. Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed in a clear, easily understandable and intelligible way that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. This obligation shall not apply to AI systems authorised by law to detect, prevent, investigate and prosecute criminal offences, unless those systems are available for the public to report a criminal offence.
2022/03/31
Committee: ITRE
Amendment 535 #
Proposal for a regulation
Article 52 – paragraph 1 a (new)
1a. Users of a high-risk AI system, referred to in Article 6(2), shall inform natural persons exposed thereto of the operation of the system.
2022/03/31
Committee: ITRE
Amendment 537 #
Proposal for a regulation
Article 52 – paragraph 2
2. Users of an emotion recognition system or a biometric categorisation system shall inform of the operation of the system the natural persons exposed thereto. This obligation shall not apply to AI systems used for biometric categorisation, which are permitted by law to detect, prevent and investigate criminal offences.
2022/03/31
Committee: ITRE
Amendment 538 #
Proposal for a regulation
Article 52 – paragraph 3 – introductory part
3. Users of an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful (‘deep fake’), shall disclose in an appropriate, clear and visible manner that the content has been artificially generated or manipulated.
2022/03/31
Committee: ITRE
Amendment 540 #
Proposal for a regulation
Recital 32
(32) As regards stand-alone AI systems, meaning high-risk AI systems other than those that are safety components of products, or which are themselves products, it is appropriate to classify them as high-risk if, in the light of their intended purpose or reasonably foreseeable uses, they pose a high risk of harm to the health and safety or the fundamental rights of persons, taking into account both the severity of the possible harm and its probability of occurrence and they are used in a number of specifically pre-defined areas specified in the Regulation. The identification of those systems is based on the same methodology and criteria envisaged also for any future amendments of the list of high-risk AI systems. (This amendment should apply throughout the text, i.e. any occurrence of "intended purpose" should be followed by "or reasonably foreseeable uses")
2022/06/13
Committee: IMCOLIBE
Amendment 541 #
Proposal for a regulation
Article 52 – paragraph 3 – subparagraph 1
However, the first subparagraph shall not apply where the content forms part of an evidently artistic, creative or fictional cinematographic and analogous work, or it is necessary for the exercise of the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU, and subject to appropriate safeguards for the rights and freedoms of third parties.
2022/03/31
Committee: ITRE
Amendment 542 #
Proposal for a regulation
Recital 32
(32) As regards stand-alone AI systems, meaning high-risk AI systems other than those that are safety components of products, or which are themselves products, it is appropriate to classify them as high-risk if, in the light of their foreseeable uses, they pose a high risk of harm to the health and safety or the fundamental rights of persons, taking into account both the severity of the possible harm and its probability of occurrence and they are used in a number of specifically pre-defined areas specified in the Regulation. The identification of those systems is based on the same methodology and criteria envisaged also for any future amendments of the list of high-risk AI systems.
2022/06/13
Committee: IMCOLIBE
Amendment 543 #
Proposal for a regulation
Article 52 – paragraph 3 a (new)
3a. Providers of any AI system shall document and make available upon request the parameters regarding the environmental impact, including but not limited to resource consumption, resulting from the design, data management and training and from the underlying infrastructures of the AI system, as well as the methods to reduce such impact.
2022/03/31
Committee: ITRE
Amendment 544 #
Proposal for a regulation
Article 52 – paragraph 4
5. Paragraphs 1, 2, 3 and 3a shall not affect the requirements and obligations set out in Title III of this Regulation.
2022/03/31
Committee: ITRE
Amendment 550 #
Proposal for a regulation
Recital 33
(33) Technical inaccuracies of AI systems intended for the remote biometric identification of natural persons can lead to biased results and entail discriminatory effects. This is particularly relevant when it comes to age, ethnicity, sex or disabilities. In view of the risks that they pose, both ‘real-time’ and ‘post’ remote biometric identification systems should be prohibited.
2022/06/13
Committee: IMCOLIBE
Amendment 555 #
Proposal for a regulation
Article 53 – paragraph 1
1. AI regulatory sandboxes established by one or more Member States’ competent authorities or the European Data Protection Supervisor shall provide a strictly controlled environment that facilitates the development, testing and validation of innovative AI systems for a limited time before their placement on the market or putting into service pursuant to a specific plan. This shall take place under the direct supervision and guidance of the competent authorities with a view to identifying risks in particular to health, safety, and fundamental rights, ensuring compliance with the requirements of this Regulation and, where relevant, other Union and Member States’ legislation supervised within the sandbox.
2022/03/31
Committee: ITRE
Amendment 556 #
Proposal for a regulation
Recital 35
(35) AI systems used in education or vocational training, notably for determining access or assigning persons to educational and vocational training institutions or to evaluate persons on tests as part of or as a precondition for their education should be considered high-risk, since they may determine the educational and professional course of a person’s life and therefore affect their ability to secure their livelihood. When improperly designed and used, such systems may violate the right to education and training as well as the right not to be discriminated against and perpetuate historical patterns of discrimination. Therefore, the use of AI systems by public authorities in the education of underage children should be prohibited, in order to meet the requirement in this Regulation not to exploit the vulnerabilities of a group of persons due to their age.
2022/06/13
Committee: IMCOLIBE
Amendment 560 #
Proposal for a regulation
Article 53 – paragraph 3
3. The AI regulatory sandboxes shall not affect the supervisory and corrective powers of the competent authorities. Regulatory sandboxes involving activities that may impact health, safety and fundamental rights, democracy and rule of law or the environment shall be developed in accordance with redress-by-design principles. Any significant risks identified during the development and testing of such systems shall result in immediate mitigation and, failing that, in the suspension of the development and testing process until such mitigation takes place.
2022/03/31
Committee: ITRE
Amendment 567 #
Proposal for a regulation
Recital 36 b (new)
(36 b) Given the significance of Artificial Intelligence impact assessments regarding the use of Artificial Intelligence applications in the workplace, the EU will consider a corresponding directive with specific provisions for an impact assessment to ensure the protection of the rights and freedoms of workers affected by AI systems through collective agreements or national legislation.
2022/06/13
Committee: IMCOLIBE
Amendment 570 #
Proposal for a regulation
Recital 37
(37) Another area in which the use of AI systems deserves special consideration is the access to and enjoyment of certain essential private and public services and benefits necessary for people to fully participate in society or to improve one’s standard of living. In particular, AI systems used to evaluate the credit score or creditworthiness of natural persons should be prohibited, since they determine those persons’ access to financial resources or essential services such as housing, electricity, and telecommunication services. AI systems used for this purpose may lead to discrimination of persons or groups and perpetuate historical patterns of discrimination, for example based on racial or ethnic origins, disabilities, age, sexual orientation, or create new forms of discriminatory impacts. Considering the very limited scale of the impact and the available alternatives on the market, it is appropriate to exempt AI systems for the purpose of creditworthiness assessment and credit scoring when put into service by small-scale providers for their own use. Natural persons applying for or receiving public assistance benefits and services from public authorities are typically dependent on those benefits and services and in a vulnerable position in relation to the responsible authorities. If AI systems are used for determining whether such benefits and services should be denied, reduced, revoked or reclaimed by authorities, they may have a significant impact on persons’ livelihood and may infringe their fundamental rights, such as the right to social protection, non-discrimination, human dignity or an effective remedy. Those systems should therefore be classified as high-risk. Nonetheless, this Regulation should not hamper the development and use of innovative approaches in the public administration, which would stand to benefit from a wider use of compliant and safe AI systems, provided that those systems do not entail a high risk to legal and natural persons. Finally, AI systems used to dispatch or establish priority in the dispatching of emergency first response services should also be classified as high-risk since they make decisions in very critical situations for the life and health of persons and their property.
2022/06/13
Committee: IMCOLIBE
Amendment 572 #
Proposal for a regulation
Article 55 – paragraph 1 – point a
(a) provide small-scale providers and start-ups established in the EU with priority access to the AI regulatory sandboxes to the extent that they fulfil the eligibility conditions;
2022/03/31
Committee: ITRE
Amendment 584 #
Proposal for a regulation
Article 55 a (new)
Article 55 a Right not to be subject to non-compliant AI systems 1. Natural persons shall have the right not to be subject to AI systems that: (a) pose an unacceptable risk pursuant to Article 5, or (b) otherwise do not comply with the requirements of this Regulation.
2022/03/31
Committee: ITRE
Amendment 584 #
Proposal for a regulation
Recital 38
(38) Actions by law enforcement authorities involving certain uses of AI systems are characterised by a significant degree of power imbalance and may lead to surveillance, arrest or deprivation of a natural person’s liberty as well as other adverse impacts on fundamental rights guaranteed in the Charter. In particular, if the AI system is not trained with high quality data, does not meet adequate requirements in terms of its accuracy or robustness, or is not properly designed and tested before being put on the market or otherwise put into service, it may single out people in a discriminatory or otherwise incorrect or unjust manner. Furthermore, the exercise of important procedural fundamental rights, such as the right to an effective remedy and to a fair trial as well as the right of defence and the presumption of innocence, could be hampered, in particular, where such AI systems are not sufficiently transparent, explainable and documented and where a redress procedure is not foreseen. It is therefore appropriate to prohibit some AI systems intended to be used in the law enforcement context where accuracy, reliability and transparency is particularly important to avoid adverse impacts, retain public trust and ensure accountability and effective redress, including the availability of redress-by-design mechanisms and procedures. In view of the nature of the activities in question and the risks relating thereto, those prohibited systems should include in particular AI systems intended to be used by law enforcement authorities for individual risk assessments, polygraphs and similar tools or to detect the emotional state of a natural person, to detect ‘deep fakes’, for the evaluation of the reliability of evidence in criminal proceedings, for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons, or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups, for profiling in the course of detection, investigation or prosecution of criminal offences, as well as for crime analytics regarding natural persons. AI systems specifically intended to be used for administrative proceedings by tax and customs authorities should not be included in such a ban.
2022/06/13
Committee: IMCOLIBE
Amendment 585 #
Proposal for a regulation
Article 55 b (new)
Article 55 b
Right to information about the use and functioning of AI systems
1. Natural persons shall have the right to be informed that they have been exposed to high-risk AI systems as defined in Article 6, and other AI systems as defined in Article 52.
2. Natural persons shall have the right to be provided upon request with an explanation for decisions producing legal effects or otherwise significantly affecting them, or outcomes related to them, taken by or with the assistance of systems within the scope of this Regulation, pursuant to Article 52 paragraph (3b).
3. The information outlined in paragraphs 1 and 2 shall be provided in a clear, easily understandable and intelligible way, in a manner that is accessible for persons with disabilities.
2022/03/31
Committee: ITRE
Amendment 586 #
Proposal for a regulation
Article 55 c (new)
Article 55 c
Right to lodge a complaint with a national supervisory authority
1. Natural persons affected by the operation of AI systems within the scope of this Regulation, who consider that their rights under this Regulation have been infringed, shall have the right to lodge a complaint with a national supervisory authority in the Member State of their habitual residence, place of work, or place of the alleged infringement.
2. National supervisory authorities have the duty to investigate, in conjunction with the relevant market surveillance authority if applicable, the alleged infringement and inform the complainant, within a period of 3 months, of the outcome of the complaint, including the possibility of a judicial remedy pursuant to Article 55e.
2022/03/31
Committee: ITRE
Amendment 586 #
Proposal for a regulation
Recital 39
(39) AI systems used in migration, asylum and border control management affect people who are often in particularly vulnerable position and who are dependent on the outcome of the actions of the competent public authorities. The accuracy, non-discriminatory nature and transparency of the AI systems used in those contexts are therefore particularly important to guarantee the respect of the fundamental rights of the affected persons, notably their rights to free movement, non- discrimination, protection of private life and personal data, international protection and good administration. It is therefore appropriate to classify as high-risk AI systems intended to be used by the competent public authorities charged with tasks in the fields of migration, asylum and border control management as polygraphs and similar tools or to detect the emotional state of a natural person; for assessing certain risks posed by natural persons entering the territory of a Member State or applying for visa or asylum; for verifying the authenticity of the relevant documents of natural persons; for assisting competent public authorities for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the objective to establish the eligibility of the natural persons applying for a status. AI systems in the area of migration, asylum and border control management covered by this Regulation should comply with the relevant procedural requirements set by the Directive 2013/32/EU of the European Parliament and of the Council49 , the Regulation (EC) No 810/2009 of the European Parliament and of the Council50 and other relevant legislation. _________________ 49 Directive 2013/32/EU of the European Parliament and of the Council of 26 June 2013 on common procedures for granting and withdrawing international protection (OJ L 180, 29.6.2013, p. 60). 50 Regulation (EC) No 810/2009 of the European Parliament and of the Council of 13 July 2009 establishing a Community Code on Visas (Visa Code) (OJ L 243, 15.9.2009, p. 1).
2022/06/13
Committee: IMCOLIBE
Amendment 587 #
Proposal for a regulation
Article 55 d (new)
Article 55 d
Representation of natural persons and the right for public interest organisations to lodge a complaint with a national supervisory authority
1. Natural persons who consider that their rights under this Regulation have been infringed shall have the right to mandate a public interest organisation to lodge a complaint on their behalf with a national competent authority and to exercise on their behalf their rights as referred to in Articles 55c and 55e.
2. Public interest organisations shall have the right to lodge complaints with national competent authorities, independently of the mandate of the natural person, if they consider that an AI system has been placed on the market, put into service, or used in a way that infringes this Regulation, or is otherwise in violation of fundamental rights or other aspects of public interest protection, pursuant to Article 67.
2022/03/31
Committee: ITRE
Amendment 588 #
Proposal for a regulation
Article 55 e (new)
Article 55 e
Right to an effective remedy against the national supervisory authority
1. Without prejudice to any other administrative or non-judicial remedy, each natural or legal person shall have the right to an effective judicial remedy against a legally binding decision of a national supervisory authority concerning them.
2. Without prejudice to any other administrative or non-judicial remedy, each natural person shall have the right to an effective judicial remedy where the national supervisory authority does not handle a complaint or does not inform the person within three months on the progress or outcome of the complaint lodged pursuant to Articles 55c and 55d.
3. Proceedings against a national supervisory authority shall be brought before the courts of the Member State where the national supervisory authority is established.
2022/03/31
Committee: ITRE
Amendment 589 #
Proposal for a regulation
Article 55 f (new)
Article 55 f
Right to an effective remedy against a user for the infringement of rights
1. Without prejudice to any available administrative or non-judicial remedy, any natural person shall have the right to an effective judicial remedy against a user where they consider that their rights under this Regulation have been infringed or that they have been subject to an AI system otherwise in non-compliance with this Regulation.
2. Any person who has suffered material or non-material damage as a result of an infringement of this Regulation shall have the right to receive compensation from the user for the damage suffered.
2022/03/31
Committee: ITRE
Amendment 593 #
Proposal for a regulation
Article 56 – paragraph 2 – point c a (new)
(ca) launch an evaluation procedure for an AI system.
2022/03/31
Committee: ITRE
Amendment 593 #
Proposal for a regulation
Recital 39 a (new)
(39 a) AI systems in migration, asylum and border control management should in no circumstances be used by Member States or European Union institutions as a means to circumvent their international obligations under the Convention of 28 July 1951 relating to the Status of Refugees as amended by the Protocol of 31 January 1967, nor should they be used in any way to infringe on the principle of non-refoulement, or deny safe and effective legal avenues into the territory of the Union, including the right to international protection;
2022/06/13
Committee: IMCOLIBE
Amendment 594 #
Proposal for a regulation
Article 56 – paragraph 2 – point c b (new)
(cb) assist providers and users of AI systems, in particular SMEs and start-ups, to meet the requirements of this Regulation.
2022/03/31
Committee: ITRE
Amendment 594 #
Proposal for a regulation
Recital 39 a (new)
(39 a) AI systems in migration, asylum and border control management should in no circumstances be used by Member States or European Union institutions as a means to circumvent their international obligations under the Convention of 28 July 1951 relating to the Status of Refugees as amended by the Protocol of 31 January 1967, nor should they be used in any way to infringe on the principle of non-refoulement, or deny safe and effective legal avenues into the territory of the Union, including the right to international protection;
2022/06/13
Committee: IMCOLIBE
Amendment 596 #
Proposal for a regulation
Article 57 – paragraph 1
1. The Board shall be composed of the national supervisory authorities, who shall be represented by the head or equivalent high-level official of that authority, and the European Data Protection Supervisor, the EU Agency for Fundamental Rights, ENISA and EIGE. Other national authorities may be invited to the meetings, where the issues discussed are of relevance for them.
2022/03/31
Committee: ITRE
Amendment 599 #
Proposal for a regulation
Article 57 – paragraph 4
4. The Board shall be reinforced on a technical level by the creation of a specialised body of external experts and observers. To that end the Commission may facilitate exchanges between the Board and other Union bodies, offices, agencies and the specialised body. The composition of the specialised body shall ensure fair representation of consumer organisations, social partners, civil society organisations and academics specialised in AI. Its meetings and their minutes shall be published online.
2022/03/31
Committee: ITRE
Amendment 600 #
Proposal for a regulation
Recital 40 a (new)
(40 a) Another area in which the use of AI systems deserves special consideration is the use for health-related purposes, including healthcare. Next to medical devices (as per Regulation (EU) 2017/745), other health-related AI systems also bring about risks which should be regulated. These include systems that influence individuals’ health outcomes but do not meet the criteria for a medical device, systems that influence population health outcomes or health equality, systems that impact the distribution of healthcare resources and systems used by pharmaceutical and medical technology companies in research and development, pharmacovigilance, market optimisation and pharmaceutical marketing. Bias and errors in health-related AI systems can have major and immediate consequences for individuals’ and populations’ health and wellbeing. Further, many systems will use sensitive and personal data, which needs to be justified, and about which patients need to be properly informed. What is more, systems that work on hospital, health system, or population level may have a major effect on societal health because they influence the distribution of healthcare resources and health policy design. For these reasons, there is a need for trustworthy AI in healthcare, meaning people must be able to trust that systems used in healthcare are scientifically, technically and clinically valid, safe and accountable, and safeguard individuals’ autonomy and privacy.
2022/06/13
Committee: IMCOLIBE
Amendment 601 #
Proposal for a regulation
Article 58 – paragraph 1 – point c – introductory part
(c) issue guidelines, opinions, recommendations or written contributions on matters related to the implementation of this Regulation, in particular
2022/03/31
Committee: ITRE
Amendment 603 #
Proposal for a regulation
Article 58 – paragraph 1 – point c – point iii a (new)
(iiia) on the need for the amendment of the Annexes,
2022/03/31
Committee: ITRE
Amendment 605 #
Proposal for a regulation
Article 58 – paragraph 1 – point c a (new)
(ca) to provide specific guidance and assistance to SMEs and start-ups regarding compliance with the obligations set out in this Regulation;
2022/03/31
Committee: ITRE
Amendment 607 #
Proposal for a regulation
Article 60 – title
EU database for stand-alone high-risk AI systems and certain AI systems, uses thereof, and uses of AI systems by public authorities
2022/03/31
Committee: ITRE
Amendment 608 #
Proposal for a regulation
Article 60 – paragraph 1
1. The Commission shall, in collaboration with the Member States, set up and maintain an EU database containing information referred to in paragraph 2 concerning:
a. high-risk AI systems referred to in Article 6(2) which are registered in accordance with Article 51(1);
b. any AI system referred to in Article 52 paragraphs 1b and 2 which are registered in accordance with Article 51(1);
c. any uses of high-risk AI systems referred to in Article 6(2) which are registered in accordance with Article 51(2);
d. any uses of AI systems referred to in Article 52 paragraphs 1b and 2 which are registered in accordance with Article 51(2);
e. any uses of AI systems by or on behalf of public authorities registered in accordance with Article 51(3).
2022/03/31
Committee: ITRE
Amendment 609 #
Proposal for a regulation
Article 60 – paragraph 2
2. The data listed in Annex VIII shall be entered into the EU database by the providers and users. The Commission shall provide them with technical and administrative support. The following information should be included in the EU database: (a) for registrations according to paragraphs 1(a) and 1(b), the data listed in Annex VIII, point 1, shall be entered into the EU database by the providers; (b) for registrations according to paragraphs 1(c), 1(d) and 1(e), the data listed in Annex VIII, point 2, shall be entered into the EU database by the users.
2022/03/31
Committee: ITRE
Amendment 610 #
Proposal for a regulation
Article 60 – paragraph 3
3. Information contained in the EU database shall be accessible to the public, comply with the accessibility requirements of Annex I to Directive 2019/882, and be user-friendly, navigable, and machine-readable, containing structured digital data based on a standardised protocol.
2022/03/31
Committee: ITRE
Amendment 611 #
Proposal for a regulation
Article 60 – paragraph 4
4. The EU database shall contain personal data only insofar as necessary for collecting and processing information in accordance with this Regulation. That information shall include the names and contact details of natural persons who are responsible for registering the system and have the legal authority to represent the provider or the user.
2022/03/31
Committee: ITRE
Amendment 612 #
Proposal for a regulation
Article 60 – paragraph 5
5. The Commission shall be the controller of the EU database. It shall also ensure adequate technical and administrative support to providers and users, in particular in relation to registrations according to paragraph 1(e).
2022/03/31
Committee: ITRE
Amendment 616 #
Proposal for a regulation
Recital 42
(42) To mitigate the risks from high-risk AI systems placed or otherwise put into service on the Union market for users and affected persons, certain mandatory requirements should apply, taking into account the intended purpose or reasonably foreseeable use of the system and according to the risk management system to be established by the provider.
2022/06/13
Committee: IMCOLIBE
Amendment 617 #
Proposal for a regulation
Recital 42
(42) To mitigate the risks from high-risk AI systems placed or otherwise put into service on the Union market for users and affected persons, certain mandatory requirements should apply, taking into account the foreseeable uses of the system and according to the risk management system to be established by the provider.
2022/06/13
Committee: IMCOLIBE
Amendment 619 #
Proposal for a regulation
Article 65 – paragraph 1
1. AI systems presenting a risk shall be understood as a product presenting a risk defined in Article 3, point 19 of Regulation (EU) 2019/1020 insofar as risks to the health or safety in general, including safety in the workplace, protection of consumers, the environment, or to the protection of fundamental rights of persons are concerned, including autonomy of choice, access to goods and services, unfair discrimination and economic harm, privacy and data protection, as well as societal risks.
2022/03/31
Committee: ITRE
Amendment 620 #
Proposal for a regulation
Article 65 – paragraph 2 – introductory part
2. Where the market surveillance authority of a Member State has sufficient reasons to consider that an AI system presents a risk as referred to in paragraph 1, they shall carry out an evaluation of the AI system concerned in respect of its compliance with all the requirements and obligations laid down in this Regulation. When risks to the protection of fundamental rights are present, the market surveillance authority shall also inform the Board and the relevant national public authorities or bodies referred to in Article 64(3). The relevant operators shall cooperate as necessary with the market surveillance authorities and the other national public authorities or bodies referred to in Article 64(3).
2022/03/31
Committee: ITRE
Amendment 620 #
Proposal for a regulation
Recital 43
(43) Requirements should apply to high-risk AI systems as regards the quality of data sets used, technical documentation and record-keeping, transparency and the provision of information to users, human oversight, and robustness, accuracy and cybersecurity. Those requirements are necessary to effectively mitigate the risks for health, safety and fundamental rights, as applicable in the light of the intended purpose or reasonably foreseeable use of the system, and no other less trade restrictive measures are reasonably available, thus avoiding unjustified restrictions to trade.
2022/06/13
Committee: IMCOLIBE
Amendment 621 #
Proposal for a regulation
Article 66 – paragraph 1
1. Where, within three months of receipt of the notification referred to in Article 65(5), objections are raised by the European Parliament or a Member State against a measure taken by another Member State, or where the Commission considers the measure to be contrary to Union law, or has sufficient reasons to believe that an AI system presents a risk or affects consumers in more than one Member State, the Commission shall without delay enter into consultation with the relevant Member State and operator or operators and shall evaluate the national measure. On the basis of the results of that evaluation, the Commission shall decide whether the national measure is justified or not within 9 months from the notification referred to in Article 65(5) and notify such decision to the Member State concerned.
2022/03/31
Committee: ITRE
Amendment 621 #
Proposal for a regulation
Recital 43
(43) Requirements should apply to high-risk AI systems as regards the quality of data sets used, technical documentation and record-keeping, transparency and the provision of information to users, human oversight, and robustness, accuracy and cybersecurity. Those requirements are necessary to effectively mitigate the risks for health, safety and fundamental rights, as applicable in the light of the foreseeable uses of the system, and no other less trade restrictive measures are reasonably available, thus avoiding unjustified restrictions to trade.
2022/06/13
Committee: IMCOLIBE
Amendment 622 #
Proposal for a regulation
Article 66 – paragraph 3
3. Where the national measure is considered justified and the non-compliance of the AI system is attributed to shortcomings in the harmonised standards or common specifications referred to in Articles 40 and 41 of this Regulation, the Commission shall apply the procedure provided for in Article 11 of Regulation (EU) No 1025/2012. The Commission shall also have the possibility to suggest alternative measures to the Member State concerned.
2022/03/31
Committee: ITRE
Amendment 626 #
Proposal for a regulation
Recital 44
(44) High data quality is essential for the performance of many AI systems, especially when techniques involving the training of models are used, with a view to ensure that the high-risk AI system performs as intended and safely and it does not become the source of discrimination prohibited by Union law. High quality training, validation and testing data sets require the implementation of appropriate data governance and management practices. Training, validation and testing data sets should be sufficiently relevant, representative and free of errors and complete in view of the foreseeable uses of the system. They should also have the appropriate statistical properties, including as regards the persons or groups of persons on which the high-risk AI system is intended to be used. In particular, training, validation and testing data sets should take into account, to the extent required in the light of their foreseeable uses, the features, characteristics or elements that are particular to the specific geographical, behavioural or functional setting or context within which the AI system is intended to be used. In order to protect the rights of others from the discrimination that might result from the bias in AI systems, the providers should be able to process also special categories of personal data, as a matter of substantial public interest, in order to ensure the bias monitoring, detection and correction in relation to high-risk AI systems.
2022/06/13
Committee: IMCOLIBE
Amendment 627 #
Proposal for a regulation
Recital 44
(44) High data quality is essential for the performance of many AI systems, especially when techniques involving the training of models are used, with a view to ensure that the high-risk AI system performs as intended and safely and it does not become the source of discrimination prohibited by Union law. High quality training, validation and testing data sets require the implementation of appropriate data governance and management practices. Training, validation and testing data sets should be sufficiently relevant, representative and free of errors and complete in view of the intended purpose or reasonably foreseeable use of the system. They should also have the appropriate statistical properties, including as regards the persons or groups of persons on which the high-risk AI system is intended to be used. In particular, training, validation and testing data sets should take into account, to the extent required in the light of their intended purpose or reasonably foreseeable use, the features, characteristics or elements that are particular to the specific geographical, behavioural or functional setting or context within which the AI system is intended or foreseeable to be used. In order to protect the rights of others from the discrimination that might result from the bias in AI systems, the providers should be able to process also special categories of personal data, as a matter of substantial public interest, in order to ensure the bias monitoring, detection and correction in relation to high-risk AI systems.
2022/06/13
Committee: IMCOLIBE
Amendment 636 #
(aa) AI systems intended to be used to make inferences on the basis of biometric data, including emotion recognition systems, or biometrics-based data, including speech patterns, tone of voice, lip-reading and body language analysis, that produce legal effects or affect the rights and freedoms of natural persons.
2022/03/31
Committee: ITRE
Amendment 642 #
Proposal for a regulation
Annex III – paragraph 1 – point 5 – point b
(b) AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems put into service by small scale providers for their own use;deleted
2022/03/31
Committee: ITRE
Amendment 645 #
Proposal for a regulation
Annex III – paragraph 1 – point 5 – point c a (new)
(ca) AI systems intended for making individual risk assessments of natural persons in the context of access to private and public services, including determining the amounts of insurance premiums.
2022/03/31
Committee: ITRE
Amendment 646 #
Proposal for a regulation
Annex III – paragraph 1 – point 5 – point c b (new)
(cb) AI systems intended for or used in the context of payment and debt collection services.
2022/03/31
Committee: ITRE
Amendment 647 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point a
(a) AI systems intended to be used by law enforcement authorities for making individual risk assessments of natural persons in order to assess the risk of a natural person for offending or reoffending or the risk for potential victims of criminal offences;deleted
2022/03/31
Committee: ITRE
Amendment 648 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point b
(b) AI systems intended to be used by law enforcement authorities as polygraphs and similar tools or to detect the emotional state of a natural person;deleted
2022/03/31
Committee: ITRE
Amendment 649 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point e
(e) AI systems intended to be used by law enforcement authorities for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups;deleted
2022/03/31
Committee: ITRE
Amendment 650 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point a
(a) AI systems intended to be used by competent public authorities as polygraphs and similar tools or to detect the emotional state of a natural person;deleted
2022/03/31
Committee: ITRE
Amendment 651 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point d
(d) AI systems intended to assist competent public authorities for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the eligibility of the natural persons applying for a status.deleted
2022/03/31
Committee: ITRE
Amendment 651 #
Proposal for a regulation
Recital 51
(51) Cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts to alter their use, behaviour, performance or compromise their security properties by malicious third parties exploiting the system’s vulnerabilities. Cyberattacks against AI systems can leverage AI-specific assets, such as training data sets (e.g. data poisoning) or trained models (e.g. adversarial attacks), or exploit vulnerabilities in the AI system’s digital assets or the underlying ICT infrastructure. To ensure a level of cybersecurity appropriate to the risks, suitable measures should therefore be taken by the providers of high-risk AI systems, as well as the notified bodies, competent national authorities and market surveillance authorities, also taking into account as appropriate the underlying ICT infrastructure.
2022/06/13
Committee: IMCOLIBE
Amendment 658 #
Proposal for a regulation
Annex VIII
INFORMATION TO BE SUBMITTED UPON THE REGISTRATION OF HIGH-RISK AI SYSTEMS IN ACCORDANCE WITH ARTICLE 51 The following information shall be provided and thereafter kept up to date with regard to high-risk AI systems to be registered in accordance with Article 51. 1. Name, address and contact details of the provider; 2. Where submission of information is carried out by another person on behalf of the provider, the name, address and contact details of that person; 3. Name, address and contact details of the authorised representative, where applicable; 4. AI system trade name and any additional unambiguous reference allowing identification and traceability of the AI system; 5. Description of the intended purpose of the AI system; 6. Status of the AI system (on the market, or in service; no longer placed on the market/in service, recalled); 7. Type, number and expiry date of the certificate issued by the notified body and the name or identification number of that notified body, when applicable; 8. A scanned copy of the certificate referred to in point 7, when applicable; 9. Member States in which the AI system is or has been placed on the market, put into service or made available in the Union; 10. A copy of the EU declaration of conformity referred to in Article 48; 11. Electronic instructions for use; this information shall not be provided for high-risk AI systems in the areas of law enforcement and migration, asylum and border control management referred to in Annex III, points 1, 6 and 7. 12. URL for additional information (optional).deleted
2022/03/31
Committee: ITRE
Amendment 659 #
Proposal for a regulation
Annex VIII – title
INFORMATION TO BE SUBMITTED UPON THE REGISTRATION OF HIGH-RISK AI SYSTEMS, USES THEREOF, AND USES OF AI SYSTEMS BY PUBLIC AUTHORITIES IN ACCORDANCE WITH ARTICLE 51
2022/03/31
Committee: ITRE
Amendment 660 #
Proposal for a regulation
Annex VIII – paragraph 1
1. The following information shall be provided and thereafter kept up to date by the provider with regard to high-risk AI systems referred to in Article 6(2) and to any AI system referred to in Article 52(1b) and (2) to be registered in accordance with Article 51(1):
(a) Name, address and contact details of the provider;
(b) Where submission of information is carried out by another person on behalf of the provider, the name, address and contact details of that person;
(c) Name, address and contact details of the authorised representative, where applicable;
(d) AI system trade name and any additional unambiguous reference allowing identification and traceability of the AI system;
(e) Description of the intended purpose of the AI system;
(f) Status of the AI system (on the market, or in service; no longer placed on the market/in service, recalled);
(g) Type, number and expiry date of the certificate issued by the notified body and the name or identification number of that notified body, when applicable;
(h) A scanned copy of the certificate referred to in point (g), when applicable;
(i) Member States in which the AI system is or has been placed on the market, put into service or made available in the Union;
(j) A copy of the EU declaration of conformity referred to in Article 48;
(k) Electronic instructions for use as listed in Article 13(3) and a basic explanation of the general logic and key design choices as listed in Annex IV, point 2(b), and of optimisation choices as listed in Annex IV, point 3;
(l) Assessment of the environmental impact, including but not limited to resource consumption, resulting from the design, data management and training, and underlying infrastructures of the AI system, and of the methods to reduce such impact;
(m) A description of how the system meets the relevant accessibility requirements of Annex I to Directive 2019/882;
(n) URL for additional information (optional).
2022/03/31
Committee: ITRE
Amendment 661 #
Proposal for a regulation
Annex VIII – paragraph 1 a (new)
2. The following information shall be provided and thereafter kept up to date by the user with regard to uses of high-risk AI systems referred to in Article 6(2) and any AI system referred to in Article 52(1b) and (2) to be registered in accordance with Article 51(2):
(a) Name, address and contact details of the user;
(b) Where submission of information is carried out by another person on behalf of the user, the name, address and contact details of that person;
(c) Name, address and contact details of the authorised representative, where applicable;
(d) URL of the entry of the AI system in the EU database by its provider, or, where unavailable, AI system trade name and any additional unambiguous reference allowing identification and traceability of the AI system;
(e) Description of the intended use of the AI system;
(f) Description of the context and the geographical and temporal scope of application of the intended use of the AI system;
(g) Basic explanation of the design specifications of the system, namely the general logic of the AI system and of the algorithms; the key design choices, including the rationale and assumptions made, also with regard to the categories of persons or groups of persons on which the system is intended to be used; the main classification choices; and what the system is designed to optimise for and the relevance of the different parameters;
(h) For high-risk AI systems and for systems referred to in Article 52(1b) and (2), designation of persons foreseeably impacted by the intended use of the AI system, as required by Article X;
(i) For high-risk AI systems, results of the impact assessment on the use of the AI system that is conducted under obligations imposed by Article XX of this Regulation. Where full public disclosure of these results cannot be granted for reasons of privacy and data protection, disclosure must be granted to the national supervisory authority, which in turn must be indicated in the EU database;
(j) A description of how the relevant accessibility requirements set out in Annex I to Directive 2019/882 are met by the use of the AI system.
2022/03/31
Committee: ITRE
Amendment 662 #
Proposal for a regulation
Annex VIII – paragraph 1 b (new)
3. The following information shall be provided and thereafter kept up to date by the user with regard to uses of AI systems by public authorities to be registered in accordance with Article 51(3):
(a) Name, address and contact details of the user;
(b) Where submission of information is carried out by another person on behalf of the user, the name, address and contact details of that person;
(c) Name, address and contact details of the authorised representative, where applicable;
(d) For high-risk AI systems, URL of the entry of the AI system in the EU database by its provider, or, for non-high-risk systems, AI system trade name and any additional unambiguous reference allowing identification and traceability of the AI system;
(e) Description of the intended use of the AI system;
(f) Description of the context and the geographical and temporal scope of application of the intended use of the AI system;
(g) Basic explanation of the design specifications of the system, namely the general logic of the AI system and of the algorithms; the key design choices, including the rationale and assumptions made, also with regard to the categories of persons or groups of persons on which the system is intended to be used; the main classification choices; and what the system is designed to optimise for and the relevance of the different parameters;
(h) Designation of persons foreseeably impacted by the intended use of the AI system;
(i) If available, results of any impact assessment or due diligence process regarding the use of the AI system that the user has conducted;
(j) Assessment of the foreseeable impact on the environment, including but not limited to energy consumption, resulting from the use of the AI system over its entire lifecycle, and of the methods to reduce such impact;
(k) A description of how the relevant accessibility requirements set out in Annex I to Directive 2019/882 are met by the use of the AI system.
2022/03/31
Committee: ITRE
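The three Annex VIII paragraphs above define, in effect, the record structures of the EU database: one record type entered by providers and two entered by users. Purely as an illustration of how a provider-side record under point 1 could be modelled, a minimal Python sketch follows; the class and field names are the editor's assumptions, not part of the amendments:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ProviderRegistration:
    """Illustrative subset of the Annex VIII, point 1 data a provider
    would enter into the EU database under Article 51(1)."""
    provider_name: str
    provider_address: str
    provider_contact: str
    trade_name: str                          # unambiguous reference to the AI system
    intended_purpose: str
    status: str                              # e.g. "on the market", "recalled"
    member_states: List[str]                 # where placed on the market or put into service
    certificate_number: Optional[str] = None       # when a notified body was involved
    environmental_impact: Optional[str] = None     # point (l) assessment
    accessibility_statement: Optional[str] = None  # point (m), Directive 2019/882
    additional_info_url: Optional[str] = None      # point (n), optional
```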
Amendment 693 #
Proposal for a regulation
Recital 66
(66) In line with the commonly established notion of substantial modification for products regulated by Union harmonisation legislation, it is appropriate that an AI system undergoes a new conformity assessment whenever a change occurs which may affect the compliance of the system with this Regulation or when the intended purpose or reasonably foreseeable use of the system changes. In addition, as regards AI systems which continue to ‘learn’ after being placed on the market or put into service (i.e. they automatically adapt how functions are carried out), it is necessary to provide rules establishing that changes to the algorithm and its performance that have been pre-determined by the provider and assessed at the moment of the conformity assessment should not constitute a substantial modification.
2022/06/13
Committee: IMCOLIBE
Amendment 695 #
Proposal for a regulation
Recital 66
(66) In line with the commonly established notion of substantial modification for products regulated by Union harmonisation legislation, it is appropriate that an AI system undergoes a new conformity assessment whenever a change occurs which may affect the compliance of the system with this Regulation or when the foreseeable uses of the system change. In addition, as regards AI systems which continue to ‘learn’ after being placed on the market or put into service (i.e. they automatically adapt how functions are carried out), it is necessary to provide rules establishing that changes to the algorithm and its performance that have been pre-determined by the provider and assessed at the moment of the conformity assessment should not constitute a substantial modification.
2022/06/13
Committee: IMCOLIBE
Amendment 702 #
Proposal for a regulation
Recital 69
(69) In order to facilitate the work of the Commission and the Member States in the artificial intelligence field as well as to increase the transparency towards the public, providers and users of high-risk AI systems other than those related to products falling within the scope of relevant existing Union harmonisation legislation, should be required to register their high-risk AI system or the use thereof in an EU database, to be established and managed by the Commission. Certain AI systems listed in Article 52(1b) and (2) and uses thereof shall be registered in the EU database. In order to facilitate this, users shall request information listed in Annex VIII, point 2(g), from providers of AI systems. Any uses of AI systems by public authorities or on their behalf shall also be registered in the EU database. In order to facilitate this, public authorities shall request information listed in Annex VIII, point 3(g), from providers of AI systems. The Commission should be the controller of that database, in accordance with Regulation (EU) 2018/1725 of the European Parliament and of the Council55. In order to ensure the full functionality of the database, when deployed, the procedure for setting up the database should include the elaboration of functional specifications by the Commission and an independent audit report. In order to maximise the availability and use of the database by the public, the database, including the information made available through it, should comply with requirements under the European Accessibility Act. _________________ 55 Regulation (EU) 2018/1725 of the European Parliament and of the Council of 23 October 2018 on the protection of natural persons with regard to the processing of personal data by the Union institutions, bodies, offices and agencies and on the free movement of such data, and repealing Regulation (EC) No 45/2001 and Decision No 1247/2002/EC (OJ L 295, 21.11.2018, p. 39).
2022/06/13
Committee: IMCOLIBE
Amendment 703 #
Proposal for a regulation
Recital 69
(69) In order to facilitate the work of the Commission and the Member States in the artificial intelligence field as well as to increase the transparency towards the public, providers and users of high-risk AI systems other than those related to products falling within the scope of relevant existing Union harmonisation legislation, should be required to register their high-risk AI system or the use thereof in an EU database, to be established and managed by the Commission. The Commission should be the controller of that database, in accordance with Regulation (EU) 2018/1725 of the European Parliament and of the Council55. In order to ensure the full functionality of the database, when deployed, the procedure for setting up the database should include the elaboration of functional specifications by the Commission and an independent audit report. In order to maximise the availability and use of the database by the public, the database, including the information made available through it, should comply with requirements under the European Accessibility Act. _________________ 55 Regulation (EU) 2018/1725 of the European Parliament and of the Council of 23 October 2018 on the protection of natural persons with regard to the processing of personal data by the Union institutions, bodies, offices and agencies and on the free movement of such data, and repealing Regulation (EC) No 45/2001 and Decision No 1247/2002/EC (OJ L 295, 21.11.2018, p. 39).
2022/06/13
Committee: IMCOLIBE
Amendment 718 #
Proposal for a regulation
Recital 71
(71) Artificial intelligence is a rapidly developing family of technologies that requires novel forms of regulatory oversight and a safe and fully controlled space for experimentation, while ensuring responsible innovation and integration of appropriate ethical safeguards and risk mitigation measures. To ensure a legal framework that is innovation-friendly, future-proof and resilient to disruption, national competent authorities from one or more Member States should be encouraged to establish artificial intelligence regulatory sandboxes to facilitate the development and testing of innovative AI systems under strict regulatory oversight before these systems are placed on the market or otherwise put into service. Regulatory sandboxes involving activities that may impact health, safety and fundamental rights, democracy and the rule of law or the environment should be developed in accordance with redress-by-design principles. Any significant risks identified during the development and testing of such systems should result in immediate mitigation and, failing that, in the suspension of the development and testing process until such mitigation takes place. The legal basis of such sandboxes should comply with the requirements established in the existing data protection framework and should be consistent with the Charter of fundamental rights of the European Union.
2022/06/13
Committee: IMCOLIBE
Amendment 725 #
Proposal for a regulation
Recital 72
(72) The objectives of the regulatory sandboxes should be to foster AI innovation by establishing a strictly controlled experimentation and testing environment in the development and pre-marketing phase with a view to ensuring compliance of the innovative AI systems with this Regulation and other relevant Union and Member States legislation, as well as with the Charter of Fundamental Rights of the European Union and the General Data Protection Regulation; to enhance legal certainty for innovators and the competent authorities’ oversight and understanding of the opportunities, emerging risks and the impacts of AI use, to provide safeguards needed to build trust and reliance on AI systems, to accelerate access to markets, including by removing barriers for the public sector, small and medium enterprises (SMEs) and start-ups; and to contribute to the development of ethical, socially responsible and environmentally sustainable AI systems. To ensure uniform implementation across the Union and economies of scale, it is appropriate to establish common rules for the regulatory sandboxes’ implementation and a framework for cooperation between the relevant authorities involved in the supervision of the sandboxes. This Regulation should provide the legal basis for the use of personal data collected for other purposes for developing certain AI systems in the public interest within the AI regulatory sandbox, in line with Article 6(4) of Regulation (EU) 2016/679, and Article 6 of Regulation (EU) 2018/1725, and without prejudice to Article 4(2) of Directive (EU) 2016/680. Participants in the sandbox should ensure appropriate safeguards and cooperate with the competent authorities, including by following their guidance and acting expeditiously and in good faith to mitigate any high risks to safety and fundamental rights that may arise during the development and experimentation in the sandbox. The conduct of the participants in the sandbox should be taken into account when competent authorities decide whether to impose an administrative fine under Article 83(2) of Regulation 2016/679 and Article 57 of Directive 2016/680.
2022/06/13
Committee: IMCOLIBE
Amendment 736 #
Proposal for a regulation
Recital 74
(74) In order to minimise the risks to implementation resulting from lack of knowledge and expertise in the market as well as to facilitate compliance of providers and notified bodies with their obligations under this Regulation, the AI-on-demand platform, the European Digital Innovation Hubs and the Testing and Experimentation Facilities established by the Commission and the Member States at national or EU level, as well as ENISA, the EU Agency for Fundamental Rights, EIGE and the European Data Protection Supervisor, should constantly contribute to the implementation of this Regulation. Within their respective mission and fields of competence, they may provide in particular technical and scientific support to providers and notified bodies.
2022/06/13
Committee: IMCOLIBE
Amendment 766 #
Proposal for a regulation
Recital 84 a (new)
(84 a) Union legislation on the protection of whistleblowers (Directive (EU) 2019/1937) has full application to academics, designers, developers, project contributors, auditors, product managers, engineers and economic operators acquiring information on breaches of Union law by a provider of an AI system or its AI system, even if they are not explicitly mentioned in Article 4(1), points (a) to (d), of that Directive.
2022/06/13
Committee: IMCOLIBE
Amendment 768 #
Proposal for a regulation
Recital 84 b (new)
(84 b) Union legislation on consumer protection (notably Directives (EU) 2019/2161, 2005/29/EC and 2011/83/EU) applies to AI systems to the extent determined in that legislation, regardless of whether these systems are categorised as high-risk.
2022/06/13
Committee: IMCOLIBE
Amendment 781 #
Proposal for a regulation
Article 1 – paragraph -1 (new)
-1. The purpose of this Regulation is to ensure a high level of protection of health, safety, fundamental rights and the environment from harmful effects of artificial intelligence systems ("AI systems") in the Union, while enhancing innovation.
2022/06/13
Committee: IMCOLIBE
Amendment 788 #
Proposal for a regulation
Article 1 – paragraph 1 – point a
(a) harmonised rules for the development, placing on the market, the putting into service and the use of artificial intelligence systems (‘AI systems’) in the Union;
2022/06/13
Committee: IMCOLIBE
Amendment 809 #
Proposal for a regulation
Article 1 – paragraph 1 a (new)
The purpose of this Regulation is to ensure protection of health, safety, fundamental rights and the environment from harmful effects of artificial intelligence systems in the Union, while supporting innovation.
2022/06/13
Committee: IMCOLIBE
Amendment 810 #
Proposal for a regulation
Article 1 – paragraph 1 a (new)
These provisions shall apply to AI systems as a product, service or practice, or as part of a product, service or practice.
2022/06/13
Committee: IMCOLIBE
Amendment 812 #
Proposal for a regulation
Article 1 – paragraph 1 b (new)
This Regulation is based on the principle that it is for developers, importers, distributors and downstream users to ensure that they develop, place on the market or use artificial intelligence that does not adversely affect health, safety, fundamental rights, and the environment. Its provisions are underpinned by the precautionary principle.
2022/06/13
Committee: IMCOLIBE
Amendment 813 #
Proposal for a regulation
Article 1 – paragraph 1 b (new)
This Regulation is based on the principle that it is for developers, importers, distributors and downstream users to ensure that they develop, place on the market or use artificial intelligence that does not adversely affect health, safety, fundamental rights, or the environment. Its provisions are underpinned by the precautionary principle.
2022/06/13
Committee: IMCOLIBE
Amendment 814 #
Proposal for a regulation
Article 1 – paragraph 1 c (new)
Any processing of personal data for the purposes of this Regulation shall take place in accordance with Union legislation for the protection of personal data, in particular Regulation 2016/679, Directive 2016/680, Regulation 2018/1725 and Directive 2002/58.
2022/06/13
Committee: IMCOLIBE
Amendment 817 #
Proposal for a regulation
Article 2 – paragraph 1 – point a a (new)
(a a) providers of AI systems that have their main establishment in the EU;
2022/06/13
Committee: IMCOLIBE
Amendment 823 #
Proposal for a regulation
Article 2 – paragraph 1 – point b a (new)
(b a) natural persons affected by the use of AI systems;
2022/06/13
Committee: IMCOLIBE
Amendment 839 #
Proposal for a regulation
Article 2 – paragraph 1 a (new)
1 a. This Regulation shall also apply to Union institutions, offices and agencies where they develop, deploy or otherwise make use of AI systems.
2022/06/13
Committee: IMCOLIBE
Amendment 873 #
Proposal for a regulation
Article 2 – paragraph 3 a (new)
3 a. Any exemptions from the application of this Act to AI systems used exclusively by Member States for national security purposes will be without prejudice to the application of Union law to any activity carried out by the Union or by a Member State that is subject to Union law.
2022/06/13
Committee: IMCOLIBE
Amendment 880 #
Proposal for a regulation
Article 2 – paragraph 4
4. This Regulation shall not apply to public authorities in a third country nor to international organisations falling within the scope of this Regulation pursuant to paragraph 1, where those authorities or organisations use AI systems in the framework of international agreements for law enforcement and judicial cooperation with the Union or with one or more Member States.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 885 #
Proposal for a regulation
Article 2 – paragraph 5 a (new)
5 a. The use of any AI system that is in line with this Regulation should also continue to comply with the Charter of Fundamental Rights of the European Union, secondary Union law and national law. This Regulation shall not provide the legal ground for unlawful AI development, deployment or use.
2022/06/13
Committee: IMCOLIBE
Amendment 886 #
Proposal for a regulation
Article 2 – paragraph 5 a (new)
5 a. An AI system or practice that is in line with this Regulation should also continue to comply with the Charter of Fundamental Rights of the European Union, existing and new secondary Union law and national law.
2022/06/13
Committee: IMCOLIBE
Amendment 894 #
Proposal for a regulation
Article 2 – paragraph 5 b (new)
5 b. Member States may adopt or maintain in force more stringent provisions, compatible with the Treaty, in the field covered by this Regulation, to ensure a higher level of protection of health, safety and fundamental rights.
2022/06/13
Committee: IMCOLIBE
Amendment 896 #
Proposal for a regulation
Article 2 – paragraph 5 b (new)
5 b. This Regulation shall be without prejudice to Regulation (EU) 2016/679.
2022/06/13
Committee: IMCOLIBE
Amendment 900 #
Proposal for a regulation
Article 2 – paragraph 5 d (new)
5 d. This Regulation shall be without prejudice to national labour law and practice, that is, any legal or contractual provision concerning employment conditions, working conditions, including health and safety at work and the relationship between employers and workers, including information, consultation and participation.
2022/06/13
Committee: IMCOLIBE
Amendment 901 #
Proposal for a regulation
Article 2 – paragraph 5 e (new)
5 e. This Regulation shall not in any way affect the exercise of fundamental rights as recognised in the Member States and at Union level, including the right or freedom to strike or to take other action covered by the specific industrial relations systems in Member States, in accordance with national law and/or practice. Nor does it affect the right to negotiate, to conclude and enforce collective agreements, or to take collective action in accordance with national law and/or practice.
2022/06/13
Committee: IMCOLIBE
Amendment 919 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1
(1) ‘artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with;
2022/06/13
Committee: IMCOLIBE
Amendment 943 #
Proposal for a regulation
Article 3 – paragraph 1 – point 4
(4) ‘user’ means any natural or legal person, data subject, public authority, agency or other body using an AI system under its authority and on its own responsibility, except where the AI system is used in the course of a personal non- professional activity;
2022/06/13
Committee: IMCOLIBE
Amendment 970 #
Proposal for a regulation
Article 3 – paragraph 1 – point 12 a (new)
(12 a) ‘foreseeable uses’ means uses that can reasonably be expected to be made of an AI system, including but not limited to the use for which the AI system is intended for consumers or the likely use by consumers under reasonably foreseeable conditions;
2022/06/13
Committee: IMCOLIBE
Amendment 971 #
Proposal for a regulation
Article 3 – paragraph 1 – point 12 a (new)
(12 a) 'reasonably foreseeable use' means the use of an AI system in a way that is or should be reasonably foreseeable;
2022/06/13
Committee: IMCOLIBE
Amendment 985 #
Proposal for a regulation
Article 3 – paragraph 1 – point 14
(14) ‘safety component of a product or system’ means a component of a product or of a system which fulfils a direct or indirect safety function for that product or system or the failure or malfunctioning of which endangers the health and safety of persons or property;
2022/06/13
Committee: IMCOLIBE
Amendment 1014 #
Proposal for a regulation
Article 3 – paragraph 1 – point 29
(29) ‘training data’ means data used for training an AI system to fit its learnable parameters, including the weights of a neural network;
2022/06/13
Committee: IMCOLIBE
Amendment 1016 #
Proposal for a regulation
Article 3 – paragraph 1 – point 30
(30) ‘validation data’ means data used for providing an evaluation of the trained AI system. The process evaluates whether the model is under-fitted or overfitted. The validation dataset should be a separate dataset from the training set for the evaluation to be unbiased. If there is only one available dataset, this is divided into two parts, a training set and a validation set. Both sets should still comply with Article 10(3) to ensure appropriate data governance and management practices.
2022/06/13
Committee: IMCOLIBE
Amendment 1020 #
Proposal for a regulation
Article 3 – paragraph 1 – point 31
(31) ‘testing data’ means data used for providing an independent evaluation of the trained and validated AI system in order to confirm the expected performance of that system before its placing on the market or putting into service. Similar to Article 3(30), the testing dataset should be a separate dataset from the training set and validation set. This set should also comply with Article 10(3) to ensure appropriate data governance and management practices.
2022/06/13
Committee: IMCOLIBE
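The two amended definitions above describe the standard machine-learning practice of keeping training, validation and testing data strictly separate, with a single dataset being divided when no separate sets exist. Purely as an illustration of that practice, and not as text belonging to the amendments, a minimal Python sketch of such a disjoint split; the function name, split fractions and seed are the editor's assumptions:

```python
import random

def split_dataset(records, val_frac=0.15, test_frac=0.15, seed=42):
    """Divide one dataset into disjoint training, validation and testing sets,
    mirroring the separation described in Article 3, points (30) and (31)."""
    rng = random.Random(seed)          # fixed seed so the split is reproducible
    shuffled = list(records)           # copy, so the source data is untouched
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test_set = shuffled[:n_test]                 # independent final evaluation
    val_set = shuffled[n_test:n_test + n_val]    # checks for under-/overfitting
    train_set = shuffled[n_test + n_val:]        # fits the learnable parameters
    return train_set, val_set, test_set

# Example: 1000 records split roughly 70/15/15, with no overlap between sets.
train, val, test = split_dataset(range(1000))
assert not set(train) & set(val)
assert not set(train) & set(test)
assert not set(val) & set(test)
```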
Amendment 1025 #
Proposal for a regulation
Article 3 – paragraph 1 – point 33 a (new)
(33 a) ‘biometrics-based data’ means data resulting from specific technical processing relating to physical, physiological or behavioural signals of a natural person which may or may not allow or confirm the unique identification of a natural person;
2022/06/13
Committee: IMCOLIBE
Amendment 1026 #
Proposal for a regulation
Article 3 – paragraph 1 – point 33 a (new)
(33 a) ‘biometrics-based data’ means data resulting from specific technical processing relating to physical, physiological or behavioural signals of a natural person which may or may not allow or confirm the unique identification of a natural person;
2022/06/13
Committee: IMCOLIBE
Amendment 1031 #
Proposal for a regulation
Article 3 – paragraph 1 – point 34
(34) ‘emotion recognition system’ means an AI system for the purpose of identifying or inferring emotions, thoughts, states of mind (such as ‘deception’, ‘trustworthiness’ or ‘truthfulness’) or intentions of natural persons on the basis of their biometric data or other biometrics-based data;
2022/06/13
Committee: IMCOLIBE
Amendment 1032 #
Proposal for a regulation
Article 3 – paragraph 1 – point 34
(34) ‘emotion recognition system’ means an AI system for the purpose of identifying or inferring emotions, thoughts, states of mind (such as ‘deception’, ‘trustworthiness’ or ‘truthfulness’) or intentions of natural persons on the basis of their biometric data or biometrics-based data;
2022/06/13
Committee: IMCOLIBE
Amendment 1043 #
Proposal for a regulation
Article 3 – paragraph 1 – point 35
(35) ‘biometric categorisation system’ means an AI system that uses biometric or biometrics-based data for the purpose of assigning natural persons to specific categories, such as sex, age, hair colour, eye colour, tattoos, ethnic origin or sexual or political orientation, or inferring their characteristics and attributes;
2022/06/13
Committee: IMCOLIBE
Amendment 1045 #
Proposal for a regulation
Article 3 – paragraph 1 – point 35
(35) ‘biometric categorisation system’ means an AI system for the purpose of assigning natural persons to specific categories, such as sex, age, hair colour, eye colour, tattoos, ethnic origin or sexual or political orientation, or inferring their characteristics and attributes on the basis of their biometric data or biometrics-based data;
2022/06/13
Committee: IMCOLIBE
Amendment 1054 #
Proposal for a regulation
Article 3 – paragraph 1 – point 36
(36) ‘remote biometric identification system’ means an AI system capable of identifying natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database, and without prior knowledge of the user of the AI system whether the person will be present and can be identified;
2022/06/13
Committee: IMCOLIBE
Amendment 1058 #
Proposal for a regulation
Article 3 – paragraph 1 – point 36
(36) ‘remote biometric identification system’ means an AI system capable of identifying natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database or data repository, and without prior knowledge of the user of the AI system whether the person will be present and can be identified;
2022/06/13
Committee: IMCOLIBE
Amendment 1072 #
Proposal for a regulation
Article 3 – paragraph 1 – point 40 – point a a (new)
(a a) any other authority competent for law enforcement, including courts and the judiciary;
2022/06/13
Committee: IMCOLIBE
Amendment 1074 #
Proposal for a regulation
Article 3 – paragraph 1 – point 41
(41) ‘law enforcement’ means i) activities carried out by law enforcement authorities for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security; and ii) activities carried out by any other authority that is part of the criminal justice system, including the judiciary;
2022/06/13
Committee: IMCOLIBE
Amendment 1086 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – point a
(a) the death of a person or serious damage to a person’s physical health, mental health or wellbeing, to property or the environment,
2022/06/13
Committee: IMCOLIBE
Amendment 1090 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – point a a (new)
(a a) a breach of fundamental rights defined by the Charter of Fundamental Rights of the European Union;
2022/06/13
Committee: IMCOLIBE
Amendment 1091 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – point a b (new)
(a b) systematic, mass or serious breach of other rights;
2022/06/13
Committee: IMCOLIBE
Amendment 1092 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – point a c (new)
(a c) damage to democracy, the rule of law or the environment;
2022/06/13
Committee: IMCOLIBE
Amendment 1096 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – point b a (new)
(b a) breach of obligations under Union law intended to protect personal data;
2022/06/13
Committee: IMCOLIBE
Amendment 1105 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 a (new)
(44 a) ‘scientific research and development’ means any scientific development, experimentation, analysis, testing or validation carried out under controlled conditions.
2022/06/13
Committee: IMCOLIBE
Amendment 1106 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 a (new)
(44 a) ‘scientific research and development’ means any scientific development, experimentation, analysis, testing or validation carried out under controlled conditions.
2022/06/13
Committee: IMCOLIBE
Amendment 1108 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 b (new)
(44 b) ‘social scoring’ means the evaluation or categorisation of EU citizens based on their behaviour or (personality) characteristics, where one or more of the following conditions apply: (i) the information is not reasonably relevant for the evaluation or categorisation; (ii) the information is generated or collected in another domain than that of the evaluation or categorisation; (iii) the information is not necessary for or proportionate to the evaluation or categorisation; (iv) the information contains or reveals special categories of personal data.
2022/06/13
Committee: IMCOLIBE
Amendment 1109 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 b (new)
(44 b) ‘social scoring’ means the evaluation or categorisation of persons based on their behaviour or (personality) characteristics, where one or more of the following conditions apply: (i) the information is not reasonably relevant for the evaluation or categorisation; (ii) the information is generated or collected in another domain than that of the evaluation or categorisation; (iii) the information is not necessary for or proportionate to the evaluation or categorisation; (iv) the information contains or reveals special categories of personal data.
2022/06/13
Committee: IMCOLIBE
Amendment 1116 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 c (new)
(44 c) ‘affectee(s)’ mean(s) any natural or legal person or group of natural or legal persons affected by the use or outcomes of, or a combination of, AI system(s);
2022/06/13
Committee: IMCOLIBE
Amendment 1119 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 c (new)
(44 c) ‘child’ means any person under the age of 18;
2022/06/13
Committee: IMCOLIBE
Amendment 1120 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 d (new)
(44 d) ‘artificial intelligence system with indeterminate uses’ means an artificial intelligence system without specific and limited provider-defined purposes;
2022/06/13
Committee: IMCOLIBE
Amendment 1122 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 e (new)
(44 e) 'deep fake' means generated or manipulated image, audio or video content produced by an AI system that appreciably resembles existing persons, objects, places or other entities or events and falsely appears to a person to be authentic or truthful;
2022/06/13
Committee: IMCOLIBE
Amendment 1125 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 f (new)
(44 f) 'redress by design' means technical mechanisms and/or operational procedures, established from the design phase, in order to be able to effectively detect, audit and rectify the consequences and implications of wrong predictions by an AI system and to improve it.
2022/06/13
Committee: IMCOLIBE
Amendment 1159 #
Proposal for a regulation
Article 5 – paragraph 1 – point a
(a) the placing on the market, putting into service or use of an AI system deployed, aimed at, or used for manipulation, deception or distorting a person’s behaviour or exploiting a person’s characteristics, in a manner that causes, or is likely to cause, harm to: (i) that person’s, another person’s or group of persons’ fundamental rights, including their physical or psychological health and safety, and/or (ii) democracy, the rule of law, or society at large;
2022/06/13
Committee: IMCOLIBE
Amendment 1162 #
Proposal for a regulation
Article 5 – paragraph 1 – point a
(a) the placing on the market, putting into service or use of an AI system that deploys manipulative, including subliminal, techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm;
2022/06/13
Committee: IMCOLIBE
Amendment 1175 #
Proposal for a regulation
Article 5 – paragraph 1 – point b
(b) the placing on the market, putting into service or use of an AI system that exploits any of the characteristics of a specific group of persons due to their age, gender, ethnic origin, sexual orientation, disability, or any other biological, physical, physiological, behavioural or social characteristics that results in a detrimental, unfavourable, or discriminatory treatment vis-à-vis persons without those characteristics, or that is used in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical, psychological or material harm;
2022/06/13
Committee: IMCOLIBE
Amendment 1187 #
Proposal for a regulation
Article 5 – paragraph 1 – point c – introductory part
(c) The placing on the market, putting into service or use of AI systems by or on behalf of public authorities or by private actors for the purpose of social scoring.
2022/06/13
Committee: IMCOLIBE
Amendment 1193 #
Proposal for a regulation
Article 5 – paragraph 1 – point c – introductory part
(c) the placing on the market, putting into service or use of AI systems by public authorities or on their behalf for the evaluation or classification of the trustworthiness of natural persons or groups thereof relating to their education, employment, housing, socio-economic situation, health, reliability, social behaviour, location or movements.
2022/06/13
Committee: IMCOLIBE
Amendment 1205 #
Proposal for a regulation
Article 5 – paragraph 1 – point c – point i
(i) detrimental or unfavourable treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1216 #
Proposal for a regulation
Article 5 – paragraph 1 – point c – point ii
(ii) detrimental or unfavourable treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behaviour or its gravity;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1238 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – introductory part
(d) the placing or making available on the market or putting into service of remote biometric identification systems that are or may be used in publicly accessible spaces, as well as online spaces, and the use of remote biometric identification systems in publicly accessible spaces;
2022/06/13
Committee: IMCOLIBE
Amendment 1276 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point iii
(iii) the detection, localisation, identification or prosecution of a perpetrator or suspect of a criminal offence referred to in Article 2(2) of Council Framework Decision 2002/584/JHA62 and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years, as determined by the law of that Member State. _________________ 62 Council Framework Decision 2002/584/JHA of 13 June 2002 on the European arrest warrant and the surrender procedures between Member States (OJ L 190, 18.7.2002, p. 1).deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1278 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point iii
(iii) the detection, localisation, identification or prosecution of a perpetrator or suspect of a criminal offence referred to in Article 2(2) of Council Framework Decision 2002/584/JHA62 and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years, as determined by the law of that Member State. _________________ 62 Council Framework Decision 2002/584/JHA of 13 June 2002 on the European arrest warrant and the surrender procedures between Member States (OJ L 190, 18.7.2002, p. 1).deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1283 #
Proposal for a regulation
Article 5 – paragraph 1 – point d a (new)
(d a) the placing on the market, putting into service or use of:
(i) AI systems intended to be used for the purpose of determining access or assigning natural persons to educational and vocational training institutions;
(ii) AI systems intended to be used for the purpose of assessing students in educational and vocational training institutions;
(iii) AI systems intended to be used for recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, evaluating candidates in the course of interviews or tests;
(iv) AI systems intended to be used for making decisions on promotion and termination of work-related contractual relationships, for task allocation and for monitoring and evaluating performance and behaviour of persons in such relationships;
(v) AI systems intended to be used by public authorities, private entities or on their behalf to evaluate the eligibility of natural persons for public assistance benefits and services, essential private services, as well as to grant, reduce, revoke, or reclaim such benefits and services;
(vi) AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems put into service by small scale providers for their own use;
(vii) AI systems intended to be used by competent authorities for migration, asylum and border control management to assess a risk, including a security risk, a risk of irregular immigration, or a health risk, posed by a natural person who intends to enter or has entered into the territory of a Member State;
(viii) AI systems intended to be used by public authorities, including competent authorities for migration, asylum and border control management, as polygraphs and similar tools or to detect the emotional state of a natural person;
2022/06/13
Committee: IMCOLIBE
Amendment 1285 #
Proposal for a regulation
Article 5 – paragraph 1 – point d a (new)
(d a) AI systems intended to be used by law enforcement authorities for making individual risk assessments of natural persons in order to assess the risk of a natural person for offending or reoffending or the risk for potential victims of criminal offences;
2022/06/13
Committee: IMCOLIBE
Amendment 1293 #
Proposal for a regulation
Article 5 – paragraph 1 – point d b (new)
(d b) the placing on the market, putting into service or use of AI systems to infer emotions of a natural person, except for health or research purposes or other exceptional purposes, and subject to full regulatory review and with full and informed consent at all times.
2022/06/13
Committee: IMCOLIBE
Amendment 1294 #
Proposal for a regulation
Article 5 – paragraph 1 – point d b (new)
(d b) AI systems intended to be used by law enforcement authorities or other competent public authorities as polygraphs and similar tools or to detect the emotional state of a natural person;
2022/06/13
Committee: IMCOLIBE
Amendment 1301 #
Proposal for a regulation
Article 5 – paragraph 1 – point d c (new)
(d c) the use of AI systems by or on behalf of competent authorities in migration, asylum or border control management, to profile an individual or assess a risk, including a security risk, a risk of irregular immigration, or a health risk, posed by a natural person who intends to enter or has entered the territory of a Member State, on the basis of personal or sensitive data, known or predicted, except for the sole purpose of identifying specific care and support needs;
2022/06/13
Committee: IMCOLIBE
Amendment 1302 #
Proposal for a regulation
Article 5 – paragraph 1 – point d c (new)
(d c) AI systems intended to be used by law enforcement authorities for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 or assessing personality traits and characteristics or past criminal behaviour of natural persons, groups, or locations;
2022/06/13
Committee: IMCOLIBE
Amendment 1309 #
Proposal for a regulation
Article 5 – paragraph 1 – point d d (new)
(d d) The creation or expansion of facial recognition or other biometric databases through the untargeted or generalised scraping of biometric data from social media profiles or closed circuit television (CCTV) footage, or equivalent methods;
2022/06/13
Committee: IMCOLIBE
Amendment 1310 #
Proposal for a regulation
Article 5 – paragraph 1 – point d d (new)
(d d) AI systems intended to be used by law enforcement authorities for profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 in the course of detection, investigation or prosecution of criminal offences;
2022/06/13
Committee: IMCOLIBE
Amendment 1312 #
Proposal for a regulation
Article 5 – paragraph 1 – point d d (new)
(d d) The use of private facial recognition or other private biometric databases for the purpose of law enforcement;
2022/06/13
Committee: IMCOLIBE
Amendment 1314 #
Proposal for a regulation
Article 5 – paragraph 1 – point d e (new)
(d e) AI systems intended to be used for crime analytics regarding natural persons, allowing law enforcement authorities to search complex related and unrelated large data sets available in different data sources or in different data formats in order to identify unknown patterns or discover hidden relationships in the data.
2022/06/13
Committee: IMCOLIBE
Amendment 1320 #
Proposal for a regulation
Article 5 – paragraph 1 – point d f (new)
(d f) The use of remote biometric identification in migration management, border surveillance and humanitarian aid.
2022/06/13
Committee: IMCOLIBE
Amendment 1321 #
Proposal for a regulation
Article 5 – paragraph 1 – point d f (new)
(d f) the placing on the market, putting into service or use of ‘emotion recognition systems’;
2022/06/13
Committee: IMCOLIBE
Amendment 1324 #
Proposal for a regulation
Article 5 – paragraph 1 – point d g (new)
(d g) the use of AI systems, by or on behalf of competent authorities in migration, asylum and border control management, to forecast or predict individual or collective movement for the purpose of, or in any way reasonably foreseeably leading to, interdicting, curtailing or preventing migration or border crossings;
2022/06/13
Committee: IMCOLIBE
Amendment 1326 #
Proposal for a regulation
Article 5 – paragraph 1 – point d g (new)
(d g) the use of biometric categorisation systems in publicly-accessible spaces, workplaces (including in hiring processes), and educational settings;
2022/06/13
Committee: IMCOLIBE
Amendment 1327 #
Proposal for a regulation
Article 5 – paragraph 1 – point d h (new)
(d h) the placing on the market, putting into service or use of biometric categorisation systems, or other AI systems, that categorise natural persons according to sensitive or protected attributes or characteristics, or infer those attributes or characteristics, including:
◦ Sex
◦ Gender & gender identity
◦ Race
◦ Ethnic origin
◦ Membership of a national minority
◦ Migration or citizenship status
◦ Political orientation
◦ Social origin or class
◦ Language or dialect
◦ Trade union membership
◦ Sexual orientation
◦ Religion or philosophical orientation
◦ Disability
◦ Or any other grounds on which discrimination is prohibited under Article 21 of the EU Charter of Fundamental Rights as well as under Article 9 of the General Data Protection Regulation
2022/06/13
Committee: IMCOLIBE
Amendment 1330 #
Proposal for a regulation
Article 5 – paragraph 1 – point d h (new)
(d h) The use of private facial recognition or other private biometric databases for the purpose of law enforcement;
2022/06/13
Committee: IMCOLIBE
Amendment 1331 #
Proposal for a regulation
Article 5 – paragraph 1 – point d i (new)
(d i) the use of AI systems by law enforcement authorities, criminal justice authorities, or other public authorities in conjunction with law enforcement and criminal justice authorities, to make predictions, profiles or risk assessments based on data analysis or profiling of natural persons [as referred to in Article 3(4) of Directive (EU) 2016/680], groups or locations, for the purpose of predicting the occurrence or reoccurrence of an actual or potential criminal offence(s) or other criminalised social behaviour.
2022/06/13
Committee: IMCOLIBE
Amendment 1333 #
Proposal for a regulation
Article 5 – paragraph 1 – point d i (new)
(d i) The creation or expansion of facial recognition or other biometric databases through the untargeted or generalised scraping of biometric data from social media profiles or CCTV footage, or equivalent methods;
2022/06/13
Committee: IMCOLIBE
Amendment 1335 #
Proposal for a regulation
Article 5 – paragraph 1 – point d j (new)
(d j) the use of AI systems, by or on behalf of competent authorities in migration, asylum and border control management, to forecast or predict individual or collective movement for the purpose of, or in any way reasonably foreseeably leading to, interdicting, curtailing or preventing migration or border crossings;
2022/06/13
Committee: IMCOLIBE
Amendment 1337 #
Proposal for a regulation
Article 5 – paragraph 1 – point d j (new)
(d j) the placing on the market, putting into service or use of ‘emotion recognition systems’;
2022/06/13
Committee: IMCOLIBE
Amendment 1338 #
Proposal for a regulation
Article 5 – paragraph 1 – point d k (new)
(d k) The use of AI systems by law enforcement and criminal justice authorities to make predictions, profiles or risk assessments for the purpose of predicting crime.
2022/06/13
Committee: IMCOLIBE
Amendment 1339 #
Proposal for a regulation
Article 5 – paragraph 1 – point d k (new)
(d k) the use of biometric categorisation systems in publicly-accessible spaces, workplaces (including in hiring processes), and educational settings;
2022/06/13
Committee: IMCOLIBE
Amendment 1341 #
Proposal for a regulation
Article 5 – paragraph 1 – point d l (new)
(d l) the placing on the market, putting into service or use of:
(i) AI systems intended to be used for the purpose of determining access or assigning natural persons to educational and vocational training institutions;
(ii) AI systems intended to be used for the purpose of assessing students in educational and vocational training institutions;
(iii) AI systems intended to be used for recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, evaluating candidates in the course of interviews or tests;
(iv) AI systems intended to be used for making decisions on promotion and termination of work-related contractual relationships, for task allocation and for monitoring and evaluating performance and behaviour of persons in such relationships;
(v) AI systems intended to be used by public authorities, private entities or on their behalf to evaluate the eligibility of natural persons for public assistance benefits and services, essential private services, as well as to grant, reduce, revoke, or reclaim such benefits and services;
(vi) AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score;
2022/06/13
Committee: IMCOLIBE
Amendment 1374 #
Proposal for a regulation
Article 5 – paragraph 3 – introductory part
3. As regards paragraphs 1, point (d) and 2, each individual use for the purpose of law enforcement of a ‘real-time’ remote biometric identification system in publicly accessible or online spaces shall be subject to a prior authorisation granted by a judicial authority or by an independent administrative authority of the Member State in which the use is to take place, issued upon a reasoned request and in accordance with the detailed rules of national law referred to in paragraph 4. However, in a duly justified situation of urgency, the use of the system may be commenced without an authorisation and the authorisation may be requested only during or after the use.
2022/06/13
Committee: IMCOLIBE
Amendment 1392 #
Proposal for a regulation
Article 5 – paragraph 4
4. A Member State may decide to provide for the possibility to fully or partially authorise the use of ‘real-time’ remote biometric identification systems in publicly accessible or online spaces for the purpose of law enforcement within the limits and under the conditions listed in paragraphs 1, point (d), 2 and 3. That Member State shall lay down in its national law the necessary detailed rules for the request, issuance and exercise of, as well as supervision relating to, the authorisations referred to in paragraph 3. Those rules shall also specify in respect of which of the objectives listed in paragraph 1, point (d), including which of the criminal offences referred to in point (iii) thereof, the competent authorities may be authorised to use those systems for the purpose of law enforcement.
2022/06/13
Committee: IMCOLIBE
Amendment 1403 #
Proposal for a regulation
Article 5 a (new)
Article 5 a
Accessibility Requirements for providers and users of AI systems
1. Providers of AI systems shall ensure that their systems are accessible in accordance with the accessibility requirements set out in Section I, Section II, Section VI, and Section VII of Annex I of Directive (EU) 2019/882 prior to those systems being placed on the market or put into service.
2. Users of AI systems shall use such systems in accordance with the accessibility requirements set out in Section III, Section IV, Section VI, and Section VII of Annex I of Directive (EU) 2019/882.
3. Users of AI systems shall prepare the necessary information in accordance with Annex V of Directive (EU) 2019/882. Without prejudice to Annex VIII of this Regulation, the information shall be made available to the public in an accessible manner for persons with disabilities and be kept for as long as the AI system is in use.
4. Without prejudice to the right of affected persons to information about the use and functioning of AI systems, to transparency obligations for providers and users of AI, and to obligations to ensure consistent and meaningful public transparency under this Regulation, providers and users of AI systems shall ensure that information, forms and measures provided pursuant to this Regulation are made available in a manner that is easy to find, easy to understand, and accessible in accordance with Annex I to Directive 2019/882.
5. Users of AI systems shall ensure that procedures are in place so that the use of AI systems remains in conformity with the applicable accessibility requirements. Changes in the characteristics of the use, changes in applicable accessibility requirements and changes in the harmonised standards or in technical specifications by reference to which use of an AI system is declared to meet the accessibility requirements shall be adequately taken into account by the user.
6. In the case of non-conformity, users of AI systems shall take the corrective measures necessary to conform with the applicable accessibility requirements. When necessary, and at the request of the user, the provider of the AI system in question shall cooperate with the user to bring the use of the AI system into compliance with applicable accessibility requirements.
7. Furthermore, where the use of an AI system is not compliant with applicable accessibility requirements, the user shall immediately inform the competent national authorities of the Member States in which the system is being used, to that effect, giving details, in particular, of the non-compliance and of any corrective measures taken. They shall cooperate with the authority, at the request of that authority, on any action taken to bring the use of the AI system into compliance with applicable accessibility requirements.
8. AI systems, and the use thereof, which are in conformity with harmonised technical standards or parts thereof derived from Directive (EU) 2019/882, the references of which have been published in the Official Journal of the European Union, shall be presumed to be in conformity with the accessibility requirements of this Regulation in so far as those standards or parts thereof cover those requirements.
9. AI systems, and the use thereof, which are in conformity with the technical specifications or parts thereof adopted for Directive (EU) 2019/882 shall be presumed to be in conformity with the accessibility requirements of this Regulation in so far as those technical specifications or parts thereof cover those requirements.
2022/06/13
Committee: IMCOLIBE
Amendment 1404 #
Proposal for a regulation
Article 5 a (new)
Article 5 a
1. The Commission is empowered to adopt delegated acts in accordance with Article 73 to update the list of prohibited artificial intelligence practices referred to in Article 5 by adding AI systems that pose an unacceptable risk of harm to health and safety, or an unacceptable risk of adverse impact on fundamental rights.
2. When assessing for the purposes of paragraph 1 whether an AI system poses an unacceptable risk of harm to health and safety, or an unacceptable risk of adverse impact on fundamental rights, the Commission shall take into account the following non-cumulative criteria:
a) the extent to which the intended purpose of the AI system, or the reasonably foreseeable consequences of its use, conflict with the essence of the rights and freedoms established by the Charter, such that these rights and freedoms would lose their value either for the rights holder or for society as a whole;
b) the extent to which the risks posed by an AI system cannot be sufficiently mitigated, including by the obligations imposed upon high-risk AI systems under this Regulation;
c) the extent to which an AI system violates human dignity;
d) the extent to which the use of an AI system has already caused harm to the health and safety of persons or disproportionate impact on their fundamental rights or has given rise to significant concerns in relation to the materialisation of such harm or disproportionate impact, as demonstrated by reports or documented allegations available to national competent authorities;
e) the potential extent of such harm or such disproportionate impact, in particular in terms of its intensity and its ability to affect a plurality of persons or to affect a particular group of persons disproportionately;
f) the extent to which potentially harmed or adversely impacted persons are dependent on the outcome produced with an AI system, in particular because for practical or legal reasons it is not reasonably possible to opt out from that outcome;
g) the extent to which potentially harmed or adversely impacted persons are in a vulnerable position in relation to the user of an AI system, in particular due to an imbalance of power, knowledge, economic or social circumstances, accessibility barriers or age;
h) the extent to which the outcome produced with an AI system is easily reversible, whereby outcomes having an impact on the health or safety of persons or on their fundamental rights shall not be considered as easily reversible;
i) the extent to which existing Union legislation lacks:
1) effective measures of redress in relation to the risks posed by an AI system, with the exclusion of claims for damages;
2) effective measures to prevent those risks.
2022/06/13
Committee: IMCOLIBE
Amendment 1406 #
Proposal for a regulation
Article 5 b (new)
Article 5 b
Delegated acts to update the list of prohibited AI practices
1. The Commission is empowered to adopt delegated acts in accordance with Article 73 to update the list of prohibited artificial intelligence practices referred to in Article 5 by adding AI systems that pose an unacceptable risk of harm to health and safety, or an unacceptable risk of adverse impact on fundamental rights.
2. When assessing for the purposes of paragraph 1 whether an AI system poses an unacceptable risk of harm to health and safety, or an unacceptable risk of adverse impact on fundamental rights, the Commission shall take into account the following non-cumulative criteria:
a) the extent to which the intended purpose of the AI system, or the reasonably foreseeable consequences of its use, conflict with the essence of the rights and freedoms established by the Charter, such that these rights and freedoms would lose their value either for the rights holder or for society as a whole;
b) the extent to which the risks posed by an AI system cannot be sufficiently mitigated, including by the obligations imposed upon high-risk AI systems under this Regulation;
c) the extent to which an AI system violates human dignity;
d) the extent to which the use of an AI system has already caused harm to the health and safety of persons or disproportionate impact on their fundamental rights or has given rise to significant concerns in relation to the materialisation of such harm or disproportionate impact, as demonstrated by reports or documented allegations available to national competent authorities;
e) the potential extent of such harm or such disproportionate impact, in particular in terms of its intensity and its ability to affect a plurality of persons or to affect a particular group of persons disproportionately;
f) the extent to which potentially harmed or adversely impacted persons are dependent on the outcome produced with an AI system, in particular because for practical or legal reasons it is not reasonably possible to opt out from that outcome;
g) the extent to which potentially harmed or adversely impacted persons are in a vulnerable position in relation to the user of an AI system, in particular due to an imbalance of power, knowledge, economic or social circumstances, accessibility barriers or age;
h) the extent to which the outcome produced with an AI system is easily reversible, whereby outcomes having an impact on the health or safety of persons or on their fundamental rights shall not be considered as easily reversible;
i) the extent to which existing Union legislation lacks:
i) effective measures of redress in relation to the risks posed by an AI system, with the exclusion of claims for damages;
ii) effective measures to prevent those risks.
2022/06/13
Committee: IMCOLIBE
Amendment 1407 #
Proposal for a regulation
Title II a (new)
Horizontal Requirements for all AI systems
Title for a new Article - Accessibility Requirements for providers and users of AI systems
1. Providers of AI systems shall ensure that their systems are accessible in accordance with the accessibility requirements set out in Section I, Section II, Section VI, and Section VII of Annex I of Directive (EU) 2019/882 prior to those systems being placed on the market or put into service.
2. Users of AI systems shall use such systems in accordance with the accessibility requirements set out in Section III, Section IV, Section VI, and Section VII of Annex I of Directive (EU) 2019/882.
3. Users of AI systems shall prepare the necessary information in accordance with Annex V of Directive (EU) 2019/882. Without prejudice to Annex VIII of this Regulation, the information shall be made available to the public in an accessible manner for persons with disabilities and be kept for as long as the AI system is in use.
4. Without prejudice to the right of affected persons to information about the use and functioning of AI systems, to transparency obligations for providers and users of AI, and to obligations to ensure consistent and meaningful public transparency under this Regulation, providers and users of AI systems shall ensure that information, forms and measures provided pursuant to this Regulation are made available in a manner that is easy to find, easy to understand, and accessible in accordance with Annex I to Directive 2019/882.
5. Users of AI systems shall ensure that procedures are in place so that the use of AI systems remains in conformity with the applicable accessibility requirements. Changes in the characteristics of the use, changes in applicable accessibility requirements and changes in the harmonised standards or in technical specifications by reference to which use of an AI system is declared to meet the accessibility requirements shall be adequately taken into account by the user.
6. In the case of non-conformity, users of AI systems shall take the corrective measures necessary to conform with the applicable accessibility requirements. When necessary, and at the request of the user, the provider of the AI system in question shall cooperate with the user to bring the use of the AI system into compliance with applicable accessibility requirements.
7. Furthermore, where the use of an AI system is not compliant with applicable accessibility requirements, the user shall immediately inform the competent national authorities of the Member States in which the system is being used, to that effect, giving details, in particular, of the non-compliance and of any corrective measures taken. They shall cooperate with the authority, at the request of that authority, on any action taken to bring the use of the AI system into compliance with applicable accessibility requirements.
8. AI systems, and the use thereof, which are in conformity with harmonised technical standards or parts thereof derived from Directive (EU) 2019/882, the references of which have been published in the Official Journal of the European Union, shall be presumed to be in conformity with the accessibility requirements of this Regulation in so far as those standards or parts thereof cover those requirements.
9. AI systems, and the use thereof, which are in conformity with the technical specifications or parts thereof adopted for Directive (EU) 2019/882 shall be presumed to be in conformity with the accessibility requirements of this Regulation in so far as those technical specifications or parts thereof cover those requirements.
2022/06/13
Committee: IMCOLIBE
Amendment 1416 #
Proposal for a regulation
Article 6 – paragraph 1 – introductory part
1. Irrespective of whether an AI system is placed on the market or put into service independently from the products referred to in points (a) and (b), that AI system shall be considered high-risk where one of the following conditions is fulfilled:
2022/06/13
Committee: IMCOLIBE
Amendment 1421 #
Proposal for a regulation
Article 6 – paragraph 1 – point a
(a) the AI system is intended to be used as a safety component of a product, or is itself a product, the failure or malfunctioning of which endangers the health, safety or fundamental rights of persons;
2022/06/13
Committee: IMCOLIBE
Amendment 1430 #
Proposal for a regulation
Article 6 – paragraph 1 – point b
(b) the product whose safety component as meant under (a) is the AI system, or the AI system itself as a product, is required to undergo a third- party conformity assessment with a view to the placing on the market or putting into service or use of that product pursuant to the Union harmonisation legislation listed in Annex II.
2022/06/13
Committee: IMCOLIBE
Amendment 1432 #
Proposal for a regulation
Article 6 – paragraph 1 – point b a (new)
(b a) the AI system is used by a public authority.
2022/06/13
Committee: IMCOLIBE
Amendment 1434 #
Proposal for a regulation
Article 6 – paragraph 2
2. In addition to the high-risk AI systems referred to in paragraph 1, AI systems identified as posing a risk to fundamental human rights as defined in the EU Charter of Fundamental Rights, in relation to a specific intended use, shall also be considered high-risk. Such risk is to be determined by completion of a Human Rights Impact Assessment by the user of the AI system in relation to the specific use intended for the AI system, with records of such assessment retained for regulatory inspection. The provider shall apply a precautionary principle and, in case of uncertainty over the AI system's classification, shall consider the AI system high-risk.
2022/06/13
Committee: IMCOLIBE
Amendment 1448 #
Proposal for a regulation
Article 6 – paragraph 2 a (new)
2 a. In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall also be considered high-risk.
2022/06/13
Committee: IMCOLIBE
Amendment 1450 #
Proposal for a regulation
Article 6 – paragraph 2 b (new)
2 b. In addition to the high-risk AI systems referred to in paragraph 1, AI systems that have over 20 million EU citizens across the EU or 50% of any given Member State’s population as active monthly users, or whose users have cumulatively over 20 million customers or beneficiaries in the EU affected by them, shall be considered high-risk, unless these are placed onto the market.
2022/06/13
Committee: IMCOLIBE
Amendment 1453 #
Proposal for a regulation
Article 6 – paragraph 2 c (new)
2 c. In addition to the high-risk AI systems referred to in paragraph 1, AI systems affecting employees in the employment relationship or in matters of training or further education shall be considered high-risk.
2022/06/13
Committee: IMCOLIBE
Amendment 1454 #
Proposal for a regulation
Article 6 – paragraph 2 d (new)
2 d. In addition to the high-risk AI systems referred to in paragraph 1, AI systems likely to interact with children shall be considered high-risk.
2022/06/13
Committee: IMCOLIBE
Amendment 1455 #
Proposal for a regulation
Article 6 – paragraph 2 e (new)
2 e. In addition to the high-risk AI systems referred to in paragraph 1, an artificial intelligence system with indeterminate uses shall also be considered high-risk.
2022/06/13
Committee: IMCOLIBE
Amendment 1461 #
Proposal for a regulation
Article 7 – paragraph 1
1. The Commission is empowered to adopt delegated acts in accordance with Article 73 to update the list in Annex III by adding high-risk AI systems where the following condition is fulfilled: the AI systems pose a risk of harm to health and safety, or a risk of adverse impact on fundamental rights, that is, in respect of its severity or probability of occurrence, equivalent to or greater than the risk of harm or of adverse impact associated with the high-risk AI systems already referred to in Annex III. Where an AI system is not intended to be used in any of the areas listed in points 1 to 8 of Annex III, the Commission is empowered to update the list of areas in Annex III by including new areas or extending the scope of existing areas.
2022/06/13
Committee: IMCOLIBE
Amendment 1469 #
Proposal for a regulation
Article 7 – paragraph 1 – introductory part
1. The Commission is empowered to adopt delegated acts in accordance with Article 73 to update the list in Annex III by adding high-risk AI systems.
2022/06/13
Committee: IMCOLIBE
Amendment 1470 #
Proposal for a regulation
Article 7 – paragraph 1 – introductory part
1. The Commission is empowered to adopt delegated acts in accordance with Article 73 to update the list in Annex III by adding high-risk AI systems where either of the following conditions is fulfilled:
2022/06/13
Committee: IMCOLIBE
Amendment 1472 #
Proposal for a regulation
Article 7 – paragraph 1 – point a
(a) the AI systems are intended to be used in any of the areas listed in points 1 to 8 of Annex III;
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1478 #
Proposal for a regulation
Article 7 – paragraph 1 – point b
(b) the AI systems pose a risk of harm to the health and safety, or a risk of adverse impact on fundamental rights, that is, in respect of its severity and probability of occurrence, equivalent to or greater than the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III.
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1482 #
Proposal for a regulation
Article 7 – paragraph 1 – point b
(b) the AI systems pose a risk of economic harm, negative societal impacts or harm to the environment, health and safety, or a risk of adverse impact on fundamental rights, democracy and the rule of law, that is, in respect of its severity and probability of occurrence, equivalent to or greater than the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III.
2022/06/13
Committee: IMCOLIBE
Amendment 1485 #
Proposal for a regulation
Article 7 – paragraph 1 – point b a (new)
(b a) the AI systems pose a risk of harm to occupational health and safety, including psychosocial risks.
2022/06/13
Committee: IMCOLIBE
Amendment 1490 #
Proposal for a regulation
Article 7 – paragraph 2 – introductory part
2. When assessing for the purposes of paragraph 1 whether an AI system poses a risk of harm to health and safety or a risk of adverse impact on fundamental rights or on the environment, democracy and the rule of law that is equivalent to or greater than the risk of harm posed by the high-risk AI systems already referred to in Annex III, the Commission shall consult social partners and civil society and take into account, including but not limited to, the following non-cumulative criteria:
2022/06/13
Committee: IMCOLIBE
Amendment 1494 #
Proposal for a regulation
Article 7 – paragraph 2 – point a
(a) the intended purpose of the AI system, or the reasonably foreseeable consequences of its use;
2022/06/13
Committee: IMCOLIBE
Amendment 1510 #
Proposal for a regulation
Article 7 – paragraph 2 – point c
(c) the extent to which the use of an AI system has already caused harm to health and safety or adverse impact on the fundamental rights, democracy, the rule of law and the environment, or has given rise to significant concerns in relation to the materialisation of such harm or adverse impact, as demonstrated by available reports or documented allegations submitted to national competent authorities;
2022/06/13
Committee: IMCOLIBE
Amendment 1513 #
Proposal for a regulation
Article 7 – paragraph 2 – point d
(d) the potential extent of such harm or such adverse impact, in particular in terms of its intensity and its ability to affect a plurality of persons or the environment, or to affect a particular group of persons disproportionately;
2022/06/13
Committee: IMCOLIBE
Amendment 1528 #
Proposal for a regulation
Article 7 – paragraph 2 – point g
(g) the extent to which the outcome produced with an AI system is not easily reversible, whereby outcomes having an impact on the health or safety of persons or on their fundamental rights shall not be considered as easily reversible;
2022/06/13
Committee: IMCOLIBE
Amendment 1540 #
Proposal for a regulation
Article 7 – paragraph 2 – point h – introductory part
(h) the extent to which existing Union legislation lacks:
2022/06/13
Committee: IMCOLIBE
Amendment 1541 #
Proposal for a regulation
Article 7 – paragraph 2 – point h – point i
(i) effective measures of redress, the availability of redress-by-design mechanisms and procedures in relation to the risks posed by an AI system, including claims for material and non-material damages;
2022/06/13
Committee: IMCOLIBE
Amendment 1543 #
Proposal for a regulation
Article 7 – paragraph 2 – point h a (new)
(h a) The general capabilities and functionalities of the AI system independent of its foreseeable use;
2022/06/13
Committee: IMCOLIBE
Amendment 1544 #
Proposal for a regulation
Article 7 – paragraph 2 – point h b (new)
(h b) The extent of the availability and use of demonstrated technical solutions and mechanisms for the control, reliability and corrigibility of the AI system;
2022/06/13
Committee: IMCOLIBE
Amendment 1545 #
Proposal for a regulation
Article 7 – paragraph 2 – point h c (new)
(h c) The potential misuse and malicious use of the AI system and of the technology underpinning it.
2022/06/13
Committee: IMCOLIBE
Amendment 1564 #
Proposal for a regulation
Article 8 – paragraph 2
2. The foreseeable uses, and the foreseeable misuses of AI systems with indeterminate uses, of the high-risk AI system and the risk management system referred to in Article 9 shall be taken into account when ensuring compliance with those requirements.
2022/06/13
Committee: IMCOLIBE
Amendment 1568 #
Proposal for a regulation
Article 8 – paragraph 2
2. The intended purpose or reasonably foreseeable use of the high- risk AI system and the risk management system referred to in Article 9 shall be taken into account when ensuring compliance with those requirements.
2022/06/13
Committee: IMCOLIBE
Amendment 1577 #
Proposal for a regulation
Article 9 – paragraph 1
1. A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems, throughout the entire lifecycle of the AI system.
2022/06/13
Committee: IMCOLIBE
Amendment 1580 #
Proposal for a regulation
Article 9 – paragraph 2 – introductory part
2. The risk management system shall consist of a continuous iterative process run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating, including when the high-risk AI system is subject to significant changes in its design or purpose. It shall comprise the following steps:
2022/06/13
Committee: IMCOLIBE
Amendment 1582 #
Proposal for a regulation
Article 9 – paragraph 2 – point a
(a) identification and analysis of the known and the reasonably foreseeable risks that the high-risk AI system, and AI systems with indeterminate uses, can pose to:
(i) the health or safety of natural persons;
(ii) the legal rights or legal status of natural persons;
(iii) the fundamental rights of natural persons;
(iv) the equal access to services and opportunities of natural persons;
(v) the Union values enshrined in Article 2 TEU;
(vi) society at large and the environment.
2022/06/13
Committee: IMCOLIBE
Amendment 1593 #
Proposal for a regulation
Article 9 – paragraph 2 – point b
(b) estimation and evaluation of the risks that may emerge when the high-risk AI system is used in accordance with its intended purpose or reasonably foreseeable use and under conditions of reasonably foreseeable misuse;
2022/06/13
Committee: IMCOLIBE
Amendment 1612 #
Proposal for a regulation
Article 9 – paragraph 4 – introductory part
4. The risk management measures referred to in paragraph 2, point (d) shall be such that any residual risk associated with each hazard as well as the overall residual risk of the high-risk AI systems is judged acceptable, provided that the high- risk AI system is used in accordance with its intended purpose or reasonably foreseeable use or under conditions of reasonably foreseeable misuse. Those residual risks shall be communicated to the user.
2022/06/13
Committee: IMCOLIBE
Amendment 1619 #
Proposal for a regulation
Article 9 – paragraph 4 – subparagraph 1 – point a
(a) elimination or reduction of risks as far as possible through adequate design and development involving relevant domain and other experts and internal and external stakeholders, including but not limited to representative bodies and the social partners;
2022/06/13
Committee: IMCOLIBE
Amendment 1644 #
Proposal for a regulation
Article 9 – paragraph 5
5. High-risk AI systems shall be tested for the purposes of identifying the most appropriate risk management measures. Testing shall ensure that high-risk AI systems perform consistently for their intended purpose or reasonably foreseeable use and they are in compliance with the requirements set out in this Chapter.
2022/06/13
Committee: IMCOLIBE
Amendment 1681 #
Proposal for a regulation
Article 10 – paragraph 2 – introductory part
2. Training, validation and testing data sets, as well as data that is collected, fed into, or used by the AI system after deployment of the system and throughout its lifecycle, shall be subject to appropriate data governance and management practices. Those practices shall concern in particular,
2022/06/13
Committee: IMCOLIBE
Amendment 1695 #
Proposal for a regulation
Article 10 – paragraph 2 – point d
(d) the formulation of relevant, justified and reasonable assumptions, notably with respect to the information that the data are supposed to measure and represent;
2022/06/13
Committee: IMCOLIBE
Amendment 1737 #
Proposal for a regulation
Article 10 – paragraph 5
5. To the extent that it is strictly necessary for the purposes of ensuring bias monitoring, detection and correction in relation to the high-risk AI systems, the providers of such systems may process special categories of personal data referred to in Article 9(1) of Regulation (EU) 2016/679, Article 10 of Directive (EU) 2016/680 and Article 10(1) of Regulation (EU) 2018/1725, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons, including technical limitations on the re-use and use of state-of-the-art security and privacy-preserving measures, such as pseudonymisation, or encryption where anonymisation may significantly affect the purpose pursued.
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1739 #
Proposal for a regulation
Article 10 – paragraph 5
5. To the extent that it is strictly necessary for the purposes of ensuring bias monitoring, detection and correction in relation to the high-risk AI systems, the providers of such systems may process special categories of personal data referred to in Article 9(1) of Regulation (EU) 2016/679, Article 10 of Directive (EU) 2016/680 and Article 10(1) of Regulation (EU) 2018/1725, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons, including technical limitations on the re-use and use of state-of-the-art security and privacy-preserving measures, such as pseudonymisation, or encryption where anonymisation may significantly affect the purpose pursued. This should also guarantee the explainability of AI-driven recommendations or decisions.
2022/06/13
Committee: IMCOLIBE
Amendment 1770 #
Proposal for a regulation
Article 12 – paragraph 1
1. All AI systems shall be designed and developed with capabilities enabling the automatic recording of events (‘logs’) while the AI system is operating. Those logging capabilities shall conform to recognised standards or common specifications.
2022/06/13
Committee: IMCOLIBE
Amendment 1781 #
Proposal for a regulation
Article 12 – paragraph 4 – introductory part
4. For high-risk AI systems referred to in paragraph 1, point (a) of Annex III, the logging capabilities shall provide, at a minimum:
2022/06/13
Committee: IMCOLIBE
Amendment 1784 #
Proposal for a regulation
Article 12 – paragraph 4 a (new)
4 a. For high-risk self-learning AI systems, the logging of self-learning shall be maintained. The logging shall provide, at a minimum:
(a) the input data used for self-learning;
(b) the algorithms used for the interpretation of the input data;
(c) the results of self-learning.
2022/06/13
Committee: IMCOLIBE
Amendment 1785 #
Proposal for a regulation
Article 12 – paragraph 4 b (new)
4 b. Where a decision and/or a proposal for a decision is the outcome of an AI system, the logging shall cover information sufficiently comprehensive for further human manual review of the decision or proposal with no need to refer to the AI system itself. The logging shall provide, at a minimum:
(a) the input data;
(b) the reference database, if present;
(c) the algorithms that could have been used;
(d) the algorithms that were actually used;
(e) the output data (decision and/or proposal);
(f) a comprehensive account of how the input data resulted in the output data.
2022/06/13
Committee: IMCOLIBE
Amendment 1786 #
Proposal for a regulation
Article 12 – paragraph 4 c (new)
4 c. For all high-risk AI systems, including those mentioned in paragraphs 4–6 above, the logging shall provide, at a minimum:
(a) log-in information (user, date, time, authentication type);
(b) the input data;
(c) the output data.
2022/06/13
Committee: IMCOLIBE
Amendment 1787 #
Proposal for a regulation
Article 12 – paragraph 4 d (new)
4 d. The Commission is empowered to adopt delegated acts in accordance with Article 73 to define further minimum logging requirements for AI systems or certain types thereof.
2022/06/13
Committee: IMCOLIBE
Amendment 1810 #
Proposal for a regulation
Article 13 a (new)
Article 13 a
Transparency for affectees of AI systems
1) High-risk AI systems shall be designed, developed and used in such a way that an affectee can obtain an explanation from the developer and user for any decision taken or supported by a high-risk AI system that significantly affects the affectee;
2) Providers and users of high-risk AI systems shall provide access to the person or persons designated with the exercise of ‘human oversight’ as described in Art. 14 to discuss and to clarify the facts, circumstances and reasons having led to the decision by the AI system;
3) Providers and users of high-risk AI systems shall provide the affectee with a written statement of the reasons for any decision taken or supported by a high-risk AI system;
4) Where the affectee is not satisfied with the explanation or the written statement of reasons obtained, or considers that the decision referred to in paragraph (1) jeopardises their health, safety or fundamental rights, the provider or user, as the case may be, shall review that decision, upon reasonable request by the affectee. The provider or user, as the case may be, shall respond to such request by providing the affectee with a substantiated reply without undue delay and in any event within one week of receipt of the request.
2022/06/13
Committee: IMCOLIBE
Amendment 1894 #
Proposal for a regulation
Article 16 – paragraph 1 – point e
(e) ensure that the high-risk AI system undergoes the relevant independent third party conformity assessment procedure, prior to its placing on the market or putting into service;
2022/06/13
Committee: IMCOLIBE
Amendment 1896 #
Proposal for a regulation
Article 16 – paragraph 1 – point e
(e) ensure that the high-risk AI system undergoes the relevant conformity assessment procedure, prior to its placing on the market or putting into service or use;
2022/06/13
Committee: IMCOLIBE
Amendment 1905 #
Proposal for a regulation
Article 16 – paragraph 1 – point j a (new)
(j a) refrain from placing on the market or putting into service a high-risk AI system that:
(i) is not in conformity with the requirements set out in Chapter 2 of this Title; or
(ii) poses a risk of harm to health, safety or fundamental rights despite its conformity with the requirements set out in Chapter 2 of this Title.
2022/06/13
Committee: IMCOLIBE
Amendment 1907 #
Proposal for a regulation
Article 16 – paragraph 1 – point j b (new)
(j b) ensure that the individual to whom human oversight is assigned shall either be fully independent from the provider or user, or be adequately protected against negative consequences for their position within the organisation resulting from or related to their exercise of human oversight.
2022/06/13
Committee: IMCOLIBE
Amendment 1913 #
Proposal for a regulation
Article 17 – paragraph 1 – introductory part
1. Providers of high-risk AI systems shall put a quality management system in place, certified by an independent third party, that ensures compliance with this Regulation. That system shall be documented in a systematic and orderly manner in the form of written policies, procedures and instructions, and shall include at least the following aspects:
2022/06/13
Committee: IMCOLIBE
Amendment 1927 #
Proposal for a regulation
Article 17 – paragraph 1 – point f
(f) systems and procedures for data management, including data collection, data analysis, data labelling, data storage, data filtration, data mining, data aggregation, data retention and any other operation regarding the data that is performed before and for the purposes of the placing on the market or putting into service or use of high-risk AI systems;
2022/06/13
Committee: IMCOLIBE
Amendment 1943 #
Proposal for a regulation
Article 17 – paragraph 3 a (new)
3 a. High-risk AI systems shall make use of high-quality models that use relevant, justified and reasonable parameters and features and optimise for justified goals.
2022/06/13
Committee: IMCOLIBE
Amendment 1944 #
Proposal for a regulation
Article 17 – paragraph 3 b (new)
3 b. High-risk AI systems shall only be used in a different domain or environment where they are generalisable to such domain or environment.
2022/06/13
Committee: IMCOLIBE
Amendment 1949 #
Proposal for a regulation
Article 19 – title
Independent third party conformity assessment
2022/06/13
Committee: IMCOLIBE
Amendment 1950 #
Proposal for a regulation
Article 19 – paragraph 1
1. Providers of high-risk AI systems shall ensure that their systems undergo an independent third party conformity assessment procedure in accordance with Article 43 and Annex VII, prior to their placing on the market or putting into service. Where the compliance of the AI systems with the requirements set out in Chapter 2 of this Title has been demonstrated following that conformity assessment, the providers shall draw up an EU declaration of conformity in accordance with Article 48 and affix the CE marking of conformity in accordance with Article 49. The conformity assessment shall be publicly available.
2022/06/13
Committee: IMCOLIBE
Amendment 1952 #
Proposal for a regulation
Article 19 – paragraph 1
1. Providers of high-risk AI systems shall ensure that their systems undergo the relevant conformity assessment procedure in accordance with Article 43, prior to their placing on the market or putting into service or use. Where the compliance of the AI systems with the requirements set out in Chapter 2 of this Title has been demonstrated following that conformity assessment, the providers shall draw up an EU declaration of conformity in accordance with Article 48 and affix the CE marking of conformity in accordance with Article 49.
2022/06/13
Committee: IMCOLIBE
Amendment 1955 #
Proposal for a regulation
Article 20 – paragraph 1
1. Providers of high-risk AI systems shall keep the logs automatically generated by their high-risk AI systems, to the extent such logs are under their control by virtue of a contractual arrangement with the user or otherwise by law. The logs shall be kept for a period that is appropriate in the light of the intended purpose or reasonably foreseeable use of high-risk AI system and applicable legal obligations under Union or national law.
2022/06/13
Committee: IMCOLIBE
Amendment 2046 #
Proposal for a regulation
Article 29 – paragraph 2
2. The obligations in paragraph 1 are without prejudice to other user obligations under Union or national law and to the user’s discretion in organising its own resources and activities for the purpose of implementing the human oversight measures indicated by the provider. This Regulation does not conflict with the scope of Art. 153 TFEU, which sets minimum requirements for Member States that may be exceeded.
2022/06/13
Committee: IMCOLIBE
Amendment 2057 #
Proposal for a regulation
Article 29 – paragraph 5 – introductory part
5. Users of high-risk AI systems shall keep the logs automatically generated by that high-risk AI system, to the extent such logs are under their control. The logs shall be kept for a period that is appropriate in the light of the intended purpose or reasonably foreseeable use of the high-risk AI system and applicable legal obligations under Union or national law.
2022/06/13
Committee: IMCOLIBE
Amendment 2071 #
Proposal for a regulation
Article 29 – paragraph 6 a (new)
6 a. Users of high-risk AI systems shall refrain from placing on the market or putting into service a high-risk AI system that:
(i) is not in conformity with the requirements set out in Chapter 2 of this Title; or
(ii) poses a risk of harm to health, safety or fundamental rights despite its conformity with the requirements set out in Chapter 2 of this Title.
2022/06/13
Committee: IMCOLIBE
Amendment 2073 #
Proposal for a regulation
Article 29 – paragraph 6 a (new)
6 a. Users of high-risk AI systems who modify or extend the purpose for which the conformity of the AI system was originally assessed shall establish and document a post-market monitoring system (Art. 61) and must undergo a new conformity assessment (Art. 43) involving a notified body.
2022/06/13
Committee: IMCOLIBE
Amendment 2083 #
Proposal for a regulation
Article 29 a (new)
Article 29 a
Obligation on users to define affected persons
1. Before putting into use a high-risk AI system as defined in Article 6(2), the user shall define categories of natural persons and groups likely to be affected by the use of the system.
2022/06/13
Committee: IMCOLIBE
Amendment 2084 #
Proposal for a regulation
Article 29 a (new)
Article 29 a
A fiduciary duty for providers and users of high-risk AI systems
Providers and users of high-risk AI systems have a fiduciary duty to act in the interest of the affectees.
2022/06/13
Committee: IMCOLIBE
Amendment 2085 #
Proposal for a regulation
Article 29 b (new)
Article 29 b
Fundamental rights impact assessments for high-risk AI systems
1. Users of high-risk AI systems as defined in Article 6(2) shall conduct an assessment of the systems’ impact in the context of use before putting the system into use. This assessment shall include, but is not limited to, the following:
a. a clear outline of the intended purpose for which the system will be used;
b. a clear outline of the intended geographic and temporal scope of the system’s use;
c. verification of the legality of the system in accordance with Union and national law, fundamental rights law, Union accessibility legislation, and the extent to which the system is in compliance with this Regulation;
d. the likely impact on fundamental rights of the high-risk AI system, including any indirect impacts or consequences of the system’s use;
e. any specific risk of harm likely to impact marginalised persons or those groups at risk of discrimination, or to increase existing societal inequalities;
f. the foreseeable impact of the use of the system on the environment, including but not limited to energy consumption;
g. any other negative impact on the public interest; and
h. clear steps as to how the harms identified will be mitigated, and how effective this mitigation is likely to be.
2. If adequate steps to mitigate the risks outlined in the course of the assessment in paragraph 1 cannot be identified, the system shall not be put into use. Market surveillance authorities, pursuant to their capacity under Articles 65 and 67, may take this information into account when investigating systems which present a risk at national level.
3. The obligation outlined under paragraph 1 applies for each new deployment of the high-risk AI system.
4. In the course of the impact assessment, the user shall notify relevant national authorities and all relevant stakeholders, including but not limited to equality bodies, consumer protection agencies, social partners and data protection agencies, with a view to receiving input into the impact assessment. The user must allow a period of six weeks for bodies to respond.
5. Where, following the impact assessment process, the user decides to put the high-risk AI system into use, the user shall be required to publish the results of the impact assessment as part of the registration of use pursuant to their obligation under Article 51(2).
6. Where the user is already required to carry out a data protection impact assessment under Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680, the impact assessment outlined in paragraph 1 shall be conducted in conjunction with the data protection impact assessment and be published as an addendum.
7. Users of high-risk AI systems shall use the information provided under Article 13 to comply with their obligation under paragraph 1.
8. Where the user, pursuant to their obligation to define affected categories of persons under Article 29 a, finds that use of a high-risk system poses a particular risk to a specific group of natural persons, the user has the obligation to notify established representatives or interest groups acting on behalf of those persons before putting the system into use, with a view to receiving input into the impact assessment.
9. The obligations on users in paragraph 1 are without prejudice to the obligations on users of all high-risk AI systems as outlined in Article 29.
2022/06/13
Committee: IMCOLIBE
Amendment 2132 #
Proposal for a regulation
Article 41 – paragraph 1
1. Where harmonised standards referred to in Article 40 do not exist or where the Commission considers that the relevant harmonised standards are insufficient or that there is a need to address specific safety or fundamental right concerns, the Commission may, by means of implementing acts, adopt common specifications in respect of the requirements set out in Chapter 2 of this Title. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 74(2). The Commission shall adopt common specifications setting out how risk management systems should give specific consideration to interaction with or impact on children.
2022/06/13
Committee: IMCOLIBE
Amendment 2151 #
Proposal for a regulation
Article 42
Article 42
Presumption of conformity with certain requirements
1. Taking into account their intended purpose, high-risk AI systems that have been trained and tested on data concerning the specific geographical, behavioural and functional setting within which they are intended to be used shall be presumed to be in compliance with the requirement set out in Article 10(4).
2. High-risk AI systems that have been certified or for which a statement of conformity has been issued under a cybersecurity scheme pursuant to Regulation (EU) 2019/881 of the European Parliament and of the Council63 and the references of which have been published in the Official Journal of the European Union shall be presumed to be in compliance with the cybersecurity requirements set out in Article 15 of this Regulation in so far as the cybersecurity certificate or statement of conformity or parts thereof cover those requirements.
_________________
63 Regulation (EU) 2019/881 of the European Parliament and of the Council of 17 April 2019 on ENISA (the European Union Agency for Cybersecurity) and on information and communications technology cybersecurity certification and repealing Regulation (EU) No 526/2013 (Cybersecurity Act) (OJ L 151, 7.6.2019, p. 1).
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 2153 #
Proposal for a regulation
Article 42 – paragraph 1
1. Taking into account their foreseeable uses, high-risk AI systems that have been trained and tested on data concerning the specific geographical, behavioural and functional setting within which they are intended to be used shall be presumed to be in compliance with the requirement set out in Article 10(4).
2022/06/13
Committee: IMCOLIBE
Amendment 2157 #
Proposal for a regulation
Article 43 – paragraph 1 – introductory part
1. For high-risk AI systems listed in points 1, 3 and 4 of Annex III, where, in demonstrating the compliance of a high-risk AI system with the requirements set out in Chapter 2 of this Title, the provider has applied harmonised standards referred to in Article 40, or, where applicable, common specifications referred to in Article 41, the provider shall follow the conformity assessment procedure based on assessment of the quality management system and assessment of the technical documentation, with the involvement of a notified body, referred to in Annex VII.
2022/06/13
Committee: IMCOLIBE
Amendment 2163 #
Proposal for a regulation
Article 43 – paragraph 1 – point a
(a) the conformity assessment procedure based on internal control referred to in Annex VI;
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 2170 #
Proposal for a regulation
Article 43 – paragraph 1 – point b
(b) the conformity assessment procedure based on assessment of the quality management system and assessment of the technical documentation, documentation of analysis and achievement of the tests of strict necessity, proportionality and legality of the system, as well as any associated database or data repository on which it relies; with the involvement of a notified body, referred to in Annex VII, and with the involvement of the relevant national data protection authority.
2022/06/13
Committee: IMCOLIBE
Amendment 2180 #
Proposal for a regulation
Article 43 – paragraph 2
2. For high-risk AI systems referred to in points 2 to 8 of Annex III, providers shall follow the conformity assessment procedure based on internal control as referred to in Annex VI, which does not provide for the involvement of a notified body. For high-risk AI systems referred to in point 5(b) of Annex III, placed on the market or put into service by credit institutions regulated by Directive 2013/36/EU, the conformity assessment shall be carried out as part of the procedure referred to in Articles 97 to 101 of that Directive.
2022/06/13
Committee: IMCOLIBE
Amendment 2194 #
Proposal for a regulation
Article 43 – paragraph 4 – subparagraph 1
For high-risk AI systems that continue to learn after being placed on the market or put into service, changes to the high-risk AI system and its performance that have been pre-determined by the provider at the moment of the initial conformity assessment and are part of the information contained in the technical documentation referred to in point 2(f) of Annex IV, shall not constitute a substantial modification. A new conformity assessment is always required whenever safety-related limits of continuing-learning high-risk AI systems may be exceeded or may have an impact on health or safety.
2022/06/13
Committee: IMCOLIBE
Amendment 2214 #
Proposal for a regulation
Article 47
[...]
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 2246 #
Proposal for a regulation
Article 51 – paragraph 1
Before placing on the market or putting into service an AI system, the provider or, where applicable, the authorised representative shall register that system in the EU database referred to in Article 60.
2022/06/13
Committee: IMCOLIBE
Amendment 2251 #
Proposal for a regulation
Article 51 – paragraph 1 a (new)
Before using a high-risk AI system referred to in Article 6(2), the user or, where applicable, the authorised representative, shall register the uses of that system in the EU database referred to in Article 60. A new registration entry must be completed by the user for each new use of a high-risk AI system.
2022/06/13
Committee: IMCOLIBE
Amendment 2256 #
Proposal for a regulation
Article 51 – paragraph 1 b (new)
Before using an AI system, public authorities shall register the uses of that system in the EU database referred to in Article 60. A new registration entry must be completed by the user for each new use of an AI system.
2022/06/13
Committee: IMCOLIBE
Amendment 2283 #
Proposal for a regulation
Article 52 a (new)
Article 52 a
1. The Commission is empowered to adopt delegated acts in accordance with Article 73 to update the list of AI systems subject to transparency obligations under Article 52 by adding AI systems that affect individuals or to which they are subject, where the AI systems pose a risk of manipulation, harm to health and safety, or a risk of adverse impact on fundamental rights, that is, in respect of its severity or probability of occurrence, equivalent to or greater than the risk of harm or of adverse impact posed by the systems already referred to in Article 52.
2. When assessing for the purposes of paragraph 1 whether an AI system poses a risk that is equivalent to or greater than the risk of harm posed by the AI systems already referred to in Article 52, the Commission shall take into account the following non-cumulative criteria:
a. the intended purpose of the AI system, or the reasonably foreseeable consequences of its use;
b. the extent to which an AI system poses a risk of manipulation, or of adversely impacting one or more fundamental rights in a manner which could be to some degree mitigated by additional transparency measures;
c. the extent to which the use of an AI system impairs natural persons’ agency or autonomy of choice, or may lead to or has already led to developing addictive behaviour;
d. the extent to which the use of an AI system may lead to or has already led to price discrimination or other forms of economic harm;
e. the extent to which the use of an AI system may lead to or has already led to negative societal effects such as increased polarisation of opinions, insufficient exposure to objective sources of information and amplification of illegal online content;
f. the extent to which an AI system has been used or is likely to be used;
g. the extent to which the use of an AI system has already been shown to pose a risk in the sense of points b) to e) above, has caused harm to health and safety or disproportionate impact on fundamental rights or has given rise to significant concerns in relation to the materialisation of such harm or disproportionate impact, as demonstrated by reports or documented allegations available to national competent authorities;
h. the potential extent of such harm or such disproportionate impact, in particular in terms of its intensity and its ability to affect a plurality of persons or to affect a particular group of persons disproportionately;
i. the extent to which potentially harmed or adversely impacted persons are dependent on the outcome produced with an AI system, in particular because for practical or legal reasons it is not reasonably possible to opt out from that outcome or from the functionality of the service which relies on the AI system;
j. the extent to which potentially harmed or adversely impacted persons are in a vulnerable position in relation to the user of an AI system, in particular due to an imbalance of power, knowledge, economic or social circumstances, accessibility barriers, or age;
k. the extent to which the outcome produced with an AI system is not easily reversible, whereby outcomes having an impact on the health or safety of persons shall not be considered as easily reversible;
l. the extent to which existing Union legislation lacks:
i. effective measures of redress in relation to the risks posed by an AI system, with the exclusion of claims for damages;
ii. effective measures to prevent or substantially minimise those risks.
2022/06/13
Committee: IMCOLIBE
Amendment 2290 #
Proposal for a regulation
Article 53 – paragraph 1
1. AI regulatory sandboxes established by the Commission in collaboration with one or more Member States competent authorities or the European Data Protection Supervisor are considered high risk and shall provide a controlled environment that facilitates the development, testing and validation of innovative AI systems for a limited time before their placement on the market or putting into service pursuant to a specific plan. They shall operate in full compliance with the General Data Protection Regulation. This shall take place under the direct supervision and guidance of the Commission in collaboration with competent authorities with a view to identifying risks to health and safety and fundamental rights, testing mitigation measures for identified risks, demonstrating prevention of these risks and otherwise ensuring compliance with the requirements of this Regulation and, where relevant, other Union and Member States legislation supervised within the sandbox. AI regulatory sandboxes shall remain a technical solution, shall assess potential adverse effects and shall not be used in the employment context.
2022/06/13
Committee: IMCOLIBE
Amendment 2314 #
Proposal for a regulation
Article 53 – paragraph 3
3. The AI regulatory sandboxes shall not affect the supervisory and corrective powers of the competent authorities. Regulatory sandboxes involving activities that may impact health, safety and fundamental rights, democracy and the rule of law or the environment shall be developed in accordance with redress-by-design principles. Any significant risks identified during the development and testing of such systems shall result in immediate mitigation and, failing that, in the suspension of the development and testing process until such mitigation takes place.
2022/06/13
Committee: IMCOLIBE
Amendment 2338 #
Proposal for a regulation
Article 53 – paragraph 6 a (new)
6 a. The modalities referred to in Article 53(6) shall ensure at least the following:
(a) participants in the regulatory sandboxing system, in particular small-scale providers, are granted access to pre-deployment services, such as preliminary registration of AI systems, insurance, compliance and R&D support services, and to all the other relevant elements of the Union’s AI ecosystem and other Digital Single Market initiatives, such as testing and experimentation facilities, digital hubs, centres of excellence and EU benchmarking capabilities, and to other value-adding services such as standardisation and certification, community social platforms and contact databases, tenders and grant-making portals and lists of potential investors;
(b) foreign providers, in particular small-scale providers, are eligible to take part in the regulatory sandboxes to incubate and refine their products in compliance with this Regulation;
(c) individuals such as researchers, entrepreneurs, innovators and other pre-market idea owners are eligible to take part in the regulatory sandboxes to incubate and refine their products in compliance with this Regulation;
(d) there be as little fragmentation as possible of the regulatory sandboxes across Member States, notably through the development of a single interface and contact point at the EU level to interact with the regulatory sandbox ecosystem and through the Commission facilitating the creation of transnational and EU-wide regulatory sandboxes.
2022/06/13
Committee: IMCOLIBE
Amendment 2388 #
Proposal for a regulation
Article 55 a (new)
Article 55 a
Promoting research and development of AI in support of socially and environmentally beneficial outcomes led by civil society
1. Member States shall promote research and development of AI solutions which support socially and environmentally beneficial outcomes, including but not limited to the development of AI-based solutions to increase accessibility for persons with disabilities, tackle socio-economic inequalities, and meet sustainability and environmental targets, by:
(a) providing relevant projects with priority access to the AI regulatory sandboxes to the extent that they fulfil the eligibility conditions;
(b) earmarking public funding, including from relevant EU funds, for AI research and development in support of socially and environmentally beneficial outcomes;
(c) organising specific awareness-raising activities about the application of this Regulation, the availability of and application procedures for dedicated funding, tailored to the needs of those projects;
(d) where appropriate, establishing accessible dedicated channels for communication with projects to provide guidance and respond to queries about the implementation of this Regulation.
2. Member States shall ensure that, when a conformity assessment is required under Article 43, the cost of such assessment is covered by public, including EU, funds available for AI research and development.
3. Without prejudice to paragraph 1(a), Member States shall ensure that relevant projects are led by civil society and social stakeholders that set the project priorities, goals, and outcomes.
2022/06/13
Committee: IMCOLIBE
Amendment 2390 #
Proposal for a regulation
Article 55 b (new)
Article 55 b
Right not to be subject to non-compliant AI systems
Natural persons shall have the right not to be subject to AI systems that:
(a) pose an unacceptable risk pursuant to Article 5, or
(b) otherwise do not comply with the requirements of this Regulation.
2022/06/13
Committee: IMCOLIBE
Amendment 2391 #
Proposal for a regulation
Article 55 c (new)
Article 55 c
Right to information about the use and functioning of AI systems
1. Natural persons shall have the right to be informed that they have been exposed to high-risk AI systems as defined in Article 6, and other AI systems as defined in Article 52.
2. Natural persons shall have the right to be provided, upon request, with an explanation for decisions producing legal effects or otherwise affecting them, or outcomes related to them, taken by or with the assistance of systems within the scope of this Regulation, pursuant to Article 52 paragraph (3b).
3. The information outlined in paragraphs 1 and 2 shall be provided in a clear, easily understandable and intelligible way, in a manner that is accessible for persons with disabilities.
2022/06/13
Committee: IMCOLIBE
Amendment 2396 #
Proposal for a regulation
Article 56 – title
Establishment of the European Artificial Intelligence Board
2022/06/13
Committee: IMCOLIBE
Amendment 2402 #
Proposal for a regulation
Article 56 – paragraph 1 a (new)
1 a. The Board shall be independent in the fulfilment of its tasks. It shall have legal personality.
2022/06/13
Committee: IMCOLIBE
Amendment 2403 #
Proposal for a regulation
Article 56 – paragraph 1 b (new)
1 b. The Board shall ensure the consistent application of this Regulation.
2022/06/13
Committee: IMCOLIBE
Amendment 2406 #
Proposal for a regulation
Article 56 – paragraph 2 – introductory part
2. The Board shall provide advice and assistance to the Commission and the national authorities in order to:
2022/06/13
Committee: IMCOLIBE
Amendment 2411 #
Proposal for a regulation
Article 56 – paragraph 2 – point c a (new)
(c a) carry out annual reviews and analyses of the complaints sent to and findings made by national competent authorities, of the serious incident and malfunctioning reports referred to in Article 62, and of the new registrations in the EU database referred to in Article 60, in order to identify trends and potential emerging issues threatening the future health and safety and fundamental rights of citizens that are not adequately addressed by this Regulation; carry out biannual horizon-scanning and foresight exercises to extrapolate the impact these trends and emerging issues can have on the Union; and annually publish recommendations to the Commission, including but not limited to recommendations on the categorisation of prohibited practices, high-risk systems, and codes of conduct for AI systems that are not classified as high-risk.
2022/06/13
Committee: IMCOLIBE
Amendment 2417 #
Proposal for a regulation
Article 56 – paragraph 2 – point c b (new)
(c b) represent and defend the interests of broader civil society, including the social partners.
2022/06/13
Committee: IMCOLIBE
Amendment 2418 #
Proposal for a regulation
Article 56 – paragraph 2 – point c c (new)
(c c) launch an evaluation procedure for an AI system.
2022/06/13
Committee: IMCOLIBE
Amendment 2419 #
Proposal for a regulation
Article 56 – paragraph 2 a (new)
2 a. The Board shall have a sufficient number of competent personnel at its disposal for assistance in the proper performance of its tasks.
2022/06/13
Committee: IMCOLIBE
Amendment 2420 #
Proposal for a regulation
Article 56 – paragraph 2 b (new)
2 b. The Board shall be organised and operated so as to safeguard the independence, objectivity and impartiality of its activities. The Board shall document and implement a structure and procedures to safeguard impartiality and to promote and apply the principles of impartiality throughout its activities.
2022/06/13
Committee: IMCOLIBE
Amendment 2432 #
Proposal for a regulation
Article 57 – paragraph 1
1. The Board shall be composed of the national supervisory authorities, who shall be represented by the head or equivalent high-level official of that authority, the European Data Protection Supervisor, the EU Agency for Fundamental Rights, ENISA, EIGE and the social partners, as well as representatives of civil society. Other national authorities may be invited to the meetings, where the issues discussed are of relevance for them.
2022/06/13
Committee: IMCOLIBE
Amendment 2435 #
Proposal for a regulation
Article 57 – paragraph 1
1. The Board shall be composed of the national supervisory authorities, who shall be represented by the head or equivalent high-level official of that authority, and the European Data Protection Supervisor and the Fundamental Rights Agency. Other national authorities or EU agencies may be invited to the meetings, where the issues discussed are of relevance for them.
2022/06/13
Committee: IMCOLIBE
Amendment 2438 #
Proposal for a regulation
Article 57 – paragraph 1 a (new)
1 a. The Commission shall have the right to participate in the activities and meetings of the Board without voting rights. The Commission shall designate a representative. The Chair of the Board shall communicate to the Commission the activities of the Board.
2022/06/13
Committee: IMCOLIBE
Amendment 2444 #
Proposal for a regulation
Article 57 – paragraph 2
2. The Board shall adopt its rules of procedure by a simple majority of its members, following the consent of the Commission. The rules of procedure shall also contain the operational aspects related to the execution of the Board’s tasks as listed in Article 58. The Board may establish sub-groups as appropriate for the purpose of examining specific questions.
2022/06/13
Committee: IMCOLIBE
Amendment 2448 #
Proposal for a regulation
Article 57 – paragraph 2 a (new)
2 a. The Board may establish sub-groups as appropriate for the purpose of examining specific questions. The Board shall establish a permanent sub-group for the purpose of examining the question of the proper governance of general purpose AI systems. The Board shall also establish a permanent sub-group for the purpose of examining the question of the proper governance of research and development activities on the topic of AI and to inform the development of the governance framework.
2022/06/13
Committee: IMCOLIBE
Amendment 2451 #
Proposal for a regulation
Article 57 – paragraph 3
3. The Board shall be chaired by the Commission. The Commission shall convene the meetings and prepare the agenda in accordance with the tasks of the Board pursuant to this Regulation and with its rules of procedure. The Commission shall provide administrative and analytical support for the activities of the Board pursuant to this Regulation.
2022/06/13
Committee: IMCOLIBE
Amendment 2459 #
Proposal for a regulation
Article 57 – paragraph 3 a (new)
3 a. The Board shall elect a chair and two deputy chairs from amongst its members by simple majority.
2022/06/13
Committee: IMCOLIBE
Amendment 2461 #
Proposal for a regulation
Article 57 – paragraph 3 b (new)
3 b. The term of office of the Chair and of the deputy chairs shall be five years and be renewable once.
2022/06/13
Committee: IMCOLIBE
Amendment 2465 #
Proposal for a regulation
Article 57 – paragraph 4
4. The Board may invite external experts and observers to attend its meetings and may hold exchanges with interested third parties to inform its activities to an appropriate extent. To that end, the Commission may facilitate exchanges between the Board and other Union bodies, offices, agencies and specialised bodies. The composition of the specialised body shall ensure fair representation of consumer organisations, civil society organisations and academics specialised in AI. Its meetings and their minutes shall be published online.
2022/06/13
Committee: IMCOLIBE
Amendment 2488 #
Proposal for a regulation
Article 58 – paragraph 1 – introductory part
When ensuring the consistent application of this Regulation, the Board shall in particular:
2022/06/13
Committee: IMCOLIBE
Amendment 2517 #
Proposal for a regulation
Article 58 – paragraph 1 – point c a (new)
(c a) provide guidance in relation to governing general-purpose AI systems and their compliance with applicable requirements to meet the objectives of this Regulation.
2022/06/13
Committee: IMCOLIBE
Amendment 2520 #
Proposal for a regulation
Article 58 – paragraph 1 – point c b (new)
(c b) provide guidance in relation to governing research and development activities for creating new or improving existing AI systems, and the alignment of these activities with the objectives of this Regulation.
2022/06/13
Committee: IMCOLIBE
Amendment 2524 #
Proposal for a regulation
Article 58 – paragraph 1 – point c c (new)
(c c) provide statutory guidance in relation to children’s rights, applicable law and minimum standards for the evaluation of automated decision-making systems to meet the objectives of this Regulation pertaining to children, and to investigate the design goals, data inputs, model selection, implementation and outcomes of such systems.
2022/06/13
Committee: IMCOLIBE
Amendment 2568 #
Proposal for a regulation
Article 59 – paragraph 3
3. Member States shall inform the Board and the Commission of their designation or designations and, where applicable, the reasons for designating more than one authority.
2022/06/13
Committee: IMCOLIBE
Amendment 2580 #
Proposal for a regulation
Article 59 – paragraph 5
5. Member States shall report to the Board and the Commission on an annual basis on the status of the financial and human resources of the national competent authorities with an assessment of their adequacy. The Commission shall transmit that information to the Board for discussion and possible recommendations.
2022/06/13
Committee: IMCOLIBE
Amendment 2594 #
Proposal for a regulation
Article 59 – paragraph 8
8. The European Data Protection Supervisor shall act as the competent authority for the supervision of Union institutions, agencies and bodies.
2022/06/13
Committee: IMCOLIBE
Amendment 2608 #
Proposal for a regulation
Title VII
EU DATABASE FOR STAND-ALONE HIGH-RISK AI SYSTEMS
2022/06/13
Committee: IMCOLIBE
Amendment 2610 #
Proposal for a regulation
Article 60 – title
EU database for stand-alone high-risk, general purpose and certain AI systems, uses thereof, and uses of AI systems by public authorities
2022/06/13
Committee: IMCOLIBE
Amendment 2612 #
Proposal for a regulation
Article 60 – title
EU database for stand-alone high-risk AI systems
2022/06/13
Committee: IMCOLIBE
Amendment 2614 #
Proposal for a regulation
Article 60 – paragraph 1
1. The Commission shall, in collaboration with the Member States, set up and maintain an EU database containing information referred to in paragraph 2 concerning AI systems which are registered in accordance with Article 51 and general purpose AI systems, in accordance with Article xx:
a. high-risk AI systems referred to in Article 6(2) which are registered in accordance with Article 51(1);
b. any AI systems referred to in Article 52 paragraphs 1b and 2 which are registered in accordance with Article 51(1);
c. any uses of high-risk AI systems referred to in Article 6(2) which are registered in accordance with Article 51(2);
d. any uses of AI systems referred to in Article 52 paragraphs 1b and 2 which are registered in accordance with Article 51(2);
e. any uses of AI systems by or on behalf of public authorities registered in accordance with Article 51(3).
2022/06/13
Committee: IMCOLIBE
Amendment 2620 #
Proposal for a regulation
Article 60 – paragraph 2
2. The Commission shall provide providers and users entering data into the EU database with technical and administrative support. The following information should be included in the EU database:
(a) for registrations according to paragraphs 1(a) and 1(b), the data listed in Annex VIII, point 1, shall be entered into the EU database by the providers. The Commission shall provide them with technical and administrative support;
(b) for registrations according to paragraphs 1(c), 1(d) and 1(e), the data listed in Annex VIII, point 2, shall be entered into the EU database by the users.
2022/06/13
Committee: IMCOLIBE
Amendment 2624 #
Proposal for a regulation
Article 60 – paragraph 3
3. The EU database and the information contained in it shall be freely available to the public, comply with the accessibility requirements of Annex I to Directive 2019/882, and be user-friendly, navigable, and machine-readable, containing structured digital data based on a standardised protocol.
2022/06/13
Committee: IMCOLIBE
Amendment 2626 #
Proposal for a regulation
Article 60 – paragraph 3 a (new)
3 a. Users should register deployments of high-risk AI systems into the EU database before putting them into use. The users should include information in the database, including but not limited to, the identity of the provider and the user, the context of the purpose and of the deployment, the designation of impacted persons, and the results of the impact assessment.
2022/06/13
Committee: IMCOLIBE
Amendment 2628 #
Proposal for a regulation
Article 60 – paragraph 4
4. The EU database shall contain personal data only insofar as necessary for collecting and processing information in accordance with this Regulation. That information shall include the names and contact details of natural persons who are responsible for registering the system and have the legal authority to represent the provider, or the user.
2022/06/13
Committee: IMCOLIBE
Amendment 2632 #
Proposal for a regulation
Article 60 – paragraph 5
5. The Commission shall be the controller of the EU database. It shall also ensure that providers and users receive adequate technical and administrative support, in particular in relation to registrations according to paragraph 1(e).
2022/06/13
Committee: IMCOLIBE
Amendment 2637 #
Proposal for a regulation
Article 60 – paragraph 5 a (new)
5 a. The database shall comply with the accessibility requirements of Annex I to Directive 2019/882.
2022/06/13
Committee: IMCOLIBE
Amendment 2638 #
Proposal for a regulation
Article 60 a (new)
Article 60 a
Systemic transparency and monitoring of societal implications
1. The Commission shall, in collaboration with the Member States, set up and maintain a relational database of digital and AI systems that interact with high-risk or general purpose AI systems or with AI systems with transparency obligations. Among others, the relational database shall include digital and AI systems whose input directly or indirectly comes from a high-risk or general purpose AI system or whose output directly or indirectly is taken as input by a high-risk or general purpose AI system.
2. For each entry in the EU database referred to in Article 60, the provider shall enter the upstream and downstream digital and AI systems into the relational database, as well as, to the extent it is possible, the digital and AI systems upstream of the upstream AI systems and the digital and AI systems downstream of the downstream AI systems.
3. The European AI Board and the Commission shall regularly assess the relational map to facilitate incident response and to identify AI systems (‘Societally Significant AI systems’) whose output is used as input into many downstream digital and AI systems.
4. The European AI Board and the Commission shall develop a Code of Conduct for Societally Significant AI Systems.
2022/06/13
Committee: IMCOLIBE
Amendment 2640 #
Proposal for a regulation
Article 61 – paragraph 2
2. The post-market monitoring system shall actively and systematically collect, document and analyse relevant data provided by users or collected through other sources on the performance of high- risk AI systems throughout their lifetime, and allow the provider to evaluate the continuous compliance of AI systems with the requirements set out in Title III, Chapter 2. Post-market monitoring must include continuous analysis of the AI environment, including other devices, software, and other AI systems that will interact with the AI system.
2022/06/13
Committee: IMCOLIBE
Amendment 2703 #
Proposal for a regulation
Article 64 a (new)
Article 64 a
Market surveillance authorities
1. Market surveillance authorities shall, at a minimum, have the power to:
(a) carry out unannounced on-site and remote inspections of AI systems;
(b) acquire samples related to AI systems, including through remote inspections, to reverse-engineer the AI systems and to acquire evidence to identify non-compliance.
2. Member States may authorise their market surveillance authorities to reclaim from the relevant operator the totality of the costs of their activities with respect to instances of non-compliance.
3. The costs referred to in paragraph 2 of this Article may include the costs of carrying out testing, computation, hardware and storage, and the costs of activities relating to AI systems that are found to be non-compliant and are subject to corrective action prior to their placing on the market.
2022/06/13
Committee: IMCOLIBE
Amendment 2706 #
Proposal for a regulation
Article 65 – paragraph 1
1. AI systems presenting a risk shall be understood as a product presenting a risk defined in Article 3, point 19 of Regulation (EU) 2019/1020 insofar as risks to health or safety in general, including safety in the workplace, protection of consumers, the environment, or to the protection of fundamental rights of persons are concerned, including autonomy of choice, access to goods and services, unfair discrimination and economic harm, privacy and data protection, as well as societal risks.
2022/06/13
Committee: IMCOLIBE
Amendment 2711 #
Proposal for a regulation
Article 65 – paragraph 1 a (new)
1 a. When AI systems are likely to interact with or impact on children, the precautionary principle shall apply.
2022/06/13
Committee: IMCOLIBE
Amendment 2713 #
Proposal for a regulation
Article 65 – paragraph 2 – introductory part
2. Where the market surveillance authority of a Member State has sufficient reasons to consider that an AI system presents a risk as referred to in paragraph 1, they shall carry out an evaluation of the AI system concerned in respect of its compliance with all the requirements and obligations laid down in this Regulation. When risks to the protection of fundamental rights are present, the market surveillance authority shall also inform the relevant national public authorities, Board or bodies referred to in Article 64(3). Where there is sufficient reason to consider that an AI system exploits the vulnerabilities of children or violates their rights intentionally or unintentionally, the market surveillance authority shall have the duty to investigate the design goals, data inputs, model selection, implementation and outcomes of the AI system, and the burden of proof shall be on the operator or operators of that system to demonstrate compliance with the provisions of this Regulation. The relevant operators shall cooperate as necessary with the market surveillance authorities and the other national public authorities or bodies referred to in Article 64(3), including by providing access to personnel, documents, internal communications, code, data samples and on-platform testing as necessary.
2022/06/13
Committee: IMCOLIBE
Amendment 2716 #
Proposal for a regulation
Article 65 – paragraph 2 – subparagraph 1
Where, in the course of its evaluation, the market surveillance authority finds that the AI system does not comply with the requirements and obligations laid down in this Regulation, it shall without delay require the relevant operator to take all appropriate corrective actions to bring the AI system into compliance, to withdraw the AI system from the market, or to recall it within a reasonable period, commensurate with the nature of the risk, as it may prescribe. The corrective action can also be applied to AI systems in other products or services judged to be similar in their objectives, design or impact.
2022/06/13
Committee: IMCOLIBE
Amendment 2740 #
Proposal for a regulation
Article 66 – paragraph 1
1. Where, within three months of receipt of the notification referred to in Article 65(5), objections are raised by the European Parliament or a Member State against a measure taken by another Member State, or where the Commission considers the measure to be contrary to Union law, or has sufficient reasons to believe that an AI system presents a risk or affects consumers in more than one Member State, the Commission shall without delay enter into consultation with the relevant Member State and operator or operators and shall evaluate the national measure. On the basis of the results of that evaluation, the Commission shall decide whether the national measure is justified or not within nine months from the notification referred to in Article 65(5) and notify such decision to the Member State concerned.
2022/06/13
Committee: IMCOLIBE
Amendment 2743 #
Proposal for a regulation
Article 66 – paragraph 3
3. Where the national measure is considered justified and the non-compliance of the AI system is attributed to shortcomings in the harmonised standards or common specifications referred to in Articles 40 and 41 of this Regulation, the Commission shall apply the procedure provided for in Article 11 of Regulation (EU) No 1025/2012. The Commission shall also have the possibility to suggest alternative measures to the Member State concerned.
2022/06/13
Committee: IMCOLIBE
Amendment 2776 #
Proposal for a regulation
Article 68 a (new)
Article 68 a
Right to lodge a complaint with a supervisory authority
1. Citizens have a right not to be subjected to prohibited AI systems.
2. Citizens have a right not to be subjected to high-risk AI systems that fail to meet the requirements for high-risk systems.
3. Without prejudice to any other administrative or judicial remedy, every citizen shall have the right to lodge a complaint with a supervisory authority, in particular in the Member State of his or her habitual residence, place of work or place of the alleged infringement, if the citizen considers that he or she has been subjected to an AI system that infringes this Regulation.
4. The supervisory authority with which the complaint has been lodged shall inform the complainant of the progress and the outcome of the complaint.
5. Without prejudice to any other administrative or non-judicial remedy, each natural or legal person shall have the right to an effective judicial remedy against a legally binding decision of a supervisory authority concerning them.
2022/06/13
Committee: IMCOLIBE
Amendment 2813 #
Proposal for a regulation
Article 70 b (new)
Article 70 b
Right to removal and injunction
1. If an AI system infringes this Regulation, each natural or legal person affected by that AI system may require the user of the system to stop the use and to remove the infringement.
2. If further infringements by an AI system are to be feared, each affected natural or legal person may seek a prohibitory injunction.
2022/06/13
Committee: IMCOLIBE
Amendment 2835 #
Proposal for a regulation
Article 71 – paragraph 3 – introductory part
3. The following infringements shall be subject to administrative fines of up to 30 000 000 EUR or, if the offender is a company, up to 10 % of its total worldwide annual turnover for the preceding financial year, whichever is higher:
2022/06/13
Committee: IMCOLIBE
Amendment 2850 #
Proposal for a regulation
Article 71 – paragraph 4
4. The non-compliance of the AI system with any requirements or obligations under this Regulation, other than those laid down in Articles 5 and 10, shall be subject to administrative fines of up to 20 000 000 EUR or, if the offender is a company, up to 7 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.
2022/06/13
Committee: IMCOLIBE
Amendment 2856 #
Proposal for a regulation
Article 71 – paragraph 5
5. The supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request shall be subject to administrative fines of up to 10 000 000 EUR or, if the offender is a company, up to 4 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.
2022/06/13
Committee: IMCOLIBE
Amendment 2884 #
Proposal for a regulation
Article 72 – paragraph 1 – introductory part
1. The European Data Protection Supervisor may impose administrative fines on Union institutions, agencies and bodies developing, deploying or operating AI systems. When deciding whether to impose an administrative fine and deciding on the amount of the administrative fine in each individual case, all relevant circumstances of the specific situation shall be taken into account and due regard shall be given to the following:
2022/06/13
Committee: IMCOLIBE
Amendment 2918 #
Proposal for a regulation
Article 73 – paragraph 2
2. The delegation of power referred to in Article 4, Article 5a, Article 7(1), Article 11(3), Article 43(5) and (6), Article 48(5) and Article 52a shall be conferred on the Commission for an indeterminate period of time from [entering into force of the Regulation].
2022/06/13
Committee: IMCOLIBE
Amendment 2922 #
Proposal for a regulation
Article 73 – paragraph 3
3. The delegation of power referred to in Article 4, Article 5a, Article 7(1), Article 11(3), Article 43(5) and (6), Article 48(5) and Article 52a may be revoked at any time by a joint decision from the European Parliament and the Council. A decision of revocation shall put an end to the delegation of power specified in that decision. It shall take effect the day following that of its publication in the Official Journal of the European Union or at a later date specified therein. It shall not affect the validity of any delegated acts already in force.
2022/06/13
Committee: IMCOLIBE
Amendment 2929 #
Proposal for a regulation
Article 73 – paragraph 4
4. In preparation of a delegated act, the Commission shall notify it simultaneously to the European Parliament and to the Council.
2022/06/13
Committee: IMCOLIBE
Amendment 2930 #
Proposal for a regulation
Article 73 – paragraph 5
5. Any delegated act adopted pursuant to Article 4, Article 5a, Article 7(1), Article 11(3), Article 43(5) and (6), Article 48(5) and Article 52a shall enter into force only if no objection has been expressed by either the European Parliament or the Council within a period of three months of notification of that act to the European Parliament and the Council or if, before the expiry of that period, the European Parliament and the Council have both informed the Commission that they will not object. That period shall be extended by three months at the initiative of the European Parliament or of the Council.
2022/06/13
Committee: IMCOLIBE
Amendment 2947 #
Proposal for a regulation
Article 83 – paragraph 1 – introductory part
1. This Regulation shall not apply to the AI systems which are components of the large-scale IT systems established by the legal acts listed in Annex IX that have been placed on the market or put into service before [12 months after the date of application of this Regulation referred to in Article 85(2)], and the requirements laid down in this Regulation shall be taken into account in the evaluation of each large-scale IT system established by the legal acts listed in Annex IX.
2022/06/13
Committee: IMCOLIBE
Amendment 2951 #
Proposal for a regulation
Article 83 – paragraph 1 – subparagraph 1
The requirements laid down in this Regulation shall be taken into account, where applicable, in the evaluation of each large-scale IT systems established by the legal acts listed in Annex IX to be undertaken as provided for in those respective acts.
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 2956 #
Proposal for a regulation
Article 83 – paragraph 2
2. This Regulation shall apply to the high-risk AI systems, other than the ones referred to in paragraph 1, that have been placed on the market or put into service before [date of application of this Regulation referred to in Article 85(2)], only if, from that date, those systems are subject to significant changes in their design or intended purpose.
2022/06/13
Committee: IMCOLIBE
Amendment 2964 #
Proposal for a regulation
Article 84 – paragraph 1
1. The Commission shall assess the need for amendment of the list in Annex III, including the extension of existing area headings or the addition of new area headings, Article 5’s list of prohibited AI practices, and Article 52’s list of AI systems requiring additional transparency measures, once a year following the entry into force of this Regulation.
2022/06/13
Committee: IMCOLIBE
Amendment 2984 #
Proposal for a regulation
Article 84 – paragraph 6
6. In carrying out the evaluations and reviews referred to in paragraphs 1 to 4 the Commission shall take into account the positions and findings of the Board, of the European Parliament, of the Council, and of equality bodies and other relevant bodies or sources, and shall consult relevant external stakeholders, in particular those potentially affected by the AI system, as well as stakeholders from academia and civil society.
2022/06/13
Committee: IMCOLIBE
Amendment 2991 #
Proposal for a regulation
Article 84 – paragraph 7
7. The Commission shall, if necessary, submit appropriate proposals to amend this Regulation, in particular taking into account developments in technology, the effect of AI systems on health and safety, fundamental rights, equality, and accessibility for persons with disabilities, and in the light of the state of progress in the information society.
2022/06/13
Committee: IMCOLIBE
Amendment 2996 #
Proposal for a regulation
Article 84 – paragraph 7 a (new)
7 a. To guide the evaluations and reviews referred to in paragraphs 1 to 4, the Board shall undertake to develop an objective and participative methodology for the evaluation of risk level based on the criteria outlined in the relevant articles and inclusion of new systems in: the list in Annex III, including the extension of existing area headings or addition of new area headings; Article 5’s list of prohibited AI practices; and Article 52’s list of AI systems requiring additional transparency measures.
2022/06/13
Committee: IMCOLIBE
Amendment 3052 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – introductory part
1. AI systems which use biometric or biometrics-based data:
2022/06/13
Committee: IMCOLIBE
Amendment 3064 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a
(a) AI systems that are or may be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons;
2022/06/13
Committee: IMCOLIBE
Amendment 3068 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a a (new)
(a a) AI systems that are or may be used for the biometric identification of natural persons in publicly accessible spaces, as well as in workplaces, in educational settings and in border surveillance, or in the provision of public or essential services;
2022/06/13
Committee: IMCOLIBE
Amendment 3071 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a a (new)
(a a) AI systems that are or may be used for biometric verification in publicly accessible spaces, as well as in workplaces and in educational settings;
2022/06/13
Committee: IMCOLIBE
Amendment 3073 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a b (new)
(a b) AI systems that are or may be used for the detection of a person’s presence, in workplaces, in educational settings, and in border surveillance, including in the virtual / online version of these spaces, on the basis of their biometric or biometrics-based data;
2022/06/13
Committee: IMCOLIBE
Amendment 3076 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a b (new)
(a b) AI systems that are or may be used for biometric verification in publicly accessible spaces, as well as in workplaces and in educational settings;
2022/06/13
Committee: IMCOLIBE
Amendment 3077 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a b (new)
(a b) AI systems that are or may be used for categorisation on the basis of biometric or biometrics-based data;
2022/06/13
Committee: IMCOLIBE
Amendment 3078 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a c (new)
(a c) AI systems that are or may be used for monitoring compliance with health and safety measures or inferring alertness / attentiveness for safety purposes, on the basis of biometric or biometrics-based data;
2022/06/13
Committee: IMCOLIBE
Amendment 3081 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a c (new)
(a c) AI systems that are or may be used to diagnose or support diagnosis of medical conditions or medical emergencies on the basis of biometric or biometrics-based data;
2022/06/13
Committee: IMCOLIBE
Amendment 3082 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a c (new)
(a c) AI systems that are or may be used for categorisation on the basis of biometric or biometrics-based data;
2022/06/13
Committee: IMCOLIBE
Amendment 3084 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a d (new)
(a d) AI systems that are or may be used for the detection of a person’s presence, in workplaces, in educational settings, and in border surveillance, including in the virtual / online version of these spaces, on the basis of their biometric or biometrics-based data;
2022/06/13
Committee: IMCOLIBE
Amendment 3086 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a e (new)
(a e) AI systems that are or may be used for monitoring compliance with health and safety measures or inferring alertness / attentiveness for safety purposes, on the basis of biometric or biometrics-based data;
2022/06/13
Committee: IMCOLIBE
Amendment 3087 #
Proposal for a regulation
Annex III – paragraph 1 – point 2 – introductory part
2. Management, operation, generation and supply of critical infrastructure, technology and energy:
2022/06/13
Committee: IMCOLIBE
Amendment 3104 #
Proposal for a regulation
Annex III – paragraph 1 – point 3 – point b a (new)
(b a) AI systems intended to be used for the optimization of individual learning processes based on a student's learning data.
2022/06/13
Committee: IMCOLIBE
Amendment 3116 #
Proposal for a regulation
Annex III – paragraph 1 – point 4 – point b
(b) AI intended to be used for making decisions affecting the initiation, establishment, implementation and termination of an employment relationship, including AI systems intended to support collective legal and regulatory matters, particularly for task allocation and for monitoring and evaluating performance and behavior of persons in such relationships.
2022/06/13
Committee: IMCOLIBE
Amendment 3122 #
Proposal for a regulation
Annex III – paragraph 1 – point 5 – point a
(a) AI systems intended to be used by or on behalf of (semi-)public authorities or private parties to evaluate or predict the lawful use by, or the eligibility of, natural persons, including the self-employed and micro-enterprises, for public assistance, benefits and services and essential private services, including but not limited to housing, electricity, heating/cooling, finance, insurance and internet, as well as to grant, reduce, revoke, or reclaim such benefits and services or set payment obligations related to these services;
2022/06/13
Committee: IMCOLIBE
Amendment 3127 #
Proposal for a regulation
Annex III – paragraph 1 – point 5 – point b
(b) AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems put into service by small scale providers for their own use;
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 3132 #
Proposal for a regulation
Annex III – paragraph 1 – point 5 – point b
(b) AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems put into service by small scale providers for their own use;
2022/06/13
Committee: IMCOLIBE
Amendment 3151 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point a
(a) AI systems intended to be used by law enforcement authorities for making individual risk assessments of natural persons in order to assess the risk of a natural person for offending or reoffending or the risk for potential victims of criminal offences;
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 3152 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point a
(a) AI systems intended to be used by law enforcement authorities for making individual risk assessments of natural persons in order to assess the risk of a natural person for offending or reoffending or the risk for potential victims of criminal offences;
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 3159 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point b
(b) AI systems intended to be used by law enforcement authorities as polygraphs and similar tools or to detect the emotional state of a natural person;
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 3161 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point b
(b) AI systems intended to be used by law enforcement authorities as polygraphs and similar tools or to detect the emotional state of a natural person;
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 3174 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point e
(e) AI systems intended to be used by law enforcement authorities for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups;
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 3175 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point e
(e) AI systems intended to be used by law enforcement authorities for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups;
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 3181 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point f
(f) AI systems intended to be used by law enforcement authorities for profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 in the course of detection, investigation or prosecution of criminal offences;
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 3185 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point g
(g) AI systems intended to be used for crime analytics regarding natural persons, allowing law enforcement authorities to search complex related and unrelated large data sets available in different data sources or in different data formats in order to identify unknown patterns or discover hidden relationships in the data.
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 3191 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point a
(a) AI systems intended to be used by competent public authorities as polygraphs and similar tools or to detect the emotional state of a natural person;
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 3192 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point a
(a) AI systems intended to be used by competent public authorities as polygraphs and similar tools or to detect the emotional state of a natural person;
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 3198 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point b
(b) AI systems intended to be used by competent public authorities to assess a risk, including a security risk, a risk of irregular immigration, or a health risk, posed by a natural person who intends to enter or has entered into the territory of a Member State;
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 3202 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point b
(b) AI systems intended to be used by competent public authorities, or by third parties acting on their behalf, to assess a risk, including but not limited to a security risk, a risk of irregular immigration, or a health risk, posed by a natural person who intends to enter or has entered into the territory of a Member State;
2022/06/13
Committee: IMCOLIBE
Amendment 3212 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point d
(d) AI systems intended to assist competent public authorities for the examination and assessment of the veracity of evidence and claims in relation to applications for asylum, visa and residence permits and associated complaints with regard to the eligibility of the natural persons applying for a status.
2022/06/13
Committee: IMCOLIBE
Amendment 3219 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point d a (new)
(d a) AI systems intended to be used by or on behalf of competent authorities in migration, asylum and border control management for the forecasting or prediction of trends related to migration, movement and border crossings;
2022/06/13
Committee: IMCOLIBE
Amendment 3221 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point d a (new)
(d a) AI systems that are or may be used by or on behalf of competent authorities in law enforcement, migration, asylum and border control management for the biometric identification of natural persons;
2022/06/13
Committee: IMCOLIBE
Amendment 3223 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point d b (new)
(d b) AI systems intended to be used by, or on behalf of, competent authorities in migration, asylum and border control management to monitor, surveil, or process data in the context of border management activities for the purpose of recognizing or detecting objects and natural persons;
2022/06/13
Committee: IMCOLIBE
Amendment 3225 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point d b (new)
(d b) AI systems that are or may be used by or on behalf of competent authorities in law enforcement, migration, asylum and border control management for the biometric identification of natural persons;
2022/06/13
Committee: IMCOLIBE
Amendment 3227 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point d c (new)
(d c) AI systems intended to be used by, or on behalf of, competent authorities in migration, asylum and border control management to monitor, surveil or process data in the context of border management activities for the purpose of recognizing or detecting objects and natural persons;
2022/06/13
Committee: IMCOLIBE
Amendment 3245 #
Proposal for a regulation
Annex IV – paragraph 1 – point 1 – point a
(a) its intended purpose or reasonably foreseeable use, the person/s developing the system, the date and the version of the system;
2022/06/13
Committee: IMCOLIBE
Amendment 3270 #
Proposal for a regulation
Annex IV – paragraph 1 – point 2 – point g
(g) the validation and testing procedures used, including information about the validation and testing data used and their main characteristics; metrics used to measure performance, robustness, cybersecurity and compliance with other relevant requirements set out in Title III, Chapter 2, as well as potentially discriminatory impacts; test logs and all test reports dated and signed by the responsible persons, including with regard to pre-determined changes as referred to under point (f).
2022/06/13
Committee: IMCOLIBE
Amendment 3272 #
Proposal for a regulation
Annex IV – paragraph 1 – point 3
3. Detailed information about the monitoring, functioning and control of the AI system, in particular with regard to: its capabilities and limitations in performance, including the degrees of accuracy for specific persons or groups of persons on which the system is intended to be used and the overall expected level of accuracy in relation to its intended purpose or reasonably foreseeable use; the foreseeable unintended outcomes and sources of risks to health and safety, fundamental rights and discrimination in view of the intended purpose or reasonably foreseeable use of the AI system; the human oversight measures needed in accordance with Article 14, including the technical measures put in place to facilitate the interpretation of the outputs of AI systems by the users; specifications on input data, as appropriate;
2022/06/13
Committee: IMCOLIBE
Amendment 3282 #
Proposal for a regulation
Annex IV – paragraph 1 – point 8 a (new)
8 a. Without prejudice to Article 9(2), a detailed description of the economic and social implications and potential risks to health, and in particular mental health, safety and fundamental rights arising from the hypothetical widespread use of the AI system or of similar systems in society, with reference to past incidents involving similar systems and the associated mitigating measures.
2022/06/13
Committee: IMCOLIBE
Amendment 3287 #
Proposal for a regulation
Annex VII – point 4 – point 4.7
4.7. Any change to the AI system that could affect the compliance of the AI system with the requirements or its intended purpose or reasonably foreseeable use shall be approved by the notified body which issued the EU technical documentation assessment certificate. The provider shall inform such notified body of its intention to introduce any of the above-mentioned changes or if it becomes otherwise aware of the occurrence of such changes. The intended changes shall be assessed by the notified body which shall decide whether those changes require a new conformity assessment in accordance with Article 43(4) or whether they could be addressed by means of a supplement to the EU technical documentation assessment certificate. In the latter case, the notified body shall assess the changes, notify the provider of its decision and, where the changes are approved, issue to the provider a supplement to the EU technical documentation assessment certificate.
2022/06/13
Committee: IMCOLIBE
Amendment 3288 #
Proposal for a regulation
Annex VIII – title
INFORMATION TO BE SUBMITTED UPON THE REGISTRATION OF HIGH-RISK AI SYSTEMS AND OF CERTAIN AI SYSTEMS, USES THEREOF, AND USES OF AI SYSTEMS BY PUBLIC AUTHORITIES IN ACCORDANCE WITH ARTICLE 51
2022/06/13
Committee: IMCOLIBE
Amendment 3290 #
Proposal for a regulation
Annex VIII – paragraph 1
The following information shall be provided and thereafter kept up to date by the provider with regard to high-risk AI systems referred to in Article 6(2) and to any AI system referred to in Article 52(1)(b) and (2) to be registered in accordance with Article 51(1).
2022/06/13
Committee: IMCOLIBE
Amendment 3293 #
Proposal for a regulation
Annex VIII – paragraph 1 a (new)
The following information shall be provided and thereafter kept up to date by the user with regard to uses of high-risk AI systems referred to in Article 6(2) and any AI system referred to in Article 52(1)(b) and (2) to be registered in accordance with Article 51(2).
(a) Name, address and contact details of the user;
(b) Where submission of information is carried out by another person on behalf of the user, the name, address and contact details of that person;
(c) Name, address and contact details of the authorised representative, where applicable;
(d) URL of the entry of the AI system in the EU database by its provider, or, where unavailable, AI system trade name and any additional unambiguous reference allowing identification and traceability of the AI system;
(e) Description of the intended purpose of the intended use of the AI system;
(f) Description of the context and the geographical and temporal scope of application of the intended use of the AI system;
(g) Basic explanation of the design specifications of the system, namely the general logic of the AI system and of the algorithms; the key design choices, including the rationale and assumptions made, also with regard to the categories of persons or groups of persons on which the system is intended to be used; the main classification choices; and what the system is designed to optimise for and the relevance of the different parameters;
(h) For high-risk AI systems and for systems referred to in Article 52(1)(b) and (2), designation of persons foreseeably impacted by the intended use of the AI system as required by Article X;
(i) For high-risk AI systems, results of the impact assessment on the use of the AI system that is conducted under obligations imposed by Article XX of this Regulation. Where full public disclosure of these results cannot be granted for reasons of privacy and data protection, disclosure must be granted to the national supervisory authority, which in turn must be indicated in the EU database;
(j) A description of how the relevant accessibility requirements set out in Annex I to Directive 2019/882 are met by the use of the AI system.
2022/06/13
Committee: IMCOLIBE
Amendment 3295 #
Proposal for a regulation
Annex VIII – paragraph 1 b (new)
The following information shall be provided and thereafter kept up to date by the user with regard to uses of AI systems by public authorities to be registered in accordance with Article 51(3).
(a) Name, address and contact details of the user;
(b) Where submission of information is carried out by another person on behalf of the user, the name, address and contact details of that person;
(c) Name, address and contact details of the authorised representative, where applicable;
(d) For high-risk AI systems, URL of the entry of the AI system in the EU database by its provider, or, for non-high-risk systems, AI system trade name and any additional unambiguous reference allowing identification and traceability of the AI system;
(e) Description of the intended purpose of the intended use of the AI system;
(f) Description of the context and the geographical and temporal scope of application of the intended use of the AI system;
(g) Basic explanation of the design specifications of the system, namely the general logic of the AI system and of the algorithms; the key design choices, including the rationale and assumptions made, also with regard to the categories of persons or groups of persons on which the system is intended to be used; the main classification choices; and what the system is designed to optimise for and the relevance of the different parameters;
(h) Designation of persons foreseeably impacted by the intended use of the AI system;
(i) If available, results of any impact assessment or due diligence process regarding the use of the AI system that the user has conducted;
(j) Assessment of the foreseeable impact on the environment, including but not limited to energy consumption, resulting from the use of the AI system over its entire lifecycle, and of the methods to reduce such impact;
(k) A description of how the relevant accessibility requirements set out in Annex I to Directive 2019/882 are met by the use of the AI system.
2022/06/13
Committee: IMCOLIBE
Amendment 3300 #
Proposal for a regulation
Annex VIII – point 5
5. Description of the intended purpose or reasonably foreseeable use of the AI system;
2022/06/13
Committee: IMCOLIBE
Amendment 3307 #
Proposal for a regulation
Annex VIII – point 11
11. Electronic instructions for use as listed in Article 13(3) and basic explanation of the general logic and key design as listed in Annex IV point 2(b) and of optimization choices as listed in Annex IV point 3.
2022/06/13
Committee: IMCOLIBE
Amendment 3308 #
Proposal for a regulation
Annex VIII – point 11 a (new)
11 a. Assessment of the environmental impact, including but not limited to resource consumption, resulting from the design, data management and training, and underlying infrastructures of the AI system, and of the methods to reduce such impact;
2022/06/13
Committee: IMCOLIBE
Amendment 3309 #
Proposal for a regulation
Annex VIII – point 11 b (new)
11 b. A description of how the system meets the relevant accessibility requirements of Annex I to Directive 2019/882.
2022/06/13
Committee: IMCOLIBE