44 Amendments of Romana JERKOVIĆ related to 2021/0106(COD)
Amendment 127 #
Proposal for a regulation
Recital 1
(1) The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework based on ethical principles in particular for the design, development, deployment, marketing and use of artificial intelligence in conformity with Union values. This Regulation pursues a number of overriding reasons of public interest, such as a high level of protection of health, safety, environment and fundamental rights, and it ensures the free movement of AI-based goods and services cross-border, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation.
Amendment 133 #
Proposal for a regulation
Recital 2
(2) Artificial intelligence systems (AI systems) can be easily deployed in multiple sectors of the economy and society, including cross border, and circulate throughout the Union. Certain Member States have already explored the adoption of national rules to ensure that artificial intelligence is trustworthy and safe and is developed and used in compliance with fundamental rights obligations. Differing national rules may lead to fragmentation of the internal market and decrease legal certainty for operators that develop or use AI systems. A consistent and high level of protection throughout the Union should therefore be ensured in order to achieve trustworthy AI, while divergences hampering the free circulation of AI systems and related products and services within the internal market should be prevented, by laying down uniform obligations for operators and guaranteeing the uniform protection of overriding reasons of public interest and of rights of persons throughout the internal market based on Article 114 of the Treaty on the Functioning of the European Union (TFEU). To the extent that this Regulation contains specific rules on the protection of individuals with regard to the processing of personal data concerning restrictions of the use of AI systems for ‘real-time’ remote biometric identification in publicly accessible spaces for the purpose of law enforcement, it is appropriate to base this Regulation, in as far as those specific rules are concerned, on Article 16 of the TFEU. In light of those specific rules and the recourse to Article 16 TFEU, it is appropriate to consult the European Data Protection Board.
Amendment 154 #
Proposal for a regulation
Recital 13
(13) In order to ensure a consistent and high level of protection of public interests as regards health, safety and fundamental rights, common normative standards for all high-risk AI systems should be established. Those standards should be consistent with the Charter of Fundamental Rights of the European Union (the Charter), the European Green Deal (the Green Deal) and the Joint Declaration on Digital Rights of the Union (the Declaration) and should be non-discriminatory and in line with the Union’s international trade commitments.
Amendment 158 #
Proposal for a regulation
Recital 14
(14) In order to introduce a proportionate and effective set of binding rules for AI systems, a clearly defined risk-based approach should be followed. That approach should tailor the type and content of such rules to the intensity and scope of the risks that AI systems can generate. It is therefore necessary to prohibit certain artificial intelligence practices, to lay down requirements for high-risk AI systems and obligations for the relevant operators, and to lay down transparency obligations for certain AI systems. With regard to transparency and human oversight obligations, Member States should be able to adopt further national measures to complement them without changing their harmonising nature.
Amendment 161 #
Proposal for a regulation
Recital 14 a (new)
(14a) Without prejudice to tailoring rules to the intensity and scope of the risks that AI systems can generate, or to the specific requirements laid down for high-risk AI systems, all AI systems developed, deployed or used in the Union should respect not only Union and national law but also a specific set of ethical principles that are aligned with the values enshrined in Union law and that are, in part, concretely reflected in the specific requirements to be complied with by high-risk AI systems. That set of principles should, inter alia, also be reflected in codes of conduct that should be mandatory for the development, deployment and use of all AI systems. Accordingly, any research carried out with the purpose of attaining AI-based solutions that strengthen the respect for those principles, in particular those of social responsibility and environmental sustainability, should be encouraged by the Commission and the Member States.
Amendment 162 #
Proposal for a regulation
Recital 14 b (new)
(14b) ‘AI literacy’ refers to skills, knowledge and understanding that allow both citizens more generally and developers, deployers and users in the context of the obligations set out in this Regulation to make an informed deployment and use of AI systems, as well as to gain awareness about the opportunities and risks of AI and thereby promote its democratic control. AI literacy should not be limited to learning about tools and technologies, but should also aim to equip citizens more generally and developers, deployers and users in the context of the obligations set out in this Regulation with the critical thinking skills required to identify harmful or manipulative uses as well as to improve their agency and their ability to fully comply with and benefit from trustworthy AI. It is therefore necessary that the Commission, the Member States as well as developers and deployers of AI systems, in cooperation with all relevant stakeholders, promote the development of AI literacy, in all sectors of society, for citizens of all ages, including women and girls, and that progress in that regard is closely followed.
Amendment 163 #
Proposal for a regulation
Recital 15
(15) Aside from the many beneficial uses of artificial intelligence, that technology can also be misused and provide novel and powerful tools for manipulative, exploitative and social control practices. Such practices are particularly harmful and should be prohibited because they contradict Union values of respect for human dignity, freedom, equality, democracy and the rule of law and Union fundamental rights, including the right to non-discrimination, data protection and privacy, gender equality and the rights of the child.
Amendment 170 #
Proposal for a regulation
Recital 16
(16) The development, deployment or use of certain AI systems used to distort human behaviour, whereby physical or psychological harms are likely to occur, should be forbidden. Such AI systems deploy subliminal components individuals cannot perceive or exploit vulnerabilities of children and people due to their age, physical or mental incapacities. They do so by materially distorting the behaviour of a person and in a manner that causes or is likely to cause harm to that or another person. The intention may not be presumed if the distortion of human behaviour results from factors external to the AI system which are outside of the control of the provider or the user. Research for legitimate purposes in relation to such AI systems should not be stifled by the prohibition, if such research does not amount to use of the AI system in human-machine relations that exposes natural persons to harm and such research is carried out in accordance with recognised ethical standards for scientific research.
Amendment 191 #
Proposal for a regulation
Recital 27
(27) High-risk AI systems should only be developed and deployed if they comply with certain mandatory requirements based on ethical principles. Those requirements should ensure that high-risk AI systems available in the Union or whose output is otherwise used in the Union do not pose unacceptable risks to important Union public interests as recognised and protected by Union law. AI systems identified as high-risk should be limited to those that have a significant harmful impact on the health, safety and fundamental rights of persons in the Union and such limitation minimises any potential restriction to international trade, if any.
Amendment 194 #
Proposal for a regulation
Recital 28
(28) AI systems could produce adverse outcomes to health and safety of persons, in particular when such systems operate as components of products. Consistently with the objectives of Union harmonisation legislation to facilitate the free movement of products in the internal market and to ensure that only safe and otherwise compliant products find their way into the market, it is important that the safety risks that may be generated by a product as a whole due to its digital components, including AI systems, are duly prevented and mitigated. For instance, increasingly autonomous robots, whether in the context of manufacturing or personal assistance and care, should be able to safely operate and perform their functions in complex environments. Similarly, in the health sector where the stakes for life and health are particularly high, increasingly sophisticated diagnostic systems and systems supporting human decisions should be reliable and accurate. The extent of the adverse impact caused by the AI system on the fundamental rights protected by the Charter is of particular relevance when classifying an AI system as high-risk. Those rights include the right to human dignity, respect for private and family life, protection of personal data, freedom of expression and information, freedom of assembly and of association, and non-discrimination, gender equality, education, consumer protection, workers’ rights, rights of persons with disabilities, right to an effective remedy and to a fair trial, right of defence and the presumption of innocence, right to good administration. In addition to those rights, it is important to highlight that children have specific rights as enshrined in Article 24 of the EU Charter and in the United Nations Convention on the Rights of the Child (further elaborated in the UNCRC General Comment No. 25 as regards the digital environment), both of which require consideration of the children’s vulnerabilities and provision of such protection and care as necessary for their well-being. The fundamental right to a high level of environmental protection enshrined in the Charter and implemented in Union policies should also be considered when assessing the severity of the harm that an AI system can cause, including in relation to the health and safety of persons or to the environment, due to the extraction and consumption of natural resources, waste and the carbon footprint.
Amendment 200 #
Proposal for a regulation
Recital 35
(35) AI systems used in education or vocational training, notably for determining access or assigning persons to educational and vocational training institutions or to evaluate persons on tests as part of or as a precondition for their education should be considered high-risk, since they may determine the educational and professional course of a person’s life and therefore affect their ability to secure their livelihood. When improperly designed, developed and used, such systems may violate the right to education and training as well as the right to gender equality and not to be discriminated against and perpetuate historical patterns of discrimination.
Amendment 201 #
Proposal for a regulation
Recital 36
(36) AI systems used in employment, workers management and access to self-employment, notably for the recruitment and selection of persons, for making decisions on promotion and termination and for task allocation, monitoring or evaluation of persons in work-related contractual relationships, should also be classified as high-risk, since those systems may appreciably impact the health, safety and security rules applicable in their work and at their workplaces and future career prospects and livelihoods of these persons. Relevant work-related contractual relationships should involve employees and persons providing services through platforms as referred to in the Commission Work Programme 2021. Such persons should in principle not be considered users within the meaning of this Regulation. Throughout the recruitment process and in the evaluation, promotion, or retention of persons in work-related contractual relationships, such systems may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. AI systems used to monitor the performance and behaviour of these persons may also impact their rights to data protection and privacy. In this regard, specific requirements on transparency, information and human oversight should apply. Trade unions and workers’ representatives should be informed and they should have access to any documentation created under this Regulation for any AI system deployed or used in their work or at their workplace.
Amendment 214 #
Proposal for a regulation
Recital 46
(46) Having comprehensible information on how high-risk AI systems have been developed and how they perform throughout their lifecycle is essential to verify compliance with the requirements under this Regulation and to allow users to make informed and autonomous decisions about their use. This requires keeping records and the availability of technical documentation, containing information which is necessary to assess the compliance of the AI system with the relevant requirements. Such information should include the general characteristics, capabilities and limitations of the system, algorithms, data, training, testing and validation processes used as well as documentation on the relevant risk management system. The technical documentation should be kept up to date.
Amendment 215 #
Proposal for a regulation
Recital 47
(47) To address the opacity that may make certain AI systems incomprehensible to or too complex for natural persons, a sufficient degree of transparency should be required for high-risk AI systems. Users should be able to interpret the system output and use it appropriately. High-risk AI systems should therefore be accompanied by relevant documentation and instructions of use and include concise and clear information, including in relation to possible risks to fundamental rights and discrimination, where appropriate. The same applies to AI systems with general purposes that may have high-risk uses that are not forbidden by their developer. In such cases, sufficient information should be made available allowing deployers to carry out tests and analysis on performance, data and usage. The systems and information should also be registered in the EU database for stand-alone high-risk AI systems foreseen in Article 60 of this Regulation.
Amendment 218 #
Proposal for a regulation
Recital 48
(48) High-risk AI systems should be designed and developed in such a way that natural persons can have agency over them by being able to oversee and control their functioning. For this purpose, appropriate human oversight measures should be identified by the provider of the system before its placing on the market or putting into service. In particular, where appropriate and at the very least where decisions based solely on the automated processing enabled by such systems produce legal or otherwise significant effects, such measures should guarantee that the system is subject to in-built operational constraints that cannot be overridden by the system itself and is responsive to the human operator, and that the natural persons to whom human oversight has been assigned have the necessary competence, training and authority to carry out that role.
Amendment 221 #
Proposal for a regulation
Recital 49
(49) High-risk AI systems should perform consistently throughout their lifecycle and meet an appropriate level of accuracy, robustness and cybersecurity in accordance with the generally acknowledged state of the art. The level of accuracy and accuracy metrics should be communicated in an intelligible manner to the deployers and users.
Amendment 229 #
Proposal for a regulation
Recital 68
(68) Under certain conditions, rapid availability of innovative technologies may be crucial for health and safety of persons and for society as a whole. It is thus appropriate that under exceptional and ethically justified reasons of public security or protection of life and health of natural persons and the protection of industrial and commercial property, Member States could authorise the placing on the market or putting into service of AI systems which have not undergone a conformity assessment.
Amendment 237 #
Proposal for a regulation
Recital 71
(71) Artificial intelligence is a rapidly developing family of technologies that requires novel forms of regulatory oversight and a safe space for experimentation, while ensuring responsible innovation and integration of appropriate and ethically justified safeguards and risk mitigation measures. To ensure a legal framework that is innovation-friendly, future-proof and resilient to disruption, national competent authorities from one or more Member States should be encouraged to establish artificial intelligence regulatory sandboxes to facilitate the development and testing of innovative AI systems under strict regulatory oversight before these systems are placed on the market or otherwise put into service.
Amendment 242 #
Proposal for a regulation
Recital 72
(72) The objectives of the regulatory sandboxes should be to foster AI innovation by establishing a controlled experimentation and testing environment in the development and pre-marketing phase with a view to ensuring compliance of the innovative AI systems with this Regulation and other relevant Union and Member States legislation; to enhance legal certainty for innovators and the competent authorities’ oversight and understanding of the opportunities, emerging risks and the impacts of AI use, and to accelerate access to markets, including by removing barriers for small and medium enterprises (SMEs) and start-ups; to contribute to the development of ethical, socially responsible and environmentally sustainable AI systems, in line with the ethical principles outlined in this Regulation. To ensure uniform implementation across the Union and economies of scale, it is appropriate to establish common rules for the regulatory sandboxes’ implementation and a framework for cooperation between the relevant authorities involved in the supervision of the sandboxes. This Regulation should provide the legal basis for the use of personal data collected for other purposes for developing certain AI systems in the public interest within the AI regulatory sandbox, in line with Article 6(4) of Regulation (EU) 2016/679, and Article 6 of Regulation (EU) 2018/1725, and without prejudice to Article 4(2) of Directive (EU) 2016/680. Participants in the sandbox should ensure appropriate safeguards and cooperate with the competent authorities, including by following their guidance and acting expeditiously and in good faith to mitigate any high risks to safety and fundamental rights that may arise during the development and experimentation in the sandbox. The conduct of the participants in the sandbox should be taken into account when competent authorities decide whether to impose an administrative fine under Article 83(2) of Regulation (EU) 2016/679 and Article 57 of Directive (EU) 2016/680.
Amendment 246 #
Proposal for a regulation
Recital 73
(73) In order to promote and protect innovation, it is important that the interests of small-scale providers and users of AI systems are taken into particular account. To this objective, Member States should develop initiatives, which are targeted at those operators, including on AI literacy, awareness raising and information communication. Moreover, the specific interests and needs of small-scale providers shall be taken into account when Notified Bodies set conformity assessment fees. Translation costs related to mandatory documentation and communication with authorities may constitute a significant cost for providers and other operators, notably those of a smaller scale. Member States should possibly ensure that one of the languages determined and accepted by them for relevant providers’ documentation and for communication with operators is one which is broadly understood by the largest possible number of cross-border users.
Amendment 251 #
Proposal for a regulation
Recital 81
(81) The development of AI systems other than high-risk AI systems in accordance with the requirements of this Regulation may lead to a larger uptake of trustworthy, socially responsible and environmentally sustainable artificial intelligence in the Union. Providers of non-high-risk AI systems should be encouraged to create codes of conduct intended to foster the voluntary application of the mandatory requirements applicable to high-risk AI systems. Providers should also be encouraged to apply on a voluntary basis additional requirements related, for example, to environmental sustainability, accessibility to persons with disability, stakeholders’ participation in the design and development of AI systems, and diversity of the development teams. The Commission may develop initiatives, including of a sectorial nature, to facilitate the lowering of technical barriers hindering cross-border exchange of data for AI development, including on data access infrastructure, semantic and technical interoperability of different types of data.
Amendment 256 #
Proposal for a regulation
Article 1 – paragraph 1 – point a
(a) harmonised rules for the development, deployment and the use of artificial intelligence systems (‘AI systems’) in the Union;
Amendment 259 #
Proposal for a regulation
Article 2 – paragraph 1 – point a
(a) ‘developer’ placing on the market or putting into service AI systems in the Union, irrespective of whether those developers are established within the Union or in a third country, or that adapts a general purpose AI system to a specific purpose and use;
Amendment 284 #
Proposal for a regulation
Article 3 – paragraph 1 – point 4 a (new)
(4a) ‘end-user’ means any natural person who, in the context of employment or contractual agreement with the user, uses or deploys the AI system under the authority of the user;
Amendment 286 #
Proposal for a regulation
Article 3 – paragraph 1 – point 8
(8) ‘operator’ means the developer, the deployer, the user, the authorised representative, the importer and the distributor;
Amendment 287 #
Proposal for a regulation
Article 3 – paragraph 1 – point 8 a (new)
(8a) ‘deployer’ means any natural or legal person, public authority, agency or other body putting into service an AI system developed by another entity without substantial modification, or using an AI system under its authority;
Amendment 291 #
Proposal for a regulation
Article 3 – paragraph 1 – point 14 a (new)
(14a) ‘information security component of a product or system’ means a component of a product or of a system which has been specifically designed to fulfil a security function for that product or system against cyber incidents, disruptions and/or attacks;
Amendment 292 #
Proposal for a regulation
Article 3 – paragraph 1 – point 14 b (new)
(14b) ‘information security product or system’ means a product or a system which has been specifically designed to fulfil a security function against cyber incidents, disruptions and/or attacks;
Amendment 301 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – point a
(a) the death of a person or serious damage to a person’s fundamental rights, health, to property or the environment, to democracy or the democratic rule of law;
Amendment 303 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – point a a (new)
(aa) ‘AI literacy’ means the skills, knowledge and understanding regarding AI systems that are necessary for compliance with and enforcement of this Regulation;
Amendment 379 #
Proposal for a regulation
Article 8 – paragraph 1
1. High-risk AI systems shall comply with the requirements established in this Chapter, taking into account the generally acknowledged state of the art and industry standards, including as reflected in relevant harmonised standards or common specifications.
Amendment 401 #
Proposal for a regulation
Article 10 – paragraph 1 a (new)
1a. Providers of high-risk AI systems that utilise data collected and/or managed by third parties may rely on representations from those third parties with regard to quality criteria referred to in paragraph 2, points (a), (b) and (c).
Amendment 423 #
Proposal for a regulation
Article 10 – paragraph 3
3. Training, validation and testing data sets shall be, to the best extent possible, relevant, representative, free of errors and complete. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used. These characteristics of the data sets may be met at the level of individual data sets or a combination thereof.
Amendment 461 #
Proposal for a regulation
Article 15 – paragraph 3 – introductory part
3. Providers of high-risk AI systems shall take appropriate technical and organisational measures to ensure that high-risk AI systems are resilient as regards errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, in particular due to their interaction with natural persons or other systems, consistent with industry best practices.
Amendment 496 #
Proposal for a regulation
Article 29 a (new)
Article 29 a
Jurisdiction and territoriality
Providers as defined in point 2 of Article 3 and within the meaning of Article 28, paragraph 1, shall be deemed to be under the jurisdiction of the Member State in which they have their main establishment in the Union.
Amendment 501 #
Proposal for a regulation
Article 38 – paragraph 2 a (new)
2a. Where a competent authority of a Member State requires obtaining an EU declaration of conformity of a provider which has its main establishment in another Member State, that request shall be made through the competent authority of the Member State where the provider has its main establishment. The information shall be transmitted by the provider in an official language of the Member State where it has its main establishment. The Commission is empowered to adopt delegated acts in accordance with this paragraph to further define the modalities for issuing and handling such requests.
Amendment 519 #
Proposal for a regulation
Article 48 – paragraph 1
1. The provider shall draw up a written EU declaration of conformity for each AI system and keep it at the disposal of the national competent authorities for 10 years after the AI system has been placed on the market or put into service. The EU declaration of conformity shall identify the AI system for which it has been drawn up. A copy of the EU declaration of conformity shall be given to the relevant national competent authorities only in the Member State of main establishment of the provider, upon the competent authority’s request.
Amendment 554 #
Proposal for a regulation
Article 53 – paragraph 1
1. AI regulatory sandboxes established by one or more Member States competent authorities or the European Data Protection Supervisor, and in collaboration with SMEs, start-ups, enterprises and other innovators, shall provide a controlled environment that facilitates the development, testing and validation of innovative AI systems for a limited time before their placement on the market or putting into service pursuant to a specific plan. For Member States competent authorities or the European Data Protection Supervisor, this shall take place under the direct supervision and guidance by the competent authorities with a view to ensuring compliance with the requirements of this Regulation and, where relevant, other Union and Member States legislation supervised within the sandbox. SMEs, start-ups, enterprises and other innovators shall conduct live experiments for new business models in collaboration with the Member State competent authorities.
Amendment 558 #
Proposal for a regulation
Article 53 – paragraph 2
2. Member States shall ensure that to the extent the innovative AI systems involve the processing of personal data or otherwise fall under the supervisory remit of other national authorities or competent authorities providing or supporting access to data, the national data protection authorities and those other national authorities are associated to the operation of the AI regulatory sandbox established by one or more Member States competent authorities or the European Data Protection Supervisor. Start-ups, SMEs, enterprises and other innovators may request access to personal data from relevant national authorities to be used in their AI sandbox while ensuring compliance with the requirements of this Regulation and, where relevant, other Union and Member States legislation supervised within the sandbox.
Amendment 563 #
Proposal for a regulation
Article 53 – paragraph 3
3. The AI regulatory sandboxes shall not affect the supervisory and corrective powers of the competent authorities. Any significant risks to health and safety and fundamental rights identified during the development and testing of such AI systems shall result in immediate mitigation and, failing that, in the suspension of the development and testing process until such mitigation takes place.
Amendment 564 #
Proposal for a regulation
Article 53 – paragraph 5
5. Member States’ competent authorities that have established AI regulatory sandboxes shall coordinate their activities and cooperate within the framework of the European Artificial Intelligence Board. They shall submit annual reports to the Board and the Commission on the results from the implementation of those schemes, including good practices, lessons learnt and recommendations on their setup and, where relevant, on the application of this Regulation and other Union legislation supervised within the sandbox. SMEs, start-ups, enterprises and other innovators are invited to share their good practices, lessons learnt and recommendations on their AI sandboxes with Member State competent authorities.
Amendment 571 #
Proposal for a regulation
Article 55 – paragraph 1 – point a
(a) provide SME providers, including start-ups, with priority access to the AI regulatory sandboxes established by one or more Member States competent authorities or the European Data Protection Supervisor to the extent that they fulfil the eligibility conditions;
Amendment 590 #
Proposal for a regulation
Article 56 – paragraph 2 – point a
(a) promote and support effective cooperation of the national supervisory authorities and the Commission with regard to matters covered by this Regulation;
Amendment 591 #
Proposal for a regulation
Article 56 – paragraph 2 – point c a (new)
(ca) assist developers, deployers and users of AI systems, in particular SMEs and start-ups, to meet the requirements of this Regulation, including those set out in present and future Union legislation.