
59 Amendments of Marcos ROS SEMPERE related to 2021/0106(COD)

Amendment 56 #
Proposal for a regulation
Recital 1
(1) The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework based on ethical principles in particular for the development, deployment and use of artificial intelligence in conformity with Union values. Therefore, this Regulation pursues a number of overriding reasons of public interest, such as a high level of protection of health, safety, environment and fundamental rights and values including democracy and the rule of law, and it ensures the free movement of AI-based goods and services cross-border, thus preventing Member States from imposing restrictions on the development, deployment and use of AI systems, unless explicitly authorised by this Regulation.
2022/04/01
Committee: CULT
Amendment 58 #
Proposal for a regulation
Recital 2
(2) Artificial intelligence systems (AI systems) can be easily deployed in multiple sectors of the economy and society, including cross border, and circulate throughout the Union. Certain Member States have already explored the adoption of national rules to ensure that artificial intelligence is trustworthy and safe and is developed and used in compliance with fundamental rights obligations. Differing national rules may lead to fragmentation of the internal market and decrease legal certainty for operators that develop or use AI systems. A consistent and high level of protection throughout the Union should therefore be ensured in order to achieve trustworthy AI, while divergences hampering the free circulation of AI systems and related products and services within the internal market should be prevented, by laying down uniform obligations for developers, deployers and users and guaranteeing the uniform protection of overriding reasons of public interest and of rights of persons throughout the internal market based on Article 114 of the Treaty on the Functioning of the European Union (TFEU). To the extent that this Regulation contains specific rules on the protection of individuals with regard to the processing of personal data concerning restrictions of the use of AI systems for ‘real-time’ remote biometric identification in publicly accessible spaces for the purpose of law enforcement, it is appropriate to base this Regulation, in as far as those specific rules are concerned, on Article 16 of the TFEU. In light of those specific rules and the recourse to Article 16 TFEU, it is appropriate to consult the European Data Protection Board.
2022/04/01
Committee: CULT
Amendment 62 #
Proposal for a regulation
Recital 3
(3) Artificial intelligence is a fast evolving family of technologies that can contribute to a wide array of economic and societal benefits across the entire spectrum of industries and social activities if developed in accordance with ethical principles. By improving prediction, optimising operations and resource allocation, and personalising digital solutions available for individuals and organisations, the use of artificial intelligence can provide key competitive advantages to companies and support socially and environmentally beneficial outcomes, for example in healthcare, farming, education and training, culture, infrastructure, management, energy, transport and logistics, public services, security, justice, resource and energy efficiency, and climate change mitigation and adaptation.
2022/04/01
Committee: CULT
Amendment 75 #
Proposal for a regulation
Recital 6
(6) The notion of AI system should be clearly defined to ensure legal certainty, while providing the flexibility to accommodate future technological developments. The definition should be based on the key functional characteristics of the software, in particular the ability, for a given set of human-defined objectives, to generate outputs such as content, predictions, recommendations, or decisions which influence the environment with which the system interacts, be it in a physical or digital dimension. AI systems can be designed to operate with varying levels of autonomy and be used on a stand-alone basis or as a component of a product, irrespective of whether the system is physically integrated into the product (embedded) or serves the functionality of the product without being integrated therein (non-embedded). The definition of AI system should be complemented by a list of specific techniques and approaches used for its development, which should be kept up-to-date in the light of market and technological developments through the adoption of delegated acts by the Commission to amend that list.
2022/04/01
Committee: CULT
Amendment 84 #
Proposal for a regulation
Recital 13
(13) In order to ensure a consistent and high level of protection of public interests as regards health, safety, the environment and fundamental rights, and values such as democracy and the rule of law, a set of ethical principles and common normative standards for all high-risk AI systems should be established. Those principles and standards should be consistent with the Charter of fundamental rights of the European Union (the Charter), the European Green Deal (The Green Deal) and the Joint Declaration on Digital Rights of the Union (the Declaration) and should be non-discriminatory and in line with the Union’s international trade commitments.
2022/04/01
Committee: CULT
Amendment 86 #
Proposal for a regulation
Recital 14 a (new)
(14 a) Without prejudice to tailoring rules to the intensity and scope of the risks that AI systems can generate, or to the specific requirements laid down for high-risk AI systems, all AI systems developed, deployed or used in the Union should respect not only Union and national law but also a specific set of ethical principles that are aligned with the values enshrined in Union law and that are, in part, concretely reflected in the specific requirements to be complied with by high-risk AI systems. That set of principles should, inter alia, also be reflected in codes of conduct that should be mandatory for the development, deployment and use of all AI systems. Accordingly, any research carried out with the purpose of attaining AI-based solutions that strengthen the respect for those principles, in particular those of social responsibility and environmental sustainability, should be encouraged by the Commission and the Member States.
2022/04/01
Committee: CULT
Amendment 87 #
Proposal for a regulation
Recital 14 b (new)
(14 b) ‘AI literacy’ refers to the skills, knowledge and understanding that allow both citizens and operators, in the context of the obligations set out in this Regulation, to make an informed deployment and use of AI systems, as well as to gain awareness about the opportunities and risks of AI and thereby promote its democratic control. AI literacy should not be limited to learning about tools and technologies, but should also aim to equip citizens more generally, and operators in the context of the obligations set out in this Regulation, with the critical thinking skills required to identify harmful or manipulative uses as well as to improve their agency and their ability to fully comply with and benefit from trustworthy AI. It is therefore necessary that the Commission, the Member States as well as operators of AI systems, in cooperation with all relevant stakeholders, promote the development of AI literacy in all sectors of society, for citizens of all ages, including women and girls, and that progress in that regard is closely followed.
2022/04/01
Committee: CULT
Amendment 89 #
Proposal for a regulation
Recital 15
(15) Aside from the many beneficial uses of artificial intelligence, that technology can also be misused and provide novel and powerful tools for manipulative, exploitative and social control practices. Such practices are particularly harmful and should be prohibited because they contradict Union values of respect for human dignity, freedom, equality, democracy and the rule of law and Union fundamental rights, including the right to non-discrimination, data protection and privacy, gender equality and the rights of the child.
2022/04/01
Committee: CULT
Amendment 90 #
Proposal for a regulation
Recital 16
(16) The development, deployment or use of certain AI systems used to distort human behaviour, whereby physical or psychological harms are likely to occur, should be forbidden. Such AI systems deploy subliminal components individuals cannot perceive or exploit vulnerabilities of children and people due to their age, physical or mental incapacities. They do so by materially distorting the behaviour of a person and in a manner that causes or is likely to cause harm to that or another person. The intention may not be presumed if the distortion of human behaviour results from factors external to the AI system which are outside of the control of the provider or the user. Research for legitimate purposes in relation to such AI systems should not be stifled by the prohibition, if such research does not amount to use of the AI system in human-machine relations that exposes natural persons to harm and such research is carried out in accordance with recognised ethical standards for scientific research.
2022/04/01
Committee: CULT
Amendment 106 #
Proposal for a regulation
Recital 28
(28) AI systems could produce adverse outcomes to health and safety of persons, in particular when such systems operate as components of products. Consistently with the objectives of Union harmonisation legislation to facilitate the free movement of products in the internal market and to ensure that only safe and otherwise compliant products find their way into the market, it is important that the safety risks that may be generated by a product as a whole due to its digital components, including AI systems, are duly prevented and mitigated. For instance, increasingly autonomous robots, whether in the context of manufacturing or personal assistance and care, should be able to safely operate and perform their functions in complex environments. Similarly, in the health sector where the stakes for life and health are particularly high, increasingly sophisticated diagnostics systems and systems supporting human decisions should be reliable and accurate. The extent of the adverse impact caused by the AI system on the fundamental rights protected by the Charter is of particular relevance when classifying an AI system as high-risk. Those rights include the right to human dignity, respect for private and family life, protection of personal data, freedom of expression and information, freedom of assembly and of association, non-discrimination, the right to education, consumer protection and workers’ rights. Special attention should be paid to gender equality, rights of persons with disabilities, right to an effective remedy and to a fair trial, right of defence and the presumption of innocence, right to good administration, protection of intellectual property rights and ensuring cultural diversity. In addition to those rights, it is important to highlight that children have specific rights as enshrined in Article 24 of the EU Charter and in the United Nations Convention on the Rights of the Child (further elaborated in the UNCRC General Comment No. 25 as regards the digital environment), both of which require consideration of the children’s vulnerabilities and provision of such protection and care as necessary for their well-being. The fundamental right to a high level of environmental protection enshrined in the Charter and implemented in Union policies should also be considered when assessing the severity of the harm that an AI system can cause, including in relation to the health and safety of persons or to the environment, due to the extraction and consumption of natural resources, waste and the carbon footprint.
2022/04/01
Committee: CULT
Amendment 107 #
Proposal for a regulation
Recital 32
(32) As regards stand-alone AI systems, meaning high-risk AI systems other than those that are safety components of products, or which are themselves products, it is appropriate to classify them as high-risk if, in the light of their intended purpose, they pose a high risk of harm to the health and safety or the fundamental rights of persons, taking into account both the severity of the possible harm and its probability of occurrence, and they are used in a number of specifically pre-defined areas specified in the Regulation. The identification of those systems is based on the same methodology and criteria envisaged also for any future amendments of the list of high-risk AI systems.
2022/04/01
Committee: CULT
Amendment 115 #
Proposal for a regulation
Recital 35
(35) AI systems used in education or vocational training, notably for determining access or assigning persons to educational and vocational training institutions or to evaluate persons on tests as part of or as a precondition for their education, should be considered high-risk, since they may determine the educational and professional course of a person’s life and therefore affect their ability to secure their livelihood. When improperly designed, developed and used, such systems may violate the right to education and training as well as the rights to gender equality and not to be discriminated against, and may perpetuate historical patterns of discrimination. Finally, education is also a social learning process; therefore, the use of artificial intelligence systems must not replace the fundamental role of teachers in education.
2022/04/01
Committee: CULT
Amendment 118 #
Proposal for a regulation
Recital 36
(36) AI systems used in employment, workers management and access to self-employment, notably for the recruitment and selection of persons, for making decisions on promotion and termination and for task allocation, monitoring or evaluation of persons in work-related contractual relationships, should also be classified as high-risk, since those systems may appreciably impact the health, safety and security rules applicable in their work and at their workplaces, as well as the future career prospects and livelihoods of these persons. Relevant work-related contractual relationships should involve employees and persons providing services through platforms as referred to in the Commission Work Programme 2021. Such persons should in principle not be considered users within the meaning of this Regulation. Throughout the recruitment process and in the evaluation, promotion, or retention of persons in work-related contractual relationships, such systems may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. AI systems used to monitor the performance and behaviour of these persons may also impact their rights to data protection and privacy. In this regard, specific requirements on transparency, information and human oversight should apply. Trade unions and workers’ representatives should be informed and they should have access to any documentation created under this Regulation for any AI system deployed or used in their work or at their workplace.
2022/04/01
Committee: CULT
Amendment 129 #
Proposal for a regulation
Recital 70
(70) Certain AI systems used to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems. In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. Moreover, natural persons should be notified when they are exposed to an emotion recognition system or a biometric categorisation system. Such information and notifications, which should include a disclaimer, should be provided in accessible formats for children, the elderly, migrants and persons with disabilities. Further, users who use an AI system to generate or manipulate image, audio, text, scripts or video content that appreciably resembles existing persons, places, text, scripts or events and would falsely appear to a person to be authentic, should appropriately disclose that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin, namely the name of the person or entity that created it. AI systems used to recommend, disseminate and order news or cultural and creative content displayed to users should include an explanation of the parameters used for the moderation of content and personalised suggestions, which should be easily accessible and understandable to the users.
2022/04/01
Committee: CULT
Amendment 132 #
Proposal for a regulation
Recital 73
(73) In order to promote and protect innovation, it is important that the interests of small-scale providers and users of AI systems are taken into particular account. To this objective, Member States should develop initiatives, which are targeted at those operators, including on AI literacy, awareness raising and information communication. Moreover, the specific interests and needs of small-scale providers shall be taken into account when Notified Bodies set conformity assessment fees. Translation costs related to mandatory documentation and communication with authorities may constitute a significant cost for providers and other operators, notably those of a smaller scale. Member States should possibly ensure that one of the languages determined and accepted by them for relevant providers’ documentation and for communication with operators is one which is broadly understood by the largest possible number of cross-border users.
2022/04/01
Committee: CULT
Amendment 134 #
Proposal for a regulation
Recital 76
(76) In order to facilitate a smooth, effective and harmonised implementation of this and other Regulations, a European Agency for Data and Artificial Intelligence should be established. The Agency should be responsible for a number of advisory tasks, including issuing opinions, recommendations, advice or guidance on matters related to the implementation of this Regulation and other present or future legislation, including on technical specifications or existing standards regarding the requirements established in this Regulation, and providing advice to and assisting the Commission on specific questions related to artificial intelligence. The Agency should establish a Permanent Stakeholders' Group composed of experts representing the relevant stakeholders, such as representatives of developers, deployers and users of AI systems, including SMEs and start-ups, consumer groups, trade unions, fundamental rights organisations and academic experts, and it should communicate its activities to citizens as appropriate.
2022/04/01
Committee: CULT
Amendment 136 #
Proposal for a regulation
Recital 79
(79) In order to ensure an appropriate and effective enforcement of the requirements and obligations set out by this Regulation, which is Union harmonisation legislation, the system of market surveillance and compliance of products established by Regulation (EU) 2019/1020 should apply in its entirety. Where necessary for their mandate, national public authorities or bodies, which supervise the application of Union law protecting fundamental rights, including equality bodies, should also have access to any documentation created under this Regulation. Where appropriate, national authorities or bodies, which supervise the application of Union law or national law compatible with Union law establishing rules regulating the health, safety, security and environment at work, should also have access to any documentation created under this Regulation.
2022/04/01
Committee: CULT
Amendment 137 #
Proposal for a regulation
Recital 81
(81) The development of AI systems other than high-risk AI systems in accordance with the requirements of this Regulation may lead to a larger uptake of trustworthy, socially responsible and environmentally sustainable artificial intelligence in the Union. Providers of non-high-risk AI systems should be encouraged to create codes of conduct intended to foster the voluntary application of the mandatory requirements applicable to high-risk AI systems. Developers and deployers of all AI systems should also draw up codes of conduct in order to ensure and demonstrate compliance with the ethical principles underpinning trustworthy AI. The Commission and the European Agency for Data and Artificial Intelligence may develop initiatives, including of a sectorial nature, to facilitate the lowering of technical barriers hindering cross-border exchange of data for AI development, including on data access infrastructure, semantic and technical interoperability of different types of data.
2022/04/01
Committee: CULT
Amendment 145 #
Proposal for a regulation
Article 1 – paragraph 1 – point a
(a) harmonised rules for the development, deployment and the use of artificial intelligence systems (‘AI systems’) in the Union;
2022/04/01
Committee: CULT
Amendment 146 #
Proposal for a regulation
Article 1 – paragraph 1 – point d
(d) harmonised transparency rules for AI systems intended to interact with natural persons, emotion recognition systems and biometric categorisation systems, and AI systems used to generate or manipulate image, audio or video content;
2022/04/01
Committee: CULT
Amendment 151 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1
(1) ‘artificial intelligence system’ (AI system) means software that can, in an automated manner, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with;
2022/04/01
Committee: CULT
Amendment 152 #
Proposal for a regulation
Article 3 – paragraph 1 – point 2
(2) ‘developer’ means a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge, or that adapts a general purpose AI system to a specific purpose and use;
2022/04/01
Committee: CULT
Amendment 153 #
Proposal for a regulation
Article 3 – paragraph 1 – point 2 a (new)
(2 a) ‘deployer’ means any natural or legal person, public authority, agency or other body putting into service an AI system developed by another entity without substantial modification, or using an AI system under its authority;
2022/04/01
Committee: CULT
Amendment 155 #
Proposal for a regulation
Article 3 – paragraph 1 – point 4
(4) ‘user’ means any natural or legal person, public authority, agency or other body using an AI system under the authority of a deployer, except where the AI system is used in the course of a personal non-professional activity;
2022/04/01
Committee: CULT
Amendment 157 #
Proposal for a regulation
Article 3 – paragraph 1 – point 8
(8) ‘operator’ means the developer, the deployer, the user, the authorised representative, the importer and the distributor;
2022/04/01
Committee: CULT
Amendment 161 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – point a
(a) the death of a person or serious damage to a person’s fundamental rights or health, to property or the environment, or to democracy or the democratic rule of law,
2022/04/01
Committee: CULT
Amendment 165 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 a (new)
(44 a) 'AI literacy' means the skills, knowledge and understanding regarding AI systems;
2022/04/01
Committee: CULT
Amendment 166 #
Proposal for a regulation
Article 4
Amendments to Annex I
The Commission is empowered to adopt delegated acts in accordance with Article 73 to amend the list of techniques and approaches listed in Annex I, in order to update that list to market and technological developments on the basis of characteristics that are similar to the techniques and approaches listed therein.
Article 4 deleted
2022/04/01
Committee: CULT
Amendment 168 #
Proposal for a regulation
Article 4 a (new)
Article 4 a
Trustworthy AI
1. All AI systems in the Union shall be developed, deployed and used in full respect of the EU Charter of Fundamental Rights.
2. In view of promoting trustworthy AI in the Union, and without prejudice to the requirements set out in Title III for high-risk AI systems, all AI systems shall be developed, deployed and used:
(a) in a lawful, fair and transparent manner (‘lawfulness, fairness and transparency’);
(b) in a manner that ensures that natural persons shall always be able to make informed decisions regarding such systems and these shall never undermine or override human autonomy (‘human agency and oversight’);
(c) in a manner that ensures their safe, accurate and reliable performance, with embedded safeguards to prevent any kind of individual or collective harm (‘safety, accuracy, reliability and robustness’);
(d) in a manner that guarantees privacy and data protection (‘privacy’);
(e) in a manner that privileges the integrity and quality of data, including with regard to access (‘data governance’);
(f) in a traceable, auditable and explainable manner that ensures responsibility and accountability for their outcomes and supports redress (‘traceability, auditability, explainability and accountability’);
(g) in a manner that does not discriminate against persons or groups of persons on the basis of unfair bias and that includes, to that end, the participation and input of relevant stakeholders (‘non-discrimination and diversity’);
(h) in an environmentally sustainable manner that minimises their environmental footprint, including with regard to the extraction and consumption of natural resources (‘environmental sustainability’);
(i) in a socially responsible manner that minimises their negative societal impact, especially with regard to social and gender inequalities and democratic processes (‘social responsibility’).
3. In view of promoting trustworthy AI in the Union, any person or groups of persons affected by the use of an AI system shall have the right to an explanation in accordance with new Article 71, as well as the right to object to an automated decision made solely by an AI system, or relying to a significant degree on the output of an AI system, which produces legal or similarly significant effects concerning them. These rights are without prejudice to Article 22 of Regulation (EU) 2016/679.
4. The ethical principles underpinning trustworthy AI as described in paragraph 2 shall be taken into account by European Standardisation Organisations as outcome-based objectives when they develop harmonised standards for AI systems as referred to in Article 40(2b) and by the European Commission when developing common specifications as referred to in Article 41.
5. Developers and deployers shall specify in the mandatory codes of conduct referred to in Article 69 how these principles are taken into account in the course of their activities. For AI systems other than high-risk, developers and deployers should outline any concrete measures implemented to ensure respect for those principles. This obligation is without prejudice to the voluntary application to AI systems other than high-risk of the requirements set out in Title III.
6. In order to demonstrate compliance with this Article, developers and deployers shall, in addition to the obligations set out in paragraph 5 and after drafting their codes of conduct, complete a trustworthy AI technology assessment. For high-risk AI systems, this assessment shall be part of the requirements under Articles 16(a) and 29(4).
2022/04/01
Committee: CULT
Amendment 169 #
Proposal for a regulation
Article 4 b (new)
Article 4 b
AI literacy
1. When implementing this Regulation, the Union and the Member States shall promote measures and tools for the development of a sufficient level of AI literacy across sectors and groups of operators concerned, including through education and training, skilling and reskilling programmes, while ensuring a proper gender and age balance, in view of allowing democratic control of AI systems.
2. Developers and deployers of AI systems shall promote tools and take measures to ensure a sufficient level of AI literacy of their staff and any other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the environment the AI systems are to be used in, and considering the persons or groups of persons on which the AI systems are to be used.
3. Such literacy tools and measures shall consist, in particular, of the teaching and learning of basic notions and skills about AI systems and their functioning, including the different types of products and uses, their risks and benefits and the severity of the possible harm they can cause and its probability of occurrence.
4. A sufficient level of AI literacy is one that contributes to the ability of operators to fully comply with and benefit from trustworthy AI, and in particular with the requirements laid down in Articles 13, 14, 29, 52 and 69 of this Regulation.
2022/04/01
Committee: CULT
Amendment 201 #
Proposal for a regulation
Article 6 – paragraph 2
2. In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall also be considered high-risk due to the risk they pose of causing harm to health, safety, the environment, fundamental rights or to democracy and the rule of law.
2022/04/01
Committee: CULT
Amendment 202 #
Proposal for a regulation
Article 7 – paragraph 1 – introductory part
1. The Commission is empowered to adopt delegated acts in accordance with Article 73, after ensuring adequate consultation with relevant stakeholders and the European Agency for Data and AI, to update the list in Annex III by adding high-risk AI systems where both of the following conditions are fulfilled:
2022/04/01
Committee: CULT
Amendment 203 #
Proposal for a regulation
Article 7 – paragraph 1 – point a
(a) the AI systems are intended to be used in any of the areas listed in points 1 to 8 of Annex III;
2022/04/01
Committee: CULT
Amendment 204 #
Proposal for a regulation
Article 7 – paragraph 1 – point b
(b) the AI systems pose a risk of harm to the environment, health and safety, or a risk of adverse impact on fundamental rights, democracy and rule of law that is, in respect of its severity and probability of occurrence, equivalent to or greater than the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III.
2022/04/01
Committee: CULT
Amendment 206 #
Proposal for a regulation
Article 7 – paragraph 2 – introductory part
2. When assessing for the purposes of paragraph 1 whether an AI system poses a risk of harm to the environment, health and safety or a risk of adverse impact on fundamental rights, democracy and the rule of law, that is equivalent to or greater than the risk of harm posed by the high- risk AI systems already referred to in Annex III, the Commission shall take into account the following criteria:
2022/04/01
Committee: CULT
Amendment 214 #
Proposal for a regulation
Article 9 – paragraph 4 – subparagraph 1 – point c a (new)
(c a) provision of a sufficient level of AI literacy;
2022/04/01
Committee: CULT
Amendment 216 #
Proposal for a regulation
Article 9 – paragraph 8
8. When implementing the risk management system described in paragraphs 1 to 7, specific consideration shall be given to whether the high-risk AI system is likely to be accessed by or have an impact on children, the elderly, migrants or other vulnerable groups.
2022/04/01
Committee: CULT
Amendment 221 #
Proposal for a regulation
Article 10 – paragraph 2 – point g a (new)
(g a) the purpose and the environment in which the system is to be used;
2022/04/01
Committee: CULT
Amendment 224 #
Proposal for a regulation
Article 13 – paragraph 1
1. High-risk AI systems shall be designed and developed in such a way to ensure that their operation is sufficiently transparent to enable developers, deployers, users and other relevant stakeholders to easily interpret the system’s functioning and output and use it appropriately. An appropriate type and degree of transparency shall be ensured on the basis of informed decisions, with a view to achieving compliance with the relevant obligations of the user and of the provider set out in Chapter 3 of this Title.
2022/04/01
Committee: CULT
Amendment 225 #
Proposal for a regulation
Article 13 – paragraph 3 a (new)
3 a. In order to comply with the obligations established in this Article, developers and deployers shall ensure a sufficient level of AI literacy in line with New Article 4b.
2022/04/01
Committee: CULT
Amendment 228 #
Proposal for a regulation
Article 14 – paragraph 5 a (new)
5 a. In order to comply with the obligations established in this Article, developers and deployers shall ensure a sufficient level of AI literacy in line with new Article 4b.
2022/04/01
Committee: CULT
Amendment 232 #
Proposal for a regulation
Article 29 – paragraph 1 a (new)
1 a. In order to comply with the obligations established in this Article, as well as to be able to justify their possible non-compliance, deployers of high-risk AI systems shall ensure a sufficient level of AI literacy in line with new Article 4b;
2022/04/01
Committee: CULT
Amendment 235 #
Proposal for a regulation
Article 52 – paragraph 1
1. Developers and deployers shall ensure that AI systems used to interact with natural persons are designed and developed in such a way that natural persons are informed, in a timely, clear and intelligible manner, that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. This information shall also include, as appropriate, the functions that are AI enabled, and the rights and processes to allow natural persons to appeal against the application of such AI systems to them. This obligation shall not apply to AI systems authorised by law to detect, prevent, investigate and prosecute criminal offences, unless those systems are available for the public to report a criminal offence.
2022/04/01
Committee: CULT
Amendment 236 #
Proposal for a regulation
Article 52 – paragraph 2
2. Users of an emotion recognition system or a biometric categorisation system shall inform the natural persons exposed thereto, in a timely, clear and intelligible manner, of the operation of the system. This information shall also include, as appropriate, the rights and processes to allow natural persons to appeal against the application of such AI systems to them. This obligation shall not apply to AI systems used for biometric categorisation, which are permitted by law to detect, prevent and investigate criminal offences.
2022/04/01
Committee: CULT
Amendment 237 #
Proposal for a regulation
Article 52 – paragraph 3 – introductory part
3. Deployers and users of an AI system that generates or manipulates image, audio, text, scripts or video content that appreciably resembles existing persons, objects, places, text, scripts or other entities or events and would falsely appear to a person to be authentic or truthful (‘deep fake’), shall disclose in an appropriate, timely, clear and visible manner that the content has been artificially generated or manipulated, as well as the name of the person or entity that generated or manipulated it.
2022/04/01
Committee: CULT
Amendment 242 #
Proposal for a regulation
Article 52 – paragraph 3 – subparagraph 1
However, the first subparagraph shall not apply where the use forms part of an evidently artistic, creative or fictional cinematographic or analogous work or it is necessary for the exercise of the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU, and subject to appropriate safeguards for the rights and freedoms of third parties.
2022/04/01
Committee: CULT
Amendment 243 #
Proposal for a regulation
Article 52 – paragraph 3 – subparagraph 1 a (new)
Developers and deployers of AI systems that recommend, disseminate and order news or creative and cultural content shall disclose, in an appropriate, easily accessible, clear and visible manner, the parameters used for the moderation of content and personalised suggestions. This information shall include a disclaimer.
2022/04/01
Committee: CULT
Amendment 244 #
Proposal for a regulation
Article 52 – paragraph 3 – subparagraph 1 b (new)
The information referred to in the previous paragraphs shall be provided to the natural persons in a timely, clear and visible manner, at the latest at the time of the first interaction or exposure. Such information shall be made accessible when the exposed natural person is a person with disabilities, a child or from a vulnerable group. It shall be complemented, where possible, with intervention or flagging procedures for the exposed natural person, taking into account the generally acknowledged state of the art and relevant harmonised standards and common specifications.
2022/04/01
Committee: CULT
Amendment 245 #
Proposal for a regulation
Article 52 – paragraph 4 a (new)
4 a. In order to comply with the obligations established in this Article, a sufficient level of AI literacy shall be ensured.
2022/04/01
Committee: CULT
Amendment 253 #
Proposal for a regulation
Article 69 – paragraph 1
1. The Commission and the Member States shall support the mandatory drawing up of codes of conduct intended to demonstrate compliance with the ethical principles underpinning trustworthy AI set out in new Article 4a and to foster the voluntary application to AI systems other than high-risk AI systems of the requirements set out in Title III, Chapter 2 on the basis of technical specifications and solutions that are appropriate means of ensuring compliance with such requirements in light of the intended purpose of the systems.
2022/04/01
Committee: CULT
Amendment 254 #
Proposal for a regulation
Article 69 – paragraph 2
2. In drawing up codes of conduct intended to ensure and demonstrate compliance with the ethical principles underpinning trustworthy AI set out in Article 4a, developers and deployers shall, in particular:
(a) consider whether there is a sufficient level of AI literacy among their staff and any other persons dealing with the operation and use of AI systems in order to observe such principles;
(b) assess to what extent their AI systems may affect vulnerable persons or groups of persons, including children, the elderly, migrants and persons with disabilities, or whether any measures could be put in place in order to support such persons or groups of persons;
(c) pay attention to the way in which the use of their AI systems may have an impact on gender balance and equality;
(d) have especial regard to whether their AI systems can be used in a way that, directly or indirectly, may residually or significantly reinforce existing biases or inequalities;
(e) reflect on the need and relevance of having in place diverse development teams in view of securing an inclusive design of their systems;
(f) give careful consideration to whether their systems can have a negative societal impact, notably concerning political institutions and democratic processes;
(g) evaluate the extent to which the operation of their AI systems would allow them to fully comply with the obligation to provide an explanation laid down in new Article 71 of this Regulation;
(h) take stock of the Union’s commitments under the European Green Deal and the European Declaration on Digital Rights and Principles;
(i) state their commitment to privileging, where reasonable and feasible, the common specifications to be drafted by the Commission pursuant to Article 41 rather than their own individual technical solutions.
2022/04/01
Committee: CULT
Amendment 255 #
Proposal for a regulation
Article 69 – paragraph 3
3. Codes of conduct may be drawn up by individual providers of AI systems or by organisations representing them or by both, including with the involvement of users and any interested stakeholders and their representative organisations, including in particular trade unions and consumer organisations. Codes of conduct may cover one or more AI systems taking into account the similarity of the intended purpose of the relevant systems.
2022/04/01
Committee: CULT
Amendment 256 #
Proposal for a regulation
Article 69 – paragraph 3 a (new)
3 a. Developers and deployers shall designate at least one natural person that is responsible for the internal monitoring of the drawing up of their code of conduct and for verifying compliance with that code of conduct in the course of their activities. That person shall serve as a contact point for users, stakeholders, national competent authorities, the Commission and the European Agency for Data and AI on all matters concerning the code of conduct.
2022/04/01
Committee: CULT
Amendment 257 #
Proposal for a regulation
Article 69 – paragraph 3 b (new)
3 b. In order to comply with the obligations established in this Article, developers and deployers shall ensure a sufficient level of AI literacy in line with new Article 4b.
2022/04/01
Committee: CULT
Amendment 260 #
Proposal for a regulation
Annex I
ARTIFICIAL INTELLIGENCE TECHNIQUES AND APPROACHES referred to in Article 3, point 1
(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
(b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;
(c) Statistical approaches, Bayesian estimation, search and optimization methods.
deleted
2022/04/01
Committee: CULT
Amendment 262 #
Proposal for a regulation
Annex III – paragraph 1 – point 2 – point a
(a) AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating, telecommunications, and electricity.
2022/04/01
Committee: CULT
Amendment 264 #
Proposal for a regulation
Annex III – paragraph 1 – point 3 – point a
(a) AI systems intended to be used for the purpose of determining access or assigning natural persons to educational and vocational training institutions or of determining the study program or areas of study to be followed by students;
2022/04/01
Committee: CULT
Amendment 266 #
Proposal for a regulation
Annex III – paragraph 1 – point 3 a (new)
3 a. AI systems intended to be used for monitoring and detecting prohibited behaviour of students during tests at education and training institutions;
2022/04/01
Committee: CULT
Amendment 270 #
Proposal for a regulation
Annex III – paragraph 1 – point 4 – point b
(b) AI intended to be used for making decisions on establishment, promotion and termination of work-related contractual relationships, for task allocation and for monitoring and evaluating performance and behaviour of persons in such relationships.
2022/04/01
Committee: CULT