
Activities of Miapetra KUMPULA-NATRI related to 2021/0106(COD)

Plenary speeches (1)

Artificial Intelligence Act (debate)
2023/06/13
Dossiers: 2021/0106(COD)

Shadow opinions (1)

OPINION on the proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts
2022/06/14
Committee: ITRE
Dossiers: 2021/0106(COD)
Documents: PDF(272 KB) DOC(201 KB)
Authors: Eva MAYDELL (MEP ID 98341)

Amendments (113)

Amendment 127 #
Proposal for a regulation
Recital 1
(1) The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework based on ethical principles in particular for the design, development, deployment, marketing and use of artificial intelligence in conformity with Union values. This Regulation pursues a number of overriding reasons of public interest, such as a high level of protection of health, safety, environment and fundamental rights, and it ensures the free movement of AI-based goods and services cross-border, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation.
2022/03/31
Committee: ITRE
Amendment 131 #
Proposal for a regulation
Recital 1 a (new)
(1a) In line with Article 114(2) TFEU, this Regulation does not in any way affect the rights and interests of employed persons. This Regulation is without prejudice to Community law on social policy and national labour law and practice.
2022/03/31
Committee: ITRE
Amendment 132 #
Proposal for a regulation
Recital 1 b (new)
(1b) Given the significance of AI impact assessments regarding the usage of AI applications in the workplace, the EU should consider a corresponding directive with specific provisions for an impact assessment to ensure the protection of the rights and freedoms of workers affected by AI systems through collective agreements or national legislation.
2022/03/31
Committee: ITRE
Amendment 133 #
Proposal for a regulation
Recital 2
(2) Artificial intelligence systems (AI systems) can be easily deployed in multiple sectors of the economy and society, including cross border, and circulate throughout the Union. Certain Member States have already explored the adoption of national rules to ensure that artificial intelligence is trustworthy and safe and is developed and used in compliance with fundamental rights obligations. Differing national rules may lead to fragmentation of the internal market and decrease legal certainty for operators that develop or use AI systems. A consistent and high level of protection throughout the Union should therefore be ensured in order to achieve trustworthy AI, while divergences hampering the free circulation of AI systems and related products and services within the internal market should be prevented, by laying down uniform obligations for operators and guaranteeing the uniform protection of overriding reasons of public interest and of rights of persons throughout the internal market based on Article 114 of the Treaty on the Functioning of the European Union (TFEU). To the extent that this Regulation contains specific rules on the protection of individuals with regard to the processing of personal data concerning restrictions of the use of AI systems for ‘real-time’ remote biometric identification in publicly accessible spaces for the purpose of law enforcement, it is appropriate to base this Regulation, in as far as those specific rules are concerned, on Article 16 of the TFEU. In light of those specific rules and the recourse to Article 16 TFEU, it is appropriate to consult the European Data Protection Board.
2022/03/31
Committee: ITRE
Amendment 139 #
Proposal for a regulation
Recital 3 a (new)
(3a) Furthermore, in order for the Member States to reach their climate targets, European companies should seek to achieve a ‘large handprint but small footprint’ of artificial intelligence for the environment. To facilitate investments in AI-based analysis and optimisation solutions that can help to achieve the climate goals, this Regulation should provide a predictable and proportionate environment for low-risk industrial solutions. To ensure coherence, this requires that AI systems themselves be designed sustainably to reduce resource usage and energy consumption, thereby limiting the damage to the environment.
2022/03/31
Committee: ITRE
Amendment 144 #
Proposal for a regulation
Recital 5 a (new)
(5a) Legislation on artificial intelligence should be accompanied by actions intended to address the main barriers hindering the digital transformation of the economy. Such measures should focus on education, upskilling and reskilling workers, fostering investment in R&I, and boosting security in the digital sphere in line with initiatives aimed at achieving the targets of the Digital Decade. Digital transformation should occur in a harmonised manner across regions, paying particular attention to less digitally developed areas of the Union.
2022/03/31
Committee: ITRE
Amendment 146 #
Proposal for a regulation
Recital 6
(6) The notion of AI system should be clearly defined to ensure legal certainty, while providing the flexibility to accommodate future technological developments. The definition should be based on the key functional characteristics of the software, in particular the ability, for a given set of human-defined objectives, to generate outputs such as content, predictions, recommendations, or decisions which influence the environment with which the system interacts, be it in a physical or digital dimension. This definition should be understood to exclude tools and software systems that are strictly limited to elementary arithmetic operations on datasets or descriptive data analysis. AI systems can be designed to operate with varying levels of autonomy and be used on a stand-alone basis or as a component of a product, irrespective of whether the system is physically integrated into the product (embedded) or serves the functionality of the product without being integrated therein (non-embedded). The Commission should take note of the ongoing developments on defining artificial intelligence within key international organisations such as the United Nations Educational, Scientific and Cultural Organization, the Organisation for Economic Co-operation and Development, international standardisation bodies and the Council of Europe.
2022/03/31
Committee: ITRE
Amendment 149 #
Proposal for a regulation
Recital 12 a (new)
(12a) This Regulation should not undermine research and development activity and should respect freedom of science. It is therefore necessary to ensure that this Regulation does not affect scientific research and development activity on AI systems. As regards product oriented research activity by providers, the provisions of this Regulation should apply insofar as such research leads to or entails placing an AI system on the market or putting it into service. Under all circumstances, any research and development activity should be carried out in accordance with recognised ethical standards for scientific research.
2022/03/31
Committee: ITRE
Amendment 154 #
Proposal for a regulation
Recital 13
(13) In order to ensure a consistent and high level of protection of public interests as regards health, safety and fundamental rights, common normative standards for all high-risk AI systems should be established. Those standards should be consistent with the Charter of Fundamental Rights of the European Union (the Charter), the European Green Deal (the Green Deal) and the Joint Declaration on Digital Rights of the Union (the Declaration) and should be non-discriminatory and in line with the Union’s international trade commitments.
2022/03/31
Committee: ITRE
Amendment 158 #
Proposal for a regulation
Recital 14
(14) In order to introduce a proportionate and effective set of binding rules for AI systems, a clearly defined risk-based approach should be followed. That approach should tailor the type and content of such rules to the intensity and scope of the risks that AI systems can generate. It is therefore necessary to prohibit certain artificial intelligence practices, to lay down requirements for high-risk AI systems and obligations for the relevant operators, and to lay down transparency obligations for certain AI systems. With regard to transparency and human oversight obligations, Member States should be able to adopt further national measures to complement them without changing their harmonising nature.
2022/03/31
Committee: ITRE
Amendment 161 #
Proposal for a regulation
Recital 14 a (new)
(14a) Without prejudice to tailoring rules to the intensity and scope of the risks that AI systems can generate, or to the specific requirements laid down for high-risk AI systems, all AI systems developed, deployed or used in the Union should respect not only Union and national law but also a specific set of ethical principles that are aligned with the values enshrined in Union law and that are, in part, concretely reflected in the specific requirements to be complied with by high-risk AI systems. That set of principles should, inter alia, also be reflected in codes of conduct that should be mandatory for the development, deployment and use of all AI systems. Accordingly, any research carried out with the purpose of attaining AI-based solutions that strengthen the respect for those principles, in particular those of social responsibility and environmental sustainability, should be encouraged by the Commission and the Member States.
2022/03/31
Committee: ITRE
Amendment 162 #
Proposal for a regulation
Recital 14 b (new)
(14b) ‘AI literacy’ refers to skills, knowledge and understanding that allow both citizens more generally and developers, deployers and users in the context of the obligations set out in this Regulation to make an informed deployment and use of AI systems, as well as to gain awareness about the opportunities and risks of AI and thereby promote its democratic control. AI literacy should not be limited to learning about tools and technologies, but should also aim to equip citizens more generally and developers, deployers and users in the context of the obligations set out in this Regulation with the critical thinking skills required to identify harmful or manipulative uses as well as to improve their agency and their ability to fully comply with and benefit from trustworthy AI. It is therefore necessary that the Commission, the Member States as well as developers and deployers of AI systems, in cooperation with all relevant stakeholders, promote the development of AI literacy, in all sectors of society, for citizens of all ages, including women and girls, and that progress in that regard is closely followed.
2022/03/31
Committee: ITRE
Amendment 163 #
Proposal for a regulation
Recital 15
(15) Aside from the many beneficial uses of artificial intelligence, that technology can also be misused and provide novel and powerful tools for manipulative, exploitative and social control practices. Such practices are particularly harmful and should be prohibited because they contradict Union values of respect for human dignity, freedom, equality, democracy and the rule of law and Union fundamental rights, including the right to non-discrimination, data protection and privacy, gender equality and the rights of the child.
2022/03/31
Committee: ITRE
Amendment 166 #
Proposal for a regulation
Recital 16
(16) The placing on the market, putting into service or use of certain AI systems intended to distort human behaviour, whereby economic, physical or psychological harms are likely to occur, should be forbidden. This limitation should be understood to include neuro-technologies assisted by AI systems that are used to monitor, use, or influence neural data gathered through brain-computer interfaces for pecuniary purposes. Such AI systems deploy subliminal components individuals cannot perceive or exploit vulnerabilities of children and people due to their age, physical or mental incapacities. They do so with the intention to materially distort the behaviour of a person and in a manner that causes or is likely to cause harm to that or another person. The intention may not be presumed if the distortion of human behaviour results from factors external to the AI system which are outside of the control of the provider or the user. Research for legitimate purposes in relation to such AI systems should not be stifled by the prohibition, if such research does not amount to use of the AI system in human-machine relations that exposes natural persons to harm and such research is carried out in accordance with recognised ethical standards for scientific research.
2022/03/31
Committee: ITRE
Amendment 170 #
Proposal for a regulation
Recital 16
(16) The development, deployment or use of certain AI systems used to distort human behaviour, whereby physical or psychological harms are likely to occur, should be forbidden. Such AI systems deploy subliminal components individuals cannot perceive or exploit vulnerabilities of children and people due to their age, physical or mental incapacities. They do so by materially distorting the behaviour of a person and in a manner that causes or is likely to cause harm to that or another person. The intention may not be presumed if the distortion of human behaviour results from factors external to the AI system which are outside of the control of the provider or the user. Research for legitimate purposes in relation to such AI systems should not be stifled by the prohibition, if such research does not amount to use of the AI system in human-machine relations that exposes natural persons to harm and such research is carried out in accordance with recognised ethical standards for scientific research.
2022/03/31
Committee: ITRE
Amendment 174 #
Proposal for a regulation
Recital 17 a (new)
(17a) The use of Artificial Intelligence at work can be beneficial to both the management and operations of an enterprise, supporting workers in their tasks and improving safety in the workplace. Still, Artificial Intelligence systems applied to digital labour platforms and platforms for the management of workers, including in the field of transport, can entail risks of unjust or unnecessary social scoring, rooted in biased data sets, which can lead to violations of workers’ rights and fundamental rights. This Regulation should therefore aim at protecting the rights of workers managed by digital labour platforms and promote transparency, fairness and accountability in algorithmic management, to ensure workers are aware of how the algorithm works, which personal data is used and how their behaviour affects decisions taken by the automated system.
2022/03/31
Committee: ITRE
Amendment 191 #
Proposal for a regulation
Recital 27
(27) High-risk AI systems should only be developed and deployed if they comply with certain mandatory requirements based on ethical principles. Those requirements should ensure that high-risk AI systems available in the Union or whose output is otherwise used in the Union do not pose unacceptable risks to important Union public interests as recognised and protected by Union law. AI systems identified as high-risk should be limited to those that have a significant harmful impact on the health, safety and fundamental rights of persons in the Union and such limitation minimises any potential restriction to international trade, if any.
2022/03/31
Committee: ITRE
Amendment 194 #
Proposal for a regulation
Recital 28
(28) AI systems could produce adverse outcomes to health and safety of persons, in particular when such systems operate as components of products. Consistently with the objectives of Union harmonisation legislation to facilitate the free movement of products in the internal market and to ensure that only safe and otherwise compliant products find their way into the market, it is important that the safety risks that may be generated by a product as a whole due to its digital components, including AI systems, are duly prevented and mitigated. For instance, increasingly autonomous robots, whether in the context of manufacturing or personal assistance and care, should be able to safely operate and perform their functions in complex environments. Similarly, in the health sector where the stakes for life and health are particularly high, increasingly sophisticated diagnostics systems and systems supporting human decisions should be reliable and accurate. The extent of the adverse impact caused by the AI system on the fundamental rights protected by the Charter is of particular relevance when classifying an AI system as high-risk. Those rights include the right to human dignity, respect for private and family life, protection of personal data, freedom of expression and information, freedom of assembly and of association, and non-discrimination, gender equality, education, consumer protection, workers’ rights, rights of persons with disabilities, right to an effective remedy and to a fair trial, right of defence and the presumption of innocence, right to good administration. In addition to those rights, it is important to highlight that children have specific rights as enshrined in Article 24 of the EU Charter and in the United Nations Convention on the Rights of the Child (further elaborated in the UNCRC General Comment No. 25 as regards the digital environment), both of which require consideration of the children’s vulnerabilities and provision of such protection and care as necessary for their well-being. The fundamental right to a high level of environmental protection enshrined in the Charter and implemented in Union policies should also be considered when assessing the severity of the harm that an AI system can cause, including in relation to the health and safety of persons or to the environment, due to the extraction and consumption of natural resources, waste and the carbon footprint.
2022/03/31
Committee: ITRE
Amendment 200 #
Proposal for a regulation
Recital 35
(35) AI systems used in education or vocational training, notably for determining access or assigning persons to educational and vocational training institutions or to evaluate persons on tests as part of or as a precondition for their education should be considered high-risk, since they may determine the educational and professional course of a person’s life and therefore affect their ability to secure their livelihood. When improperly designed, developed and used, such systems may violate the right to education and training as well as the right to gender equality and not to be discriminated against, and perpetuate historical patterns of discrimination.
2022/03/31
Committee: ITRE
Amendment 201 #
Proposal for a regulation
Recital 36
(36) AI systems used in employment, workers management and access to self-employment, notably for the recruitment and selection of persons, for making decisions on promotion and termination and for task allocation, monitoring or evaluation of persons in work-related contractual relationships, should also be classified as high-risk, since those systems may appreciably impact the health, safety and security rules applicable in their work and at their workplaces and future career prospects and livelihoods of these persons. Relevant work-related contractual relationships should involve employees and persons providing services through platforms as referred to in the Commission Work Programme 2021. Such persons should in principle not be considered users within the meaning of this Regulation. Throughout the recruitment process and in the evaluation, promotion, or retention of persons in work-related contractual relationships, such systems may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. AI systems used to monitor the performance and behaviour of these persons may also impact their rights to data protection and privacy. In this regard, specific requirements on transparency, information and human oversight should apply. Trade unions and workers’ representatives should be informed and they should have access to any documentation created under this Regulation for any AI system deployed or used in their work or at their workplace.
2022/03/31
Committee: ITRE
Amendment 211 #
Proposal for a regulation
Recital 44
(44) High data quality is essential for the performance of many AI systems, especially when techniques involving the training of models are used, with a view to ensure that the high-risk AI system performs as intended and safely and it does not become the source of discrimination prohibited by Union law. High quality training, validation and testing data sets require the implementation of appropriate data governance and management practices. Training, validation and testing data sets should be sufficiently relevant, representative and free of errors and complete in view of the intended purpose of the system. They should also have the appropriate statistical properties, including as regards the persons or groups of persons on which the high-risk AI system is intended to be used, with specific attention to the mitigation of possible biases in the datasets, that might lead to risks to fundamental rights or discriminatory outcomes for the persons affected by the high-risk AI system. In particular, training, validation and testing data sets should take into account, to the extent required in the light of their intended purpose, the features, characteristics or elements that are particular to the specific geographical, behavioural, contextual or functional setting or context within which the AI system is intended to be used, with specific attention to women, vulnerable groups and children. In order to protect the right of others from the discrimination that might result from the bias in AI systems, the providers should be able to process also special categories of personal data, as a matter of substantial public interest, in order to ensure the bias monitoring, detection and correction in relation to high-risk AI systems.
2022/03/31
Committee: ITRE
Amendment 213 #
Proposal for a regulation
Recital 45
(45) For the development of high-risk AI systems, certain actors, such as providers, notified bodies and other relevant entities, such as digital innovation hubs, testing experimentation facilities and researchers, should be able to access and use high quality datasets within their respective fields of activities which are related to this Regulation. European common data spaces established by the Commission, a competitive and fair European data economy structured around interoperable data intermediation services, and the facilitation of data sharing between businesses and with government in the public interest will be instrumental to provide trustful, accountable and non-discriminatory access to high quality data for the training, validation and testing of AI systems. For example, in health, the European health data space will facilitate non-discriminatory access to health data and the training of artificial intelligence algorithms on those datasets, in a privacy-preserving, secure, timely, transparent and trustworthy manner, and with an appropriate institutional governance. Relevant competent authorities, including sectoral ones, providing or supporting the access to data may also support the provision of high-quality data for the training, validation and testing of AI systems.
2022/03/31
Committee: ITRE
Amendment 214 #
Proposal for a regulation
Recital 46
(46) Having comprehensible information on how high-risk AI systems have been developed and how they perform throughout their lifecycle is essential to verify compliance with the requirements under this Regulation and to allow users to make informed and autonomous decisions about their use. This requires keeping records and the availability of a technical documentation, containing information which is necessary to assess the compliance of the AI system with the relevant requirements. Such information should include the general characteristics, capabilities and limitations of the system, algorithms, data, training, testing and validation processes used as well as documentation on the relevant risk management system. The technical documentation should be kept up to date.
2022/03/31
Committee: ITRE
Amendment 215 #
Proposal for a regulation
Recital 47
(47) To address the opacity that may make certain AI systems incomprehensible to or too complex for natural persons, a sufficient degree of transparency should be required for high-risk AI systems. Users should be able to interpret the system output and use it appropriately. High-risk AI systems should therefore be accompanied by relevant documentation and instructions of use and include concise and clear information, including in relation to possible risks to fundamental rights and discrimination, where appropriate. The same applies to AI systems with general purposes that may have high-risk uses that are not forbidden by their developer. In such cases, sufficient information should be made available allowing deployers to carry out tests and analysis on performance, data and usage. The systems and information should also be registered in the EU database for stand-alone high-risk AI systems foreseen in Article 60 of this Regulation.
2022/03/31
Committee: ITRE
Amendment 216 #
Proposal for a regulation
Recital 47
(47) To address the opacity that may make certain AI systems incomprehensible to or too complex for natural persons, a certain degree of transparency and comprehensibility should be required for high-risk AI systems and their algorithms. Users should be able to interpret both the algorithmic decision-making and the system output and use it appropriately. High-risk AI systems should therefore be accompanied by relevant documentation and instructions of use and include concise and clear information, including in relation to possible risks to fundamental rights and discrimination, where appropriate.
2022/03/31
Committee: ITRE
Amendment 218 #
Proposal for a regulation
Recital 48
(48) High-risk AI systems should be designed and developed in such a way that natural persons have agency over them by being able to oversee and control their functioning. For this purpose, appropriate human oversight measures should be identified by the provider of the system before its placing on the market or putting into service. In particular, where appropriate and at the very least where decisions based solely on the automated processing enabled by such systems produce legal or otherwise significant effects, such measures should guarantee that the system is subject to in-built operational constraints that cannot be overridden by the system itself and is responsive to the human operator, and that the natural persons to whom human oversight has been assigned have the necessary competence, training and authority to carry out that role.
2022/03/31
Committee: ITRE
Amendment 219 #
Proposal for a regulation
Recital 49
(49) High-risk AI systems should perform consistently throughout their lifecycle and meet an appropriate level of accuracy, robustness and cybersecurity in accordance with the generally acknowledged state of the art. Accuracy metrics and their expected level must be defined with the primary objective to mitigate risks and negative impact of the AI system on individuals and the society at large. The expected level of accuracy and accuracy metrics should be communicated to the users. The declaration of accuracy metrics cannot, however, be considered proof of future levels, but relevant methods need to be applied to ensure sustainable levels during use.
2022/03/31
Committee: ITRE
Amendment 221 #
Proposal for a regulation
Recital 49
(49) High-risk AI systems should perform consistently throughout their lifecycle and meet an appropriate level of accuracy, robustness and cybersecurity in accordance with the generally acknowledged state of the art. The level of accuracy and accuracy metrics should be communicated in an intelligible manner to the deployers and users.
2022/03/31
Committee: ITRE
Amendment 222 #
Proposal for a regulation
Recital 50
(50) The technical robustness is a key requirement for high-risk AI systems. They should be resilient against risks connected to the limitations of the system (e.g. errors, faults, inconsistencies, unexpected situations) as well as against malicious actions that may compromise the security of the AI system and result in harmful or otherwise undesirable behaviour. Failure to protect against these risks could lead to safety impacts or negatively affect the fundamental rights, for example due to erroneous decisions or wrong or biased outputs generated by the AI system. Users of the AI system should take steps to ensure that the possible trade-off between robustness and accuracy does not lead to discriminatory or negative outcomes for minority subgroups.
2022/03/31
Committee: ITRE
Amendment 224 #
Proposal for a regulation
Recital 51
(51) Cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts to alter their use, behaviour, performance or compromise their security properties by malicious third parties exploiting the system’s vulnerabilities. Cyberattacks against AI systems can leverage AI specific assets, such as training data sets (e.g. data poisoning) or trained models (e.g. adversarial attacks or confidentiality attacks), or exploit vulnerabilities in the AI system’s digital assets or the underlying ICT infrastructure. To ensure a level of cybersecurity appropriate to the risks, suitable measures should therefore be taken by the providers of high-risk AI systems, also taking into account as appropriate the underlying ICT infrastructure.
2022/03/31
Committee: ITRE
Amendment 225 #
Proposal for a regulation
Recital 60
(60) In the light of the complexity of the artificial intelligence value chain, relevant third parties, notably the ones involved in the sale and the supply of software, software tools and components, pre-trained models and data, or providers of network services, should ensure, through technical means, the transparency and auditability for providers and users to enable their compliance with the obligations under this Regulation, and cooperate and assist competent authorities established under this Regulation in its enforcement.
2022/03/31
Committee: ITRE
Amendment 226 #
Proposal for a regulation
Recital 61
(61) Standardisation should play a key role to provide technical solutions to providers to ensure compliance with this Regulation. Compliance with harmonised standards as defined in Regulation (EU) No 1025/2012 of the European Parliament and of the Council54 should be a means for providers to demonstrate conformity with the requirements of this Regulation. In addition to technical details, the standardisation process should also include an assessment of risks to fundamental rights, the environment, societal risks and other sociotechnical considerations, such as how a given technology might interact with other technologies. The standardisation process should be transparent in terms of legal and natural persons participating in the standardisation activities. However, the Commission could adopt common technical specifications in areas where no harmonised standards exist or where they are insufficient. In developing these common specifications, the Commission should involve the views of relevant stakeholders, in particular when the common specifications address specific fundamental rights concerns. In particular, the Commission should adopt common specifications setting out how risk management systems give specific consideration to impact on children.
_________________
54 Regulation (EU) No 1025/2012 of the European Parliament and of the Council of 25 October 2012 on European standardisation, amending Council Directives 89/686/EEC and 93/15/EEC and Directives 94/9/EC, 94/25/EC, 95/16/EC, 97/23/EC, 98/34/EC, 2004/22/EC, 2007/23/EC, 2009/23/EC and 2009/105/EC of the European Parliament and of the Council and repealing Council Decision 87/95/EEC and Decision No 1673/2006/EC of the European Parliament and of the Council (OJ L 316, 14.11.2012, p. 12).
2022/03/31
Committee: ITRE
Amendment 229 #
Proposal for a regulation
Recital 68
(68) Under certain conditions, rapid availability of innovative technologies may be crucial for health and safety of persons and for society as a whole. It is thus appropriate that under exceptional and ethically justified reasons of public security or protection of life and health of natural persons and the protection of industrial and commercial property, Member States could authorise the placing on the market or putting into service of AI systems which have not undergone a conformity assessment.
2022/03/31
Committee: ITRE
Amendment 233 #
Proposal for a regulation
Recital 70
(70) Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems. In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. Moreover, natural persons should be notified when they are exposed to an emotion recognition system or a biometric categorisation system. Such information and notifications should be provided in accessible formats for persons with disabilities. Further, users, who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a person to be authentic, should disclose in an appropriate, clear and visible manner that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin.
2022/03/31
Committee: ITRE
Amendment 237 #
Proposal for a regulation
Recital 71
(71) Artificial intelligence is a rapidly developing family of technologies that requires novel forms of regulatory oversight and a safe space for experimentation, while ensuring responsible innovation and integration of appropriate and ethically justified safeguards and risk mitigation measures. To ensure a legal framework that is innovation-friendly, future-proof and resilient to disruption, national competent authorities from one or more Member States should be encouraged to establish artificial intelligence regulatory sandboxes to facilitate the development and testing of innovative AI systems under strict regulatory oversight before these systems are placed on the market or otherwise put into service.
2022/03/31
Committee: ITRE
Amendment 242 #
Proposal for a regulation
Recital 72
(72) The objectives of the regulatory sandboxes should be to foster AI innovation by establishing a controlled experimentation and testing environment in the development and pre-marketing phase with a view to ensuring compliance of the innovative AI systems with this Regulation and other relevant Union and Member States legislation; to enhance legal certainty for innovators and the competent authorities’ oversight and understanding of the opportunities, emerging risks and the impacts of AI use, and to accelerate access to markets, including by removing barriers for small and medium enterprises (SMEs) and start-ups; to contribute to the development of ethical, socially responsible and environmentally sustainable AI systems, in line with the ethical principles outlined in this Regulation. To ensure uniform implementation across the Union and economies of scale, it is appropriate to establish common rules for the regulatory sandboxes’ implementation and a framework for cooperation between the relevant authorities involved in the supervision of the sandboxes. This Regulation should provide the legal basis for the use of personal data collected for other purposes for developing certain AI systems in the public interest within the AI regulatory sandbox, in line with Article 6(4) of Regulation (EU) 2016/679, and Article 6 of Regulation (EU) 2018/1725, and without prejudice to Article 4(2) of Directive (EU) 2016/680. Participants in the sandbox should ensure appropriate safeguards and cooperate with the competent authorities, including by following their guidance and acting expeditiously and in good faith to mitigate any high risks to safety and fundamental rights that may arise during the development and experimentation in the sandbox. The conduct of the participants in the sandbox should be taken into account when competent authorities decide whether to impose an administrative fine under Article 83(2) of Regulation 2016/679 and Article 57 of Directive 2016/680.
2022/03/31
Committee: ITRE
Amendment 243 #
Proposal for a regulation
Recital 72
(72) The objectives of the regulatory sandboxes should be to foster AI innovation by establishing a controlled experimentation and testing environment in the development and pre-marketing phase with a view to ensuring compliance of the innovative AI systems with this Regulation and other relevant Union and Member States legislation; to enhance legal certainty for innovators and the competent authorities’ oversight and understanding of the opportunities, emerging risks and the impacts of AI use, and to accelerate access to markets, including by removing barriers for small and medium enterprises (SMEs) and start-ups, as well as to contribute to achieving the targets on AI as set in the Policy Programme “Path to the Digital Decade”. To ensure uniform implementation across the Union and economies of scale, it is appropriate to establish common rules for the regulatory sandboxes’ implementation and a framework for cooperation between the relevant authorities involved in the supervision of the sandboxes. This Regulation should provide the legal basis for the use of personal data collected for other purposes for developing certain AI systems in the public interest within the AI regulatory sandbox, in line with Article 6(4) of Regulation (EU) 2016/679, and Article 6 of Regulation (EU) 2018/1725, and without prejudice to Article 4(2) of Directive (EU) 2016/680. Participants in the sandbox should ensure appropriate safeguards and cooperate with the competent authorities, including by following their guidance and acting expeditiously and in good faith to mitigate any high risks to safety and fundamental rights that may arise during the development and experimentation in the sandbox. The conduct of the participants in the sandbox should be taken into account when competent authorities decide whether to impose an administrative fine under Article 83(2) of Regulation 2016/679 and Article 57 of Directive 2016/680.
2022/03/31
Committee: ITRE
Amendment 246 #
Proposal for a regulation
Recital 73
(73) In order to promote and protect innovation, it is important that the interests of small-scale providers and users of AI systems are taken into particular account. To this objective, Member States should develop initiatives, which are targeted at those operators, including on AI literacy, awareness raising and information communication. Moreover, the specific interests and needs of small-scale providers shall be taken into account when Notified Bodies set conformity assessment fees. Translation costs related to mandatory documentation and communication with authorities may constitute a significant cost for providers and other operators, notably those of a smaller scale. Member States should possibly ensure that one of the languages determined and accepted by them for relevant providers’ documentation and for communication with operators is one which is broadly understood by the largest possible number of cross-border users.
2022/03/31
Committee: ITRE
Amendment 251 #
Proposal for a regulation
Recital 81
(81) The development of AI systems other than high-risk AI systems in accordance with the requirements of this Regulation may lead to a larger uptake of trustworthy, socially responsible and environmentally sustainable artificial intelligence in the Union. Providers of non-high-risk AI systems should be encouraged to create codes of conduct intended to foster the voluntary application of the mandatory requirements applicable to high-risk AI systems. Providers should also be encouraged to apply on a voluntary basis additional requirements related, for example, to environmental sustainability, accessibility to persons with disability, stakeholders’ participation in the design and development of AI systems, and diversity of the development teams. The Commission may develop initiatives, including of a sectorial nature, to facilitate the lowering of technical barriers hindering cross-border exchange of data for AI development, including on data access infrastructure, semantic and technical interoperability of different types of data.
2022/03/31
Committee: ITRE
Amendment 256 #
Proposal for a regulation
Article 1 – paragraph 1 – point a
(a) harmonised rules for the development, deployment and the use of artificial intelligence systems (‘AI systems’) in the Union;
2022/03/31
Committee: ITRE
Amendment 258 #
Proposal for a regulation
Article 1 – paragraph 1 a (new)
When justified by significant risks to fundamental rights of persons, Member States may introduce specific regulatory solutions ensuring a higher level of protection of persons than offered in this Regulation.
2022/03/31
Committee: ITRE
Amendment 259 #
Proposal for a regulation
Article 2 – paragraph 1 – point a
(a) ‘developer’ placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are established within the Union or in a third country, or that adapts a general purpose AI system to a specific purpose and use;
2022/03/31
Committee: ITRE
Amendment 263 #
Proposal for a regulation
Article 2 – paragraph 5 a (new)
5a. This Regulation shall not affect research activities regarding AI systems insofar as such activity does not lead to or entail placing an AI system on the market or putting it into service. These research activities shall not violate the fundamental rights of the affected persons.
2022/03/31
Committee: ITRE
Amendment 267 #
Proposal for a regulation
Article 2 – paragraph 5 b (new)
5b. This Regulation shall be without prejudice to Regulation (EU) 2016/679.
2022/03/31
Committee: ITRE
Amendment 268 #
Proposal for a regulation
Article 2 – paragraph 5 c (new)
5c. This Regulation shall be without prejudice to Union and national laws on social policies.
2022/03/31
Committee: ITRE
Amendment 269 #
Proposal for a regulation
Article 2 – paragraph 5 d (new)
5d. This Regulation shall be without prejudice to national labour law and practice, that is any legal or contractual provision concerning employment conditions, working conditions, including health and safety at work and the relationship between employers and workers, including information, consultation and participation.
2022/03/31
Committee: ITRE
Amendment 277 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1
(1) ‘artificial intelligence system’ (AI system) means a machine-based system that can, with varying levels of autonomy and for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the real or virtual environments they interact with;
2022/03/31
Committee: ITRE
Amendment 280 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1 a (new)
(1a) 'autonomy' means that an AI system operates by interpreting certain input and by using a set of pre-determined objectives, without being limited to such instructions, despite the system’s behaviour being constrained by, and targeted at, fulfilling the goal it was given and other relevant design choices made by its developer;
2022/03/31
Committee: ITRE
Amendment 284 #
Proposal for a regulation
Article 3 – paragraph 1 – point 4 a (new)
(4a) ‘end-user’ means any natural person who, in the context of employment or contractual agreement with the user, uses or deploys the AI system under the authority of the user;
2022/03/31
Committee: ITRE
Amendment 287 #
Proposal for a regulation
Article 3 – paragraph 1 – point 8 a (new)
(8a) ‘deployer’ means any natural or legal person, public authority, agency or other body putting into service an AI system developed by another entity without substantial modification, or using an AI system under its authority;
2022/03/31
Committee: ITRE
Amendment 289 #
Proposal for a regulation
Article 3 – paragraph 1 – point 12 a (new)
(12a) ‘general purpose AI system’ means an AI application that performs generally applicable functions such as image or speech recognition, audio or video generation, pattern detection, question answering, and translation, and is largely customizable;
2022/03/31
Committee: ITRE
Amendment 290 #
Proposal for a regulation
Article 3 – paragraph 1 – point 14
(14) ‘safety component of a product or system’ means a component of a product or of a system which fulfils a safety function for that product or system and the failure or malfunctioning of which endangers the health and safety of persons or property;
2022/03/31
Committee: ITRE
Amendment 291 #
Proposal for a regulation
Article 3 – paragraph 1 – point 14 a (new)
(14a) ‘information security component of a product or system’ means a component of a product or of a system which has been specifically designed to fulfil a security function for that product or system against cyber incidents, disruptions and/or attacks;
2022/03/31
Committee: ITRE
Amendment 292 #
Proposal for a regulation
Article 3 – paragraph 1 – point 14 b (new)
(14b) ‘information security product or system’ means a product or a system which has been specifically designed to fulfil a security function against cyber incidents, disruptions and/or attacks;
2022/03/31
Committee: ITRE
Amendment 301 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – point a
(a) the death of a person or serious damage to a person’s fundamental rights, health, to property or the environment, to democracy or the democratic rule of law,
2022/03/31
Committee: ITRE
Amendment 303 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – point a a (new)
(aa) 'AI literacy' means the skills, knowledge and understanding regarding AI systems that are necessary for compliance with and enforcement of this Regulation;
2022/03/31
Committee: ITRE
Amendment 325 #
Proposal for a regulation
Article 5 – paragraph 1 – point b
(b) the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of children, or a specific group of persons due to their age, physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
2022/03/31
Committee: ITRE
Amendment 355 #
Proposal for a regulation
Article 6 – paragraph 2
2. In addition to the high-risk AI systems referred to in paragraph 1 of this Article, AI systems referred to in Annex III shall also be considered high-risk. In case there is uncertainty over the AI system's classification, the provider shall deem the AI system high-risk if its use or application poses a risk of harm to the health and safety or a risk of adverse impact on fundamental rights of individuals affected by these technologies.
2022/03/31
Committee: ITRE
Amendment 363 #
Proposal for a regulation
Article 7 – paragraph 1 – point b
(b) the AI systems pose a risk of harm to the environment, health and safety, or a risk of adverse impact on fundamental rights, that is, in respect of its severity and probability of occurrence, equivalent to or greater than the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III.
2022/03/31
Committee: ITRE
Amendment 373 #
Proposal for a regulation
Article 7 – paragraph 2 – point g a (new)
(ga) magnitude and likelihood of both the benefits and risks of the AI use for individuals, groups, the environment and the society at large;
2022/03/31
Committee: ITRE
Amendment 389 #
Proposal for a regulation
Article 9 – paragraph 2 a (new)
2a. The risks referred to in paragraph 2 shall concern only those which may be sufficiently mitigated or eliminated through the use, development or design of the high-risk AI system, or the provision of adequate technical information.
2022/03/31
Committee: ITRE
Amendment 398 #
Proposal for a regulation
Article 10 – paragraph 1
1. High-risk AI systems which make use of techniques involving the training of models with data shall be developed on the basis of training, validation and testing data sets that meet the quality criteria referred to in paragraphs 2 to 5. Techniques such as unsupervised learning and reinforcement learning that do not use validation and testing data sets shall be developed on the basis of training data sets that meet the quality criteria referred to in paragraphs 2 to 5.
2022/03/31
Committee: ITRE
Amendment 401 #
Proposal for a regulation
Article 10 – paragraph 1 a (new)
1a. Providers of high-risk AI systems that utilise data collected and/or managed by third parties may rely on representations from those third parties with regard to quality criteria referred to in paragraph 2, points (a), (b) and (c).
2022/03/31
Committee: ITRE
Amendment 407 #
Proposal for a regulation
Article 10 – paragraph 2 – point a a (new)
(aa) transparency on the original purpose of data collection;
2022/03/31
Committee: ITRE
Amendment 408 #
Proposal for a regulation
Article 10 – paragraph 2 – point b
(b) data collection processes;
2022/03/31
Committee: ITRE
Amendment 411 #
Proposal for a regulation
Article 10 – paragraph 2 – point f
(f) examination in view of possible biases that could affect fundamental rights or lead to discriminatory results, affecting both or either the individual right to non-discrimination and equality as a value recognised in Article 2 TEU.
2022/03/31
Committee: ITRE
Amendment 419 #
Proposal for a regulation
Article 10 – paragraph 3
3. Training, validation and testing data sets shall be designed with the best possible efforts to ensure that they are relevant, representative, and free of errors and complete in view of the intended purpose of the system. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used. These characteristics of the data sets may be met at the level of individual data sets or a combination thereof.
2022/03/31
Committee: ITRE
Amendment 426 #
Proposal for a regulation
Article 10 – paragraph 4
4. Training, validation and testing data sets shall take into account, to the extent required by the intended purpose, the characteristics or elements that are particular to the specific geographical, behavioural, contextual or functional setting within which the high-risk AI system is intended to be used.
2022/03/31
Committee: ITRE
Amendment 448 #
Proposal for a regulation
Article 14 – paragraph 4 – point a
(a) sufficiently understand the capacities and limitations of the high-risk AI system and be able to duly monitor its operation, so that signs of anomalies, dysfunctions and unexpected performance can be detected and addressed as soon as possible;
2022/03/31
Committee: ITRE
Amendment 464 #
Proposal for a regulation
Article 15 – paragraph 3 – subparagraph 2
High-risk AI systems that continue to learn after being placed on the market or put into service shall be developed in such a way to ensure that possibly biased outputs due to outputs used as an input for future operations (‘feedback loops’) and malicious manipulation of inputs used in learning during operation are duly addressed with appropriate mitigation measures.
2022/03/31
Committee: ITRE
Amendment 466 #
Proposal for a regulation
Article 15 – paragraph 4 – introductory part
4. High-risk AI systems shall be resilient as regards attempts by unauthorised third parties to alter their use, behaviour, outputs or performance by exploiting the system vulnerabilities.
2022/03/31
Committee: ITRE
Amendment 468 #
Proposal for a regulation
Article 15 – paragraph 4 – subparagraph 2
The technical solutions to address AI specific vulnerabilities shall include, where appropriate, measures to prevent, detect, respond to, resolve and control for attacks trying to manipulate the training dataset (‘data poisoning’) or pretrained components used in training (‘model poisoning’), inputs designed to cause the model to make a mistake (‘adversarial examples’ or ‘model evasion’), confidentiality attacks or model flaws.
2022/03/31
Committee: ITRE
Amendment 494 #
Proposal for a regulation
Article 28 a (new)
Article 28 a
Obligations for providers of general-purpose AI systems
1. Any legal entity that places on the market or puts into service under its own name or trademark, or uses, a general purpose AI system made available on the market for an intended high-risk purpose that makes it subject to this Regulation, shall be considered the provider of the AI system in accordance with this Regulation.
2. Providers of general-purpose AI systems shall be obliged to ensure, through technical means, the transparency and the auditability required for downstream providers to comply with the obligations outlined in Chapters 2 and 3 of Title III of this Regulation. In addition, the provider of a general-purpose AI system shall provide additional information on the relevant limitations of the general purpose AI system, as well as potential risks to fundamental rights, the environment or the society at large.
3. This information shall be made available to developers utilising such general purpose AI systems as part of the products delivered to the markets.
4. This Article shall apply irrespective of whether the general purpose AI system is open source software or not.
2022/03/31
Committee: ITRE
Amendment 498 #
Proposal for a regulation
Article 29 a (new)
Article 29 a
Obligations on users to define affected persons
1. Before implementing a high-risk AI system as defined in Article 6(2), the user shall describe the persons or groups of natural persons likely to be affected by the use of the system.
2022/03/31
Committee: ITRE
Amendment 500 #
Proposal for a regulation
Article 29 b (new)
Article 29 b
Fundamental rights impact assessment for high-risk AI systems
1. Users of high-risk AI systems defined in Article 6(2) shall assess the impact of the system’s use prior to putting the system into use.
2. This assessment shall include, but is not limited to, the following:
a) a clear outline of the intended purpose for which the system will be used;
b) a clear outline of the intended geographic and temporal scope of the system’s use;
c) the reasonably foreseeable impacts on fundamental rights of the persons affected by the high-risk AI system;
d) the reasonably foreseeable risk of harm likely to impact marginalised persons or those groups at risk of discrimination, or to increase existing societal inequalities;
e) the reasonably foreseeable impact of the use of the system on the environment;
f) in case of identification of reasonably foreseeable harms, clear steps as to how these harms will be addressed.
3. The obligation outlined under paragraph 1 applies for each new deployment of the high-risk AI system.
4. Where, following the impact assessment process, the user decides to put the high-risk AI system into use, the user shall be required to publish the results of the impact assessment as part of the registration of use pursuant to their obligation under Article 51(2).
5. The obligations on users in paragraph 1 are without prejudice to the obligations on users of all high-risk AI systems as outlined in Article 29.
2022/03/31
Committee: ITRE
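The proposed Article 29b enumerates assessment elements (a) to (f). A minimal sketch of how a user might record them and gate deployment on point (f), i.e. mitigation steps for identified harms, is given below in Python; the record structure and the `complete()` rule are hypothetical illustrations, not requirements of the text.

```python
from dataclasses import dataclass, field

@dataclass
class FRIARecord:
    """Hypothetical record mirroring the proposed Article 29b(2)(a)-(f)."""
    intended_purpose: str                      # point (a)
    geographic_temporal_scope: str             # point (b)
    fundamental_rights_impacts: list[str]      # point (c)
    marginalised_group_risks: list[str]        # point (d)
    environmental_impact: str                  # point (e)
    mitigation_steps: list[str] = field(default_factory=list)  # point (f)

    def complete(self) -> bool:
        # Point (f): identified foreseeable harms require clear steps.
        harms = self.fundamental_rights_impacts + self.marginalised_group_risks
        return not harms or bool(self.mitigation_steps)

record = FRIARecord(
    intended_purpose="CV screening support",
    geographic_temporal_scope="EU-wide, 12 months",
    fundamental_rights_impacts=["non-discrimination (Art. 21 CFR)"],
    marginalised_group_risks=["disparate error rates for older applicants"],
    environmental_impact="negligible (batch inference)",
)
# False until mitigation steps are recorded, cf. paragraph 4 on publication.
print("ready to publish:", record.complete())
```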
Amendment 503 #
Proposal for a regulation
Article 40 – paragraph 1 a (new)
The Commission shall ensure that the process of developing harmonised standards includes an assessment of risks to fundamental rights, environment and society at large.
2022/03/31
Committee: ITRE
Amendment 504 #
Proposal for a regulation
Article 40 – paragraph 1 b (new)
The Commission shall ensure that the process of developing harmonised standards on artificial intelligence systems is open to the stakeholders listed in Article 5 of Regulation (EU) No 1025/2012. The Commission shall direct funds to the stakeholders listed in Annex III of that Regulation, in line with Article 17 of that Regulation, to facilitate the effective participation of those stakeholders, with particular emphasis on the stakeholders relevant to paragraph 2.
2022/03/31
Committee: ITRE
Amendment 505 #
Proposal for a regulation
Article 40 – paragraph 1 c (new)
The Commission shall review the harmonised standards before they are published in the Official Journal and prepare a report outlining their adequacy with regard to paragraph 2 of this Article.
2022/03/31
Committee: ITRE
Amendment 507 #
Proposal for a regulation
Article 41 – paragraph 2
2. The Commission, when preparing the common specifications referred to in paragraph 1, shall gather, where relevant, the views of relevant stakeholders, such as SMEs and start-ups, civil society and social partners, or expert groups established under relevant sectorial Union law.
2022/03/31
Committee: ITRE
Amendment 511 #
Proposal for a regulation
Article 42 – paragraph 1
1. Taking into account their intended purpose, high-risk AI systems that have been trained and tested on data concerning the specific geographical, behavioural, contextual and functional setting within which they are intended to be used shall be presumed to be in compliance with the requirement set out in Article 10(4).
2022/03/31
Committee: ITRE
Amendment 519 #
Proposal for a regulation
Article 48 – paragraph 1
1. The provider shall draw up a written EU declaration of conformity for each AI system and keep it at the disposal of the national competent authorities for 10 years after the AI system has been placed on the market or put into service. The EU declaration of conformity shall identify the AI system for which it has been drawn up. A copy of the EU declaration of conformity shall be given to the relevant national competent authority in the Member State of main establishment of the provider, upon the competent authority's request.
2022/03/31
Committee: ITRE
Amendment 521 #
Proposal for a regulation
Article 48 – paragraph 2
2. The EU declaration of conformity shall state that the high-risk AI system in question meets the requirements set out in Chapter 2 of this Title. The EU declaration of conformity shall contain the information set out in Annex V and shall be presented in an official Union language of the Member State in which the provider of the high-risk AI system has its main establishment.
2022/03/31
Committee: ITRE
Amendment 528 #
Proposal for a regulation
Article 51 – paragraph 1 a (new)
Before using a high-risk AI system referred to in Article 6(2), the user or, where applicable, the authorised representative shall register the uses of that system in the EU database referred to in Article 60. A new registration entry must be completed by the user for each high-risk use of the AI system.
2022/03/31
Committee: ITRE
Amendment 533 #
Proposal for a regulation
Article 52 – paragraph 1
1. Providers shall ensure that AI systems intended to interact with natural and legal persons are designed and developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. This obligation shall not apply to AI systems authorised by law to detect, prevent, investigate and prosecute criminal offences, unless those systems are available for the public to report a criminal offence.
2022/03/31
Committee: ITRE
Amendment 534 #
Proposal for a regulation
Article 52 – paragraph 1 a (new)
1a. Users of a high-risk AI system, referred to in Article 6(2), shall inform natural and legal persons exposed thereto of the operation of the system.
2022/03/31
Committee: ITRE
Amendment 539 #
Proposal for a regulation
Article 52 – paragraph 3 – introductory part
3. Users of an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful (‘deep fake’), shall disclose in an appropriate, clear and visible manner that the content has been artificially generated or manipulated.
2022/03/31
Committee: ITRE
Amendment 542 #
Proposal for a regulation
Article 52 – paragraph 3 a (new)
3a. Users of AI systems referred to in paragraphs 1, 1a, 1b, 2, and 3 shall, when a decision made by or with the assistance of these AI systems produces legal effects concerning a natural person or otherwise significantly affects them, provide the affected person, following their request, with an explanation of the decision. The explanation shall be provided in a clear and comprehensible manner and shall include meaningful, relevant information on the reasons for the decision, at a minimum:
(a) the role of the AI system in the decision-making process;
(b) the logic involved, the main parameters of decision-making and their relative weights;
(c) the indication of the specific personal data of the affected person, or other information, that had a significant impact on the outcome;
(d) the category or group into which the affected person has been classified;
(e) whether there was meaningful human oversight in the decision-making process;
(f) information about the rights to remedy under this Regulation, including the right to lodge a complaint with the national supervisory authority pursuant to Article 52c of this Regulation.
2022/03/31
Committee: ITRE
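For a linear scoring model, the explanation elements of the proposed Article 52(3a), in particular point (b) on main parameters and relative weights, can be read directly off the model's coefficients. The Python sketch below is a hypothetical illustration: the payload keys, the example features and the oversight wording are all invented for the example.

```python
import numpy as np

def explain_decision(w: np.ndarray, feature_names: list[str],
                     x: np.ndarray, threshold: float = 0.0) -> dict:
    """Hypothetical sketch of the minimum contents of an Article 52(3a)
    explanation, assuming a linear scoring model whose weights are the
    'main parameters of decision-making' (point b)."""
    contributions = w * x
    order = np.argsort(-np.abs(contributions))  # most influential first
    score = float(contributions.sum())
    return {
        "role_of_ai": "decision support; final decision reviewed by staff",  # (a), (e)
        "logic_and_weights": [                                               # (b)
            {"feature": feature_names[i], "weight": float(w[i]),
             "contribution": float(contributions[i])} for i in order
        ],
        "most_decisive_input": feature_names[order[0]],                      # (c)
        "assigned_category": "approved" if score >= threshold else "refused",  # (d)
        "remedies": "complaint to the national supervisory authority (Art. 52c)",  # (f)
    }

w = np.array([0.8, -1.5, 0.3])
x = np.array([1.0, 0.4, 2.0])
print(explain_decision(w, ["income", "debt_ratio", "tenure_years"], x))
```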
Amendment 547 #
Proposal for a regulation
Article 52 a (new)
Article 52 a
Right not to be subject to non-compliant AI systems
Natural and legal persons shall have the right not to be subjected to AI systems that pose an unacceptable risk pursuant to Article 5 or that do not comply with the requirements of this Regulation.
2022/03/31
Committee: ITRE
Amendment 548 #
Proposal for a regulation
Article 52 b (new)
Article 52 b
Right to information about the use and functioning of AI systems
1. Natural and legal persons shall have the right to be informed that they are being subjected to a high-risk AI system as defined in Article 6, or to other AI systems as defined in Article 52.
2. Natural and legal persons shall have the right to be informed, upon request, about the reasons for a decision producing legal effects or significantly affecting them, taken with the assistance of an AI system, as specified in Article 52(3a) of this Regulation.
3. The information outlined in paragraphs 1 and 2 shall be provided in a clear and comprehensible manner.
2022/03/31
Committee: ITRE
Amendment 549 #
Proposal for a regulation
Article 52 c (new)
Article 52 c
Right to lodge a complaint with a national supervisory authority
1. Natural and legal persons who consider that their rights under this Regulation have been infringed shall have the right to lodge a complaint against the provider or user with a national supervisory authority in the Member State of their residence, place of work, or place of the alleged infringement.
2. National supervisory authorities shall have the duty to investigate, in conjunction with the relevant market surveillance authority if applicable, the alleged infringement and to inform the complainant, within a period of 6 months, of the outcome of the complaint, including the possibility of a judicial remedy pursuant to Article 52e.
2022/03/31
Committee: ITRE
Amendment 550 #
Proposal for a regulation
Article 52 d (new)
Article 52 d
Representation of natural persons and the right for public interest organisations to lodge a complaint with a national supervisory authority
1. Natural and legal persons who consider that their rights under this Regulation have been infringed shall have the right to ask a public interest organisation to lodge a complaint on their behalf with a national competent authority and to exercise on their behalf their rights as referred to in Articles 52c and 52e.
2. A public interest organisation is a not-for-profit body, organisation or association which has been properly established in accordance with the law of a Member State and has statutory objectives which are in the public interest.
3. Public interest organisations shall have the right to lodge complaints with national competent authorities, independently of the mandate of the natural or legal person, if they consider that an AI system has been placed on the market, put into service, or used in a way that infringes this Regulation, or is otherwise in violation of fundamental rights or other aspects of public interest protection, pursuant to Article 67.
4. National supervisory authorities shall have the duty to investigate, in conjunction with the relevant market surveillance authority if applicable, and to respond within a period of 6 months to all complaints made by public interest organisations.
2022/03/31
Committee: ITRE
Amendment 551 #
Proposal for a regulation
Article 52 e (new)
Article 52 e
Right to an effective remedy against the national supervisory authority
1. Without prejudice to any other administrative or non-judicial remedy, each natural or legal person shall have the right to an effective judicial remedy against a legally binding decision of a national supervisory authority concerning them.
2. Without prejudice to any other administrative or non-judicial remedy, each natural and legal person shall have the right to an effective judicial remedy where the national supervisory authority does not handle a complaint or does not inform the person within 6 months of the progress or outcome of the complaint lodged pursuant to Articles 52c and 52d.
3. Proceedings against a national supervisory authority shall be brought before the courts of the Member State where the national supervisory authority is established.
2022/03/31
Committee: ITRE
Amendment 552 #
Proposal for a regulation
Article 52 f (new)
Article 52 f
Right to an effective remedy against a user for the infringement of rights
1. Without prejudice to any available administrative or non-judicial remedy, any natural and legal person shall have the right to an effective judicial remedy against a user where they consider that their rights under this Regulation have been infringed or they have been subject to an AI system in non-compliance with this Regulation.
2. Any natural and legal person who has suffered material or non-material damage due to an infringement of this Regulation shall have the right to receive compensation from the user for the damage suffered.
2022/03/31
Committee: ITRE
Amendment 553 #
Proposal for a regulation
Article 53 – paragraph 1
1. AI regulatory sandboxes established by the Commission in collaboration with one or more Member States competent authorities or the European Data Protection Supervisor shall provide a controlled environment that facilitates the safe development, testing and validation of innovative AI systems for a limited time before their placement on the market or putting into service pursuant to a specific plan. This shall take place under the direct supervision and guidance of the Commission in collaboration with the competent authorities with a view to ensuring compliance with the requirements of this Regulation and, where relevant, other Union and Member States legislation supervised within the sandbox. The Commission shall play a complementary role, allowing those Member States with demonstrated experience with sandboxing to build on their expertise and, on the other hand, assisting and providing technical understanding and resources to those Member States that seek guidance on the set-up and running of these regulatory sandboxes.
2022/03/31
Committee: ITRE
Amendment 554 #
Proposal for a regulation
Article 53 – paragraph 1
1. AI regulatory sandboxes established by one or more Member States competent authorities or the European Data Protection Supervisor, and in collaboration with SMEs, start-ups, enterprises and other innovators, shall provide a controlled environment that facilitates the development, testing and validation of innovative AI systems for a limited time before their placement on the market or putting into service pursuant to a specific plan. For Member States competent authorities or the European Data Protection Supervisor, this shall take place under the direct supervision and guidance by the competent authorities with a view to ensuring compliance with the requirements of this Regulation and, where relevant, other Union and Member States legislation supervised within the sandbox. SMEs, start-ups, enterprises and other innovators shall conduct live experiments for new business models in collaboration with the Member State competent authorities.
2022/03/31
Committee: ITRE
Amendment 558 #
Proposal for a regulation
Article 53 – paragraph 2
2. Member States shall ensure that to the extent the innovative AI systems involve the processing of personal data or otherwise fall under the supervisory remit of other national authorities or competent authorities providing or supporting access to data, the national data protection authorities and those other national authorities are associated to the operation of the AI regulatory sandbox established by one or more Member States competent authorities or the European Data Protection Supervisor. Start-ups, SMEs, enterprises and other innovators may request access to personal data from relevant national authorities to be used in their AI sandbox while ensuring compliance with the requirements of this Regulation and, where relevant, other Union and Member States legislation supervised within the sandbox.
2022/03/31
Committee: ITRE
Amendment 559 #
Proposal for a regulation
Article 53 – paragraph 2
2. The Commission, in collaboration with Member States, shall ensure that to the extent the innovative AI systems involve the processing of personal data or otherwise fall under the supervisory remit of other national authorities or competent authorities providing or supporting access to data, the national data protection authorities and those other national authorities are associated to the operation of the AI regulatory sandbox.
2022/03/31
Committee: ITRE
Amendment 563 #
Proposal for a regulation
Article 53 – paragraph 3
3. The AI regulatory sandboxes shall not affect the supervisory and corrective powers of the competent authorities. Any significant risks to health and safety and fundamental rights identified during the development and testing of AI systems shall result in immediate mitigation and, failing that, in the suspension of the development and testing process until such mitigation takes place.
2022/03/31
Committee: ITRE
Amendment 564 #
Proposal for a regulation
Article 53 – paragraph 5
5. Member States’ competent authorities that have established AI regulatory sandboxes shall coordinate their activities and cooperate within the framework of the European Artificial Intelligence Board. They shall submit annual reports to the Board and the Commission on the results from the implementation of those schemes, including good practices, lessons learnt and recommendations on their setup and, where relevant, on the application of this Regulation and other Union legislation supervised within the sandbox. SMEs, start-ups, enterprises and other innovators are invited to share their good practices, lessons learnt and recommendations on their AI sandboxes with Member State competent authorities.
2022/03/31
Committee: ITRE
Amendment 565 #
Proposal for a regulation
Article 53 – paragraph 5
5. The Commission and Member States’ competent authorities that have established AI regulatory sandboxes shall coordinate their activities and cooperate within the framework of the Commission's AI Regulatory Sandboxing programme. The Commission shall submit annual reports to the European Artificial Intelligence Board on the results from the implementation of those schemes, including good practices, lessons learnt and recommendations on their setup and, where relevant, on the application of this Regulation and other Union legislation supervised within the sandbox.
2022/03/31
Committee: ITRE
Amendment 568 #
Proposal for a regulation
Article 53 – paragraph 6 a (new)
6a. The Commission shall establish an EU AI Regulatory Sandboxing Programme whose modalities, referred to in Article 53(6), shall cover the elements set out in Annex IXa. The Commission shall proactively coordinate with national and local authorities, where relevant.
2022/03/31
Committee: ITRE
Amendment 571 #
Proposal for a regulation
Article 55 – paragraph 1 – point a
(a) provide SME providers, including start-ups, with priority access to the AI regulatory sandboxes established by one or more Member States competent authorities or the European Data Protection Supervisor to the extent that they fulfil the eligibility conditions;
2022/03/31
Committee: ITRE
Amendment 583 #
Proposal for a regulation
Article 55 – paragraph 2 a (new)
2a. Where appropriate, Member States shall find synergies and cooperate via relevant instruments funded by EU programmes, such as the European Digital Innovation Hubs.
2022/03/31
Committee: ITRE
Amendment 590 #
Proposal for a regulation
Article 56 – paragraph 2 – point a
(a) promote and support effective cooperation of the national supervisory authorities and the Commission with regard to matters covered by this Regulation;
2022/03/31
Committee: ITRE
Amendment 600 #
Proposal for a regulation
Article 57 – paragraph 4
4. When relevant, the Board shall invite external experts, in particular a standing expert on fundamental rights, and other observers to attend its meetings and shall hold exchanges with interested third parties to inform its activities to an appropriate extent. To that end the Commission shall facilitate exchanges between the Board and other Union bodies, offices, agencies and advisory groups, and civil society and social partners.
2022/03/31
Committee: ITRE
Amendment 602 #
Proposal for a regulation
Article 58 – paragraph 1 – point c – point iii a (new)
(iiia) on the impacts on fundamental rights and outcomes for different groups in society, including for children and other vulnerable groups.
2022/03/31
Committee: ITRE
Amendment 617 #
Proposal for a regulation
Article 64 – paragraph 1
1. Access to data and documentation in the context of their activities, the market surveillance authorities shall be granted full access to the training, validation and testing datasets used by the provider, including through application programming interfaces (‘API’) or other appropriate technical means and tools enabling remote access.
2022/03/31
Committee: ITRE
Amendment 618 #
Proposal for a regulation
Article 64 – paragraph 2
2. Where necessary to assess the conformity of the high-risk AI system with the requirements set out in Title III, Chapter 2 and upon a reasoned request, the market surveillance authorities shall be granted access to the source code or, if impossible, all related data sets used to train or place the AI system on the market.
2022/03/31
Committee: ITRE
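Amendment 618 foresees access to the source code or, failing that, to the related datasets. One hypothetical way a provider could support such a reasoned request is with verifiable artefacts; the Python sketch below hashes source files and dataset files into an audit bundle. The helper name, paths and file layout are all assumptions for illustration, not anything the Regulation specifies.

```python
import hashlib
from pathlib import Path

def audit_bundle(source_dir: str, dataset_files: list[str]) -> dict:
    """Hypothetical helper: collect SHA-256 digests of source files and
    training datasets so a market surveillance authority can verify that
    the artefacts it inspects match what was placed on the market."""
    def sha256(path: Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    sources = sorted(Path(source_dir).rglob("*.py"))
    return {
        "source_files": {str(p): sha256(p) for p in sources},
        "datasets": {name: sha256(Path(name))
                     for name in dataset_files if Path(name).exists()},
    }

# Hypothetical usage:
# bundle = audit_bundle("model_src/", ["train_set.csv", "validation_set.csv"])
```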
Amendment 630 #
Proposal for a regulation
Article 83 – paragraph 2
2. This Regulation shall apply to the high-risk AI systems, other than the ones referred to in paragraph 1, that have been placed on the market or put into service before [date of application of this Regulation referred to in Article 85(2)], only if, from that date, those systems are subject to substantial modifications as defined in Article 3(23) in their design or intended purpose.
2022/03/31
Committee: ITRE
Amendment 637 #
Proposal for a regulation
Annex III – paragraph 1 – point 2 – point a
(a) AI systems intended to be used as safety components in the management and operation of road traffic, digital infrastructure, and the supply of water, gas, heating and electricity;
2022/03/31
Committee: ITRE
Amendment 639 #
Proposal for a regulation
Annex III – paragraph 1 – point 4 – point b
(b) AI intended to be used for making decisions affecting the initiation, establishment, implementation and termination of an employment relationship, including AI systems intended to support collective legal and regulatory matters, particularly for task allocation and for monitoring and evaluating performance and behaviour of persons in such relationships.
2022/03/31
Committee: ITRE
Amendment 654 #
Proposal for a regulation
Annex IV – paragraph 1 – point 2 – point d
(d) where relevant, the data requirements in terms of datasheets describing the training methodologies and techniques and the training data sets used, including information about the provenance of those data sets, their scope and main characteristics; how the data was obtained and selected; labelling procedures (e.g. for supervised learning), data cleaning methodologies (e.g. outliers detection);
2022/03/31
Committee: ITRE
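Annex IV point 2(d) asks for datasheets covering provenance, scope, main characteristics and data cleaning methodologies such as outlier detection. Below is a minimal Python sketch of generating such a datasheet with a simple z-score outlier flag; the field names and the threshold are illustrative assumptions, and a z-score rule is only one of many possible outlier detection methods.

```python
import numpy as np

def datasheet(X: np.ndarray, provenance: str, labelling: str,
              z_thresh: float = 3.0) -> dict:
    """Hypothetical sketch of the Annex IV point 2(d) data documentation:
    main characteristics of the training set plus a z-score outlier count
    as one possible 'data cleaning methodology (outliers detection)'."""
    mean = X.mean(axis=0)
    std = X.std(axis=0) + 1e-12  # avoid division by zero on constant columns
    outlier_rows = int((np.abs((X - mean) / std) > z_thresh).any(axis=1).sum())
    return {
        "provenance": provenance,
        "labelling_procedure": labelling,
        "n_samples": int(X.shape[0]),
        "n_features": int(X.shape[1]),
        "feature_means": mean.round(3).tolist(),
        "feature_stds": std.round(3).tolist(),
        "rows_flagged_as_outliers": outlier_rows,
    }

X = np.random.default_rng(1).normal(size=(500, 3))
print(datasheet(X, provenance="synthetic demo data", labelling="n/a (unsupervised)"))
```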
Amendment 663 #
Proposal for a regulation
Annex IX a (new)
MODALITIES FOR AN EU AI REGULATORY SANDBOXING PROGRAMME
1. The AI Regulatory Sandboxes shall be part of the EU AI Regulatory Sandboxing Programme (‘sandboxing programme’) to be established by the Commission in collaboration with Member States.
2. The Commission shall play a complementary role, allowing those Member States with demonstrated experience with sandboxing to build on their expertise and the expertise of relevant stakeholders from industry, academia and civil society and, on the other hand, assisting and providing technical understanding and resources to those Member States that seek guidance on the set-up of these regulatory sandboxes.
3. The criteria for access to the regulatory sandbox should be transparent and competitive.
4. Participants in the sandboxing programme, in particular small-scale providers, are granted access to pre-deployment services, such as preliminary registration of their AI system and compliance R&D support services, and to all the other relevant elements of the Union's AI ecosystem and other Digital Single Market initiatives such as Testing & Experimentation Facilities, Digital Hubs and Centres of Excellence, and to other value-adding services such as standardisation documents and certification, consultation and support to conduct impact assessments of the AI systems on fundamental rights, the environment or society at large, an online social platform for the community, contact databases, the existing portal for tenders and grant making, and lists of EU investors.
5. The sandboxing programme shall, in a later development phase, develop and manage two types of regulatory sandboxes: Physical Regulatory Sandboxes for AI systems embedded in physical products or services, and Cyber Regulatory Sandboxes for AI systems operated and used on a stand-alone basis, not embedded in physical products or services.
6. The sandboxing programme shall work with the already established Digital Innovation Hubs in Member States to provide a dedicated point of contact for entrepreneurs to raise enquiries with competent authorities and to seek non-binding guidance on the conformity of innovative products, services or business models embedding AI technologies.
7. One of the objectives of the sandboxing programme is to enable firms' compliance with this Regulation at the design stage of the AI system (‘compliance-by-design’). To do so, the programme shall facilitate the development of software tools and infrastructure for testing, benchmarking, assessing and explaining dimensions of AI systems relevant to sandboxes, such as accuracy, robustness and cybersecurity, as well as minimisation of risks to fundamental rights, the environment and society at large.
8. The sandboxing programme shall be rolled out in a phased fashion, with the various phases launched by the Commission upon success of the previous phase.
9. The sandboxing programme will have a built-in impact assessment procedure to facilitate the review of cost-effectiveness against the agreed-upon objectives. This assessment shall be drafted with input from Member States based on their experiences and shall be included as part of the Annual Report submitted by the Commission to the European Artificial Intelligence Board.
2022/03/31
Committee: ITRE