
83 Amendments of Milan BRGLEZ related to 2021/0106(COD)

Amendment 68 #
Proposal for a regulation
Recital 1
(1) The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the design, development, marketing and use of sustainable and green artificial intelligence in conformity with Union values, while minimising any risk of adverse and discriminatory impacts on people and adverse impacts on the environment. This Regulation pursues a number of overriding reasons of public interest, such as a high level of protection of health, safety and fundamental rights, and the protection of the environment, and it ensures the free movement of AI-based goods and services cross-border, thus preventing Member States from imposing restrictions on the design, development, marketing and use of AI systems, unless explicitly authorised by this Regulation.
2022/01/25
Committee: ENVI
Amendment 71 #
Proposal for a regulation
Recital 2
(2) Artificial intelligence systems (AI systems) can be easily deployed in multiple sectors of the economy and society, including cross border, and circulate throughout the Union. Certain Member States have already explored the adoption of national rules to ensure that artificial intelligence is safe and is developed and used in compliance with fundamental rights obligations. Differing national rules may lead to fragmentation of the internal market and decrease legal certainty for operators that develop or use AI systems. A consistent and high level of protection throughout the Union should therefore be ensured, while divergences hampering the free circulation of AI systems and related products and services within the internal market should be prevented, by laying down uniform obligations for operators and guaranteeing the uniform protection of overriding reasons of public interest and of rights of persons, end users and end recipients throughout the internal market based on Article 114 of the Treaty on the Functioning of the European Union (TFEU). To the extent that this Regulation contains specific rules on the protection of individuals with regard to the processing of personal data concerning restrictions of the use of AI systems for ‘real-time’ remote biometric identification in publicly accessible spaces for the purpose of law enforcement, it is appropriate to base this Regulation, in as far as those specific rules are concerned, on Article 16 of the TFEU. In light of those specific rules and the recourse to Article 16 TFEU, it is appropriate to consult the European Data Protection Board.
2022/01/25
Committee: ENVI
Amendment 74 #
Proposal for a regulation
Recital 3
(3) Artificial intelligence is a fast evolving family of technologies that can contribute to a wide array of economic, environmental and societal benefits across the entire spectrum of industries and social activities. By improving prediction, optimising operations and resource allocation, and personalising digital solutions available for individuals and organisations, the use of artificial intelligence can provide key competitive advantages to companies and support socially and environmentally beneficial outcomes, for example in healthcare, farming and food safety, education and training, infrastructure management, energy, transport and logistics, public services, security, justice, resource and energy efficiency, and climate change mitigation and adaptation.
2022/01/25
Committee: ENVI
Amendment 84 #
Proposal for a regulation
Recital 4
(4) At the same time, depending on the circumstances regarding its specific application and use, artificial intelligence may generate risks and cause harm to public interests and rights that are protected by Union law, including the climate and the environment. Such harm might be material or immaterial.
2022/01/25
Committee: ENVI
Amendment 86 #
Proposal for a regulation
Recital 4 a (new)
(4 a) In terms of the environment, artificial intelligence has a strong potential to solve environmental issues such as reducing resource consumption, promoting decarbonisation, boosting the circular economy, balancing supply and demand in electricity grids or optimising logistic routes. The analysis of large volumes of data can lead to a better understanding of environmental challenges and better monitoring of trends and impacts. The intelligent management of large volumes of information related to the environment also provides solutions for better environmental planning, decision-making and monitoring of environmental threats, and can inform and encourage environmentally sustainable business by providing better information to reorient sustainable decision-making in different business models, thereby improving the efficiency of resource, energy and material use through smart industry initiatives and M2M and IoT technologies.
2022/01/25
Committee: ENVI
Amendment 89 #
Proposal for a regulation
Recital 4 b (new)
(4 b) The predictive analytics capabilities provided by artificial intelligence-based models can support better maintenance of energy systems and infrastructure, as well as anticipate the patterns of society's interaction with natural resources, thus facilitating better resource management. Artificial intelligence also has the potential to contribute to strengthening environmental administration and governance by facilitating administrative decisions related to environmental heritage management, monitoring violations and environmental fraud, and encouraging citizen participation in biodiversity conservation initiatives.
2022/01/25
Committee: ENVI
Amendment 92 #
Proposal for a regulation
Recital 4 c (new)
(4 c) However, despite the high potential of artificial intelligence to offer solutions to the environmental and climate crisis, the design, training and execution of algorithms imply high energy consumption and, consequently, high levels of carbon emissions. These environmental and carbon footprints are expected to increase over time, as the volume of data transferred and stored and the development of AI applications will continue to grow exponentially in the years to come. In order to favour the ecological transition and the reduction of the carbon footprint of artificial intelligence, this Regulation contributes to the promotion of a green and sustainable artificial intelligence and to the consideration of the environmental impact of AI systems throughout their lifecycle.
2022/01/25
Committee: ENVI
Amendment 94 #
Proposal for a regulation
Recital 4 d (new)
(4 d) In terms of health and patients’ rights, AI systems can play a major role in improving the health of individual patients and the performance of public health systems. However, when AI is deployed in the context of health, patients may be exposed to potential specific risks that could lead to physical or psychological harm, for example, when different biases related to age, ethnicity, sex or disabilities in algorithms lead to incorrect diagnoses. The lack of transparency around the functioning of algorithms also makes it difficult to provide patients with the relevant information they need to exercise their rights, such as informed consent. In addition, AI’s reliance on large amounts of data, many of them being personal data, may affect the protection of medical data, due to patients’ limited control over the use of their personal data and the cybersecurity vulnerabilities of AI systems. All of this means that special caution must be taken when AI is applied in clinical or healthcare settings.
2022/01/25
Committee: ENVI
Amendment 95 #
Proposal for a regulation
Recital 5
(5) A Union legal framework laying down harmonised rules on artificial intelligence is therefore needed to foster the design, development, use and uptake of sustainable and green artificial intelligence in the internal market, aligned with the European Green Deal provisions, that at the same time meets a high level of protection of public interests, such as health and safety, the environment and the climate, food security and the protection of fundamental rights, as recognised and protected by Union law. To achieve that objective, rules regulating the placing on the market and putting into service of certain AI systems should be laid down, thus ensuring the smooth functioning of the internal market and allowing those systems to benefit from the principle of free movement of goods and services. By laying down those rules, this Regulation supports the objective of the Union of being a global leader in the development of secure, trustworthy and ethical artificial intelligence, as stated by the European Council[33], and it ensures the protection of ethical principles, as specifically requested by the European Parliament[34]. _________________ 33 European Council, Special meeting of the European Council (1 and 2 October 2020) – Conclusions, EUCO 13/20, 2020, p. 6. 34 European Parliament resolution of 20 October 2020 with recommendations to the Commission on a framework of ethical aspects of artificial intelligence, robotics and related technologies, 2020/2012(INL).
2022/01/25
Committee: ENVI
Amendment 99 #
Proposal for a regulation
Recital 6
(6) The notion of AI system should be clearly defined to ensure legal certainty, while providing the flexibility to accommodate future technological developments. The definition should be based on the key functional characteristics of the software, in particular the ability, for a given set of human-defined objectives, to generate outputs such as content, predictions, recommendations, or decisions which influence the environment with which the system interacts, be it in a physical or digital dimension. AI systems can be designed to operate with varying levels of autonomy and be used on a stand-alone basis or as a component of a product, irrespective of whether the system is physically integrated into the product (embedded) or serves the functionality of the product without being integrated therein (non-embedded). The definition of AI system should be complemented by a list of specific techniques and approaches used for its development, which should be kept up to date in the light of market and technological developments through the ordinary legislative procedure to amend that list.
2022/01/25
Committee: ENVI
Amendment 100 #
Proposal for a regulation
Recital 13
(13) In order to ensure a consistent and high level of protection of public interests as regards health, safety, fundamental rights or the environment, common normative standards for all high-risk AI systems should be established. Those standards should be consistent with the Charter of fundamental rights of the European Union (the Charter) and should be non-discriminatory and in line with the Union’s international trade commitments.
2022/01/25
Committee: ENVI
Amendment 103 #
Proposal for a regulation
Recital 21
(21) Each use of a ‘real-time’ remote biometric identification system in publicly accessible spaces for the purpose of law enforcement should be subject to an express and specific authorisation by a judicial authority or by an independent administrative authority of a Member State. Such authorisation should in principle be obtained prior to the use, except in duly justified situations of urgency, that is, situations where the need to use the systems in question is such as to make it effectively and objectively impossible to obtain an authorisation before commencing the use. In such situations of urgency, the use should be restricted to the absolute minimum necessary and be subject to appropriate safeguards and conditions, as determined in national law and specified in the context of each individual urgent use case by the law enforcement authority itself. In addition, the law enforcement authority should in such situations seek to obtain an authorisation as soon as possible, whilst providing the reasons for not having been able to request it earlier.
2022/01/25
Committee: ENVI
Amendment 106 #
Proposal for a regulation
Recital 27
(27) High-risk AI systems should only be placed on the Union market or put into service if they comply with certain mandatory requirements. Those requirements should ensure that high-risk AI systems available in the Union or whose output is otherwise used in the Union do not pose unacceptable risks to important Union public interests as recognised and protected by Union law. AI systems identified as high-risk should be limited to those that have a significant harmful impact on the health, safety and fundamental rights of persons or the environment in the Union and such limitation minimises any potential restriction to international trade, if any.
2022/01/25
Committee: ENVI
Amendment 110 #
Proposal for a regulation
Recital 28
(28) AI systems could produce adverse outcomes to health and safety of persons or to the environment, in particular when such systems operate as components of products. Consistently with the objectives of Union harmonisation legislation to facilitate the free movement of products in the internal market and to ensure that only safe and otherwise compliant products find their way into the market, it is important that the safety risks that may be generated by a product as a whole due to its digital components, including AI systems, are duly prevented and mitigated. For instance, increasingly autonomous robots, whether in the context of manufacturing or personal assistance and care, should be able to safely operate and perform their functions in complex environments. Similarly, in the health sector where the stakes for life and health are particularly high, increasingly sophisticated diagnostics systems and systems supporting human decisions should be reliable and accurate. The extent of the adverse impact caused by the AI system on the fundamental rights protected by the Charter is of particular relevance when classifying an AI system as high-risk. Those rights include the right to human dignity, respect for private and family life, protection of personal data, freedom of expression and information, freedom of assembly and of association, and non-discrimination, consumer protection, workers’ rights, rights of persons with disabilities, right to an effective remedy and to a fair trial, right of defence and the presumption of innocence, right to good administration. In addition to those rights, it is important to highlight that children have specific rights as enshrined in Article 24 of the EU Charter and in the United Nations Convention on the Rights of the Child (further elaborated in the UNCRC General Comment No. 25 as regards the digital environment), both of which require consideration of the children’s vulnerabilities and provision of such protection and care as necessary for their well-being. The fundamental right to a high level of environmental protection enshrined in the Charter and implemented in Union policies should also be considered when assessing the severity of the harm that an AI system can cause, including in relation to the health and safety of persons or to the environment.
2022/01/25
Committee: ENVI
Amendment 113 #
Proposal for a regulation
Recital 32
(32) As regards stand-alone AI systems, meaning high-risk AI systems other than those that are safety components of products, or which are themselves products, it is appropriate to classify them as high-risk if, in the light of their intended purpose, they pose a high risk of harm to the health, safety or the fundamental rights of persons or the environment, taking into account both the severity of the possible harm and its probability of occurrence and they are used in a number of specifically pre-defined areas specified in the Regulation. The identification of those systems is based on the same methodology and criteria envisaged also for any future amendments of the list of high-risk AI systems.
2022/01/25
Committee: ENVI
Amendment 114 #
Proposal for a regulation
Recital 34
(34) As regards the management and operation of critical infrastructure, it is appropriate to classify as high-risk the AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity, since their failure or malfunctioning may put at risk the life and health of persons or the environment at large scale and lead to appreciable disruptions in the ordinary conduct of social and economic activities.
2022/01/25
Committee: ENVI
Amendment 119 #
Proposal for a regulation
Recital 37
(37) Another area in which the use of AI systems deserves special consideration is the access to and enjoyment of certain essential private and public services and benefits necessary for people to fully participate in society or to improve one’s standard of living. In particular, AI systems used to evaluate the credit score or creditworthiness of natural persons should be classified as high-risk AI systems, since they determine those persons’ access to financial resources or essential services such as housing, electricity, and telecommunication services. AI systems used for this purpose may lead to discrimination of persons or groups and perpetuate historical patterns of discrimination, for example based on racial or ethnic origins, disabilities, age, sexual orientation, or create new forms of discriminatory impacts. Considering the very limited scale of the impact and the available alternatives on the market, it is appropriate to exempt AI systems for the purpose of creditworthiness assessment and credit scoring when put into service by small-scale providers for their own use. Natural persons applying for or receiving public assistance benefits and services from public authorities are typically dependent on those benefits and services and in a vulnerable position in relation to the responsible authorities. If AI systems are used for determining whether such benefits and services should be denied, reduced, revoked or reclaimed by authorities, they may have a significant impact on persons’ livelihood and may infringe their fundamental rights, such as the right to social protection, non-discrimination, human dignity or an effective remedy. Those systems should therefore be classified as high-risk. Nonetheless, this Regulation should not hamper the development and use of innovative approaches in the public administration, which would stand to benefit from a wider use of compliant and safe AI systems, provided that those systems do not entail a high risk to legal and natural persons. Finally, AI systems used to dispatch or establish priority in the dispatching of emergency first response services should also be classified as high-risk since they make decisions in very critical situations for the life and health of persons and their property or the environment.
2022/01/25
Committee: ENVI
Amendment 121 #
Proposal for a regulation
Recital 40 a (new)
(40 a) AI systems not covered by Regulation (EU) 2017/745 with an impact on health or healthcare should be classified as high-risk and be covered by this Regulation. Healthcare is one of the sectors where many AI applications are being deployed in the Union and is a market posing potentially high risks to human health. Regulation (EU) 2017/745 only covers medical devices and software with an intended medical purpose, but excludes many AI applications used in health, such as AI administrative and management systems used by healthcare professionals in hospitals or other healthcare settings and by health insurance companies, and many fitness and health apps which provide AI-powered recommendations. These applications may present new challenges and risks to people because of their health effects or the processing of sensitive health data. In order to control these potential specific risks, which could lead to physical or psychological harm or the misuse of sensitive health data, these AI systems should be classified as high-risk.
2022/01/25
Committee: ENVI
Amendment 125 #
Proposal for a regulation
Recital 43
(43) Requirements should apply to high-risk AI systems as regards the quality of data sets used, technical documentation and record-keeping, transparency and the provision of information to users, human oversight, and robustness, accuracy and cybersecurity. Those requirements are necessary to effectively mitigate the risks for health, safety, fundamental rights, and more widely for the climate and the environment, as applicable in the light of the intended purpose of the system, and no other less trade restrictive measures are reasonably available, thus avoiding unjustified restrictions to trade.
2022/01/25
Committee: ENVI
Amendment 126 #
Proposal for a regulation
Recital 44
(44) High data quality is essential for the performance of many AI systems, especially when techniques involving the training of models are used, with a view to ensure that the high-risk AI system performs as intended and safely and does not become the source of discrimination prohibited by Union law. High quality training, validation and testing data sets require the implementation of appropriate data governance and management practices. Training, validation and testing data sets should be sufficiently relevant, representative and free of errors and complete in view of the intended purpose of the system. They should also have the appropriate statistical properties, including as regards the persons or groups of persons on which the high-risk AI system is intended to be used. In particular, training, validation and testing data sets should take into account, to the extent required in the light of their intended purpose, the features, characteristics or elements that are particular to the specific geographical, behavioural or functional setting or context within which the AI system is intended to be used. In order to protect the right of others from the discrimination that might result from bias in AI systems, that is, to ensure algorithmic non-discrimination, providers should also be able to process special categories of personal data, as a matter of substantial public interest, in order to ensure bias monitoring, detection and correction in relation to high-risk AI systems.
2022/01/25
Committee: ENVI
Amendment 132 #
Proposal for a regulation
Recital 46
(46) Having information on how high-risk AI systems have been designed and developed and how they perform throughout their lifecycle is essential to verify compliance with the requirements under this Regulation. This requires keeping records and the availability of a technical documentation, containing information which is necessary to assess the compliance of the AI system with the relevant requirements. Such information should include the general characteristics, capabilities and limitations of the system, algorithms, data, training, testing and validation processes used as well as documentation on the relevant risk management system. The technical documentation should be kept up to date.
2022/01/25
Committee: ENVI
Amendment 133 #
Proposal for a regulation
Recital 46 a (new)
(46 a) Artificial intelligence should contribute to the European Green Deal and the green transition, be used by governments and businesses to benefit people and the planet, and contribute to the achievement of sustainable development, the preservation of the environment, climate neutrality and circular economy goals. The design, development, deployment and use of AI systems should also minimise and remedy any harm caused to the environment during their lifecycle and across their entire supply chain, in line with Union law. In this regard, in order to enhance sustainability and ecological responsibility, and to design, develop, deploy and use ever greener and more sustainable AI systems, green AI should be encouraged. Green AI proposes to reduce energy consumption by balancing the volume of data needed to train a model, the amount of time to train it and the number of iterations to optimise its parameters, making models more efficient and less carbon-intensive, and by promoting the use of renewable energy sources in the creation and application of these models.
2022/01/25
Committee: ENVI
Amendment 135 #
Proposal for a regulation
Recital 46 b (new)
(46 b) In order to promote the development of a green and sustainable artificial intelligence, as well as to address the needs of providers and product manufacturers to carry out the ecological transition and green transformation, the technical documentation of high-risk AI systems should also include an “energy efficiency and carbon intensity marking”, indicating the energy used in the training and execution of algorithms and the carbon intensity. This will stimulate research into new modelling and running strategies and algorithms that lower energy use and carbon intensity. In this regard, high-risk AI systems that boost the energy efficiency of data storage and computing systems and minimise their own carbon footprint will obtain a “green AI label”. Likewise, non-high-risk AI systems which address global challenges related to the climate and the environment and support the implementation of pertinent initiatives and actions, such as the Paris Agreement, the UN Sustainable Development Goals and the European Green Deal, may also receive the green AI label.
2022/01/25
Committee: ENVI
Amendment 140 #
Proposal for a regulation
Recital 54
(54) The provider should establish a sound quality management system, ensure the accomplishment of the required conformity assessment procedure, draw up the relevant documentation, including the energy consumption and carbon intensity of the system, and establish a robust post-market monitoring system. Public authorities which put into service high-risk AI systems for their own use may adopt and implement the rules for the quality management system as part of the quality management system adopted at a national or regional level, as appropriate, taking into account the specificities of the sector and the competences and organisation of the public authority in question.
2022/01/25
Committee: ENVI
Amendment 141 #
Proposal for a regulation
Recital 55
(55) Where a high-risk AI system that is a safety component of a product which is covered by a relevant New Legislative Framework sectorial legislation is not placed on the market or put into service independently from the product, the manufacturer of the final product as defined under the relevant New Legislative Framework legislation should comply with the obligations of the provider established in this Regulation, including the information about the energy consumption and carbon intensity of the component, and notably ensure that the AI system embedded in the final product complies with the requirements of this Regulation.
2022/01/25
Committee: ENVI
Amendment 142 #
Proposal for a regulation
Recital 58 a (new)
(58 a) Insofar as the Union lacks a charter of digital rights that would provide a reference framework for guaranteeing citizens' rights in the new digital reality and safeguard fundamental rights in the digital landscape, a number of AI-related data protection issues may lead to uncertainties and costs and may hamper the development of AI applications. In this regard, some provisions are included in the text to ensure the explanation, acceptability, surveillance, fairness and transparency of AI systems.
2022/01/25
Committee: ENVI
Amendment 144 #
Proposal for a regulation
Recital 67
(67) High-risk AI systems should bear the CE marking to indicate their conformity with this Regulation so that they can move freely within the internal market as well as the energy efficiency and carbon intensity marking. Member States should not create unjustified obstacles to the placing on the market or putting into service of high-risk AI systems that comply with the requirements laid down in this Regulation and bear the CE marking and the energy efficiency and carbon intensity marking.
2022/01/25
Committee: ENVI
Amendment 145 #
Proposal for a regulation
Recital 68
(68) Under certain conditions, rapid availability of innovative technologies may be crucial for the health and safety of persons, for the protection of the environment and the climate, and for society as a whole. It is thus appropriate that under exceptional reasons of public security or protection of life and health of natural persons, the protection of the environment and the protection of industrial and commercial property, Member States could authorise the placing on the market or putting into service of AI systems which have not undergone a conformity assessment.
2022/01/25
Committee: ENVI
Amendment 147 #
Proposal for a regulation
Recital 71
(71) Artificial intelligence is a rapidly developing family of technologies that requires novel forms of regulatory oversight and a safe space for experimentation, while ensuring responsible innovation and integration of appropriate safeguards and risk mitigation measures. To ensure a legal framework that is innovation-friendly, future-proof, sustainable and resilient to disruption, national competent authorities from one or more Member States should be encouraged to establish artificial intelligence regulatory sandboxes to facilitate the development and testing of innovative AI systems, with particular emphasis on the promotion of sustainable and green AI systems, under strict regulatory oversight before these systems are placed on the market or otherwise put into service.
2022/01/25
Committee: ENVI
Amendment 149 #
Proposal for a regulation
Recital 73 a (new)
(73 a) In order to promote more sustainable and greener innovation, the Commission and Member States should publish guidelines and methodologies for efficient algorithms and provide data and pre-trained models with a view to rationalising training activity. The development of best-practice procedures would also support the identification and subsequent development of solutions to the most pressing environmental challenges of AI systems, including the development of the previously mentioned green AI label.
2022/01/25
Committee: ENVI
Amendment 152 #
Proposal for a regulation
Recital 78
(78) In order to ensure that providers of high-risk AI systems can take into account the experience on the use of high-risk AI systems for improving their systems and the design and development process or can take any possible corrective action in a timely manner, all providers should have a post-market monitoring system in place. This system is also key to ensure that the possible risks emerging from AI systems which continue to ‘learn’ after being placed on the market or put into service can be more efficiently and timely addressed. In this context, providers should also be required to have a system in place to report to the relevant authorities any serious incidents or any breaches to national and Union law protecting fundamental rights resulting from the use of their AI systems. Likewise, civil society organisations and other stakeholders should be enabled to provide input and lodge complaints if the protection of fundamental rights or public interest is at risk.
2022/01/25
Committee: ENVI
Amendment 154 #
Proposal for a regulation
Recital 81
(81) The development of AI systems other than high-risk AI systems in accordance with the requirements of this Regulation may lead to a larger uptake of trustworthy artificial intelligence in the Union. Providers of non-high-risk AI systems should be encouraged to create codes of conduct intended to foster the voluntary application of the mandatory requirements applicable to high-risk AI systems. Providers should also be encouraged to apply on a voluntary basis additional requirements related, for example, to environmental sustainability, energy efficiency and carbon intensity, accessibility to persons with disability, stakeholders’ participation in the design and development of AI systems, and diversity of the development teams. The Commission may develop initiatives, including of a sectorial nature, to facilitate the lowering of technical barriers hindering cross-border exchange of data for AI development, including on data access infrastructure, semantic and technical interoperability of different types of data.
2022/01/25
Committee: ENVI
Amendment 155 #
Proposal for a regulation
Recital 85
(85) In order to ensure that the regulatory framework can be adapted where necessary, the power to adopt acts in accordance with Article 290 TFEU should be delegated to the Commission to amend the techniques and approaches referred to in Annex I to define AI systems, the Union harmonisation legislation listed in Annex II, the high-risk AI systems listed in Annex III, the provisions regarding technical documentation listed in Annex IV, the content of the EU declaration of conformity in Annex V, the provisions regarding the conformity assessment procedures in Annexes VI and VII, the provisions establishing the high-risk AI systems to which the conformity assessment procedure based on assessment of the quality management system and assessment of the technical documentation should apply, and the provisions setting the content and presentation of the information, the methodology, the procedure, the minimum standards and the efficiency scale for the energy efficiency and carbon intensity marking and the green AI label of Article 49a. It is of particular importance that the Commission carry out appropriate consultations during its preparatory work, including at expert level, and that those consultations be conducted in accordance with the principles laid down in the Interinstitutional Agreement of 13 April 2016 on Better Law-Making[58]. In particular, to ensure equal participation in the preparation of delegated acts, the European Parliament and the Council receive all documents at the same time as Member States’ experts, and their experts systematically have access to meetings of Commission expert groups dealing with the preparation of delegated acts. _________________ 58 OJ L 123, 12.5.2016, p. 1.
2022/01/25
Committee: ENVI
Amendment 158 #
Proposal for a regulation
Article 1 – paragraph 1 – point a a (new)
(a a) harmonised rules and procedures to establish an energy efficiency and carbon intensity marking and green labelling, in order to mitigate the environmental impact of AI systems and enable further sustainability;
2022/01/25
Committee: ENVI
Amendment 159 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1 a (new)
(1 a) “sustainable and green artificial intelligence” means an artificial intelligence system that reduces energy consumption by balancing the volume of data needed to train a model, the amount of time to train it and the number of iterations to optimise its parameters, thus reducing its carbon intensity;
2022/01/25
Committee: ENVI
Amendment 160 #
Proposal for a regulation
Article 3 – paragraph 1 – point 14
(14) ‘safety component of a product or system’ means a component of a product or of a system which fulfils a safety function for that product or system or the failure or malfunctioning of which endangers the health and safety of persons or property or causes serious damage to the environment;
2022/01/25
Committee: ENVI
Amendment 163 #
Proposal for a regulation
Article 3 – paragraph 1 – point 24 a (new)
(24 a) “energy efficiency and carbon intensity marking” means a marking by which a provider indicates the carbon footprint of an AI system, calculated by estimating the power consumption of the algorithms’ training and execution and the carbon intensity of producing that energy;
2022/01/25
Committee: ENVI
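A minimal illustrative reading of the definition in Amendment 163, assuming the marking is derived as the product of estimated energy use and the carbon intensity of its supply (the symbols and units below are assumptions for clarity and are not part of the amendment text):

\[
CF \;=\; \left(E_{\mathrm{training}} + E_{\mathrm{execution}}\right) \times CI
\]

where \(E\) is the estimated power consumption of training and executing the algorithms (for example in kWh), \(CI\) is the carbon intensity of producing that energy (for example in kg CO2e per kWh), and \(CF\) is the resulting carbon footprint (in kg CO2e).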
Amendment 164 #
Proposal for a regulation
Article 3 – paragraph 1 – point 24 b (new)
(24 b) “green AI label” means a label by which the least carbon-intensive and most energy-efficient AI systems are recognised and which promotes the techniques and procedures used to achieve better efficiency;
2022/01/25
Committee: ENVI
Amendment 166 #
Proposal for a regulation
Article 4 – paragraph 1
For the amendment of the list of techniques and approaches listed in Annex I, in order to update that list to market and technological developments on the basis of characteristics that are similar to the techniques and approaches listed therein, the ordinary legislative procedure should be followed.
2022/01/25
Committee: ENVI
Amendment 169 #
Proposal for a regulation
Article 5 – paragraph 3 – introductory part
3. As regards paragraphs 1, point (d) and 2, each individual use for the purpose of law enforcement of a ‘real-time’ remote biometric identification system in publicly accessible spaces shall be subject to a prior authorisation granted by a judicial authority or by an independent administrative authority of the Member State in which the use is to take place, issued upon a reasoned request and in accordance with the detailed rules of national law referred to in paragraph 4. However, in a duly justified situation of urgency, the use of the system may be commenced without an authorisation and the authorisation may be requested only during or after the use.
2022/01/25
Committee: ENVI
Amendment 170 #
Proposal for a regulation
Article 5 – paragraph 3 – subparagraph 1
The competent judicial or administrative authority shall only grant the authorisation where it is satisfied, based on objective evidence or clear indications presented to it, that the use of the ‘real-time’ remote biometric identification system at issue is necessary for and proportionate to achieving one of the objectives specified in paragraph 1, point (d), as identified in the request. In deciding on the request, the competent judicial or administrative authority shall take into account the elements referred to in paragraph 2.
2022/01/25
Committee: ENVI
Amendment 172 #
Proposal for a regulation
Article 7 – paragraph 1 – point a
(a) the AI systems are intended to be used in any of the areas listed in points 1 to 9 of Annex III;
2022/01/25
Committee: ENVI
Amendment 174 #
Proposal for a regulation
Article 7 – paragraph 1 – point b
(b) the AI systems pose a risk of harm to the health, safety, or a risk of adverse impact on fundamental rights or the environment, that is, in respect of its severity and probability of occurrence, equivalent to or greater than the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III.
2022/01/25
Committee: ENVI
Amendment 176 #
Proposal for a regulation
Article 7 – paragraph 2 – introductory part
2. When assessing for the purposes of paragraph 1 whether an AI system poses a risk of harm to the health, safety or a risk of adverse impact on fundamental rights or the environment that is equivalent to or greater than the risk of harm posed by the high-risk AI systems already referred to in Annex III, the Commission shall take into account the following criteria:
2022/01/25
Committee: ENVI
Amendment 178 #
Proposal for a regulation
Article 7 – paragraph 2 – point c
(c) the extent to which the use of an AI system has already caused harm to the health, safety or adverse impact on the fundamental rights or the environment or has given rise to significant concerns in relation to the materialisation of such harm or adverse impact, as demonstrated by reports or documented allegations submitted to national competent authorities;
2022/01/25
Committee: ENVI
Amendment 181 #
Proposal for a regulation
Article 7 – paragraph 2 – point g
(g) the extent to which the outcome produced with an AI system is easily reversible, whereby outcomes having an impact on the health or safety of persons or having a serious impact on the environment shall not be considered as easily reversible;
2022/01/25
Committee: ENVI
Amendment 186 #
Proposal for a regulation
Article 10 – paragraph 5
5. To the extent that it is strictly necessary for the purposes of ensuring bias monitoring, detection and correction in relation to the high-risk AI systems and to ensure algorithmic non-discrimination, the providers of such systems may process special categories of personal data referred to in Article 9(1) of Regulation (EU) 2016/679, Article 10 of Directive (EU) 2016/680 and Article 10(1) of Regulation (EU) 2018/1725, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons, including technical limitations on the re-use and use of state-of-the-art security and privacy-preserving measures, such as pseudonymisation, or encryption where anonymisation may significantly affect the purpose pursued.
2022/01/25
Committee: ENVI
Amendment 188 #
Proposal for a regulation
Article 11 – paragraph 1 – subparagraph 1
The technical documentation shall be drawn up in such a way as to demonstrate that the high-risk AI system complies with the requirements set out in this Chapter and to provide national competent authorities and notified bodies with all the necessary information to assess the compliance of the AI system with those requirements, as well as its energy consumption and carbon intensity information. It shall contain, at a minimum, the elements set out in Annex IV.
2022/01/25
Committee: ENVI
Amendment 190 #
Proposal for a regulation
Article 13 – paragraph 2
2. High-risk AI systems shall be accompanied by instructions for use in an appropriate digital format or otherwise that include concise, complete, correct and clear information that is relevant, accessible and comprehensible to users, including in relation to possible risks to fundamental rights and discrimination.
2022/01/25
Committee: ENVI
Amendment 191 #
Proposal for a regulation
Article 13 – paragraph 3 – point b – point iii
(iii) any known or foreseeable circumstance, related to the use of the high-risk AI system in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, which may lead to risks to the health and safety or the environment or fundamental rights;
2022/01/25
Committee: ENVI
Amendment 194 #
Proposal for a regulation
Article 14 – paragraph 2
2. Human oversight shall aim at preventing or minimising the risks to health, safety, fundamental rights and the environment that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, in particular when such risks persist notwithstanding the application of other requirements set out in this Chapter.
2022/01/25
Committee: ENVI
Amendment 199 #
Proposal for a regulation
Article 16 – paragraph 1 – point i
(i) to affix the CE marking and the energy efficiency and carbon intensity marking to their high-risk AI systems to indicate the conformity with this Regulation in accordance with Article 49 and their energy consumption and carbon intensity in accordance with Article 49a, respectively;
2022/01/25
Committee: ENVI
Amendment 201 #
Proposal for a regulation
Article 26 – paragraph 1 – point c
(c) the system bears the required conformity marking and is accompanied by the required concise and clear documentation and instructions of use, including in relation to possible risks to fundamental rights and discrimination.
2022/01/25
Committee: ENVI
Amendment 202 #
Proposal for a regulation
Article 27 – paragraph 1
1. Before making a high-risk AI system available on the market, distributors shall verify that the high-risk AI system bears the required CE conformity marking and the energy efficiency and carbon intensity marking, that it is accompanied by the required concise and clear documentation and instructions of use, including in relation to possible risks to fundamental rights and discrimination, and that the provider and the importer of the system, as applicable, have complied with the obligations set out in this Regulation.
2022/01/25
Committee: ENVI
Amendment 203 #
Proposal for a regulation
Article 30 – paragraph 1
1. Each Member State shall designate or establish a notifying authority responsible for setting up and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring, including the energy efficiency and carbon intensity information.
2022/01/25
Committee: ENVI
Amendment 207 #
Proposal for a regulation
Article 43 – paragraph 2
2. For high-risk AI systems referred to in points 2 to 9 of Annex III, providers shall follow the conformity assessment procedure based on internal control as referred to in Annex VI, which does not provide for the involvement of a notified body. For high-risk AI systems referred to in point 5(b) of Annex III, placed on the market or put into service by credit institutions regulated by Directive 2013/36/EU, the conformity assessment shall be carried out as part of the procedure referred to in Articles 97 to 101 of that Directive.
2022/01/25
Committee: ENVI
Amendment 210 #
Proposal for a regulation
Article 43 – paragraph 6
6. The Commission is empowered to adopt delegated acts to amend paragraphs 1 and 2 in order to subject high-risk AI systems referred to in points 2 to 8 of Annex III to the conformity assessment procedure referred to in Annex VII or parts thereof. The Commission shall adopt such delegated acts taking into account the effectiveness of the conformity assessment procedure based on internal control referred to in Annex VI in preventing or minimising the risks to health and safety, to the environment and to the protection of fundamental rights posed by such systems, as well as the availability of adequate capacities and resources among notified bodies.
2022/01/25
Committee: ENVI
Amendment 211 #
Proposal for a regulation
Article 49 a (new)
Article 49 a
Energy efficiency and carbon intensity marking and green AI label
1. Based on the energy efficiency and carbon intensity information provided following Article 11(1) and Annex IV, high-risk AI systems shall be affixed an energy efficiency and carbon intensity marking which considers the carbon footprint of the system based on its energy consumption and the carbon intensity.
2. The least carbon intensive and most energy efficient AI systems shall also be affixed a Green AI label. Non-high-risk AI systems aimed at supporting the green transition may also be affixed a Green AI label upon presentation of the energy efficiency and carbon intensity information by the provider.
3. The Commission is empowered to adopt delegated acts in accordance with Article 73 to supplement paragraphs 1 and 2 of this Article in order to specify the content and presentation of the information to be disclosed pursuant to those paragraphs, including the methodology to be used in order to comply with them, the procedure, the minimum standards and the efficiency scale, taking into account the obligations and procedures established pursuant to this Regulation, including the structures and the notifying authorities and notified bodies. The Commission shall adopt that delegated act within a year of the entry into force of this Regulation.
4. The obligation to provide the energy efficiency and carbon intensity information shall not become effective until the adoption of that delegated act.
2022/01/25
Committee: ENVI
Amendment 212 #
Proposal for a regulation
Article 52 – paragraph 1
1. Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed that they are interacting with an AI system, especially in the healthcare sector, unless this is obvious from the circumstances and the context of use. This obligation shall not apply to AI systems authorised by law to detect, prevent, investigate and prosecute criminal offences, unless those systems are available for the public to report a criminal offence.
2022/01/25
Committee: ENVI
Amendment 213 #
Proposal for a regulation
Article 52 – paragraph 3 a (new)
3 a. Recipients of an AI system in the domain of healthcare shall be informed of their interaction with an AI system.
2022/01/25
Committee: ENVI
Amendment 214 #
Proposal for a regulation
Article 52 – paragraph 3 b (new)
3 b. Public and administrative authorities which adopt decisions with the assistance of AI systems shall provide a clear and intelligible explanation which shall be accessible for persons with disabilities and other vulnerable groups.
2022/01/25
Committee: ENVI
Amendment 216 #
Proposal for a regulation
Article 53 – paragraph 3
3. The AI regulatory sandboxes shall not affect the supervisory and corrective powers of the competent authorities. Any significant risks to health and safety, to fundamental rights or to the environment identified during the development and testing of such systems shall result in immediate mitigation and, failing that, in the suspension of the development and testing process until such mitigation takes place.
2022/01/25
Committee: ENVI
Amendment 220 #
Proposal for a regulation
Article 54 – paragraph 1 – point a – point ii
(ii) public safety and public health, including disease prevention, control and treatment, and the health challenges in relation to the inter-linkage between human and animal health, in particular zoonotic diseases;
2022/01/25
Committee: ENVI
Amendment 224 #
Proposal for a regulation
Article 54 – paragraph 1 – point a – point iii
(iii) a high level of protection and improvement of the quality of the environment, with particular emphasis on the three global environmental challenges: climate change, biodiversity loss and pollution;
2022/01/25
Committee: ENVI
Amendment 230 #
Proposal for a regulation
Article 56 – paragraph 2 – point c a (new)
(c a) assist the Commission in the field of international cooperation in artificial intelligence for matters covered by this Regulation.
2022/01/25
Committee: ENVI
Amendment 234 #
Proposal for a regulation
Article 57 – paragraph 4
4. The Board may invite external experts and observers to attend its meetings and may hold exchanges with interested third parties from a wide array of organisations to inform its activities to an appropriate extent. To that end the Commission may facilitate exchanges between the Board and other Union bodies, offices, agencies and advisory groups.
2022/01/25
Committee: ENVI
Amendment 238 #
Proposal for a regulation
Article 59 – paragraph 4
4. Member States shall ensure that national competent authorities are provided with adequate financial and human resources to fulfil their tasks under this Regulation. In particular, national competent authorities shall have a sufficient number of personnel permanently available whose competences and expertise shall include an in-depth understanding of artificial intelligence technologies, data and data computing, fundamental rights, environment, health and safety risks and knowledge of existing standards and legal requirements.
2022/01/25
Committee: ENVI
Amendment 244 #
Proposal for a regulation
Article 65 – paragraph 1
1. AI systems presenting a risk shall be understood as a product presenting a risk defined in Article 3, point 19 of Regulation (EU) 2019/1020 insofar as risks to the health or safety, to the protection of fundamental rights of persons or to the environment are concerned.
2022/01/25
Committee: ENVI
Amendment 245 #
Proposal for a regulation
Article 65 – paragraph 1 a (new)
1 a. Where the protection of fundamental rights or public interest is at risk, Member States need to ensure procedures for civil society organisations and other stakeholders to be able to submit input and lodge complaints to the market surveillance authority of a Member State or to the national public authorities or bodies which supervise or enforce the respect of obligations under Union law protecting fundamental rights in relation to the use of high-risk AI systems referred to in Annex III.
2022/01/25
Committee: ENVI
Amendment 246 #
Proposal for a regulation
Article 65 – paragraph 2 – introductory part
2. Where the market surveillance authority of a Member State has sufficient reasons to consider that an AI system presents a risk as referred to in paragraph 1, they shall carry out an evaluation of the AI system concerned in respect of its compliance with all the requirements and obligations laid down in this Regulation. When risks to the protection of fundamental rights are present, the market surveillance authority ex-officio or following a complaint by civil society organisations or other stakeholders shall also inform the relevant national public authorities or bodies referred to in Article 64(3). The relevant operators shall cooperate as necessary with the market surveillance authorities and the other national public authorities or bodies referred to in Article 64(3).
2022/01/25
Committee: ENVI
Amendment 247 #
Proposal for a regulation
Article 67 – paragraph 1
1. Where, having performed an evaluation under Article 65, the market surveillance authority of a Member State finds that although an AI system is in compliance with this Regulation, it presents a risk to the health or safety of persons, to the environment, to the compliance with obligations under Union or national law intended to protect fundamental rights or to other aspects of public interest protection, it shall require the relevant operator to take all appropriate measures to ensure that the AI system concerned, when placed on the market or put into service, no longer presents that risk, to withdraw the AI system from the market or to recall it within a reasonable period, commensurate with the nature of the risk, as it may prescribe.
2022/01/25
Committee: ENVI
Amendment 249 #
Proposal for a regulation
Article 68 – paragraph 1 – point b
(b) the conformity marking or the energy efficiency and carbon intensity marking has not been affixed;
2022/01/25
Committee: ENVI
Amendment 251 #
Proposal for a regulation
Article 69 – paragraph 2
2. The Commission and the Board shall encourage and facilitate the drawing up of codes of conduct intended to foster the voluntary application to AI systems of requirements related, for example, to environmental sustainability, energy efficiency and carbon intensity, accessibility for persons with a disability, stakeholders’ participation in the design and development of the AI systems and diversity of development teams, on the basis of clear objectives and key performance indicators to measure the achievement of those objectives.
2022/01/25
Committee: ENVI
Amendment 252 #
Proposal for a regulation
Article 73 – paragraph 2
2. The delegation of power referred to in Article 4, Article 7(1), Article 11(3), Article 43(5) and (6), Article 48(5) and Article 49a(3) shall be conferred on the Commission for an indeterminate period of time from [entering into force of the Regulation].
2022/01/25
Committee: ENVI
Amendment 253 #
Proposal for a regulation
Article 73 – paragraph 3
3. The delegation of power referred to in Article 4, Article 7(1), Article 11(3), Article 43(5) and (6), Article 48(5) and Article 49a(3) may be revoked at any time by the European Parliament or by the Council. A decision of revocation shall put an end to the delegation of power specified in that decision. It shall take effect the day following that of its publication in the Official Journal of the European Union or at a later date specified therein. It shall not affect the validity of any delegated acts already in force.
2022/01/25
Committee: ENVI
Amendment 254 #
Proposal for a regulation
Article 73 – paragraph 5
5. Any delegated act adopted pursuant to Article 4, Article 7(1), Article 11(3), Article 43(5) and (6), Article 48(5) and Article 49a(3) shall enter into force only if no objection has been expressed by either the European Parliament or the Council within a period of three months of notification of that act to the European Parliament and the Council or if, before the expiry of that period, the European Parliament and the Council have both informed the Commission that they will not object. That period shall be extended by three months at the initiative of the European Parliament or of the Council.
2022/01/25
Committee: ENVI
Amendment 260 #
Proposal for a regulation
Annex III – paragraph 1 – point 8 a (new)
8 a. Health, health care, long-term care and health insurance:
(a) AI systems not covered by Regulation (EU) 2017/745 intended to be used in the health, health care and long-term care sectors that have indirect and direct effects on health or that use sensitive health data.
(b) AI administrative and management systems used by healthcare professionals in hospitals and other healthcare settings and by health insurance companies that process people’s sensitive health data.
2022/01/25
Committee: ENVI
Amendment 263 #
Proposal for a regulation
Annex IV – paragraph 1 – point 1 – point g
(g) clear and concise instructions of use for the user, including in relation to possible risks to fundamental rights and discrimination, and, where applicable, installation instructions;
2022/01/25
Committee: ENVI
Amendment 268 #
Proposal for a regulation
Annex IV – paragraph 1 – point 3 a (new)
3 a. Detailed information about the carbon footprint and the energy efficiency of the AI system, in particular with regard to hardware development, algorithm design and training processes, and the systematic analysis of the energy consumption of the applications being run.
2022/01/25
Committee: ENVI
Amendment 862 #
Proposal for a regulation
Article 2 – paragraph 2 a (new)
2 a. AI systems likely to interact with or impact on children shall be considered high-risk for this group;
2022/06/13
Committee: IMCOLIBE
Amendment 1747 #
Proposal for a regulation
Article 10 a (new)
Article 10 a
Risk management system for AI systems likely to interact with children
AI systems likely to interact with or impact on children shall implement a risk management system addressing content, contact, conduct and contract risks to children;
2022/06/13
Committee: IMCOLIBE
Amendment 2710 #
Proposal for a regulation
Article 65 – paragraph 1 a (new)
1 a. When AI systems are likely to interact with or impact on children, the precautionary principle shall apply.
2022/06/13
Committee: IMCOLIBE
Amendment 2712 #
Proposal for a regulation
Article 65 – paragraph 2 – introductory part
2. Where the market surveillance authority of a Member State has sufficient reasons to consider that an AI system presents a risk as referred to in paragraph 1, they shall carry out an evaluation of the AI system concerned in respect of its compliance with all the requirements and obligations laid down in this Regulation. When risks to the protection of fundamental rights are present, the market surveillance authority shall also inform the relevant national public authorities or bodies referred to in Article 64(3). The relevant operators shall cooperate as necessary with the market surveillance authorities and the other national public authorities or bodies referred to in Article 64(3). Where there is sufficient reason to consider that an AI system exploits the vulnerabilities of children or violates their rights intentionally or unintentionally, the market surveillance authority shall have the duty to investigate the design goals, data inputs, model selection, implementation and outcomes of the AI system, and the burden of proof shall be on the operator or operators of that system to demonstrate compliance with the provisions of this Regulation. The relevant operators shall cooperate as necessary with the market surveillance authorities and the other national public authorities or bodies referred to in Article 64(3), including by providing access to personnel, documents, internal communications, code, data samples and on-platform testing as necessary. Where, in the course of its evaluation, the market surveillance authority finds that the AI system does not comply with the requirements and obligations laid down in this Regulation, it shall without delay require the relevant operator to take all appropriate corrective actions to bring the AI system into compliance, to withdraw the AI system from the market, or to recall it within a reasonable period, commensurate with the nature of the risk, as it may prescribe. The corrective action can also be applied to AI systems in other products or services judged to be similar in their objectives, design or impact.
2022/06/13
Committee: IMCOLIBE