
73 Amendments of Isabella TOVAGLIERI related to 2021/0106(COD)

Amendment 145 #
Proposal for a regulation
Recital 6
(6) The notion of AI system must be clearly defined to ensure legal certainty, while providing the flexibility to accommodate future technological developments. The definition should be based on the key functional characteristics of AI software, distinguishing it from more traditional software systems and modelling approaches such as logistic regression and other techniques that are similarly transparent and capable of being explained and interpreted. In particular, for the purposes of this Regulation, AI systems should be understood as having the ability, on the basis of machine and/or human-based data and inputs, to deduce how to achieve a given set of human-defined objectives through learning, reasoning or modelling, and to generate specific outputs in the form of content, for generative AI systems (such as text, video or images), and predictions, recommendations, or decisions which influence the environment with which the system interacts, in both a physical and digital dimension. For the purposes of this Regulation, AI systems can be designed that must follow an approach with limited explanations and operate with a very high level of autonomy. These systems may be used on an autonomous basis or as a component of a product, irrespective of whether the system is physically integrated into the product (embedded) or serves the functionality of the product without being integrated therein (non-embedded). The definition of AI system should be accompanied by a list of specific techniques and approaches used for its development, which should be kept up-to-date in the light of technological developments and developments in the market through the adoption of delegated acts by the Commission to amend that list.
2022/03/31
Committee: ITRE
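By way of illustration of the distinction this recital draws between AI systems and transparent techniques such as logistic regression, the following sketch fits a logistic regression whose entire decision rule can be read off from its coefficients; the data and features are invented for the example and the code is not part of the amendment.

```python
# Illustrative only: a logistic regression is "transparent and capable of
# being explained" in the recital's sense because its decision rule is a
# fixed, inspectable linear formula rather than an opaque learned model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                   # two hypothetical input features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # known ground-truth rule

model = LogisticRegression().fit(X, y)

# The whole decision logic is two coefficients and an intercept:
# p(y=1|x) = sigmoid(w1*x1 + w2*x2 + b), so each feature's influence is explicit.
print("coefficients:", model.coef_[0], "intercept:", model.intercept_[0])
```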
Amendment 152 #
Proposal for a regulation
Recital 13
(13) In order to ensure a consistent and high level of protection of public interests as regards health, safety and fundamental rights, common normative standards for all high-risk AI systems should be established. Those standards should be consistent with the Charter of Fundamental Rights of the European Union (the Charter) and should be non-discriminatory and in line with the Union’s international trade commitments. However, with regard to the risk management system for AI systems considered to be high-risk, the EU’s harmonisation legislation should focus on the essential requirements and leave their technical implementation to be governed by voluntary product-specific and cutting-edge standards developed by the stakeholders. It is therefore desirable for European legislation to focus on the desired outcome of the risk management and evaluation systems, and to expressly leave industry the task of designing its systems and tailoring them to its internal operations and structures, particularly by developing cutting-edge standardisation systems.
2022/03/31
Committee: ITRE
Amendment 195 #
Proposal for a regulation
Recital 29
(29) As regards high-risk AI systems that are safety components of products or systems, or which are themselves products or systems falling within the scope of Regulation (EC) No 300/2008 of the European Parliament and of the Council (39), Regulation (EU) No 167/2013 of the European Parliament and of the Council (40), Regulation (EU) No 168/2013 of the European Parliament and of the Council (41), Directive 2014/90/EU of the European Parliament and of the Council (42), Directive (EU) 2016/797 of the European Parliament and of the Council (43), Regulation (EU) 2018/858 of the European Parliament and of the Council (44), Regulation (EU) 2018/1139 of the European Parliament and of the Council (45), and Regulation (EU) 2019/2144 of the European Parliament and of the Council (46), it is appropriate to amend those acts to ensure that the Commission takes into account, on the basis of the technical and regulatory specificities of each sector, and without interfering with existing governance, conformity assessment and enforcement mechanisms and authorities established therein, the mandatory requirements for high-risk AI systems laid down in this Regulation when adopting any relevant future delegated or implementing acts on the basis of those acts. In addition, effective standardisation rules are needed to make the requirements of this Regulation operational. The European institutions, and first and foremost the Commission, should, together with enterprises, identify the AI sectors where there is the greatest need for standardisation, to avoid fragmentation of the market and maintain and further strengthen the integration of our European Standardisation System (ESS) within the International Standardisation System (ISO, IEC).
_________________
(39) Regulation (EC) No 300/2008 of the European Parliament and of the Council of 11 March 2008 on common rules in the field of civil aviation security and repealing Regulation (EC) No 2320/2002 (OJ L 97, 9.4.2008, p. 72).
(40) Regulation (EU) No 167/2013 of the European Parliament and of the Council of 5 February 2013 on the approval and market surveillance of agricultural and forestry vehicles (OJ L 60, 2.3.2013, p. 1).
(41) Regulation (EU) No 168/2013 of the European Parliament and of the Council of 15 January 2013 on the approval and market surveillance of two- or three-wheel vehicles and quadricycles (OJ L 60, 2.3.2013, p. 52).
(42) Directive 2014/90/EU of the European Parliament and of the Council of 23 July 2014 on marine equipment and repealing Council Directive 96/98/EC (OJ L 257, 28.8.2014, p. 146).
(43) Directive (EU) 2016/797 of the European Parliament and of the Council of 11 May 2016 on the interoperability of the rail system within the European Union (OJ L 138, 26.5.2016, p. 44).
(44) Regulation (EU) 2018/858 of the European Parliament and of the Council of 30 May 2018 on the approval and market surveillance of motor vehicles and their trailers, and of systems, components and separate technical units intended for such vehicles, amending Regulations (EC) No 715/2007 and (EC) No 595/2009 and repealing Directive 2007/46/EC (OJ L 151, 14.6.2018, p. 1).
(45) Regulation (EU) 2018/1139 of the European Parliament and of the Council of 4 July 2018 on common rules in the field of civil aviation and establishing a European Union Aviation Safety Agency, and amending Regulations (EC) No 2111/2005, (EC) No 1008/2008, (EU) No 996/2010, (EU) No 376/2014 and Directives 2014/30/EU and 2014/53/EU of the European Parliament and of the Council, and repealing Regulations (EC) No 552/2004 and (EC) No 216/2008 of the European Parliament and of the Council and Council Regulation (EEC) No 3922/91 (OJ L 212, 22.8.2018, p. 1).
(46) Regulation (EU) 2019/2144 of the European Parliament and of the Council of 27 November 2019 on type-approval requirements for motor vehicles and their trailers, and systems, components and separate technical units intended for such vehicles, as regards their general safety and the protection of vehicle occupants and vulnerable road users, amending Regulation (EU) 2018/858 of the European Parliament and of the Council and repealing Regulations (EC) No 78/2009, (EC) No 79/2009 and (EC) No 661/2009 of the European Parliament and of the Council and Commission Regulations (EC) No 631/2009, (EU) No 406/2010, (EU) No 672/2010, (EU) No 1003/2010, (EU) No 1005/2010, (EU) No 1008/2010, (EU) No 1009/2010, (EU) No 19/2011, (EU) No 109/2011, (EU) No 458/2011, (EU) No 65/2012, (EU) No 130/2012, (EU) No 347/2012, (EU) No 351/2012, (EU) No 1230/2012 and (EU) 2015/166 (OJ L 325, 16.12.2019, p. 1).
2022/03/31
Committee: ITRE
Amendment 196 #
Proposal for a regulation
Recital 29 a (new)
(29a) To demonstrate that the characteristics of a high-risk AI system conform to the requirements set out in Chapter 2 of Title III, it must be possible to conduct internal controls and use harmonised standards based on agreement. It is desirable for the European institutions, and first and foremost the Commission, to do more to promote alignment with existing international standardisation activities and with the certifications issued as part of the EU information security scheme. However, unlike the procedure to assess product conformity, where assessment infrastructure is in place, the relevant competence for auditing autonomous AI systems is still being developed. Moreover, because of the specific technological features of AI, it is possible that the competent authorities may encounter difficulties in verifying the conformity of some AI systems with existing legislation. It is therefore necessary for conformity assessment mechanisms to be developed with flexibility, so that due account may be taken of the infrastructure gaps, and disparities in application may be avoided in the single market.
2022/03/31
Committee: ITRE
Amendment 199 #
Proposal for a regulation
Recital 33
(33) Technical inaccuracies of AI systems intended for the remote biometric identification of natural persons can lead to biased results and entail discriminatory effects. This is particularly relevant when it comes to age, ethnicity, sex or disabilities. Therefore, ‘real-time’ and ‘post’ remote biometric identification systems should be classified as high-risk. In view of the risks that they pose, both types of remote biometric identification systems should be subject to specific requirements on logging capabilities and human supervision.
2022/03/31
Committee: ITRE
Amendment 210 #
Proposal for a regulation
Recital 43
(43) Requirements should apply to high-risk AI systems as regards the quality of data sets used, technical documentation and record-keeping, transparency and the provision of information to users, human supervision, and robustness, accuracy and cybersecurity. Those requirements are necessary to effectively mitigate the risks for health, safety and fundamental rights, as applicable in the light of the intended purpose of the system, and no other less trade restrictive measures are reasonably available, thus avoiding unjustified restrictions to trade.
2022/03/31
Committee: ITRE
Amendment 217 #
Proposal for a regulation
Recital 48
(48) Human supervision must remain the basic ethical principle for the development and distribution of high-risk AI, since it guarantees transparency, confidentiality and data protection, and safeguards against discrimination. However, it is vital to maintain a balance between meaningful human supervision and the efficiency of the system, in order not to compromise the benefits offered by these systems in sectors such as information security analysis, threat analysis and incident response processes. High-risk AI systems should be designed and developed in such a way that natural persons can oversee their functioning. For this purpose, appropriate human supervision measures should be identified by the provider of the system before its placing on the market or putting into service. In particular, where appropriate, such measures should guarantee that the system is subject to in-built operational constraints that cannot be overridden by the system itself and is responsive to the human operator, and that the natural persons to whom human supervision has been assigned have the necessary competence, training and authority to carry out that role.
2022/03/31
Committee: ITRE
Amendment 239 #
Proposal for a regulation
Recital 71
(71) Artificial intelligence is a rapidly developing family of technologies that requires novel forms of regulatory supervision and a safe space for experimentation, while ensuring responsible innovation and integration of appropriate safeguards and risk mitigation measures. To ensure a legal framework that is innovation-friendly, future-proof and resilient to disruption, national competent authorities from one or more Member States should be encouraged to establish artificial intelligence regulatory sandboxes to facilitate the development and testing of innovative AI systems under strict regulatory supervision before these systems are placed on the market or otherwise put into service.
2022/03/31
Committee: ITRE
Amendment 241 #
Proposal for a regulation
Recital 72
(72) The objectives of the regulatory sandboxes should be to foster AI innovation by establishing a controlled experimentation and testing environment in the development and pre-marketing phase with a view to ensuring compliance of the innovative AI systems with this Regulation and other relevant Union and Member States legislation; to enhance legal certainty for innovators and the competent authorities’ supervision and understanding of the opportunities, emerging risks and the impacts of AI use, and to accelerate access to markets, including by removing barriers for small and medium enterprises (SMEs) and start-ups. To permit effective participation by these categories in regulatory sandboxes, compliance costs must be kept to an absolute minimum. To ensure uniform implementation across the Union and economies of scale, it is appropriate to establish common rules for the regulatory sandboxes’ implementation and a framework for cooperation between the relevant authorities involved in the supervision of the sandboxes. This Regulation should provide the legal basis for the use of personal data collected for other purposes for developing certain AI systems in the public interest within the AI regulatory sandbox, in line with Article 6(4) of Regulation (EU) 2016/679, and Article 6 of Regulation (EU) 2018/1725, and without prejudice to Article 4(2) of Directive (EU) 2016/680. Participants in the sandbox should ensure appropriate safeguards and cooperate with the competent authorities, including by following their guidance and acting expeditiously and in good faith to mitigate any high risks to safety and fundamental rights that may arise during the development and experimentation in the sandbox. The conduct of the participants in the sandbox should be taken into account when competent authorities decide whether to impose an administrative fine under Article 83(2) of Regulation 2016/679 and Article 57 of Directive 2016/680.
2022/03/31
Committee: ITRE
Amendment 244 #
Proposal for a regulation
Recital 72
(72) The objectives of the regulatory sandboxes should be to foster AI innovation by establishing a controlled experimentation and testing environment in the development and pre-marketing phase with a view to ensuring compliance of the innovative AI systems with this Regulation and other relevant Union and Member States legislation; to enhance legal certainty for innovators and the competent authorities’ supervision and understanding of the opportunities, emerging risks and the impacts of AI use, and to accelerate access to markets, including by removing barriers for small and medium enterprises (SMEs) and start-ups. To ensure uniform implementation across the Union and economies of scale, it is appropriate to establish common rules for the regulatory sandboxes’ implementation and a framework for cooperation between the relevant authorities involved in the supervision of the sandboxes. This Regulation should provide the legal basis for the use of personal data collected for other purposes for developing certain AI systems in the public interest within the AI regulatory sandbox, in line with Article 6(4) of Regulation (EU) 2016/679, and Article 6 of Regulation (EU) 2018/1725, and without prejudice to Article 4(2) of Directive (EU) 2016/680. Participants in the sandbox should ensure appropriate safeguards and cooperate with the competent authorities, including by following their guidance and acting expeditiously and in good faith to mitigate any high risks to safety and fundamental rights that may arise during the development and experimentation in the sandbox. The conduct of the participants in the sandbox should be taken into account when competent authorities decide whether to impose an administrative fine under Article 83(2) of Regulation 2016/679 and Article 57 of Directive 2016/680.
2022/03/31
Committee: ITRE
Amendment 245 #
Proposal for a regulation
Recital 72 a (new)
(72a) It is desirable for the establishment of regulatory sandboxes, which is currently left to the discretion of Member States, to be made obligatory, with properly established criteria, to ensure both the effectiveness of the system and easier access for enterprises, particularly SMEs. It is also necessary for research enterprises and institutions to be involved in developing the conditions for the creation of regulatory sandboxes.
2022/03/31
Committee: ITRE
Amendment 248 #
Proposal for a regulation
Recital 74
(74) In order to minimise the risks to implementation resulting from lack of knowledge and expertise in the market as well as to facilitate compliance of providers and notified bodies with their obligations under this Regulation, the AI-on-demand platform, the European Digital Innovation Hubs and the Testing and Experimentation Facilities established by the Commission and the Member States at national or EU level and the national cybersecurity agencies should possibly contribute to the implementation of this Regulation. Within their respective mission and fields of competence, they may provide in particular technical and scientific support to providers and notified bodies.
2022/03/31
Committee: ITRE
Amendment 254 #
Proposal for a regulation
Recital 89 a (new)
(89a) As things currently stand, the AI sector has a strategic international dimension. In order to achieve the objectives and ambitions set out in this Regulation and strengthen the European approach to AI internationally, it is a matter of urgency that thinking in this area, including as a result of this legislation, should not remain solely within the European Union. If the EU wishes to be at the forefront of creating democratic and inclusive regulation that respects the rights of individuals, including those outside Europe’s borders, it should seek to be a benchmark in this sphere for non-EU countries too. That would serve to safeguard the competitiveness of the principal actors of the market and spread practices similar to those in this Regulation on a global scale. This Regulation’s effectiveness would be strengthened if the European Union were able to play a key role at international level too.
2022/03/31
Committee: ITRE
Amendment 271 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1
(1) ‘artificial intelligence system’ (AI system) means a system that (i) receives machine-based and/or human-based data and inputs, (ii) adopts an approach with limited explanations that infers how to achieve a given set of human-defined objectives through learning, reasoning or modelling implemented using the techniques and approaches listed in Annex I, and (iii) generates outputs with a very high level of autonomy in the form of content (generative AI systems), predictions, recommendations, or decisions influencing the environments they interact with;
2022/03/31
Committee: ITRE
Amendment 278 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1 – point a (new)
(a) ‘AI system used in an advisory capacity’ means an AI system in which the final decision is taken by a human.
2022/03/31
Committee: ITRE
Amendment 279 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1 – point b (new)
(b) ‘AI system with decision-making capacity’ means an AI system with the capacity to model decisions in a repeatable manner, without human supervision.
2022/03/31
Committee: ITRE
Amendment 309 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 a (new)
(44a) ‘Systems for identifying and categorising behaviour and cognitive distortions of natural persons’ means AI systems designed to be used for emotional calculation and psychographic analysis applications, Machine Learning and Affective Computing applications that use sensitive data from different sources, such as wearable smart devices, sensors, cameras or a person’s interactions on the internet, and that are able to evaluate and use emotions, psychological conditions and behavioural characteristics such as values and beliefs with the aim of assessing and using the cognitive distortions of natural persons. This includes, among other things, the application of Sentiment Analysis techniques and AI Nudging and Sludging.
2022/03/31
Committee: ITRE
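By way of illustration of the ‘Sentiment Analysis techniques’ this definition refers to, the following minimal sketch scores text against a hand-made lexicon; the lexicon and weights are invented for the example and bear no relation to any real system covered by the definition.

```python
# Minimal lexicon-based sentiment scoring, a toy instance of the
# "Sentiment Analysis techniques" named in the definition. The lexicon
# and weights below are invented for illustration only.
SENTIMENT_LEXICON = {"good": 1.0, "great": 2.0, "bad": -1.0, "awful": -2.0}

def sentiment_score(text: str) -> float:
    """Sum the lexicon weights of known words; >0 positive, <0 negative."""
    words = text.lower().split()
    return sum(SENTIMENT_LEXICON.get(w, 0.0) for w in words)

print(sentiment_score("the service was good but the wait was awful"))  # -1.0
```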
Amendment 353 #
Proposal for a regulation
Article 6 – paragraph 2
2. In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall also be considered high-risk. In the event of uncertainty regarding the classification of the AI system, the supplier must deem the AI system to be high-risk if its use or application poses a risk of physical or non-physical harm to health and safety or a risk of an adverse impact to the fundamental rights of natural persons, groups of individuals or society as a whole, as set out in Article 7(2).
2022/03/31
Committee: ITRE
Amendment 370 #
Proposal for a regulation
Article 7 – paragraph 2 – point e
(e) the extent to which potentially harmed or adversely impacted persons are dependent on the outcome produced with an AI system, with no distinctions between AI systems with an advisory or decision-making purpose, in particular because for practical or legal reasons it is not reasonably possible to opt-out from that outcome;
2022/03/31
Committee: ITRE
Amendment 403 #
Proposal for a regulation
Article 10 – paragraph 2 – introductory part
2. The training, validation and testing of data sets and the AI applications based on them shall be subject to appropriate data governance and management practices. Those practices shall concern in particular,
2022/03/31
Committee: ITRE
Amendment 412 #
Proposal for a regulation
Article 10 – paragraph 2 – point f
(f) examination in view of possible biases that are likely to affect health and safety of persons, lead to discrimination prohibited by Union law or have some other impact on fundamental rights;
2022/03/31
Committee: ITRE
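The ‘examination in view of possible biases’ required by point (f) can be illustrated with a simple comparison of outcome rates across groups; the column names and the four-fifths threshold in the sketch below are assumptions chosen for the example, not requirements of the Regulation.

```python
# Hedged sketch of a bias examination: compare a model's positive-outcome
# rate across a protected attribute. The 0.8 cut-off (the common
# "four-fifths rule") is an illustrative convention, not a legal standard.
import pandas as pd

df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0],
})

rates = df.groupby("group")["approved"].mean()
disparate_impact = rates.min() / rates.max()
print(rates.to_dict(), "disparate impact ratio:", round(disparate_impact, 2))
if disparate_impact < 0.8:
    print("flag for review: possible bias against group", rates.idxmin())
```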
Amendment 422 #
Proposal for a regulation
Article 10 – paragraph 3
3. Training, validation and testing data sets shall be relevant, representative, free of errors and complete, taking into account the degree of variability within data sets. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used. These characteristics of the data sets may be met at the level of individual data sets or a combination thereof.
2022/03/31
Committee: ITRE
Amendment 427 #
Proposal for a regulation
Article 10 – paragraph 5
5. To the extent that it is strictly necessary for the purposes of ensuring bias monitoring, detection and correction in relation to the high-risk AI systems, the providers of such systems may also process special categories of personal data referred to in Article 9(1) of Regulation (EU) 2016/679, Article 10 of Directive (EU) 2016/680 and Article 10(1) of Regulation (EU) 2018/1725, ensuring compliance with the highest security and privacy protection standards for data management. Such processing shall also be subject to appropriate safeguards for the fundamental rights and freedoms of natural persons, including technical limitations on the re-use and use of state-of-the-art security and privacy-preserving measures, such as pseudonymisation, or encryption where anonymisation may significantly affect the purpose pursued.
2022/03/31
Committee: ITRE
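One of the privacy-preserving measures the paragraph points to, pseudonymisation, might look as follows in practice; the keyed-hash approach and the simplified key handling are illustrative assumptions only, not a prescribed method.

```python
# Illustrative pseudonymisation of a direct identifier before bias
# monitoring. A keyed HMAC gives stable pseudonyms that are not reversible
# without the key; key management is simplified for the sketch.
import hmac, hashlib, os

SECRET_KEY = os.urandom(32)  # in practice: managed in a key vault, not in code

def pseudonymise(identifier: str) -> str:
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"name": "Jane Doe", "ethnicity": "X", "outcome": 1}
record["name"] = pseudonymise(record["name"])  # identifier replaced, category kept
print(record)
```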
Amendment 432 #
Proposal for a regulation
Article 12 – paragraph 4 – subparagraph 1 (new)
The retention period must not exceed 10 years, unless specific regulations establish otherwise.
2022/03/31
Committee: ITRE
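Operationally, the proposed ceiling could be enforced by computing each log record’s age against a ten-year cut-off, as in the sketch below; the data structure and dates are hypothetical.

```python
# Sketch of applying the proposed 10-year retention ceiling to stored logs.
from datetime import datetime, timedelta

MAX_RETENTION = timedelta(days=10 * 365)  # proposed ceiling, absent a specific rule

logs = [{"id": 1, "created": datetime(2012, 1, 15)},
        {"id": 2, "created": datetime(2020, 6, 1)}]

now = datetime(2022, 3, 31)
retained = [r for r in logs if now - r["created"] <= MAX_RETENTION]
print([r["id"] for r in retained])  # record 1 is past the retention ceiling
```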
Amendment 436 #
Proposal for a regulation
Article 13 – paragraph 3 – point d
(d) the human supervision measures referred to in Article 14, including the technical measures put in place to facilitate the interpretation of the outputs of AI systems by the users;
2022/03/31
Committee: ITRE
Amendment 437 #
Proposal for a regulation
Article 14 – title
Human supervision
2022/03/31
Committee: ITRE
Amendment 438 #
Proposal for a regulation
Article 14 – paragraph 1
1. High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which the AI system is in use. Human supervision should be proportionate to the task carried out by the system and should not compromise its efficiency or effectiveness.
2022/03/31
Committee: ITRE
Amendment 443 #
Proposal for a regulation
Article 14 – paragraph 2
2. Human supervision shall aim at protecting health, safety and fundamental human rights, preventing or minimising the risks that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, in particular when such risks persist notwithstanding the application of other requirements set out in this Chapter.
2022/03/31
Committee: ITRE
Amendment 444 #
Proposal for a regulation
Article 14 – paragraph 3 – introductory part
3. Human supervision shall be ensured through either one or all of the following measures:
2022/03/31
Committee: ITRE
Amendment 446 #
Proposal for a regulation
Article 14 – paragraph 4 – introductory part
4. The measures referred to in paragraph 3 shall enable the individuals to whom human supervision is assigned to do the following, as appropriate to the circumstances:
2022/03/31
Committee: ITRE
Amendment 454 #
Proposal for a regulation
Article 15 – paragraph 1
1. High-risk AI systems shall be designed and developed in such a way that they achieve, in the light of their intended purpose, an appropriate level of accuracy, robustness and cybersecurity, and perform consistently in those respects throughout their lifecycle. Compliance with these requirements shall include implementation of state-of-the-art measures, according to the specific market segment or scope of application.
2022/03/31
Committee: ITRE
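A minimal reading of the ‘accuracy, robustness’ and consistent-performance requirement is sketched below by scoring a model on clean inputs and again under small input perturbations; the model, data and noise level are placeholders, not state-of-the-art measures.

```python
# Hedged sketch of checking accuracy and robustness as Article 15(1) asks:
# accuracy on clean inputs, then again under small perturbations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
y = (X.sum(axis=1) > 0).astype(int)
model = LogisticRegression().fit(X, y)

clean_acc = model.score(X, y)
perturbed_acc = model.score(X + rng.normal(scale=0.3, size=X.shape), y)
print(f"clean accuracy {clean_acc:.2f}, perturbed accuracy {perturbed_acc:.2f}")
# A large gap between the two would indicate poor robustness for this model.
```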
Amendment 495 #
Proposal for a regulation
Article 29 – paragraph 2
2. The obligations in paragraph 1 are without prejudice to other user obligations under Union or national law and to the user’s discretion in organising its own resources and activities for the purpose of implementing the human supervision measures indicated by the provider.
2022/03/31
Committee: ITRE
Amendment 523 #
Proposal for a regulation
Recital 27
(27) High-risk AI systems should only be placed on the Union market or put into service if they comply with certain mandatory requirements. Those requirements should ensure that high-risk AI systems available in the Union or whose output is otherwise used in the Union do not pose unacceptable risks to important Union public interests as recognised and protected by Union law. AI systems identified as high-risk should be limited to those that have a significant harmful impact on the health, safety and fundamental rights of persons in the Union and such limitation minimises any potential restriction to international trade, if any. In particular, the classification as high-risk according to Article 6 should not apply to AI systems whose intended purpose demonstrates that the generated output is a recommendation and a human intervention is required to convert this recommendation into an action.
2022/06/13
Committee: IMCOLIBE
Amendment 568 #
Proposal for a regulation
Recital 37
(37) Another area in which the use of AI systems deserves special consideration is the access to and enjoyment of certain essential private and public services and benefits necessary for people to fully participate in society or to improve one’s standard of living. In particular, AI systems that automatically generate models used to evaluate the credit score or creditworthiness of natural persons should be classified as high-risk AI systems, since they determine those persons’ access to financial resources or essential services such as housing, electricity, and telecommunication services. In contrast, ancillary applications to those systems determining whether an individual should be granted access to credit, such as AI applications used for the acceleration of the credit disbursement process, in the valuation of collateral, or for the internal process efficiency, as well as other subsequent applications based on the credit scoring which do not create high risks for individuals, should be exempt from the scope. AI systems used to evaluate the credit score or creditworthiness may lead to discrimination of persons or groups and perpetuate historical patterns of discrimination, for example based on racial or ethnic origins, disabilities, age, sexual orientation, or create new forms of discriminatory impacts. Considering the very limited scale of the impact and the available alternatives on the market, it is appropriate to exempt AI systems for the purpose of creditworthiness assessment and credit scoring when put into service by small-scale providers for their own use. Natural persons applying for or receiving public assistance benefits and services from public authorities are typically dependent on those benefits and services and in a vulnerable position in relation to the responsible authorities. If AI systems are used for determining whether such benefits and services should be denied, reduced, revoked or reclaimed by authorities, they may have a significant impact on persons’ livelihood and may infringe their fundamental rights, such as the right to social protection, non-discrimination, human dignity or an effective remedy. Those systems should therefore be classified as high-risk. In fact, this Regulation should not hamper the development and use of innovative approaches in the public administration, which would stand to benefit from a wider use of compliant and safe AI systems, provided that those systems do not entail a high risk to legal and natural persons. Finally, AI systems used to dispatch or establish priority in the dispatching of emergency first response services should also be classified as high-risk since they make decisions in very critical situations for the life and health of persons and their property.
2022/06/13
Committee: IMCOLIBE
Amendment 598 #
Proposal for a regulation
Article 57 – paragraph 4
4. The Board shall invite external experts and observers, including providers with appropriate skills and proven experience in supporting Member State authorities in the preparation and management of experimentation and test facilities, to attend its meetings and may hold exchanges with interested third parties to inform its activities to an appropriate extent. To that end the Commission may facilitate exchanges between the Board and other Union bodies, offices, agencies and advisory groups.
2022/03/31
Committee: ITRE
Amendment 615 #
Proposal for a regulation
Article 61 – paragraph 2 a (new)
2a. Since the sensitive nature of some high-risk AI systems, especially systems used by public authorities, agencies and institutions to prevent, investigate, detect or prosecute crimes, could result in significant restrictions on the collection and sharing of data between the end user and the provider, end users must involve the provider in the definition of aspects such as the nature of data made available for post-market monitoring and the degree of anonymisation of data. This should take place as early as the system design stage, in order to allow the provider to perform activities under the Regulation with a complete data set that has already been validated by the end user before the activity, and with a level of security that is proportionate to the task carried out by the system. The end user must remain responsible for the disclosure of data contained in such data sets.
2022/03/31
Committee: ITRE
Amendment 631 #
Proposal for a regulation
Annex I – point b
(b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;
deleted
2022/03/31
Committee: ITRE
Amendment 633 #
Proposal for a regulation
Annex I – point c
(c) Statistical approaches, Bayesian estimation, search and optimization methods.
deleted
2022/03/31
Committee: ITRE
Amendment 634 #
Proposal for a regulation
Annex I – point c a (new)
(ca) Approaches based on the assessment of behavioural and psychological characteristics of individuals, including activities, interests, opinions, attitudes, values and lifestyles, recognised through automatic means;
2022/03/31
Committee: ITRE
Amendment 635 #
Proposal for a regulation
Annex III – paragraph 1 – introductory part
High-risk AI systems pursuant to Article 6(2) are the AI systems listed in any of the following areas, whose use or application poses a risk of harm to health and safety or a negative impact on the fundamental rights of natural persons, groups or society in general.
2022/03/31
Committee: ITRE
Amendment 643 #
Proposal for a regulation
Annex III – paragraph 1 – point 5 – point b
(b) Following the adoption of common specifications under Article 41 of this Regulation, AI systems intended to be used to evaluate the creditworthiness rating of natural persons or establish their credit score when granting access to credit or other essential services, with the exception of AI systems put into service by providers on a small scale for their own use and AI systems based on autonomous use under human supervision of linear regression, logistic regression, decision trees and other equally transparent, explicable and interpretable techniques;
2022/03/31
Committee: ITRE
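The transparency this amendment attributes to decision trees can be seen by printing a fitted tree as plain if/else rules; the features and data in the sketch below are invented for illustration and do not describe any real credit-scoring system.

```python
# Sketch of why decision trees count as "transparent, explicable and
# interpretable": the fitted model can be rendered as readable rules.
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[25_000, 2], [60_000, 0], [40_000, 5], [90_000, 1]]  # income, missed payments
y = [0, 1, 0, 1]                                          # hypothetical credit decision

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["income", "missed_payments"]))
```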
Amendment 652 #
Proposal for a regulation
Annex III – paragraph 1 – point 8 a (new)
8a. Identification and categorisation of behaviour and cognitive bias of natural persons.
2022/03/31
Committee: ITRE
Amendment 653 #
Proposal for a regulation
Annex IV – paragraph 1 – point 2 – point b
(b) in so far as this is without prejudice to professional secrecy, and only when the request is proportionate to the scale of the interest being preserved, the design specifications of the system, namely the general logic of the AI system and of the algorithms; the key design choices including the rationale and assumptions made, also with regard to persons or groups of persons on which the system is intended to be used; the main classification choices; what the system is designed to optimise for and the relevance of the different parameters; the decisions about any possible trade-off made regarding the technical solutions adopted to comply with the requirements set out in Title III, Chapter 2;
2022/03/31
Committee: ITRE
Amendment 655 #
Proposal for a regulation
Annex IV – paragraph 1 – point 2 – point e
(e) assessment of the human supervision measures needed in accordance with Article 14, including an assessment of the technical measures needed to facilitate the interpretation of the outputs of AI systems by the users, in accordance with Article 13(3)(d);
2022/03/31
Committee: ITRE
Amendment 657 #
Proposal for a regulation
Annex IV – paragraph 1 – point 3
3. Detailed information about the monitoring, functioning and control of the AI system, in particular with regard to: its capabilities and limitations in performance, including the degrees of accuracy for specific persons or groups of persons on which the system is intended to be used and the overall expected level of accuracy in relation to its intended purpose; the foreseeable unintended outcomes and sources of risks to health and safety, fundamental rights and discrimination in view of the intended purpose of the AI system; the human supervision measures needed in accordance with Article 14, including the technical measures put in place to facilitate the interpretation of the outputs of AI systems by the users; specifications on input data, as appropriate;
2022/03/31
Committee: ITRE
Amendment 660 #
Proposal for a regulation
Recital 54
(54) In case there are no risk management systems already in place, the provider should establish a sound quality management system, ensure the accomplishment of the required conformity assessment procedure, draw up the relevant documentation and establish a robust post-market monitoring system. Public authorities which put into service high-risk AI systems for their own use may adopt and implement the rules for the quality management system as part of the quality management system adopted at a national or regional level, as appropriate, taking into account the specificities of the sector and the competences and organisation of the public authority in question.
2022/06/13
Committee: IMCOLIBE
Amendment 752 #
Proposal for a regulation
Recital 80
(80) Union legislation on financial services includes internal governance and risk management rules and requirements which are applicable to regulated financial institutions in the course of provision of those services, including when they make use of AI systems. In order to ensure coherent application and enforcement of the obligations under this Regulation and relevant rules and requirements of the Union financial services legislation, the authorities responsible for the supervision and enforcement of the financial services legislation, including where applicable the European Central Bank, should be designated as competent authorities for the purpose of supervising the implementation of this Regulation, including for market surveillance activities, as regards AI systems provided or used by regulated and supervised financial institutions. To further enhance the consistency between this Regulation and the rules applicable to credit institutions regulated under Directive 2013/36/EU of the European Parliament and of the Council (56), it is also appropriate to integrate the conformity assessment procedure and some of the providers’ procedural obligations in relation to risk management, post marketing monitoring and documentation into the existing obligations and procedures under Directive 2013/36/EU. In order to avoid overlaps, limited derogations should also be envisaged in relation to the quality management system of providers and the monitoring obligation placed on users of high-risk AI systems to the extent that these apply to credit institutions regulated by Directive 2013/36/EU. With regard to use case 5(b) in Annex III, areas covered by this Regulation relate to those outlined in Article 1(a). All other procedures relating to creditworthiness assessment are covered by the Directive of the European Parliament and of the Council on consumer credits.
_________________
(56) Directive 2013/36/EU of the European Parliament and of the Council of 26 June 2013 on access to the activity of credit institutions and the prudential supervision of credit institutions and investment firms, amending Directive 2002/87/EC and repealing Directives 2006/48/EC and 2006/49/EC (OJ L 176, 27.6.2013, p. 338).
2022/06/13
Committee: IMCOLIBE
Amendment 913 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1
(1) ‘artificial intelligence system’ (AI system) means a system based on machine or human-based data and input that infers how to achieve a given set of human-defined objectives using one or more of the techniques and approaches listed in Annex I and generates outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with;
2022/06/13
Committee: IMCOLIBE
Amendment 1267 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point ii
(ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack;
2022/06/13
Committee: IMCOLIBE
Amendment 1419 #
Proposal for a regulation
Article 6 – paragraph 1 – point a
(a) the AI system has a self-evolving behaviour, the failure of which results in an immediate hazardous condition in a specific domain, and is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex II;
2022/06/13
Committee: IMCOLIBE
Amendment 1493 #
Proposal for a regulation
Article 7 – paragraph 2 – point a
(a) a description of the AI system, including the intended purpose, the concrete use and context, complexity and autonomy of the AI system, the potential persons impacted, the extent to which the AI system has been used or is likely to be used, the extent to which any outcomes produced are subject to human review or intervention;
2022/06/13
Committee: IMCOLIBE
Amendment 1498 #
Proposal for a regulation
Article 7 – paragraph 2 – point b
(b) an assessment of the potential benefits provided by the use of the AI system, as well as reticence risk and/or opportunity costs of not using the AI for individuals, groups of individuals, or society at large. This includes weighing the benefits of deploying the AI system against keeping the status quo;
2022/06/13
Committee: IMCOLIBE
Amendment 1505 #
Proposal for a regulation
Article 7 – paragraph 2 – point c
(c) an assessment of the probability of worst-case scenario, likelihood and severity of harm to the health and safety or fundamental rights of potentially impacted persons and its irreversibility, including:
(i) the extent to which the AI system has already been evaluated and proven to have caused material harm as demonstrated by studies or reports published by the national competent authorities;
(ii) the extent to which potentially impacted persons are dependent on the outcome produced from the AI system, in particular because of practical or legal reasons it is not reasonably possible to opt-out from that outcome;
(iii) the extent to which the outcome produced by the AI system is easily reversible;
(iv) the extent to which potentially impacted persons are in a vulnerable position in relation to the user of the AI system, in particular due to an imbalance of power, knowledge, economic or social circumstances, or age.
2022/06/13
Committee: IMCOLIBE
Amendment 1512 #
Proposal for a regulation
Article 7 – paragraph 2 – point d
(d) measures taken to address or mitigate the identified risks, including to the extent existing Union legislation provides for:
(i) effective measures of redress in relation to the risks posed by an AI system, with the exclusion of claims for damages;
(ii) effective measures to prevent or substantially minimise those risks.
2022/06/13
Committee: IMCOLIBE
Amendment 1515 #
Proposal for a regulation
Article 7 – paragraph 2 – point e
(e) the extent to which potentially harmed or adversely impacted persons are dependent on the outcome produced with an AI system, in particular because for practical or legal reasons it is not reasonably possible to opt-out from that outcome;
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1522 #
Proposal for a regulation
Article 7 – paragraph 2 – point f
(f) the extent to which potentially harmed or adversely impacted persons are in a vulnerable position in relation to the user of an AI system, in particular due to an imbalance of power, knowledge, economic or social circumstances, or age;
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1524 #
Proposal for a regulation
Article 7 – paragraph 2 – point g
(g) the extent to which the outcome produced with an AI system is easily reversible, whereby outcomes having an impact on the health or safety of persons shall not be considered as easily reversible;
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1537 #
Proposal for a regulation
Article 7 – paragraph 2 – point h
(h) the extent to which existing Union legislation provides for:
(i) effective measures of redress in relation to the risks posed by an AI system, with the exclusion of claims for damages;
(ii) effective measures to prevent or substantially minimise those risks.
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1554 #
Proposal for a regulation
Article 8 – paragraph 1
1. High-risk AI systems shall comply with the requirements established in this Chapter, taking into account the generally acknowledged state of the art and industry standards, including as reflected in relevant harmonised standards or common specifications.
2022/06/13
Committee: IMCOLIBE
Amendment 1622 #
Proposal for a regulation
Article 9 – paragraph 4 – subparagraph 1 – point a
(a) reduction of identified and evaluated risks as far as commercially reasonable and technologically feasible through adequate design and development;
2022/06/13
Committee: IMCOLIBE
Amendment 1623 #
Proposal for a regulation
Article 9 – paragraph 4 – subparagraph 1 – point b
(b) where appropriate, implementation of adequate mitigation and control measures in relation to risks that cannot be eliminated;
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1643 #
Proposal for a regulation
Article 9 – paragraph 5
5. High-risk AI systems shall be evaluated for the purposes of identifying the most appropriate and targeted risk management measures and weighing any such measures against the potential benefits and intended goals of the system.
2022/06/13
Committee: IMCOLIBE
Amendment 1717 #
Proposal for a regulation
Article 10 – paragraph 3
3. High-risk AI systems should be designed and developed with the best efforts to ensure that, where appropriate, training, validation and testing data sets are sufficiently relevant, representative and appropriately vetted for errors. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used. These characteristics of the data sets may be met at the level of individual data sets or a combination thereof.
2022/06/13
Committee: IMCOLIBE
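The ‘appropriately vetted for errors’ standard might translate into basic checks such as the following sketch, which reports missing values, duplicate rows and group coverage before a data set is used for training; column names and the data are assumptions for the example.

```python
# Sketch of basic data-set vetting: missing values, duplicates and group
# coverage, reported before training. Thresholds and columns are assumed.
import pandas as pd

df = pd.DataFrame({"age": [34, None, 29, 29], "group": ["A", "B", "A", "A"]})

report = {
    "missing_values": int(df.isna().sum().sum()),
    "duplicate_rows": int(df.duplicated().sum()),
    "group_counts": df["group"].value_counts().to_dict(),
}
print(report)  # e.g. flag sparse groups or any missing values for correction
```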
Amendment 1744 #
Proposal for a regulation
Article 10 – paragraph 6 a (new)
6a. The training, testing and validation processes of data sets should have a duration based on the training periodicity of the systems, the timing of notification of incidents and the normal supervisory activity of the national competent authority.
2022/06/13
Committee: IMCOLIBE
Amendment 1751 #
Proposal for a regulation
Article 11 – paragraph 1 – subparagraph 1
The technical documentation shall vary according to each use of the AI system and be drawn up in such a way as to demonstrate that the high-risk AI system complies with the requirements set out in this Chapter and provide national competent authorities and notified bodies with all the necessary information to assess the compliance of the AI system with those requirements. It shall contain, at a minimum, the elements set out in Annex IV or, in the case of SMEs and start-ups, any equivalent documentation meeting the same objectives, subject to approval of the competent national authority.
2022/06/13
Committee: IMCOLIBE
Amendment 1780 #
Proposal for a regulation
Article 12 – paragraph 4
4. For high-risk AI systems referred to in paragraph 1, point (a) of Annex III, the logging capabilities shall provide, at a minimum:
(a) recording of the period of each use of the system (start date and time and end date and time of each use);
(b) the reference database against which input data has been checked by the system;
(c) the input data for which the search has led to a match;
(d) the identification of the natural persons involved in the verification of the results, as referred to in Article 14(5).
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1820 #
Proposal for a regulation
Article 14 – paragraph 3 – introductory part
3. The degree of human oversight shall be adapted to the specific risks, the level of automation, and context of the AI system and shall be ensured through either one or all of the following measures:
2022/06/13
Committee: IMCOLIBE
Amendment 1911 #
Proposal for a regulation
Article 17 – paragraph 1 – introductory part
1. In case there are no risk management systems already in place, providers and users of high-risk AI systems shall implement a quality management system to ensure compliance with this Regulation and corresponding obligations. That system shall be documented in a systematic and orderly manner in the form of written policies, procedures and instructions, and shall include at least the following aspects:
2022/06/13
Committee: IMCOLIBE
Amendment 1926 #
Proposal for a regulation
Article 17 – paragraph 1 – point f
(f) systems and procedures for data management, including data collection, data analysis, data labelling, data storage, data filtration, data mining, data aggregation, data retention and any other operation regarding the data that is performed before and for the purposes of the placing on the market or putting into service of high-risk AI systems, and after deployment of the high-risk AI;
2022/06/13
Committee: IMCOLIBE
Amendment 1941 #
Proposal for a regulation
Article 17 – paragraph 2
2. The implementation of aspects referred to in paragraph 1 shall be proportionate to the size of the provider’s and user’s organisation.
2022/06/13
Committee: IMCOLIBE
Amendment 2135 #
Proposal for a regulation
Article 41 – paragraph 1
1. Where harmonised standards referred to in Article 40 and international standards do not exist or where the Commission considers that the relevant harmonised standards are insufficient or that there is a need to address specific safety or fundamental right concerns, the Commission may, by means of implementing acts, adopt common specifications in respect of the requirements set out in Chapter 2 of this Title. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 74(2).
2022/06/13
Committee: IMCOLIBE
Amendment 2142 #
Proposal for a regulation
Article 41 – paragraph 2
2. The Commission, when preparing the common specifications referred to in paragraph 1, shall gather the views of relevant bodies, stakeholders or expert groups established under relevant sectorial Union law.
2022/06/13
Committee: IMCOLIBE
Amendment 2174 #
Proposal for a regulation
Article 43 – paragraph 1 – subparagraph 1
Where, in demonstrating the compliance of a high-risk AI system with the requirements set out in Chapter 2 of this Title, the provider has not applied or has applied only in part harmonised standards referred to in Article 40, or where such harmonised standards do not exist and common specifications referred to in Article 41 are not available, the provider shall follow the conformity assessment procedure set out in Annex VII. Should the provider already have established internal organisation and structures for existing conformity assessments or requirements under other existing rules, the provider may utilise those, or parts of those, existing compliance structures, so long as they also have the capacity and competence needed to fulfil the requirements for the product set out in this Regulation.
2022/06/13
Committee: IMCOLIBE