30 Amendments of Mara BIZZOTTO related to 2021/0106(COD)
Amendment 523 #
Proposal for a regulation
Recital 27
(27) High-risk AI systems should only be placed on the Union market or put into service if they comply with certain mandatory requirements. Those requirements should ensure that high-risk AI systems available in the Union or whose output is otherwise used in the Union do not pose unacceptable risks to important Union public interests as recognised and protected by Union law. AI systems identified as high-risk should be limited to those that have a significant harmful impact on the health, safety and fundamental rights of persons in the Union and such limitation minimises any potential restriction to international trade, if any. In particular, the classification as high-risk according to Article 6 should not apply to AI systems whose intended purpose demonstrates that the generated output is a recommendation and a human intervention is required to convert this recommendation into an action.
Amendment 568 #
Proposal for a regulation
Recital 37
(37) Another area in which the use of AI systems deserves special consideration is the access to and enjoyment of certain essential private and public services and benefits necessary for people to fully participate in society or to improve one’s standard of living. In particular, AI systems that automatically generate models used to evaluate the credit score or creditworthiness of natural persons should be classified as high-risk AI systems, since they determine those persons’ access to financial resources or essential services such as housing, electricity, and telecommunication services. In contrast, ancillary applications to those systems determining whether an individual should be granted access to credit, such as AI applications used for the acceleration of the credit disbursement process, in the valuation of collateral, or for the internal process efficiency, as well as other subsequent applications based on the credit scoring which do not create high risks for individuals, should be exempt from the scope. AI systems used to evaluate the credit score or creditworthiness may lead to discrimination of persons or groups and perpetuate historical patterns of discrimination, for example based on racial or ethnic origins, disabilities, age, sexual orientation, or create new forms of discriminatory impacts. Considering the very limited scale of the impact and the available alternatives on the market, it is appropriate to exempt AI systems for the purpose of creditworthiness assessment and credit scoring when put into service by small-scale providers for their own use. Natural persons applying for or receiving public assistance benefits and services from public authorities are typically dependent on those benefits and services and in a vulnerable position in relation to the responsible authorities. If AI systems are used for determining whether such benefits and services should be denied, reduced, revoked or reclaimed by authorities, they may have a significant impact on persons’ livelihood and may infringe their fundamental rights, such as the right to social protection, non-discrimination, human dignity or an effective remedy. Those systems should therefore be classified as high-risk. In fact, this Regulation should not hamper the development and use of innovative approaches in the public administration, which would stand to benefit from a wider use of compliant and safe AI systems, provided that those systems do not entail a high risk to legal and natural persons. Finally, AI systems used to dispatch or establish priority in the dispatching of emergency first response services should also be classified as high-risk since they make decisions in very critical situations for the life and health of persons and their property.
Amendment 660 #
Proposal for a regulation
Recital 54
(54) In case there are no risk management systems already in place, the provider should establish a sound quality management system, ensure the accomplishment of the required conformity assessment procedure, draw up the relevant documentation and establish a robust post-market monitoring system. Public authorities which put into service high-risk AI systems for their own use may adopt and implement the rules for the quality management system as part of the quality management system adopted at a national or regional level, as appropriate, taking into account the specificities of the sector and the competences and organisation of the public authority in question.
Amendment 752 #
Proposal for a regulation
Recital 80
(80) Union legislation on financial services includes internal governance and risk management rules and requirements which are applicable to regulated financial institutions in the course of provision of those services, including when they make use of AI systems. In order to ensure coherent application and enforcement of the obligations under this Regulation and relevant rules and requirements of the Union financial services legislation, the authorities responsible for the supervision and enforcement of the financial services legislation, including where applicable the European Central Bank, should be designated as competent authorities for the purpose of supervising the implementation of this Regulation, including for market surveillance activities, as regards AI systems provided or used by regulated and supervised financial institutions. To further enhance the consistency between this Regulation and the rules applicable to credit institutions regulated under Directive 2013/36/EU of the European Parliament and of the Council56, it is also appropriate to integrate the conformity assessment procedure and some of the providers’ procedural obligations in relation to risk management, post-market monitoring and documentation into the existing obligations and procedures under Directive 2013/36/EU. In order to avoid overlaps, limited derogations should also be envisaged in relation to the quality management system of providers and the monitoring obligation placed on users of high-risk AI systems to the extent that these apply to credit institutions regulated by Directive 2013/36/EU. With regard to use case 5(b) in Annex III, areas covered by this Regulation relate to those outlined in Article 1(a). All other procedures relating to creditworthiness assessment are covered by the Directive of the European Parliament and of the Council on consumer credits.
_________________
56 Directive 2013/36/EU of the European Parliament and of the Council of 26 June 2013 on access to the activity of credit institutions and the prudential supervision of credit institutions and investment firms, amending Directive 2002/87/EC and repealing Directives 2006/48/EC and 2006/49/EC (OJ L 176, 27.6.2013, p. 338).
Amendment 913 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1
(1) ‘artificial intelligence system’ (AI system) means a system based on machine or human-based data and input that infers how to achieve a given set of human-defined objectives using one or more of the techniques and approaches listed in Annex I and generates outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with;
Amendment 1267 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point ii
(ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack;
Amendment 1419 #
Proposal for a regulation
Article 6 – paragraph 1 – point a
(a) the AI system has a self-evolving behaviour, the failure of which results in an immediate hazardous condition in a specific domain, and is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex II;
Amendment 1493 #
Proposal for a regulation
Article 7 – paragraph 2 – point a
(a) a description of the AI system, including the intended purpose, the concrete use and context, complexity and autonomy of the AI system, the potential persons impacted, the extent to which the AI system has been used or is likely to be used, the extent to which any outcomes produced are subject to human review or intervention;
Amendment 1498 #
Proposal for a regulation
Article 7 – paragraph 2 – point b
(b) an assessment of the potential benefits provided by the use of the AI system, as well as reticence risk and/or opportunity costs of not using the AI for individuals, groups of individuals, or society at large. This includes weighing the benefits of deploying the AI system against keeping the status quo;
Amendment 1505 #
Proposal for a regulation
Article 7 – paragraph 2 – point c
(c) an assessment of the probability of worst-case scenario, likelihood and severity of harm to the health and safety or fundamental rights of potentially impacted persons and its irreversibility, including:
(i) the extent to which the AI system has already been evaluated and proven to have caused material harm, as demonstrated by studies or reports published by the national competent authorities;
(ii) the extent to which potentially impacted persons are dependent on the outcome produced from the AI system, in particular because for practical or legal reasons it is not reasonably possible to opt out from that outcome;
(iii) the extent to which the outcome produced by the AI system is easily reversible;
(iv) the extent to which potentially impacted persons are in a vulnerable position in relation to the user of the AI system, in particular due to an imbalance of power, knowledge, economic or social circumstances, or age.
Amendment 1512 #
Proposal for a regulation
Article 7 – paragraph 2 – point d
(d) measures taken to address or mitigate the identified risks, including to the extent existing Union legislation provides for:
(i) effective measures of redress in relation to the risks posed by an AI system, with the exclusion of claims for damages;
(ii) effective measures to prevent or substantially minimise those risks.
Amendment 1515 #
Proposal for a regulation
Article 7 – paragraph 2 – point e
Amendment 1522 #
Proposal for a regulation
Article 7 – paragraph 2 – point f
Amendment 1524 #
Proposal for a regulation
Article 7 – paragraph 2 – point g
Amendment 1537 #
Proposal for a regulation
Article 7 – paragraph 2 – point h
Amendment 1554 #
Proposal for a regulation
Article 8 – paragraph 1
1. High-risk AI systems shall comply with the requirements established in this Chapter, taking into account the generally acknowledged state of the art and industry standards, including as reflected in relevant harmonised standards or common specifications.
Amendment 1622 #
Proposal for a regulation
Article 9 – paragraph 4 – subparagraph 1 – point a
(a) reduction of identified and evaluated risks as far as commercially reasonable and technologically feasible through adequate design and development;
Amendment 1623 #
Proposal for a regulation
Article 9 – paragraph 4 – subparagraph 1 – point b
Amendment 1643 #
Proposal for a regulation
Article 9 – paragraph 5
5. High-risk AI systems shall be evaluated for the purposes of identifying the most appropriate and targeted risk management measures and weighing any such measures against the potential benefits and intended goals of the system.
Amendment 1717 #
Proposal for a regulation
Article 10 – paragraph 3
3. High-risk AI systems should be designed and developed with the best efforts to ensure that, where appropriate, training, validation and testing data sets are sufficiently relevant, representative and appropriately vetted for errors. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used. These characteristics of the data sets may be met at the level of individual data sets or a combination thereof.
Amendment 1744 #
Proposal for a regulation
Article 10 – paragraph 6 a (new)
6 a. The training, testing and validation processes of data sets should have a duration based on the training periodicity of the systems, the timing of notification of incidents and the normal supervisory activity of the national competent authority.
Amendment 1751 #
Proposal for a regulation
Article 11 – paragraph 1 – subparagraph 1
The technical documentation shall vary according to each use of the AI system and be drawn up in such a way to demonstrate that the high-risk AI system complies with the requirements set out in this Chapter and provide national competent authorities and notified bodies with all the necessary information to assess the compliance of the AI system with those requirements. It shall contain, at a minimum, the elements set out in Annex IV or, in the case of SMEs and start-ups, any equivalent documentation meeting the same objectives, subject to approval of the competent national authority.
Amendment 1780 #
Proposal for a regulation
Article 12 – paragraph 4
Amendment 1820 #
Proposal for a regulation
Article 14 – paragraph 3 – introductory part
3. The degree of human oversight shall be adapted to the specific risks, the level of automation, and context of the AI system and shall be ensured through either one or all of the following measures:
Amendment 1911 #
Proposal for a regulation
Article 17 – paragraph 1 – introductory part
1. In case there are no risk management systems already in place, providers and users of high-risk AI systems shall implement a quality management system that ensures compliance with this Regulation and corresponding obligations. That system shall be documented in a systematic and orderly manner in the form of written policies, procedures and instructions, and shall include at least the following aspects:
Amendment 1926 #
Proposal for a regulation
Article 17 – paragraph 1 – point f
(f) systems and procedures for data management, including data collection, data analysis, data labelling, data storage, data filtration, data mining, data aggregation, data retention and any other operation regarding the data that is performed before and for the purposes of the placing on the market or putting into service of high-risk AI systems, and after deployment of the high-risk AI system;
Amendment 1941 #
Proposal for a regulation
Article 17 – paragraph 2
2. The implementation of aspects referred to in paragraph 1 shall be proportionate to the size of the provider’s and user's organisation.
Amendment 2135 #
Proposal for a regulation
Article 41 – paragraph 1
1. Where harmonised standards referred to in Article 40 and international standards do not exist or where the Commission considers that the relevant harmonised standards are insufficient or that there is a need to address specific safety or fundamental right concerns, the Commission may, by means of implementing acts, adopt common specifications in respect of the requirements set out in Chapter 2 of this Title. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 74(2).
Amendment 2142 #
Proposal for a regulation
Article 41 – paragraph 2
2. The Commission, when preparing the common specifications referred to in paragraph 1, shall gather the views of relevant bodies, stakeholders or expert groups established under relevant sectorial Union law.
Amendment 2174 #
Proposal for a regulation
Article 43 – paragraph 1 – subparagraph 1
Where, in demonstrating the compliance of a high-risk AI system with the requirements set out in Chapter 2 of this Title, the provider has not applied or has applied only in part harmonised standards referred to in Article 40, or where such harmonised standards do not exist and common specifications referred to in Article 41 are not available, the provider shall follow the conformity assessment procedure set out in Annex VII. Should the provider already have established internal organisation and structures for existing conformity assessments or requirements under other existing rules, the provider may utilise those, or parts of those, existing compliance structures, so long as they also have the capacity and competence needed to fulfil the requirements for the product set out in this Regulation.