
38 Amendments of Salvatore DE MEO related to 2021/0106(COD)

Amendment 573 #
Proposal for a regulation
Recital 37
(37) Another area in which the use of AI systems deserves special consideration is the access to and enjoyment of certain essential private and public services and benefits necessary for people to fully participate in society or to improve one’s standard of living. In particular, AI systems that automatically generate models used to evaluate the credit score or creditworthiness of natural persons should be classified as high-risk AI systems, since they determine those persons’ access to financial resources or essential services such as housing, electricity, and telecommunication services. In contrast, ancillary applications to those systems determining whether an individual should be granted access to credit, such as AI applications used for the acceleration of the credit disbursement process, in the valuation of collateral, or for the internal process efficiency, as well as other subsequent applications based on the credit scoring which do not create high risks for individuals, should be exempt from the scope. AI systems used to evaluate the credit score or creditworthiness may lead to discrimination of persons or groups and perpetuate historical patterns of discrimination, for example based on racial or ethnic origins, disabilities, age, sexual orientation, or create new forms of discriminatory impacts. Natural persons applying for or receiving public assistance benefits and services from public authorities are typically dependent on those benefits and services and in a vulnerable position in relation to the responsible authorities. If AI systems are used for determining whether such benefits and services should be denied, reduced, revoked or reclaimed by authorities, they may have a significant impact on persons’ livelihood and may infringe their fundamental rights, such as the right to social protection, non-discrimination, human dignity or an effective remedy. Those systems should therefore be classified as high-risk. In fact, this Regulation should not hamper the development and use of innovative approaches in the public administration, which would stand to benefit from a wider use of compliant and safe AI systems, provided that those systems do not entail a high risk to legal and natural persons. Finally, AI systems used to dispatch or establish priority in the dispatching of emergency first response services should also be classified as high-risk since they make decisions in very critical situations for the life and health of persons and their property.
2022/06/13
Committee: IMCOLIBE
Amendment 604 #
Proposal for a regulation
Recital 40 a (new)
(40 a) Transparency requirements shall not apply where the content forms part of an evidently artistic, creative, satirical, fictional or analogous work or programme.
2022/06/13
Committee: IMCOLIBE
Amendment 625 #
Proposal for a regulation
Recital 44
(44) High data quality is essential for the performance of many AI systems, especially when techniques involving the training of models are used, with a view to ensure that the high-risk AI system performs as intended and safely and it does not become the source of discrimination prohibited by Union law. High quality training, validation and testing data sets require the implementation of appropriate data governance and management practices. Training, validation and testing data sets should be sufficiently relevant, representative and as complete and close to zero error as possible. A procedure to check data and completeness in view of the intended purpose of the system should be implemented. They should also have the appropriate statistical properties, including as regards the persons or groups of persons on which the high-risk AI system is intended to be used. In particular, training, validation and testing data sets should take into account, to the extent required in the light of their intended purpose, the features, characteristics or elements that are particular to the specific geographical, behavioural or functional setting or context within which the AI system is intended to be used. In order to protect the right of others from the discrimination that might result from the unfair bias in AI systems, the providers should be able to process also special categories of personal data, as a matter of substantial public interest, in order to ensure the unfair bias monitoring, detection and correction in relation to high-risk AI systems.
2022/06/13
Committee: IMCOLIBE
Amendment 659 #
Proposal for a regulation
Recital 54
(54) Unless the provider has already implemented a risk management system warranting quality and conformity, the provider should establish a sound quality management system, ensure the accomplishment of the required conformity assessment procedure, draw up the relevant documentation and establish a robust post-market monitoring system. Public authorities which put into service high-risk AI systems for their own use may adopt and implement the rules for the quality management system as part of the quality management system adopted at a national or regional level, as appropriate, taking into account the specificities of the sector and the competences and organisation of the public authority in question.
2022/06/13
Committee: IMCOLIBE
Amendment 706 #
Proposal for a regulation
Recital 69
(69) In order to facilitate the work of the Commission and the Member States in the artificial intelligence field as well as to increase the transparency towards regulators, providers of high-risk AI systems other than those related to products falling within the scope of relevant existing Union harmonisation legislation, should be required to register their high-risk AI system in an EU database, to be established and managed by the Commission. The Commission should be the controller of that database, in accordance with Regulation (EU) 2018/1725 of the European Parliament and of the Council55 . In order to ensure the full functionality of the database, when deployed, the procedure for setting the database should include the elaboration of functional specifications by the Commission and an independent audit report. _________________ 55 Regulation (EU) 2018/1725 of the European Parliament and of the Council of 23 October 2018 on the protection of natural persons with regard to the processing of personal data by the Union institutions, bodies, offices and agencies and on the free movement of such data, and repealing Regulation (EC) No 45/2001 and Decision No 1247/2002/EC (OJ L 295, 21.11.2018, p. 39).
2022/06/13
Committee: IMCOLIBE
Amendment 708 #
Proposal for a regulation
Recital 70
(70) Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems. In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. Moreover, natural persons should be notified when they are exposed to an emotion recognition system or a biometric categorisation system. Such information and notifications should be provided in accessible formats for persons with disabilities. Further, users who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a person to be authentic should disclose that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin. Images generated through the use of AI in the creation of audio-visual content such as films and video game visuals should not be considered “deep fakes” as defined in Article 52(3), which must be consistent with the principle of artistic freedom under the Charter of Fundamental Rights.
2022/06/13
Committee: IMCOLIBE
Amendment 753 #
Proposal for a regulation
Recital 80
(80) Union legislation on financial services includes internal governance and risk management rules and requirements which are applicable to regulated financial institutions in the course of provision of those services, including when they make use of AI systems. In order to ensure coherent application and enforcement of the obligations under this Regulation and relevant rules and requirements of the Union financial services legislation, the authorities responsible for the supervision and enforcement of the financial services legislation, including where applicable the European Central Bank, should be designated as competent authorities for the purpose of supervising the implementation of this Regulation, including for market surveillance activities, as regards AI systems provided or used by regulated and supervised financial institutions. To further enhance the consistency between this Regulation and the rules applicable to credit institutions regulated under Directive 2013/36/EU of the European Parliament and of the Council56 , it is also appropriate to integrate the conformity assessment procedure and some of the providers’ procedural obligations in relation to risk management, post-marketing monitoring and documentation into the existing obligations and procedures under Directive 2013/36/EU. In order to avoid overlaps, limited derogations should also be envisaged in relation to the quality management system of providers and the monitoring obligation placed on users of high-risk AI systems to the extent that these apply to credit institutions regulated by Directive 2013/36/EU. With regard to use case 5(b) in Annex III, areas covered by this Regulation relate to those outlined in Article 1(a). All other procedures relating to creditworthiness assessment are covered by the Directive of the European Parliament and of the Council on consumer credits. _________________ 56 Directive 2013/36/EU of the European Parliament and of the Council of 26 June 2013 on access to the activity of credit institutions and the prudential supervision of credit institutions and investment firms, amending Directive 2002/87/EC and repealing Directives 2006/48/EC and 2006/49/EC (OJ L 176, 27.6.2013, p. 338).
2022/06/13
Committee: IMCOLIBE
Amendment 910 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1
(1) ‘artificial intelligence system’ (AI system) means a system that operates with varying degrees of autonomy, uses one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with and that cannot be fully predicted by the natural person developing the system;
2022/06/13
Committee: IMCOLIBE
Amendment 933 #
Proposal for a regulation
Article 3 – paragraph 1 – point 2
(2) ‘provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed, and places that system on the market or puts it into service under its own name or trademark, whether for payment or free of charge;
2022/06/13
Committee: IMCOLIBE
Amendment 942 #
Proposal for a regulation
Article 3 – paragraph 1 – point 4
(4) ‘user’ means any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity;
2022/06/13
Committee: IMCOLIBE
Amendment 949 #
Proposal for a regulation
Article 3 – paragraph 1 – point 4 a (new)
(4 a) ‘end user’ means any natural person who, in the context of an employment or contractual agreement with the user, uses or deploys the AI system under the authority of the user;
2022/06/13
Committee: IMCOLIBE
Amendment 974 #
Proposal for a regulation
Article 3 – paragraph 1 – point 13
(13) ‘reasonably foreseeable misuse’ means the use of an AI system in a way that is within its intended purpose, but not in accordance with the specific context and conditions of use established by the provider, and in a way which may result from reasonably foreseeable human behaviour or interaction with other systems;
2022/06/13
Committee: IMCOLIBE
Amendment 981 #
Proposal for a regulation
Article 3 – paragraph 1 – point 14
(14) ‘safety component of a product or system’ means a component of a product or of a system which fulfils a safety function for that product or system or the failure or malfunctioning of which endangers the health and safety of persons or property, but which is not necessary in order for the product or system to function;
2022/06/13
Committee: IMCOLIBE
Amendment 1001 #
Proposal for a regulation
Article 3 – paragraph 1 – point 23
(23) ‘substantial modification’ means a change to a high-risk AI system following its placing on the market or putting into service which affects the compliance of the AI system with the requirements set out in Title III, Chapter 2 of this Regulation, such as a new training with a completely different dataset with respect to the one used to begin with or the addition of a further AI module into the AI system, or results in a modification to the intended purpose for which the AI system has been assessed. Supplementary and periodic training of an AI algorithm by the AI user or provider using their own data to ensure that the system remains accurate and/or is working as intended does not amount to a ‘substantial modification’ under this Regulation. The periodic retraining of models due to new data with the same structure shall not constitute a substantial modification. For high-risk AI systems that continue to learn after being placed on the market or put into service, changes to the high-risk AI system and its performance that have been predetermined by the provider at the moment of the initial conformity assessment and are part of the information contained in the technical documentation referred to in point 2(f) of Annex IV shall not constitute a substantial modification;
2022/06/13
Committee: IMCOLIBE
Amendment 1048 #
Proposal for a regulation
Article 3 – paragraph 1 – point 36
(36) ‘remote biometric identification system’ means an AI system for the purpose of identifying natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database, and without prior knowledge of the user of the AI system whether the person will be present and can be identified; this does not include biometric identification systems used for remote customer onboarding as prescribed under Article 13(1) of Directive (EU) 2018/843 of the European Parliament and of the Council, nor the use for authentication as defined under Articles 4(29) and 4(30) of Directive (EU) 2015/2366 of the European Parliament and of the Council;
2022/06/13
Committee: IMCOLIBE
Amendment 1104 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 a (new)
(44 a) ‘unfair bias’ means an inclination of prejudice towards or against a natural person that can result in discriminatory and/or unfair treatment of some natural persons with respect to others;
2022/06/13
Committee: IMCOLIBE
Amendment 1422 #
Proposal for a regulation
Article 6 – paragraph 1 – point a
(a) the AI system is intended to be used as a safety component of a product, or is itself a product involving significant risks, covered by the Union harmonisation legislation listed in Annex II;
2022/06/13
Committee: IMCOLIBE
Amendment 1442 #
Proposal for a regulation
Article 6 – paragraph 2
2. In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall also be considered high-risk when no internal risk-mitigation mechanisms embedded in the AI system apply.
2022/06/13
Committee: IMCOLIBE
Amendment 1570 #
Proposal for a regulation
Article 8 – paragraph 2 a (new)
2 a. This Article shall not apply where the content forms part of an evidently artistic, creative, satirical, fictional or analogous work or programme.
2022/06/13
Committee: IMCOLIBE
Amendment 1713 #
Proposal for a regulation
Article 10 – paragraph 3
3. Training, validation and testing data sets shall be relevant, representative, and as complete and close to zero error as possible, having regard to the intended purpose of the AI system. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used. These characteristics of the data sets may be met at the level of individual data sets or a combination thereof. In case of observational data, a common approach on data requirements shall be defined together with regulators.
2022/06/13
Committee: IMCOLIBE
Amendment 1752 #
Proposal for a regulation
Article 11 – paragraph 1 – subparagraph 1
The technical documentation shall be appropriate to the context of application or use of the AI system and drawn up in such a way as to demonstrate that the high-risk AI system complies with the requirements set out in this Chapter and provide national competent authorities and notified bodies with all the necessary information to assess the compliance of the AI system with those requirements. It shall contain, at a minimum, the elements set out in Annex IV or any equivalent documentation meeting the same objectives, subject to approval of the competent authority.
2022/06/13
Committee: IMCOLIBE
Amendment 1760 #
Proposal for a regulation
Article 11 – paragraph 2
2. Where a high-risk AI system related to a product, to which the legal acts listed in Annex II, section A apply, is placed on the market or put into service, appropriate technical documentation shall be drawn up containing all the information set out in Annex IV as well as the information required under those legal acts.
2022/06/13
Committee: IMCOLIBE
Amendment 1772 #
Proposal for a regulation
Article 12 – paragraph 2
2. The logging capabilities shall ensure a level of traceability of the AI system’s functioning throughout its lifecycle that is appropriate to the intended purpose of the system. The storage period should be determined on the basis of the business needs and informational value, without exceeding a maximum of 10 fiscal years.
2022/06/13
Committee: IMCOLIBE
Amendment 1876 #
Proposal for a regulation
Article 16 – paragraph 1 – introductory part
As long as providers of high-risk AI systems exercise full control over the systems, they shall:
2022/06/13
Committee: IMCOLIBE
Amendment 1880 #
Proposal for a regulation
Article 16 – paragraph 1 – point a
(a) ensure that their high-risk AI systems are compliant with the requirements set out in Chapter 2 of this Title as long as the provider exercises control over the AI systems;
2022/06/13
Committee: IMCOLIBE
Amendment 1910 #
Proposal for a regulation
Article 17 – paragraph 1 – introductory part
1. Unless existing risk management systems are already in place to warrant the quality of the high-risk AI systems, providers of high-risk AI systems shall put a quality management system in place that ensures compliance with this Regulation. That system shall be documented in a systematic and orderly manner in the form of written policies, procedures and instructions, and shall include at least the following aspects:
2022/06/13
Committee: IMCOLIBE
Amendment 1973 #
Proposal for a regulation
Article 23 – paragraph 1
Providers of high-risk AI systems shall, upon reasoned request by a national competent authority, provide that authority with all the information and documentation they deem necessary to demonstrate the conformity of the high-risk AI system with the requirements set out in Chapter 2 of this Title, in an official Union language determined by the Member State concerned. Upon a reasoned request from a national competent authority, providers shall also give that authority access to the logs automatically generated by the high-risk AI system, to the extent such logs are under their control by virtue of a contractual arrangement with the user or otherwise by law.
2022/06/13
Committee: IMCOLIBE
Amendment 2134 #
Proposal for a regulation
Article 41 – paragraph 1
1. Where harmonised standards referred to in Article 40 do not exist or relevant international standards do not apply or where the Commission considers that the relevant harmonised standards are insufficient or that there is a need to address specific safety or fundamental right concerns, the Commission may, by means of implementing acts, adopt common specifications in respect of the requirements set out in Chapter 2 of this Title. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 74(2).
2022/06/13
Committee: IMCOLIBE
Amendment 2144 #
Proposal for a regulation
Article 41 – paragraph 2
2. The Commission, when preparing the common specifications referred to in paragraph 1, shall gather the views of relevant bodies, stakeholders or expert groups established under relevant sectorial Union law.
2022/06/13
Committee: IMCOLIBE
Amendment 2229 #
Proposal for a regulation
Article 49 – paragraph 1
1. The CE marking shall be affixed visibly, legibly and indelibly in digital format for high-risk AI systems.
2022/06/13
Committee: IMCOLIBE
Amendment 2336 #
Proposal for a regulation
Article 53 – paragraph 6
6. The modalities and the conditions of the operation of the AI regulatory sandboxes, including the eligibility criteria and the procedure for the application, selection, participation and exiting from the sandbox, and the rights and obligations of the participants shall be discussed with all the relevant actors of the AI value chain, such as research institutions and businesses, and set out in implementing acts. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 74(2).
2022/06/13
Committee: IMCOLIBE
Amendment 2672 #
Proposal for a regulation
Article 62 – paragraph 3 a (new)
3 a. Requirements in place in existing EU legislation shall be taken into account with regard to the reporting of information on incidents, in view of avoiding duplications and harmonising the provisions on incident and event reporting.
2022/06/13
Committee: IMCOLIBE
Amendment 2752 #
Proposal for a regulation
Article 67 – paragraph 1
1. Where, having performed an evaluation under Article 65, the market surveillance authority of a Member State finds and demonstrates that although an AI system is in compliance with this Regulation, it presents a risk to the health or safety of persons, to the compliance with obligations under Union or national law intended to protect fundamental rights or to other aspects of public interest protection, it shall require the relevant operator to take all appropriate measures to ensure that the AI system concerned, when placed on the market or put into service, no longer presents that risk, to withdraw the AI system from the market or to recall it within a reasonable period, commensurate with the nature of the risk, as it may prescribe.
2022/06/13
Committee: IMCOLIBE
Amendment 2810 #
Proposal for a regulation
Article 70 – paragraph 4
4. The Commission and Member States may exchange, where necessary and in compliance with trade agreements between the EU and third countries that may apply, confidential information with regulatory authorities of third countries with which they have concluded bilateral or multilateral confidentiality arrangements guaranteeing an adequate level of confidentiality.
2022/06/13
Committee: IMCOLIBE
Amendment 2815 #
Proposal for a regulation
Article 71 – paragraph 1
1. In compliance with the terms and conditions laid down in this Regulation, Member States shall lay down the rules on penalties, including administrative fines, applicable to infringements of this Regulation and shall take all measures necessary to ensure that they are properly and effectively implemented and aligned with the guidelines issued by the Board, as referred to in Article 58(c)(iii). The penalties provided for shall be effective, proportionate, and dissuasive. They shall take into particular account the interests of small-scale providers and start-ups and their economic viability.
2022/06/13
Committee: IMCOLIBE
Amendment 3088 #
Proposal for a regulation
Annex III – paragraph 1 – point 2 – introductory part
2. Critical infrastructure and protection of the environment:
2022/06/13
Committee: IMCOLIBE
Amendment 3093 #
Proposal for a regulation
Annex III – paragraph 1 – point 2 – point a
(a) AI systems intended to be used as safety components in the management and operation of road traffic, digital infrastructure and the supply of water, gas, heating and electricity.
2022/06/13
Committee: IMCOLIBE
Amendment 3128 #
Proposal for a regulation
Annex III – paragraph 1 – point 5 – point b
(b) AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, in order to determine their access to credit or to other essential services. Ancillary applications such as AI applications used for the acceleration of the credit disbursement process, in the valuation of collateral, or for the internal process efficiency, as well as other subsequent applications based on the credit scoring which do not create high risks for individuals are not included in those systems;
2022/06/13
Committee: IMCOLIBE