
32 Amendments of Vincenzo SOFO related to 2021/0106(COD)

Amendment 315 #
Proposal for a regulation
Recital 1
(1) The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, marketing and use of artificial intelligence in conformity with Union values, the Universal Declaration of Human Rights, the European Convention on Human Rights and the Charter of Fundamental Rights of the EU. This Regulation pursues a number of overriding reasons of public interest, such as a high level of protection of health, safety and fundamental rights, and it ensures the free movement of AI-based goods and services cross-border, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation.
2022/06/13
Committee: IMCOLIBE
Amendment 321 #
Proposal for a regulation
Recital 2
(2) Artificial intelligence systems (AI systems) can be easily deployed in multiple sectors of the economy and society, including cross border, and circulate throughout the Union. Certain Member States have already explored the adoption of national rules to ensure that artificial intelligence is safe and is developed and used in compliance with fundamental rights obligations. Differing national rules may lead to fragmentation of the internal market and decrease legal certainty for operators that develop or use AI systems. A consistent and high level of protection throughout the Union should therefore be ensured, while divergences hampering the free circulation of AI systems and related products and services within the internal market should be prevented, by laying down uniform obligations for operators and guaranteeing the uniform protection of overriding reasons of public interest and of rights of persons throughout the internal market based on Article 114 of the Treaty on the Functioning of the European Union (TFEU). To the extent that this Regulation contains specific rules on the protection of individuals with regard to the processing of personal data concerning restrictions of the use of AI systems for ‘real-time’ remote biometric identification in publicly accessible spaces for the purpose of law enforcement, it is appropriate to base this Regulation, in as far as those specific rules are concerned, on Article 16 of the TFEU and to align it with relevant EU legislation such as the GDPR and the EUDPR. In light of those specific rules and the recourse to Article 16 TFEU, it is appropriate to consult the European Data Protection Board and to take into consideration the EDPB-EDPS Joint Opinion 5/2021.
2022/06/13
Committee: IMCOLIBE
Amendment 354 #
Proposal for a regulation
Recital 5 a (new)
(5 a) The regulatory framework addressing artificial intelligence should be without prejudice to existing and future Union laws concerning data protection, privacy, and protection of fundamental rights. In this regard, requirements of this Regulation should be consistent with the aims and objectives of, among others, the GDPR and the EUDPR. Where this Regulation addresses automated processing within the context of Article 22 of the GDPR, the requirements contained in that Article should continue to apply, ensuring the highest levels of protection for European citizens over the use of their personal data.
2022/06/13
Committee: IMCOLIBE
Amendment 597 #
Proposal for a regulation
Recital 40
(40) Certain AI systems intended for the administration of justice and democratic processes should be classified as high-risk, considering their potentially significant impact on democracy, rule of law, individual freedoms as well as the right to an effective remedy and to a fair trial. In particular, to address the risks of potential biases, errors and opacity, it is appropriate to qualify as high-risk AI systems intended to assist judicial authorities in researching and interpreting facts and the law and in applying the law to a concrete set of facts and the law. Such qualification should not extend, however, to AI systems intended for purely ancillary administrative activities that do not affect the actual administration of justice in individual cases, such as anonymisation or pseudonymisation of judicial decisions, documents or data, communication between personnel, administrative tasks or allocation of resources.
2022/06/13
Committee: IMCOLIBE
Amendment 650 #
Proposal for a regulation
Recital 51
(51) Cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts to alter their use, behaviour, performance or compromise their security properties by malicious third parties exploiting the system’s vulnerabilities. Cyberattacks against AI systems can leverage AI specific assets, such as training data sets (e.g. data poisoning) or trained models (e.g. adversarial attacks), or exploit vulnerabilities in the AI system’s digital assets or the underlying ICT infrastructure. To ensure a level of cybersecurity appropriate to the risks, suitable measures should therefore be taken by the providers of high-risk AI systems, as well as the notified bodies, competent national authorities and market surveillance authorities accessing the data of providers of high-risk AI systems, also taking into account as appropriate the underlying ICT infrastructure.
2022/06/13
Committee: IMCOLIBE
Amendment 700 #
Proposal for a regulation
Recital 68
(68) Under certain conditions, rapid availability of innovative technologies may be crucial for health and safety of persons and for society as a whole. It is thus appropriate that under exceptional reasons of public security or protection of life and health of natural persons and the protection of industrial and commercial property, Member States could authorise the placing on the market or putting into service of AI systems which have not undergone a conformity assessment.
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 870 #
Proposal for a regulation
Article 2 – paragraph 3
3. This Regulation shall not apply to AI systems designed, modified, developed or used exclusively for military purposes.
2022/06/13
Committee: IMCOLIBE
Amendment 887 #
Proposal for a regulation
Article 2 – paragraph 5 a (new)
5 a. This Regulation shall not apply to AI systems, including their output, specifically developed or used exclusively for scientific research and development purposes.
2022/06/13
Committee: IMCOLIBE
Amendment 905 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1
(1) ‘artificial intelligence system’ (AI system) means software that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals, which:
(a) receives machine and/or human-based data and inputs;
(b) infers how to achieve a given set of human-defined objectives using data-driven models created through learning or reasoning implemented with the techniques and approaches listed in Annex I, and
(c) generates outputs in the form of content (generative AI systems), predictions, recommendations, or decisions, which influence the environments it interacts with;
2022/06/13
Committee: IMCOLIBE
Amendment 1147 #
Proposal for a regulation
Article 4 a (new)
Article 4 a
Notification about the use of an AI system
1. Users of AI systems which affect natural persons, in particular, by evaluating or assessing them, making predictions about them, recommending information, goods or services to them or determining or influencing their access to goods and services, shall inform the natural persons that they are subject to the use of such an AI system.
2. The information referred to in paragraph 1 shall include a clear and concise indication of the user and the purpose of the AI system, information about the rights of the natural person conferred under this Regulation, and a reference to a publicly available resource where more information about the AI system can be found, in particular the relevant entry in the EU database referred to in Article 60, if applicable.
3. This information shall be presented in a concise, intelligible and easily accessible form, including for persons with disabilities.
4. This obligation shall be without prejudice to other Union or Member State laws, in particular Regulation 2016/679, Directive 2016/680, Regulation 2022/XXX.
2022/06/13
Committee: IMCOLIBE
Amendment 1189 #
Proposal for a regulation
Article 5 – paragraph 1 – point c – introductory part
(c) the placing on the market, putting into service or use of AI systems by public authorities or on their behalf as well as private companies, including social media and cloud service providers, for the evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social behaviour or known or predicted personal or personality characteristics, with the social score leading to either or both of the following:
2022/06/13
Committee: IMCOLIBE
Amendment 1275 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point iii
(iii) the detection, localisation, identification or prosecution of a perpetrator or suspect of a criminal offence referred to in Article 2(2) of Council Framework Decision 2002/584/JHA62 and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years, as determined by the law of that Member State.
deleted
_________________
62 Council Framework Decision 2002/584/JHA of 13 June 2002 on the European arrest warrant and the surrender procedures between Member States (OJ L 190, 18.7.2002, p. 1).
2022/06/13
Committee: IMCOLIBE
Amendment 1360 #
Proposal for a regulation
Article 5 – paragraph 2 – point b a (new)
(b a) the full respect of fundamental rights and freedoms in conformity with Union values, the Universal Declaration of Human Rights, the European Convention on Human Rights and the Charter of Fundamental Rights of the EU.
2022/06/13
Committee: IMCOLIBE
Amendment 1389 #
Proposal for a regulation
Article 5 – paragraph 4
4. A Member State may decide to provide for the possibility to fully or partially authorise the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement within the limits and under the conditions listed in paragraphs 1, point (d), 2 and 3. That Member State shall lay down in its national law the necessary detailed rules for the request, issuance and exercise of, as well as supervision relating to, the authorisations referred to in paragraph 3. Those rules shall fully comply with EU values, the Universal Declaration of Human Rights, the European Convention on Human Rights and the Charter of Fundamental Rights of the EU and shall specify in respect of which of the objectives listed in paragraph 1, point (d), including which of the criminal offences referred to in point (iii) thereof, the competent authorities may be authorised to use those systems for the purpose of law enforcement.
2022/06/13
Committee: IMCOLIBE
Amendment 1441 #
Proposal for a regulation
Article 6 – paragraph 2
2. In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall also be considered high-risk, if they pose a risk of harm to either physical health and safety or human rights, or both.
2022/06/13
Committee: IMCOLIBE
Amendment 1483 #
Proposal for a regulation
Article 7 – paragraph 1 – point b
(b) the AI systems pose a risk of harm to the health, natural environment and safety, or a risk of adverse impact on fundamental rights, that is, in respect of its severity and probability of occurrence, equivalent to or greater than the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III.
2022/06/13
Committee: IMCOLIBE
Amendment 1492 #
Proposal for a regulation
Article 7 – paragraph 2 – introductory part
2. When assessing for the purposes of paragraph 1 whether an AI system poses a risk of harm to the health, natural environment and safety or a risk of adverse impact on fundamental rights that is equivalent to or greater than the risk of harm posed by the high-risk AI systems already referred to in Annex III, the Commission shall take into account the following criteria:
2022/06/13
Committee: IMCOLIBE
Amendment 1500 #
Proposal for a regulation
Article 7 – paragraph 2 – point b
(b) the extent to which an AI system has been used or is likely to be used and misused;
2022/06/13
Committee: IMCOLIBE
Amendment 1509 #
Proposal for a regulation
Article 7 – paragraph 2 – point c
(c) the extent to which the use of an AI system has already caused harm to the health, natural environment and safety or adverse impact on the fundamental rights or has given rise to significant concerns in relation to the materialisation of such harm or adverse impact, as demonstrated by reports or documented allegations submitted to national competent authorities;
2022/06/13
Committee: IMCOLIBE
Amendment 1607 #
Proposal for a regulation
Article 9 – paragraph 4 – introductory part
4. The risk management measures referred to in paragraph 2, point (d) shall be such that the overall residual risk of the high-risk AI systems is reasonably judged to be acceptable, having regard to the benefits that the high-risk AI systems are reasonably expected to deliver, and provided that the high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, subject to terms, conditions as made available by the provider, and contractual and license restrictions. Those residual risks shall be communicated to the user.
2022/06/13
Committee: IMCOLIBE
Amendment 1626 #
Proposal for a regulation
Article 9 – paragraph 4 – subparagraph 1 – point c
(c) provision of adequate information pursuant to Article 13, in particular as regards the risks referred to in paragraph 2, point (b) of this Article, and, where appropriate, training to users and relevant information on necessary competence, training and authority for natural persons exercising such oversight.
2022/06/13
Committee: IMCOLIBE
Amendment 1700 #
Proposal for a regulation
Article 10 – paragraph 2 – point f
(f) examination in view of possible biases that are likely to affect health and safety of persons or lead to discrimination prohibited by Union law;
2022/06/13
Committee: IMCOLIBE
Amendment 1704 #
Proposal for a regulation
Article 10 – paragraph 2 – point g
(g) the identification of any other data gaps or shortcomings that materially increase the risks of harm to the health, natural environment and safety or the fundamental rights of persons, and how those gaps and shortcomings can be addressed.
2022/06/13
Committee: IMCOLIBE
Amendment 1908 #
Proposal for a regulation
Article 16 – paragraph 1 a (new)
The obligations contained in paragraph 1 shall be without prejudice to obligations applicable to providers of high-risk AI systems arising from Regulation (EU) 2016/679 of the European Parliament and of the Council and Regulation (EU) 2018/1725 of the European Parliament and of the Council.
2022/06/13
Committee: IMCOLIBE
Amendment 2069 #
Proposal for a regulation
Article 29 – paragraph 6 a (new)
6 a. Users of high-risk systems involving an emotion recognition system or a biometric categorisation system in accordance with Article 52 shall implement suitable measures to safeguard the natural person's rights and freedoms and legitimate interests in such a system, including providing the natural person with the ability to express his or her point of view on the resulting categorisation and to contest the decision.
2022/06/13
Committee: IMCOLIBE
Amendment 2081 #
Proposal for a regulation
Article 29 a (new)
Article 29 a
Human rights impact assessment for high-risk AI systems
1. The user of a high-risk AI system as defined in Article 6 paragraph 2 may conduct an assessment of the system’s impact on fundamental rights and public interest in the context of use before putting the system into use and at least every three years afterwards. This assessment shall include, at minimum, the following:
a) a clear outline of the intended purpose for which the system will be used;
b) a clear outline of the intended geographic and temporal scope of the system’s use;
c) categories of natural persons and groups likely to be affected by the use of the system;
d) the likely impact on human rights of affected persons identified pursuant to point (c), including any indirect impacts or consequences of the system’s use;
e) in the case of public authorities, any other impact on the public interest, including democracy and allocation of public funds;
2. Where the user of a high-risk AI system is already required to carry out a data protection impact assessment under Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680, the impact assessment outlined in paragraph 1 may be conducted in conjunction with the data protection impact assessment. The user may publish the results of both assessments, following the obligation under Article 51 paragraph 2.
2022/06/13
Committee: IMCOLIBE
Amendment 2105 #
Proposal for a regulation
Article 33 – paragraph 6
6. Notified bodies shall have documented procedures in place ensuring that their personnel, committees, subsidiaries, subcontractors and any associated body or personnel of external bodies respect the confidentiality of the information which comes into their possession during the performance of conformity assessment activities, except when disclosure is required by law. The staff of notified bodies shall be bound to observe professional secrecy with regard to all information obtained in carrying out their tasks under this Regulation, except in relation to the notifying authorities of the Member State in which their activities are carried out. Any information and documentation obtained by notified bodies pursuant to the provisions of this Article shall be treated in compliance with the confidentiality obligations set out in Article 70.
2022/06/13
Committee: IMCOLIBE
Amendment 2574 #
Proposal for a regulation
Article 59 – paragraph 4 a (new)
4 a. National competent authorities shall satisfy the minimum cybersecurity requirements set out for public administration entities identified as operators of essential services pursuant to Directive (…) on measures for a high common level of cybersecurity across the Union, repealing Directive (EU) 2016/1148.
2022/06/13
Committee: IMCOLIBE
Amendment 2575 #
Proposal for a regulation
Article 59 – paragraph 4 b (new)
4 b. Any information and documentation obtained by the national competent authorities pursuant to the provisions of this Article shall be treated in compliance with the confidentiality obligations set out in Article 70.
2022/06/13
Committee: IMCOLIBE
Amendment 2635 #
Proposal for a regulation
Article 60 – paragraph 5 a (new)
5 a. Any information and documentation obtained by the Commission and Member States pursuant to the provisions of this Article shall be treated in compliance with the confidentiality obligations set out in Article 70.
2022/06/13
Committee: IMCOLIBE
Amendment 3090 #
Proposal for a regulation
Annex III – paragraph 1 – point 2 – point a
(a) AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity, whose failure or malfunctioning would directly cause significant harm to the health, natural environment or safety of natural persons.
2022/06/13
Committee: IMCOLIBE
Amendment 3113 #
Proposal for a regulation
Annex III – paragraph 1 – point 4 – point b
(b) AI systems intended to be used for making decisions on promotion and termination of work-related contractual relationships, for task allocation based on individual behaviour or personal traits or characteristics, and for monitoring and evaluating performance and behaviour of persons in such relationships that have a likelihood of causing harm to the physical health and safety or adverse impact on the fundamental rights or have given rise to significant concerns in relation to the materialisation of such harm or adverse impact.
2022/06/13
Committee: IMCOLIBE