272 Amendments of Kosma ZŁOTOWSKI related to 2021/0106(COD)
Amendment 97 #
Proposal for a regulation
Recital 33
(33) Technical inaccuracies of AI systems intended for the remote biometric identification of natural persons can lead to biased results and entail discriminatory effects. This is particularly relevant when it comes to age, ethnicity, sex or disabilities. Therefore, ‘real-time’ and ‘post’ remote biometric identification systems should be classified as high-risk. In view of the risks that they may pose, both types of remote biometric identification systems should be subject to specific requirements on logging capabilities and, when appropriate and justified by a proven added value to the protection of health, safety and fundamental rights, human oversight.
Amendment 98 #
Proposal for a regulation
Recital 34
(34) As regards the management and operation of critical infrastructure, it is appropriate to classify as high-risk the AI systems intended to be used as safety components in the management and operation of road, air and railway traffic and the supply of water, gas, heating and electricity, since their failure or malfunctioning may put at risk the life and health of persons at large scale and lead to appreciable disruptions in the ordinary conduct of social and economic activities.
Amendment 106 #
Proposal for a regulation
Recital 48
(48) High-risk AI systems should be designed and developed in such a way that natural persons may, when appropriate, oversee their functioning. For this purpose, when it brings a proven added value to the protection of health, safety and fundamental rights, appropriate human oversight measures should be identified by the provider of the system before its placing on the market or putting into service. In particular, where appropriate, such measures should guarantee that the system is subject to in-built operational constraints that cannot be overridden by the system itself and is responsive to the human operator, and that the natural persons to whom human oversight has been assigned have the necessary competence, training and authority to carry out that role.
Amendment 107 #
Proposal for a regulation
Recital 51
(51) Cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts to alter their use, behaviour, performance or compromise their security properties by malicious third parties exploiting the system’s vulnerabilities. Cyberattacks against AI systems can leverage AI specific assets, such as training data sets (e.g. data poisoning) or trained models (e.g. adversarial attacks), or exploit vulnerabilities in the AI system’s digital assets or the underlying ICT infrastructure. To ensure a level of cybersecurity appropriate to the risks, suitable measures should therefore be taken by the providers of high-risk AI systems, as well as the notified bodies, competent national authorities and market surveillance authorities accessing the data of providers of high-risk AI systems, also taking into account as appropriate the underlying ICT infrastructure.
Amendment 108 #
Proposal for a regulation
Recital 54
(54) The provider should establish a sound quality management system, ensure the accomplishment of the required conformity assessment procedure, draw up the relevant documentation in the language of the Member State concerned and establish a robust post-market monitoring system. All elements, from design to future development, must be transparent for the user. Public authorities which put into service high-risk AI systems for their own use may adopt and implement the rules for the quality management system as part of the quality management system adopted at a national or regional level, as appropriate, taking into account the specificities of the sector and the competences and organisation of the public authority in question.
Amendment 113 #
Proposal for a regulation
Recital 73
(73) In order to promote and protect innovation, it is important that the interests of small-scale providers and users of AI systems are taken into particular account. To this objective, AI solutions and services designed to combat fraud and protect consumers against fraudulent activities should not be considered high risk, nor prohibited. As a matter of substantial public interest, it is vital that this Regulation does not undermine the incentive of the industry to create and roll out solutions designed to combat fraud across the European Union. Furthermore, Member States should develop initiatives, which are targeted at those operators, including on awareness raising and information communication. Moreover, the specific interests and needs of small-scale providers shall be taken into account when Notified Bodies set conformity assessment fees. Translation costs related to mandatory documentation and communication with authorities may constitute a significant cost for providers and other operators, notably those of a smaller scale. Member States should possibly ensure that one of the languages determined and accepted by them for relevant providers’ documentation and for communication with operators is one which is broadly understood by the largest possible number of cross-border users.
Amendment 119 #
Proposal for a regulation
Article 2 – paragraph 3
3. This Regulation shall not apply to AI systems specially designed, modified, developed or used exclusively for military purposes.
Amendment 122 #
Proposal for a regulation
Article 2 – paragraph 5 a (new)
5 a. This Regulation shall not apply to AI systems, including their output, specifically developed and put into service for the sole purpose of scientific research and development.
Amendment 126 #
Proposal for a regulation
Article 2 – paragraph 5 b (new)
5 b. This Regulation shall not affect any research and development activity regarding AI systems in so far as such activity does not lead to or entail placing an AI system on the market or putting it into service.
Amendment 127 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1
(1) ‘artificial intelligence system’ (AI system) means software that displays intelligent behaviour by analysing its environment and taking actions – with some degree of autonomy – to achieve specific goals, and which: (a) receives machine and/or human-based data and inputs; (b) infers how to achieve a given set of human-defined objectives using learning, reasoning or modelling implemented with the techniques and approaches listed in Annex I, and (c) generates outputs in the form of content (generative AI systems), predictions, recommendations, or decisions, which influence the environments it interacts with;
Amendment 133 #
Proposal for a regulation
Article 3 – paragraph 1 – point 4 a (new)
(4 a) 'End-user' means any natural person who, in the framework of employment, contract or agreement with the deployer, uses the AI system under the authority of the deployer;
Amendment 135 #
Proposal for a regulation
Article 3 – paragraph 1 – point 11
(11) ‘putting into service’ means the supply of an AI system for first use directly to the end-user for its intended purpose;
Amendment 138 #
Proposal for a regulation
Article 3 – paragraph 1 – point 13
(13) ‘reasonably foreseeable misuse’ means the use of an AI system in a way that is not in accordance with its purpose as indicated in the instructions for use or technical specification, but which may result from reasonably foreseeable human behaviour or interaction with other systems;
Amendment 139 #
Proposal for a regulation
Article 3 – paragraph 1 – point 14
(14) ‘safety component of a product or system’ means a component of a product or of a system which fulfils a safety function for that product or system or the failure or malfunctioning of which endangers the health and safety of persons or property;
Amendment 141 #
Proposal for a regulation
Article 3 – paragraph 1 – point 36
(36) ‘remote biometric identification system’ means an AI system for the purpose of identifying natural persons at a physical distance through a ‘one-to-many’ comparison where the persons identified do not claim to have a particular identity but where that identity is otherwise established – without the conscious cooperation of these persons or against their will – by matching live templates with templates stored in a template database;
Amendment 146 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 a (new)
(44 a) 'critical infrastructure' means an asset, system or part thereof which is necessary for the delivery of a service that is essential for the maintenance of vital societal functions or economic activities within the meaning of Article 2(4) and (5) of Directive (…) on the resilience of critical entities;
Amendment 151 #
Proposal for a regulation
Article 5 – paragraph 1 – point a
(a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is intended to cause that person or another person physical or psychological harm;
Amendment 152 #
Proposal for a regulation
Article 5 – paragraph 1 – point c – point i
(i) preferential, detrimental or unfavourable treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected;
Amendment 153 #
Proposal for a regulation
Article 5 – paragraph 1 – point c – point ii
(ii) preferential, detrimental or unfavourable treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behaviour or its gravity;
Amendment 164 #
Proposal for a regulation
Article 6 – paragraph 1 – point a
(a) the AI system is intended to be used as a main safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex II;
Amendment 167 #
Proposal for a regulation
Article 6 – paragraph 2 a (new)
2 a. The classification as high-risk as a consequence of Article 6(1) and 6(2) shall be disregarded for AI systems whose intended purpose demonstrates that the generated output is a recommendation requiring a human intervention to convert this recommendation into a decision, and for AI systems which do not lead to autonomous decisions or actions of the overall system.
Amendment 191 #
Proposal for a regulation
Article 9 – paragraph 4 – introductory part
4. The risk management measures referred to in paragraph 2, point (d) shall be such that any residual risk associated with each hazard as well as the overall residual risk of the high-risk AI systems is judged acceptable, provided that the high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, subject to terms and conditions as made available by the provider, and contractual and license restrictions. Those residual risks shall be communicated to the user.
Amendment 197 #
Proposal for a regulation
Article 10 – paragraph 2 – introductory part
2. Training, validation and testing data sets shall be subject to appropriate data governance and management practices. Where relevant to appropriate risk management measures, those practices shall concern in particular,
Amendment 198 #
Proposal for a regulation
Article 10 – paragraph 2 – point e
(e) an assessment of the availability, quantity and suitability of the data sets that are needed;
Amendment 199 #
Proposal for a regulation
Article 10 – paragraph 2 – point f
(f) examination in view of possible biases that are likely to affect the health and safety of persons or lead to discrimination prohibited by Union law;
Amendment 200 #
Proposal for a regulation
Article 10 – paragraph 2 – point g
(g) the identification of any other data gaps or shortcomings that materially increase the risks of harm to the health, natural environment and safety or the fundamental rights of persons, and how those gaps and shortcomings can be addressed.
Amendment 201 #
Proposal for a regulation
Article 10 – paragraph 3
3. Training, validation and testing data sets shall be relevant, sufficiently diverse to mitigate bias, and, to the best extent possible, representative, free of errors and complete. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used. These characteristics of the data sets may be met at the level of individual data sets or a combination thereof.
Amendment 205 #
Proposal for a regulation
Article 10 – paragraph 4
4. Training, validation and testing data sets shall be sufficiently diverse to accurately capture, to the extent required by the intended purpose, the characteristics or elements that are particular to the specific geographical, behavioural or functional setting within which the high-risk AI system is intended to be used.
Amendment 209 #
Proposal for a regulation
Article 12 – paragraph 2
2. The logging capabilities shall ensure a level of traceability of the AI system’s functioning while the AI system is used within its lifecycle that is appropriate to the intended purpose of the system.
Amendment 210 #
Proposal for a regulation
Article 12 – paragraph 3 a (new)
3 a. For records constituting trade secrets as defined in Article 2 of Directive (EU) 2016/943, the provider may elect to confidentially provide such trade secrets only to relevant public authorities to the extent necessary for such authorities to perform their obligations hereunder.
Amendment 211 #
Proposal for a regulation
Article 13 – paragraph 2
2. High-risk AI systems shall be accompanied by instructions for use in an appropriate digital format or otherwise made available, that include concise, complete, correct and clear information that is reasonably relevant, accessible and comprehensible to users to assist them in operating and maintaining the AI system, taking into consideration the system’s intended purpose and the expected audience for the instructions.
Amendment 213 #
Proposal for a regulation
Article 13 – paragraph 3 – point b – introductory part
(b) the characteristics, capabilities and limitations of performance of the high-risk AI system that are relevant to the material risks associated with the intended purpose, including, where appropriate:
Amendment 214 #
Proposal for a regulation
Article 13 – paragraph 3 – point b – point ii
(ii) the level of accuracy, robustness and cybersecurity referred to in Article 15 against which the high-risk AI system has been tested and validated and which can be expected, and any known and reasonably foreseeable circumstances that could materially impact that expected level of accuracy, robustness and cybersecurity;
Amendment 216 #
Proposal for a regulation
Article 13 – paragraph 3 – point e
(e) the expected lifetime of the high-risk AI system, a description of the procedure for withdrawing it from use and any necessary maintenance and care measures to ensure the proper functioning of that AI system, including as regards software updates.
Amendment 218 #
Proposal for a regulation
Article 14 – paragraph 2
2. Human oversight shall aim at preventing or minimising the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, in particular when such risks persist notwithstanding the application of other requirements set out in this Chapter. Human oversight requirements shall only apply when appropriate, proportionate and justified by a proven added value to the protection of health, safety and fundamental rights, such justification residing in improved accuracy measured in the outcomes and results delivered by high-risk AI systems.
Amendment 220 #
Proposal for a regulation
Article 14 – paragraph 4 – introductory part
4. The measures referred to in paragraph 3 shall enable the individuals to whom human oversight is assigned to do the following, as appropriate and proportionate to the circumstances:
Amendment 224 #
Proposal for a regulation
Article 14 – paragraph 4 – point a
(a) fully understand the capacities and limitations of the high-risk AI system and be able to duly monitor its operation, so that signs of anomalies, dysfunctions and unexpected performance can be detected and addressed as soon as possible;
Amendment 228 #
Proposal for a regulation
Article 14 – paragraph 5
5. For high-risk AI systems referred to in point 1(a) of Annex III and for which human oversight is effectively justified by a proven added value to the protection of health, safety and fundamental rights, the measures referred to in paragraph 3 shall be such as to ensure that, in addition, no action or decision is taken by the user on the basis of the identification resulting from the system unless this has been separately verified and confirmed by at least two natural persons.
Amendment 229 #
Proposal for a regulation
Article 15 – paragraph 3 – introductory part
3. Providers and deployers should take all appropriate and feasible technical and organisational measures to ensure that high-risk AI systems are resilient as regards errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, in particular due to their interaction with natural persons or other systems.
Amendment 230 #
Proposal for a regulation
Article 29 – paragraph 1
1. Users of high-risk AI systems shall bear sole responsibility in case of any use of the AI system that is not in accordance with the instructions of use accompanying the systems, pursuant to paragraphs 2 and 5.
Amendment 234 #
Proposal for a regulation
Article 33 – paragraph 2
2. Notified bodies shall satisfy the minimum cybersecurity requirements set out for public administration entities identified as operators of essential services pursuant to Directive (…) on measures for a high common level of cybersecurity across the Union, repealing Directive (EU) 2016/1148;
Amendment 235 #
Proposal for a regulation
Article 33 – paragraph 6
6. Notified bodies shall have documented procedures in place ensuring that their personnel, committees, subsidiaries, subcontractors and any associated body or personnel of external bodies respect the confidentiality of the information which comes into their possession during the performance of conformity assessment activities, except when disclosure is required by law. The staff of notified bodies shall be bound to observe professional secrecy with regard to all information obtained in carrying out their tasks under this Regulation, except in relation to the notifying authorities of the Member State in which their activities are carried out. Any information and documentation obtained by notified bodies pursuant to the provisions of this Article shall be treated in compliance with the confidentiality obligations set out in Article 70.
Amendment 244 #
Proposal for a regulation
Article 52 – paragraph 1
1. Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. This obligation shall not apply to AI systems authorised by law to detect, prevent, investigate and prosecute criminal offences, unless those systems are available for the public to report a criminal offence.
Amendment 246 #
Proposal for a regulation
Article 52 – paragraph 3 – introductory part
3. Users of an AI system that generates or manipulates text, image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful (‘deep fake’), shall disclose that the content has been artificially generated or manipulated.
Amendment 251 #
Proposal for a regulation
Article 53 – paragraph 1
1. AI regulatory sandboxes established by one or more Member States’ competent authorities or the European Data Protection Supervisor shall provide a controlled environment that facilitates the development, testing and validation of innovative AI systems for a limited time before their placement on the market or putting into service pursuant to a specific plan. This shall take place under the direct supervision and guidance by the competent authorities with a view to ensuring compliance with the requirements of this Regulation and, where relevant, other Union and Member States’ legislation supervised within the sandbox.
Amendment 255 #
Proposal for a regulation
Article 53 – paragraph 5
5. Member States’ competent authorities that have established AI regulatory sandboxes shall coordinate their activities and cooperate within the framework of the European Artificial Intelligence Board. They shall submit annual reports to the Board and the Commission on the results from the implementation of those schemes, including good practices, lessons learnt and recommendations on their setup and, where relevant, on the application of this Regulation and other Union legislation supervised within the sandbox.
Amendment 262 #
Proposal for a regulation
Article 57 – paragraph 1
1. The Board shall be composed of the national supervisory authorities, who shall be represented by the head or equivalent high-level official of that authority, and the European Data Protection Supervisor, AI ethics experts and industry representatives. Other national authorities may be invited to the meetings, where the issues discussed are of relevance for them.
Amendment 264 #
Proposal for a regulation
Article 57 – paragraph 3
3. The Board shall be co-chaired by the Commission and a representative chosen from among the delegates of the Member States. The Commission shall convene the meetings and prepare the agenda in accordance with the tasks of the Board pursuant to this Regulation and with its rules of procedure. The Commission shall provide administrative and analytical support for the activities of the Board pursuant to this Regulation.
Amendment 268 #
Proposal for a regulation
Article 59 – paragraph 4 a (new)
4 a. National competent authorities shall satisfy the minimum cybersecurity requirements set out for public administration entities identified as operators of essential services pursuant to Directive (…) on measures for a high common level of cybersecurity across the Union, repealing Directive (EU) 2016/1148.
Amendment 269 #
Proposal for a regulation
Article 59 – paragraph 4 b (new)
4 b. Any information and documentation obtained by the national competent authorities pursuant to the provisions of this Article shall be treated in compliance with the confidentiality obligations set out in Article 70.
Amendment 270 #
Proposal for a regulation
Article 59 – paragraph 7
7. National competent authorities may provide guidance and advice on the implementation of this Regulation, including to small-scale providers. Whenever national competent authorities intend to provide guidance and advice with regard to an AI system in areas covered by other Union legislation, the competent national authorities under that Union legislation shall be consulted, as appropriate. Member States shall also establish one central contact point for communication with operators. In addition, the central contact point of each Member State should be contactable through electronic communications means.
Amendment 272 #
Proposal for a regulation
Article 60 – paragraph 4
4. The EU database shall contain personal data only insofar as necessary for collecting and processing information in accordance with this Regulation. That information shall not contain any confidential business information or trade secrets of a natural or legal person, including source code.
Amendment 273 #
Proposal for a regulation
Article 60 – paragraph 5 a (new)
5 a. Any information and documentation obtained by the Commission and Member States pursuant to the provisions of this Article shall be treated in compliance with the confidentiality obligations set out in Article 70.
Amendment 275 #
Proposal for a regulation
Article 61 – paragraph 2
2. The post-market monitoring system shall actively and systematically collect, document and analyse relevant data provided by users and end-users or collected through other sources on the performance of high-risk AI systems throughout their lifetime, and allow the provider to evaluate the continuous compliance of AI systems with the requirements set out in Title III, Chapter 2.
Amendment 276 #
Proposal for a regulation
Article 64 – paragraph 1
1. Access to data and documentation in the context of their activities, the market surveillance authorities shall be granted adequate access to the training, validation and testing datasets used by the provider, including through application programming interfaces (‘API’) or other appropriate technical means and tools enabling remote access, taking into account the scope of access agreed with the relevant data subjects or data holders.
Amendment 277 #
Proposal for a regulation
Article 64 – paragraph 2
2. Where necessary to assess the conformity of the high-risk AI system with the requirements set out in Title III, Chapter 2 and upon a reasoned request, the market surveillance authorities shall be granted access to the source code of the AI system. AI providers or deployers should support market surveillance authorities with the necessary facilities to carry out testing to confirm compliance.
Amendment 281 #
Proposal for a regulation
Article 70 – paragraph 1 – introductory part
1. National competent authorities, market surveillance authorities and notified bodies involved in the application of this Regulation shall respect the confidentiality of information and data obtained in carrying out their tasks and activities in such a manner as to protect, in particular:
Amendment 282 #
Proposal for a regulation
Article 70 – paragraph 1 a (new)
Amendment 283 #
Proposal for a regulation
Article 70 – paragraph 1 b (new)
1 b. Information and data collected by national competent authorities, market surveillance authorities and notified bodies and referred to in paragraph 1 shall be: a) collected for specified, explicit and legitimate purposes and not further processed in a way incompatible with those purposes; further processing for archiving purposes in the public interest, for scientific or historical research purposes or for statistical purposes shall not be considered incompatible with the original purposes (‘purpose limitation’); b) adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed (‘data minimisation’);
Amendment 284 #
Proposal for a regulation
Article 71 – paragraph 1 a (new)
1 a. In cases where administrative fines have been imposed under Article 83 of Regulation (EU) 2016/679, no further penalties shall be imposed on operators under the AI Act;
Amendment 285 #
Proposal for a regulation
Article 72 – paragraph 1 – point a
(a) the nature, gravity and duration of the infringement and of its consequences, taking into account the number of subjects affected and the level of damage suffered by them;
Amendment 286 #
Proposal for a regulation
Article 72 – paragraph 1 – point a a (new)
(a a) the intentional or negligent character of the infringement;
Amendment 287 #
Proposal for a regulation
Article 72 – paragraph 1 – point a b (new)
(a b) any relevant previous infringement;
Amendment 288 #
Proposal for a regulation
Article 72 – paragraph 1 – point b a (new)
(b a) the degree of cooperation with the supervisory authority, in order to remedy the infringement and mitigate the possible adverse effects of the infringement;
Amendment 289 #
Proposal for a regulation
Article 72 – paragraph 1 – point b b (new)
(b b) any action taken by the provider to mitigate the damage suffered by subjects;
Amendment 290 #
Proposal for a regulation
Article 72 – paragraph 1 – point c a (new)
(c a) any other aggravating or mitigating factor applicable to the circumstances of the case, such as financial benefits gained, or losses avoided, directly or indirectly, from the infringement.
Amendment 295 #
Proposal for a regulation
Article 84 – paragraph 1
Amendment 296 #
Proposal for a regulation
Article 84 – paragraph 1 a (new)
1 a. The Commission shall assess the need for amendment of the list in Annex I every 24 months following the entry into force of this Regulation and until the end of the period of the delegation of power.
Amendment 297 #
Proposal for a regulation
Article 84 – paragraph 1 b (new)
1 b. The Commission shall assess the need for amendment of the list in Annex III every 24 months following the entry into force of this Regulation and until the end of the period of the delegation of power. The findings of that assessment shall be presented to the European Parliament and the Council.
Amendment 298 #
Proposal for a regulation
Article 84 – paragraph 2
2. By [two years after the date of application of this Regulation referred to in Article 85(2)] and every three years thereafter, the Commission shall submit a report on the evaluation and review of this Regulation to the European Parliament and to the Council. The reports shall be made public.
Amendment 300 #
ARTIFICIAL INTELLIGENCE TECHNIQUES AND APPROACHES referred to in Article 3, point 1
Amendment 302 #
Proposal for a regulation
Annex I – point c
(c) Statistical approaches, Bayesian estimation, forecasting, search and optimization methods.
Amendment 303 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a
(a) AI systems intended to be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons without their consent to being identified;
Amendment 305 #
Proposal for a regulation
Annex III – paragraph 1 – point 2 – point a
(a) AI systems intended to be used as safety components in the management and operation of road, air, railway traffic and the supply of water, gas, heating and electricity, whose failure or malfunctioning would directly cause significant harm to the health, natural environment or safety of natural persons.
Amendment 307 #
Proposal for a regulation
Annex III – paragraph 1 – point 4 – point a
(a) AI systems intended to be used for the sole purpose of recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, evaluating candidates in the course of interviews or tests;
Amendment 310 #
Proposal for a regulation
Annex IV – paragraph 1 – point 2 – point a
(a) provided that no confidential information or trade secrets are disclosed, the methods and steps performed for the development of the AI system, including, where relevant, recourse to pre-trained systems or tools provided by third parties and how these have been used, integrated or modified by the provider;
Amendment 311 #
Proposal for a regulation
Annex IV – paragraph 1 – point 2 – point b
(b) provided that no confidential information or trade secrets are disclosed, the design specifications of the system, namely the general logic of the AI system and of the algorithms; the key design choices including the rationale and assumptions made, also with regard to persons or groups of persons on which the system is intended to be used; the main classification choices; what the system is designed to optimise for and the relevance of the different parameters; the decisions about any possible trade-off made regarding the technical solutions adopted to comply with the requirements set out in Title III, Chapter 2;
Amendment 315 #
Proposal for a regulation
Recital 1
(1) The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, marketing and use of artificial intelligence in conformity with Union values, the Universal Declaration of Human Rights, the European Convention on Human Rights and the Charter of Fundamental Rights of the EU. This Regulation pursues a number of overriding reasons of public interest, such as a high level of protection of health, safety and fundamental rights, and it ensures the free movement of AI-based goods and services cross-border, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation.
Amendment 321 #
Proposal for a regulation
Recital 2
(2) Artificial intelligence systems (AI systems) can be easily deployed in multiple sectors of the economy and society, including cross border, and circulate throughout the Union. Certain Member States have already explored the adoption of national rules to ensure that artificial intelligence is safe and is developed and used in compliance with fundamental rights obligations. Differing national rules may lead to fragmentation of the internal market and decrease legal certainty for operators that develop or use AI systems. A consistent and high level of protection throughout the Union should therefore be ensured, while divergences hampering the free circulation of AI systems and related products and services within the internal market should be prevented, by laying down uniform obligations for operators and guaranteeing the uniform protection of overriding reasons of public interest and of rights of persons throughout the internal market based on Article 114 of the Treaty on the Functioning of the European Union (TFEU). To the extent that this Regulation contains specific rules on the protection of individuals with regard to the processing of personal data concerning restrictions of the use of AI systems for ‘real-time’ remote biometric identification in publicly accessible spaces for the purpose of law enforcement, it is appropriate to base this Regulation, in as far as those specific rules are concerned, on Article 16 of the TFEU and to align it with relevant EU legislation such as the GDPR and the EUDPR. In light of those specific rules and the recourse to Article 16 TFEU, it is appropriate to consult the European Data Protection Board and to take into consideration the EDPB-EDPS Joint Opinion 5/2021.
Amendment 354 #
Proposal for a regulation
Recital 5 a (new)
(5 a) The regulatory framework addressing artificial intelligence should be without prejudice to existing and future Union laws concerning data protection, privacy, and protection of fundamental rights. In this regard, requirements of this Regulation should be consistent with the aims and objectives of, among others, the GDPR and the EUDPR. Where this Regulation addresses automated processing within the context of Article 22 of the GDPR, the requirements contained in that article should continue to apply, ensuring the highest levels of protection for European citizens over the use of their personal data.
Amendment 369 #
Proposal for a regulation
Recital 32
(32) As regards stand-alone AI systems, meaning high-risk AI systems other than those that are safety components of products, or which are themselves products, it is appropriate to classify them as high-risk if, in the light of their intended purpose, they pose a high risk of harm to the health, natural environment and safety or the fundamental rights of persons, taking into account both the severity of the possible harm and its probability of occurrence and they are used in a number of specifically pre-defined areas specified in the Regulation. The identification of those systems is based on the same methodology and criteria envisaged also for any future amendments of the list of high-risk AI systems.
Amendment 377 #
Proposal for a regulation
Recital 8
(8) The notion of remote biometric identification system as used in this Regulation should be defined functionally, as an AI system intended for the identification of natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference data repository, excluding verification/authentication systems whose sole purpose is to confirm that a specific natural person is the person he or she claims to be, and systems that are used to confirm the identity of a natural person for the sole purpose of having access to a service, a device or premises, and without prior knowledge whether the targeted person will be present and can be identified, irrespectively of the particular technology, processes or types of biometric data used. Considering their different characteristics and manners in which they are used, as well as the different risks involved, a distinction should be made between ‘real-time’ and ‘post’ remote biometric identification systems. In the case of ‘real-time’ systems, the capturing of the biometric data, the comparison and the identification occur all instantaneously, near-instantaneously or in any event without a significant delay. In this regard, there should be no scope for circumventing the rules of this Regulation on the ‘real-time’ use of the AI systems in question by providing for minor delays. ‘Real-time’ systems involve the use of ‘live’ or ‘near-live’ material, such as video footage, generated by a camera or other device with similar functionality. In the case of ‘post’ systems, in contrast, the biometric data have already been captured and the comparison and identification occur only after a significant delay. This involves material, such as pictures or video footage generated by closed circuit television cameras or private devices, which has been generated before the use of the system in respect of the natural persons concerned.
Amendment 400 #
Proposal for a regulation
Recital 12 a (new)
(12 a) This Regulation should also ensure harmonisation and consistency in definitions and terminology as biometric techniques can, in the light of their primary function, be divided into techniques of biometric identification, authentication and verification. Biometric authentication means the process of matching an identifier to a specific stored identifier in order to grant access to a device or service, whilst biometric verification refers to the process of confirming that an individual is who they claim to be. As they do not involve any “one-to-many” comparison of biometric data that is the distinctive trait of identification, both biometric verification and authentication should be excluded from the scope of this Regulation.
Amendment 402 #
Proposal for a regulation
Recital 51
(51) Cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts to alter their use, behaviour, performance or compromise their security properties by malicious third parties exploiting the system’s vulnerabilities. Cyberattacks against AI systems can leverage AI specific assets, such as training data sets (e.g. data poisoning) or trained models (e.g. adversarial attacks), or exploit vulnerabilities in the AI system’s digital assets or the underlying ICT infrastructure. To ensure a level of cybersecurity appropriate to the risks, suitable measures should therefore be taken by the providers of high-risk AI systems, as well as the notified bodies, competent national authorities and market surveillance authorities accessing the data of providers of high-risk AI systems, also taking into account as appropriate the underlying ICT infrastructure.
Amendment 452 #
Proposal for a regulation
Article 2 – paragraph 3
3. This Regulation shall not apply to AI systems specially designed, modified, developed or used exclusively for military purposes.
Amendment 455 #
Proposal for a regulation
Article 2 – paragraph 4
4. This Regulation shall not apply to public authorities in a third country nor to international organisations falling within the scope of this Regulation pursuant to paragraph 1, where those authorities or organisations use AI systems in the framework of international cooperation or agreements for law enforcement and judicial cooperation or in the context of border checks, asylum and immigration related activities with the Union or with one or more Member States.
Amendment 458 #
Proposal for a regulation
Article 2 – paragraph 5 a (new)
5a. This Regulation shall not apply to AI systems, including their output, specifically developed and put into service for the sole purpose of scientific research and development.
Amendment 460 #
Proposal for a regulation
Article 2 – paragraph 5 b (new)
5b. This Regulation shall not affect any research and development activity regarding AI systems in so far as such activity does not lead to or entail placing an AI system on the market or putting it into service.
Amendment 461 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1
(1) ‘artificial intelligence system’ (AI system) means software that displays intelligent behaviour by analysing its environment and taking actions – with some degree of autonomy – to achieve specific goals, and which: (a) receives machine and/or human-based data and inputs; (b) infers how to achieve a given set of human-defined objectives using learning, reasoning or modelling implemented with the techniques and approaches listed in Annex I, and (c) generates outputs in the form of content (generative AI systems), predictions, recommendations, or decisions, which influence the environments it interacts with;
Amendment 477 #
Proposal for a regulation
Article 3 – paragraph 1 – point 12
(12) ‘intended purpose’ means the use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation; general purpose AI systems shall not be considered as having an intended purpose within the meaning of this Regulation;
Amendment 479 #
Proposal for a regulation
Article 3 – paragraph 1 – point 13
(13) ‘reasonably foreseeable misuse’ means the use of an AI system in a way that is not in accordance with its purpose as indicated in the instructions for use or technical specification, but which may result from reasonably foreseeable human behaviour or interaction with other systems;
Amendment 481 #
Proposal for a regulation
Article 3 – paragraph 1 – point 14
(14) ‘safety component of a product or system’ means a component of a product or of a system which fulfils a safety function for that product or system or the failure or malfunctioning of which endangers the health and safety of persons or property;
Amendment 488 #
Proposal for a regulation
Article 3 – paragraph 1 – point 36
(36) ‘remote biometric identification system’ means an AI system for the purpose of identifying natural persons at a physical distance through the comparison of a person’s biometric data with the biometric data contained in a reference database, and without prior knowledge of the user of the AI system whether the person will be present and can be identified;
Amendment 496 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 a (new)
(44a) 'critical infrastructure' means an asset, system or part thereof which is necessary for the delivery of a service that is essential for the maintenance of vital societal functions or economic activities within the meaning of Article 2(4) and (5) of Directive (…) on the resilience of critical entities;
Amendment 510 #
Proposal for a regulation
Article 5 – paragraph 1 – point a
(a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is intended to cause that person or another person physical or psychological harm;
Amendment 518 #
Proposal for a regulation
Article 5 – paragraph 1 – point c – point i
(i) preferential, detrimental or unfavourable treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected;
Amendment 520 #
Proposal for a regulation
Article 5 – paragraph 1 – point c – point ii
(ii) preferential, detrimental or unfavourable treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behaviour or its gravity;
Amendment 537 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point i
(i) the targeted search for specific potential victims of crime, including missing children;
Amendment 541 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point iii
Amendment 541 #
Proposal for a regulation
Recital 32
(32) As regards stand-alone AI systems, meaning high-risk AI systems other than those that are safety components of products, or which are themselves products, it is appropriate to classify them as high-risk if, in the light of their intended purpose, they pose a high risk of harm to the health, natural environment, and safety or the fundamental rights of persons, taking into account both the severity of the possible harm and its probability of occurrence and they are used in a number of specifically pre-defined areas specified in the Regulation. The identification of those systems is based on the same methodology and criteria envisaged also for any future amendments of the list of high-risk AI systems.
Amendment 544 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point iii a (new)
(iiia) searching for missing persons, especially those who are minors or have medical conditions that affect memory, communication, or independent decision-making skills;
Amendment 548 #
Proposal for a regulation
Recital 33
(33) Technical inaccuracies of AI systems intended for the remote biometric identification of natural persons can lead to biased results and entail discriminatory effects. This is particularly relevant when it comes to age, ethnicity, sex or disabilities. Therefore, ‘real-time’ and ‘post’ remote biometric identification systems should be classified as high-risk. In view of the risks that they may pose, both types of remote biometric identification systems should be subject to specific requirements on logging capabilities and, when appropriate and justified by a proven added value to the protection of health, safety and fundamental rights, human oversight.
Amendment 570 #
Proposal for a regulation
Article 6 – paragraph 1 – point a
(a) the AI system is intended to be used as a main safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex II;
Amendment 574 #
Proposal for a regulation
Recital 37
(37) Another area in which the use of AI systems deserves special consideration is the access to and enjoyment of certain essential private and public services and benefits necessary for people to fully participate in society or to improve one’s standard of living. In particular, AI systems used to evaluate the credit score or creditworthiness of natural persons should be classified as high-risk AI systems, since they determine those persons’ access to financial resources or essential services such as housing, electricity, and telecommunication services. AI systems used for this purpose may lead to discrimination of persons or groups and perpetuate historical patterns of discrimination, for example based on racial or ethnic origins, disabilities, age, sexual orientation, or create new forms of discriminatory impacts. Considering the very limited scale of the impact and the available alternatives on the market, it is appropriate to exempt AI systems for the purpose of creditworthiness assessment and credit scoring when put into service by small-scale providers for their own use. Due to the fact that AI systems related to low-value credits for the purchase of moveables do not cause high risk, it is proposed to exclude this category from the scope of the high-risk AI category as well. Natural persons applying for or receiving public assistance benefits and services from public authorities are typically dependent on those benefits and services and in a vulnerable position in relation to the responsible authorities. If AI systems are used for determining whether such benefits and services should be denied, reduced, revoked or reclaimed by authorities, they may have a significant impact on persons’ livelihood and may infringe their fundamental rights, such as the right to social protection, non-discrimination, human dignity or an effective remedy. Those systems should therefore be classified as high-risk. Nonetheless, this Regulation should not hamper the development and use of innovative approaches in the public administration, which would stand to benefit from a wider use of compliant and safe AI systems, provided that those systems do not entail a high risk to legal and natural persons. Finally, AI systems used to dispatch or establish priority in the dispatching of emergency first response services should also be classified as high-risk since they make decisions in very critical situations for the life and health of persons and their property.
Amendment 575 #
Proposal for a regulation
Article 6 – paragraph 2 a (new)
2a. The classification as high-risk as a consequence of Article 6(1) and 6(2) shall be disregarded for AI systems whose intended purpose demonstrates that the generated output is a recommendation requiring a human intervention to convert this recommendation into a decision, and for AI systems which do not lead to autonomous decisions or actions of the overall system.
Amendment 586 #
Proposal for a regulation
Article 7 – paragraph 1 – point b
(b) the AI systems pose a risk of harm to the health, natural environment and safety, or a risk of adverse impact on fundamental rights, that is, in respect of its severity and probability of occurrence, equivalent to or greater than the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III.
Amendment 589 #
Proposal for a regulation
Article 7 – paragraph 2 – introductory part
2. When assessing for the purposes of paragraph 1 whether an AI system poses a risk of harm to the health, natural environment and safety or a risk of adverse impact on fundamental rights that is equivalent to or greater than the risk of harm posed by the high-risk AI systems already referred to in Annex III, the Commission shall take into account the following criteria:
Amendment 593 #
Proposal for a regulation
Article 7 – paragraph 2 – point c
(c) the extent to which the use of an AI system has already caused harm to the health, natural environment and safety or adverse impact on the fundamental rights or has given rise to significant concerns in relation to the materialisation of such harm or adverse impact, as demonstrated by reports or documented allegations submitted to national competent authorities;
Amendment 597 #
Proposal for a regulation
Recital 40
(40) Certain AI systems intended for the administration of justice and democratic processes should be classified as high-risk, considering their potentially significant impact on democracy, rule of law, individual freedoms as well as the right to an effective remedy and to a fair trial. In particular, to address the risks of potential biases, errors and opacity, it is appropriate to qualify as high-risk AI systems intended to assist judicial authorities in researching and interpreting facts and the law. Such qualification should not extend, however, to AI systems intended for purely ancillary administrative activities that do not affect the actual administration of justice in individual cases, such as anonymisation or pseudonymisation of judicial decisions, documents or data, communication between personnel, administrative tasks or allocation of resources.
Amendment 606 #
Proposal for a regulation
Article 9 – paragraph 4 – introductory part
4. The risk management measures referred to in paragraph 2, point (d) shall be such that any residual risk associated with each hazard as well as the overall residual risk of the high-risk AI systems is judged acceptable, provided that the high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, subject to terms and conditions as made available by the provider, and contractual and license restrictions. Those residual risks shall be communicated to the user.
Amendment 616 #
Proposal for a regulation
Article 10 – paragraph 2 – introductory part
Article 10 – paragraph 2 – introductory part
2. Training, validation and testing data sets shall be subject to appropriate data governance and management practices. TWhere relevant to appropriate risk management measures, those practices shall concern in particular,
Amendment 617 #
Proposal for a regulation
Article 10 – paragraph 2 – point e
Article 10 – paragraph 2 – point e
(e) a priorn assessment of the availability, quantity and suitability of the data sets that are needed;
Amendment 618 #
Proposal for a regulation
Article 10 – paragraph 2 – point f
Article 10 – paragraph 2 – point f
(f) examination in view of possible biases, that are likely to affect health and safety of persons or lead to discrimination prohibited by Union law;
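By way of illustration only, the following minimal sketch (in Python, with invented records and group labels) shows one common screening heuristic for the kind of bias examination point (f) contemplates; the Regulation prescribes no particular method.

    # Illustrative only: screening a labelled data set for possible bias by
    # comparing positive-outcome rates across a protected attribute.
    # The records, group labels and outcomes are all hypothetical.
    from collections import defaultdict

    records = [
        {"group": "A", "outcome": 1}, {"group": "A", "outcome": 1},
        {"group": "A", "outcome": 0}, {"group": "B", "outcome": 1},
        {"group": "B", "outcome": 0}, {"group": "B", "outcome": 0},
    ]

    totals, positives = defaultdict(int), defaultdict(int)
    for row in records:
        totals[row["group"]] += 1
        positives[row["group"]] += row["outcome"]
    rates = {g: positives[g] / totals[g] for g in totals}

    # Flag any group whose rate falls below 80 % of the highest rate
    # (the common "four-fifths" screening heuristic).
    threshold = 0.8 * max(rates.values())
    flagged = [g for g, r in rates.items() if r < threshold]
    print(rates, "possible bias towards:", flagged)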
Amendment 621 #
Proposal for a regulation
Article 10 – paragraph 2 – point g
Article 10 – paragraph 2 – point g
(g) the identification of any possibleother data gaps or shortcomings that materially increase the risks of harm to the health, natural environment and safety or the fundamental rights of persons, and how those gaps and shortcomings can be addressed.
Amendment 627 #
Proposal for a regulation
Article 10 – paragraph 3
Article 10 – paragraph 3
3. Training, validation and testing data sets shall be relevant, sufficiently diverse to mitigate bias, and, to the best extent possible, representative, free of errors and complete. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used. These characteristics of the data sets may be met at the level of individual data sets or a combination thereof.
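Purely as an illustration of what ‘appropriate statistical properties’ might be screened for, the sketch below compares a training set's group shares against a hypothetical reference population; the figures are invented and this is one possible reading, not a method the text mandates.

    # Illustrative check of representativeness: does each group's share of
    # the training set roughly match its share of the reference population?
    dataset_counts = {"18-40": 620, "41-65": 290, "65+": 90}      # invented
    population_share = {"18-40": 0.45, "41-65": 0.35, "65+": 0.20}

    total = sum(dataset_counts.values())
    for group, count in dataset_counts.items():
        share = count / total
        gap = abs(share - population_share[group])
        status = "OK" if gap <= 0.05 else "over/under-represented"
        print(f"{group}: dataset {share:.0%} vs population "
              f"{population_share[group]:.0%} -> {status}")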
Amendment 629 #
Proposal for a regulation
Article 10 – paragraph 4
Article 10 – paragraph 4
4. Training, validation and testing data sets shall take into accountbe sufficiently diverse to accurately capture, to the extent required by the intended purpose, the characteristics or elements that are particular to the specific geographical, behavioural or functional setting within which the high- risk AI system is intended to be used.
Amendment 632 #
Proposal for a regulation
Article 10 – paragraph 5
Article 10 – paragraph 5
5. To the extent that it is strictly necessary for the purposes of ensuring bias monitoring, detection and correction in relation to the high-risk AI systems, the providers of such systems may process special categories of personal data referred to in Article 9(1) of Regulation (EU) 2016/679, Article 10 of Directive (EU) 2016/680 and Article 10(1) of Regulation (EU) 2018/1725, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons, including technical limitations on the re-use and use of state-of-the-art security and privacy- preserving measures, such as pseudonymisation, or encryption or biometric template protection technologies where anonymisation may significantly affect the purpose pursued.
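A minimal sketch, assuming hypothetical records, of how pseudonymisation, one of the safeguards this paragraph names, can precede bias monitoring: direct identifiers are replaced with a keyed hash so records remain linkable for analysis without exposing identities.

    # Illustrative pseudonymisation before bias monitoring. The salt is kept
    # apart from the analysis data; record fields are hypothetical.
    import hashlib, hmac, os

    SECRET_SALT = os.urandom(32)  # stored separately, under access control

    def pseudonymise(identifier: str) -> str:
        return hmac.new(SECRET_SALT, identifier.encode(), hashlib.sha256).hexdigest()

    record = {"name": "Jane Doe", "ethnicity": "X", "model_output": 0.72}
    safe_record = {
        "subject": pseudonymise(record["name"]),  # stable pseudonym, not the name
        "ethnicity": record["ethnicity"],
        "model_output": record["model_output"],
    }
    print(safe_record)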
Amendment 637 #
Proposal for a regulation
Article 12 – paragraph 2
Article 12 – paragraph 2
2. The logging capabilities shall ensure a level of traceability of the AI system’s functioning throughoutwhile the AI system is used within its lifecycle that is appropriate to the intended purpose of the system.
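For illustration, a minimal sketch of the sort of logging capability the amended paragraph points at: each use of the system appends a timestamped, structured record so its functioning can be traced afterwards. The field names and log path are assumptions, not requirements of the text.

    # Illustrative append-only event log for a high-risk AI system in use.
    import datetime, json

    LOG_PATH = "ai_system_events.log"  # hypothetical location

    def log_event(input_ref: str, output: str, model_version: str) -> None:
        entry = {
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "input_ref": input_ref,  # a reference, not the raw input itself
            "output": output,
            "model_version": model_version,
        }
        with open(LOG_PATH, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")

    log_event("case-0042", "recommendation: refer for manual review", "1.3.0")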
Amendment 640 #
Proposal for a regulation
Article 12 – paragraph 3 a (new)
Article 12 – paragraph 3 a (new)
3a. For records constituting trade secrets as defined in Article 2 of Directive (EU) 2016/943, the provider may elect to confidentially provide such trade secrets only to relevant public authorities to the extent necessary for such authorities to perform their obligations hereunder.
Amendment 641 #
Proposal for a regulation
Recital 48
Recital 48
(48) High-risk AI systems should be designed and developed in such a way that natural persons canmay, when appropriate, oversee their functioning. For this purpose, when it brings a proven added value to the protection of health, safety and fundamental rights, appropriate human oversight measures should be identified by the provider of the system before its placing on the market or putting into service. In particular, where appropriate, such measures should guarantee that the system is subject to in- built operational constraints that cannot be overridden by the system itself and is responsive to the human operator, and that the natural persons to whom human oversight has been assigned have the necessary competence, training and authority to carry out that role.
Amendment 648 #
Proposal for a regulation
Article 13 – paragraph 2
Article 13 – paragraph 2
2. High-risk AI systems shall be accompanied by instructions for use in an appropriate digital format or made otherwise available, that include concise, complete, correct and clear information that is reasonably relevant, accessible and comprehensible to users. to assist them in operating and maintaining the AI system, taking into consideration the system’s intended purpose and the expected audience for the instructions.
Amendment 650 #
Proposal for a regulation
Recital 51
Recital 51
(51) Cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts to alter their use, behaviour, performance or compromise their security properties by malicious third parties exploiting the system’s vulnerabilities. Cyberattacks against AI systems can leverage AI specific assets, such as training data sets (e.g. data poisoning) or trained models (e.g. adversarial attacks), or exploit vulnerabilities in the AI system’s digital assets or the underlying ICT infrastructure. To ensure a level of cybersecurity appropriate to the risks, suitable measures should therefore be taken by the providers of high-risk AI systems, as well as the notified bodies, competent national authorities and market surveillance authorities accessing the data of providers of high-risk AI systems, also taking into account as appropriate the underlying ICT infrastructure.
Amendment 654 #
Proposal for a regulation
Article 13 – paragraph 3 – point b – introductory part
Article 13 – paragraph 3 – point b – introductory part
(b) the characteristics, capabilities and limitations of performance of the high-risk AI system, that are relevant to the material risks associated with the intended purpose, including where appropriate, including:
Amendment 658 #
Proposal for a regulation
Article 13 – paragraph 3 – point b – point ii
Article 13 – paragraph 3 – point b – point ii
(ii) the level of accuracy, robustness and cybersecurity referred to in Article 15 against which the high-risk AI system has been tested and validated and which can be expected, and any known and reasonably foreseeable circumstances that may have ancould materially impact on that expected level of accuracy, robustness and cybersecurity;
Amendment 658 #
Proposal for a regulation
Recital 54
Recital 54
(54) The provider should establish a sound quality management system, ensure the accomplishment of the required conformity assessment procedure, draw up the relevant documentation in the language of the Member State concerned and establish a robust post-market monitoring system. All elements, from design to future development, must be transparent for the user. Public authorities which put into service high-risk AI systems for their own use may adopt and implement the rules for the quality management system as part of the quality management system adopted at a national or regional level, as appropriate, taking into account the specificities of the sector and the competences and organisation of the public authority in question.
Amendment 669 #
Proposal for a regulation
Article 13 – paragraph 3 – point e
Article 13 – paragraph 3 – point e
(e) the expected lifetime of the high- risk AI system, the description of the procedure of withdrawing it from use and any necessary maintenance and care measures to ensure the proper functioning of that AI system, including as regards software updates.
Amendment 681 #
Proposal for a regulation
Article 14 – paragraph 4 – introductory part
Article 14 – paragraph 4 – introductory part
4. The measures referred to in paragraph 3 shall enable the individuals to whom human oversight is assigned to do the following, as appropriate and proportionate to the circumstances:
Amendment 682 #
Proposal for a regulation
Article 14 – paragraph 4 – point a
Article 14 – paragraph 4 – point a
(a) fully understand the capacities and limitations of the high-risk AI system and be able to duly monitor its operation, so that signs of anomalies, dysfunctions and unexpected performance can be detected and addressed as soon as possible;
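As an illustration of one possible aid for point (a), the sketch below alerts the overseer when the system's recent output confidence drifts well below its tested baseline; the baseline, window size and threshold are invented.

    # Illustrative drift alert supporting human oversight: watch a rolling
    # window of prediction confidences and flag a sustained drop.
    from collections import deque

    BASELINE = 0.90          # confidence level established during testing
    window = deque(maxlen=50)

    def observe(confidence: float) -> None:
        window.append(confidence)
        if len(window) == window.maxlen:
            avg = sum(window) / len(window)
            if avg < BASELINE - 0.10:
                print(f"ALERT: mean confidence {avg:.2f} well below "
                      f"baseline {BASELINE:.2f}; human review advised")

    for i in range(60):              # simulated, slowly degrading stream
        observe(0.9 - i * 0.005)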
Amendment 690 #
Proposal for a regulation
Article 15 – paragraph 3 – introductory part
Article 15 – paragraph 3 – introductory part
3. HProviders and deployers should take all appropriate and feasible technical and organizational measures to ensure that high-risk AI systems shall bare resilient as regards errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, in particular due to their interaction with natural persons or other systems.
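Purely illustrative of a resilience measure in the spirit of this paragraph, the following sketch retries a transiently failing model once and then falls back to a safe default instead of failing silently; the model stub is hypothetical.

    # Illustrative fault handling around an unreliable inference call.
    import random

    def model_predict(x: float) -> str:
        if random.random() < 0.3:              # simulated transient fault
            raise RuntimeError("inference backend unavailable")
        return "approve" if x > 0.5 else "refer"

    def resilient_predict(x: float, retries: int = 1) -> str:
        for _ in range(retries + 1):
            try:
                return model_predict(x)
            except RuntimeError:
                continue
        return "refer to human operator"       # fail safe, never fail silent

    print(resilient_predict(0.7))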
Amendment 700 #
Proposal for a regulation
Recital 68
Recital 68
Amendment 711 #
Proposal for a regulation
Article 29 – paragraph 1 a (new)
Article 29 – paragraph 1 a (new)
1a. Users shall bear sole responsibility in case of any use of the AI system that is not in accordance with the instructions of use accompanying the systems.
Amendment 717 #
Proposal for a regulation
Recital 70 a (new)
Recital 70 a (new)
(70 a) Suppliers of general purpose AI systems and, as relevant, other third parties that may supply other software tools and components, including pre- trained models and data, should cooperate, as appropriate, with providers that use such systems or components for an intended purpose under this Regulation in order to enable their compliance with applicable obligations under this Regulation and their cooperation, as appropriate, with the competent authorities established under this Regulation. In such cases, the provider may, by written agreement, specify the information or other assistance that such supplier will furnish in order to enable the provider to comply with its obligations herein.
Amendment 719 #
Proposal for a regulation
Article 33 – paragraph 2 a (new)
Article 33 – paragraph 2 a (new)
2a. Notified bodies shall satisfy the minimum cybersecurity requirements set out for public administration entities identified as operators of essential services pursuant to Directive (…) on measures for a high common level of cybersecurity across the Union, repealing Directive (EU) 2016/1148;
Amendment 720 #
Proposal for a regulation
Article 33 – paragraph 6
Article 33 – paragraph 6
6. Notified bodies shall have documented procedures in place ensuring that their personnel, committees, subsidiaries, subcontractors and any associated body or personnel of external bodies respect the confidentiality of the information which comes into their possession during the performance of conformity assessment activities, except when disclosure is required by law. The staff of notified bodies shall be bound to observe professional secrecy with regard to all information obtained in carrying out their tasks under this Regulation, except in relation to the notifying authorities of the Member State in which their activities are carried out. Any information and documentation obtained by notified bodies pursuant to the provisions of this Article shall be treated in compliance with the confidentiality obligations set out in Article 70.
Amendment 723 #
Proposal for a regulation
Article 41
Article 41
Amendment 734 #
Proposal for a regulation
Recital 73 a (new)
Recital 73 a (new)
(73 a) AI solutions and services designed to combat fraud and protect consumers against fraudulent activities should not be considered high risk, nor prohibited. As a matter of substantial public interest, it is vital that this Regulation does not undermine the incentive of the industry to create and roll out solutions designed to combat fraud across the European Union.
Amendment 747 #
Proposal for a regulation
Article 52 – paragraph 1
Article 52 – paragraph 1
1. Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. This obligation shall not apply to AI systems authorised by law to detect, prevent, investigate and prosecute criminal offences, unless those systems are available for the public to report a criminal offence.
Amendment 754 #
Proposal for a regulation
Article 52 – paragraph 3 – introductory part
Article 52 – paragraph 3 – introductory part
3. Users of an AI system that generates or manipulates text, image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful (‘deep fake’), shall disclose that the content has been artificially generated or manipulated.
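A minimal sketch, with invented content and field names, of attaching the disclosure this paragraph requires to generated content before publication; real systems might instead watermark the media file itself.

    # Illustrative disclosure labelling for artificially generated content.
    DISCLOSURE = "This content has been artificially generated or manipulated."

    def publish(content: str, artificially_generated: bool) -> dict:
        item = {"body": content}
        if artificially_generated:
            item["disclosure"] = DISCLOSURE  # shown alongside the content
        return item

    print(publish("A photorealistic scene featuring a public figure.", True))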
Amendment 763 #
Proposal for a regulation
Article 52 a (new)
Article 52 a (new)
Article 52 a
General purpose AI systems
1. The placing on the market, putting into service or use of general purpose AI systems shall not, by themselves only, make those systems subject to the provisions of this Regulation.
2. Any person who places on the market or puts into service under its own name or trademark or uses a general purpose AI system made available on the market or put into service for an intended purpose that makes it subject to the provisions of this Regulation shall be considered the provider of the AI system subject to the provisions of this Regulation.
3. Paragraph 2 shall apply, mutatis mutandis, to any person who integrates a general purpose AI system made available on the market, with or without modifying it, into an AI system whose intended purpose makes it subject to the provisions of this Regulation.
4. The provisions of this Article shall apply irrespective of whether the general purpose AI system is open source software or not.
Amendment 766 #
Proposal for a regulation
Article 53 – paragraph 1
Article 53 – paragraph 1
1. AI regulatory sandboxes established by one or more Member States competent authorities or the European Data Protection Supervisor shall provide a controlled environment that facilitates the development, testing and validation of innovative AI systems for a limited time before their placement on the market or putting into service pursuant to a specific plan. This shall take place under the direct supervision and guidance by the competent authorities with a view to ensuring compliance with the requirements of this Regulation and, where relevant, other Union and Member States legislation supervised within the sandbox.
Amendment 770 #
Proposal for a regulation
Article 53 – paragraph 5
Article 53 – paragraph 5
5. Member States’ competent authorities that have established AI regulatory sandboxes shall coordinate their activities and cooperate within the framework of the European Artificial Intelligence Board. They shall submit annual reports to the Board and the Commission on the results from the implementation of those schemes, including good practices, lessons learnt and recommendations on their setup and, where relevant, on the application of this Regulation and other Union legislation supervised within the sandbox.
Amendment 797 #
Proposal for a regulation
Article 57 – paragraph 1
Article 57 – paragraph 1
1. The Board shall be composed of the national supervisory authorities, who shall be represented by the head or equivalent high-level official of that authority, and the European Data Protection Supervisor, AI ethics experts and industry representatives. Other national authorities may be invited to the meetings, where the issues discussed are of relevance for them.
Amendment 800 #
Proposal for a regulation
Article 57 – paragraph 3
Article 57 – paragraph 3
3. The Board shall be co-chaired by the Commission and a representative chosen from among the delegates of the Member States. The Commission shall convene the meetings and prepare the agenda in accordance with the tasks of the Board pursuant to this Regulation and with its rules of procedure. The Commission shall provide administrative and analytical support for the activities of the Board pursuant to this Regulation.
Amendment 824 #
Proposal for a regulation
Article 59 – paragraph 4 a (new)
Article 59 – paragraph 4 a (new)
4a. National competent authorities shall satisfy the minimum cybersecurity requirements set out for public administration entities identified as operators of essential services pursuant to Directive (…) on measures for a high common level of cybersecurity across the Union, repealing Directive (EU) 2016/1148.
Amendment 825 #
Proposal for a regulation
Article 59 – paragraph 4 b (new)
Article 59 – paragraph 4 b (new)
4b. Any information and documentation obtained by the national competent authorities pursuant to the provisions of this Article shall be treated in compliance with the confidentiality obligations set out in Article 70.
Amendment 828 #
Proposal for a regulation
Article 59 – paragraph 7
Article 59 – paragraph 7
7. National competent authorities may provide guidance and advice on the implementation of this Regulation, including to small-scale providers. Whenever national competent authorities intend to provide guidance and advice with regard to an AI system in areas covered by other Union legislation, the competent national authorities under that Union legislation shall be consulted, as appropriate. Member States mayshall also establish one central contact point for communication with operators. In addition, the central contact point of each Member State should be contactable through electronic communications means.
Amendment 836 #
Proposal for a regulation
Article 60 – paragraph 4 a (new)
Article 60 – paragraph 4 a (new)
4a. The EU database shall not contain any confidential business information or trade secrets of a natural or legal person, including source code.
Amendment 838 #
Proposal for a regulation
Article 60 – paragraph 5 a (new)
Article 60 – paragraph 5 a (new)
5a. Any information and documentation obtained by the Commission and Member States pursuant to the provisions of this Article shall be treated in compliance with the confidentiality obligations set out in Article 70.
Amendment 847 #
Proposal for a regulation
Article 64 – paragraph 1
Article 64 – paragraph 1
1. Access to data and documentation in the context of their activities, the market surveillance authorities shall be granted fulladequate access to the training, validation and testing datasets used by the provider, including through application programming interfaces (‘API’) or other appropriate technical means and tools enabling remote access, taking into account the scope of access agreed with the relevant data subjects or data holders.
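By way of illustration of access ‘through application programming interfaces’, the sketch below exposes a read-only endpoint that serves training-set records to a token-authenticated market surveillance authority; the token, path and data set are all hypothetical.

    # Illustrative read-only API giving an authority remote data set access.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    AUTHORITY_TOKEN = "example-token"  # issued out of band, hypothetical
    TRAINING_SAMPLE = [{"id": 1, "label": "approve"}, {"id": 2, "label": "refer"}]

    class DatasetAccess(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.headers.get("Authorization") != f"Bearer {AUTHORITY_TOKEN}":
                self.send_response(403)
                self.end_headers()
                return
            if self.path == "/v1/training-data":
                body = json.dumps(TRAINING_SAMPLE).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_response(404)
                self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), DatasetAccess).serve_forever()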
Amendment 848 #
Proposal for a regulation
Article 64 – paragraph 2
Article 64 – paragraph 2
2. Where necessary to assess the conformity of the high-risk AI system with the requirements set out in Title III, Chapter 2 and upon a reasoned request, the market surveillance authorities shall be granted access to the source code of the AI system. AI providers or deployers should support market surveillance authorities with the necessary facilities to carry out testing to confirm compliance.
Amendment 870 #
Proposal for a regulation
Article 2 – paragraph 3
Article 2 – paragraph 3
3. This Regulation shall not apply to AI systems designed, modified, developed or used exclusively for military purposes.
Amendment 878 #
Proposal for a regulation
Article 70 – paragraph 1 – introductory part
Article 70 – paragraph 1 – introductory part
1. National competent authorities, market surveillance authorities and notified bodies involved in the application of this Regulation shall respect the confidentiality of information and data obtained in carrying out their tasks and activities in such a manner as to protect, in particular:
Amendment 879 #
Proposal for a regulation
Article 70 – paragraph 1 a (new)
Article 70 – paragraph 1 a (new)
Amendment 880 #
Proposal for a regulation
Article 70 – paragraph 1 b (new)
Article 70 – paragraph 1 b (new)
1b. Information and data collected by national competent authorities, market surveillance authorities and notified bodies and referred to in paragraph 1 shall be:
(a) collected for specified, explicit and legitimate purposes and not further processed in a way incompatible with those purposes; further processing for archiving purposes in the public interest, for scientific or historical research purposes or for statistical purposes shall not be considered incompatible with the original purposes (‘purpose limitation’);
(b) adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed (‘data minimisation’);
Amendment 884 #
Proposal for a regulation
Article 2 – paragraph 4
Article 2 – paragraph 4
4. This Regulation shall not apply to public authorities in a third country nor to international organisations falling within the scope of this Regulation pursuant to paragraph 1, where those authorities or organisations use AI systems in the framework of international cooperation or agreements for law enforcement and judicial cooperation or in the context of border checks, asylum and immigration related activities with the Union or with one or more Member States.
Amendment 887 #
Proposal for a regulation
Article 2 – paragraph 5 a (new)
Article 2 – paragraph 5 a (new)
5 a. This Regulation shall not apply to AI systems, including their output, specifically developed or used exclusively for scientific research and development purposes.
Amendment 888 #
Proposal for a regulation
Article 71 – paragraph 1 a (new)
Article 71 – paragraph 1 a (new)
1a. In cases where administrative fines have been imposed under Article 83 of Regulation 2016/679, no further penalties shall be imposed on operators under the AI Act.
Amendment 890 #
Proposal for a regulation
Article 71 – paragraph 3 – introductory part
Article 71 – paragraph 3 – introductory part
3. The following infringements shall be subject to administrative fines of up to 3015 000 000 EUR or, if the offender is a company, up to 63 % of its total worldwide annual turnover for the preceding financial year, whichever is higher:.
Amendment 891 #
Proposal for a regulation
Article 71 – paragraph 4
Article 71 – paragraph 4
4. The non-compliance of the AI system with any requirements or obligations under this Regulation, other than those laid down in Articles 5 and 10, shall be subject to administrative fines of up to 210 000 000 EUR or, if the offender is a company, up to 42 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.
Amendment 894 #
Proposal for a regulation
Article 71 – paragraph 5
Article 71 – paragraph 5
5. The supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request shall be subject to administrative fines of up to 105 000 000 EUR or, if the offender is a company, up to 21 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.
Amendment 895 #
Proposal for a regulation
Article 2 – paragraph 5 b (new)
Article 2 – paragraph 5 b (new)
5 b. This Regulation shall not affect any research and development activity regarding AI systems in so far as such activity does not lead to placing an AI system on the market or putting it into service.
Amendment 896 #
Proposal for a regulation
Article 72 – paragraph 1 – point a
Article 72 – paragraph 1 – point a
(a) the nature, gravity and duration of the infringement and of its consequences, taking into account the number of subjects affected and the level of damage suffered by them;
Amendment 897 #
Proposal for a regulation
Article 72 – paragraph 1 – point a a (new)
Article 72 – paragraph 1 – point a a (new)
(aa) the intentional or negligent character of the infringement;
Amendment 898 #
Proposal for a regulation
Article 72 – paragraph 1 – point a b (new)
Article 72 – paragraph 1 – point a b (new)
(ab) any relevant previous infringement;
Amendment 900 #
Proposal for a regulation
Article 72 – paragraph 1 – point b a (new)
Article 72 – paragraph 1 – point b a (new)
Amendment 901 #
Proposal for a regulation
Article 72 – paragraph 1 – point b b (new)
Article 72 – paragraph 1 – point b b (new)
(bb) any action taken by the provider to mitigate the damage suffered by subjects;
Amendment 902 #
Proposal for a regulation
Article 72 – paragraph 1 – point c a (new)
Article 72 – paragraph 1 – point c a (new)
(ca) any other aggravating or mitigating factor applicable to the circumstances of the case, such as financial benefits gained, or losses avoided, directly or indirectly, from the infringement.
Amendment 905 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1
Article 3 – paragraph 1 – point 1
(1) ‘artificial intelligence system’ (AI system) means software that dis developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives,play intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals, which: (a) receives machine and/or human-based data and inputs; (b) infers how to achieve a given set of human-defined objectives using data- driven models created through learning or reasoning implemented with the techniques and approaches listed in Annex I, and (c) generates outputs such as content, in the form of content (generative AI systems), predictions, recommendations, or decisions, which influencinge the environments ithey interacts with;
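To make the amended definition's three elements concrete, the toy sketch below maps (a) machine-based inputs, (b) inference through a data-driven model and (c) an environment-influencing output onto a trivial example; it is a reading aid only, not an implementation the definition requires.

    # (b) a "model" learned from data: the mean of past positive examples
    past_examples = [(0.2, 0), (0.4, 0), (0.6, 1), (0.9, 1)]   # invented
    positives = [x for x, y in past_examples if y == 1]
    threshold = sum(positives) / len(positives)                # crude fit

    def ai_system(sensor_reading: float) -> str:
        # (a) receives a machine-based input; (c) returns a recommendation
        return "open valve" if sensor_reading > threshold else "keep closed"

    print(ai_system(0.8))  # the recommendation then influences the plant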
Amendment 914 #
Proposal for a regulation
Article 84 – paragraph 1
Article 84 – paragraph 1
Amendment 917 #
Proposal for a regulation
Article 84 – paragraph 1 a (new)
Article 84 – paragraph 1 a (new)
1a. The Commission shall assess the need for amendment of the list in Annex I every 24 months following the entry into force of this Regulation and until the end of the period of the delegation of power.
Amendment 918 #
Proposal for a regulation
Article 84 – paragraph 1 b (new)
Article 84 – paragraph 1 b (new)
1b. The Commission shall assess the need for amendment of the list in Annex III every 24 months following the entry into force of this Regulation and until the end of the period of the delegation of power. The findings of that assessment shall be presented to the European Parliament and the Council.
Amendment 919 #
Proposal for a regulation
Article 84 – paragraph 2
Article 84 – paragraph 2
2. By [threewo years after the date of application of this Regulation referred to in Article 85(2)] and every fourthree years thereafter, the Commission shall submit a report on the evaluation and review of this Regulation to the European Parliament and to the Council. The reports shall be made public.
Amendment 923 #
Proposal for a regulation
Annex I – point c
Annex I – point c
(c) Statistical approaches, Bayesian estimation, forecasting, search and optimization methods.
Amendment 927 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a
Annex III – paragraph 1 – point 1 – point a
(a) AI systems intended to be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons without their consent to being identified;
Amendment 930 #
Proposal for a regulation
Annex III – paragraph 1 – point 2 – point a
Annex III – paragraph 1 – point 2 – point a
(a) AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity, whose failure or malfunctioning would directly cause significant harm to the health, natural environment or safety of natural persons.
Amendment 932 #
Proposal for a regulation
Article 3 – paragraph 1 – point 2
Article 3 – paragraph 1 – point 2
(2) ‘provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing itand places that system on the market or puttings it into service under its own name or trademark, whether for payment or free of charge;
Amendment 939 #
Proposal for a regulation
Annex III – paragraph 1 – point 5 – point b
Annex III – paragraph 1 – point 5 – point b
(b) AI systems intended to be used to evaluate the creditworthiness of natural persons or, establish their credit score or assessment of insurance risk, with the exception of AI systems put into service by small scale providers for their own use;
Amendment 944 #
Proposal for a regulation
Article 3 – paragraph 1 – point 4
Article 3 – paragraph 1 – point 4
(4) ‘usdeployer’ means any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non- professional activity;
Amendment 950 #
Proposal for a regulation
Article 3 – paragraph 1 – point 4 a (new)
Article 3 – paragraph 1 – point 4 a (new)
(4 a) 'End-user' means any natural person who, in the framework of employment, contract or agreement with the deployer, uses the AI system under the authority of the deployer;
Amendment 961 #
Proposal for a regulation
Annex IV – paragraph 1 – point 2 – point a
Annex IV – paragraph 1 – point 2 – point a
(a) provided that no confidential information or trade secrets are disclosed, the methods and steps performed for the development of the AI system, including, where relevant, recourse to pre- trained systems or tools provided by third parties and how these have been used, integrated or modified by the provider;
Amendment 962 #
Proposal for a regulation
Annex IV – paragraph 1 – point 2 – point b
Annex IV – paragraph 1 – point 2 – point b
(b) provided that no confidential information or trade secrets are disclosed, the design specifications of the system, namely the general logic of the AI system and of the algorithms; the key design choices including the rationale and assumptions made, also with regard to persons or groups of persons on which the system is intended to be used; the main classification choices; what the system is designed to optimise for and the relevance of the different parameters; the decisions about any possible trade-off made regarding the technical solutions adopted to comply with the requirements set out in Title III, Chapter 2;
Amendment 966 #
Proposal for a regulation
Article 3 – paragraph 1 – point 12
Article 3 – paragraph 1 – point 12
(12) ‘intended purpose’ means the specific use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation; general purpose AI systems shall not be considered as having an intended purpose within the meaning of this Regulation;
Amendment 975 #
Proposal for a regulation
Article 3 – paragraph 1 – point 13
Article 3 – paragraph 1 – point 13
(13) ‘reasonably foreseeable misuse’ means the use of an AI system in a way that is not in accordance with its intended purposepurpose as indicated in the instructions for use or technical specification, but which may result from reasonably foreseeable human behaviour or interaction with other systems;
Amendment 1050 #
Proposal for a regulation
Article 3 – paragraph 1 – point 36
Article 3 – paragraph 1 – point 36
(36) ‘remote biometric identification system’ means an AI system for the purpose of identifying natural persons, at a physical distance through the comparison of a person’s biometric data with the biometric data contained in a reference database, repository, excluding verification/authentication systems whose sole purpose is to confirm that a specific natural person is the person he or she claims to be, and systems that are used to confirm the identity of a natural person for the sole purpose of having access to a service, a device or premises; and without prior knowledge of the user of the AI system whether the person will be present and can be identified;
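The distinction the definition draws can be illustrated as follows: 1:N identification searches a reference database for an unknown person, whereas the excluded 1:1 verification merely confirms a claimed identity. The embeddings and tolerance below are invented.

    # Illustrative contrast between identification (1:N) and verification (1:1).
    reference_db = {"alice": [0.1, 0.9], "bob": [0.8, 0.2]}  # hypothetical

    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def identify(probe):                   # 1:N, "who is this?" -- in scope
        return min(reference_db, key=lambda name: distance(reference_db[name], probe))

    def verify(probe, claimed, tol=0.3):   # 1:1, "is this really X?" -- excluded
        return distance(reference_db[claimed], probe) <= tol

    probe = [0.15, 0.85]
    print(identify(probe))           # searches the whole database
    print(verify(probe, "alice"))    # checks only the claimed identity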
Amendment 1101 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 a (new)
Article 3 – paragraph 1 – point 44 a (new)
(44 a) 'critical infrastructure' means an asset, system or part thereof which is necessary for the delivery of a service that is essential for the maintenance of vital societal functions or economic activities within the meaning of Article 2(4) and (5) of Directive (…) on the resilience of critical entities;
Amendment 1147 #
Proposal for a regulation
Article 4 a (new)
Article 4 a (new)
Article 4 a
Notification about the use of an AI system
1. Users of AI systems which affect natural persons, in particular by evaluating or assessing them, making predictions about them, recommending information, goods or services to them or determining or influencing their access to goods and services, shall inform the natural persons that they are subject to the use of such an AI system.
2. The information referred to in paragraph 1 shall include a clear and concise indication of the user and the purpose of the AI system, information about the rights of the natural person conferred under this Regulation, and a reference to a publicly available resource where more information about the AI system can be found, in particular the relevant entry in the EU database referred to in Article 60, if applicable.
3. This information shall be presented in a concise, intelligible and easily accessible form, including for persons with disabilities.
4. This obligation shall be without prejudice to other Union or Member State laws, in particular Regulation 2016/679, Directive 2016/680, Regulation 2022/XXX.
Amendment 1149 #
Proposal for a regulation
Article 4 b (new)
Article 4 b (new)
Article 4 b
Explanation of individual decision-making
1. A decision made by or with the assistance of a high risk AI system which produces legal effects concerning a person, or which similarly significantly affects that person, shall be accompanied by a meaningful, relevant explanation of at least:
(a) the role of the AI system in the decision-making process;
(b) the input data relating to the affected person, including the indication of his or her personal data on the basis of which the decision was made;
(c) for high-risk AI systems, the link to the entry in the EU database referred to in Article 60;
(d) the information about the person’s rights under this Regulation, including the right to lodge a complaint with the national supervisory authority.
For information on input data under point (b) to be meaningful it must include an easily understandable description of inferences drawn from other data.
2. Paragraph 1 shall not apply to the use of AI systems:
(a) that are authorised by law to detect, prevent, investigate and prosecute criminal offences or other unlawful behaviour under the conditions laid down in Article 3(41) and Article 52 of this Regulation, if not explaining the decision is necessary and proportionate for the detection, prevention, investigation and prosecution of a specific offence;
(b) for which exceptions from, or restrictions to, the obligation under paragraph 1 follow from Union or Member State law, which lays down appropriate other safeguards for the affected person’s rights and freedoms and legitimate interests.
3. The explanation within the meaning of paragraph 1 shall be provided at the time when the decision is communicated to the affected person and shall be provided in a clear, easily understandable and intelligible way, accessible for persons with disabilities.
4. If the affected person believes that the decision produced legal effects or similarly significantly affects him or her, but the deployer has not provided the explanation, he or she may request it. The deployer shall inform the affected person within 7 days about how it assessed the request and, if it is accepted, the explanation shall be provided without undue delay. If the request is refused, the deployer shall inform the affected person of the right to complain to the national supervisory authority.
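For illustration only, a sketch of the explanation record paragraph 1 of the proposed Article 4 b would call for, with its elements (a) to (d) as placeholder fields; all values, including the database URL, are hypothetical.

    # Illustrative explanation record accompanying an individual decision.
    def build_explanation(decision: str) -> dict:
        return {
            "decision": decision,
            "role_of_ai_system": "recommendation reviewed by a human case officer",  # (a)
            "input_data": {                                                           # (b)
                "declared_income": 32000,
                "inference": "income classed as stable, drawn from 12 monthly records",
            },
            "eu_database_entry": "https://example.europa.eu/ai-db/entry/12345",       # (c)
            "rights_notice": ("You may lodge a complaint with the national "          # (d)
                              "supervisory authority."),
        }

    print(build_explanation("loan application referred for manual review"))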
Amendment 1168 #
Proposal for a regulation
Article 5 – paragraph 1 – point a
Article 5 – paragraph 1 – point a
(a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner intended that causes or is likely to cause that person or another person physical or psychological harm;
Amendment 1207 #
Proposal for a regulation
Article 5 – paragraph 1 – point c – point i
Article 5 – paragraph 1 – point c – point i
(i) preferential, detrimental or unfavourable treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected;
Amendment 1218 #
Proposal for a regulation
Article 5 – paragraph 1 – point c – point ii
Article 5 – paragraph 1 – point c – point ii
(ii) preferential, detrimental or unfavourable treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behaviour or its gravity;
Amendment 1255 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point i
Article 5 – paragraph 1 – point d – point i
(i) the targeted search for specific potential victims of crime, including missing children;
Amendment 1271 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point iii
Article 5 – paragraph 1 – point d – point iii
Amendment 1275 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point iii
Article 5 – paragraph 1 – point d – point iii
Amendment 1282 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point iii a (new)
Article 5 – paragraph 1 – point d – point iii a (new)
(iii a) searching for missing persons, especially those who are minors or have medical conditions that affect memory, communication, or independent decision- making skills;
Amendment 1360 #
Proposal for a regulation
Article 5 – paragraph 2 – point b a (new)
Article 5 – paragraph 2 – point b a (new)
(b a) the full respect of fundamental rights and freedoms in conformity with Union values, the Universal Declaration of Human Rights, the European Convention on Human Rights and the Charter of Fundamental Rights of the EU.
Amendment 1389 #
Proposal for a regulation
Article 5 – paragraph 4
Article 5 – paragraph 4
4. A Member State may decide to provide for the possibility to fully or partially authorise the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement within the limits and under the conditions listed in paragraphs 1, point (d), 2 and 3. That Member State shall lay down in its national law the necessary detailed rules for the request, issuance and exercise of, as well as supervision relating to, the authorisations referred to in paragraph 3. Those rules shall alsofully comply with EU values, the Universal Declaration of Human Rights, the European Convention on Human Rights and the Charter of Fundamental Rights of the EU and shall specify in respect of which of the objectives listed in paragraph 1, point (d), including which of the criminal offences referred to in point (iii) thereof, the competent authorities may be authorised to use those systems for the purpose of law enforcement.
Amendment 1431 #
Proposal for a regulation
Article 6 – paragraph 1 – point b
Article 6 – paragraph 1 – point b
(b) the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment related to safety with a view to the placing on the market or putting into service of that product pursuant to the Union harmonisation legislation listed in Annex II.
Amendment 1441 #
Proposal for a regulation
Article 6 – paragraph 2
Article 6 – paragraph 2
2. In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall also be considered high-risk, if they pose a risk of harm to either physical health and safety or human rights, or both.
Amendment 1444 #
Proposal for a regulation
Article 6 – paragraph 2 a (new)
Article 6 – paragraph 2 a (new)
2 a. The classification as high-risk as a consequence of Article 6(1) and 6(2) shall be disregarded for AI systems whose intended purpose demonstrates that the generated output is a recommendation requiring a human intervention to convert this recommendation into a decision and for AI systems which do not lead to autonomous decisions or actions of the overall system.
Amendment 1451 #
Proposal for a regulation
Article 6 – paragraph 2 b (new)
Article 6 – paragraph 2 b (new)
2 b. When assessing an AI system for the purposes of paragraph 1 of Article 6, a safety component shall be assessed against the essential health and safety requirements of the relevant EU harmonisation legislation listed in Annex II.
Amendment 1483 #
Proposal for a regulation
Article 7 – paragraph 1 – point b
Article 7 – paragraph 1 – point b
(b) the AI systems pose a risk of harm to the health, natural environment and safety, or a risk of adverse impact on fundamental rights, that is, in respect of its severity and probability of occurrence, equivalent to or greater than the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III.
Amendment 1492 #
Proposal for a regulation
Article 7 – paragraph 2 – introductory part
Article 7 – paragraph 2 – introductory part
2. When assessing for the purposes of paragraph 1 whether an AI system poses a risk of harm to the health, natural environment and safety or a risk of adverse impact on fundamental rights that is equivalent to or greater than the risk of harm posed by the high-risk AI systems already referred to in Annex III, the Commission shall take into account the following criteria:
Amendment 1500 #
Proposal for a regulation
Article 7 – paragraph 2 – point b
Article 7 – paragraph 2 – point b
(b) the extent to which an AI system has been used or is likely to be used and misused;
Amendment 1509 #
Proposal for a regulation
Article 7 – paragraph 2 – point c
Article 7 – paragraph 2 – point c
(c) the extent to which the use of an AI system has already caused harm to the health, natural environment and safety or adverse impact on the fundamental rights or has given rise to significant concerns in relation to the materialisation of such harm or adverse impact, as demonstrated by reports or documented allegations submitted to national competent authorities;
Amendment 1607 #
Proposal for a regulation
Article 9 – paragraph 4 – introductory part
Article 9 – paragraph 4 – introductory part
4. The risk management measures referred to in paragraph 2, point (d) shall be such that anythe overall residual risk associated with each hazard as well as the overall residual risk ofof the high-risk AI systems is reasonably judged to be acceptable, having regard to the benefits that the high-risk AI systems is judged acceptablereasonably expected to deliver and, provided that the high- risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, subject to terms, conditions as made available by the provider, and contractual and license restrictions. Those residual risks shall be communicated to the user.
Amendment 1617 #
Proposal for a regulation
Article 9 – paragraph 4 – subparagraph 1 – introductory part
Article 9 – paragraph 4 – subparagraph 1 – introductory part
In identifying the most appropriate risk management measures, the following outcomes shall be ensurpursued:
Amendment 1620 #
Proposal for a regulation
Article 9 – paragraph 4 – subparagraph 1 – point a
Article 9 – paragraph 4 – subparagraph 1 – point a
(a) elimination or reduction of risks as far as possible through adequcommercially reasonable and technologically feasible in light of the generally acknowledged state of the art, through appropriate design and development measures;
Amendment 1626 #
Proposal for a regulation
Article 9 – paragraph 4 – subparagraph 1 – point c
Article 9 – paragraph 4 – subparagraph 1 – point c
(c) provision of adequate information pursuant to Article 13, in particular as regards the risks referred to in paragraph 2, point (b) of this Article, and, where appropriate, training to usersand relevant information on necessary competence training and authority for natural persons exercising such oversight.
Amendment 1635 #
Proposal for a regulation
Article 9 – paragraph 4 – subparagraph 2
Article 9 – paragraph 4 – subparagraph 2
In seeking to eliminatinge or reducinge risks related to the use of the high-risk AI system, due consideration shall be given to the technical knowledge, experience, education, training to be expected by the user and the environment in which the system is intended to be used.
Amendment 1640 #
Proposal for a regulation
Article 9 – paragraph 5
Article 9 – paragraph 5
5. High-risk AI systems shall be tested for the purposes of identifying the most appropriate risk management measures for the specific scenario in which the system will be operating and to ensure that a system is performing appropriately for a given use case. Testing shall ensure that high-risk AI systems perform in a manner that is consistently for with their intended purpose and they are in compliance with the requirements set out in this Chapter.
Amendment 1658 #
Proposal for a regulation
Article 9 – paragraph 7
Article 9 – paragraph 7
7. The testing of the high-risk AI systems shall be performed, as appropriate, at any point in time throughout the development process, and, in any event, prior to the placing on the market or the putting into service. Testing shall be made against preliminarily defined metrics and probabilistic thresholdrubrics that are appropriate to the intended purpose of the high-risk AI system.
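Illustratively, testing against preliminarily defined metrics might take the shape of a release gate like the sketch below; the metric names and limits are invented, not drawn from the Regulation.

    # Illustrative pre-release gate over defined metrics and thresholds.
    test_results = {"accuracy": 0.94, "false_positive_rate": 0.03, "latency_ms": 120}
    release_criteria = {
        "accuracy": (">=", 0.90),
        "false_positive_rate": ("<=", 0.05),
        "latency_ms": ("<=", 200),
    }

    def gate(results: dict, criteria: dict) -> bool:
        passed = True
        for metric, (op, limit) in criteria.items():
            value = results[metric]
            ok = value >= limit if op == ">=" else value <= limit
            print(f"{metric}: {value} {op} {limit} -> {'pass' if ok else 'FAIL'}")
            passed = passed and ok
        return passed

    print("release permitted" if gate(test_results, release_criteria) else "blocked")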
Amendment 1668 #
Proposal for a regulation
Article 9 – paragraph 9
Article 9 – paragraph 9
9. For credit institutions regulated by Directive 2013/36/EUAI systems already covered by Union law that requires a specific risk assessment, the aspects described in paragraphs 1 to 8 shall be part ofmay be incorporated into theat risk management procedures established by those institutions pursuant to Article 74 of that Directivassessment, without the need to conduct a separate, additional risk assessment in order to comply with this Article.
Amendment 1677 #
Proposal for a regulation
Article 10 – paragraph 1
Article 10 – paragraph 1
1. High-risk AI systems which make use of techniques involving the training of models with data shall be developed on the basis of training, validation and testing data sets that meet the quality and fairness criteria referred to in paragraphs 2 to 5.
Amendment 1682 #
Proposal for a regulation
Article 10 – paragraph 2 – introductory part
Article 10 – paragraph 2 – introductory part
2. Training, validation and testing data sets shall be subject to appropriate data governance and management practices. T for the entire lifecycle of data processing. Where relevant to appropriate risk management measures, those practices shall concern in particular,
Amendment 1697 #
Proposal for a regulation
Article 10 – paragraph 2 – point e
Article 10 – paragraph 2 – point e
(e) a priorn assessment of the availability, quantity and suitability of the data sets that are needed;
Amendment 1700 #
Proposal for a regulation
Article 10 – paragraph 2 – point f
Article 10 – paragraph 2 – point f
(f) examination in view of possible biases, that are likely to affect health and safety of persons or lead to discrimination prohibited by Union law;
Amendment 1704 #
Proposal for a regulation
Article 10 – paragraph 2 – point g
Article 10 – paragraph 2 – point g
(g) the identification of any possibleother data gaps or shortcomings that materially increase the risks of harm to the health, natural environment and safety or the fundamental rights of persons, and how those gaps and shortcomings can be addressed.
Amendment 1720 #
Proposal for a regulation
Article 10 – paragraph 3
Article 10 – paragraph 3
3. Training, validation and testing data sets shall be relevant, sufficiently diverse to mitigate bias, and, to the best extent possible, representative, free of errors and complete. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used. These characteristics of the data sets may be met at the level of individual data sets or a combination thereof.
Amendment 1731 #
Proposal for a regulation
Article 10 – paragraph 4
Article 10 – paragraph 4
4. Training, validation and testing data sets shall take into accountbe sufficiently diverse to accurately capture, to the extent required by the intended purpose, the characteristics or elements that are particular to the specific geographical, behavioural or functional setting within which the high- risk AI system is intended to be used.
Amendment 1740 #
Proposal for a regulation
Article 10 – paragraph 5
Article 10 – paragraph 5
5. To the extent that it is strictly necessary for the purposes of ensuring bias monitoring, detection and correction in relation to the high-risk AI systems, the providers of such systems may process special categories of personal data referred to in Article 9(1) of Regulation (EU) 2016/679, Article 10 of Directive (EU) 2016/680 and Article 10(1) of Regulation (EU) 2018/1725, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons, including technical limitations on the re-use and use of state-of-the-art security and privacy- preserving measures, such as pseudonymisation, or encryption or biometric template protection technologies where anonymisation may significantly affect the purpose pursued.
Amendment 1769 #
Proposal for a regulation
Article 12 – paragraph 1
Article 12 – paragraph 1
1. High-risk AI systems shall be designed and developed with capabilities enabling the automatic recording of events (‘logs’) while the high-risk AI systems ispropriate technical and organizational measures to enable effective monitoring and human opverating. Those logging capabilities shall conform to recognised standards or common specificationsight by those using the system as well as effective supervision by regulators.
Amendment 1775 #
Proposal for a regulation
Article 12 – paragraph 2
Article 12 – paragraph 2
2. The logging capabilities shall ensure a level of traceability of the AI system’s functioning throughoutwhile the AI system is used within its lifecycle that is appropriate to the intended purpose of the system.
Amendment 1777 #
Proposal for a regulation
Article 12 – paragraph 3 a (new)
Article 12 – paragraph 3 a (new)
3 a. For records constituting trade secrets as defined in Article 2 of Directive (EU) 2016/943, the provider may elect to confidentially provide such trade secrets only to relevant public authorities to the extent necessary for such authorities to perform their obligations hereunder.
Amendment 1878 #
Proposal for a regulation
Article 16 – paragraph 1 – point a
Article 16 – paragraph 1 – point a
(a) ensure that their high-risk AI systems are compliant with the requirements set out in Chapter 2 of this Title before placing them on the market or putting them into service, and shall be responsible for compliance of these systems after that point only to the extent that they exercise actual control over relevant aspects of the system;
Amendment 1908 #
Proposal for a regulation
Article 16 – paragraph 1 a (new)
Article 16 – paragraph 1 a (new)
The obligations contained in paragraph 1 shall be without prejudice to obligations applicable to providers of high-risk AI systems arising from Regulation (EU) 2016/679 of the European Parliament and of the Council and Regulation (EU) 2018/1725 of the European Parliament and of the Council.
Amendment 2026 #
Proposal for a regulation
Article 28 – paragraph 1 – introductory part
Article 28 – paragraph 1 – introductory part
1. Any distributor, importer, user or other third-party shall be considered a provider of a high-risk AI system for the purposes of this Regulation and shall be subject to the obligations of the provider under Article 16, in any of the following circumstances:
Amendment 2031 #
Proposal for a regulation
Article 28 – paragraph 1 – point c a (new)
Article 28 – paragraph 1 – point c a (new)
(c a) they modify the intended purpose of an AI system which is not high-risk and is already placed on the market or put into service, in a way which makes the modified system a high-risk AI system.
Amendment 2041 #
Proposal for a regulation
Article 29 – paragraph 1
Article 29 – paragraph 1
1. Users of high-risk AI systems shall use such systemsshall bear sole responsibility in case of any use of the AI system that is not in accordance with the instructions of use accompanying the systems, pursuant to paragraphs 2 and 5.
Amendment 2069 #
Proposal for a regulation
Article 29 – paragraph 6 a (new)
Article 29 – paragraph 6 a (new)
6 a. Users of high risk systems involving an emotion recognition system or a biometric categorisation system in accordance with Article 52 shall implement suitable measures to safeguard the natural person's rights and freedoms and legitimate interests in such a system, including providing the natural person with the ability to express his or her point of view on the resulting categorisation and to contest the decision.
Amendment 2070 #
Proposal for a regulation
Article 29 – paragraph 6 a (new)
Article 29 – paragraph 6 a (new)
6 a. Users shall monitor the performance of high-risk AI systems deployed by end-users and shall ensure that all possible malfunctioning and performance issues are recorded, and when not able to justify or ensure proper performance, communicated to the AI provider. In such cases, the provider and the user shall coordinate to establish the cause of a possible malfunctioning or performance issue.
Amendment 2081 #
Proposal for a regulation
Article 29 a (new)
Article 29 a (new)
Amendment 2101 #
Proposal for a regulation
Article 33 – paragraph 2
Article 33 – paragraph 2
2. Notified bodies shall satisfy the organisational, quality management, resources and process requirememinimum cybersecurity requirements set out for public administration entities identified as operators of essential services pursuants that are necessary to fulfil their tasks.o Directive (…) on measures for a high common level of cybersecurity across the Union, repealing Directive (EU) 2016/1148;
Amendment 2105 #
Proposal for a regulation
Article 33 – paragraph 6
Article 33 – paragraph 6
6. Notified bodies shall have documented procedures in place ensuring that their personnel, committees, subsidiaries, subcontractors and any associated body or personnel of external bodies respect the confidentiality of the information which comes into their possession during the performance of conformity assessment activities, except when disclosure is required by law. The staff of notified bodies shall be bound to observe professional secrecy with regard to all information obtained in carrying out their tasks under this Regulation, except in relation to the notifying authorities of the Member State in which their activities are carried out. Any information and documentation obtained by notified bodies pursuant to the provisions of this Article shall be treated in compliance with the confidentiality obligations set out in Article 70.
Amendment 2129 #
Proposal for a regulation
Article 41
Article 41
Amendment 2254 #
Proposal for a regulation
Article 51 – paragraph 1 a (new)
Article 51 – paragraph 1 a (new)
Before using an AI system, public authorities shall register the uses of that system in the EU database referred to in Article 60. A new registration entry must be completed by the user for each use of an AI system.
Amendment 2284 #
Proposal for a regulation
Article 52 a (new)
Article 52 a (new)
Article 52 a
General purpose AI systems
1. The placing on the market, putting into service or use of general purpose AI systems shall not, by themselves only, make those systems subject to the provisions of this Regulation.
2. Any person who places on the market or puts into service under its own name or trademark or uses a general purpose AI system made available on the market or put into service for an intended purpose that makes it subject to the provisions of this Regulation shall be considered the provider of the AI system subject to the provisions of this Regulation.
3. Paragraph 2 shall apply, mutatis mutandis, to any person who integrates a general purpose AI system made available on the market, with or without modifying it, into an AI system whose intended purpose makes it subject to the provisions of this Regulation.
4. The provisions of this Article shall apply irrespective of whether the general purpose AI system is open source software or not.
Amendment 2297 #
Proposal for a regulation
Article 53 – paragraph 1
Article 53 – paragraph 1
1. AI regulatory sandboxes established by one or more Member States' competent authorities or the European Data Protection Supervisor shall provide a controlled environment that facilitates the development, testing and validation of innovative AI systems for a limited time before their placement on the market or putting into service pursuant to a specific plan. This shall take place under the direct supervision and guidance of the competent authorities with a view to ensuring compliance with the requirements of this Regulation and, where relevant, other Union and Member States legislation supervised within the sandbox.
Amendment 2332 #
Proposal for a regulation
Article 53 – paragraph 5
Article 53 – paragraph 5
5. Member States’ competent authorities that have established AI regulatory sandboxes shall coordinate their activities and cooperate within the framework of the European Artificial Intelligence Board. They shall submit annual reports to the Board and the Commission on the results from the implementation of those schemes, including good practices, lessons learnt and recommendations on their setup and, where relevant, on the application of this Regulation and other Union legislation supervised within the sandbox.
Amendment 2434 #
Proposal for a regulation
Article 57 – paragraph 1
Article 57 – paragraph 1
1. The Board shall be composed of the national supervisory authorities, who shall be represented by the head or equivalent high-level official of that authority, and the European Data Protection Supervisor, AI ethics experts and industry representatives. Other national authorities may be invited to the meetings, where the issues discussed are of relevance for them.
Amendment 2453 #
Proposal for a regulation
Article 57 – paragraph 3
Article 57 – paragraph 3
3. The Board shall be co-chaired by the Commission and a representative chosen from among the delegates of the Member States. The Commission shall convene the meetings and prepare the agenda in accordance with the tasks of the Board pursuant to this Regulation and with its rules of procedure. The Commission shall provide administrative and analytical support for the activities of the Board pursuant to this Regulation.
Amendment 2574 #
Proposal for a regulation
Article 59 – paragraph 4 a (new)
Article 59 – paragraph 4 a (new)
4 a. National competent authorities shall satisfy the minimum cybersecurity requirements set out for public administration entities identified as operators of essential services pursuant to Directive (…) on measures for a high common level of cybersecurity across the Union, repealing Directive (EU) 2016/1148.
Amendment 2575 #
Proposal for a regulation
Article 59 – paragraph 4 b (new)
Article 59 – paragraph 4 b (new)
4 b. Any information and documentation obtained by the national competent authorities pursuant to the provisions of this Article shall be treated in compliance with the confidentiality obligations set out in Article 70.
Amendment 2587 #
Proposal for a regulation
Article 59 – paragraph 7
Article 59 – paragraph 7
7. National competent authorities may provide guidance and advice on the implementation of this Regulation, including to small-scale providers. Whenever national competent authorities intend to provide guidance and advice with regard to an AI system in areas covered by other Union legislation, the competent national authorities under that Union legislation shall be consulted, as appropriate. Member States shall also establish one central contact point for communication with operators. In addition, the central contact point of each Member State should be contactable through electronic communications means.
Amendment 2630 #
Proposal for a regulation
Article 60 – paragraph 4 a (new)
Article 60 – paragraph 4 a (new)
4 a. The EU database shall not contain any confidential business information or trade secrets of a natural or legal person, including source code.
Amendment 2635 #
Proposal for a regulation
Article 60 – paragraph 5 a (new)
Article 60 – paragraph 5 a (new)
5 a. Any information and documentation obtained by the Commission and Member States pursuant to the provisions of this Article shall be treated in compliance with the confidentiality obligations set out in Article 70.
Amendment 2646 #
Proposal for a regulation
Article 61 – paragraph 2
Article 61 – paragraph 2
2. The post-market monitoring system shall actively and systematically collect, document and analyse relevant data provided by users and end-users or collected through other sources on the performance of high-risk AI systems throughout their lifetime, and allow the provider to evaluate the continuous compliance of AI systems with the requirements set out in Title III, Chapter 2.
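As a purely illustrative sketch of the monitoring duty described in this paragraph, a provider's collection-and-evaluation loop could take the following shape. All identifiers (PostMarketMonitor, collect, compliant) and the accuracy metric are assumptions for the example, not requirements of the Regulation.

```python
# Hypothetical provider-side post-market monitor: systematically collects
# performance data from users, end-users and other sources over the system's
# lifetime and evaluates continuing compliance against a chosen threshold.
from statistics import mean

class PostMarketMonitor:
    def __init__(self, compliance_threshold: float = 0.95):
        self.compliance_threshold = compliance_threshold
        self.records: list[dict] = []

    def collect(self, source: str, accuracy: float) -> None:
        """Document each performance observation together with its source."""
        self.records.append({"source": source, "accuracy": accuracy})

    def compliant(self) -> bool:
        """Evaluate continuous compliance with the Title III, Chapter 2 requirements."""
        if not self.records:
            return False
        return mean(r["accuracy"] for r in self.records) >= self.compliance_threshold
```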
Amendment 2681 #
Proposal for a regulation
Article 64 – paragraph 1
Article 64 – paragraph 1
1. Access to data and documentation in the context of their activities, the market surveillance authorities shall be granted sufficient access to the training, validation and testing datasets used by the provider, including through application programming interfaces (‘API’) or other appropriate technical means and tools enabling remote access, taking into account the scope of access agreed with the relevant data subjects or data holders.
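For illustration, the scoped remote access this paragraph contemplates could be exposed along the following lines. The function, the dataset splits and the agreed-scope table are hypothetical; the Regulation prescribes no particular API.

```python
# Hypothetical provider endpoint granting a market surveillance authority
# remote access to dataset splits, limited to the scope agreed with the
# relevant data subjects or data holders.
AGREED_SCOPES = {"surveillance-authority-x": {"training", "validation"}}

DATASETS = {
    "training": ["record-1", "record-2"],
    "validation": ["record-3"],
    "testing": ["record-4"],
}

def get_dataset(requester: str, split: str) -> list[str]:
    """Grant sufficient access, taking the agreed scope into account."""
    if split not in DATASETS:
        raise KeyError(f"unknown dataset split: {split}")
    if split not in AGREED_SCOPES.get(requester, set()):
        raise PermissionError("access to this split was not agreed with the data holders")
    return DATASETS[split]
```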
Amendment 2691 #
Proposal for a regulation
Article 64 – paragraph 2
Article 64 – paragraph 2
2. Where necessary to assess the conformity of the high-risk AI system with the requirements set out in Title III, Chapter 2 and upon a reasoned request, the market surveillance authorities shall be granted access to the source code of the AI system. AI providers or deployers shall support market surveillance authorities with the necessary facilities to carry out testing to confirm compliance.
Amendment 2805 #
Proposal for a regulation
Article 70 – paragraph 1 a (new)
Article 70 – paragraph 1 a (new)
1 a. Where the activities of national competent authorities and bodies notified under the provisions of this Article infringe intellectual property rights, Member States shall provide for the measures, procedures and remedies necessary to ensure the enforcement of intellectual property rights in full application of Directive 2004/48/EC on the enforcement of intellectual property rights.
Amendment 2807 #
Proposal for a regulation
Article 70 – paragraph 1 b (new)
Article 70 – paragraph 1 b (new)
1 b. Information and data collected by national competent authorities and notified bodies and referred to in paragraph 1 shall be:
a) collected for specified, explicit and legitimate purposes and not further processed in a way incompatible with those purposes; further processing for archiving purposes in the public interest, for scientific or historical research purposes or for statistical purposes shall not be considered incompatible with the original purposes (‘purpose limitation’);
b) adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed (‘data minimisation’);
Amendment 2822 #
Proposal for a regulation
Article 71 – paragraph 1 a (new)
Article 71 – paragraph 1 a (new)
1 a. In cases where administrative fines have been imposed under Article 83 of Regulation (EU) 2016/679, no further penalties shall be imposed on operators under this Regulation.
Amendment 2887 #
Proposal for a regulation
Article 72 – paragraph 1 – point a a (new)
Article 72 – paragraph 1 – point a a (new)
(a a) the intentional or negligent character of the infringement;
Amendment 2888 #
Proposal for a regulation
Article 72 – paragraph 1 – point a b (new)
Article 72 – paragraph 1 – point a b (new)
(a b) any relevant previous infringement;
Amendment 2890 #
Proposal for a regulation
Article 72 – paragraph 1 – point b a (new)
Article 72 – paragraph 1 – point b a (new)
(b a) the degree of cooperation with the supervisory authority, in order to remedy the infringement and mitigate the possible adverse effects of the infringement;
Amendment 2891 #
Proposal for a regulation
Article 72 – paragraph 1 – point b b (new)
Article 72 – paragraph 1 – point b b (new)
(b b) any action taken by the provider to mitigate the damage suffered by subjects;
Amendment 2893 #
Proposal for a regulation
Article 72 – paragraph 1 – point c a (new)
Article 72 – paragraph 1 – point c a (new)
(c a) any other aggravating or mitigating factor applicable to the circumstances of the case, such as financial benefits gained, or losses avoided, directly or indirectly, from the infringement.
Amendment 2933 #
Proposal for a regulation
Article 80 – paragraph 1 – introductory part
Article 80 – paragraph 1 – introductory part
In Article 5 of Regulation (EU) 2018/858 the following paragraphs are added:
Amendment 2935 #
Proposal for a regulation
Article 80 – paragraph 1
Article 80 – paragraph 1
Regulation (EU) 2018/858
Article 5
Article 5
4 a. The Commission shall, prior to fulfilling the obligation pursuant to paragraph 4, provide a reasonable explanation based on a gap analysis of existing sectoral legislation in the automotive sector to determine the existence of potential gaps relating to Artificial Intelligence therein, and consult relevant stakeholders, in order to avoid duplications and overregulation, in line with the Better Regulation principles.
Amendment 2939 #
Proposal for a regulation
Article 82 – paragraph 1 – introductory part
Article 82 – paragraph 1 – introductory part
In Article 11 of Regulation (EU) 2019/2144, the following paragraphs are added:
Amendment 2940 #
Proposal for a regulation
Article 82 – paragraph 1
Article 82 – paragraph 1
Regulation (EU) 2019/2144
Article 11
Article 11
3 a. The Commission shall, prior to fulfilling the obligation pursuant to paragraph 3, provide a reasonable explanation based on a gap analysis of existing sectoral legislation in the automotive sector to determine the existence of potential gaps relating to Artificial Intelligence therein, and consult relevant stakeholders, in order to avoid duplications and overregulation, in line with the Better Regulation principles.
Amendment 2966 #
Proposal for a regulation
Article 84 – paragraph 1
Article 84 – paragraph 1
1. The Commission shall assess the need for amendment of the list in Annex III once every 24 months following the entry into force of this Regulation and until the end of the period of the delegation of power. The findings of that assessment shall be presented to the European Parliament and the Council.
Amendment 2973 #
Proposal for a regulation
Article 84 – paragraph 2
Article 84 – paragraph 2
2. By [two years after the date of application of this Regulation referred to in Article 85(2)] and every three years thereafter, the Commission shall submit a report on the evaluation and review of this Regulation to the European Parliament and to the Council. The reports shall be made public.
Amendment 3018 #
Proposal for a regulation
Annex I – point b
Annex I – point b
(b) Other data-driven approaches, including search and optimization methods;
Amendment 3025 #
Proposal for a regulation
Annex I – point c
Annex I – point c
(c) Statistical approaches, Bayesian estimation and search, if they are used to extract decisions from data in an automated way.
Amendment 3051 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – introductory part
Annex III – paragraph 1 – point 1 – introductory part
1. Biometric systems:
Amendment 3063 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a
Annex III – paragraph 1 – point 1 – point a
(a) AI biometric identification systems intended to be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons without their agreement;
Amendment 3090 #
Proposal for a regulation
Annex III – paragraph 1 – point 2 – point a
Annex III – paragraph 1 – point 2 – point a
(a) AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity, whose failure or malfunctioning would directly cause significant harm to the health, natural environment or safety of natural persons.
Amendment 3113 #
Proposal for a regulation
Annex III – paragraph 1 – point 4 – point b
Annex III – paragraph 1 – point 4 – point b
(b) AI systems intended to be used to make decisions on promotion and termination of work-related contractual relationships, based on individual behaviour or personal traits or characteristics, and for monitoring and evaluating the performance and behaviour of persons in such relationships, where such systems have a likelihood of causing harm to physical health and safety or an adverse impact on fundamental rights, or have given rise to significant concerns in relation to the materialisation of such harm or adverse impact.
Amendment 3130 #
Proposal for a regulation
Annex III – paragraph 1 – point 5 – point b
Annex III – paragraph 1 – point 5 – point b
(b) AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score or their assessment of insurance risk, with the exception of AI systems put into service by small-scale providers for their own use or AI systems related to low-value credits for the purchase of moveables;
Amendment 3144 #
Proposal for a regulation
Annex III – paragraph 1 – point 5 – point c a (new)
Annex III – paragraph 1 – point 5 – point c a (new)
(c a) AI systems intended to be used for insurance premium setting, underwriting and claims assessments, with the exception of AI systems related to low-value property insurance.
Amendment 3260 #
Proposal for a regulation
Annex IV – paragraph 1 – point 2 – point a
Annex IV – paragraph 1 – point 2 – point a
(a) provided that no confidential information or trade secrets are disclosed, the methods and steps performed for the development of the AI system, including, where relevant, recourse to pre- trained systems or tools provided by third parties and how these have been used, integrated or modified by the provider;
Amendment 3262 #
Proposal for a regulation
Annex IV – paragraph 1 – point 2 – point b
Annex IV – paragraph 1 – point 2 – point b
(b) provided that no confidential information or trade secrets are disclosed, the design specifications of the system, namely the general logic of the AI system and of the algorithms; the key design choices including the rationale and assumptions made, also with regard to persons or groups of persons on which the system is intended to be used; the main classification choices; what the system is designed to optimise for and the relevance of the different parameters; the decisions about any possible trade-off made regarding the technical solutions adopted to comply with the requirements set out in Title III, Chapter 2;