
Activities of Vlad-Marius BOTOŞ related to 2021/0106(COD)

Plenary speeches (1)

Artificial Intelligence Act (debate)
2023/06/13
Dossiers: 2021/0106(COD)

Shadow opinions (1)

OPINION on the proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts
2022/06/16
Committee: CULT
Dossiers: 2021/0106(COD)
Documents: PDF(252 KB) DOC(159 KB)
Authors: Marcel KOLAJA (MEP ID 197546)

Amendments (297)

Amendment 55 #
Proposal for a regulation
Recital 1
(1) The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, marketing and use of artificial intelligence in conformity with Union values, without hindering the innovation and evolution of artificial intelligence and the beneficial contributions it can bring to society. This Regulation pursues a number of overriding reasons of public interest, such as a high level of protection of health, safety and fundamental rights, and it ensures the free movement of AI-based goods and services cross-border, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation.
2022/04/01
Committee: CULT
Amendment 60 #
Proposal for a regulation
Recital 2
(2) Artificial intelligence systems (AI systems) can be easily deployed in multiple sectors of the economy and society, including cross border, and circulate throughout the Union. Certain Member States have already explored the adoption of national rules to ensure that Artificial Intelligence is safe and is developed and used in compliance with fundamental rights obligations. Differing national rules may lead to fragmentation of the internal market and decrease legal certainty for operators that develop or use AI systems. A consistent and high level of protection throughout the Union should therefore be ensured, while divergences hampering the free circulation, innovation and development of AI systems and related products and services within the internal market should be prevented, by laying down uniform obligations for operators and guaranteeing the uniform protection of overriding reasons of public interest and of rights of persons throughout the internal market based on Article 114 of the Treaty on the Functioning of the European Union (TFEU). To the extent that this Regulation contains specific rules on the protection of individuals with regard to the processing of personal data concerning restrictions of the use of AI systems for ‘real-time’ remote biometric identification in publicly accessible spaces for the purpose of law enforcement, it is appropriate to base this Regulation, in as far as those specific rules are concerned, on Article 16 of the TFEU. In light of those specific rules and the recourse to Article 16 TFEU, it is appropriate to consult the European Data Protection Board.
2022/04/01
Committee: CULT
Amendment 63 #
Proposal for a regulation
Recital 3
(3) Artificial intelligence is a fast evolving family of technologies that can, and already does, contribute to a wide array of economic and societal benefits across the entire spectrum of industries and social activities. By improving prediction, optimising operations and resource allocation, and personalising digital solutions available for individuals and organisations, the use of artificial intelligence can provide key competitive advantages to companies and support socially and environmentally beneficial outcomes, for example in healthcare, farming, education and training, media and culture, infrastructure management, energy, transport and logistics, public services, security, justice, resource and energy efficiency, and climate change mitigation and adaptation.
2022/04/01
Committee: CULT
Amendment 78 #
Proposal for a regulation
Recital 9
(9) For the purposes of this Regulation the notion of publicly accessible space should be understood as referring to any physical place that is accessible to the public, irrespective of whether the place in question is privately or publicly owned. Therefore, the notion does not cover places that are private in nature and normally not freely accessible for third parties, including law enforcement authorities, unless those parties have been specifically invited or authorised, such as homes, private clubs, offices, warehouses, factories and other private spaces. Online spaces, whether publicly accessible or not, either for free or for various fees and conditions, are not covered either, as they are not physical spaces. However, the mere fact that certain conditions for accessing a particular space may apply, such as admission tickets or age restrictions, does not mean that the space is not publicly accessible within the meaning of this Regulation. Consequently, in addition to public spaces such as streets, relevant parts of government buildings and most transport infrastructure, spaces such as cinemas, theatres, shops and shopping centres are normally also publicly accessible. Whether a given space is accessible to the public should however be determined on a case-by-case basis, having regard to the specificities of the individual situation at hand. If certain online spaces conduct illegal activities defined as such by international and European Union legislation they will be subject to the specific legislation in place.
2022/04/01
Committee: CULT
Amendment 92 #
Proposal for a regulation
Recital 17
(17) AI systems providing social scoring of natural persons for general purpose by public authorities, educational institutions or on their behalf may lead to discriminatory outcomes and the exclusion of certain groups. They may violate the right to dignity and non-discrimination and the values of equality and justice. Such AI systems evaluate or classify the trustworthiness of natural persons based on their social behaviour in multiple contexts or known or predicted personal or personality characteristics. The social score obtained from such AI systems may lead to the detrimental or unfavourable treatment of natural persons or whole groups thereof in social and educational contexts which are unrelated to the context in which the data was originally generated or collected, or to a detrimental treatment that is disproportionate or unjustified to the gravity of their social behaviour. Such AI systems used directly or indirectly by public authorities and educational institutions for general purpose should therefore be prohibited.
2022/04/01
Committee: CULT
Amendment 108 #
Proposal for a regulation
Recital 33
(33) Technical inaccuracies of AI systems intended for the remote biometric identification of natural persons can lead to biased results and entail discriminatory effects. This is particularly relevant when it comes to age, ethnicity, sex or disabilities. Therefore, ‘real-time’ and ‘post’ remote biometric identification systems should be classified as high-risk. In view of the risks that they pose, both types of remote biometric identification systems should be subject to specific requirements on logging capabilities and human oversight. The high risk of non-remote biometric identification systems intended to be used in publicly accessible spaces, workplaces and education and training institutions should be determined on a case-by-case basis, considering the need for logging capabilities and other elements that might interfere with human rights.
2022/04/01
Committee: CULT
Amendment 112 #
Proposal for a regulation
Recital 35
(35) AI systems used on a compulsory basis by education and training institutions in education or vocational training, notably for determining access or assigning persons to educational and vocational training institutions, for evaluating persons on tests as part of or as a precondition for their education, or for determining the areas of study a student should follow, should be considered high-risk, since they may determine the educational and professional course of a person’s life and therefore affect their ability to secure their livelihood. When improperly designed and used, such systems may violate the right to education and training as well as the right not to be discriminated against, and perpetuate historical patterns of discrimination. However, these systems should be developed and used with the purpose of improving education and vocational training with full respect of the GDPR and other applicable laws. AI systems used to monitor students during tests at education and training institutions should not be considered high-risk if they use an internal system or database and are fully aligned with data protection rules.
2022/04/01
Committee: CULT
Amendment 141 #
Proposal for a regulation
Recital 86 a (new)
(86 a) Given the rapid technological developments and the required technical expertise in conducting the assessment of high-risk AI systems, the delegation of powers and the implementing powers of the Commission should be exercised with as much flexibility as possible. The Commission should regularly review Annex III, while consulting with the relevant stakeholders.
2022/04/01
Committee: CULT
Amendment 156 #
Proposal for a regulation
Article 3 – paragraph 1 – point 4
(4) ‘user’ means any natural or legal person, public authority, educational and training institution, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity;
2022/04/01
Committee: CULT
Amendment 158 #
Proposal for a regulation
Article 3 – paragraph 1 – point 35
(35) ‘biometric categorisation system’ means an AI system for the purpose of assigning natural persons to specific categories, such as sex, age, hair colour, eye colour, tattoos, ethnic origin, or sexual or political orientation, and others on the basis of their biometric data;
2022/04/01
Committee: CULT
Amendment 162 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 a (new)
(44 a) ‘education and training institutions’ means providers where people of different ages gain education and training, including preschools, childcare, primary schools, secondary schools, tertiary education providers, vocational education and training and any type of lifelong learning providers authorised by national education authorities, excluding NGOs and other economic operators providing vocational training and lifelong learning limited to the sector of their main activity.
2022/04/01
Committee: CULT
Amendment 170 #
Proposal for a regulation
Article 5 – paragraph 1 – point a
(a) the placing on the market, putting into service or use of an AI system that deploys psychological techniques beyond a person’s consciousness with the purpose, the effect or likely effect of materially distorting a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm;
2022/04/01
Committee: CULT
Amendment 173 #
Proposal for a regulation
Article 5 – paragraph 1 – point b
(b) the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of a person or a specific group of persons due to their known or predicted personality or social or economic situation, or due to their age or physical or mental capacity, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
2022/04/01
Committee: CULT
Amendment 180 #
Proposal for a regulation
Article 5 – paragraph 1 – point c – introductory part
(c) the placing on the market, putting into service or use of AI systems by public authorities, educational institutions or on their behalf for the evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social and emotional behaviour or known or predicted personal or personality characteristics, with the social score leading to all of the following:
2022/04/01
Committee: CULT
Amendment 183 #
Proposal for a regulation
Article 5 – paragraph 1 – point c – point ii a (new)
(ii a) mandatory determination of the areas of study a student should follow;
2022/04/01
Committee: CULT
Amendment 311 #
Proposal for a regulation
Citation 5 a (new)
Having regard to the opinion of the European Central Bank,
2022/06/13
Committee: IMCOLIBE
Amendment 348 #
Proposal for a regulation
Recital 5
(5) A Union legal framework laying down harmonised rules on artificial intelligence is therefore needed to foster the development, use and uptake of artificial intelligence in the internal market that at the same time meets a high level of protection of public interests, such as health and safety and the protection of fundamental rights, as recognised and protected by Union law. To achieve that objective, rules regulating the placing on the market and putting into service of certain AI systems should be laid down, thus ensuring the smooth functioning of the internal market and allowing those systems to benefit from the principle of free movement of goods and services. By laying down those rules as well as measures in support of innovation with a particular focus on SMEs and start-ups, this Regulation supports the objective of the Union of being a global leader in the development of secure, trustworthy and ethical artificial intelligence, as stated by the European Council33, and it ensures the protection of ethical principles, as specifically requested by the European Parliament34.
_________________
33 European Council, Special meeting of the European Council (1 and 2 October 2020) – Conclusions, EUCO 13/20, 2020, p. 6.
34 European Parliament resolution of 20 October 2020 with recommendations to the Commission on a framework of ethical aspects of artificial intelligence, robotics and related technologies, 2020/2012(INL).
2022/06/13
Committee: IMCOLIBE
Amendment 358 #
Proposal for a regulation
Recital 6
(6) The notion of AI system should be clearly defined to ensure legal certainty, while providing the flexibility to accommodate future technological developments. Therefore, the term AI system should be defined in line with internationally accepted definitions. The definition should be based on the key functional characteristics of AI systems, in particular the ability, for a given set of human-defined objectives, to generate outputs such as content, predictions, recommendations, or decisions which influence the environment with which the system interacts, be it in a physical or digital environment. AI systems can be designed to operate with varying levels of autonomy and be used on a stand-alone basis or as a component of a product, irrespective of whether the system is physically integrated into the product (embedded) or serves the functionality of the product without being integrated therein (non-embedded). The definition of AI system should be complemented by a list of specific techniques and approaches used for its development, which should be kept up-to-date in the light of market and technological developments through the adoption of delegated acts by the Commission to amend that list. In order to ensure alignment of definitions on an international level, the European Commission should engage in a dialogue with international organisations such as the Organisation for Economic Cooperation and Development (OECD), should their definitions of the term ‘AI system’ be adjusted.
2022/06/13
Committee: IMCOLIBE
Amendment 374 #
Proposal for a regulation
Recital 8
(8) The notion of remote biometric identification system as used in this Regulation should be defined functionally, as an AI system intended for the identification of natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database, and without prior knowledge whether the targeted person will be present and can be identified, irrespective of the particular technology, processes or types of biometric data used. Considering their different characteristics and manners in which they are used, as well as the different risks involved, a distinction should be made between ‘real-time’ and ‘post’ remote biometric identification systems. In the case of ‘real-time’ systems, the capturing of the biometric data, the comparison and the identification occur all instantaneously, near-instantaneously or in any event without a significant delay. In this regard, there should be no scope for circumventing the rules of this Regulation on the ‘real-time’ use of the AI systems in question by providing for minor delays. ‘Real-time’ systems involve the use of ‘live’ or ‘near-live’ material, such as video footage, generated by a camera or other device with similar functionality. In the case of ‘post’ systems, in contrast, the biometric data have already been captured and the comparison and identification occur only after a significant delay. This involves material, such as pictures or video footage generated by closed circuit television cameras or private devices, which has been generated before the use of the system in respect of the natural persons concerned. The notion of remote biometric identification system shall not include verification or authentication systems whose sole purpose is to confirm that a specific natural person is the person he or she claims to be, and systems that are used to confirm the identity of a natural person for the sole purpose of having access to a service, a device or premises.
2022/06/13
Committee: IMCOLIBE
Amendment 399 #
Proposal for a regulation
Recital 12 a (new)
(12 a) This Regulation should not undermine research and development activity and should respect freedom of science. It is therefore necessary to exclude from its scope AI systems specifically developed and put into service for the sole purpose of scientific research and development and to ensure that the Regulation does not otherwise affect scientific research and development activity on AI systems. As regards product oriented research activity by providers, the provisions of this Regulation should apply insofar as such research leads to or entails placing of an AI system on the market or putting it into service. Under all circumstances, any research and development activity should be carried out in accordance with recognised ethical standards for scientific research.
2022/06/13
Committee: IMCOLIBE
Amendment 404 #
Proposal for a regulation
Recital 12 b (new)
(12 b) Given the complexity of the value chain for AI systems, it is essential to clarify the role of persons who may contribute to the development of AI systems covered by this Regulation, without being providers and thus being obliged to comply with the obligations and requirements established herein. It is necessary to clarify that general purpose AI systems - understood as AI systems that are able to perform generally applicable functions such as image/speech recognition, audio/video generation, pattern detection, question answering, translation etc. - should not be considered as having an intended purpose within the meaning of this Regulation, unless those systems have been adapted to a specific intended purpose that falls within the scope of this Regulation. Initial providers of general purpose AI systems should therefore only have to comply with the provisions on accuracy, robustness and cybersecurity as laid down in Art. 15 of this Regulation. If a person adapts a general purpose AI application to a specific intended purpose and places it on the market or puts it into service, it shall be considered the provider and be subject to the obligations laid down in this Regulation. The initial provider of a general purpose AI application shall, after placing it on the market or putting it into service, and without compromising its own intellectual property rights or trade secrets, provide the new provider with all essential, relevant and reasonably expected information that is necessary to comply with the obligations set out in this Regulation.
2022/06/13
Committee: IMCOLIBE
Amendment 430 #
Proposal for a regulation
Recital 16
(16) The placing on the market, putting into service or use of certain AI systems with the objective to or the effect of distorting human behaviour, whereby physical or psychological harms are reasonably likely to occur, should be forbidden. Such AI systems deploy subliminal components individuals cannot perceive, or exploit vulnerabilities of specific groups of persons due to their age, disabilities, social or economic situation. They do so with the intention to materially distort the behaviour of a person and in a manner that causes or is likely to cause harm to that or another person. The intention may not be presumed if the distortion of human behaviour results from factors external to the AI system which are outside of the control of the provider or the user. Research for legitimate purposes in relation to such AI systems should not be stifled by the prohibition, if such research does not amount to use of the AI system in human-machine relations that exposes natural persons to harm and such research is carried out in accordance with recognised ethical standards for scientific research.
2022/06/13
Committee: IMCOLIBE
Amendment 443 #
Proposal for a regulation
Recital 17 a (new)
(17 a) AI systems used by law enforcement authorities or on their behalf to predict the probability of a natural person offending or reoffending, based on profiling and individual risk assessment, hold a particular risk of discrimination against certain persons or groups of persons, as they violate human dignity as well as the key legal principle of presumption of innocence. Such AI systems should therefore be prohibited.
2022/06/13
Committee: IMCOLIBE
Amendment 450 #
Proposal for a regulation
Recital 18
(18) The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement is considered particularly intrusive in the rights and freedoms of the concerned persons, to the extent that it may affect the private life of a large part of the population, evoke a feeling of constant surveillance and indirectly dissuade the exercise of the freedom of assembly and other fundamental rights. In addition, the immediacy of the impact and the limited opportunities for further checks or corrections in relation to the use of such systems operating in ‘real-time’ carry heightened risks for the rights and freedoms of the persons that are concerned by law enforcement activities. The use of those systems in publicly accessible places should therefore be prohibited.
2022/06/13
Committee: IMCOLIBE
Amendment 464 #
Proposal for a regulation
Recital 19
(19) The use of those systems for the purpose of law enforcement should therefore be prohibited, except in three exhaustively listed and narrowly defined situations, where the use is strictly necessary to achieve a substantial public interest, the importance of which outweighs the risks. Those situations involve the search for potential victims of crime, including missing children; certain threats to the life or physical safety of natural persons or of a terrorist attack; and the detection, localisation, identification or prosecution of perpetrators or suspects of the criminal offences referred to in Council Framework Decision 2002/584/JHA38 if those criminal offences are punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years and as they are defined in the law of that Member State. Such threshold for the custodial sentence or detention order in accordance with national law contributes to ensure that the offence should be serious enough to potentially justify the use of ‘real-time’ remote biometric identification systems. Moreover, of the 32 criminal offences listed in the Council Framework Decision 2002/584/JHA, some are in practice likely to be more relevant than others, in that the recourse to ‘real-time’ remote biometric identification will foreseeably be necessary and proportionate to highly varying degrees for the practical pursuit of the detection, localisation, identification or prosecution of a perpetrator or suspect of the different criminal offences listed and having regard to the likely differences in the seriousness, probability and scale of the harm or possible negative consequences.
_________________
38 Council Framework Decision 2002/584/JHA of 13 June 2002 on the European arrest warrant and the surrender procedures between Member States (OJ L 190, 18.7.2002, p. 1).
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 477 #
Proposal for a regulation
Recital 20
(20) In order to ensure that those systems are used in a responsible and proportionate manner, it is also important to establish that, in each of those three exhaustively listed and narrowly defined situations, certain elements should be taken into account, in particular as regards the nature of the situation giving rise to the request and the consequences of the use for the rights and freedoms of all persons concerned and the safeguards and conditions provided for with the use. In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement should be subject to appropriate limits in time and space, having regard in particular to the evidence or indications regarding the threats, the victims or perpetrator. The reference database of persons should be appropriate for each use case in each of the three situations mentioned above.
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 486 #
Proposal for a regulation
Recital 21
(21) Each use of a ‘real-time’ remote biometric identification system in publicly accessible spaces for the purpose of law enforcement should be subject to an express and specific authorisation by a judicial authority or by an independent administrative authority of a Member State. Such authorisation should in principle be obtained prior to the use, except in duly justified situations of urgency, that is, situations where the need to use the systems in question is such as to make it effectively and objectively impossible to obtain an authorisation before commencing the use. In such situations of urgency, the use should be restricted to the absolute minimum necessary and be subject to appropriate safeguards and conditions, as determined in national law and specified in the context of each individual urgent use case by the law enforcement authority itself. In addition, the law enforcement authority should in such situations seek to obtain an authorisation as soon as possible, whilst providing the reasons for not having been able to request it earlier.
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 494 #
Proposal for a regulation
Recital 22
(22) Furthermore, it is appropriate to provide, within the exhaustive framework set by this Regulation that such use in the territory of a Member State in accordance with this Regulation should only be possible where and in as far as the Member State in question has decided to expressly provide for the possibility to authorise such use in its detailed rules of national law. Consequently, Member States remain free under this Regulation not to provide for such a possibility at all or to only provide for such a possibility in respect of some of the objectives capable of justifying authorised use identified in this Regulation.
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 497 #
Proposal for a regulation
Recital 23
(23) The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement necessarily involves the processing of biometric data. The rules of this Regulation that prohibit, subject to certain exceptions, such use, which are based on Article 16 TFEU, should apply as lex specialis in respect of the rules on the processing of biometric data contained in Article 10 of Directive (EU) 2016/680, thus regulating such use and the processing of biometric data involved in an exhaustive manner. Therefore, such use and processing should only be possible in as far as it is compatible with the framework set by this Regulation, without there being scope, outside that framework, for the competent authorities, where they act for purpose of law enforcement, to use such systems and process such data in connection thereto on the grounds listed in Article 10 of Directive (EU) 2016/680. In this context, this Regulation is not intended to provide the legal basis for the processing of personal data under Article 8 of Directive 2016/680. However, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for purposes other than law enforcement, including by competent authorities, should not be covered by the specific framework regarding such use for the purpose of law enforcement set by this Regulation. Such use for purposes other than law enforcement should therefore not be subject to the requirement of an authorisation under this Regulation and the applicable detailed rules of national law that may give effect to it.
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 511 #
Proposal for a regulation
Recital 24
(24) Any processing of biometric data and other personal data involved in the use of AI systems for biometric identification, other than in connection to the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement as regulated by this Regulation, including where those systems are used by competent authorities in publicly accessible spaces for other purposes than law enforcement, should continue to comply with all requirements resulting from Article 9(1) of Regulation (EU) 2016/679, Article 10(1) of Regulation (EU) 2018/1725 and Article 10 of Directive (EU) 2016/680, as applicable.
2022/06/13
Committee: IMCOLIBE
Amendment 515 #
Proposal for a regulation
Recital 24 a (new)
(24 a) Fundamental rights in the digital sphere have to be guaranteed to the same extent as in the offline world. The right to privacy needs to be ensured, amongst others through end-to-end encryption in private online communication and the protection of private content against any kind of general or targeted surveillance, be it by public or private actors. Therefore, the use of AI systems violating the right to privacy in online communication services should be prohibited.
2022/06/13
Committee: IMCOLIBE
Amendment 534 #
Proposal for a regulation
Recital 30
(30) As regards AI systems that are safety components of products, or which are themselves products, falling within the scope of certain Union harmonisation legislation, it is appropriate to classify them as high-risk under this Regulation if the product in question undergoes the conformity assessment procedure in order to ensure compliance with essential safety requirements with a third-party conformity assessment body pursuant to that relevant Union harmonisation legislation. In particular, such products are machinery, toys, lifts, equipment and protective systems intended for use in potentially explosive atmospheres, radio equipment, pressure equipment, recreational craft equipment, cableway installations, appliances burning gaseous fuels, medical devices, and in vitro diagnostic medical devices.
2022/06/13
Committee: IMCOLIBE
Amendment 546 #
Proposal for a regulation
Recital 33
(33) Technical inaccuracies of AI systems intended for the remote biometric identification of natural persons can lead to biased results and entail discriminatory effects. This is particularly relevant when it comes to age, ethnicity, sex or disabilities. Therefore, ‘real-time’ and ‘post’ remote biometric identification systems should be classified as high-risk, except for verification or authentication systems whose sole purpose is to confirm that a specific natural person is the person he or she claims to be, and systems that are used to confirm the identity of a natural person for the sole purpose of having access to a service, a device or premises. In view of the risks that they pose, both types of remote biometric identification systems should be subject to specific requirements on logging capabilities and human oversight.
2022/06/13
Committee: IMCOLIBE
Amendment 563 #
Proposal for a regulation
Recital 36
(36) AI systems used for making autonomous decisions or materially influencing decisions in employment, workers management and access to self- employment, notably for the recruitment and selection of persons, for making decisions on promotion and termination and for task allocation, monitoring or evaluation of persons in work-related contractual relationships, should also be classified as high-risk, since those systems may appreciably impact future career prospects and livelihoods of these persons. Relevant work-related contractual relationships should involve employees and persons providing services through platforms as referred to in the Commission Work Programme 2021. Such persons should in principle not be considered users within the meaning of this Regulation. Throughout the recruitment process and in the evaluation, promotion, or retention of persons in work-related contractual relationships, such systems may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. AI systems used to monitor the performance and behaviour of these persons may also impact their rights to data protection and privacy.
2022/06/13
Committee: IMCOLIBE
Amendment 576 #
Proposal for a regulation
Recital 37
(37) Another area in which the use of AI systems deserves special consideration is the access to and enjoyment of certain essential private and public services and benefits necessary for people to fully participate in society or to improve one’s standard of living. In particular, AI systems used to evaluate the credit score or creditworthiness of natural persons should be classified as high-risk AI systems, since they determine those persons’ access to financial resources or essential services such as housing, electricity, and telecommunication services. AI systems used for this purpose may lead to discrimination of persons or groups and perpetuate historical patterns of discrimination, for example based on racial or ethnic origins, disabilities, age, sexual orientation, or create new forms of discriminatory impacts. Considering the very limited scale of the impact and the available alternatives on the market, it is appropriate to exempt AI systems for the purpose of creditworthiness assessment and credit scoring when put into service by SMEs and start-ups for their own use. Natural persons applying for or receiving public assistance benefits and services from public authorities are typically dependent on those benefits and services and in a vulnerable position in relation to the responsible authorities. If AI systems are used for determining whether such benefits and services should be denied, reduced, revoked or reclaimed by authorities, they may have a significant impact on persons’ livelihood and may infringe their fundamental rights, such as the right to social protection, non-discrimination, human dignity or an effective remedy. Those systems should therefore be classified as high-risk.
Nonetheless, this Regulation should not hamper the development and use of innovative approaches in the public administration, which would stand to benefit from a wider use of compliant and safe AI systems, provided that those systems do not entail a high risk to legal and natural persons. Finally, AI systems used to dispatch or establish priority in the dispatching of emergency first response services should also be classified as high- risk since they make decisions in very critical situations for the life and health of persons and their property.
2022/06/13
Committee: IMCOLIBE
Amendment 582 #
Proposal for a regulation
Recital 38
(36) Actions by law enforcement authorities involving certain uses of AI systems are characterised by a significant degree of power imbalance and may lead to surveillance, arrest or deprivation of a natural person’s liberty as well as other adverse impacts on fundamental rights guaranteed in the Charter. In particular, if the AI system is not trained with high quality data, does not meet adequate requirements in terms of its accuracy or robustness, or is not properly designed and tested before being put on the market or otherwise put into service, it may single out people in a discriminatory or otherwise incorrect or unjust manner. Furthermore, the exercise of important procedural fundamental rights, such as the right to an effective remedy and to a fair trial as well as the right of defence and the presumption of innocence, could be hampered, in particular, where such AI systems are not sufficiently transparent, explainable and documented. It is therefore appropriate to classify as high-risk a number of AI systems intended to be used in the law enforcement context where accuracy, reliability and transparency are particularly important to avoid adverse impacts, retain public trust and ensure accountability and effective redress.
In view of the nature of the activities in question and the risks relating thereto, those high-risk AI systems should include in particular AI systems intended to be used by law enforcement authorities for individual risk assessments, polygraphs and similar tools or to detect the emotional state of natural person, to detect ‘deep fakes’, for the evaluation of the reliability of evidence in criminal proceedings, for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons, or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups, for profiling in the course of detection, investigation or prosecution of criminal offences, as well as for crime analytics regarding natural persons. AI systems specifically intended to be used for administrative proceedings by tax and customs authorities should not be considered high-risk AI systems used by law enforcement authorities for the purposes of prevention, detection, investigation and prosecution of criminal offences.
2022/06/13
Committee: IMCOLIBE
Amendment 599 #
Proposal for a regulation
Recital 40
(40) Certain AI systems intended for the administration of justice and democratic processes should be classified as high-risk, considering their potentially significant impact on democracy, rule of law, individual freedoms as well as the right to an effective remedy and to a fair trial. In particular, to address the risks of potential biases, errors and opacity, it is appropriate to qualify as high-risk AI systems intended to assist judicial authorities in researching and interpreting facts or the law and for applying the law to a concrete set of facts. Such qualification should not extend, however, to AI systems intended for purely ancillary administrative activities that do not affect the actual administration of justice in individual cases, such as anonymisation or pseudonymisation of judicial decisions, documents or data, communication between personnel, administrative tasks or allocation of resources.
2022/06/13
Committee: IMCOLIBE
Amendment 662 #
Proposal for a regulation
Recital 56
(56) To enable enforcement of this Regulation and create a level-playing field for operators, and taking into account the different forms of making available of digital products, it is important to ensure that, under all circumstances, a person established in the Union can provide authorities with all the necessary information on the compliance of an AI system. Therefore, prior to making their AI systems available in the Union, where an importer cannot be identified, providers established outside the Union shall, by written mandate, appoint an authorised representative established in the Union.
2022/06/13
Committee: IMCOLIBE
Amendment 674 #
Proposal for a regulation
Recital 61
(61) Standardisation should play a key role to provide technical solutions to providers to ensure compliance with this Regulation. Compliance with harmonised standards as defined in Regulation (EU) No 1025/2012 of the European Parliament and of the Council54 should be a means for providers to demonstrate conformity with the requirements of this Regulation. However, the Commission could adopt common technical specifications in areas where no harmonised standards exist and are not expected to be published within a reasonable period or where they are insufficient, only after consulting the Artificial Intelligence Board, the European standardisation organisations as well as the relevant stakeholders. The Commission should duly justify why it decided not to use harmonised standards.
_________________
54 Regulation (EU) No 1025/2012 of the European Parliament and of the Council of 25 October 2012 on European standardisation, amending Council Directives 89/686/EEC and 93/15/EEC and Directives 94/9/EC, 94/25/EC, 95/16/EC, 97/23/EC, 98/34/EC, 2004/22/EC, 2007/23/EC, 2009/23/EC and 2009/105/EC of the European Parliament and of the Council and repealing Council Decision 87/95/EEC and Decision No 1673/2006/EC of the European Parliament and of the Council (OJ L 316, 14.11.2012, p. 12).
2022/06/13
Committee: IMCOLIBE
Amendment 683 #
Proposal for a regulation
Recital 64
(64) Given the more extensive experience of professional pre-market certifiers in the field of product safety and the different nature of risks involved, it is appropriate to limit, at least in an initial phase of application of this Regulation, the scope of application of third-party conformity assessment for high-risk AI systems other than those related to products. Therefore, the conformity assessment of such systems should be carried out as a general rule by the provider under its own responsibility, with the only exception of AI systems intended to be used for the remote biometric identification of persons and AI systems intended to be used to make inferences on the basis of biometric data that produce legal effects or affect the rights and freedoms of natural persons. For those types of AI systems the involvement of a notified body in the conformity assessment should be foreseen, to the extent they are not prohibited.
2022/06/13
Committee: IMCOLIBE
Amendment 713 #
Proposal for a regulation
Recital 70
(70) Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems. In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use or where the content is part of an obviously artistic, creative or fictional cinematographic work. Moreover, natural persons should be notified when they are exposed to an emotion recognition system or a biometric categorisation system. Such information and notifications should be provided in accessible formats for persons with disabilities. Further, users, who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a person to be authentic, should disclose, in an appropriate, clear and visible manner, that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin.
2022/06/13
Committee: IMCOLIBE
Amendment 733 #
Proposal for a regulation
Recital 73
(73) In order to promote and protect innovation, it is important that the interests of start-ups and SME providers and users of AI systems are taken into particular account. To this objective, Member States should develop initiatives, which are targeted at those operators, including on awareness raising and information communication. Moreover, the specific interests and needs of SMEs and start-ups shall be taken into account when Notified Bodies set conformity assessment fees. Translation costs related to mandatory documentation and communication with authorities may constitute a significant cost for providers and other operators, notably those of a smaller scale. Member States should possibly ensure that one of the languages determined and accepted by them for relevant providers’ documentation and for communication with operators is one which is broadly understood by the largest possible number of cross-border users.
2022/06/13
Committee: IMCOLIBE
Amendment 741 #
Proposal for a regulation
Recital 76
(76) In order to facilitate a smooth, effective and harmonised implementation of this Regulation a European Artificial Intelligence Board should be established as a body of the Union and should have legal personality. The Board should be responsible for a number of advisory tasks, including issuing opinions, recommendations, advice or guidance on matters related to the implementation of this Regulation, including on technical specifications or existing standards regarding the requirements established in this Regulation and providing advice to and assisting the Commission and the national competent authorities on specific questions related to artificial intelligence.
2022/06/13
Committee: IMCOLIBE
Amendment 796 #
Proposal for a regulation
Article 1 – paragraph 1 – point d
(d) harmonised transparency rules for certain AI systems intended to interact with natural persons, emotion recognition systems and biometric categorisation systems, and AI systems used to generate or manipulate image, audio or video content;
2022/06/13
Committee: IMCOLIBE
Amendment 797 #
Proposal for a regulation
Article 1 – paragraph 1 – point e
(e) rules on market monitoring, market surveillance and governance;
2022/06/13
Committee: IMCOLIBE
Amendment 802 #
Proposal for a regulation
Article 1 – paragraph 1 – point e a (new)
(e a) measures in support of innovation with a particular focus on SMEs and start-ups, including the setting up of regulatory sandboxes and the reduction of regulatory burdens.
2022/06/13
Committee: IMCOLIBE
Amendment 820 #
Proposal for a regulation
Article 2 – paragraph 1 – point b
(b) users of AI systems who are established within the Union;
2022/06/13
Committee: IMCOLIBE
Amendment 827 #
Proposal for a regulation
Article 2 – paragraph 1 – point c
(c) providers and users of AI systems who are established in a third country, where the output produced by the system is used in the Union;
2022/06/13
Committee: IMCOLIBE
Amendment 833 #
Proposal for a regulation
Article 2 – paragraph 1 – point c a (new)
(c a) importers and distributors of AI systems;
2022/06/13
Committee: IMCOLIBE
Amendment 834 #
Proposal for a regulation
Article 2 – paragraph 1 – point c b (new)
(c b) product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark;
2022/06/13
Committee: IMCOLIBE
Amendment 837 #
Proposal for a regulation
Article 2 – paragraph 1 – point c c (new)
(c c) authorised representatives of providers, which are established in the Union.
2022/06/13
Committee: IMCOLIBE
Amendment 844 #
Proposal for a regulation
Article 2 – paragraph 2 – introductory part
2. For AI systems classified as high-risk in accordance with Article 6, related to products covered by Union harmonisation legislation listed in Annex II, section B, only Article 84 of this Regulation shall apply.
2022/06/13
Committee: IMCOLIBE
Amendment 845 #
Proposal for a regulation
Article 2 – paragraph 2 – point a
(a) Regulation (EC) 300/2008; [deleted]
2022/06/13
Committee: IMCOLIBE
Amendment 847 #
Proposal for a regulation
Article 2 – paragraph 2 – point b
(b) Regulation (EU) No 167/2013; [deleted]
2022/06/13
Committee: IMCOLIBE
Amendment 849 #
Proposal for a regulation
Article 2 – paragraph 2 – point c
(c) Regulation (EU) No 168/2013; [deleted]
2022/06/13
Committee: IMCOLIBE
Amendment 851 #
Proposal for a regulation
Article 2 – paragraph 2 – point d
(d) Directive 2014/90/EU; [deleted]
2022/06/13
Committee: IMCOLIBE
Amendment 853 #
Proposal for a regulation
Article 2 – paragraph 2 – point e
(e) Directive (EU) 2016/797; [deleted]
2022/06/13
Committee: IMCOLIBE
Amendment 856 #
Proposal for a regulation
Article 2 – paragraph 2 – point f
(f) Regulation (EU) 2018/858; [deleted]
2022/06/13
Committee: IMCOLIBE
Amendment 857 #
Proposal for a regulation
Article 2 – paragraph 2 – point g
(g) Regulation (EU) 2018/1139; [deleted]
2022/06/13
Committee: IMCOLIBE
Amendment 860 #
Proposal for a regulation
Article 2 – paragraph 2 – point h
(h) Regulation (EU) 2019/2144. [deleted]
2022/06/13
Committee: IMCOLIBE
Amendment 861 #
Proposal for a regulation
Article 2 – paragraph 2 a (new)
2 a. This Regulation shall not apply to AI systems, including their output, specifically developed and put into service for the sole purpose of scientific research and development.
2022/06/13
Committee: IMCOLIBE
Amendment 863 #
Proposal for a regulation
Article 2 – paragraph 2 b (new)
2 b. This Regulation shall not apply to any research and development activity regarding AI systems in so far as such activity does not lead to or entail placing an AI system on the market or putting it into service.
2022/06/13
Committee: IMCOLIBE
Amendment 912 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1
(1) ‘artificial intelligence system’ (AI system) means software that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing real or virtual environments; AI systems can be designed to operate with varying levels of autonomy and can be developed with one or more of the techniques and approaches listed in Annex I;
2022/06/13
Committee: IMCOLIBE
Amendment 923 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1 a (new)
(1 a) 'autonomy' means that to some degree an AI system operates by interpreting certain input and by using a set of pre-determined objectives, without being limited to such instructions, even when the system’s behaviour was initially constrained by, and targeted at, fulfilling the goal it was given and other relevant design choices made by its developer;
2022/06/13
Committee: IMCOLIBE
Amendment 926 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1 b (new)
(1 b) 'general purpose AI system’ means an AI system that is able to perform generally applicable functions for multiple potential purposes, such as image or speech recognition, audio or video generation, pattern detection, question answering, and translation, is largely customizable and often open source software;
2022/06/13
Committee: IMCOLIBE
Amendment 930 #
Proposal for a regulation
Article 3 – paragraph 1 – point 2
(2) ‘developer’ means a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed, and places it on the market or puts it into service under its own name or trademark, whether for payment or free of charge, or that adapts general purpose AI systems to a specific intended purpose;
2022/06/13
Committee: IMCOLIBE
Amendment 937 #
Proposal for a regulation
Article 3 – paragraph 1 – point 3
(3) ‘small-scale provider’ means a provider that is a micro or small enterprise within the meaning of Commission Recommendation 2003/361/EC61; [deleted]
_________________
61 Commission Recommendation of 6 May 2003 concerning the definition of micro, small and medium-sized enterprises (OJ L 124, 20.5.2003, p. 36).
2022/06/13
Committee: IMCOLIBE
Amendment 939 #
Proposal for a regulation
Article 3 – paragraph 1 – point 3 a (new)
(3 a) ‘risk’ means the combination of the probability of occurrence of a harm and the severity of that harm;
2022/06/13
Committee: IMCOLIBE
Amendment 940 #
Proposal for a regulation
Article 3 – paragraph 1 – point 3 b (new)
(3 b) ‘significant harm’ means a material harm to a person’s life, health, safety or fundamental rights, or to entities or society at large, whose severity is exceptional. The severity is exceptional in particular when the harm is hardly reversible, the outcome has a material adverse impact on the health or safety of a person, or the impacted person is dependent on the outcome;
2022/06/13
Committee: IMCOLIBE
Amendment 947 #
Proposal for a regulation
Article 3 – paragraph 1 – point 4
(4) ‘deployer’ means any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity;
2022/06/13
Committee: IMCOLIBE
Amendment 1002 #
Proposal for a regulation
Article 3 – paragraph 1 – point 23
(23) ‘substantial modification’ means a change to the AI system following its placing on the market or putting into service, which is not foreseen or planned by the provider and as a result of which the compliance of the AI system with the requirements set out in Title III, Chapter 2 of this Regulation is affected, or which results in a modification to the intended purpose for which the AI system has been assessed. A substantial modification is given if the remaining risk is increased by the modification of the AI system under the application of all necessary protective measures;
2022/06/13
Committee: IMCOLIBE
Amendment 1009 #
Proposal for a regulation
Article 3 – paragraph 1 – point 24
(24) ‘CE marking of conformity’ (CE marking) means a physical or digital marking by which a provider indicates that an AI system or a product with an embedded AI system is in conformity with the requirements set out in Title III, Chapter 2 of this Regulation and other applicable Union legislation harmonising the conditions for the marketing of products (‘Union harmonisation legislation’) providing for its affixing;
2022/06/13
Committee: IMCOLIBE
Amendment 1037 #
Proposal for a regulation
Article 3 – paragraph 1 – point 34
(34) ‘emotion recognition system’ means an AI system for the purpose of identifying or inferring emotions, thoughts or intentions of natural persons on the basis of their biometric or biometrics-based data;
2022/06/13
Committee: IMCOLIBE
Amendment 1044 #
Proposal for a regulation
Article 3 – paragraph 1 – point 35
(35) ‘biometric categorisation system’ means an AI system for the purpose of assigning natural persons to specific categories, such as sex, age, hair colour, eye colour, tattoos, ethnic origin or sexual or political orientation, or inferring their characteristics and attributes on the basis of their biometric or biometrics-based data;
2022/06/13
Committee: IMCOLIBE
Amendment 1052 #
Proposal for a regulation
Article 3 – paragraph 1 – point 36
(36) ‘remote biometric identification system’ means an AI system for the purpose of identifying natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database, and without prior knowledge of the user of the AI system whether the person will be present and can be identified, excluding verification/authentication systems whose sole purpose is to confirm that a specific natural person is the person he or she claims to be, and systems that are used to confirm the identity of a natural person for the sole purpose of having access to a service, a device or premises;
2022/06/13
Committee: IMCOLIBE
Amendment 1103 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 a (new)
(44 a) ‘regulatory sandbox’ means a facility that provides a controlled environment that facilitates the safe development, testing and validation of innovative AI systems for a limited time before their placement on the market or putting into service pursuant to a specific plan;
2022/06/13
Committee: IMCOLIBE
Amendment 1111 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 b (new)
(44 b) ‘deep fake’ means an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful.
2022/06/13
Committee: IMCOLIBE
Amendment 1129 #
Proposal for a regulation
Article 3 a (new)
Article 3 a
General Purpose AI
1. General purpose AI applications shall not be considered as having an intended purpose within the meaning of this Regulation unless those systems have been adapted to a specific intended purpose that falls within the scope of this Regulation.
2. Any natural or legal person that adapts a general purpose AI application to a specific intended purpose and places it on the market or puts it into service shall be considered the provider and be subject to the obligations laid down in this Regulation.
3. The initial provider of a general purpose AI application shall comply with Article 15 of this Regulation at all times and shall, after placing it on the market or putting it into service, and without compromising its own intellectual property rights or trade secrets, provide the new provider referred to in paragraph 2 with all essential, relevant and reasonably expected information that is necessary to comply with the obligations set out in this Regulation.
4. The initial provider of a general purpose AI application shall only be responsible for the accuracy of the provided information and compliance with Article 15 of this Regulation towards the natural or legal person that adapts the general purpose AI application to a specific intended purpose.
2022/06/13
Committee: IMCOLIBE
Amendment 1136 #
Proposal for a regulation
Article 4 – paragraph 1
The Commission is empowered to adopt delegated acts in accordance with Article 73, after an adequate and transparent consultation process involving the relevant stakeholders, to amend the list of techniques and approaches listed in Annex I within the scope of the definition of an AI system as provided for in Article 3(1), in order to update that list to market and technological developments on the basis of transparent characteristics that are similar to the techniques and approaches listed therein. Providers and users of AI systems should be given 24 months to comply with any amendment to Annex I.
2022/06/13
Committee: IMCOLIBE
Amendment 1169 #
Proposal for a regulation
Article 5 – paragraph 1 – point a
(a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness with the objective to or the effect of materially distorting a person’s behaviour in a manner that causes or is reasonably likely to cause that person or another person physical or psychological harm;
2022/06/13
Committee: IMCOLIBE
Amendment 1181 #
Proposal for a regulation
Article 5 – paragraph 1 – point b
(b) the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of an individual, including characteristics of such individual’s known or predicted personality or social or economic situation, or of a specific group of persons due to their age or physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
2022/06/13
Committee: IMCOLIBE
Amendment 1223 #
Proposal for a regulation
Article 5 – paragraph 1 – point c a (new)
(c a) the placing on the market, putting into service or use of an AI system for making individual risk assessments of natural persons in order to assess the risk of a natural person for offending or reoffending or for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of a natural person or on assessing personality traits and characteristics or past criminal behaviour of natural persons or groups of natural persons;
2022/06/13
Committee: IMCOLIBE
Amendment 1234 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – introductory part
(d) the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement.
2022/06/13
Committee: IMCOLIBE
Amendment 1254 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point i
(i) the targeted search for specific potential victims of crime, including missing children; [deleted]
2022/06/13
Committee: IMCOLIBE
Amendment 1260 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point ii
(ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack; [deleted]
2022/06/13
Committee: IMCOLIBE
Amendment 1274 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point iii
(iii) the detection, localisation, identification or prosecution of a perpetrator or suspect of a criminal offence referred to in Article 2(2) of Council Framework Decision 2002/584/JHA62 and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years, as determined by the law of that Member State. [deleted]
_________________
62 Council Framework Decision 2002/584/JHA of 13 June 2002 on the European arrest warrant and the surrender procedures between Member States (OJ L 190, 18.7.2002, p. 1).
2022/06/13
Committee: IMCOLIBE
Amendment 1286 #
Proposal for a regulation
Article 5 – paragraph 1 – point d a (new)
(d a) the use of an AI system for the general monitoring, detection and interpretation of private content in interpersonal communication services, including all measures that would undermine end-to-end encryption.
2022/06/13
Committee: IMCOLIBE
Amendment 1354 #
Proposal for a regulation
Article 5 – paragraph 2
2. The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement for any of the objectives referred to in paragraph 1 point d) shall take into account the following elements:
(a) the nature of the situation giving rise to the possible use, in particular the seriousness, probability and scale of the harm caused in the absence of the use of the system;
(b) the consequences of the use of the system for the rights and freedoms of all persons concerned, in particular the seriousness, probability and scale of those consequences.
In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement for any of the objectives referred to in paragraph 1 point d) shall comply with necessary and proportionate safeguards and conditions in relation to the use, in particular as regards the temporal, geographic and personal limitations. [deleted]
2022/06/13
Committee: IMCOLIBE
Amendment 1356 #
Proposal for a regulation
Article 5 – paragraph 2 – point a
(a) the nature of the situation giving rise to the possible use, in particular the seriousness, probability and scale of the harm caused in the absence of the use of the system; [deleted]
2022/06/13
Committee: IMCOLIBE
Amendment 1358 #
Proposal for a regulation
Article 5 – paragraph 2 – point b
(b) the consequences of the use of the system for the rights and freedoms of all persons concerned, in particular the seriousness, probability and scale of those consequences. [deleted]
2022/06/13
Committee: IMCOLIBE
Amendment 1361 #
Proposal for a regulation
Article 5 – paragraph 2 – subparagraph 1
In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement for any of the objectives referred to in paragraph 1 point d) shall comply with necessary and proportionate safeguards and conditions in relation to the use, in particular as regards the temporal, geographic and personal limitations.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1367 #
Proposal for a regulation
Article 5 – paragraph 3
3. As regards paragraphs 1, point (d) and 2, each individual use for the purpose of law enforcement of a ‘real-time’ remote biometric identification system in publicly accessible spaces shall be subject to a prior authorisation granted by a judicial authority or by an independent administrative authority of the Member State in which the use is to take place, issued upon a reasoned request and in accordance with the detailed rules of national law referred to in paragraph 4. However, in a duly justified situation of urgency, the use of the system may be commenced without an authorisation and the authorisation may be requested only during or after the use. The competent judicial or administrative authority shall only grant the authorisation where it is satisfied, based on objective evidence or clear indications presented to it, that the use of the ‘real-time’ remote biometric identification system at issue is necessary for and proportionate to achieving one of the objectives specified in paragraph 1, point (d), as identified in the request. In deciding on the request, the competent judicial or administrative authority shall take into account the elements referred to in paragraph 2.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1375 #
Proposal for a regulation
Article 5 – paragraph 3 – subparagraph 1
The competent judicial or administrative authority shall only grant the authorisation where it is satisfied, based on objective evidence or clear indications presented to it, that the use of the ‘real-time’ remote biometric identification system at issue is necessary for and proportionate to achieving one of the objectives specified in paragraph 1, point (d), as identified in the request. In deciding on the request, the competent judicial or administrative authority shall take into account the elements referred to in paragraph 2.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1387 #
Proposal for a regulation
Article 5 – paragraph 4
4. A Member State may decide to provide for the possibility to fully or partially authorise the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement within the limits and under the conditions listed in paragraphs 1, point (d), 2 and 3. That Member State shall lay down in its national law the necessary detailed rules for the request, issuance and exercise of, as well as supervision relating to, the authorisations referred to in paragraph 3. Those rules shall also specify in respect of which of the objectives listed in paragraph 1, point (d), including which of the criminal offences referred to in point (iii) thereof, the competent authorities may be authorised to use those systems for the purpose of law enforcement.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1423 #
Proposal for a regulation
Article 6 – paragraph 1 – point a
(a) the AI system is intended to be used as a main safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex II;
2022/06/13
Committee: IMCOLIBE
Amendment 1429 #
Proposal for a regulation
Article 6 – paragraph 1 – point b
(b) the product whose main safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment in order to ensure compliance with essential safety requirements with a view to the placing on the market or putting into service of that product pursuant to the Union harmonisation legislation listed in Annex II.
2022/06/13
Committee: IMCOLIBE
Amendment 1437 #
Proposal for a regulation
Article 6 – paragraph 2
2. In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall also be considered high-risk within the meaning of this Regulation, if they will be deployed in a critical area referred to in Annex III and an individual assessment of the specific application carried out in accordance with Article 6a shows that a significant harm is likely to arise.
2022/06/13
Committee: IMCOLIBE
Amendment 1456 #
Proposal for a regulation
Article 6 a (new)
Article 6 a Risk assessment 1. In order to determine the level of risk of AI systems, the provider of an AI system with an intended purpose in the areas referred to in Annex III has to conduct a risk assessment. 2. The risk assessment has to contain the following elements: a) name all possible harms to life, health and safety or fundamental rights of potentially impacted persons or entities or society at large; b) assess the likelihood and severity with which these harms might materialise; c) name the potential benefits of such a system for the potentially impacted persons and society at large; d) name possible and taken measures to address, prevent, minimise or mitigate the identified harms with a high probability to materialise; e) assess the possibilities to reverse these negative outcomes; f) the extent to which decision-making of the system is autonomous and outside of human influence. 3. If the risk assessment shows that a significant harm is likely to materialise, the provider has to comply with Chapter 2 in a way that is appropriate and proportionate to the identified risks.
2022/06/13
Committee: IMCOLIBE
Amendment 1466 #
Proposal for a regulation
Article 7 – paragraph 1 – introductory part
1. The Commission is empowered to adopt delegated acts in accordance with Article 73 to update the list in Annex III by adding high-risk AI systems where, after an adequate and transparent consultation process involving the relevant stakeholders, to update the list in Annex III by withdrawing areas from that list or by adding critical areas. For additions both of the following conditions arneed to be fulfilled:
2022/06/13
Committee: IMCOLIBE
Amendment 1503 #
Proposal for a regulation
Article 7 – paragraph 2 – point b a (new)
(b a) the extent to which the AI system acts autonomously;
2022/06/13
Committee: IMCOLIBE
Amendment 1520 #
Proposal for a regulation
Article 7 – paragraph 2 – point e a (new)
(e a) the potential misuse and malicious use of the AI system and of the technology underpinning it;
2022/06/13
Committee: IMCOLIBE
Amendment 1531 #
Proposal for a regulation
Article 7 – paragraph 2 – point g a (new)
(g a) magnitude and likelihood of benefit of the deployment of the AI system for individuals, groups, or society at large;
2022/06/13
Committee: IMCOLIBE
Amendment 1538 #
Proposal for a regulation
Article 7 – paragraph 2 – point h – introductory part
(h) the extent to which existing Union legislation, in particular the GDPR, provides for:
2022/06/13
Committee: IMCOLIBE
Amendment 1549 #
Proposal for a regulation
Article 7 – paragraph 2 a (new)
2 a. The Commission shall provide a transitional period of at least 24 months following each update of Annex III.
2022/06/13
Committee: IMCOLIBE
Amendment 1555 #
Proposal for a regulation
Article 8 – paragraph 1
1. High-risk AI systems shall comply with the requirements established in this Chapter, taking into account the generally acknowledged state of the art, including as reflected in relevant harmonised standards or common specifications.
2022/06/13
Committee: IMCOLIBE
Amendment 1575 #
Proposal for a regulation
Article 9 – paragraph 1
1. A risk management system shall be established, implemented, documented and maintained in appropriate relation to high-risk AI systems and the risks identified in the risk assessment referred to in Article 6a.
2022/06/13
Committee: IMCOLIBE
Amendment 1587 #
Proposal for a regulation
Article 9 – paragraph 2 – point a
(a) identification and analysis of the known and foreseeable risks associated with eachmost likely to occur to health, safety and fundamental rights in view of the intended purpose of the high-risk AI system;
2022/06/13
Committee: IMCOLIBE
Amendment 1591 #
Proposal for a regulation
Article 9 – paragraph 2 – point b
(b) estimation and evaluation of the risks that may emerge when the high-risk AI system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1598 #
Proposal for a regulation
Article 9 – paragraph 2 – point c
(c) evaluation of other possibly arisingnew arising significant risks based on the analysis of data gathered from the post-market monitoring system referred to in Article 61;
2022/06/13
Committee: IMCOLIBE
Amendment 1601 #
Proposal for a regulation
Article 9 – paragraph 2 – point d
(d) adoption of suitable risk management measureappropriate and targeted risk management measures to address identified significant risks in accordance with the provisions of the following paragraphs.
2022/06/13
Committee: IMCOLIBE
Amendment 1602 #
Proposal for a regulation
Article 9 – paragraph 2 a (new)
2 a. The risks referred to in paragraph 2 shall concern only those which may be reasonably mitigated or eliminated through the development or design of the high-risk AI system, or the provision of adequate technical information.
2022/06/13
Committee: IMCOLIBE
Amendment 1605 #
Proposal for a regulation
Article 9 – paragraph 3
3. The risk management measures referred to in paragraph 2, point (d) shall give due consideration to the effects and possible interactions resulting from the combined application of the requirements set out in this Chapter 2. They shall take into account the generally acknowledged state of the art, including as reflected in relevant harmonised standards or common specification, with a view to minimising risks more effectively while achieving an appropriate balance in implementing the measures to fulfil those requirements.
2022/06/13
Committee: IMCOLIBE
Amendment 1609 #
Proposal for a regulation
Article 9 – paragraph 4 – introductory part
4. The risk management measures referred to in paragraph 2, point (d) shall be such that any residual significant risk associated with each hazard as well as the overall residual risk of the high-risk AI systems is reasonably judged to be acceptable, having regard to the benefits that the high-risk AI system is reasonably expected to deliver and provided that the high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse. Those residual significant risks shall be communicated to the user.
2022/06/13
Committee: IMCOLIBE
Amendment 1621 #
Proposal for a regulation
Article 9 – paragraph 4 – subparagraph 1 – point a
(a) elimination or reduction of risks as far as posidentified and evaluated risks as far as economically and technologically feasible through adequate design and development of the high-risk AI system;
2022/06/13
Committee: IMCOLIBE
Amendment 1624 #
Proposal for a regulation
Article 9 – paragraph 4 – subparagraph 1 – point b
(b) where appropriate, implementation of adequate mitigation and control measures in relation to significant risks that cannot be eliminated;
2022/06/13
Committee: IMCOLIBE
Amendment 1627 #
Proposal for a regulation
Article 9 – paragraph 4 – subparagraph 1 – point c
(c) provision of adequate information pursuant to Article 13, in particular as regards the risks referred to in paragraph 2, point (b) of this Article, and, where appropriate, training to users.
2022/06/13
Committee: IMCOLIBE
Amendment 1639 #
Proposal for a regulation
Article 9 – paragraph 5
5. High-risk AI systems shall be tesevaluated for the purposes of identifying the most appropriate and targeted risk management measures. Testing and weighing any such measures against the potential benefits and intended goals of the system. Evaluations shall ensure that high-risk AI systems perform consistently for their intended purpose and they are in compliance with the relevant requirements set out in this Chapter.
2022/06/13
Committee: IMCOLIBE
Amendment 1653 #
Proposal for a regulation
Article 9 – paragraph 7
7. The testing of the high-risk AI systems shall be performed, as appropriate, at any point in time throughout the development process, and, in any event, prior to the placing on the market or the putting into service. Testing shall be made against preliminarily defined metrics and probabilistic thresholds that are appropriate to the intended purpose of the high-risk AI system.
2022/06/13
Committee: IMCOLIBE
Amendment 1669 #
Proposal for a regulation
Article 9 – paragraph 9
9. For credit institutions regulated by Directive 2013/36/EUproviders and AI systems already covered by Union law that require them to establish a specific risk management, the aspects described in paragraphs 1 to 8 shall be part of the risk management procedures established by those institutions pursuant to Article 74 of that Directiveat Union law or deemed to be covered as part of it.
2022/06/13
Committee: IMCOLIBE
Amendment 1673 #
Proposal for a regulation
Article 10 – paragraph 1
1. High-risk AI systems which make use of techniques involving the training of models with data shall be, as far as this can be reasonably expected and is feasible from a technical and economical point of view, developed on the basis of training, validation and testing data sets that meet the quality criteria referred to in paragraphs 2 to 5.
2022/06/13
Committee: IMCOLIBE
Amendment 1683 #
Proposal for a regulation
Article 10 – paragraph 2 – introductory part
2. Training, validation and testing data sets shall be subject to appropriate data governance and management practices appropriate for the context of the use as well as the intended purpose of the AI system. Those practices shall concern in particular,
2022/06/13
Committee: IMCOLIBE
Amendment 1693 #
Proposal for a regulation
Article 10 – paragraph 2 – point c
(c) relevant data preparation processing operations, such as annotation, labelling, cleaning, enrichment and aggregation;
2022/06/13
Committee: IMCOLIBE
Amendment 1702 #
Proposal for a regulation
Article 10 – paragraph 2 – point f
(f) examination in view of possible biases that are likely to affect the output of the AI system;
2022/06/13
Committee: IMCOLIBE
Amendment 1707 #
Proposal for a regulation
Article 10 – paragraph 2 – point g
(g) the identification of any possiblesignificant data gaps or shortcomings, and how those gaps and shortcomings can be addressed.
2022/06/13
Committee: IMCOLIBE
Amendment 1715 #
Proposal for a regulation
Article 10 – paragraph 3
3. Training, validation and testing data sets shall be relevant, representative, free of errors and completeHigh-risk AI systems shall be designed and developed with the best efforts to ensure that training, validation and testing data sets shall be relevant, representative, and to the best extent possible, free of errors and complete in accordance with industry standards. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used. These characteristics of the data sets may be met at the level of individual data sets or a combination thereof.
2022/06/13
Committee: IMCOLIBE
Amendment 1742 #
Proposal for a regulation
Article 10 – paragraph 6
6. Appropriate data governance and management practices shall apply fFor the development of high-risk AI systems nother than those which make use of using techniques involving the training of models in order to ensure that those high-risk AI systems comply with paragraph 2, paragraphs 2 to 5 shall apply only to the testing data sets.
2022/06/13
Committee: IMCOLIBE
Amendment 1753 #
Proposal for a regulation
Article 11 – paragraph 1 – subparagraph 1
The technical documentation shall be drawn up in such a way to demonstrate that the high-risk AI system complies with the requirements set out in this Chapter and provide national competent authorities and notified bodies with all the necessary information to assess the compliance of the AI system with those requirements. It shall contain, at a minimum, the elements set out in Annex IV or, in the case of SMEs and start-ups, any equivalent documentation meeting the same objectives, subject to approval of the competent authority.
2022/06/13
Committee: IMCOLIBE
Amendment 1778 #
Proposal for a regulation
Article 12 – paragraph 4
4. For high-risk AI systems referred to in paragraph 1, point (a) of Annex III, the logging capabilities shall provide, at a minimum: (a) recording of the period of each use of the system (start date and time and end date and time of each use); (b) the reference database against which input data has been checked by the system; (c) the input data for which the search has led to a match; (d) the identification of the natural persons involved in the verification of the results, as referred to in Article 14 (5).deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1790 #
Proposal for a regulation
Article 13 – paragraph 1
1. High-risk AI systems shall be designed and developed in such a way to ensure that their operation is sufficiently transparent to enable users to interpret the system’s output and use it appropriately. An appropriate type and degree of transparency shall be ensured, with a view to achieving compliance with the relevant obligations of the user and of the provider set out in Chapter 3 of this Title. Transparency shall thereby mean that, to the extent that can be reasonably expected and is feasible in technical terms, the AI systems output is interpretable by the user and the user is able to understand the general functionality of the AI system and its use of data.
2022/06/13
Committee: IMCOLIBE
Amendment 1793 #
Proposal for a regulation
Article 13 – paragraph 2
2. High-risk AI systems shall be accompanied by instructions for use in an appropriate digital format or otherwise that include concise, complete, correct and clear information that helps supporting informed decision-making by users and is relevant, accessible and comprehensible to users.
2022/06/13
Committee: IMCOLIBE
Amendment 1801 #
Proposal for a regulation
Article 13 – paragraph 3 – point b – point iii
(iii) any known or foreseeable circumstance, related to the use of the high-risk AI system in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, which may lead to risks to the health and safety or fundamental rights;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1808 #
Proposal for a regulation
Article 13 – paragraph 3 – point e a (new)
(e a) a description of the mechanisms included within the AI system that allow users to properly collect, store and interpret the logs in accordance with Article 12(1).
2022/06/13
Committee: IMCOLIBE
Amendment 1812 #
Proposal for a regulation
Article 14 – paragraph 1
1. HWhere proportionate to the risks associated with the high-risk system and where technical safeguards are not sufficient, high-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which the AI system is in use.
2022/06/13
Committee: IMCOLIBE
Amendment 1818 #
Proposal for a regulation
Article 14 – paragraph 2
2. Human oversight shall aim at preventing or minimising the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, in particular when such risks persist notwithstanding the application of other requirements set out in this Chapter.
2022/06/13
Committee: IMCOLIBE
Amendment 1830 #
Proposal for a regulation
Article 14 – paragraph 4 – introductory part
4. The measures referred to For the purpose of implementing paragraph 3 shall enable the individuals to whom human oversight is assigned to do the following, as appropriate to the circumstances 1 to 3, the high-risk AI system shall be provided to the user in such a way that the individuals to whom human oversight is assigned are enabled as appropriate and proportionate, to the circumstances and in accordance with industry standards:
2022/06/13
Committee: IMCOLIBE
Amendment 1832 #
Proposal for a regulation
Article 14 – paragraph 4 – point a
(a) fulto be aware of and sufficiently understand the capacities and limitations of the high-risk AI system and be able to duly monitor its operation, so that signs of anomalies, dysfunctions and unexpected performance can be detected and addressed as soon as possible;
2022/06/13
Committee: IMCOLIBE
Amendment 1833 #
Proposal for a regulation
Article 14 – paragraph 4 – point b
(b) remain aware of the possible tendency of automatically relying or over- relying on the output produced by a high- risk AI system (‘automation bias’), in particular for high-risk AI systems used to provide information or recommendations for decisions to be taken by natural persons;
2022/06/13
Committee: IMCOLIBE
Amendment 1836 #
Proposal for a regulation
Article 14 – paragraph 4 – point c
(c) be able to correctly interpret the high-risk AI system’s output, taking into account in particular the characteristics of the system andfor example the interpretation tools and methods available;
2022/06/13
Committee: IMCOLIBE
Amendment 1838 #
Proposal for a regulation
Article 14 – paragraph 4 – point d
(d) to be able to decide, in any particular situation, not to use the high-risk AI system or otherwise disregard, override or reverse the output of the high-risk AI system;
2022/06/13
Committee: IMCOLIBE
Amendment 1841 #
Proposal for a regulation
Article 14 – paragraph 4 – point e
(e) to be able to intervene on the operation of the high-risk AI system, halt or interrupt the system through a “stop” button or a similar procedurewhere reasonable and technically feasible and except if the human interference increases the risks or would negatively impact the performance in consideration of generally acknowledged state-of-the-art.
2022/06/13
Committee: IMCOLIBE
Amendment 1844 #
Proposal for a regulation
Article 14 – paragraph 5
5. For high-risk AI systems referred to in point 1(a) of Annex III, the measures referred to in paragraph 3 shall be such as to ensure that, in addition, no action or decision is taken by the user on the basis of the identification resulting from the system unless this has been verified and confirmed by at least two natural persons separately.
2022/06/13
Committee: IMCOLIBE
Amendment 1850 #
Proposal for a regulation
Article 15 – paragraph 1
1. High-risk AI systems shall be designed and developed in such a way that they achieve, in the light of their intended purpose and to the extent that can be reasonably expected and is in accordance with relevant industry standards, an appropriate level of accuracy, robustness and cybersecurity, and perform consistently in those respects throughout their lifecycle.
2022/06/13
Committee: IMCOLIBE
Amendment 1856 #
Proposal for a regulation
Article 15 – paragraph 2
2. The levels of accuracy and the relevant accuracy metrics of high-risk AI systemsrange of expected performance and the operational factors that affect that performance shall be declared in the accompanying instructions of use.
2022/06/13
Committee: IMCOLIBE
Amendment 1858 #
Proposal for a regulation
Article 15 – paragraph 3 – introductory part
3. High-risk AI systems shall be resilientdesigned and developed with safety and security-by-design mechanism so that they achieve, in the light of their intended purpose, an appropriate level of cyber resilience as regards to errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, in particular due to their interaction with natural persons or other systems.
2022/06/13
Committee: IMCOLIBE
Amendment 1863 #
Proposal for a regulation
Article 15 – paragraph 3 – subparagraph 2
High-risk AI systems that continue to learn after being placed on the market or put into service shall be developed in such a way to ensure that possibly biased outputs due to outputs used as aninfluencing input for future operations (‘feedback loops’) are duly addressed with appropriate mitigation measures.
2022/06/13
Committee: IMCOLIBE
Amendment 1867 #
Proposal for a regulation
Article 15 – paragraph 4 – subparagraph 1
The technical solutions aimed at ensuring and organisational measures designed to uphold the cybersecurity of high-risk AI systems shall be appropriate to the relevant circumstances and the risks.
2022/06/13
Committee: IMCOLIBE
Amendment 1887 #
Proposal for a regulation
Article 16 – paragraph 1 – point c
(c) draw-up the technical documentation of the high-risk AI system referred to in Article 18;
2022/06/13
Committee: IMCOLIBE
Amendment 1891 #
Proposal for a regulation
Article 16 – paragraph 1 – point d
(d) when under their control, keep the logs automatically generated by their high- risk AI systems as referred to in Article 20;
2022/06/13
Committee: IMCOLIBE
Amendment 1893 #
Proposal for a regulation
Article 16 – paragraph 1 – point e
(e) ensure that the high-risk AI system undergoes the relevant conformity assessment procedure as referred to in Article 43, prior to its placing on the market or putting into service;
2022/06/13
Committee: IMCOLIBE
Amendment 1899 #
Proposal for a regulation
Article 16 – paragraph 1 – point g
(g) take the necessary corrective actions as referred to in Article 21, if the high-risk AI system is not in conformity with the requirements set out in Chapter 2 of this Title;
2022/06/13
Committee: IMCOLIBE
Amendment 1902 #
Proposal for a regulation
Article 16 – paragraph 1 – point j
(j) upon reasoned request of a national competent authority, provide the relevant information and documentation to demonstrate the conformity of the high-risk AI system with the requirements set out in Chapter 2 of this Title.
2022/06/13
Committee: IMCOLIBE
Amendment 1914 #
Proposal for a regulation
Article 17 – paragraph 1 – introductory part
1. Providers of high-risk AI systems shall put a quality management system in place that ensures compliance with this Regulation. That system shall be documented in a systematic and orderly manner in the form of written policies, procedures andor instructions, and shall include at least the following aspects:
2022/06/13
Committee: IMCOLIBE
Amendment 1916 #
Proposal for a regulation
Article 17 – paragraph 1 – point a
(a) a strategy for regulatory compliance, including compliance with conformity assessment procedures and procedures for the management of modifications to the high-risk AI system;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1921 #
Proposal for a regulation
Article 17 – paragraph 1 – point e
(e) technical specifications, including standards, to be applied and, where the relevant harmonised standards are not applied in full, the means to be used to ensure that the high-risk AI system complies with the requirements set out in Chapter 2 of this Title;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1934 #
Proposal for a regulation
Article 17 – paragraph 1 – point j
(j) the handling of communication with national competent authorities, competent authorities, including sectoral ones, providing or supporting the access to data, notified bodies, other operators, customers or other interested parties;
2022/06/13
Committee: IMCOLIBE
Amendment 1935 #
Proposal for a regulation
Article 17 – paragraph 1 – point k
(k) systems and procedures for record keeping of all relevant documentation and information;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1956 #
Proposal for a regulation
Article 20 – paragraph 1
1. Providers of high-risk AI systems shall keep the logs automatically generated by their high-risk AI systems, to the extent such logs are under their control by virtue of a contractual arrangement with the user or otherwise by law. The logs shall be kept for a period that is appropriate in the light of industry standards, the intended purpose of high-risk AI system and applicable legal obligations under Union or national law.
2022/06/13
Committee: IMCOLIBE
Amendment 1965 #
Proposal for a regulation
Article 22 – paragraph 1
Where the high-risk AI system presents a risk within the meaning of Article 65(1) and that risk is known to the provider of the system, that provider shall immediately inform the national competentmarket surveillance authorities of the Member States in which it made the system available and, where applicable, the notified body that issued a certificate for the high-risk AI system, in particular of the non-compliance and of any corrective actions taken.
2022/06/13
Committee: IMCOLIBE
Amendment 1969 #
Proposal for a regulation
Article 23 – paragraph 1
Providers of high-risk AI systems shall, upon request by a national competent authority, provide that authority with all the information and documentation necessary to demonstrate the conformity of the high-risk AI system with the requirements set out in Chapter 2 of this Title, in an official Union language determined by the Member State concerned. Upon a reasoned request from a national competent authority, providers shall also give that authority access to the logs automatically generated by the high-risk AI system, to the extent such logs are under their control by virtue of a contractual arrangement with the user or otherwise by law. Any information submitted in accordance with the provision of this article shall be considered by the national competent authority a trade secret of the company that is submitting such information and kept strictly confidential.
2022/06/13
Committee: IMCOLIBE
Amendment 1977 #
Proposal for a regulation
Article 23 a (new)
Article 23 a Conditions for other persons to be subject to the obligations of a provider 1. Concerning high-risk AI systems any natural or legal person shall be considered a provider for the purposes of this Regulation and shall be subject to the obligations of the provider under Article 16, in any of the following circumstances: (a) they put their name or trademark on a high-risk AI system already placed on the market or put into service, without prejudice to contractual arrangements stipulating that the obligations are allocated otherwise; (b) they make a substantial modification to or modify the intended purpose of a high-risk AI system already placed on the market or put into service; (c) they modify the intended purpose of a non-high-risk AI system already placed on the market or put it to service, in a way which makes the modified system a high-risk AI system; (d) they fulfil the conditions referred to in Article 3a(2). 2. Where the circumstances referred to in paragraph 1 occur, the provider that initially placed the high-risk AI system on the market or put it into service shall no longer be considered a provider for the purposes of this Regulation. The initial provider, subject to the previous sentence, shall upon request and without compromising its own intellectual property rights or trade secrets, provide the new provider referred to in paragraph (1a), (1b) or (1c) with all essential, relevant and reasonably expected information that is necessary to comply with the obligations set out in this Regulation. 3. For high-risk AI systems that are safety components of products to which the legal acts listed in Annex II, section A apply, the manufacturer of those products shall be considered the provider of the high-risk AI system and shall be subject to the obligations referred to in Article 16 under either of the following scenarios: (i) the high-risk AI system is placed on the market together with the product under the name or trademark of the product manufacturer; or (ii) the high-risk AI system is put into service under the name or trademark of the product manufacturer after the product has been placed on the market. 4. Third parties involved in the sale and the supply of software including general purpose application programming interfaces (API), software tools and components, providers who develop and train AI systems on behalf of a deploying company in accordance with their instruction, or providers of network services shall not be considered providers for the purposes of this Regulation.
2022/06/13
Committee: IMCOLIBE
Amendment 1978 #
Proposal for a regulation
Article 24
Obligations of product manufacturers Where a high-risk AI system related to products to which the legal acts listed in Annex II, section A, apply, is placed on the market or put into service together with the product manufactured in accordance with those legal acts and under the name of the product manufacturer, the manufacturer of the product shall take the responsibility of the compliance of the AI system with this Regulation and, as far as the AI system is concerned, have the same obligations imposed by the present Regulation on the provider.
Article 24 deleted
2022/06/13
Committee: IMCOLIBE
Amendment 1981 #
Proposal for a regulation
Article 25 – paragraph 1
1. Prior to making their systems available on the Union market, where an importer cannot be identified, providers established outside the Union shall, by written mandate, appoint an authorised representative which is established in the Union.
2022/06/13
Committee: IMCOLIBE
Amendment 1991 #
Proposal for a regulation
Article 25 – paragraph 2 – point c
(c) cooperate with competent national authorities, upon a reasoned request, on any action the latter takes to reduce and mitigate the risks posed by a high-risk AI system covered by the authorised representative's mandate.
2022/06/13
Committee: IMCOLIBE
Amendment 2011 #
Proposal for a regulation
Article 27 – paragraph 2
2. Where a distributor considers or has reason to consider that a high-risk AI system is not in conformity with the requirements set out in Chapter 2 of this Title, it shall not make the high-risk AI system available on the market until that system has been brought into conformity with those requirements. Furthermore, where the system presents a risk within the meaning of Article 65(1), the distributor shall inform the provider or the importer of the system as well as the market surveillance authorities, as applicable, to that effect.
2022/06/13
Committee: IMCOLIBE
Amendment 2015 #
Proposal for a regulation
Article 27 – paragraph 4
4. A distributor that considers or has reason to consider that a high-risk AI system which it has made available on the market is not in conformity with the requirements set out in Chapter 2 of this Title shall take the corrective actions necessary to bring that system into conformity with those requirements, to withdraw it or recall it or shall ensure that the provider, the importer or any relevant operator, as appropriate, takes those corrective actions. Where the high-risk AI system presents a risk within the meaning of Article 65(1), the distributor shall immediately inform the provider or the importer of the system as well as the national competent authorities of the Member States in which it has made the product available to that effect, giving details, in particular, of the non-compliance and of any corrective actions taken.
2022/06/13
Committee: IMCOLIBE
Amendment 2018 #
Proposal for a regulation
Article 27 – paragraph 5
5. Upon a reasoned request from a national competent authority, distributors of high-risk AI systems shall provide that authority with all the information and documentation necessary to demonstrate the conformity of a high-risk system with the requirements set out in Chapter 2 of this Title. Distributors shall also cooperate with that national competent authority regarding its activities pursuant to paragraphs 1 to 4.
2022/06/13
Committee: IMCOLIBE
Amendment 2024 #
Proposal for a regulation
Article 28
Obligations of distributors, importers, users or any other third-party 1. Any distributor, importer, user or other third-party shall be considered a provider for the purposes of this Regulation and shall be subject to the obligations of the provider under Article 16, in any of the following circumstances: (a) they place on the market or put into service a high-risk AI system under their name or trademark; (b) they modify the intended purpose of a high-risk AI system already placed on the market or put into service; (c) they make a substantial modification to the high-risk AI system. 2. Where the circumstances referred to in paragraph 1, point (b) or (c), occur, the provider that initially placed the high-risk AI system on the market or put it into service shall no longer be considered a provider for the purposes of this Regulation.
Article 28 deleted
2022/06/13
Committee: IMCOLIBE
Amendment 2039 #
Proposal for a regulation
Article 29 – paragraph 1
1. Users of high-risk AI systems shall use such systems and implement human oversight in accordance with the instructions of use accompanying the systems, pursuant to paragraphs 2 and 5 of this article.
2022/06/13
Committee: IMCOLIBE
Amendment 2049 #
Proposal for a regulation
Article 29 – paragraph 3
3. Without prejudice to paragraph 1, to the extent the user exercises control over the input data, that user shall ensure that input data is relevant in view of the intended purpose of the high-risk AI system. To the extent the user exercises control over the high-risk AI system, that user shall also ensure that relevant and appropriate robustness and cybersecurity measures are in place and are regularly adjusted or updated.
2022/06/13
Committee: IMCOLIBE
Amendment 2054 #
Proposal for a regulation
Article 29 – paragraph 4 – introductory part
4. Users shall monitor the operation of the high-risk AI system on the basis of the instructions of use and, when relevant, inform providers in accordance with Article 61. To the extent the user exercises control over the high-risk AI system, the user shall also establish a risk management system in line with Article 9, but limited to the potential adverse effects of using the high-risk AI system and the respective mitigation measures. When they have reasons to consider that the use in accordance with the instructions of use may result in the AI system presenting a risk within the meaning of Article 65(1), they shall inform the provider or distributor and suspend the use of the system. They shall also inform the provider or distributor when they have identified any serious incident or any malfunctioning within the meaning of Article 62 and interrupt the use of the AI system. In case the user is not able to reach the provider, Article 62 shall apply mutatis mutandis.
2022/06/13
Committee: IMCOLIBE
Amendment 2059 #
Proposal for a regulation
Article 29 – paragraph 5 – introductory part
5. Users of high-risk AI systems shall keep the logs automatically generated by that high-risk AI system, to the extent such logs are under their control. The logs shall be kept for a period that is appropriate in the light of industry standards, the intended purpose of the high-risk AI system and applicable legal obligations under Union or national law.
2022/06/13
Committee: IMCOLIBE
Amendment 2064 #
Proposal for a regulation
Article 29 – paragraph 6
6. Users of high-risk AI systems shall use the information provided under Article 13 to comply with their obligation to carry out a data protection impact assessment under Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680, where applicable, and may revert in part to those data protection impact assessments for fulfilling the obligations set out in this article.
2022/06/13
Committee: IMCOLIBE
Amendment 2068 #
Proposal for a regulation
Article 29 – paragraph 6 a (new)
6 a. Where a user of a high risk AI system is obliged pursuant to Regulation (EU) 2016/679 to provide information regarding the use of automated decision making procedures, the user shall not be obliged to provide information on how the AI system reached a specific result. When fulfilling the information obligations under Regulation (EU) 2016/679, the user shall not be obliged to provide information beyond the information he or she received from the provider under Article 13 of this Regulation.
2022/06/13
Committee: IMCOLIBE
Amendment 2076 #
Proposal for a regulation
Article 29 – paragraph 6 b (new)
6 b. The obligations established by this Article shall not apply to users who use the AI system in the course of a personal non-professional activity.
2022/06/13
Committee: IMCOLIBE
Amendment 2133 #
Proposal for a regulation
Article 41 – paragraph 1
1. Where harmonised standards referred to in Article 40 do not exist and are not expected to be published within a reasonable period or where the Commission considers that the relevant harmonised standards are insufficient or that there is a need to address specific safety or fundamental right concerns, the Commission may, by means of implementing acts, adopt common specifications in respect of the requirements set out in Chapter 2 of this Title. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 74(2).
2022/06/13
Committee: IMCOLIBE
Amendment 2138 #
Proposal for a regulation
Article 41 – paragraph 1 a (new)
1 a. When deciding to draft and adopt common specifications, the Commission shall consult the Board, the European standardisation organisations as well as the relevant stakeholders, and duly justify why it decided not to use harmonised standards. The abovementioned organisations shall be regularly consulted while the Commission is in the process of drafting the common specifications.
2022/06/13
Committee: IMCOLIBE
Amendment 2141 #
Proposal for a regulation
Article 41 – paragraph 2
2. The Commission, when preparing the common specifications referred to in paragraph 1, shall gather the views of stakeholders, including SMEs and start- ups, relevant bodies or expert groups established under relevant sectorial Union law.
2022/06/13
Committee: IMCOLIBE
Amendment 2149 #
Proposal for a regulation
Article 41 – paragraph 4
4. Where providers of high-risk AI systems do not comply with the common specifications referred to in paragraph 1, they shall duly justify that they have adopted technical solutions that are at least equivalent thereto.
2022/06/13
Committee: IMCOLIBE
Amendment 2150 #
Proposal for a regulation
Article 41 – paragraph 4 a (new)
4 a. If harmonised standards referred to in Article 40 are developed and the references to them are published in the Official Journal of the European Union in accordance with Regulation (EU) No 1025/2012 in the future, the relevant common specifications shall no longer apply.
2022/06/13
Committee: IMCOLIBE
Amendment 2191 #
Proposal for a regulation
Article 43 – paragraph 4 – introductory part
4. High-risk AI systems that have already been subject to a conformity assessment procedure shall undergo a new conformity assessment procedure whenever they are substantially modified, if the modified system is intended to be further distributed or continues to be used by the current user.
2022/06/13
Committee: IMCOLIBE
Amendment 2193 #
Proposal for a regulation
Article 43 – paragraph 4 – subparagraph 1
For high-risk AI systems that continue to learn after being placed on the market or put into service, changes to the high-risk AI system and its performance that have been pre-determined by the provider at the moment of the initial conformity assessment and are part of the information contained in the technical documentation referred to in point 2(f) of Annex IV, shall not constitute a substantial modification. The same should apply to updates of the AI system for security reasons in general and to protect against evolving threats of manipulation of the system as long as the update does not include significant changes to the functionality of the system.
2022/06/13
Committee: IMCOLIBE
Amendment 2201 #
Proposal for a regulation
Article 43 – paragraph 5
5. After consulting the AI Board referred to in Article 56 and after providing substantial evidence, followed by thorough consultation and the involvement of the affected stakeholders, the Commission is empowered to adopt delegated acts in accordance with Article 73 for the purpose of updating Annexes VI and VII in order to introduce elements of the conformity assessment procedures that become necessary in light of technical progress.
2022/06/13
Committee: IMCOLIBE
Amendment 2208 #
Proposal for a regulation
Article 43 – paragraph 6
6. After consulting the AI Board referred to in Article 56 and after providing substantial evidence, followed by thorough consultation and the involvement of the affected stakeholders, the Commission is empowered to adopt delegated acts to amend paragraphs 1 and 2 in order to subject high-risk AI systems referred to in points 2 to 8 of Annex III to the conformity assessment procedure referred to in Annex VII or parts thereof. The Commission shall adopt such delegated acts taking into account the effectiveness of the conformity assessment procedure based on internal control referred to in Annex VI in preventing or minimizing the risks to health and safety and protection of fundamental rights posed by such systems as well as the availability of adequate capacities and resources among notified bodies.
2022/06/13
Committee: IMCOLIBE
Amendment 2232 #
Proposal for a regulation
Article 49 – paragraph 1
1. The physical CE marking shall be affixed visibly, legibly and indelibly for high-risk AI systems. Where that is not possible or not warranted on account of the nature of the high-risk AI system, it shall be affixed to the packaging or to the accompanying documentation, as appropriate.
2022/06/13
Committee: IMCOLIBE
Amendment 2234 #
Proposal for a regulation
Article 49 – paragraph 1 a (new)
1 a. A digital CE marking may be used instead of or additionally to the physical marking if it can be accessed via the display of the product or via a machine- readable code or other electronic means.
2022/06/13
Committee: IMCOLIBE
Amendment 2241 #
Proposal for a regulation
Article 50 – paragraph 1 – introductory part
The provider shall, for a period ending 5 years after the AI system has been placed on the market or put into service, keep at the disposal of the national competent authorities:
2022/06/13
Committee: IMCOLIBE
Amendment 2244 #
Proposal for a regulation
Article 51 – paragraph 1
Before placing on the market or putting into service a high-risk AI system referred to in Article 6(2) and Article 6a, the provider or, where applicable, the authorised representative shall register that system in the EU database referred to in Article 60.
2022/06/13
Committee: IMCOLIBE
Amendment 2252 #
Proposal for a regulation
Article 51 – paragraph 1 a (new)
Before putting into service or using a high-risk AI system in one of the areas listed in Annex III, users who are public authorities or Union institutions, bodies, offices or agencies or users acting on their behalf shall register in the EU database referred to in Article 60.
2022/06/13
Committee: IMCOLIBE
Amendment 2268 #
Proposal for a regulation
Article 52 – paragraph 2
2. Users of an emotion recognition system or a biometric categorisation system shall inform of the operation of the system the natural persons exposed thereto. This obligation shall not apply to AI systems used for biometric categorisation, which are permitted by law to detect, prevent and investigate criminal offences.
2022/06/13
Committee: IMCOLIBE
Amendment 2271 #
Proposal for a regulation
Article 52 – paragraph 3 – introductory part
3. Users of an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful (‘deep fake’), shall disclose, in an appropriate, clear and visible manner, that the content has been artificially generated or manipulated.
2022/06/13
Committee: IMCOLIBE
Amendment 2278 #
Proposal for a regulation
Article 52 – paragraph 3 – subparagraph 1
However, the first subparagraph shall not apply where the content is part of an obviously artistic, creative or fictional cinematographic work or it is necessary for the exercise of the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU, and subject to appropriate safeguards for the rights and freedoms of third parties.
2022/06/13
Committee: IMCOLIBE
Amendment 2294 #
Proposal for a regulation
Article 53 – paragraph 1
1. AI regulatory sandboxes established by the European Commission, one or more Member States, or other competent entities shall provide a controlled environment that facilitates the development, testing and validation of innovative AI systems for a limited time before their placement on the market or putting into service pursuant to a specific plan. This shall take place in collaboration with and guidance by the European Commission or the competent authorities in order to identify risks to health and safety and fundamental rights, test mitigation measures for identified risks, demonstrate prevention of these risks and otherwise ensure compliance with the requirements of this Regulation and, where relevant, other Union and Member States legislation supervised within the sandbox.
2022/06/13
Committee: IMCOLIBE
Amendment 2309 #
Proposal for a regulation
Article 53 – paragraph 2
2. The European Commission in collaboration with Member States shall ensure that to the extent the innovative AI systems involve the processing of personal data or otherwise fall under the supervisory remit of other national authorities or competent authorities providing or supporting access to data, the national data protection authorities and those other national authorities are associated to the operation of the AI regulatory sandbox.
2022/06/13
Committee: IMCOLIBE
Amendment 2329 #
Proposal for a regulation
Article 53 – paragraph 5
5. The European Commission, Member States’ competent authorities and other entities that have established AI regulatory sandboxes shall coordinate their activities and cooperate within the framework of the Commission’s AI Regulatory Sandboxing programme. The European Commission shall submit annual reports to the European Artificial Intelligence Board on the results from the implementation of that scheme, including good practices, lessons learnt and recommendations on their setup and, where relevant, on the application of this Regulation and other Union legislation supervised within the sandbox.
2022/06/13
Committee: IMCOLIBE
Amendment 2340 #
Proposal for a regulation
Article 53 – paragraph 6 a (new)
6 a. The Commission shall establish an EU AI Regulatory Sandboxing Programme whose modalities referred to in Article 53(6) shall cover the elements set out in Annex IXa. The Commission shall proactively coordinate with national, regional and also local authorities, as relevant.
2022/06/13
Committee: IMCOLIBE
Amendment 2372 #
Proposal for a regulation
Article 55 – title
Measures for SMEs, start-ups and users
2022/06/13
Committee: IMCOLIBE
Amendment 2375 #
Proposal for a regulation
Article 55 – paragraph 1 – point a
(a) provide SMEs and start-ups with priority access to the AI regulatory sandboxes to the extent that they fulfil the eligibility conditions;
2022/06/13
Committee: IMCOLIBE
Amendment 2377 #
Proposal for a regulation
Article 55 – paragraph 1 – point b
(b) organise specific awareness raising activities about the application of this Regulation tailored to the needs of SMEs, start-ups and users;
2022/06/13
Committee: IMCOLIBE
Amendment 2379 #
(c) where appropriate, establish a dedicated channel for communication with SMEs, start-ups, users and other innovators to provide guidance and respond to queries about the implementation of this Regulation.
2022/06/13
Committee: IMCOLIBE
Amendment 2381 #
Proposal for a regulation
Article 55 – paragraph 1 – point c a (new)
(c a) support SMEs' increased participation in the standardisation development process;
2022/06/13
Committee: IMCOLIBE
Amendment 2387 #
Proposal for a regulation
Article 55 – paragraph 2
2. The specific interests and needs of SMEs and start-ups shall be taken into account when setting the fees for conformity assessment under Article 43, reducing those fees proportionately to their size and market size.
2022/06/13
Committee: IMCOLIBE
Amendment 2389 #
Proposal for a regulation
Article 55 a (new)
Article 55 a Promoting research and development of AI in support of socially and environmentally beneficial outcomes Member States shall promote research and development of AI solutions which support socially and environmentally beneficial outcomes, including but not limited to development of AI-based solutions to increase accessibility for persons with disabilities, tackle socio-economic inequalities, and meet sustainability and environmental targets, by: (a) providing relevant projects with priority access to the AI regulatory sandboxes to the extent that they fulfil the eligibility conditions; (b) earmarking public funding, including from relevant EU funds, for AI research and development in support of socially and environmentally beneficial outcomes; (c) organising specific awareness raising activities about the application of this Regulation, the availability of and application procedures for dedicated funding, tailored to the needs of those projects; (d) where appropriate, establishing accessible dedicated channels for communication with projects to provide guidance and respond to queries about the implementation of this Regulation.
2022/06/13
Committee: IMCOLIBE
Amendment 2400 #
Proposal for a regulation
Article 56 – paragraph 1
1. A ‘European Artificial Intelligence Board’ (the ‘Board’) is established as a body of the Union and shall have legal personality.
2022/06/13
Committee: IMCOLIBE
Amendment 2405 #
Proposal for a regulation
Article 56 – paragraph 2 – introductory part
2. The Board shall provide advice and assistance to the Commission and to the national supervisory authorities in order to:
2022/06/13
Committee: IMCOLIBE
Amendment 2408 #
Proposal for a regulation
Article 56 – paragraph 2 – point b
(b) provide guidance and analysis by the Commission and the national supervisory authorities and other competent authorities on emerging issues across the internal market with regard to matters covered by this Regulation;
2022/06/13
Committee: IMCOLIBE
Amendment 2410 #
Proposal for a regulation
Article 56 – paragraph 2 – point c
(c) contribute to the effective and consistent application of this Regulation and assist the national supervisory authorities and the Commission in that regard.
2022/06/13
Committee: IMCOLIBE
Amendment 2414 #
(c a) contribute to the effective cooperation with the competent authorities of third countries and with international organisations.
2022/06/13
Committee: IMCOLIBE
Amendment 2431 #
Proposal for a regulation
Article 57 – paragraph 1
1. The Board shall be composed of the national supervisory authorities, who shall be represented by the head or equivalent high-level official of that authority, the European Data Protection Supervisor, as well as the EU Agency for Fundamental Rights, the EU Agency for Cybersecurity, the Joint Research Centre, the European Committee for Standardization, the European Committee for Electrotechnical Standardization, and the European Telecommunications Standards Institute, each with one representative. Other national authorities may be invited to the meetings, where the issues discussed are of relevance for them.
2022/06/13
Committee: IMCOLIBE
Amendment 2439 #
Proposal for a regulation
Article 57 – paragraph 1 a (new)
1 a. The Board shall act independently when performing its tasks or exercising its powers.
2022/06/13
Committee: IMCOLIBE
Amendment 2464 #
Proposal for a regulation
Article 57 – paragraph 4
4. The Board may invite external experts and observers to attend its meetings, may hold exchanges with interested third parties to inform its activities to an appropriate extent, and may hold consultations with relevant stakeholders and ensure appropriate participation. The Commission may facilitate exchanges between the Board and other Union bodies, offices, agencies and advisory groups.
2022/06/13
Committee: IMCOLIBE
Amendment 2484 #
Proposal for a regulation
Article 58 – paragraph 1 – introductory part
When providing advice and assistance to the Commission and to the national supervisory authorities in the context of Article 56(2), the Board shall in particular:
2022/06/13
Committee: IMCOLIBE
Amendment 2498 #
Proposal for a regulation
Article 58 – paragraph 1 – point b
(b) contribute to uniform administrative practices in the Member States, including for the assessment, establishing, managing (in the sense of fostering cooperation and guaranteeing consistency among regulatory sandboxes) and functioning of regulatory sandboxes referred to in Article 53, Article 54 and Annex IXa;
2022/06/13
Committee: IMCOLIBE
Amendment 2513 #
Proposal for a regulation
Article 58 – paragraph 1 – point c a (new)
(c a) carry out annual reviews and analyses of the complaints sent to and findings made by national competent authorities, of the serious incident reports referred to in Article 62, and of the new registrations in the EU Database referred to in Article 60 to identify trends and potential emerging issues threatening the future health and safety and fundamental rights of citizens that are not adequately addressed by this Regulation;
2022/06/13
Committee: IMCOLIBE
Amendment 2521 #
Proposal for a regulation
Article 58 – paragraph 1 – point c b (new)
(c b) coordinate among national competent authorities; issue guidelines, recommendations and best practices with a view to ensuring the consistent implementation of this Regulation;
2022/06/13
Committee: IMCOLIBE
Amendment 2526 #
Proposal for a regulation
Article 58 – paragraph 1 – point c c (new)
(c c) promote the cooperation and effective bilateral and multilateral exchange of information and best practices between the national supervisory authorities;
2022/06/13
Committee: IMCOLIBE
Amendment 2529 #
Proposal for a regulation
Article 58 – paragraph 1 – point c d (new)
(c d) annually publish recommendations to the Commission, in particular on the categorization of prohibited practices, high-risk systems, and codes of conduct for AI systems that are not classified as high-risk;
2022/06/13
Committee: IMCOLIBE
Amendment 2533 #
Proposal for a regulation
Article 58 – paragraph 1 – point c e (new)
(c e) carry out biannual horizon scanning and foresight exercises to extrapolate the impact the trends and emerging issues can have on the Union;
2022/06/13
Committee: IMCOLIBE
Amendment 2539 #
Proposal for a regulation
Article 58 – paragraph 1 – point c f (new)
(c f) promote public awareness and understanding of the benefits, rules and safeguards and rights in relation to the use of AI systems.
2022/06/13
Committee: IMCOLIBE
Amendment 2572 #
Proposal for a regulation
Article 59 – paragraph 4
4. Member States shall ensure that national competent authorities are provided with adequate financial, technical and human resources to fulfil their tasks under this Regulation. In particular, national competent authorities shall have a sufficient number of personnel permanently available whose competences and expertise shall include an in-depth understanding of artificial intelligence technologies, data and data computing, fundamental rights, health and safety risks and knowledge of existing standards and legal requirements.
2022/06/13
Committee: IMCOLIBE
Amendment 2584 #
Proposal for a regulation
Article 59 – paragraph 6
6. The Commission and the Board shall facilitate the exchange of experience between national competent authorities.
2022/06/13
Committee: IMCOLIBE
Amendment 2589 #
Proposal for a regulation
Article 59 – paragraph 7
7. National competent authorities may provide guidance and advice on the implementation of this Regulation, including to SMEs and start-ups. Whenever national competent authorities intend to provide guidance and advice with regard to an AI system in areas covered by other Union legislation, the competent national authorities under that Union legislation shall be consulted, as appropriate. Member States shall also establish one central contact point for communication with operators and other stakeholders.
2022/06/13
Committee: IMCOLIBE
Amendment 2593 #
Proposal for a regulation
Article 59 – paragraph 8
8. When Union institutions, agencies and bodies fall within the scope of this Regulation, the European Data Protection Supervisor shall act as the competent authority for their supervision and coordination.
2022/06/13
Committee: IMCOLIBE
Amendment 2615 #
Proposal for a regulation
Article 60 – paragraph 1
1. The Commission shall, in collaboration with the Member States, set up and maintain an EU database containing information referred to in paragraph 2 concerning high-risk AI systems in one of the areas listed in Annex III which are registered in accordance with Article 51 and their uses by public authorities and Union institutions, bodies, offices or agencies or on their behalf.
2022/06/13
Committee: IMCOLIBE
Amendment 2627 #
Proposal for a regulation
Article 60 – paragraph 4
4. The EU database shall contain personal data only insofar as necessary for collecting and processing information in accordance with this Regulation. That information shall include the names and contact details of natural persons who are responsible for registering the system and have the legal authority to represent the provider or the user, if the user is a public authority or a Union institution, body, office or agency or a user acting on their behalf.
2022/06/13
Committee: IMCOLIBE
Amendment 2643 #
Proposal for a regulation
Article 61 – paragraph 2
2. The post-market monitoring system shall actively and systematically collect, document and analyse relevant data provided by users or collected through other sources, to the extent such data are readily accessible to the provider and taking into account the limits resulting from data protection, copyright and competition law, on the performance of high- risk AI systems throughout their lifetime, and allow the provider to evaluate the continuous compliance of AI systems with the requirements set out in Title III, Chapter 2.
2022/06/13
Committee: IMCOLIBE
Amendment 2648 #
Proposal for a regulation
Article 61 – paragraph 3
3. The post-market monitoring system shall be based on a post-market monitoring plan. The post-market monitoring plan shall be part of the technical documentation referred to in Annex IV. The Commission shall adopt an implementing act laying down detailed provisions establishing a template for the post-market monitoring plan and the list of elements to be included in the plan by ... [12 months following the entry into force of this Regulation].
2022/06/13
Committee: IMCOLIBE
Amendment 2655 #
Proposal for a regulation
Article 62 – paragraph 1 – introductory part
1. Providers and, where applicable, users of high-risk AI systems placed on the Union market shall report any serious incident or any malfunctioning of those systems which constitutes a breach of obligations under Union law intended to protect fundamental rights to the market surveillance authorities of the Member States where that incident or breach occurred.
2022/06/13
Committee: IMCOLIBE
Amendment 2657 #
Proposal for a regulation
Article 62 – paragraph 1 – subparagraph 1
Such notification shall be made without undue delay after the provider has established a causal link between the AI system and the incident or malfunctioning or the reasonable likelihood of such a link, and, in any event, not later than 72 hours after the provider becomes aware of the serious incident or of the malfunctioning.
2022/06/13
Committee: IMCOLIBE
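The two-part deadline in the amended Article 62(1) (notify without undue delay once a causal link is established, and in any event within 72 hours of becoming aware) can be sketched as a simple check. This is an illustrative reading of the amendment; the function names and the strict-comparison choice are assumptions, not anything defined by the Regulation.

```python
from datetime import datetime, timedelta, timezone

# The amendment replaces "not later than 15 days" with "not later than 72 hours"
REPORTING_WINDOW = timedelta(hours=72)

def reporting_deadline(aware_at: datetime) -> datetime:
    """Latest permissible notification time once the provider becomes aware."""
    return aware_at + REPORTING_WINDOW

def report_is_late(aware_at: datetime, reported_at: datetime) -> bool:
    """True if the notification missed the 72-hour outer limit."""
    return reported_at > reporting_deadline(aware_at)

aware = datetime(2023, 1, 10, 9, 0, tzinfo=timezone.utc)
print(reporting_deadline(aware))  # 2023-01-13 09:00:00+00:00
```

Note that "without undue delay" remains the primary obligation; the 72-hour window is only the outer bound, so a compliant provider may well have to report sooner.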
Amendment 2664 #
Proposal for a regulation
Article 62 – paragraph 1 – subparagraph 1 a (new)
No report under this Article is required if the serious incident also leads to reporting requirements under other laws. In that case, the authorities competent under those laws shall forward the received report to the national competent authority.
2022/06/13
Committee: IMCOLIBE
Amendment 2668 #
Proposal for a regulation
Article 62 – paragraph 2 a (new)
2 a. Upon establishing a causal link between the AI system and the serious incident or malfunctioning or the reasonable likelihood of such a link, providers shall take appropriate corrective actions pursuant to Article 21.
2022/06/13
Committee: IMCOLIBE
Amendment 2673 #
Proposal for a regulation
Article 62 – paragraph 3 a (new)
3 a. National supervisory authorities shall on an annual basis notify the Board of the serious incidents and malfunctioning reported to them in accordance with this Article.
2022/06/13
Committee: IMCOLIBE
Amendment 2674 #
Proposal for a regulation
Article 63 – paragraph 2
2. The national supervisory authority shall report annually to the Commission the outcomes of relevant market surveillance activities. The national supervisory authority shall report, without delay, to the Commission and relevant national competition authorities any information identified in the course of market surveillance activities that may be of potential interest for the application of Union law on competition rules.
2022/06/13
Committee: IMCOLIBE
Amendment 2676 #
Proposal for a regulation
Article 63 – paragraph 3 a (new)
3 a. The procedures referred to in Articles 65, 66, 67 and 68 of this Regulation shall not apply to AI systems related to products, to which legal acts listed in Annex II, section A apply, when such legal acts already provide for procedures having the same objective. In such a case, these sectoral procedures shall apply instead.
2022/06/13
Committee: IMCOLIBE
Amendment 2679 #
Proposal for a regulation
Article 64 – paragraph 1
1. Without prejudice to powers provided under Regulation (EU) 2019/1020, and where relevant and limited to what is necessary to fulfil their tasks, market surveillance authorities may request access to data and documentation that are strictly necessary for the purpose of its request, including, where appropriate and subject to security safeguards, through application programming interfaces (‘API’) or other appropriate technical means and tools enabling remote access.
2022/06/13
Committee: IMCOLIBE
Amendment 2689 #
Proposal for a regulation
Article 64 – paragraph 2
2. Market surveillance authorities shall be granted access to the source code of the high-risk AI system upon a reasoned request and only when the following cumulative conditions are fulfilled: (a) access to the source code is necessary to assess the conformity of a high-risk AI system with the requirements set out in Title III, Chapter 2; and (b) testing/auditing procedures and verifications based on the data and documentation provided by the provider have been exhausted or proved insufficient.
2022/06/13
Committee: IMCOLIBE
Amendment 2709 #
Proposal for a regulation
Article 65 – paragraph 1
1. AI systems presenting a risk shall be understood as a product presenting a risk defined in Article 3, point 19 of Regulation (EU) 2019/1020 insofar as risks to the health or safety or to the protection of fundamental rights of persons are concerned.
2022/06/13
Committee: IMCOLIBE
Amendment 2715 #
Proposal for a regulation
Article 65 – paragraph 2 – introductory part
2. Where the market surveillance authority of a Member State has sufficient reasons to consider that an AI system presents a risk as referred to in paragraph 1, they shall carry out an evaluation of the AI system concerned in respect of its compliance with all the requirements and obligations laid down in this Regulation. When risks to the protection of fundamental rights are present, the market surveillance authority shall also inform the relevant national public authorities or bodies referred to in Article 64(3). The relevant operators shall cooperate as necessary with the market surveillance authorities and the other national public authorities or bodies referred to in Article 64(3).
2022/06/13
Committee: IMCOLIBE
Amendment 2722 #
Proposal for a regulation
Article 65 – paragraph 3
3. Where the market surveillance authority considers that non-compliance is not restricted to its national territory, it shall inform the Commission and the other Member States without undue delay of the results of the evaluation and of the actions which it has required the operator to take.
2022/06/13
Committee: IMCOLIBE
Amendment 2726 #
Proposal for a regulation
Article 65 – paragraph 5
5. Where the operator of an AI system does not take adequate corrective action within the period referred to in paragraph 2, the market surveillance authority shall take all appropriate provisional measures to prohibit or restrict the AI system's being made available on its national market, to withdraw the product from that market or to recall it. That authority shall notify the Commission and the other Member States, without delay, of those measures.
2022/06/13
Committee: IMCOLIBE
Amendment 2727 #
Proposal for a regulation
Article 65 – paragraph 6 – introductory part
6. The notification referred to in paragraph 5 shall include all available details, in particular the information necessary for the identification of the non-compliant AI system, the origin of the AI system, the nature of the non-compliance alleged and the risk involved, the nature and duration of the national measures taken and the arguments put forward by the relevant operator. In particular, the market surveillance authorities shall indicate whether the non-compliance is due to one or more of the following:
2022/06/13
Committee: IMCOLIBE
Amendment 2729 #
Proposal for a regulation
Article 65 – paragraph 6 – point a
(a) a failure of the high-risk AI system to meet requirements set out in Title III, Chapter 2;
2022/06/13
Committee: IMCOLIBE
Amendment 2730 #
Proposal for a regulation
Article 65 – paragraph 6 – point b a (new)
(b a) non-compliance with the prohibition of the artificial intelligence practices referred to in Article 5;
2022/06/13
Committee: IMCOLIBE
Amendment 2731 #
Proposal for a regulation
Article 65 – paragraph 6 – point b b (new)
(b b) non-compliance with provisions set out in Article 52;
2022/06/13
Committee: IMCOLIBE
Amendment 2735 #
Proposal for a regulation
Article 65 – paragraph 8
8. Where, within three months of receipt of the notification referred to in paragraph 5, no objection has been raised by either a Member State or the Commission in respect of a provisional measure taken by a Member State, that measure shall be deemed justified. This is without prejudice to the procedural rights of the concerned operator in accordance with Article 18 of Regulation (EU) 2019/1020. The period referred to in the first sentence of this paragraph shall be reduced to 30 days in the case of non-compliance with the prohibition of the artificial intelligence practices referred to in Article 5.
2022/06/13
Committee: IMCOLIBE
Amendment 2737 #
Proposal for a regulation
Article 65 – paragraph 9
9. The market surveillance authorities of all Member States shall ensure that appropriate restrictive measures are taken in respect of the AI system concerned, such as withdrawal of the product from their market, without delay.
2022/06/13
Committee: IMCOLIBE
Amendment 2739 #
Proposal for a regulation
Article 66 – paragraph 1
1. Where, within three months of receipt of the notification referred to in Article 65(5), or 30 days in the case of non-compliance with the prohibition of the artificial intelligence practices referred to in Article 5, objections are raised by a Member State against a measure taken by another Member State, or where the Commission considers the measure to be contrary to Union law, the Commission shall without delay enter into consultation with the relevant Member State’s market surveillance authority and operator or operators and shall evaluate the national measure. On the basis of the results of that evaluation, the Commission shall decide whether the national measure is justified or not within 9 months, or 60 days in the case of non-compliance with the prohibition of the artificial intelligence practices referred to in Article 5, starting from the notification referred to in Article 65(5) and notify such decision to the Member State concerned. The Commission shall also inform all other Member States of such decision.
2022/06/13
Committee: IMCOLIBE
Amendment 2751 #
Proposal for a regulation
Article 67 – paragraph 1
1. Where, having performed an evaluation under Article 65, the market surveillance authority of a Member State finds that although an AI system is in compliance with this Regulation, it presents a risk to the health or safety of persons or to fundamental rights, it shall require the relevant operator to take all appropriate measures to ensure that the AI system concerned, when placed on the market or put into service, no longer presents that risk, to withdraw the AI system from the market or to recall it within a reasonable period, commensurate with the nature of the risk, as it may prescribe.
2022/06/13
Committee: IMCOLIBE
Amendment 2758 #
Proposal for a regulation
Article 67 – paragraph 4
4. The Commission shall without delay enter into consultation with the Member States concerned and the relevant operator and shall evaluate the national measures taken. On the basis of the results of that evaluation, the Commission shall decide whether the measure is justified or not and, where necessary, propose appropriate measures.
2022/06/13
Committee: IMCOLIBE
Amendment 2762 #
Proposal for a regulation
Article 67 – paragraph 5
5. The Commission shall address its decision to the Member States concerned, and inform all other Member States.
2022/06/13
Committee: IMCOLIBE
Amendment 2769 #
Proposal for a regulation
Article 68 – paragraph 2
2. Where the non-compliance referred to in paragraph 1 persists, the Member State concerned shall take all appropriate and proportionate measures to restrict or prohibit the high-risk AI system being made available on the market or ensure that it is recalled or withdrawn from the market.
2022/06/13
Committee: IMCOLIBE
Amendment 2772 #
Proposal for a regulation
Article 68 a (new)
Article 68 a
Right to lodge a complaint with a supervisory authority
1. Without prejudice to any other administrative or judicial remedy, every natural or legal person shall have the right to lodge a complaint with a supervisory authority, in particular in the Member State of his or her habitual residence, place of work or place of the alleged infringement, if the natural or legal person considers that their health, safety or fundamental rights have been breached by an AI system falling within the scope of this Regulation.
2. Natural or legal persons shall have a right to be heard in the complaint handling procedure and in the context of any investigations conducted by the national supervisory authority as a result of their complaint.
3. The national supervisory authority with which the complaint has been lodged shall inform the complainants about the progress and outcome of their complaint. In particular, the national supervisory authority shall take all the necessary actions to follow up on the complaints it receives and, within three months of the reception of a complaint, give the complainant a preliminary response indicating the measures it intends to take and the next steps in the procedure, if any.
4. The national supervisory authority shall take a decision on the complaint and inform the complainant on the progress and the outcome of the complaint, including the possibility of a judicial remedy pursuant to Article 68b, without delay and no later than six months after the date on which the complaint was lodged.
2022/06/13
Committee: IMCOLIBE
Amendment 2779 #
Proposal for a regulation
Article 68 b (new)
Article 68 b
Right to an effective judicial remedy against a national supervisory authority
1. Without prejudice to any other administrative or non-judicial remedy, each natural or legal person shall have the right to an effective judicial remedy against a legally binding decision of a national supervisory authority concerning them.
2. Without prejudice to any other administrative or non-judicial remedy, each data subject shall have the right to an effective judicial remedy where the national supervisory authority does not handle a complaint, does not inform the complainant on the progress or preliminary outcome of the complaint lodged within three months pursuant to Article 68a(3), or does not comply with its obligation to reach a final decision on the complaint within six months pursuant to Article 68a(4) or with its obligations under Article 65.
3. Proceedings against a supervisory authority shall be brought before the courts of the Member State where the national supervisory authority is established.
2022/06/13
Committee: IMCOLIBE
Amendment 2787 #
Proposal for a regulation
Article 69 – paragraph 1
1. The Commission and the Board shall encourage and facilitate the drawing up of codes of conduct intended to foster the voluntary application to AI systems other than high-risk AI systems of the requirements set out in Title III, Chapter 2 on the basis of technical specifications and solutions that are appropriate means of ensuring compliance with such requirements in light of the intended purpose of the systems.
2022/06/13
Committee: IMCOLIBE
Amendment 2793 #
Proposal for a regulation
Article 69 – paragraph 4
4. The Commission and the Board shall take into account the specific interests and needs of SMEs and start-ups when encouraging and facilitating the drawing up of codes of conduct.
2022/06/13
Committee: IMCOLIBE
Amendment 2796 #
Proposal for a regulation
Article 70 – paragraph 1 – introductory part
1. National competent authorities, notified bodies, the Commission, the Board, and any other natural or legal person involved in the application of this Regulation shall, in accordance with Union or national law, put appropriate technical and organisational measures in place to ensure the confidentiality of information and data obtained in carrying out their tasks and activities in such a manner as to protect, in particular:
2022/06/13
Committee: IMCOLIBE
Amendment 2803 #
Proposal for a regulation
Article 70 – paragraph 1 – point c a (new)
(c a) the principles of purpose limitation and data minimisation, meaning that national competent authorities minimise the quantity of data requested for disclosure in line with what is absolutely necessary for the perceived risk and its assessment, and they must not keep the data for any longer than absolutely necessary;
2022/06/13
Committee: IMCOLIBE
Amendment 2821 #
Proposal for a regulation
Article 71 – paragraph 1
1. In compliance with the terms and conditions laid down in this Regulation, Member States shall lay down the rules on penalties, including administrative fines, applicable to infringements of this Regulation and shall take all measures necessary to ensure that they are properly and effectively implemented. The penalties provided for shall be effective, proportionate, and dissuasive. They shall take into particular account the size and interests of SMEs and start-ups and their economic viability.
2022/06/13
Committee: IMCOLIBE
Amendment 2827 #
Proposal for a regulation
Article 71 – paragraph 2
2. The Member States shall without delay notify the Commission of those rules and of those measures and shall notify it, without delay, of any subsequent amendment affecting them.
2022/06/13
Committee: IMCOLIBE
Amendment 2830 #
Proposal for a regulation
Article 71 – paragraph 3 – introductory part
3. Non-compliance with the prohibition of the artificial intelligence practices referred to in Article 5 shall be subject to administrative fines of up to 20 000 000 EUR or, if the offender is a company, up to 4 % of its total worldwide annual turnover for the preceding financial year, and, in the case of SMEs and start-ups, up to 3 % of its worldwide annual turnover for the preceding financial year, whichever is higher.
2022/06/13
Committee: IMCOLIBE
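The interplay of the fixed ceiling and the turnover-based ceiling ("whichever is higher"), with the reduced percentage for SMEs and start-ups, can be sketched as follows. This is an illustrative reading of the amended Article 71(3) thresholds (EUR 20 million, 4 % in general, 3 % for SMEs and start-ups), not a definitive interpretation; the function name and the treatment of the SME rate are assumptions.

```python
def fine_cap(worldwide_turnover_eur: float, is_sme: bool) -> float:
    """Illustrative upper bound for fines under one reading of Article 71(3)."""
    fixed_cap = 20_000_000  # EUR 20 million fixed ceiling
    rate = 0.03 if is_sme else 0.04  # 3 % for SMEs/start-ups, 4 % otherwise
    # "whichever is higher": the ceiling is the larger of the two amounts
    return max(fixed_cap, rate * worldwide_turnover_eur)

# A company with EUR 1 billion turnover: the 4 % ceiling (EUR 40 million) governs
print(fine_cap(1_000_000_000, is_sme=False))
```

Note that for an SME whose 3 % turnover share falls below EUR 20 million, this literal "whichever is higher" reading leaves the fixed ceiling governing; whether the amendment instead intends the percentage as an SME-specific cap is a question for the legislative text, not this sketch.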
Amendment 2838 #
Proposal for a regulation
Article 71 – paragraph 3 – point a
(a) non-compliance with the prohibition of the artificial intelligence practices referred to in Article 5;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 2840 #
Proposal for a regulation
Article 71 – paragraph 3 – point b
(b) non-compliance of the AI system with the requirements laid down in Article 10.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 2848 #
Proposal for a regulation
Article 71 – paragraph 4
4. The grossly negligent non-compliance by the provider or the user of the AI system with any requirements or obligations under this Regulation, other than those laid down in Articles 5 and 10, shall be subject to administrative fines of up to 10 000 000 EUR or, if the offender is a company, up to 2 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.
2022/06/13
Committee: IMCOLIBE
Amendment 2864 #
Proposal for a regulation
Article 71 – paragraph 6 – point b
(b) whether administrative fines have already been applied by other market surveillance authorities of one or more Member States to the same operator for the same infringement;
2022/06/13
Committee: IMCOLIBE
Amendment 2866 #
Proposal for a regulation
Article 71 – paragraph 6 – point c
(c) the size, the annual turnover and market share of the operator committing the infringement;
2022/06/13
Committee: IMCOLIBE
Amendment 2881 #
Proposal for a regulation
Article 71 – paragraph 8 a (new)
8 a. Administrative fines shall not be applied to a participant in a regulatory sandbox who was acting in line with the recommendation issued by the supervisory authority.
2022/06/13
Committee: IMCOLIBE
Amendment 2962 #
Proposal for a regulation
Article 83 – paragraph 2
2. This Regulation shall apply to the high-risk AI systems, other than the ones referred to in paragraph 1, that have been placed on the market or put into service before [date of application of this Regulation referred to in Article 85(2)], only if, from that date, those systems are subject to substantial modification in their design or intended purpose as defined in Article 3(23).
2022/06/13
Committee: IMCOLIBE
Amendment 2968 #
Proposal for a regulation
Article 84 – paragraph 1
1. The Commission shall assess the need for amendment of the list in Annex III once a year following the entry into force of this Regulation. The findings of that assessment shall be presented to the European Parliament and the Council.
2022/06/13
Committee: IMCOLIBE
Amendment 2972 #
Proposal for a regulation
Article 84 – paragraph 1 a (new)
1 a. The Commission shall assess the need for amendment of the list in Annex III once a year following the entry into force of this Regulation. The findings of that assessment shall be presented to the European Parliament and the Council.
2022/06/13
Committee: IMCOLIBE
Amendment 2974 #
Proposal for a regulation
Article 84 – paragraph 3 – point a
(a) the status of the financial, technical and human resources of the national competent authorities in order to effectively perform the tasks assigned to them under this Regulation;
2022/06/13
Committee: IMCOLIBE
Amendment 3001 #
Proposal for a regulation
Article 85 – paragraph 2
2. This Regulation shall apply from [48 months following the entering into force of the Regulation].
2022/06/13
Committee: IMCOLIBE
Amendment 3007 #
Proposal for a regulation
Article 85 – paragraph 3 a (new)
3 a. Member States shall not, until ... [24 months after the date of application of this Regulation], impede the making available of AI systems and products which were placed on the market in conformity with Union harmonisation legislation before [the date of application of this Regulation].
2022/06/13
Committee: IMCOLIBE
Amendment 3008 #
Proposal for a regulation
Article 85 – paragraph 3 b (new)
3 b. At the latest by six months after entry into force of this Regulation, the European Commission shall submit a standardisation request to the European Standardisation Organisations in order to ensure the timely provision of all relevant harmonised standards that cover the essential requirements of this Regulation. Any delay in submitting the standardisation request shall add to the transitional period of 24 months as stipulated in paragraph 3a.
2022/06/13
Committee: IMCOLIBE
Amendment 3017 #
Proposal for a regulation
Annex I – point b
(b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 3024 #
Proposal for a regulation
Annex I – point c
(c) Statistical approaches, Bayesian estimation, search and optimization methods.deleted
2022/06/13
Committee: IMCOLIBE
Amendment 3045 #
Proposal for a regulation
Annex III – title
CRITICAL AREAS REFERRED TO IN ARTICLE 6(2)
2022/06/13
Committee: IMCOLIBE
Amendment 3059 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a
(a) AI systems intended to be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons, excluding verification/authentication systems whose sole purpose is to confirm that a specific natural person is the person he or she claims to be, and systems that are used to confirm the identity of a natural person for the sole purpose of having access to a service, a device or premises;
2022/06/13
Committee: IMCOLIBE
Amendment 3066 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a a (new)
(a a) AI systems intended to be used to make inferences on the basis of biometric data, including emotion recognition systems, or biometrics-based data, including speech patterns, tone of voice, lip-reading and body language analysis, that produce legal effects or affect the rights and freedoms of natural persons.
2022/06/13
Committee: IMCOLIBE
Amendment 3102 #
Proposal for a regulation
Annex III – paragraph 1 – point 3 – point b
(b) AI systems intended to be used for the purpose of assessing students in educational and vocational training institutions and for assessing participants in tests commonly required for admission to those institutions.
2022/06/13
Committee: IMCOLIBE
Amendment 3108 #
Proposal for a regulation
Annex III – paragraph 1 – point 4 – point a
(a) AI systems intended to make autonomous decisions or materially influence decisions about recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, evaluating candidates in the course of interviews or tests;
2022/06/13
Committee: IMCOLIBE
Amendment 3118 #
Proposal for a regulation
Annex III – paragraph 1 – point 4 – point b
(b) AI intended to make autonomous decisions or materially influence decisions on promotion and termination of work-related contractual relationships, for task allocation and for monitoring and evaluating performance and behaviour of persons in such relationships.
2022/06/13
Committee: IMCOLIBE
Amendment 3136 #
Proposal for a regulation
Annex III – paragraph 1 – point 5 – point b
(b) AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems put into service by SMEs and start-ups for their own use;
2022/06/13
Committee: IMCOLIBE
Amendment 3154 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point a
(a) AI systems intended to be used by law enforcement authorities or on their behalf for making individual risk assessments of natural persons in order to assess the risk of a natural person for offending or reoffending or the risk for potential victims of criminal offences;
2022/06/13
Committee: IMCOLIBE
Amendment 3164 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point b
(b) AI systems intended to be used by law enforcement authorities or on their behalf as polygraphs and similar tools or to detect the emotional state of a natural person;
2022/06/13
Committee: IMCOLIBE
Amendment 3168 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point c
(c) AI systems intended to be used by law enforcement authorities or on their behalf to detect deep fakes as referred to in article 52(3);
2022/06/13
Committee: IMCOLIBE
Amendment 3172 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point d
(d) AI systems intended to be used by law enforcement authorities or on their behalf for evaluation of the reliability of evidence in the course of investigation or prosecution of criminal offences;
2022/06/13
Committee: IMCOLIBE
Amendment 3177 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point e
(e) AI systems intended to be used by law enforcement authorities for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups;deleted
2022/06/13
Committee: IMCOLIBE
Amendment 3184 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point f
(f) AI systems intended to be used by law enforcement authorities or on their behalf for profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 in the course of detection, investigation or prosecution of criminal offences;
2022/06/13
Committee: IMCOLIBE
Amendment 3188 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point g
(g) AI systems intended to be used by law enforcement authorities or on their behalf for crime analytics regarding natural persons, allowing law enforcement authorities to search complex related and unrelated large data sets available in different data sources or in different data formats in order to identify unknown patterns or discover hidden relationships in the data.
2022/06/13
Committee: IMCOLIBE
Amendment 3195 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point a
(a) AI systems intended to be used by competent public authorities or on their behalf as polygraphs and similar tools or to detect the emotional state of a natural person;
2022/06/13
Committee: IMCOLIBE
Amendment 3205 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point b
(b) AI systems intended to be used by competent public authorities or on their behalf to assess a risk, including a security risk, a risk of irregular immigration, or a health risk, posed by a natural person who intends to enter or has entered into the territory of a Member State;
2022/06/13
Committee: IMCOLIBE
Amendment 3207 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point c
(c) AI systems intended to be used by competent public authorities or on their behalf for the verification of the authenticity of travel documents and supporting documentation of natural persons and detect non-authentic documents by checking their security features;
2022/06/13
Committee: IMCOLIBE
Amendment 3214 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point d
(d) AI systems intended to assist competent public authorities or on their behalf for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the eligibility of the natural persons applying for a status.
2022/06/13
Committee: IMCOLIBE
Amendment 3216 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point d
(d) AI systems intended to be used by competent public authorities for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the eligibility of the natural persons applying for a status.
2022/06/13
Committee: IMCOLIBE
Amendment 3233 #
Proposal for a regulation
Annex III – paragraph 1 – point 8 – point a
(a) AI systems intended to be used by judicial authorities or on their behalf in interpreting facts or the law and for applying the law to a concrete set of facts.
2022/06/13
Committee: IMCOLIBE
Amendment 3279 #
Proposal for a regulation
Annex IV – paragraph 1 – point 5
5. A description of relevant changes made by providers to the system through its lifecycle;
2022/06/13
Committee: IMCOLIBE
Amendment 3281 #
Proposal for a regulation
Annex IV – paragraph 1 – point 6
6. A list of the harmonised standards applied in full or in part the references of which have been published in the Official Journal of the European Union; where no such harmonised standards have been applied, a detailed description of the solutions adopted to meet the requirements set out in Title III, Chapter 2, including a list of common specifications or other relevant standards and technical specifications applied;
2022/06/13
Committee: IMCOLIBE
Amendment 3312 #
Proposal for a regulation
Annex IX a (new)
ANNEX IXa: MODALITIES FOR AN EU AI REGULATORY SANDBOXING PROGRAMME
1. The European Commission shall establish the EU AI Regulatory Sandboxing Programme (‘sandboxing programme’) in collaboration with Member States and other competent entities such as regions or universities.
2. The Commission shall play a complementary role, allowing those entities with demonstrated experience with sandboxing to build on their expertise and, on the other hand, assisting and providing technical understanding and resources to those Member States and regions that seek guidance on the set-up of these regulatory sandboxes.
3. Participants in the sandboxing programme, in particular start-ups and SMEs, are granted access to pre-deployment services, such as preliminary registration of their AI system, compliance R&D support services, and to all the other relevant elements of the Union’s AI ecosystem and other Digital Single Market initiatives such as Testing & Experimentation Facilities, Digital Hubs, Centres of Excellence, and EU benchmarking capabilities; and to other value-adding services such as standardisation documents and certification, an online social platform for the community, contact databases, existing portals for tenders and grant making, and lists of EU investors.
4. Foreign providers, in particular start-ups and SMEs, are eligible to take part in the sandboxes to incubate and refine their products in compliance with this Regulation.
5. Individuals such as researchers, entrepreneurs, innovators and other pre-market idea owners are eligible to pre-register into the sandboxing programme to incubate and refine their products in compliance with this Regulation.
6. The sandboxing programme and its benefits shall be available from a single portal established by the European Commission.
7. The sandboxing programme shall develop and manage two types of regulatory sandboxes: Physical Regulatory Sandboxes for AI systems embedded in physical products or services, and Cyber Regulatory Sandboxes for AI systems operated and used on a stand-alone basis, not embedded in physical products or services.
8. The sandboxing programme shall work with the already established Digital Innovation Hubs in Member States to provide a dedicated point of contact for entrepreneurs to raise enquiries with competent authorities and to seek non-binding guidance on the conformity of innovative products, services or business models embedding AI technologies.
9. One of the objectives of the sandboxing programme is to enable firms’ compliance with this Regulation at the design stage of the AI system (‘compliance-by-design’). To do so, the programme shall facilitate the development of software tools and infrastructure for testing, benchmarking, assessing and explaining dimensions of AI systems relevant to sandboxes, such as accuracy, robustness and cybersecurity.
10. The sandboxing programme shall include a RegTech lab to help authorities experiment and develop enforcement tools and protocols for enforcing this Regulation.
11. The sandboxing programme shall be rolled out in a phased fashion, with the various phases launched by the Commission upon success of the previous phase. The sandboxing programme will have a built-in impact assessment procedure to facilitate the review of cost-effectiveness against the agreed-upon objectives. This assessment shall be drafted with input from Member States based on their experiences and shall be included as part of the Annual Report submitted by the Commission to the European Artificial Intelligence Board.
2022/06/13
Committee: IMCOLIBE