
45 Amendments of Ivo HRISTOV related to 2021/0106(COD)

Amendment 130 #
Proposal for a regulation
Recital 1
(1) The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, marketing and use of trustworthy artificial intelligence in conformity with Union values. This Regulation pursues a number of overriding reasons of public interest, such as a high level of protection of health, safety, the environment, and fundamental rights, and it ensures the free movement of AI-based goods and services cross-border, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation.
2022/03/31
Committee: ITRE
Amendment 135 #
Proposal for a regulation
Recital 2 a (new)
(2a) The deployment of artificial intelligence applications across sectors will only accelerate in the years to come. The European Union should therefore consider, in separate legislation, the creation of an Artificial Intelligence Adjustment Fund, which could be beneficial for Member States to cover the accustoming of their labour markets to the new conditions arising from the rapid mass introduction of artificial intelligence systems that could affect specific job sectors.
2022/03/31
Committee: ITRE
Amendment 168 #
Proposal for a regulation
Recital 16
(16) The placing on the market, putting into service or use of certain AI systems intended to distort human behaviour, whereby physical or psychological harms are likely to occur, should be forbidden. Such AI systems deploy subliminal components individuals cannot perceive, access brain or brain-generated data without consent, or exploit vulnerabilities of children and people due to their age, physical or mental incapacities. They do so with the intention to materially distort the behaviour of a person and in a manner that causes or is likely to cause harm to that or another person. The intention may not be presumed if the distortion of human behaviour results from factors external to the AI system which are outside of the control of the provider or the user. Research for legitimate purposes in relation to such AI systems should not be stifled by the prohibition, if such research does not amount to use of the AI system in human-machine relations that exposes natural persons to harm and such research is carried out in accordance with recognised ethical standards for scientific research.
2022/03/31
Committee: ITRE
Amendment 171 #
Proposal for a regulation
Recital 17
(17) AI systems providing social scoring of natural persons for general purpose by public authorities or on their behalf, assessing the risk of a natural person for offending or reoffending, or categorising persons based on biometrics or biometrics-based data, may lead to discriminatory outcomes and the exclusion of certain groups. They may violate the right to dignity and non-discrimination and the values of equality and justice. Such AI systems evaluate or classify the trustworthiness of natural persons based on their social behaviour in multiple contexts or known or predicted personal or personality characteristics. The social score or risk assessment obtained from such AI systems may lead to the detrimental or unfavourable treatment of natural persons or whole groups thereof in social and legal contexts, which are unrelated to the context in which the data was originally generated or collected, or to a detrimental treatment that is disproportionate or unjustified to the gravity of their social behaviour. Such AI systems should therefore be prohibited.
2022/03/31
Committee: ITRE
Amendment 176 #
Proposal for a regulation
Recital 18
(18) The use of AI systems for ‘real-time’ remote biometric or biometrics-based identification of natural persons in publicly accessible spaces for the purpose of law enforcement is considered particularly intrusive in the rights and freedoms of the concerned persons, to the extent that it may affect the private life of a large part of the population, evoke a feeling of constant surveillance and indirectly dissuade the exercise of the freedom of assembly and other fundamental rights. In addition, the immediacy of the impact and the limited opportunities for further checks or corrections in relation to the use of such systems operating in ‘real-time’ carry heightened risks for the rights and freedoms of the persons that are concerned by law enforcement activities.
2022/03/31
Committee: ITRE
Amendment 177 #
Proposal for a regulation
Recital 19
(19) The use of those systems for the purpose of law enforcement should therefore be prohibited, except in three exhaustively listed and narrowly defined situations, where the use is strictly necessary to achieve a substantial public interest, the importance of which outweighs the risks. Those situations involve the search for potential victims of crime, including missing children; certain threats to the life or physical safety of natural persons or of a terrorist attack; and the detection, localisation, identification or prosecution of perpetrators or suspects of the criminal offences referred to in Council Framework Decision 2002/584/JHA38 if those criminal offences are punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years and as they are defined in the law of that Member State. Such threshold for the custodial sentence or detention order in accordance with national law contributes to ensure that the offence should be serious enough to potentially justify the use of ‘real-time’ remote biometric identification systems. Moreover, of the 32 criminal offences listed in the Council Framework Decision 2002/584/JHA, some are in practice likely to be more relevant than others, in that the recourse to ‘real-time’ remote biometric identification will foreseeably be necessary and proportionate to highly varying degrees for the practical pursuit of the detection, localisation, identification or prosecution of a perpetrator or suspect of the different criminal offences listed and having regard to the likely differences in the seriousness, probability and scale of the harm or possible negative consequences. _________________ 38 Council Framework Decision 2002/584/JHA of 13 June 2002 on the European arrest warrant and the surrender procedures between Member States (OJ L 190, 18.7.2002, p. 1).
Biometric or biometrics-based systems that can be used for monitoring large numbers of persons, be it in public or private spaces, should therefore be prohibited.
2022/03/31
Committee: ITRE
Amendment 181 #
Proposal for a regulation
Recital 20
(20) In order to ensure that those systems are used in a responsible and proportionate manner, it is also important to establish that, in each of those three exhaustively listed and narrowly defined situations, certain elements should be taken into account, in particular as regards the nature of the situation giving rise to the request and the consequences of the use for the rights and freedoms of all persons concerned and the safeguards and conditions provided for with the use. In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement should be subject to appropriate limits in time and space, having regard in particular to the evidence or indications regarding the threats, the victims or perpetrator. The reference database of persons should be appropriate for each use case in each of the three situations mentioned above.
deleted
2022/03/31
Committee: ITRE
Amendment 184 #
Proposal for a regulation
Recital 21
(21) Each use of a ‘real-time’ remote biometric identification system in publicly accessible spaces for the purpose of law enforcement should be subject to an express and specific authorisation by a judicial authority or by an independent administrative authority of a Member State. Such authorisation should in principle be obtained prior to the use, except in duly justified situations of urgency, that is, situations where the need to use the systems in question is such as to make it effectively and objectively impossible to obtain an authorisation before commencing the use. In such situations of urgency, the use should be restricted to the absolute minimum necessary and be subject to appropriate safeguards and conditions, as determined in national law and specified in the context of each individual urgent use case by the law enforcement authority itself. In addition, the law enforcement authority should in such situations seek to obtain an authorisation as soon as possible, whilst providing the reasons for not having been able to request it earlier.
deleted
2022/03/31
Committee: ITRE
Amendment 186 #
Proposal for a regulation
Recital 22
(22) Furthermore, it is appropriate to provide, within the exhaustive framework set by this Regulation that such use in the territory of a Member State in accordance with this Regulation should only be possible where and in as far as the Member State in question has decided to expressly provide for the possibility to authorise such use in its detailed rules of national law. Consequently, Member States remain free under this Regulation not to provide for such a possibility at all or to only provide for such a possibility in respect of some of the objectives capable of justifying authorised use identified in this Regulation.
deleted
2022/03/31
Committee: ITRE
Amendment 187 #
Proposal for a regulation
Recital 23
(23) The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement necessarily involves the processing of biometric data. The rules of this Regulation that prohibit, subject to certain exceptions, such use, which are based on Article 16 TFEU, should apply as lex specialis in respect of the rules on the processing of biometric data contained in Article 10 of Directive (EU) 2016/680, thus regulating such use and the processing of biometric data involved in an exhaustive manner. Therefore, such use and processing should only be possible in as far as it is compatible with the framework set by this Regulation, without there being scope, outside that framework, for the competent authorities, where they act for purpose of law enforcement, to use such systems and process such data in connection thereto on the grounds listed in Article 10 of Directive (EU) 2016/680. In this context, this Regulation is not intended to provide the legal basis for the processing of personal data under Article 8 of Directive 2016/680. However, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for purposes other than law enforcement, including by competent authorities, should not be covered by the specific framework regarding such use for the purpose of law enforcement set by this Regulation. Such use for purposes other than law enforcement should therefore not be subject to the requirement of an authorisation under this Regulation and the applicable detailed rules of national law that may give effect to it.
deleted
2022/03/31
Committee: ITRE
Amendment 189 #
Proposal for a regulation
Recital 24
(24) Any processing of biometric data and other personal data involved in the use of AI systems for biometric identification, other than in connection to the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement as regulated by this Regulation, including where those systems are used by competent authorities in publicly accessible spaces for other purposes than law enforcement, should continue to comply with all requirements resulting from Article 9(1) of Regulation (EU) 2016/679, Article 10(1) of Regulation (EU) 2018/1725 and Article 10 of Directive (EU) 2016/680, as applicable.
2022/03/31
Committee: ITRE
Amendment 234 #
Proposal for a regulation
Recital 70
(70) Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems. In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. Moreover, natural persons should be notified when they are exposed to an emotion recognition system or a biometric categorisation system. Such information and notifications should be provided in accessible formats for persons with disabilities. Further, users, who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a person to be authentic, should disclose that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin.
2022/03/31
Committee: ITRE
Amendment 249 #
Proposal for a regulation
Recital 76
(76) In order to facilitate a smooth, effective and harmonised implementation of this Regulation a European Artificial Intelligence Board should be established. The Board should be responsible for a number of advisory tasks, including issuing opinions, recommendations, advice or guidance on matters related to the implementation of this Regulation, including on technical specifications or existing standards regarding the requirements established in this Regulation and providing advice to and assisting the Commission on specific questions related to artificial intelligence. The Board should work towards establishing a European Regulatory Agency for Artificial Intelligence in line with the provisions of Article 56(3).
2022/03/31
Committee: ITRE
Amendment 265 #
Proposal for a regulation
Article 2 – paragraph 5 a (new)
5a. This Regulation shall facilitate the exchange of data used solely for academic and scientific endeavours in a safe scientific space.
2022/03/31
Committee: ITRE
Amendment 273 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1
(1) ‘artificial intelligence system’ (AI system) means any software or machine-based system that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems can be designed with varying levels of autonomy.
2022/03/31
Committee: ITRE
Amendment 293 #
Proposal for a regulation
Article 3 – paragraph 1 – point 33 a (new)
(33a) ‘biometrics-based data’ means personal data resulting from specific technical processing related to physical, physiological or behavioural signals or characteristics of a natural person, such as facial expressions, movements, pulse frequency, voice, keystrokes or gait, which may or may not allow or confirm the unique identification of a natural person;
2022/03/31
Committee: ITRE
Amendment 294 #
Proposal for a regulation
Article 3 – paragraph 1 – point 34
(34) ‘emotion recognition system’ means an AI system for the purpose of identifying or inferring emotions, thoughts, memories, intentions, or other mental states of natural persons on the basis of their biometric or biometrics-based data;
2022/03/31
Committee: ITRE
Amendment 295 #
Proposal for a regulation
Article 3 – paragraph 1 – point 34
(34) ‘emotion recognition system’ means an AI system for the purpose of identifying or inferring emotions, thoughts, or intentions of natural persons on the basis of their biometric or biometrics-based data;
2022/03/31
Committee: ITRE
Amendment 296 #
Proposal for a regulation
Article 3 – paragraph 1 – point 35
(35) ‘biometric categorisation system’ means an AI system for the purpose of assigning natural persons to specific categories, such as sex, age, hair colour, eye colour, tattoos, ethnic origin, health, mental ability, behavioural traits, or sexual or political orientation, on the basis of their biometric or biometrics-based data;
2022/03/31
Committee: ITRE
Amendment 298 #
Proposal for a regulation
Article 3 – paragraph 1 – point 35 a (new)
(35a) ‘biometric inferences’ means conclusions with regard to permanent or long-term physical, physiological, or behavioural characteristics of a natural person, on the basis of biometrics, biometrics-based data, or other personal data;
2022/03/31
Committee: ITRE
Amendment 326 #
Proposal for a regulation
Article 5 – paragraph 1 – point c – introductory part
(c) the placing on the market, putting into service or use of AI systems by public authorities or on their behalf for the evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social behaviour or known or predicted personal or personality characteristics, with the social score leading to either or both of the following:
2022/03/31
Committee: ITRE
Amendment 327 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – introductory part
(d) the putting into service or use of biometric or biometrics-based identification systems that allow the comprehensive or large-scale surveillance of natural persons in any context, including surveillance in the workplace.
2022/03/31
Committee: ITRE
Amendment 329 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point i
(i) the targeted search for specific potential victims of crime, including missing children;
deleted
2022/03/31
Committee: ITRE
Amendment 330 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point ii
(ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack;
deleted
2022/03/31
Committee: ITRE
Amendment 331 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point iii
(iii) the detection, localisation, identification or prosecution of a perpetrator or suspect of a criminal offence referred to in Article 2(2) of Council Framework Decision 2002/584/JHA62 and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years, as determined by the law of that Member State. _________________ 62 Council Framework Decision 2002/584/JHA of 13 June 2002 on the European arrest warrant and the surrender procedures between Member States (OJ L 190, 18.7.2002, p. 1).
deleted
2022/03/31
Committee: ITRE
Amendment 332 #
Proposal for a regulation
Article 5 – paragraph 1 – point d a (new)
(da) the placing on the market, putting into service, or use of an AI system for the specific technical processing of brain or brain-generated data in order to access, infer, influence, or manipulate a person's thoughts, emotions, memories, intentions, beliefs, or other mental states against that person's will or in a manner that causes or is likely to cause that person or another person physical or psychological harm;
2022/03/31
Committee: ITRE
Amendment 334 #
Proposal for a regulation
Article 5 – paragraph 1 – point d b (new)
(db) the putting into service or use of AI systems that allow the categorisation of individuals on the basis of their biometric, biometrics-based data, or biometric inferences into clusters according to ethnicity, gender, political or sexual orientation, or any other grounds that may lead to discrimination prohibited under Article 21 of the Charter of Fundamental Rights of the European Union;
2022/03/31
Committee: ITRE
Amendment 336 #
Proposal for a regulation
Article 5 – paragraph 1 – point d c (new)
(dc) the putting into service or use of AI systems for making individual or group assessments of natural persons in order to assess the risk of a natural person or a group of persons for offending or reoffending, or for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons or on the assessment of personality traits, characteristics, or past criminal behaviour.
2022/03/31
Committee: ITRE
Amendment 341 #
Proposal for a regulation
Article 5 – paragraph 2
2. The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement for any of the objectives referred to in paragraph 1 point d) shall take into account the following elements:
(a) the nature of the situation giving rise to the possible use, in particular the seriousness, probability and scale of the harm caused in the absence of the use of the system;
(b) the consequences of the use of the system for the rights and freedoms of all persons concerned, in particular the seriousness, probability and scale of those consequences.
In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement for any of the objectives referred to in paragraph 1 point d) shall comply with necessary and proportionate safeguards and conditions in relation to the use, in particular as regards the temporal, geographic and personal limitations.
deleted
2022/03/31
Committee: ITRE
Amendment 342 #
Proposal for a regulation
Article 5 – paragraph 2 – point a
(a) the nature of the situation giving rise to the possible use, in particular the seriousness, probability and scale of the harm caused in the absence of the use of the system;
deleted
2022/03/31
Committee: ITRE
Amendment 343 #
Proposal for a regulation
Article 5 – paragraph 2 – point b
(b) the consequences of the use of the system for the rights and freedoms of all persons concerned, in particular the seriousness, probability and scale of those consequences.
deleted
2022/03/31
Committee: ITRE
Amendment 345 #
Proposal for a regulation
Article 5 – paragraph 2 – subparagraph 1
In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement for any of the objectives referred to in paragraph 1 point d) shall comply with necessary and proportionate safeguards and conditions in relation to the use, in particular as regards the temporal, geographic and personal limitations.
deleted
2022/03/31
Committee: ITRE
Amendment 346 #
Proposal for a regulation
Article 5 – paragraph 3
3. As regards paragraphs 1, point (d) and 2, each individual use for the purpose of law enforcement of a ‘real-time’ remote biometric identification system in publicly accessible spaces shall be subject to a prior authorisation granted by a judicial authority or by an independent administrative authority of the Member State in which the use is to take place, issued upon a reasoned request and in accordance with the detailed rules of national law referred to in paragraph 4. However, in a duly justified situation of urgency, the use of the system may be commenced without an authorisation and the authorisation may be requested only during or after the use. The competent judicial or administrative authority shall only grant the authorisation where it is satisfied, based on objective evidence or clear indications presented to it, that the use of the ‘real-time’ remote biometric identification system at issue is necessary for and proportionate to achieving one of the objectives specified in paragraph 1, point (d), as identified in the request. In deciding on the request, the competent judicial or administrative authority shall take into account the elements referred to in paragraph 2.
deleted
2022/03/31
Committee: ITRE
Amendment 348 #
Proposal for a regulation
Article 5 – paragraph 3 – subparagraph 1
The competent judicial or administrative authority shall only grant the authorisation where it is satisfied, based on objective evidence or clear indications presented to it, that the use of the ‘real-time’ remote biometric identification system at issue is necessary for and proportionate to achieving one of the objectives specified in paragraph 1, point (d), as identified in the request. In deciding on the request, the competent judicial or administrative authority shall take into account the elements referred to in paragraph 2.
deleted
2022/03/31
Committee: ITRE
Amendment 349 #
Proposal for a regulation
Article 5 – paragraph 4
4. A Member State may decide to provide for the possibility to fully or partially authorise the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement within the limits and under the conditions listed in paragraphs 1, point (d), 2 and 3. That Member State shall lay down in its national law the necessary detailed rules for the request, issuance and exercise of, as well as supervision relating to, the authorisations referred to in paragraph 3. Those rules shall also specify in respect of which of the objectives listed in paragraph 1, point (d), including which of the criminal offences referred to in point (iii) thereof, the competent authorities may be authorised to use those systems for the purpose of law enforcement.
deleted
2022/03/31
Committee: ITRE
Amendment 350 #
Proposal for a regulation
Article 5 – paragraph 4 a (new)
4a. The Commission is empowered to adopt delegated acts in accordance with Article 73 to amend the list of prohibited practices listed in paragraph 1 of this Article, in order to update that list on the basis of a similar threat to fundamental human rights and values.
2022/03/31
Committee: ITRE
Amendment 406 #
Proposal for a regulation
Article 10 – paragraph 2 – point a
(a) the relevant design choices; all appliances should be designed with the option to disable the constantly open microphone/camera of apps and to offer consumers a clear option for all recording features to be shut down when the corresponding app is not in use;
2022/03/31
Committee: ITRE
Amendment 425 #
Proposal for a regulation
Article 10 – paragraph 3
3. Training, validation and testing data sets shall be sufficiently relevant, representative, free of errors and complete. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used. These characteristics of the data sets may be met at the level of individual data sets or a combination thereof.
2022/03/31
Committee: ITRE
Amendment 429 #
Proposal for a regulation
Article 10 – paragraph 6 a (new)
6a. This Regulation shall guarantee the protection of citizens who choose to lead an "offline life" and ensure that there are always offline options and services available for them, especially when this concerns the provision of essential private and public services.
2022/03/31
Committee: ITRE
Amendment 439 #
Proposal for a regulation
Article 14 – paragraph 1
1. High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the whole lifecycle of the AI system. AI systems shall not be used to substitute, but rather to complement, human decision-making. All AI systems shall be explainable by design.
2022/03/31
Committee: ITRE
Amendment 441 #
Proposal for a regulation
Article 14 – paragraph 2
2. Human oversight shall aim at preventing or minimising the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, in particular when such risks can affect the wellbeing, health or physical integrity of children and minors, or persist notwithstanding the application of other requirements set out in this Chapter. Special attention shall be paid to AI systems used in the development of children's toys or as components thereof.
2022/03/31
Committee: ITRE
Amendment 469 #
Proposal for a regulation
Article 15 – paragraph 4 – subparagraph 2
The technical solutions to address AI specific vulnerabilities shall include, where appropriate, measures to prevent and control for attacks trying to manipulate the training dataset (‘data poisoning’), inputs designed to cause the model to make a mistake (‘adversarial examples’), or model flaws which could lead to harmful decision-making.
2022/03/31
Committee: ITRE
Amendment 595 #
Proposal for a regulation
Article 57 – paragraph 1
1. The Board shall be composed of the national supervisory authorities, who shall be represented by the head or equivalent high-level official of that authority, and the European Data Protection Supervisor. The European Union Agency for Fundamental Rights will have the status of observer in the Board. Other national authorities, as well as representatives of small and medium-sized enterprises and startups, may be invited to the meetings, where the issues discussed are of relevance for them.
2022/03/31
Committee: ITRE
Amendment 604 #
Proposal for a regulation
Article 58 – paragraph 1 – point c a (new)
(ca) work towards establishing an independent and well-resourced European Regulatory Agency for Artificial Intelligence within the first two years after the entry into force of this Regulation. Among its tasks, said agency will:
(a) ensure the enforcement of this Regulation and advise and propose amendments to the European Commission when the need arises to update any of its articles, including the list of prohibited artificial intelligence practices (Article 5), the classification rules for high-risk AI systems (Article 6), or any of the annexes accompanying this Regulation;
(b) establish a risk assessment matrix for classifying algorithm types and application domains according to their potential negative impact on health, safety, the environment, or fundamental rights;
(c) collaborate with and advise other regulatory agencies and national regulators regarding artificial intelligence systems as they apply to the remit of those agencies (e.g. on data protection or on the use of artificial intelligence systems by law enforcement or judicial agencies);
(d) facilitate the effectiveness of the tort liability mechanism as a means for regulating accountability of artificial intelligence systems by providing a contact point for citizens who are not familiar with legal procedures;
(e) audit the algorithmic impact assessments of high-risk AI systems defined in Article 6(2) and Annex III, and approve or reject the proposed uses of algorithmic decision-making in highly sensitive or safety-critical application domains (e.g. private health-care);
(f) investigate suspected cases of human rights violations by algorithmic decision-making systems, in both individual decision instances (e.g. singular aberrant outcomes) and statistical decision patterns (e.g. discriminatory bias);
(g) produce the necessary guidelines to support the harmonised implementation of this Regulation, particularly on the establishment and operation of AI regulatory sandboxes and on the obligations of stakeholders along the AI value chain (e.g. providers, importers, and users).
2022/03/31
Committee: ITRE
Amendment 644 #
Proposal for a regulation
Annex III – paragraph 1 – point 5 – point b
(b) AI systems intended to be used to evaluate the creditworthiness of natural persons, establish their credit score, or predict medical human conditions and health-related outcomes, with the exception of AI systems put into service by small scale providers for their own use;
2022/03/31
Committee: ITRE