
33 Amendments of Marina KALJURAND related to 2021/0106(COD)

Amendment 921 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1
(1) ‘artificial intelligence system’ (AI system) means software that can for example perceive, learn, reason or model based on machine and/or human based inputs, to generate outputs such as content, hypotheses, predictions, recommendations, or decisions influencing the real or virtual environments they interact with;
2022/06/13
Committee: IMCOLIBE
Amendment 1022 #
Proposal for a regulation
Article 3 – paragraph 1 – point 33
(33) ‘biometric data’ means personal data as defined in Article 4, point (14) of Regulation (EU) 2016/679;
2022/06/13
Committee: IMCOLIBE
Amendment 1030 #
Proposal for a regulation
Article 3 – paragraph 1 – point 33 b (new)
(33 b) ‘biometric identification’ means the use of AI systems for the purpose of the automated recognition of physical, physiological, behavioural, and psychological human features such as the face, eye movement, facial expressions, body shape, voice, speech, gait, posture, heart rate, blood pressure, odour, keystrokes, psychological reactions (anger, distress, grief, etc.) for the purpose of verification of an individual’s identity by comparing biometric data of that individual to stored biometric data of individuals in a database (one-to-many identification);
2022/06/13
Committee: IMCOLIBE
Amendment 1112 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 b (new)
(44 b) ‘artificial intelligence system with indeterminate uses’ means an artificial intelligence system without specific and limited provider-defined purposes;
2022/06/13
Committee: IMCOLIBE
Amendment 1225 #
Proposal for a regulation
Article 5 – paragraph 1 – point c a (new)
(c a) the placing on the market, putting into service, or use of AI systems intended to be used as polygraphs and similar tools to detect the emotional state, trustworthiness or related characteristics of a natural person;
2022/06/13
Committee: IMCOLIBE
Amendment 1288 #
Proposal for a regulation
Article 5 – paragraph 1 – point d a (new)
(d a) the creation or expansion of biometric databases through the untargeted or generalised scraping of biometric data from social media profiles or CCTV footage, or equivalent methods;
2022/06/13
Committee: IMCOLIBE
Amendment 1307 #
Proposal for a regulation
Article 5 – paragraph 1 – point d d (new)
(d d) the placing on the market, putting into service or use of an AI system for making predictions, profiles or risk assessments based on data analysis or profiling of natural persons, groups or locations, for the purpose of predicting the occurrence or reoccurrence of an actual or potential criminal offence(s) or other criminalised social behaviour;
2022/06/13
Committee: IMCOLIBE
Amendment 1319 #
Proposal for a regulation
Article 5 – paragraph 1 – point d f (new)
(d f) the placing on the market, putting into service, or use of AI systems that are aimed at automating judicial or similarly intrusive binding decisions by state actors;
2022/06/13
Committee: IMCOLIBE
Amendment 1322 #
Proposal for a regulation
Article 5 – paragraph 1 – point d g (new)
(d g) the placing on the market, putting into service or the use of AI systems by or on behalf of competent authorities in migration, asylum or border control management, to profile an individual or assess a risk, including a security risk, a risk of irregular immigration, or a health risk, posed by a natural person who intends to enter or has entered the territory of a Member State, on the basis of personal or sensitive data, known or predicted, except for the sole purpose of identifying specific care and support needs;
2022/06/13
Committee: IMCOLIBE
Amendment 1447 #
Proposal for a regulation
Article 6 – paragraph 2 a (new)
2 a. An artificial intelligence system with indeterminate uses shall also be considered high risk if so identified per Article 9, paragraph 2, point (a).
2022/06/13
Committee: IMCOLIBE
Amendment 1452 #
Proposal for a regulation
Article 6 – paragraph 2 b (new)
2 b. In addition to the high-risk AI systems referred to in paragraph 1 and paragraph 2, AI systems that create foreseeable high risks when combined shall also be considered high-risk.
2022/06/13
Committee: IMCOLIBE
Amendment 1563 #
Proposal for a regulation
Article 8 – paragraph 2
2. The intended purpose of the high-risk AI system, the foreseeable uses and foreseeable misuses of AI systems with indeterminate uses and the risk management system referred to in Article 9 shall be taken into account when ensuring compliance with those requirements.
2022/06/13
Committee: IMCOLIBE
Amendment 1583 #
Proposal for a regulation
Article 9 – paragraph 2 – point a
(a) identification and analysis of the known and the reasonably foreseeable risks that the high-risk AI system, and AI systems with indeterminate uses, can pose to:
(i) the health or safety of natural persons;
(ii) the legal rights or legal status of natural persons;
(iii) the fundamental rights;
(iv) the equal access to services and opportunities of natural persons;
(v) the Union values enshrined in Article 2 TEU.
2022/06/13
Committee: IMCOLIBE
Amendment 1701 #
Proposal for a regulation
Article 10 – paragraph 2 – point f
(f) examination in view of possible biases, especially where data outputs are used as an input for future operations (‘feedback loops’);
2022/06/13
Committee: IMCOLIBE
Amendment 1729 #
Proposal for a regulation
Article 10 – paragraph 4
4. Data sets shall take into account, to the extent required by the intended purpose, the foreseeable uses and reasonably foreseeable misuses of AI systems with indeterminate uses, the characteristics or elements that are particular to the specific geographical, behavioural or functional setting within which the high-risk AI system is intended to be used.
2022/06/13
Committee: IMCOLIBE
Amendment 1805 #
Proposal for a regulation
Article 13 – paragraph 3 – point b – point v
(v) when appropriate, specifications for the input data, or any other relevant information in terms of the data sets used, including their limitations and assumptions, taking into account the intended purpose, the foreseeable and reasonably foreseeable misuses of the AI system.
2022/06/13
Committee: IMCOLIBE
Amendment 1849 #
Proposal for a regulation
Article 15 – paragraph 1
1. High-risk AI systems shall be designed and developed in such a way that they achieve, in the light of their intended purpose, the foreseeable uses and reasonably foreseeable misuses, an appropriate level of performance (such as accuracy, reliability and true positive rate), robustness and cybersecurity, and perform consistently in those respects throughout their lifecycle.
2022/06/13
Committee: IMCOLIBE
Amendment 1883 #
Proposal for a regulation
Article 16 – paragraph 1 – point a a (new)
(a a) ensure that the performance of their high-risk AI system is measured appropriately, including its level of accuracy, robustness and cybersecurity;
2022/06/13
Committee: IMCOLIBE
Amendment 1886 #
Proposal for a regulation
Article 16 – paragraph 1 – point a b (new)
(a b) provide specifications for the input data, or any other relevant information in terms of the data sets used, including their limitations and assumptions, taking into account the intended purpose and the foreseeable and reasonably foreseeable misuses of the AI system;
2022/06/13
Committee: IMCOLIBE
Amendment 2036 #
Proposal for a regulation
Article 29 – paragraph -1 (new)
-1. Users of high-risk AI systems shall ensure that the natural persons assigned to or entrusted with ensuring human oversight of high-risk AI systems are competent, properly qualified and trained, are free from external influence, and neither seek nor take instructions from anybody. They shall have the necessary resources to ensure the effective supervision of the system in accordance with Article 14.
2022/06/13
Committee: IMCOLIBE
Amendment 2056 #
Proposal for a regulation
Article 29 – paragraph 4 – introductory part
4. Users shall monitor the operation of the high-risk AI system on the basis of the instructions of use. When they have reason to consider that the use in accordance with the instructions of use may result in the AI system presenting a risk within the meaning of Article 65(1), they shall immediately inform the provider or distributor and suspend the use of the system. They shall also immediately inform the provider or distributor when they have identified any serious incident or any malfunctioning, including near misses, within the meaning of Article 62 and interrupt the use of the AI system. In case the user is not able to reach the provider, Article 62 shall apply mutatis mutandis.
2022/06/13
Committee: IMCOLIBE
Amendment 2072 #
Proposal for a regulation
Article 29 – paragraph 6 a (new)
6 a. Users of high-risk AI systems referred to in Annex III that make decisions, or assist in making decisions, related to an affected person shall inform them that they are subject to the use of the high-risk AI system. This information shall include the type of AI system used, its intended purpose and the type of decisions it makes.
2022/06/13
Committee: IMCOLIBE
Amendment 2078 #
Proposal for a regulation
Article 29 a (new)
Article 29 a
Fundamental rights impact assessment for a high-risk AI system
1. Prior to putting a high-risk AI system, as defined in Article 6(2), into use, the user shall conduct an assessment of the system’s impact in the context of use. This assessment shall consist of, but not be limited to, the following elements:
(a) a clear outline of the intended purpose for which the system will be used;
(b) a clear outline of the intended geographic and temporal scope of the system’s use;
(c) verification that the use of the system is compliant with Union and national law;
(d) the categories of natural persons and groups likely to be affected by the use of the system;
(e) the foreseeable direct and indirect impact on fundamental rights of putting the high-risk AI system into use;
(f) any specific risk of harm likely to impact marginalised persons or vulnerable groups;
(g) the foreseeable impact of the use of the system on the environment, including, but not limited to, energy consumption;
(h) any other negative impact on the protection of the values enshrined in Article 2 TEU;
(i) in the case of public authorities, any other impact on democracy, the rule of law and the allocation of public funds; and
(j) a detailed plan on how the risk of harm or the negative direct and indirect impact on fundamental rights identified will be mitigated.
2. If a detailed plan to mitigate the risks outlined in the course of the assessment in paragraph 1 cannot be identified, the user shall refrain from putting the high-risk AI system into use and shall inform the provider, the national supervisory authority and the market surveillance authority without undue delay. Market surveillance authorities or, where relevant, national supervisory authorities, pursuant to their capacity under Articles 65, 67 and 67a, shall take this information into account when investigating systems which present a risk at national level.
3. The obligations under paragraph 1 apply to each new deployment of the high-risk AI system.
4. In the course of the impact assessment, the user shall notify the national supervisory authority, the market surveillance authority and the relevant stakeholders, and shall involve representatives of the foreseeable persons or groups of persons affected by the high-risk AI system, as identified in paragraph 1, including but not limited to equality bodies, consumer protection agencies, social partners and data protection agencies, with a view to receiving input into the impact assessment. The user must allow a period of six weeks for bodies to respond.
5. The user shall publish the results of the impact assessment as part of the registration of use pursuant to their obligation under Article 51(2).
6. Where the user is already required to carry out a data protection impact assessment pursuant to Article 29(6), the impact assessment outlined in paragraph 1 shall be conducted in conjunction with the data protection impact assessment.
2022/06/13
Committee: IMCOLIBE
Amendment 3067 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a a (new)
(a a) AI systems that are or may be used for the detection of a person’s presence in workplaces, in educational settings, and in border surveillance, including in the virtual/online versions of these spaces, on the basis of their biometric or biometrics-based data;
2022/06/13
Committee: IMCOLIBE
Amendment 3075 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a b (new)
(a b) AI systems that are or may be used for monitoring compliance with health and safety measures or inferring alertness/attentiveness for safety purposes, on the basis of biometric or biometrics-based data;
2022/06/13
Committee: IMCOLIBE
Amendment 3080 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a c (new)
(a c) AI systems that are or may be used to diagnose or support diagnosis of medical conditions or medical emergencies on the basis of biometric or biometrics-based data;
2022/06/13
Committee: IMCOLIBE
Amendment 3149 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point a
(a) AI systems intended to be used by law enforcement authorities for making individual risk assessments of natural persons in order to assess the risk of a natural person for offending or reoffending or the risk for potential victims of criminal offences;
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 3160 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point b
(b) AI systems intended to be used by law enforcement authorities as polygraphs and similar tools or to detect the emotional state of a natural person;
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 3178 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point e
(e) AI systems intended to be used by law enforcement authorities for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups;
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 3194 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point a
(a) AI systems intended to be used by competent public authorities as polygraphs and similar tools or to detect the emotional state of a natural person;
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 3197 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point b
(b) AI systems intended to be used by competent public authorities to assess a risk, including a security risk, a risk of irregular immigration, or a health risk, posed by a natural person who intends to enter or has entered into the territory of a Member State;
deleted
2022/06/13
Committee: IMCOLIBE
Amendment 3244 #
Proposal for a regulation
Annex IV – paragraph 1 – point 1 – point a
(a) its intended purpose or reasonably foreseeable use, the person(s) developing the system, the date and the version of the system;
2022/06/13
Committee: IMCOLIBE
Amendment 3251 #
Proposal for a regulation
Annex IV – paragraph 1 – point 1 – point b
(b) how the AI system interacts or can be used to interact with hardware or software, including other AI systems, that are not part of the AI system itself, where applicable;
2022/06/13
Committee: IMCOLIBE