33 Amendments of Marina KALJURAND related to 2021/0106(COD)
Amendment 921 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1
(1) ‘artificial intelligence system’ (AI system) means software that can, for example, perceive, learn, reason or model based on machine and/or human based inputs, to generate outputs such as content, hypotheses, predictions, recommendations, or decisions influencing the real or virtual environments they interact with;
Amendment 1022 #
Proposal for a regulation
Article 3 – paragraph 1 – point 33
(33) ‘biometric data’ means personal data as defined in Article 4, point (14) of Regulation (EU) 2016/679;
Amendment 1030 #
Proposal for a regulation
Article 3 – paragraph 1 – point 33 b (new)
(33 b) ‘biometric identification’ means the use of AI systems for the purpose of the automated recognition of physical, physiological, behavioural, and psychological human features such as the face, eye movement, facial expressions, body shape, voice, speech, gait, posture, heart rate, blood pressure, odour, keystrokes, psychological reactions (anger, distress, grief, etc.) for the purpose of establishing an individual’s identity by comparing biometric data of that individual to stored biometric data of individuals in a database (one-to-many identification);
Amendment 1112 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 b (new)
(44 b) ‘artificial intelligence system with indeterminate uses’ means an artificial intelligence system without specific and limited provider-defined purposes;
Amendment 1225 #
Proposal for a regulation
Article 5 – paragraph 1 – point c a (new)
(c a) the placing on the market, putting into service, or use of AI systems intended to be used as polygraphs and similar tools to detect the emotional state, trustworthiness or related characteristics of a natural person;
Amendment 1288 #
Proposal for a regulation
Article 5 – paragraph 1 – point d a (new)
(d a) the creation or expansion of biometric databases through the untargeted or generalised scraping of biometric data from social media profiles or CCTV footage, or equivalent methods;
Amendment 1307 #
Proposal for a regulation
Article 5 – paragraph 1 – point d d (new)
(d d) the placing on the market, putting into service or use of an AI system for making predictions, profiles or risk assessments based on data analysis or profiling of natural persons, groups or locations, for the purpose of predicting the occurrence or reoccurrence of an actual or potential criminal offence(s) or other criminalised social behaviour;
Amendment 1319 #
Proposal for a regulation
Article 5 – paragraph 1 – point d f (new)
(d f) the placing on the market, putting into service, or use of AI systems that are aimed at automating judicial or similarly intrusive binding decisions by state actors;
Amendment 1322 #
Proposal for a regulation
Article 5 – paragraph 1 – point d g (new)
(d g) the placing on the market, putting into service or the use of AI systems by or on behalf of competent authorities in migration, asylum or border control management, to profile an individual or assess a risk, including a security risk, a risk of irregular immigration, or a health risk, posed by a natural person who intends to enter or has entered the territory of a Member State, on the basis of personal or sensitive data, known or predicted, except for the sole purpose of identifying specific care and support needs;
Amendment 1447 #
Proposal for a regulation
Article 6 – paragraph 2 a (new)
2 a. An artificial intelligence system with indeterminate uses shall also be considered high-risk if so identified pursuant to Article 9, paragraph 2, point (a).
Amendment 1452 #
Proposal for a regulation
Article 6 – paragraph 2 b (new)
2 b. In addition to the high-risk AI systems referred to in paragraph 1 and paragraph 2, AI systems that create foreseeable high risks when combined shall also be considered high-risk.
Amendment 1563 #
Proposal for a regulation
Article 8 – paragraph 2
2. The intended purpose of the high-risk AI system, the foreseeable uses and foreseeable misuses of AI systems with indeterminate uses, and the risk management system referred to in Article 9 shall be taken into account when ensuring compliance with those requirements.
Amendment 1583 #
Proposal for a regulation
Article 9 – paragraph 2 – point a
(a) identification and analysis of the known and the reasonably foreseeable risks that the high-risk AI system, and AI systems with indeterminate uses, can pose to:
(i) the health or safety of natural persons;
(ii) the legal rights or legal status of natural persons;
(iii) fundamental rights;
(iv) the equal access to services and opportunities of natural persons;
(v) the Union values enshrined in Article 2 TEU.
Amendment 1701 #
Proposal for a regulation
Article 10 – paragraph 2 – point f
(f) examination in view of possible biases, especially where data outputs are used as an input for future operations (‘feedback loops’);
Amendment 1729 #
Proposal for a regulation
Article 10 – paragraph 4
4. Data sets shall take into account, to the extent required by the intended purpose, the foreseeable uses and reasonably foreseeable misuses of AI systems with indeterminate uses, the characteristics or elements that are particular to the specific geographical, behavioural or functional setting within which the high-risk AI system is intended to be used.
Amendment 1805 #
Proposal for a regulation
Article 13 – paragraph 3 – point b – point v
(v) when appropriate, specifications for the input data, or any other relevant information in terms of the data sets used, including their limitations and assumptions, taking into account the intended purpose, the foreseeable uses and reasonably foreseeable misuses of the AI system.
Amendment 1849 #
Proposal for a regulation
Article 15 – paragraph 1
1. High-risk AI systems shall be designed and developed in such a way that they achieve, in the light of their intended purpose, the foreseeable uses and reasonably foreseeable misuses, an appropriate level of performance (such as accuracy, reliability and true positive rate), robustness and cybersecurity, and perform consistently in those respects throughout their lifecycle.
Amendment 1883 #
Proposal for a regulation
Article 16 – paragraph 1 – point a a (new)
Amendment 1886 #
Proposal for a regulation
Article 16 – paragraph 1 – point a b (new)
(a b) provide specifications for the input data, or any other relevant information in terms of the data sets used, including their limitations and assumptions, taking into account the intended purpose and the foreseeable uses and reasonably foreseeable misuses of the AI system;
Amendment 2036 #
Proposal for a regulation
Article 29 – paragraph -1 (new)
-1. Users of high-risk AI systems shall ensure that natural persons assigned to ensure or entrusted with human oversight for high-risk AI systems are competent, properly qualified and trained, free from external influence and neither seek nor take instructions from anybody. They shall have the necessary resources in order to ensure the effective supervision of the system in accordance with Article 14.
Amendment 2056 #
Proposal for a regulation
Article 29 – paragraph 4 – introductory part
4. Users shall monitor the operation of the high-risk AI system on the basis of the instructions of use. When they have reasons to consider that the use in accordance with the instructions of use may result in the AI system presenting a risk within the meaning of Article 65(1) they shall immediately inform the provider or distributor and suspend the use of the system. They shall also immediately inform the provider or distributor when they have identified any serious incident or any malfunctioning, including near misses, within the meaning of Article 62 and interrupt the use of the AI system. In case the user is not able to reach the provider, Article 62 shall apply mutatis mutandis.
Amendment 2072 #
Proposal for a regulation
Article 29 – paragraph 6 a (new)
6 a. Users of high-risk AI systems referred to in Annex III that make decisions or assist in making decisions related to an affected person, shall inform them that they are subject to the use of the high-risk AI system. This information shall include the type of the AI system used, its intended purpose and the type of decisions it makes.
Amendment 2078 #
Proposal for a regulation
Article 29 a (new)
Amendment 3067 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a a (new)
Amendment 3075 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a b (new)
(a b) AI systems that are or may be used for monitoring compliance with health and safety measures or inferring alertness/attentiveness for safety purposes, on the basis of biometric or biometrics-based data;
Amendment 3080 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a c (new)
(a c) AI systems that are or may be used to diagnose or support diagnosis of medical conditions or medical emergencies on the basis of biometric or biometrics-based data;
Amendment 3149 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point a
Amendment 3160 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point b
Amendment 3178 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point e
Amendment 3194 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point a
Amendment 3197 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point b
Amendment 3244 #
Proposal for a regulation
Annex IV – paragraph 1 – point 1 – point a
(a) its intended purpose or reasonably foreseeable use, the person(s) developing the system, the date and the version of the system;
Amendment 3251 #
Proposal for a regulation
Annex IV – paragraph 1 – point 1 – point b
(b) how the AI system interacts or can be used to interact with hardware or software, including other AI systems, that are not part of the AI system itself, where applicable;