51 Amendments of Janusz LEWANDOWSKI related to 2021/0106(COD)
Amendment 138 #
Proposal for a regulation
Recital 3 a (new)
(3a) The development of AI applications might bring down the costs and increase the volume of services available, e.g. health services, public transport, Farming 4.0, making them more affordable to a wider spectrum of society; AI applications may also result in a rise of unemployment, pressure on social care systems, and an increase in poverty; in accordance with the values enshrined in Article 3 of the Treaty on European Union, there might be a need to adapt the Union AI transformation to socioeconomic capacities, to create adequate social shielding, and to support education and incentives to create alternative jobs; the establishment of a Union AI Adjustment Fund building upon the experience of the European Globalisation Adjustment Fund (EGF) or the currently developed Just Transition Fund should be considered.
Amendment 169 #
Proposal for a regulation
Recital 16
(16) The placing on the market, putting into service or use of certain AI systems intended to distort human behaviour, whereby with due diligence it could be predicted that physical or psychological harms are likely to occur, should be forbidden. Such AI systems deploy subliminal components that individuals cannot perceive, or exploit vulnerabilities of children and people due to their age or physical or mental incapacities. They do so with the intention to materially distort the behaviour of a person and in a manner that causes or is likely to cause harm to that or another person. The intention may not be presumed if the distortion of human behaviour results from factors external to the AI system which are outside the control of the provider or the user. Research for legitimate purposes in relation to such AI systems should not be stifled by the prohibition, if such research does not amount to use of the AI system in human-machine relations that exposes natural persons to harm and such research is carried out in accordance with recognised ethical standards for scientific research.
Amendment 250 #
Proposal for a regulation
Recital 76 a (new)
(76a) An AI advisory council (‘the Advisory Council’) should be established as a sub-group of the Board consisting of relevant representatives from industry, research, academia, civil society, standardisation organisations, relevant common European data spaces, and other relevant stakeholders, including social partners, where appropriate depending on the subject matter discussed, representing all Member States to maintain geographical balance. The Advisory Council should support the work of the Board by providing advice relating to the tasks of the Board. The Advisory Council should nominate a representative to attend meetings of the Board and to participate in its work.
Amendment 253 #
Proposal for a regulation
Recital 86 a (new)
(86a) In order to ensure uniform conditions for the implementation of this Regulation, it should be accompanied by the publication of guidelines to help all stakeholders to interpret key concepts covered by the Regulation, such as prohibited or high-risk AI cases and the precise means and implementation rules of the Regulation by national competent authorities;
Amendment 260 #
Proposal for a regulation
Article 2 – paragraph 1 – point b
(b) users of AI systems using the AI system in the Union;
Amendment 275 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1
(1) ‘artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, indispensably with some degree of autonomy, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with;
Amendment 322 #
Proposal for a regulation
Article 5 – paragraph 1 – point a
(a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm that could be predicted with due diligence;
Amendment 324 #
Proposal for a regulation
Article 5 – paragraph 1 – point b
(b) the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm that could be predicted with due diligence;
Amendment 328 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – introductory part
(d) the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless and in as far as such use by law enforcement is strictly necessary for one of the following objectives:
Amendment 328 #
Proposal for a regulation
Recital 3 a (new)
(3 a) The development of AI applications might bring down the costs and increase the volume of services available, e.g. health services, public transport, Farming 4.0, making them more affordable to a wider spectrum of society; AI applications may also result in a rise of unemployment, pressure on social care systems, and an increase in poverty; in accordance with the values enshrined in Article 3 of the Treaty on European Union, there might be a need to adapt the Union AI transformation to socioeconomic capacities, to create adequate social shielding, and to support education and incentives to create alternative jobs; the establishment of a Union AI Adjustment Fund building upon the experience of the European Globalisation Adjustment Fund (EGF) or the currently developed Just Transition Fund should be considered;
Amendment 351 #
Proposal for a regulation
Article 6 – paragraph -1 (new)
-1. The AI system shall be considered high-risk where it meets the following two cumulative criteria:
(a) the AI system is used or applied in a sector where, given the characteristics of the activities typically undertaken, significant risks of harm to the health and safety or a risk of adverse impact on fundamental rights of users, as outlined in Article 7(2), can be expected to occur;
(b) the AI system application in the sector in question is used in such a manner that significant risks of harm to the health and safety or a risk of adverse impact on fundamental rights of users, as outlined in Article 7(2), are likely to arise.
Amendment 357 #
Proposal for a regulation
Article 6 – paragraph 2
2. In addition to the high-risk AI systems referred to in paragraph 1, and in accordance with paragraph -1 of this Article, AI systems referred to in Annex III shall also be considered high-risk.
Amendment 376 #
Proposal for a regulation
Recital 8
(8) The notion of biometric identification system, including remote biometric identification system as used in this Regulation, should be defined functionally, as an AI system intended for the identification of natural persons, including at a distance, through the comparison of a person’s biometric data with the biometric data contained in a reference database repository, excluding verification/authentication systems whose sole purpose is to confirm that a specific natural person is the person he or she claims to be, and systems that are used to confirm the identity of a natural person for the sole purpose of having access to a service, a device or premises, and without prior knowledge whether the targeted person will be present and can be identified, irrespective of the particular technology, processes or types of biometric data used. Considering their different characteristics and the manners in which they are used, as well as the different risks involved, a distinction should be made between ‘real-time’ and ‘post’ remote biometric identification systems. In the case of ‘real-time’ systems, the capturing of the biometric data, the comparison and the identification all occur instantaneously, near-instantaneously or in any event without a significant delay. In this regard, there should be no scope for circumventing the rules of this Regulation on the ‘real-time’ use of the AI systems in question by providing for minor delays. ‘Real-time’ systems involve the use of ‘live’ or ‘near-live’ material, such as video footage, generated by a camera or other device with similar functionality. In the case of ‘post’ systems, in contrast, the biometric data have already been captured and the comparison and identification occur only after a significant delay. This involves material, such as pictures or video footage generated by closed circuit television cameras or private devices, which has been generated before the use of the system in respect of the natural persons concerned.
Amendment 431 #
Proposal for a regulation
Recital 16
(16) The placing on the market, putting into service or use of certain AI systems intended to distort human behaviour, whereby with due diligence it could be predicted that physical or psychological harms are likely to occur, should be forbidden. Such AI systems deploy subliminal components that individuals cannot perceive, or exploit vulnerabilities of children and people due to their age or physical or mental incapacities. They do so with the intention to materially distort the behaviour of a person and in a manner that causes or is likely to cause harm to that or another person. The intention may not be presumed if the distortion of human behaviour results from factors external to the AI system which are outside the control of the provider or the user. Research for legitimate purposes in relation to such AI systems should not be stifled by the prohibition, if such research does not amount to use of the AI system in human-machine relations that exposes natural persons to harm and such research is carried out in accordance with recognised ethical standards for scientific research.
Amendment 520 #
Proposal for a regulation
Recital 27
(27) High-risk AI systems should only be placed on the Union market or put into service if they comply with certain mandatory requirements. Those requirements should ensure that high-risk AI systems available in the Union or whose output is otherwise used in the Union do not pose unacceptable risks to important Union public interests as recognised and protected by Union law. AI systems identified as high-risk should be limited to those that have a significant harmful impact on the health, safety and fundamental rights of persons in the Union, and such a limitation minimises any potential restriction to international trade. In particular, the classification as high-risk according to Article 6 should not apply to AI systems whose intended purpose demonstrates that the generated output is a recommendation, provided it is delivered with information on its accuracy or other relevant methodical aspects necessary for decision-making. Human intervention is required to convert this recommendation into an action.
Amendment 523 #
Proposal for a regulation
Article 51 – paragraph 1
1. Before placing on the market or putting into service a high-risk AI system referred to in Article 6(2), the provider or, where applicable, the authorised representative shall register that system in the EU database referred to in Article 60.
Amendment 527 #
Proposal for a regulation
Article 51 – paragraph 1 a (new)
2. A high-risk AI system designed, developed, trained, validated, tested or approved to be placed on the market or put into service outside the EU may be registered in the EU database referred to in Article 60 and placed on the market or put into service in the EU only if it is proven that at all stages of its design, development, training, validation, testing or approval, all the obligations required from such AI systems in the EU have been met.
Amendment 549 #
Proposal for a regulation
Recital 33
(33) Technical inaccuracies of AI systems intended for the remote biometric identification of natural persons, including remote biometric identification, can lead to biased results and entail discriminatory effects. This is particularly relevant when it comes to age, ethnicity, sex or disabilities. Therefore, ‘real-time’ and ‘post’ remote biometric identification systems, including remote biometric identification, should be classified as high-risk. In view of the risks that they pose, both types of remote biometric identification systems should be subject to specific requirements on logging capabilities and human oversight.
Amendment 572 #
Proposal for a regulation
Recital 37
(37) Another area in which the use of AI systems deserves special consideration is the access to and enjoyment of certain essential private and public services and benefits necessary for people to fully participate in society or to improve one’s standard of living. In particular, AI systems used to evaluate the credit score or creditworthiness of natural persons should be classified as high-risk AI systems, since they determine those persons’ access to financial resources or essential services such as housing, electricity, and telecommunication services. AI systems used for this purpose may lead to discrimination against persons or groups and perpetuate historical patterns of discrimination, for example based on racial or ethnic origins, disabilities, age, sexual orientation, or create new forms of discriminatory impacts. Considering the very limited scale of the impact and the available alternatives on the market, it is appropriate to exempt AI systems for the purpose of creditworthiness assessment and credit scoring when put into service by small-scale providers for their own use. As AI systems related to low-value credits for the purchase of movables do not cause high risk, it is proposed to exclude this category from the scope of the high-risk AI category as well. Natural persons applying for or receiving public assistance benefits and services from public authorities are typically dependent on those benefits and services and in a vulnerable position in relation to the responsible authorities. If AI systems are used for determining whether such benefits and services should be denied, reduced, revoked or reclaimed by authorities, they may have a significant impact on persons’ livelihood and may infringe their fundamental rights, such as the right to social protection, non-discrimination, human dignity or an effective remedy. Those systems should therefore be classified as high-risk.
Nonetheless, this Regulation should not hamper the development and use of innovative approaches in the public administration, which would stand to benefit from a wider use of compliant and safe AI systems, provided that those systems do not entail a high risk to legal and natural persons. Finally, AI systems used to dispatch or establish priority in the dispatching of emergency first response services should also be classified as high-risk since they make decisions in very critical situations for the life and health of persons and their property.
Amendment 597 #
Proposal for a regulation
Article 57 – paragraph 3 a (new)
3a. The Board shall establish an AI Advisory Council (Advisory Council). The Advisory Council shall be composed of relevant representatives from industry, research, academia, civil society, standardisation organisations, relevant common European data spaces and other relevant stakeholders or third parties appointed by the Board, representing all Member States to maintain geographical balance. The Advisory Council shall support the work of the Board by providing advice relating to the tasks of the Board. The Advisory Council shall nominate a relevant representative, depending on the configuration in which the Board meets, to attend meetings of the Board and to participate in its work. The composition of the Advisory Council and its recommendations to the Board shall be made public.
Amendment 622 #
Proposal for a regulation
Recital 43 a (new)
(43 a) Fundamental rights impact assessments for high-risk AI systems may include a clear outline of the intended purpose for which the system will be used, a clear outline of the intended geographic and temporal scope of the system’s use, categories of natural persons and groups likely to be affected by the use of the system or any specific risk of harm likely to impact marginalised persons or groups at risk of discrimination, or increase societal inequalities;
Amendment 623 #
Proposal for a regulation
Article 71 – paragraph 1
1. In compliance with the terms and conditions laid down in this Regulation, the Commission, in consultation with Member States, shall lay down the rules on penalties, including administrative fines, applicable to infringements of this Regulation and, in cooperation with Member States, shall take all measures necessary to ensure that they are properly and effectively implemented. The penalties provided for shall be effective, proportionate, and dissuasive. They shall take into particular account the size and the interests of SME providers, including start-ups, and their economic viability.
Amendment 625 #
Proposal for a regulation
Article 71 – paragraph 2
Amendment 638 #
Proposal for a regulation
Annex III – paragraph 1 – point 4 – point a
(a) AI systems intended to be used for recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, evaluating candidates in the course of interviews or tests;
Amendment 742 #
Proposal for a regulation
Recital 76 a (new)
(76 a) An AI advisory council (‘the Advisory Council’) should be established as a sub-group of the Board consisting of relevant representatives from industry, research, academia, civil society, standardisation organisations, relevant common European data spaces, and other relevant stakeholders, including social partners, where appropriate depending on the subject matter discussed, representing all Member States to maintain geographical balance. The Advisory Council should support the work of the Board by providing advice relating to the tasks of the Board. The Advisory Council should nominate a representative to attend meetings of the Board and to participate in its work.
Amendment 775 #
Proposal for a regulation
Recital 86 a (new)
(86 a) In order to ensure uniform conditions for the implementation of this Regulation, it should be accompanied by the publication of guidelines to help all stakeholders to interpret key concepts covered by the Regulation, such as prohibited or high-risk AI cases and the precise means and implementation rules of the Regulation by national competent authorities;
Amendment 822 #
Proposal for a regulation
Article 2 – paragraph 1 – point b
(b) users of AI systems using the AI system in the Union;
Amendment 914 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1
(1) ‘artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, indispensably with some degree of autonomy, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with;
Amendment 1049 #
Proposal for a regulation
Article 3 – paragraph 1 – point 36
(36) ‘remote biometric identification system’ means an AI system, including remote biometric identification, for the purpose of identifying natural persons, including at a distance, through the comparison of a person’s biometric data with the biometric data contained in a reference database repository, excluding verification/authentication systems whose sole purpose is to confirm that a specific natural person is the person he or she claims to be, and systems that are used to confirm the identity of a natural person for the sole purpose of having access to a service, a device or premises, and without prior knowledge of the user of the AI system whether the person will be present and can be identified;
Amendment 1137 #
Proposal for a regulation
Article 4 – paragraph 1
The Commission is empowered to adopt delegated acts in accordance with Article 73 to amend the list of techniques and approaches listed in Annex I, within the scope of the definition of an AI system as provided for in Article 3(1), in order to update that list in line with market and technological developments, on the basis of characteristics and hazards that are similar to the techniques and approaches listed therein.
Amendment 1165 #
Proposal for a regulation
Article 5 – paragraph 1 – point a
(a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm that could be predicted with due diligence;
Amendment 1183 #
Proposal for a regulation
Article 5 – paragraph 1 – point b
(b) the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm that could be predicted with due diligence;
Amendment 1243 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – introductory part
(d) the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless and in as far as such use by law enforcement is strictly necessary for one of the following objectives:
Amendment 1412 #
Proposal for a regulation
Article 6 – paragraph -1 (new)
-1. The AI system shall be considered high-risk where it meets the following two cumulative criteria:
(a) the AI system is used or applied in a sector where, given the characteristics of the activities typically undertaken, significant risks of harm to the health and safety or a risk of adverse impact on fundamental rights of users, as outlined in Article 7(2), can be expected to occur;
(b) the AI system application in the sector in question is used in such a manner that significant risks of harm to the health and safety or a risk of adverse impact on fundamental rights of users, as outlined in Article 7(2), are likely to arise.
Amendment 1443 #
Proposal for a regulation
Article 6 – paragraph 2
2. In addition to the high-risk AI systems referred to in paragraph 1 and in accordance with paragraph -1 of this Article, AI systems referred to in Annex III shall also be considered high-risk.
Amendment 1698 #
Proposal for a regulation
Article 10 – paragraph 2 – point f
(f) examination in view of possible biases, defined as a statistical error or a top-down introduction of assumptions harmful to an individual, that are likely to affect the health and safety of persons or lead to discrimination prohibited by Union law;
Amendment 1716 #
Proposal for a regulation
Article 10 – paragraph 3
3. Training, validation and testing data sets shall be relevant, representative, up-to-date and, to the extent that could reasonably be expected taking into account the state of the art, free of errors and as complete as could reasonably be expected. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used. These characteristics of the data sets may be met at the level of individual data sets or a combination thereof.
Amendment 1813 #
Proposal for a regulation
Article 14 – paragraph 1
1. High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which the AI system is in use, where required by the risk analysis foreseen in the product legislation listed in Annex II.
Amendment 2061 #
Proposal for a regulation
Article 29 – paragraph 5 a (new)
5 a. Users of high-risk AI systems which affect natural persons, in particular by evaluating or assessing them, making predictions about them, recommending information, goods or services to them, or determining or influencing their access to goods and services, shall inform the natural persons that they are subject to the use of such a high-risk AI system. This information shall include a clear and concise indication of the user and the purpose of the high-risk AI system, information about the rights of the natural person conferred under this Regulation, and a reference to a publicly available resource where more information about the high-risk AI system can be found, in particular the relevant entry in the EU database referred to in Article 60, if applicable. This information shall be presented in a concise, intelligible and easily accessible form, including for persons with disabilities. This obligation shall be without prejudice to other Union or Member State laws, in particular Regulation 2016/679 [GDPR], Directive 2016/680 [LED] and Regulation 2022/XXX [DSA].
Amendment 2080 #
Proposal for a regulation
Article 29 a (new)
Article 29 a
Fundamental rights impact assessments for high-risk AI systems
1. The user of a high-risk AI system as defined in Article 6 paragraph 2 shall conduct an assessment of the system’s impact on fundamental rights and public interest in the context of use before putting the system into use and at least every two years afterwards. The assessment shall include information on the clear steps by which the potential harms identified will be mitigated and on how effective this mitigation is likely to be.
2. If adequate steps to mitigate the risks outlined in the course of the assessment in paragraph 1 cannot be identified, the system shall not be put into use. Market surveillance authorities, pursuant to their capacity under Articles 65 and 67, shall take this information into account when investigating systems which present a risk at national level.
3. In the course of the impact assessment, the user shall notify relevant national authorities and all relevant stakeholders.
4. Where, following the impact assessment process, the user decides to put the high-risk AI system into use, the user shall be required to publish the results of the impact assessment as part of the registration of use pursuant to their obligation under Article 51 paragraph 2.
5. Users of high-risk AI systems shall use the information provided to them by providers of high-risk AI systems under Article 13 to comply with their obligation under paragraph 1.
6. The obligations on users in paragraph 1 are without prejudice to the obligations on users of all high-risk AI systems as outlined in Article 29.
Amendment 2245 #
Proposal for a regulation
Article 51 – paragraph 1
1. Before placing on the market or putting into service a high-risk AI system referred to in Article 6(2), the provider or, where applicable, the authorised representative shall register that system in the EU database referred to in Article 60.
Amendment 2250 #
Proposal for a regulation
Article 51 – paragraph 1 a (new)
2. A high-risk AI system designed, developed, trained, validated, tested or approved to be placed on the market or put into service outside the EU may be registered in the EU database referred to in Article 60 and placed on the market or put into service in the EU only if it is proven that at all stages of its design, development, training, validation, testing or approval, all the obligations required from such AI systems in the EU have been met;
Amendment 2457 #
Proposal for a regulation
Article 57 – paragraph 3 a (new)
3 a. The Board shall establish an AI Advisory Council (Advisory Council). The Advisory Council shall be composed of relevant representatives from industry, research, academia, civil society, standardisation organisations, relevant common European data spaces and other relevant stakeholders or third parties appointed by the Board, representing all Member States to maintain geographical balance. The Advisory Council shall support the work of the Board by providing advice relating to the tasks of the Board. The Advisory Council shall nominate a relevant representative, depending on the configuration in which the Board meets, to attend meetings of the Board and to participate in its work. The composition of the Advisory Council and its recommendations to the Board shall be made public.
Amendment 2774 #
Proposal for a regulation
Article 68 a (new)
Article 68 a
Representation of affected persons and the right of public interest organisations to lodge complaints
1. Without prejudice to Directive 2020/1828/EC, natural persons or groups of natural persons affected by an AI system shall have the right to mandate a body, organisation or association to lodge a complaint referred to in Article 68 on their behalf, to exercise the right to remedy referred to in Article 68 on their behalf, and to exercise on their behalf other rights under this Regulation, in particular the right to receive an explanation referred to in Article 4a.
2. Without prejudice to Directive 2020/1828/EC, the bodies, organisations or associations referred to in paragraph 1 shall have the right to lodge a complaint with national supervisory authorities, independently of the mandate of the natural person, if they consider that an AI system has been placed on the market, put into service, or used in a way that infringes this Regulation, or is otherwise in violation of fundamental rights or other aspects of public interest protection, pursuant to Article 67.
3. National supervisory authorities shall have the duty to investigate, in conjunction with the relevant market surveillance authority if applicable, and to respond within a reasonable period to all complaints referred to in paragraph 2.
Amendment 2817 #
Proposal for a regulation
Article 71 – paragraph 1
1. In compliance with the terms and conditions laid down in this Regulation, the Commission, in consultation with Member States, shall lay down the rules on penalties, including administrative fines, applicable to infringements of this Regulation and, in cooperation with Member States, shall take all measures necessary to ensure that they are properly and effectively implemented. The penalties provided for shall be effective, proportionate, and dissuasive. They shall take into particular account the size and the interests of SME providers, including start-ups, and their economic viability.
Amendment 2823 #
Proposal for a regulation
Article 71 – paragraph 2
Amendment 3054 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – introductory part
1. Biometric identification and categorisation of natural persons:
Amendment 3062 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a
(a) AI systems intended to be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons without their agreement, including remote biometric identification;
Amendment 3111 #
Proposal for a regulation
Annex III – paragraph 1 – point 4 – point a
(a) AI systems intended to be used for recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, evaluating candidates in the course of interviews or tests;
Amendment 3131 #
Proposal for a regulation
Annex III – paragraph 1 – point 5 – point b
(b) AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems put into service by small-scale providers for their own use, or AI systems related to low-value credits for the purchase of movables;
Amendment 3145 #
Proposal for a regulation
Annex III – paragraph 1 – point 5 – point c a (new)