
5 Amendments of Sylwia SPUREK related to 2021/0106(COD)

Amendment 420 #
Proposal for a regulation
Recital 15 a (new)
(15 a) The European Union and its Member States, as signatories to the United Nations Convention on the Rights of Persons with Disabilities (CRPD), are obliged to protect persons with disabilities from discrimination and to promote their equality. They are obliged to ensure that persons with disabilities have access, on an equal basis with others, to information and communications technologies and systems, and to ensure respect for the fundamental rights of persons with disabilities, including the right to privacy.
2022/06/13
Committee: IMCO, LIBE
Amendment 423 #
Proposal for a regulation
Recital 15 b (new)
(15 b) Providers of AI systems should ensure that these systems are designed in accordance with the accessibility requirements set out in Directive (EU) 2019/882 and guarantee full, equal, and unrestricted access for everyone potentially affected by or using AI systems, including persons with disabilities.
2022/06/13
Committee: IMCO, LIBE
Amendment 1632 #
Proposal for a regulation
Article 9 – paragraph 4 – subparagraph 2
In eliminating or reducing risks related to the use of the high-risk AI system, due consideration shall be given to the technical knowledge, experience, education, training to be expected by the deployer, to the socio-technical context in which the system is intended to be used, and to reasonably foreseeable use or misuse.
2022/06/13
Committee: IMCO, LIBE
Amendment 1662 #
Proposal for a regulation
Article 9 – paragraph 8 – point a (new)
(a) adversely affect specific groups of people, in particular on the basis of gender, sexual orientation, age, ethnicity, disability, religion, socio-economic standing or origin, including migrants, refugees and asylum seekers;
2022/06/13
Committee: IMCO, LIBE
Amendment 1671 #
Proposal for a regulation
Article 9 a (new)
Article 9 a
Fundamental rights impact assessments for high-risk AI systems
1. Providers, and deployers at each proposed deployment, must designate the categories of individuals and groups likely to be impacted by the system, assess the system's impact on fundamental rights, its accessibility for persons with disabilities, and its impact on the environment and the broader public interest. Deployers of high-risk AI systems as defined in Article 6(2) shall, prior to putting the system into use, publish a fundamental rights impact assessment of the system's impact in the context of use throughout the entire lifecycle. This assessment shall include at least:
a) the intended purpose for which the system will be used;
b) the intended geographic and temporal scope of the system;
c) the potential risks of the use to the rights and freedoms of natural persons, including any indirect impacts or consequences of the system;
d) the categories of natural persons and groups likely or foreseen to be affected;
e) the proportionality and necessity of the system's use;
f) verification of the legality of the use of the system in accordance with Union and national law;
g) any specific risk of harm likely to impact marginalised or vulnerable persons or groups at risk of discrimination, and the risk of increasing existing societal inequalities;
h) the foreseeable impact of the use of the system on the environment over its entire life cycle, including but not limited to energy consumption;
i) any other negative impact on the public interest, together with clear plans for how the harms identified will be mitigated and how effective this mitigation is expected to be; and
j) the governance system the deployer will put in place, including human oversight, complaint handling and redress.
2. If adequate steps to mitigate the risks outlined in the course of the assessment in paragraph 1 cannot be identified, the system shall not be put into use. Market surveillance authorities, pursuant to Articles 65 and 67, may take this information into account when investigating systems which present a risk at national level.
3. The obligation outlined under paragraph 1 applies to each new deployment of the high-risk AI system.
4. Deployers shall consult relevant stakeholders, in particular groups of natural persons exposed to heightened risks from the AI system, civil society and social partners, when preparing the impact assessment. The impact assessment shall be repeated on a regular basis throughout the entire lifecycle.
5. Publication of the results of the impact assessment shall form part of the registration of use pursuant to Article 51(2).
6. Where the deployer is already required to carry out a data protection impact assessment under Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680, the impact assessment outlined in paragraph 1 shall be conducted in conjunction with the data protection impact assessment and be published as an addendum.
7. Deployers of high-risk AI systems shall use the information provided under Article 13 to comply with their obligation under paragraph 1.
2022/06/13
Committee: IMCO, LIBE