
24 Amendments of Tudor CIUHODARU related to 2020/2016(INI)

Amendment 10 #
Motion for a resolution
Citation 12 a (new)
- having regard to the Council of Europe’s European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their Environment, adopted on 4 December 2018,
2020/07/20
Committee: LIBE
Amendment 21 #
Motion for a resolution
Recital A a (new)
A a. whereas AI can be seen as the ability of a system to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation; whereas the key components of AI development are the availability of vast quantities of data, computing power, and human capital and talent;
2020/07/20
Committee: LIBE
Amendment 24 #
Motion for a resolution
Recital A b (new)
A b. whereas, despite continuing advances in computer processing speed and memory capacity, there are as yet no programs that can match human flexibility over wider domains or in tasks requiring understanding of context or critical analysis; whereas some AI applications have attained the performance levels of human experts and professionals in certain specific tasks, and can deliver results at a completely different speed and scale;
2020/07/20
Committee: LIBE
Amendment 26 #
Motion for a resolution
Recital A c (new)
A c. whereas several Member States already make use of embedded artificial intelligence (AI) systems in the field of law enforcement;
2020/07/20
Committee: LIBE
Amendment 31 #
Motion for a resolution
Recital B a (new)
B a. whereas AI technology should be developed in such a way as to put people at its centre and therefore to be worthy of public trust;
2020/07/20
Committee: LIBE
Amendment 36 #
Motion for a resolution
Recital C a (new)
C a. whereas AI systems always have to be in the service of humans and, as an ultimate safety valve, have to be designed so that they can always be shut down by a human operator;
2020/07/20
Committee: LIBE
Amendment 47 #
Motion for a resolution
Recital E a (new)
E a. whereas the development and operation of AI systems for police and judicial authorities involve the contributions of multiple individuals, organisations, machine components, software algorithms, and human users in often complex and challenging environments;
2020/07/20
Committee: LIBE
Amendment 51 #
Motion for a resolution
Recital F a (new)
F a. whereas allocating and distributing responsibility between humans and machines is increasingly difficult; whereas ultimately it is the responsibility of the Member States to guarantee the full respect of fundamental rights when AI systems are used in the field of law enforcement;
2020/07/20
Committee: LIBE
Amendment 52 #
Motion for a resolution
Recital F b (new)
F b. whereas the relationship between protecting fundamental rights and effective policing must always be an essential element in the discussions on whether and how AI should be used by the law enforcement sector, where decisions may have long-lasting consequences for the life and freedom of individuals;
2020/07/20
Committee: LIBE
Amendment 65 #
Motion for a resolution
Recital H a (new)
H a. whereas AI has the potential to be a permanent part of our criminal justice ecosystem by providing investigative analysis and assistance;
2020/07/20
Committee: LIBE
Amendment 85 #
Motion for a resolution
Paragraph 1 a (new)
1 a. recalls that the EU has already established data protection standards for law enforcement, which form the foundation for any future regulation of AI; recalls that the processing of personal data must be lawful and fair; that the purposes of processing must be specified, explicit and legitimate; that personal data must be adequate, relevant and not excessive in relation to the purpose for which it is processed; that it must be accurate and kept up to date (inaccurate data should, subject to the purpose for which it would otherwise be retained, be corrected or erased); and that it should be kept for no longer than is necessary and processed in a secure manner;
2020/07/20
Committee: LIBE
Amendment 87 #
Motion for a resolution
Paragraph 2
2. Reaffirms that all AI solutions for law enforcement and the judiciary also need to fully respect the principles of non-discrimination, human dignity, prevention of damage, transparency, impartiality and accuracy, fairness and explainability in the use of biometric recognition technologies, the guarantee of human control by the user, freedom of movement, the presumption of innocence and the right of defence, freedom of expression and information, freedom of assembly and of association, equality before the law, and the right to an effective remedy and a fair trial;
2020/07/20
Committee: LIBE
Amendment 94 #
Motion for a resolution
Paragraph 2 a (new)
2 a. notes that the use of biometric data, such as for facial recognition technologies, relates more broadly to the right to human dignity; human dignity is the basis of all fundamental rights guaranteed by the Charter of Fundamental Rights; the Court of Justice of the EU (CJEU) has confirmed in its case law that the fundamental right to dignity is part of EU law; biometric data, including facial images, must therefore be processed in a way that respects human dignity;
2020/07/20
Committee: LIBE
Amendment 106 #
Motion for a resolution
Paragraph 3 a (new)
3 a. Underlines the right of the parties to access the data collection process, as well as the data relating to prognostic assessments used for police crime prevention, to the cataloguing and evaluation of criminal evidence, to preventive assessments of whether a suspect might be a danger to society and of the risk of recidivism, and to the output produced or obtained through AI for notification procedures, together with information on the role of AI and related technologies in criminal law enforcement and crime prevention;
2020/07/20
Committee: LIBE
Amendment 138 #
Motion for a resolution
Paragraph 7
7. Highlights the power asymmetry between those who develop and employ AI technologies and those who interact with and are subject to them; considers it essential, therefore, to provide for a rule that ensures the transparency of the corporate structures of companies that produce and manage AI systems, and to institutionalise the principle of the independence of programmers, since it is they who determine not only the selection of the data and information processed at the basis of the algorithms, but also the assessment criteria that inform and produce a decision;
2020/07/20
Committee: LIBE
Amendment 143 #
Motion for a resolution
Paragraph 8 a (new)
8 a. stresses that only robust European AI governance can enable the necessary operationalisation of fundamental rights principles;
2020/07/20
Committee: LIBE
Amendment 145 #
Motion for a resolution
Paragraph 9
9. Considers it necessary to create a clear and fair regime for assigning legal responsibility for the potential adverse consequences produced by these advanced digital technologies; recognises the challenge of correctly locating responsibility for potential harm, given the complexity of the development and operation of AI systems;
2020/07/20
Committee: LIBE
Amendment 149 #
Motion for a resolution
Paragraph 9 a (new)
9 a. Highlights how individuals have become overly trusting of the seemingly objective and scientific nature of AI tools and thus fail to consider the possibility of their results being incorrect, incomplete or irrelevant, with potentially grave adverse consequences, specifically in the area of law enforcement and justice; emphasises this over-reliance on the results provided by AI systems, and notes with concern that authorities lack the confidence and knowledge to question or override an algorithmic recommendation;
2020/07/20
Committee: LIBE
Amendment 156 #
Motion for a resolution
Paragraph 10
10. Underlines that in judicial and law enforcement contexts, the final decision always needs to be taken by a human, who can be held accountable for the decisions made, and that the possibility of recourse to a remedy must be included; considers it necessary to prevent algorithms, so-called automated decision systems, from replacing human minds in final decisions, in order to avoid deterministic approaches and to ensure the free formation of the judgment of judicial authorities, whose decisions must always be justifiable, responsible and free of prejudice;
2020/07/20
Committee: LIBE
Amendment 160 #
Motion for a resolution
Paragraph 11
11. Calls for algorithmic explainability and transparency in order to ensure that the development, deployment and use of AI systems for the judiciary and law enforcement comply with fundamental rights and are trusted by citizens, as well as to ensure that results generated by AI algorithms can be rendered intelligible to users and to those subject to these systems, and that there is transparency on the source data and on how the system arrived at a certain conclusion; to that end, it is necessary to develop specific mandatory rules of conduct for the public and private entities responsible for the design and use of AI, to ensure that they adhere to the principles of transparency and clarity in the processes for developing mathematical models and predictive algorithms, while complying with the requirement for independent verification of the quality and reliability of the results achieved, in terms of acquiring and assessing evidence, especially circumstantial evidence, beyond all reasonable doubt;
2020/07/20
Committee: LIBE
Amendment 164 #
Motion for a resolution
Paragraph 11 a (new)
11 a. Calls, in order to guarantee the algorithmic explainability and transparency of law enforcement AI systems, for law enforcement authorities in the Union to be allowed to purchase only those tools whose algorithms and logic are open, at least to the police forces themselves, and can be audited, evaluated and vetted by them, rather than being closed and labelled proprietary by the vendors;
2020/07/20
Committee: LIBE
Amendment 166 #
Motion for a resolution
Paragraph 11 b (new)
11 b. considers that the use and collection of any biometric data for remote identification purposes, for example by conducting facial recognition in public places or at automated border control gates used for border checks at airports, may pose specific risks to fundamental rights, the implications of which could vary considerably depending on the purpose, context and scope of use;
2020/07/20
Committee: LIBE
Amendment 171 #
Motion for a resolution
Paragraph 13
13. Calls for a compulsory fundamental rights impact assessment to be conducted prior to the implementation or deployment of any AI system for law enforcement or the judiciary, in order to assess any potential risks to fundamental rights; underlines that this could often be built upon the mandatory data protection impact assessment;
2020/07/20
Committee: LIBE
Amendment 198 #
Motion for a resolution
Paragraph 16 a (new)
16 a. Calls for the Fundamental Rights Agency, in collaboration with the European Data Protection Board and the European Data Protection Supervisor, to draft comprehensive guidelines for the development, use and deployment of AI applications and solutions for use by law enforcement and judicial authorities;
2020/07/20
Committee: LIBE