Activities of Cornelia ERNST related to 2020/2016(INI)
Plenary speeches (1)
Artificial intelligence in criminal law and its use by the police and judicial authorities in criminal matters (debate)
Shadow reports (1)
REPORT on artificial intelligence in criminal law and its use by the police and judicial authorities in criminal matters
Amendments (38)
Amendment 14 #
Motion for a resolution
Recital A
A. whereas digital technologies in general and artificial intelligence (AI) in particular bring with them extraordinary promise, but unfortunately there is growing evidence of a sharp divide between promises and practices; whereas AI is one of the strategic technologies of the 21st century, potentially generating substantial benefits in efficiency, accuracy and convenience, and thus bringing positive change to European society, but also enormous risks for fundamental rights and democracies based on the rule of law; whereas AI should not be seen as an end in itself, but as a tool for serving people, with the ultimate aim of increasing human well-being;
Amendment 22 #
Motion for a resolution
Recital A a (new)
A a. whereas the increasing use of AI in the criminal law field is based on the promise that it would reduce crime and lead to more objective decisions; whereas, however, experience has shown that there are several reasons not to believe such promises;
Amendment 29 #
Motion for a resolution
Recital B
B. whereas the development of AI must respect EU law, as well as the values on which the Union is founded, in particular human dignity, freedom, democracy, equality, the rule of law, and human and fundamental rights;
Amendment 33 #
Motion for a resolution
Recital C
C. whereas trustworthy AI systems need to be accountable, designed for the protection and benefit of all (including consideration of vulnerable, marginalised populations in their design), be non-discriminatory, safe and transparent, and respect human autonomy and fundamental rights;
Amendment 38 #
Motion for a resolution
Recital D
D. whereas the Union, together with the Member States, bears a critical responsibility for ensuring that policy choices surrounding the development, deployment and use of AI applications in the field of the judiciary and law enforcement are made in a transparent manner, respect the principles of necessity and proportionality, and guarantee that the policies and measures adopted will fully safeguard fundamental rights within the Union and reflect societies' expectations of a constitutional, fair and humane criminal justice system;
Amendment 45 #
Motion for a resolution
Recital E
E. whereas AI applications offer some opportunities in the field of law enforcement, in particular in improving the working methods of law enforcement agencies and judicial authorities, and combating certain types of crime more efficiently, in particular in the field of financial crime, money laundering and terrorist financing, as well as certain types of cybercrime;
Amendment 50 #
Motion for a resolution
Recital F
F. whereas a clear model for assigning legal responsibility for the potential harmful effects of AI systems in the field of criminal law is imperative; whereas regulatory provisions in this field should always maintain human accountability;
Amendment 56 #
Motion for a resolution
Recital G
G. whereas AI applications in use by law enforcement include a heterogeneous array of applications such as facial recognition technologies, automated number plate recognition, speaker identification, speech identification, lip-reading technologies, aural surveillance (i.e. gunshot detection algorithms), autonomous research and analysis of identified databases, forecasting (predictive policing and crime hotspot analytics), behaviour detection tools, autonomous tools to identify financial fraud and terrorist financing, social media monitoring (scraping and data harvesting for mining connections), international mobile subscriber identity (IMSI) catchers, and automated surveillance systems incorporating different detection capabilities (such as heartbeat detection and thermal cameras); whereas the aforementioned applications have vastly varying degrees of reliability and accuracy, and impact on fundamental rights and on the dynamics of criminal justice systems;
Amendment 62 #
Motion for a resolution
Recital H
H. whereas AI tools and applications are also used by the judiciary worldwide, including to support decisions on pre-trial detention, in sentencing, in calculating probabilities of reoffending and in determining probation;
Amendment 67 #
Motion for a resolution
Recital I
I. whereas use of AI in law enforcement entails a number of potential risks, such as opaque decision-making, different types of discrimination, and risks to the protection of privacy and personal data, the protection of freedom of expression and information, and the presumption of innocence and, most importantly in the criminal law field, huge risks for the freedom and security of individuals;
Amendment 71 #
Motion for a resolution
Recital I a (new)
I a. whereas predictive policing systems necessarily rely heavily on historical data, which can contain biases, so that any subsequent police method or strategy based upon such data is inclined to reproduce those biases in its results; whereas these biases can have a ‘ratchet effect’, meaning that the distortion will get incrementally worse each year if police services rely on the evidence of last year’s data in order to set next year’s targets1a;
_________________
1a Williams, Patrick and Kind, Eric (2019), Data-driven Policing: The hardwiring of discriminatory policing practices across Europe, Project Report, European Network Against Racism (ENAR)
Amendment 73 #
Motion for a resolution
Recital I b (new)
I b. whereas persons who belong to minority ethnic groups are much more likely to be subjected to stop and search by the police, and to prosecution, punishment and imprisonment, in comparison to the respective "majority" population; whereas, as recognised by Commissioner Vestager in her keynote speech at the European AI Forum on 30 June 2020, migrants and people belonging to certain ethnic groups might be targeted by predictive policing techniques that direct all the attention of law enforcement to them;
Amendment 76 #
Motion for a resolution
Recital J a (new)
Amendment 78 #
Motion for a resolution
Recital J b (new)
J b. whereas, however, the deployment of AI in this field should not be seen as a mere technical question of ensuring the accuracy and effectiveness of such tools, but rather as a crucial political decision concerning the design and objectives of law enforcement and of criminal justice systems, which will inevitably have a deep impact on the lives and fundamental rights of people;
Amendment 79 #
Motion for a resolution
Recital J c (new)
J c. whereas full enforcement of the law is a dream that should not be pursued at all costs; whereas detecting and sanctioning all infringements of the law is not possible without resorting to ubiquitous surveillance; whereas detecting all forms of illegal conduct with the same high level of efficacy is not a legitimate goal in democratic societies that value the privacy of individuals and which, in order to protect that value, are ready to accept that in some cases disobedience goes unpunished;
Amendment 80 #
Motion for a resolution
Recital J d (new)
J d. whereas an increasing number of authorities and legislators worldwide have banned, or are considering banning, the use of facial recognition by law enforcement authorities; whereas, in the wake of the protests around the murder of George Floyd, Amazon, Microsoft and IBM denied police departments access to their facial recognition technology, calling on governments around the world to regulate the use of facial recognition;
Amendment 81 #
Motion for a resolution
Recital J e (new)
J e. whereas EU instruments on judicial cooperation, such as the European Arrest Warrant, do not modify the obligation to respect the fundamental rights and legal principles enshrined in Article 6 TEU; whereas on several occasions the CJEU has concluded that mutual trust is not blind, and that the executing judicial authority might be required to assess whether there is a real risk that the individual concerned will suffer a breach of fundamental rights if surrendered to the issuing state; whereas the CJEU has applied this principle both as regards a potential violation of the prohibition of torture and inhuman or degrading treatment and as regards the right to a fair trial;
Amendment 82 #
Motion for a resolution
Recital J f (new)
J f. whereas modern liberal criminal law is based on the idea that state authorities react to an offence after it has been committed, without assuming that people are dangerous and need to be constantly monitored in order to prevent any potential wrongdoing; whereas AI-based surveillance techniques deeply challenge such an approach and urge legislators worldwide to thoroughly assess the consequences of allowing the deployment of technologies that reduce the role of human beings in law enforcement and adjudication;
Amendment 88 #
Motion for a resolution
Paragraph 2
2. Reaffirms that all AI solutions for law enforcement and the judiciary also need to fully respect the principles of non-discrimination, freedom of movement, the presumption of innocence and right of defence, including the right to silence, freedom of expression and information, freedom of assembly and of association, equality before the law, and the right to an effective remedy and a fair trial; stresses that any use of AI must be prohibited when evidently incompatible with fundamental rights;
Amendment 97 #
Motion for a resolution
Paragraph 2 a (new)
2 a. Considers it necessary to lower expectations of technological solutions that promise perfect law enforcement and the unrealistic detection of all committed offences;
Amendment 98 #
Motion for a resolution
Paragraph 3
3. Considers, in this regard, that any AI tool either developed or used by law enforcement or the judiciary should, as a minimum, be safe, secure and fit for purpose, respect the principles of fairness, accountability, transparency and explainability, and have its deployment subject to a strict necessity and proportionality test; urges the EU and national legislators to take into great consideration the five principles of the ‘Ethical Charter on the use of artificial intelligence in judicial systems and their environment’ adopted by the European Commission for the Efficiency of Justice (CEPEJ) of the Council of Europe, and to pay particular attention to the ‘uses to be considered with the most extreme reservation’ identified by CEPEJ;
Amendment 107 #
Motion for a resolution
Paragraph 3 a (new)
3 a. Stresses that the use of AI in this field poses risks to human rights - namely privacy, data protection and the right to a fair trial - and that in the future it may pose further risks that are still unknown; calls for the precautionary principle to be at the heart of any legal framework on AI;
Amendment 109 #
Motion for a resolution
Paragraph 3 b (new)
3 b. Considers it essential, both for the effectiveness of the exercise of defence rights and for the transparency of national criminal justice systems, that a specific, clear and precise legal framework regulates the conditions, modalities and consequences of the use of AI tools in this field, as well as the rights of targeted persons, including possibilities to seek legal remedy; stresses that in the absence of such a legal framework, AI should not be used in this arena;
Amendment 110 #
Motion for a resolution
Paragraph 3 c (new)
3 c. Urges executing authorities, when deciding on a request for extradition (or surrender) to another Member State or third country, to assess whether the use of AI tools in the requesting (or issuing) country might compromise the essence of the fundamental right to a fair trial; considers that the first step of such an assessment should be conducted ‘on the basis of material that is objective, reliable, specific and properly updated concerning the operation of the system of justice in the issuing Member State’ (C-216/18 PPU, §61); calls on the Commission to publish updated information concerning the use of AI in the Member States’ judicial and law enforcement systems, and to issue guidelines on how to conduct such an assessment in the context of judicial cooperation in criminal matters;
Amendment 114 #
Motion for a resolution
Paragraph 4
4. Highlights the importance of preventing mass surveillance by means of AI technologies, and of banning applications that would result in it; reminds that individuals not only have the right to be correctly identified, but also the right not to be identified at all, unless required by law for compelling and legitimate public interests;
Amendment 127 #
Motion for a resolution
Paragraph 5 a (new)
5 a. Stresses that biases inherent in underlying datasets tend to gradually increase and thereby perpetuate and amplify existing discrimination, in particular against persons belonging to minority ethnic groups or racialized communities; considers such an effect unacceptable, in particular in the area of law enforcement;
Amendment 128 #
Motion for a resolution
Paragraph 5 b (new)
5 b. Stresses that the datasets and algorithmic systems used when making classifications, assessments and predictions at the different stages of data processing in the development of AI and related technologies may also result in differential treatment of, and indirect discrimination against, groups of people with similar characteristics; calls for a rigorous examination of AI’s classification practices and harms; emphasises that AI technologies require the field to centre non-technical disciplines whose work traditionally examines such issues, including science and technology studies, critical race studies, disability studies, and other disciplines attuned to social context, including how difference is constructed, the work of classification, and its consequences; stresses the need, therefore, to systematically invest in integrating these disciplines into AI study and research at all levels;
Amendment 129 #
Motion for a resolution
Paragraph 5 c (new)
5 c. Notes that the field of AI is strikingly homogenous and lacking in diversity, with minority ethnic groups and other marginalized groups in particular being underrepresented; stresses the need to ensure that the teams that design, develop, test, maintain, deploy and procure these systems reflect the diversity of their uses and of society in general, as a non-technical means of reducing the risks of increased discrimination;
Amendment 133 #
Motion for a resolution
Paragraph 6 a (new)
6 a. Stresses that data used to train predictive policing algorithms reflect ongoing surveillance priorities and that, as a consequence, AI predictions based on characteristics of a specific group of persons end up amplifying and reproducing existing forms of discrimination and racial domination;
Amendment 140 #
Motion for a resolution
Paragraph 7
7. Highlights the power asymmetry between those who develop and employ AI technologies and those who interact with and are subject to them; stresses the impact on defence rights and the burdensome, or even impossible, task for persons under investigation of challenging the results of AI tools;
Amendment 146 #
Motion for a resolution
Paragraph 9
9. Considers it necessary to create a clear and fair regime for assigning legal responsibility for the potential adverse consequences produced by these advanced digital technologies; considers it imperative for this regime to always identify a responsible person for decisions taken with the support of AI;
Amendment 158 #
Motion for a resolution
Paragraph 10
10. Underlines that in judicial and law enforcement contexts, every legal decision always needs to be taken by a human, who can be held accountable for the decisions made, with the possibility of recourse to a remedy; stresses in this regard that the use of AI may influence human decisions and have an impact on all phases of criminal procedure;
Amendment 173 #
Motion for a resolution
Paragraph 13
13. Calls for a compulsory fundamental rights impact assessment to be conducted prior to the implementation or deployment of any AI system for law enforcement or the judiciary, in order to assess any potential risks to fundamental rights; calls for an obligation to make the results of such impact assessments public;
Amendment 187 #
Motion for a resolution
Paragraph 15
15. Calls for an EU-wide ban on the deployment of live facial recognition systems for law enforcement, and on the use of automated recognition in public spaces of other human features, such as gait, fingerprints, DNA, voice, and other biometric and behavioural signals;
Amendment 189 #
Motion for a resolution
Paragraph 15 a (new)
15 a. Reminds that in Europe lie detector tests are generally not considered reliable evidence and their use is often forbidden, as they have a detrimental impact on self-determination; stresses the contested scientific validity of affect recognition technology, such as cameras detecting eye movements and changes in pupil size to flag potential deception, and calls for a ban on its use in the law enforcement and criminal justice field, as well as in border control;
Amendment 191 #
Motion for a resolution
Paragraph 15 b (new)
15 b. Calls for a ban on uses of AI to make behavioural predictions with significant effect on people based on past behaviour, group membership, or any other characteristics, such as predictive policing;
Amendment 193 #
Motion for a resolution
Paragraph 16
16. Calls for greater overall transparency from Member States, and for a comprehensive understanding of the use of AI applications in the Union, broken down by Member State law enforcement and judicial authority, the type of tool in use, the types of crime they are applied to, and the companies whose tools are being used; calls, in particular, for binding rules mandating public disclosure and debate on public-private partnerships, contracts and acquisitions;
Amendment 196 #
Motion for a resolution
Paragraph 16 a (new)
16 a. Stresses the importance of independent evaluation of the functioning of AI in practice; urges EU and national authorities to invest in independent empirical research, in particular concerning the influence of AI on legal decisions affecting individuals’ positions; notes that, without such an independent evaluation, it is impossible to have a fully informed democratic debate on the necessity and proportionality of AI in the criminal justice field;