39 Amendments of Jeroen LENAERS related to 2020/2016(INI)
Amendment 6 #
Motion for a resolution
Citation 4 a (new)
- having regard to the European ethical Charter on the use of Artificial Intelligence in judicial systems and their environment of the European Commission for the Efficiency of Justice (CEPEJ) of the Council of Europe;
Amendment 7 #
Motion for a resolution
Citation 6 a (new)
- having regard to the ‘Ethics Guidelines for Trustworthy AI’ of 8 April 2019 of the High-Level Expert Group on Artificial Intelligence set up by the Commission;
Amendment 15 #
Motion for a resolution
Recital A
A. whereas digital technologies in general and the proliferation of data processing and analytics enabled by artificial intelligence (AI) in particular bring with them extraordinary promise; whereas AI development has made a big leap forward in recent years, making it one of the strategic technologies of the 21st century, generating substantial benefits in efficiency, accuracy and convenience, and thus bringing positive change to the European economy and society; whereas AI should not be seen as an end in itself, but as a tool for serving people, with the ultimate aim of increasing human well-being, human capabilities and safety;
Amendment 28 #
Motion for a resolution
Recital B
B. whereas the values on which the Union is founded, in particular human dignity, freedom, democracy, equality, the rule of law, and human and fundamental rights, have to be respected throughout the life cycle of AI tools, notably during their design, development, deployment and use;
Amendment 34 #
Motion for a resolution
Recital C
C. whereas AI systems need to be non-discriminatory, safe and transparent, and respect human autonomy and fundamental rights in order to be trustworthy, as described in the Ethics Guidelines of the High-Level Expert Group on Artificial Intelligence;
Amendment 39 #
Motion for a resolution
Recital D
D. whereas the Union together with the Member States bear a critical responsibility for ensuring that decisions surrounding the life-cycle of AI applications in the field of the judiciary and law enforcement are made in a transparent manner and fully safeguard fundamental rights within the Union; whereas the relevant policy choices should respect the principles of necessity and proportionality;
Amendment 44 #
Motion for a resolution
Recital E
E. whereas AI applications offer great opportunities in the field of law enforcement, in particular in improving the working methods of law enforcement agencies and judicial authorities, and preventing and combating certain types of crime more efficiently, in particular financial crime, money laundering and terrorist financing, as well as certain types of cybercrime, thereby contributing to the safety and security of EU citizens;
Amendment 48 #
Motion for a resolution
Recital F
Amendment 55 #
Motion for a resolution
Recital G
G. whereas AI applications in use by law enforcement include applications such as facial recognition technologies, e.g. to search suspect databases and identify victims of human trafficking or child sexual exploitation and abuse, automated number plate recognition, speaker identification, speech identification, lip-reading technologies, aural surveillance (i.e. gunshot detection algorithms), autonomous research and analysis of identified databases, forecasting (predictive policing and crime hotspot analytics), behaviour detection tools, advanced virtual autopsy tools to help determine the cause of death, autonomous tools to identify financial fraud and terrorist financing, social media monitoring (scraping and data harvesting for mining connections), international mobile subscriber identity (IMSI) catchers, and automated surveillance systems incorporating different detection capabilities (such as heartbeat detection and thermal cameras); whereas the aforementioned applications have vastly varying degrees of reliability and accuracy;
Amendment 61 #
Motion for a resolution
Recital H
H. whereas AI tools and applications are also used by the judiciary worldwide, including in sentencing, calculating probabilities for reoffending and in determining probation, online dispute resolution, case law management, and the provision of facilitated access to the law;
Amendment 64 #
Motion for a resolution
Recital H a (new)
H a. whereas the applications of AI in law enforcement and the judiciary are in different development stages, ranging from conceptualisation through prototyping or evaluation to post-approval use; whereas new possibilities of use may arise in the future as the technology becomes more mature due to ongoing intensive scientific research worldwide;
Amendment 66 #
Motion for a resolution
Recital I
I. whereas the use of AI in law enforcement entails a number of potential risks, such as opaque decision-making, different types of discrimination, and errors inherent in the underlying algorithm which can be reinforced by feedback loops, as well as risks to the protection of privacy and personal data, the protection of freedom of expression and information, and the presumption of innocence; whereas the extent of these risks varies between different applications and depending on the purpose of their use;
Amendment 72 #
Motion for a resolution
Recital I a (new)
I a. whereas some countries make greater use of AI applications in law enforcement and the judiciary than others, which is partly due to a lack of regulation and to regulatory differences that enable or prohibit AI use for certain purposes;
Amendment 74 #
Motion for a resolution
Recital J
J. whereas AI systems used by law enforcement and the judiciary are also vulnerable to AI-empowered attacks or data poisoning, whereby a wrong data set is included on purpose to produce biased results; whereas in these situations the resulting damage is potentially even more significant, and can result in exponentially greater levels of harm to both individuals and groups;
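As a purely illustrative aside (not part of the amendment text): the Python sketch below shows how data poisoning of the kind described above can work in practice, with a handful of deliberately mislabelled records shifting the decision rule of a toy classifier; all data, labels and thresholds are hypothetical.

```python
# Purely illustrative sketch (hypothetical data): how deliberately mislabelled
# records ("data poisoning") can shift the decision rule of a very simple model.
import random
import statistics

random.seed(0)

# Clean training data: scores of "benign" (label 0) and "suspicious" (label 1) cases.
benign = [random.gauss(2.0, 0.5) for _ in range(200)]
suspicious = [random.gauss(5.0, 0.5) for _ in range(200)]


def decision_threshold(benign_scores, suspicious_scores):
    """Midpoint between the class means: a toy stand-in for a trained decision rule."""
    return (statistics.mean(benign_scores) + statistics.mean(suspicious_scores)) / 2


clean = decision_threshold(benign, suspicious)

# Poisoning: an attacker injects benign-looking records that are labelled "suspicious",
# pulling the decision boundary towards the benign population.
poisoned_suspicious = suspicious + [random.gauss(2.0, 0.5) for _ in range(100)]
poisoned = decision_threshold(benign, poisoned_suspicious)

print(f"threshold trained on clean data:    {clean:.2f}")
print(f"threshold trained on poisoned data: {poisoned:.2f}")
# The lowered threshold now flags many benign cases as suspicious, i.e. the
# poisoned data set produces systematically biased results.
```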
Amendment 83 #
Motion for a resolution
Paragraph 1
1. Welcomes the positive contribution of AI applications to the work of law enforcement and judicial authorities across the Union as a key enabling technology to ensure the safety and security of citizens; highlights, for example, the enhanced case law management achieved by tools allowing for additional search options; believes that there is a wide range of other potential uses for AI by law enforcement and the judiciary which should be explored, subject to methodological precautions and scientific assessments; reiterates that, as processing large quantities of data is at the heart of AI, the right to the protection of private life and the right to the protection of personal data apply to all areas of AI, and that the Union legal framework for data protection and privacy must be fully complied with;
Amendment 92 #
Motion for a resolution
Paragraph 2
2. Reaffirms that all AI solutions for law enforcement and the judiciary also need to fully respect the principles of non-discrimination, freedom of movement, the presumption of innocence and right of defence, freedom of expression and information, freedom of assembly and of association, equality before the law, the principle of equality of arms, and the right to an effective remedy and a fair trial;
Amendment 95 #
Motion for a resolution
Paragraph 2 a (new)
2 a. Acknowledges that the speed at which AI applications are developed around the world necessitates a future-oriented approach and that any attempt at an exhaustive listing of applications would quickly become outdated; calls, in this regard, for a clear and coherent governance model that guarantees respect for fundamental rights, but also allows companies and organisations to further develop artificial intelligence applications;
Amendment 99 #
Motion for a resolution
Paragraph 3
3. Considers, in this regard, that safeguards should be proportionate to the potential risks associated with the use of specific AI applications; believes that any AI tool either developed or used by law enforcement or the judiciary should, as a minimum, be safe, robust, secure and fit for purpose, respect the principles of fairness, accountability, transparency and non-discrimination, as well as explainability, with its deployment subject to a strict necessity and proportionality test;
Amendment 112 #
Motion for a resolution
Paragraph 4
4. Highlights the importance of preventing mass surveillance, which per definition does not correspond to the principles of necessity and proportionality; strongly supports high thresholds for, and transparency in, the use of AI applications that could result in it; calls for law enforcement and the judiciary to use AI applications that adhere to the privacy-by-design principle whenever possible in order to avoid function creep;
Amendment 124 #
Motion for a resolution
Paragraph 5
5. Stresses the potential for bias and discrimination arising from the use of AI applications such as machine learning; notes that discrimination can result from biases inherent in the underlying datasets, especially when historical data is being used, introduced by the developers of the algorithms, or generated when the systems are implemented in real-world settings;
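As a purely illustrative aside (not part of the amendment text): the short Python sketch below shows one way such dataset-driven discrimination can be made visible, by comparing false positive rates across groups in a hypothetical evaluation set; all groups, records and figures are invented for illustration.

```python
# Purely illustrative sketch (invented data): measuring whether a model's errors
# fall disproportionately on one group, the kind of disparity that can result from
# biased historical training data.
from collections import defaultdict

# Each record: (group, truly_relevant, flagged_by_model).
records = (
    [("group_a", False, False)] * 80
    + [("group_a", False, True)] * 5
    + [("group_a", True, True)] * 15
    + [("group_b", False, False)] * 60
    + [("group_b", False, True)] * 25
    + [("group_b", True, True)] * 15
)


def false_positive_rate(rows):
    """Share of truly non-relevant cases that the model nevertheless flagged."""
    negatives = [r for r in rows if not r[1]]
    return sum(1 for r in negatives if r[2]) / len(negatives)


by_group = defaultdict(list)
for row in records:
    by_group[row[0]].append(row)

for group, rows in sorted(by_group.items()):
    print(f"{group}: false positive rate = {false_positive_rate(rows):.0%}")
# A large gap between the groups (here roughly 6% versus 29%) is exactly what
# dataset audits and fairness testing are meant to surface before deployment.
```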
Amendment 130 #
Motion for a resolution
Paragraph 6
6. Underlines the fact that many algorithmically driven identification technologies that are currently in use disproportionately misidentify according to ethnicity, age and gender; considers, therefore, that strong scientific and ethical standards are needed and that strong efforts should be made to avoid automated discrimination and bias;
Amendment 134 #
Motion for a resolution
Paragraph 6 a (new)
6 a. Calls for strong additional safeguards in case AI systems in law enforcement or the judiciary are used on or in relation to minors, who are particularly vulnerable;
Amendment 137 #
Motion for a resolution
Paragraph 7
Amendment 141 #
Motion for a resolution
Paragraph 8
8. Takes note of the risks related to data leaks, data security breaches and unauthorised access to personal data and other information related to criminal investigations or court cases that are processed by AI systems; underlines that the security and safety aspects of AI systems used in law enforcement need to be carefully considered, and that such systems must be sufficiently robust and resilient to prevent the potentially catastrophic consequences of malicious attacks on AI systems;
Amendment 148 #
Motion for a resolution
Paragraph 9
9. Considers it necessary to create a clear and fair regime for assigning legal responsibility and liability for the potential adverse consequences produced by these advanced digital technologies;
Amendment 151 #
Motion for a resolution
Paragraph 9 a (new)
9 a. Calls for the adoption of appropriate procurement processes for AI systems by Member States and EU agencies when such systems are used in a law enforcement or judicial context, so as to ensure their compliance with fundamental rights;
Amendment 157 #
Motion for a resolution
Paragraph 10
10. Takes the view that law enforcement and judicial authorities that make use of AI systems need to uphold high legal standards, in particular when analysing data; underlines the need to ensure human intervention and accountability throughout the different stages of decision-making, in order to assess both the quality of the data and the appropriateness of each decision taken on the basis of that information; considers that persons subject to these systems should be given the possibility of recourse to a remedy;
Amendment 163 #
Motion for a resolution
Paragraph 11
11. Calls for algorithmic explainability and transparency as a necessary part of oversight, in order to ensure that the development, deployment and use of AI systems for the judiciary and law enforcement comply with fundamental rights and are trusted by citizens, as well as to ensure that the results generated by AI algorithms can be rendered intelligible to users and to those subject to these systems, and that there is transparency on the source data and on how the system arrived at a certain conclusion;
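As a purely illustrative aside (not part of the amendment text): the Python sketch below shows one minimal way in which a result generated by an algorithm can be rendered intelligible, by reporting how each input contributed to a score; the weights, feature names and case data are hypothetical and do not reflect any real system.

```python
# Purely illustrative sketch (hypothetical weights and features): rendering an
# algorithmic score intelligible by reporting each input's contribution to it.
weights = {
    "prior_convictions": 0.8,
    "months_since_last_offence": -0.05,
    "open_cases": 0.4,
}


def score_with_explanation(case: dict) -> tuple[float, list[str]]:
    """Return a score plus a human-readable breakdown of how it was reached."""
    contributions = {name: weights[name] * case[name] for name in weights}
    total = sum(contributions.values())
    explanation = [
        f"{name} = {case[name]} contributed {value:+.2f}"
        for name, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    ]
    return total, explanation


case = {"prior_convictions": 2, "months_since_last_offence": 18, "open_cases": 1}
total, explanation = score_with_explanation(case)
print(f"score: {total:.2f}")
for line in explanation:
    print(" -", line)
# The output lists the source attributes and their weight in the conclusion, so a
# user or a person subject to the system can see how the result was arrived at.
```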
Amendment 167 #
Motion for a resolution
Paragraph 12
12. Calls for traceability of the decision-making process of AI systems within law enforcement and the judiciary which outlines the functions and limitations of the systems and keeps track of where the defining attributes for a decision originate, for instance through compulsory documentation obligations;
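As a purely illustrative aside (not part of the amendment text): the sketch below shows, in Python, what a minimal documentation obligation of this kind could look like in practice, an append-only log recording which attributes each automated decision relied on; the field names and file format are assumptions made purely for illustration.

```python
# Purely illustrative sketch (assumed field names and file format): an append-only
# decision log of the kind a compulsory documentation obligation could require, so
# that the origin of the defining attributes of a decision can be traced afterwards.
import datetime
import hashlib
import json


def log_decision(path: str, system: str, version: str, inputs: dict, output: str) -> dict:
    """Append one traceability record per automated decision to a JSON-lines file."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system": system,
        "model_version": version,
        "inputs": inputs,    # which attributes the decision relied on
        "output": output,    # what the system concluded
        # Tamper-evident fingerprint of the inputs used for this decision.
        "input_digest": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record


record = log_decision(
    "decisions.jsonl",
    system="case-triage-tool",
    version="1.4.2",
    inputs={"case_id": "A-123", "document_count": 42},
    output="flag for human review",
)
print(record["input_digest"][:16])
```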
Amendment 172 #
Motion for a resolution
Paragraph 13
13. Calls for a compulsory fundamental rights impact assessment to be conducted prior to the implementation or deployment of any AI systems for law enforcement or judiciary purposes, in order to assess any potential risks to fundamental rights and, where necessary, define appropriate safeguards to address these risks;
Amendment 174 #
Motion for a resolution
Paragraph 13 a (new)
13 a. Deplores that many law enforcement and judicial authorities in the EU lack the funding, capacities and capabilities to reap the benefits AI tools can offer for their work; encourages law enforcement and judicial authorities to identify, structure and categorise their needs to enable the development of tailor-made AI solutions and to exchange best practices on AI deployment; stresses the need to provide the authorities with the necessary funding, as well as to equip them with the necessary expertise to guarantee full compliance with the ethical, legal and technical requirements attached to AI deployment;
Amendment 175 #
Motion for a resolution
Paragraph 13 b (new)
Amendment 176 #
Motion for a resolution
Paragraph 14
14. Calls for an adequate institutional framework, including proper regulatory and supervisory oversight, to ensure proper implementation; calls for periodic mandatory auditing of all AI systems used by law enforcement and the judiciary by an independent authority to test and evaluate the context, purpose, accuracy, performance and scale of algorithmic systems once they are in operation, in order to detect, investigate, diagnose and rectify any unwanted and adverse effects and thereby ensure continuous compliance with the applicable regulatory framework;
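As a purely illustrative aside (not part of the amendment text): the Python sketch below shows the core of one possible periodic audit check, comparing a deployed system's accuracy on a freshly labelled sample against the level recorded at approval time; the baseline, tolerance and toy data are assumptions for illustration only.

```python
# Purely illustrative sketch (hypothetical baseline, tolerance and data): the core
# of a periodic audit check comparing a deployed system's accuracy on a freshly
# labelled sample with the level recorded when the system was approved.
def audit_accuracy(predictions: list[str], ground_truth: list[str],
                   baseline: float, max_drop: float = 0.05) -> dict:
    """Flag the system for investigation if accuracy fell notably below its baseline."""
    correct = sum(p == t for p, t in zip(predictions, ground_truth))
    accuracy = correct / len(ground_truth)
    return {
        "accuracy": round(accuracy, 3),
        "baseline": baseline,
        "action_required": accuracy < baseline - max_drop,
    }


# A quarterly audit sample (toy data): the auditing authority re-labels cases and
# compares the labels with what the deployed system predicted.
report = audit_accuracy(
    predictions=["match", "no match", "match", "no match", "match", "no match"],
    ground_truth=["match", "no match", "no match", "no match", "match", "no match"],
    baseline=0.95,
)
print(report)  # accuracy 0.833 -> action_required: True
```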
Amendment 179 #
Motion for a resolution
Paragraph 14 a (new)
14 a. Supports the recommendations of the Commission’s High-Level Expert Group on AI for a ban on AI-enabled mass scale scoring of individuals; considers that any form of normative citizen scoring on a large scale by public authorities, in particular within the field of law enforcement and the judiciary, leads to the loss of autonomy, endangers the principle of non-discrimination and cannot be considered in line with European values;
Amendment 181 #
Motion for a resolution
Paragraph 14 b (new)
14 b. Welcomes the recommendations of the Commission’s High-Level Expert Group on AI for a proportionate use of biometric recognition technology; shares the view that the use of remote biometric identification should always be considered “high risk” and therefore be subject to additional requirements;
Amendment 182 #
Motion for a resolution
Paragraph 15
15. Strongly believes that the deployment of facial recognition systems by law enforcement should be limited to clearly warranted purposes in full respect of the applicable law; reaffirms that, as a minimum, the use of facial recognition technology must comply with the requirements of data minimisation, data accuracy, storage limitation, data security and accountability, as well as being lawful, fair and transparent and following a specific, explicit and legitimate purpose that is clearly defined in Member State or Union law; reminds that these systems are already successfully used, inter alia to search suspect databases and identify victims of human trafficking or child sexual exploitation and abuse; emphasises the need to ensure that the technical standards and underlying algorithms can be considered fully compliant with fundamental rights and that the results derived are non-discriminatory; believes that this will be decisive in ensuring public trust and support for the deployment of such technologies;
Amendment 190 #
Motion for a resolution
Paragraph 15 a (new)
15 a. Notes that predictive policing is among the AI applications used in the area of law enforcement; acknowledges that this can allow law enforcement to work more effectively and proactively, but warns that while predictive policing can analyse the necessary data sets for the identification of patterns and correlations, it cannot answer the question of causality and therefore cannot constitute the sole basis for an intervention;
Amendment 194 #
Motion for a resolution
Paragraph 16
16. Calls for greater overall transparency from Member States regarding the use of AI applications in the Union; requests Member States to provide an overview of the tools used by their law enforcement and judicial authorities, the purposes for which they are used, and the names of the companies or organisations which have developed those tools;
Amendment 197 #
Motion for a resolution
Paragraph 16 a (new)
16 a. Reminds that AI applications, including applications used in the context of law enforcement and the judiciary, are being developed globally at a rapid pace; urges all European stakeholders, including the Commission and EU agencies, to ensure international cooperation and to engage third country partners in order to find a common and complementary ethical framework for the use of AI, in particular for law enforcement and the judiciary;