Activities of Stéphane SÉJOURNÉ related to 2020/2012(INL)
Shadow reports (1)
REPORT with recommendations to the Commission on a framework of ethical aspects of artificial intelligence, robotics and related technologies
Amendments (142)
Amendment 10 #
Motion for a resolution
Recital A
Recital A
A. whereas artificial intelligence, robotics and related technologies generate opportunities for business and citizens while directly impacting all aspects of our societies, and are being developed very quickly;
Amendment 12 #
Motion for a resolution
Recital A a (new)
Recital A a (new)
Aa. whereas artificial intelligence, robotics and related technologies can make a huge contribution to reaching our common goal of improving the lives of citizens and fostering prosperity within the EU;
Amendment 13 #
Motion for a resolution
Recital A b (new)
Recital A b (new)
Ab. whereas, in areas such as health, agriculture, energy, transport, climate and various industrial processes, artificial intelligence, robotics and related technologies can contribute to the development of better strategies and innovations;
Amendment 14 #
Motion for a resolution
Recital A c (new)
Recital A c (new)
Ac. whereas the development of artificial intelligence, robotics and related technologies is also a condition to reach the sustainability goals of the European Green Deal in many different sectors; whereas digital technologies can boost the impact of policies in delivering environmental protection;
Amendment 18 #
Motion for a resolution
Recital B a (new)
Recital B a (new)
Ba. whereas a European operational framework is of key importance in avoiding the fragmentation of the Single Market resulting from differing national legislations; whereas action at European level will help foster much-needed investment, data infrastructure, research and common ethical norms; whereas this framework should be established according to the better regulation principle;
Amendment 19 #
Motion for a resolution
Recital B b (new)
Recital B b (new)
Bb. whereas such a framework should include legislative actions, where needed, including mandatory measures to prevent practices that would undoubtedly undermine fundamental rights and freedoms as defined in the Charter of Fundamental Rights of the European Union;
Amendment 21 #
Motion for a resolution
Recital C
Recital C
C. whereas a common European framework should ensure the development, the deployment and the use of trustworthy, ethical and technically robust artificial intelligence, based on the Union’s laws and values and guided by the principles of transparency and explainability, fairness, accountability and responsibility;
Amendment 35 #
Motion for a resolution
Recital F
Recital F
F. whereas the scope of that framework should be adequate, proportionate and thoroughly assessed; whereas, while it should cover a wide range of technologies and their components, including algorithms, software and data used or produced by them, a targeted approach based on the concept of high risk is necessary to avoid hampering future innovations;
Amendment 39 #
Motion for a resolution
Recital G
Recital G
G. whereas that framework should make sure that the development, the deployment and the use of the relevant technologies and their components are fully compliant with the Union’s principles and values;
Amendment 44 #
Motion for a resolution
Recital I
Recital I
I. whereas action at Union level is justified by the need for a homogenous application of common ethical principles when developing, deploying and using artificial intelligence, robotics and related technologies; whereas clear rules are needed where major risks are at stake;
Amendment 47 #
Motion for a resolution
Recital I a (new)
Recital I a (new)
Ia. whereas the European Union needs to recognise, harness and promote the benefits of artificial intelligence, robotics and related technologies for the society, while democratically deciding on the limitations to be laid down and safeguards to be provided to ensure the development, deployment and use of ethically embedded technologies that respect the Charter of Fundamental Rights of the European Union;
Amendment 52 #
Motion for a resolution
Recital K
Recital K
K. whereas each Member State should designate a national supervisory authority responsible for assessing the compliance of high-risk technologies with the ethical framework, and for enabling discussion and exchange of points of view in close cooperation with the concerned stakeholders and civil society;
Amendment 63 #
Motion for a resolution
Recital L
Recital L
L. whereas the idea of entrusting an existing European body with the task of ensuring a harmonised approach across the Union should be assessed as regards the new opportunities and challenges, in particular those of a cross-border nature, arising from ongoing technological developments.
Amendment 67 #
Motion for a resolution
Paragraph -1 (new)
Paragraph -1 (new)
-1. Believes that any legislative action, in particular related to new technologies, should be in line with the principles of necessity and proportionality; points out, in this respect, that the ethical framework considered in this report should be applicable to high-risk artificial intelligence, robotics and related technologies;
Amendment 68 #
Motion for a resolution
Paragraph -1 a (new)
Paragraph -1 a (new)
-1a. Considers that such an approach will allow companies to introduce innovative products into the market and create new opportunities while ensuring the protection of European values;
Amendment 69 #
Motion for a resolution
Paragraph -1 b (new)
Paragraph -1 b (new)
-1b. Considers that artificial intelligence, robotics and related technologies should be considered ‘high-risk technologies’ when they are used in sectors where, given the characteristics of the activities typically undertaken, significant risks can be expected to occur from the viewpoint of safety and fundamental rights and freedoms, and are used in such a manner that such risks can be expected to occur;
Amendment 70 #
Motion for a resolution
Paragraph -1 c (new)
Paragraph -1 c (new)
-1c. Asks the Commission to establish an exhaustive list of the technologies fulfilling these criteria in the form of an annex to the Regulation on ethical principles for the development, deployment and use of artificial intelligence, robotics and related technologies; considers that the Commission should review the exhaustive list, if necessary, every six months by means of a delegated act to add new high-risk technologies or delete existing ones if they no longer fulfil the criteria; recalls that any changes to the annex should be thoroughly assessed and justified;
Amendment 73 #
Motion for a resolution
Paragraph 1
Paragraph 1
1. Declares that the development, deployment and use of high-risk artificial intelligence, robotics and related technologies, including but not exclusively by human beings, should always respect human agency and oversight, as well as allow the retrieval of human control at any time;
Amendment 80 #
Motion for a resolution
Paragraph 2
Paragraph 2
Amendment 96 #
Motion for a resolution
Paragraph 3
Paragraph 3
3. Maintains that high-risk technologies, including the software, algorithms and data used or produced by such technologies, should be developed in a legal, secure and technically robust manner and in good faith;
Amendment 100 #
Motion for a resolution
Paragraph 4
Paragraph 4
4. Underlines that transparency and explainability are essential to ensure citizens’ trust in these technologies, even if the degree of explainability is relative to the complexity of the technologies, and that it should be complemented by auditability and traceability; considers that the respect of these principles is a precondition to guarantee accountability;
Amendment 105 #
Motion for a resolution
Paragraph 4 a (new)
Paragraph 4 a (new)
4a. Considers that citizens should be informed when interacting with a system that uses artificial intelligence, in particular one that personalises a product or service for its users, and whether they can switch off or restrain the personalisation; considers, furthermore, that transparency measures should be accompanied, as far as is technically possible, by clear and understandable explanations of the data used, of the algorithm, of its purpose, of its outcomes, and of its potential dangers;
Amendment 110 #
Motion for a resolution
Paragraph 5
Paragraph 5
5. Is concerned by the risks of bias and discrimination in the development, deployment and use of high-risk artificial intelligence, robotics and related technologies, including the software, algorithms and data used or produced by such technologies; recalls that, in all circumstances, they should respect human dignity and ensure equal treatment for all;
Amendment 117 #
Motion for a resolution
Paragraph 6
Paragraph 6
6. Considers that possible bias in and discrimination by software, algorithms and data could be addressed by setting rules on the data processes through which they are designed and used, as this approach would have the potential to turn software, algorithms and data into a considerable counterbalance to bias and discrimination, and a positive force for social change; points out also that the use of AI, robotics and related technologies has the potential to fight discrimination in certain situations;
Amendment 122 #
Motion for a resolution
Subheading 5 a (new)
Subheading 5 a (new)
Right to redress
Amendment 123 #
Motion for a resolution
Paragraph 6 a (new)
Paragraph 6 a (new)
6a. Considers that any natural or legal person should be able to seek redress against a decision issued by a high-risk artificial intelligence, robotics or related technology to his or her detriment;
Amendment 130 #
Motion for a resolution
Paragraph 7
Paragraph 7
7. Emphasises that socially responsible artificial intelligence, robotics and related technologies should aim at safeguarding and promoting fundamental values of our society such as democracy, diverse and independent media and objective and freely available information, health and economic prosperity, equality of opportunity, workers’ and social rights, quality education, cultural and linguistic diversity, gender balance, digital literacy, innovation and creativity;
Amendment 144 #
Motion for a resolution
Paragraph 10
Paragraph 10
10. States that it is essential that artificial intelligence, robotics and related technologies can be used by governments and businesses to support the achievement of sustainable development, climate neutrality and circular economy goals; the development, deployment and use of these technologies should be environmentally friendly and contribute to minimising any harm caused to the environment during their lifecycle and across their entire supply chain;
Amendment 150 #
Motion for a resolution
Paragraph 12 a (new)
Paragraph 12 a (new)
12a. Considers that the objectives of social responsibility, gender balance, environmental protection and sustainability should be without prejudice to existing general and sectoral obligations in these fields; believes that the Commission should establish non-binding guidelines intended for developers, deployers and users on the methodology for achieving these objectives;
Amendment 160 #
Motion for a resolution
Subheading 8 a (new)
Subheading 8 a (new)
Public power decisions
Amendment 161 #
Motion for a resolution
Paragraph 15 a (new)
Paragraph 15 a (new)
15a. Points out that while the benefits of deploying artificial intelligence, robotics and related technologies within the framework of public power decisions are unquestionable, severe misuses are also possible, such as mass surveillance, predictive policing and breaches of due process rights;
Amendment 162 #
Motion for a resolution
Paragraph 15 b (new)
Paragraph 15 b (new)
15b. Considers that technologies which can replace decisions taken by public authorities should be treated with the utmost precaution, notably in the area of justice and law enforcement;
Amendment 163 #
Motion for a resolution
Paragraph 15 c (new)
Paragraph 15 c (new)
15c. Believes that Member States should have recourse to such technologies only if there is thorough evidence of their trustworthiness and if human verification is possible or systematic in cases where fundamental liberties are at stake; underlines the importance for national authorities to undertake a strict fundamental rights impact assessment of high-risk artificial intelligence systems deployed in these cases;
Amendment 164 #
Motion for a resolution
Paragraph 15 d (new)
Paragraph 15 d (new)
15d. Is of the opinion that any decision taken by high-risk artificial intelligence, robotics or related technologies within the framework of prerogatives of public power should be subject to strict human verification and due process;
Amendment 165 #
Motion for a resolution
Paragraph 15 e (new)
Paragraph 15 e (new)
15e. Believes that the technological advancement should not allow for the use of artificial intelligence, robotics and related technologies to autonomously distribute rights or to impose legal obligations on individuals;
Amendment 167 #
Motion for a resolution
Paragraph 16
Paragraph 16
16. Stresses that appropriate governance of the development, deployment and use of high-risk artificial intelligence, robotics and related technologies, including by having measures in place focusing on accountability and addressing potential risks of bias and discrimination, increases citizens’ safety and trust in those technologies;
Amendment 178 #
Motion for a resolution
Paragraph 18
Paragraph 18
18. Underlines the need to ensure that personal data, especially data belonging to vulnerable groups such as people with disabilities, patients, children, minorities and migrants, is protected adequately;
Amendment 208 #
Motion for a resolution
Paragraph 22
Paragraph 22
Amendment 217 #
Motion for a resolution
Paragraph 23
Paragraph 23
23. Calls on the Commission to coordinate the actions of each national supervisory authority as referred to in the previous sub-section;
Amendment 220 #
Motion for a resolution
Paragraph 23 a (new)
Paragraph 23 a (new)
23a. Calls on the Commission to assess whether a European body would be necessary to ensure a harmonised implementation of the European ethical framework for high-risk artificial intelligence, robotics and related technologies;
Amendment 221 #
Motion for a resolution
Paragraph 23 b (new)
Paragraph 23 b (new)
23b. Calls on the Commission to explore entrusting an existing EU body, such as ENISA, the EDPS or the European Ombudsman, with ensuring the harmonised implementation of the European ethical framework for high-risk artificial intelligence, robotics and related technologies;
Amendment 229 #
Motion for a resolution
Paragraph 24
Paragraph 24
24. Believes that such a body, as well as the certification referred to in the following paragraph, would not only benefit the development of Union industry and innovation in that context but also increase the awareness of our citizens regarding the opportunities and risks inherent to these technologies;
Amendment 236 #
Motion for a resolution
Paragraph 25
Paragraph 25
25. Calls on the Commission to explore the possibility to develop a European certification of ethical compliance, to be granted at the request of developers, deployers or users of artificial intelligence, robotics and related technologies which comply with the European ethical framework;
Amendment 249 #
Motion for a resolution
Paragraph 27
Paragraph 27
27. Recalls that the opportunities and risks inherent to these technologies have a global dimension that requires a consistent approach at international level and thus calls on the Commission to work in bilateral and multilateral settings to advocate and promote the European model of ethical compliance.
Amendment 256 #
Motion for a resolution
Paragraph 28
Paragraph 28
28. Points out the added value of a European regulatory framework for high-risk artificial intelligence, robotics and related technologies, as referred to above, in this context as well.
Amendment 258 #
Motion for a resolution
Paragraph 29
Paragraph 29
29. Concludes, following the above reflections on aspects related to the ethical dimension of high-risk artificial intelligence, robotics and related technologies, that the ethical dimension should be framed as a series of principles resulting in a legal framework at Union level supervised by national competent authorities, coordinated and enhanced by a European Agency for Artificial Intelligence and duly respected and certified within the internal market;
Amendment 269 #
Motion for a resolution
Paragraph 31
Paragraph 31
31. Recommends that the European Commission, after consulting with all the relevant stakeholders, review existing Union law applicable to high-risk artificial intelligence, robotics and related technologies, revising it where necessary to address the rapidity of their development, in line with the recommendations set out in the annex hereto;
Amendment 270 #
Motion for a resolution
Paragraph 31 a (new)
Paragraph 31 a (new)
31a. Believes that a periodical assessment of the European regulatory framework related to artificial intelligence, robotics and related technologies will be essential to ensure that the applicable legislation keeps pace with rapid technological progress;
Amendment 274 #
Motion for a resolution
Paragraph 32
Paragraph 32
32. Considers that the requested proposal would have financial implications if an existing European body is entrusted with the above-mentioned functions, in order to ensure that it has the necessary technical means and human resources to fulfil its newly attributed tasks;
Amendment 276 #
Motion for a resolution
Annex I – part A – point I – indent 1
Annex I – part A – point I – indent 1
- to build trust in artificial intelligence, robotics and related technologies by ensuring that, when these technologies entail a high risk for the protection of safety and fundamental rights and freedoms, their development, deployment and use will be done in an ethical manner;
Amendment 287 #
Motion for a resolution
Annex I – part A – point I – indent 3
Annex I – part A – point I – indent 3
- to support the deployment of artificial intelligence, robotics and related technologies in the Union by providing an appropriate and proportionate regulatory framework;
Amendment 295 #
Motion for a resolution
Annex I – part A – point II – indent 2
Annex I – part A – point II – indent 2
Amendment 302 #
Motion for a resolution
Annex I – part A – point II – indent 5 a (new)
Annex I – part A – point II – indent 5 a (new)
- an annex establishing a list of high-risk technologies which fall under the scope of this Regulation;
Amendment 306 #
Motion for a resolution
Annex I – part A – point III – indent 2
Annex I – part A – point III – indent 2
- compliance assessment of high-risk artificial intelligence, robotics and related technologies;
Amendment 310 #
Motion for a resolution
Annex I – part A – point III – indent 4 a (new)
Annex I – part A – point III – indent 4 a (new)
- right to redress;
Amendment 314 #
Motion for a resolution
Annex I – part A – point III – indent 7 a (new)
Annex I – part A – point III – indent 7 a (new)
- safeguards related to the use of high-risk artificial intelligence, robotics and related technologies within the framework of public power decisions;
Amendment 318 #
Motion for a resolution
Annex I – part A – point IV – indent 1 a (new)
Annex I – part A – point IV – indent 1 a (new)
- regularly assessing and, if necessary, reviewing the annex to the Regulation by means of a delegated act;
Amendment 321 #
Motion for a resolution
Annex I – part A – point IV – indent 2 a (new)
Annex I – part A – point IV – indent 2 a (new)
- engaging in discussions on global ethical norms at international level;
Amendment 322 #
Motion for a resolution
Annex I – part A – point IV – indent 2 b (new)
Annex I – part A – point IV – indent 2 b (new)
- establishing binding guidelines on the methodology of the compliance assessment to be followed by the national supervisory authorities;
Amendment 323 #
Motion for a resolution
Annex I – part A – point IV – indent 2 c (new)
Annex I – part A – point IV – indent 2 c (new)
- establishing non-binding guidelines directed to the developers, the deployers and the users;
Amendment 327 #
Motion for a resolution
Annex I – part A – point V
Annex I – part A – point V
Amendment 343 #
Motion for a resolution
Annex I – part A – point VI – indent 1
Annex I – part A – point VI – indent 1
- to assess the compliance of high-risk artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by such technologies, with the ethical principles set out in the proposed Regulation;
Amendment 345 #
Motion for a resolution
Annex I – part A – point VI – indent 2
Annex I – part A – point VI – indent 2
Amendment 350 #
Motion for a resolution
Annex I – part A – point VI – indent 4
Annex I – part A – point VI – indent 4
Amendment 360 #
Motion for a resolution
Annex I – part A – point VII
Annex I – part A – point VII
VII. The key role of stakeholders should be to engage with the Commission, the European Agency for Artificial Intelligence and the “Supervisory Authority” in each Member State.
Amendment 365 #
Motion for a resolution
Annex I – part B – recital 1
Annex I – part B – recital 1
(1) The development, deployment and use of artificial intelligence, robotics and related technologies, including the software, algorithms and data used or produced by such technologies, should be based on a desire to serve society. Some of these technologies can entail opportunities and risks, which should be addressed and regulated by a comprehensive legal framework of ethical principles to be complied with from the moment of the development and deployment of such technologies to their use.
Amendment 372 #
Motion for a resolution
Annex I – part B – recital 3
Annex I – part B – recital 3
(3) In this context, the current diversity of the rules and practices to be followed across the Union poses a significant risk of fragmentation of the Single Market, as well as to innovation. Differences in the degree of consideration of the ethical dimension inherent to these technologies can prevent them from being freely developed, deployed or used within the Union, and such differences can constitute an obstacle to a level playing field and to the pursuit of technological progress and economic activities at Union level, distort competition and impede authorities in the fulfilment of their obligations under Union law. In addition, the absence of a common framework of ethical principles for the development, deployment and use of artificial intelligence, robotics and related technologies results in legal uncertainty for all those involved, namely developers, deployers and users.
Amendment 373 #
Motion for a resolution
Annex I – part B – recital 4
Annex I – part B – recital 4
Amendment 378 #
Motion for a resolution
Annex I – part B – recital 6
Annex I – part B – recital 6
(6) A common understanding in the Union of notions such as artificial intelligence, robotics, related technologies, algorithms and biometric recognition is required in order to allow for a harmonized regulatory approach. However, the specific legal definitions need to be developed in the context of this Regulation without prejudice to other definitions used in other legal acts and international jurisdictions. They should be technologically neutral and also subject to review whenever necessary.
Amendment 389 #
Motion for a resolution
Annex I – part B – recital 8 a (new)
Annex I – part B – recital 8 a (new)
(8a) This Regulation should be strictly proportionate to its objective so as not to hamper innovation in the Union. In this respect, it should be based on a targeted risk-based approach focusing on specific sectors where major interests are at stake.
Amendment 390 #
Motion for a resolution
Annex I – part B – recital 8 b (new)
Annex I – part B – recital 8 b (new)
(8b) The scope of the Regulation should be limited to high-risk artificial intelligence, robotics and related technologies used in sectors where, given the characteristics of the activities typically undertaken, significant risks can be expected to occur, and where their use is likely to create a significant risk. A significant risk should be understood as one directly endangering the protection of safety or fundamental rights and freedoms.
Amendment 391 #
Motion for a resolution
Annex I – part B – recital 8 c (new)
Annex I – part B – recital 8 c (new)
(8c) The sectors covered should notably, but not exclusively, include healthcare, transport, energy and finance. The high-risk technologies should be exclusively listed in the annex to this Regulation, which should be revised on a regular basis to keep up with technological development.
Amendment 392 #
Motion for a resolution
Annex I – part B – recital 9
Annex I – part B – recital 9
(9) High-risk technologies, including the software, algorithms and data used or produced by such technologies, should respect the principles of safety, transparency, accountability, non-bias or non-discrimination, social responsibility and gender balance, the right to redress, sustainability, privacy and governance.
Amendment 396 #
Motion for a resolution
Annex I – part B – recital 9 a (new)
Annex I – part B – recital 9 a (new)
(9a) The Commission should prepare non-binding guidelines on the methodology for compliance with this Regulation, intended for developers, deployers and users. In doing so, the Commission should consult relevant stakeholders.
Amendment 398 #
Motion for a resolution
Annex I – part B – recital 10
Annex I – part B – recital 10
(10) High-risk technologies should always be assessed on the basis of the objective criteria laid down in this Regulation and without prejudice to the relevant sector-specific legislation applicable in different fields such as those of health, transport, employment, justice and home affairs, media, education and culture.
Amendment 401 #
Motion for a resolution
Annex I – part B – recital 10 a (new)
Annex I – part B – recital 10 a (new)
(10a) When a high-risk technology has been considered compliant with the principles laid down in this Regulation, the software, algorithms and data which are used or produced by that technology should be presumed compliant with this Regulation, unless the national supervisory authority decides to conduct an assessment on its own initiative or at the request of the developer, the deployer or the user.
Amendment 405 #
Motion for a resolution
Annex I – part B – recital 12
Annex I – part B – recital 12
(12) Developers, deployers and users of high-risk technologies are responsible for compliance with safety, transparency, and accountability principles to the extent of their involvement with the artificial intelligence, robotics and related technologies concerned, including the software, algorithms and data used or produced by such technologies. Developers should ensure that the technologies concerned are designed and built in line with safety features, whereas deployers and users should deploy and use the concerned technologies in full observance of those features. To this end, developers of high- risk technologies should evaluate and anticipate the risks of misuse of their own technologies, in order to respond effectively if the problem arises.
Amendment 409 #
Motion for a resolution
Annex I – part B – recital 14
Annex I – part B – recital 14
Amendment 414 #
Motion for a resolution
Annex I – part B – recital 16
Annex I – part B – recital 16
(16) Society’s trust in high-risk technologies, including the software, algorithms and data used or produced by such technologies, depends on the degree to which their assessment, auditability and traceability are enabled in the technologies concerned. Where the extent of their involvement so requires, developers should ensure that such technologies are designed and built in a manner that enables such an assessment, auditing and traceability. Within the limits of what is technically possible, developers, deployers and users should ensure that artificial intelligence, robotics and related technologies are deployed and used in full respect of transparency requirements and allow for auditing and traceability.
Amendment 417 #
Motion for a resolution
Annex I – part B – recital 16 a (new)
Annex I – part B – recital 16 a (new)
(16a) In order to ensure transparency and accountability, citizens should be informed when a system uses artificial intelligence, when AI systems personalise a product or service for their users, whether they can switch off or restrain the personalisation, and when they are faced with an automated decision-making technology. Furthermore, transparency measures should be accompanied, as far as is technically possible, by clear and understandable explanations of the data used, of the algorithm, of its purpose, of its outcomes, and of its potential dangers;
Amendment 418 #
Motion for a resolution
Annex I – part B – recital 17
Annex I – part B – recital 17
(17) Bias in and discrimination by software, algorithms and data is unlawful and should be addressed by regulating the processes through which they are designed and deployed.
Amendment 420 #
Motion for a resolution
Annex I – part B – recital 18
Annex I – part B – recital 18
(18) Software, algorithms and data used or produced by high-risk technologies1a should be considered biased where, for example, they display suboptimal results in relation to any person or group of persons, on the basis of a prejudiced personal, social or partial perception and subsequent processing of data relating to their traits.
__________________
1a From this point, “artificial intelligence, robotics and related technologies” should be replaced by “high-risk technologies” throughout the recitals.
Amendment 437 #
Motion for a resolution
Annex I – part B – recital 28
Annex I – part B – recital 28
(28) Where applicable, the development, deployment and use of artificial intelligence, robotics and related technologies, including the software, algorithms and data used or produced by such technologies, should take into consideration their environmental footprint and should not cause harm to the environment during their lifecycle and across their entire supply chain. Accordingly, such technologies should be developed, deployed and used in an environmentally friendly manner that supports the achievement of climate neutrality and circular economy goals.
Amendment 438 #
Motion for a resolution
Annex I – part B – recital 29
Annex I – part B – recital 29
Amendment 440 #
Motion for a resolution
Annex I – part B – recital 30
Annex I – part B – recital 30
Amendment 443 #
Motion for a resolution
Annex I – part B – recital 31
Annex I – part B – recital 31
(31) Where applicable, these technologies should also be developed, deployed and used with a view to supporting the achievement of environmental goals such as reducing waste production, diminishing the carbon footprint, preventing climate change and avoiding environmental degradation, and their potential in that context should be maximised and explored through research and innovation projects. The Union and the Member States should therefore mobilise their resources for the purpose of supporting and investing in such projects.
Amendment 449 #
Motion for a resolution
Annex I – part B – recital 34
Annex I – part B – recital 34
(34) The ethical boundaries of the use of artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by such technologies, should be duly considered when using remote recognition technologies, such as biometric recognition, to automatically identify individuals. When these technologies are used by public authorities during times of national emergency, such as during a national health crisis, the use should be proportionate and temporary. Clear criteria for that use should be defined in order to be able to determine whether, when and how it should take place, and such use should be mindful of its psychological and sociocultural impact with due regard for human dignity and the fundamental rights set out in the Charter.
Amendment 450 #
Motion for a resolution
Annex I – part B – recital 35
Annex I – part B – recital 35
Amendment 452 #
Motion for a resolution
Annex I – part B – recital 35 a (new)
Annex I – part B – recital 35 a (new)
(35a) Public authorities should conduct a fundamental rights impact assessment before deploying high-risk technologies which replace public power decisions and which have a direct and significant impact on citizens’ rights and obligations. In addition, these technologies should allow for human verification and due process, especially in the areas of justice and law enforcement, where fundamental rights protected by the Charter are at stake.
Amendment 453 #
Motion for a resolution
Annex I – part B – recital 36
Annex I – part B – recital 36
(36) Developers, deployers and users should continue to observe the existing relevant governance standards, for example the ‘Ethics Guidelines for Trustworthy AI’ drafted by the High-Level Expert Group on Artificial Intelligence set up by the European Commission, and other technical standards adopted, at European level, by the European Committee for Standardization (CEN), the European Committee for Electrotechnical Standardization (CENELEC) and the European Telecommunications Standards Institute (ETSI) and, at international level, by the International Organization for Standardization (ISO) and the Institute of Electrical and Electronics Engineers (IEEE).
Amendment 460 #
Motion for a resolution
Annex I – part B – recital 38
Annex I – part B – recital 38
(38) Member States should appoint an independent public authority to act as a supervisory authority. Each national supervisory authority should be responsible for assessing and monitoring the compliance of high-risk technologies with this Regulation.
Amendment 465 #
Motion for a resolution
Annex I – part B – recital 39
Annex I – part B – recital 39
(39) Each national supervisory authority shall also carry the responsibility of regulating the governance of these technologies. They will have an important role to play in promoting the trust and safety of Union citizens, as well as in enabling a democratic, pluralistic and equitable society.
Amendment 473 #
Motion for a resolution
Annex I – part B – recital 42 a (new)
Annex I – part B – recital 42 a (new)
(42a) The Commission should establish binding guidelines to be followed by the national supervisory authorities when conducting their compliance assessment.
Amendment 475 #
Motion for a resolution
Annex I – part B – recital 44
Annex I – part B – recital 44
(44) The rapid development of artificial intelligence, robotics and related technologies, including the software, algorithms and data used or produced by such technologies, as well as of the technical machine learning, reasoning processes and other technologies underlying that development, is unpredictable. As such, it is both appropriate and necessary to establish a review mechanism in accordance with which, in addition to its reporting on the application of the Regulation, the Commission is to regularly submit a report concerning the possible modification of the scope of application of this Regulation. In addition, the Commission should review, if necessary, every six months the annex to this Regulation by means of a delegated act.
Amendment 480 #
Motion for a resolution
Annex I – part B – recital 46
Annex I – part B – recital 46
Amendment 484 #
Motion for a resolution
Annex I – part B – before Article 1 – chapter title (new)
Annex I – part B – before Article 1 – chapter title (new)
Chapter I: General provisions
Amendment 488 #
Motion for a resolution
Annex I – part B – Article 2 – paragraph 1
Annex I – part B – Article 2 – paragraph 1
1. This Regulation applies to high-risk artificial intelligence, robotics and related technologies, hereafter “high-risk technologies”1a, including software, algorithms and data used or produced by such technologies, developed, deployed or used in the Union.
2. Within the meaning of this Regulation, technologies are considered high-risk technologies where they:
(a) are used in sectors where, given the characteristics of the activities typically undertaken, significant risks can be expected to occur from the viewpoint of the protection of safety and fundamental rights and freedoms, and
(b) are used in such a manner that such risks can be expected to occur from the viewpoint of the protection of safety and fundamental rights and freedoms.
3. The high-risk technologies mentioned in paragraph 2 shall be listed in an Annex to this Regulation. The Commission is empowered to adopt delegated acts in accordance with Article 16a to amend the exhaustive list by:
(a) including new types of high-risk artificial intelligence, robotics and related technologies;
(b) deleting types of technologies that can no longer be considered to pose a high risk;
(c) changing the critical sectors for existing high-risk technologies.
Any delegated act amending the Annex shall come into force six months after its adoption. When determining new critical sectors and/or high-risk technologies to be inserted by means of delegated acts in the Annex, the Commission shall take full account of the criteria set out in this Regulation, in particular those set out in paragraph 2 of this Article.
__________________
1a From this point, “artificial intelligence, robotics and related technologies” is replaced by “high-risk technologies” throughout the Regulation.
Amendment 494 #
Motion for a resolution
Annex I – part B – Article 4 – paragraph 1 – point a
Annex I – part B – Article 4 – paragraph 1 – point a
(a) ‘artificial intelligence’ means systems that display intelligent behaviour by analysing their environment and taking actions, with some degree of autonomy, to achieve specific goals; AI systems can be purely software-based, acting in the virtual world, or can be embedded in hardware devices;
Amendment 520 #
Motion for a resolution
Annex I – part B – Article 4 – paragraph 1 – point p
Annex I – part B – Article 4 – paragraph 1 – point p
Amendment 528 #
Motion for a resolution
Annex I – part B – Article 5 – paragraph 3
Annex I – part B – Article 5 – paragraph 3
Amendment 530 #
Motion for a resolution
Annex I – part B – after article 5 – chapter title (new)
Annex I – part B – after article 5 – chapter title (new)
Chapter II: Obligations for high-risk technologies
Amendment 533 #
Motion for a resolution
Annex I – part B – Article 6 – paragraph 1
Annex I – part B – Article 6 – paragraph 1
1. High-risk technologies, including software, algorithms and data used or produced by such technologies, shall:
(a) be developed, deployed and used in a human-centric manner with the aim of contributing to the existence of a democratic, pluralistic and equitable society by safeguarding human autonomy and decision-making and ensuring human agency;
Amendment 534 #
Motion for a resolution
Annex I – part B – Article 6 – paragraph 2
Annex I – part B – Article 6 – paragraph 2
Amendment 537 #
Motion for a resolution
Annex I – part B – Article 6 – paragraph 3
Annex I – part B – Article 6 – paragraph 3
Amendment 540 #
Motion for a resolution
Annex I – part B – Article 7
Annex I – part B – Article 7
Amendment 550 #
Motion for a resolution
Annex I – part B – Article 8 – paragraph 1 – point a
Annex I – part B – Article 8 – paragraph 1 – point a
Amendment 554 #
Motion for a resolution
Annex I – part B – Article 8 – paragraph 1 – point b
Annex I – part B – Article 8 – paragraph 1 – point b
(b) developed, deployed and used in a resilient manner so that they ensure an adequate level of security, and one that prevents any technical vulnerabilities from being exploited for unfair or unlawful purposes;
Amendment 557 #
Motion for a resolution
Annex I – part B – Article 8 – paragraph 1 – point d
Annex I – part B – Article 8 – paragraph 1 – point d
(d) developed, deployed and used in a manner that ensures a reliable performance as regards reaching the aims and carrying out the activities they have been conceived for, including by ensuring that all operations are reproducible;
Amendment 559 #
Motion for a resolution
Annex I – part B – Article 8 – paragraph 1 – point e
Annex I – part B – Article 8 – paragraph 1 – point e
(e) developed, deployed and used in a manner that ensures that the performance of the aims and activities of the particular technologies is accurate; if occasional inaccuracies cannot be avoided, the system shall indicate, to the extent possible, the likeliness of errors and inaccuracies to deployers and users through an appropriate disclaimer message;
Amendment 562 #
Motion for a resolution
Annex I – part B – Article 8 – paragraph 1 – point g
Annex I – part B – Article 8 – paragraph 1 – point g
(g) developed, deployed and used in a manner such that they inform users that they are interacting with artificial intelligence systems, duly and comprehensively disclosing their capabilities, accuracy and limitations to developers, deployers and users of high-risk technologies;
Amendment 565 #
Motion for a resolution
Annex I – part B – Article 8 – paragraph 2
Annex I – part B – Article 8 – paragraph 2
2. The high-risk technologies mentioned in paragraph 1, including software, algorithms and data used or produced by such technologies, shall be developed, deployed and used in a transparent and traceable manner so that their elements, processes and phases are documented to the highest standards and that it is possible for the national supervisory authorities referred to in Article 14 to assess the compliance of such technologies with the applicable standards. In particular, the developer, deployer or user of those technologies shall be responsible for, and be able to demonstrate, compliance with the safety features set out in paragraph 1.
Amendment 569 #
Motion for a resolution
Annex I – part B – Article 8 – paragraph 3
Annex I – part B – Article 8 – paragraph 3
3. The developer, deployer or user of the high-risk technologies mentioned in paragraph 1 shall ensure that the measures taken to ensure compliance with the safety features set out in paragraph 1 can be audited by the national supervisory authorities referred to in Article 14.
Amendment 571 #
Motion for a resolution
Annex I – part B – Article 8 – paragraph 4
Annex I – part B – Article 8 – paragraph 4
4. Users shall be presumed to have complied with the obligations set out in this Article where their use of artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by such technologies, is carried out in good faith and in no way contravenes the ethical principles laid down in this Regulation.
Amendment 575 #
Motion for a resolution
Annex I – part B – Article 9 – paragraph 1
Annex I – part B – Article 9 – paragraph 1
1. High-risk technologies, including software, algorithms and data used or produced by such technologies, developed, deployed or used in the Union shall be such as to ensure respect for human dignity and equal treatment for all.
Amendment 578 #
Motion for a resolution
Annex I – part B – Article 9 – paragraph 2
Annex I – part B – Article 9 – paragraph 2
2. High-risk technologies, including software, algorithms and data used or produced by such technologies, developed, deployed or used in the Union shall be unbiased and, without prejudice to paragraph 3, shall not discriminate on grounds such as race, gender, sexual orientation, pregnancy, disability, physical or genetic features, age, national minority, ethnic or social origin, language, religion or belief, political views or civic participation, citizenship, civil or economic status, education, or criminal record.
Amendment 580 #
Motion for a resolution
Annex I – part B – Article 10 – paragraph 1
Annex I – part B – Article 10 – paragraph 1
1. High-risk technologies, including software, algorithms and data used or produced by such technologies, shall be developed, deployed and used in the Union in compliance with the relevant Union law, principles and values, in a manner that ensures optimal social, environmental and economic outcomes and that does not result in injury or harm of any kind being caused to individuals or society.
Amendment 584 #
Motion for a resolution
Annex I – part B – Article 10 – paragraph 2 – introductory part
Annex I – part B – Article 10 – paragraph 2 – introductory part
2. High-risk technologies, including software, algorithms and data used or produced by such technologies, developed, deployed or used in the Union shall be developed, deployed and used in a socially responsible manner. In particular, such a manner shall mean that such technologies are:
Amendment 585 #
Motion for a resolution
Annex I – part B – Article 10 – paragraph 2 – point a
Annex I – part B – Article 10 – paragraph 2 – point a
Amendment 587 #
Motion for a resolution
Annex I – part B – Article 10 – paragraph 2 – point b
Annex I – part B – Article 10 – paragraph 2 – point b
Amendment 590 #
Motion for a resolution
Annex I – part B – Article 10 – paragraph 2 – point c
Annex I – part B – Article 10 – paragraph 2 – point c
Amendment 593 #
Motion for a resolution
Annex I – part B – Article 10 – paragraph 2 – point d
Annex I – part B – Article 10 – paragraph 2 – point d
Amendment 594 #
Motion for a resolution
Annex I – part B – Article 10 – paragraph 2 – point e
Annex I – part B – Article 10 – paragraph 2 – point e
Amendment 599 #
Motion for a resolution
Annex I – part B – Article 10 – paragraph 3
Annex I – part B – Article 10 – paragraph 3
Amendment 602 #
Motion for a resolution
Annex I – part B – Article 10 – paragraph 4
Annex I – part B – Article 10 – paragraph 4
Amendment 607 #
Motion for a resolution
Annex I – part B – Article 11 – paragraph 1
Annex I – part B – Article 11 – paragraph 1
1. High-risk technologies, including software, algorithms and data used or produced by such technologies, shall be developed, deployed or used in the Union in compliance with Union law, principles, values and commitments related to the protection of the environment and, where applicable, they shall pursue the objective of minimising their environmental footprint during their lifecycle and through their entire supply chain, in order to support the achievement of climate neutrality and circular economy goals.
Amendment 608 #
Motion for a resolution
Annex I – part B – Article 11 – paragraph 2
Annex I – part B – Article 11 – paragraph 2
Amendment 610 #
Motion for a resolution
Annex I – part B – Article 11 – paragraph 3
Annex I – part B – Article 11 – paragraph 3
Amendment 613 #
Motion for a resolution
Annex I – part B – Article 11 a (new)
Annex I – part B – Article 11 a (new)
Amendment 614 #
Motion for a resolution
Annex I – part B – Article 11 b (new)
Annex I – part B – Article 11 b (new)
Article 11b
Implementation guidance
The Commission shall prepare non-binding guidelines on the methodology for compliance with this Regulation, intended for developers, deployers and users. In doing so, the Commission shall consult relevant stakeholders. The Commission shall publish the guidelines by the date of the entry into force of this Regulation.
Amendment 620 #
Motion for a resolution
Annex I – part B –After Article 12 – chapter title (new)
Annex I – part B –After Article 12 – chapter title (new)
Chapter III: Specific requirements
Amendment 621 #
Motion for a resolution
Article 12 a (new)
Article 12 a (new)
Article 12a
Public power decisions
1. Member States shall conduct an impact assessment on fundamental rights for high-risk technologies used within their prerogatives of public power that have a significant and direct impact on the rights and obligations of natural or legal persons.
2. High-risk technologies that have a direct and significant impact on the rights and obligations of natural and legal persons shall be subject to strict human verification and due process.
Amendment 622 #
Motion for a resolution
Annex I – part B – Article 12 b (new)
Annex I – part B – Article 12 b (new)
Article 12b
Right to redress
1. Any natural or legal person shall be able to seek redress for damages caused by a decision issued to her/his detriment by high-risk technologies.
Amendment 623 #
Motion for a resolution
Annex I – part B –After Article 12 b – chapter title (new)
Annex I – part B –After Article 12 b – chapter title (new)
Chapter IV: Institutional oversight
Amendment 624 #
Motion for a resolution
Annex I – part B – Article 13 – paragraph 1
Annex I – part B – Article 13 – paragraph 1
Amendment 628 #
Motion for a resolution
Annex I – part B – Article 13 – paragraph 2
Annex I – part B – Article 13 – paragraph 2
2. Data used or produced by high-risk technologies developed, deployed or used in the Union shall be managed by developers, deployers and users in accordance with the relevant national, European and international rules, as well as with relevant industry and business protocols. In particular, developers and deployers shall carry out, where feasible, quality checks of the external sources of data used by artificial intelligence, robotics and related technologies, and shall put oversight mechanisms in place regarding their collection, storage, processing and use.
Amendment 629 #
Motion for a resolution
Annex I – part B – Article 13 – paragraph 3
Annex I – part B – Article 13 – paragraph 3
3. Without prejudice to portability rights and rights of persons whose usage of high-risk technologies has generated data, the collection, storage, processing, sharing of and access to data used or produced by artificial intelligence, robotics and related technologies developed, deployed or used in the Union shall comply with the relevant national, European and international rules, as well as with relevant industry and business protocols. In particular, developers and deployers shall ensure those protocols are applied during the development and deployment of high-risk technologies, by clearly defining the requirements for processing and granting access to data used or produced by these technologies, as well as the purpose, scope and addressees of the processing and the granting of access to such data, all of which shall at all times be auditable and traceable.
Amendment 632 #
Motion for a resolution
Annex I – part B – Article 14 – paragraph 1
Annex I – part B – Article 14 – paragraph 1
1. Each Member State shall designate an independent public authority to be responsible for monitoring the application of this Regulation (‘supervisory authority’). Each national supervisory authority shall be responsible for assessing and monitoring the compliance of high-risk technologies with the ethical principles set out in this Regulation.
Amendment 636 #
Motion for a resolution
Annex I – part B – Article 14 – paragraph 2
Annex I – part B – Article 14 – paragraph 2
2. Each national supervisory authority shall contribute to the consistent application of this Regulation throughout the Union. For that purpose, the supervisory authorities in each Member State shall cooperate with each other, the Commission and other relevant institutions, bodies, offices and agencies of the Union, in particular as regards establishing the governance standards referred to in Article 13(1).
Amendment 639 #
Motion for a resolution
Annex I – part B – Article 14 – paragraph 3
Annex I – part B – Article 14 – paragraph 3
Amendment 642 #
Motion for a resolution
Annex I – part B – Article 14 – paragraph 4
Annex I – part B – Article 14 – paragraph 4
4. Each national supervisory authority shall provide professional and administrative guidance and support for the implementation of the harmonised ethical framework set out in this Regulation, especially to small and medium-sized enterprises or start-ups.
Amendment 643 #
Motion for a resolution
Annex I – part B – Article 14 – paragraph 6
Annex I – part B – Article 14 – paragraph 6
6. Member States shall take all measures necessary to ensure the implementation of the ethical principles set out in this Regulation. Member States shall support relevant stakeholders and civil society, at both Union and national level, in their efforts to ensure a timely, ethical and well- informed response to the new opportunities and challenges, in particular those of a cross-border nature, arising from technological developments relating to artificial intelligence, robotics and related technologies.
Amendment 651 #
Motion for a resolution
Annex I – part B – Article 17 a (new)
Annex I – part B – Article 17 a (new)