
31 Amendments of Gunnar BECK related to 2021/0106(COD)

Amendment 312 #
Proposal for a regulation
Recital 1
(1) The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, marketing and use of artificial intelligence in conformity with Union values. This Regulation pursues a number of overriding reasons of public interest, such as a high level of protection of health, safety and fundamental rights, and it ensures the free movement of AI-based goods and services cross-border, while giving Member States a clear possibility of imposing restrictions on the development, marketing and use of AI systems that could threaten or jeopardise the integrity and sovereignty of those countries and their people.
2022/03/24
Committee: JURI
Amendment 318 #
Proposal for a regulation
Recital 2
(2) Artificial intelligence systems (AI systems) can be easily deployed in multiple sectors of the economy and society, including cross border, and circulate throughout the Union. Certain Member States have already explored the adoption of national rules to ensure that artificial intelligence is safe and is developed and used in compliance with fundamental rights obligations. Differing national rules may lead to fragmentation of the internal market and decrease legal certainty for operators that develop or use AI systems. A consistent and high level of protection throughout the Union should therefore be ensured, while divergences hampering the free circulation of AI systems and related products and services within the internal market should be prevented, by laying down uniform obligations for operators and guaranteeing the uniform protection of overriding reasons of public interest and of rights of persons throughout the internal market based on Article 114 of the Treaty on the Functioning of the European Union (TFEU). Nevertheless, respect for the specific legal - and especially constitutional - characteristics of the Member States should enable them to benefit from special derogations if they have a higher level of protection of safety and personal data at national level. To the extent that this Regulation contains specific rules on the protection of individuals with regard to the processing of personal data concerning restrictions of the use of AI systems for ‘real-time’ remote biometric identification in publicly accessible spaces for the purpose of law enforcement, it is appropriate to base this Regulation, in as far as those specific rules are concerned, on Article 16 of the TFEU. In light of those specific rules and the recourse to Article 16 TFEU, it is appropriate to consult the European Data Protection Board.
2022/03/24
Committee: JURI
Amendment 324 #
Proposal for a regulation
Recital 4
(4) At the same time, depending on the circumstances regarding its specific application and use, artificial intelligence may generate risks and cause harm to public interests and rights that are protected by Union law, and to those of Member States with a strong constitutional tradition, which would go beyond the protection of Union law. Such harm might be material or immaterial.
2022/03/24
Committee: JURI
Amendment 328 #
Proposal for a regulation
Recital 6
(6) The notion of AI system should be clearly defined to ensure legal certainty, while providing the flexibility to accommodate future technological developments. The definition should be based on the key functional characteristics of the software, in particular the ability, for a given set of human-defined objectives, to generate outputs such as content, predictions, recommendations, or decisions which influence the environment with which the system interacts, be it in a physical or digital dimension. AI systems can be designed to operate with varying levels of autonomy and be used on a stand-alone basis or as a component of a product, irrespective of whether the system is physically integrated into the product (embedded) or serve the functionality of the product without being integrated therein (non-embedded). The definition of AI system should be complemented by a list of specific techniques and approaches used for its development, which should be kept up-to-date in the light of market and technological developments through the ordinary legislative procedure to amend that list.
2022/03/24
Committee: JURI
Amendment 333 #
Proposal for a regulation
Recital 11
(11) In light of their digital nature, certain AI systems should fall within the scope of this Regulation even when they are neither placed on the market, nor put into service, nor used in the Union. This is the case for example of an operator established in the Union that contracts certain services to an operator established outside the Union in relation to an activity to be performed by an AI system that would qualify as high-risk and whose effects impact natural persons located in the Union. In those circumstances, the AI system used by the operator outside the Union could process data lawfully collected in and transferred from the Union, and provide to the contracting operator in the Union the output of that AI system resulting from that processing, without that AI system being placed on the market, put into service or used in the Union. To prevent the circumvention of this Regulation and to ensure an effective protection of natural persons located in the Union, this Regulation should also apply to providers and users of AI systems that are established in a third country, to the extent the output produced by those systems is used in the Union. The data used must be stored solely in Europe. Nonetheless, to take into account existing arrangements and special needs for cooperation with foreign partners with whom information and evidence is exchanged, this Regulation should not apply to public authorities of a third country and international organisations when acting in the framework of international agreements concluded at national or European level for law enforcement and judicial cooperation with the Union or with its Member States. Such agreements have been concluded bilaterally between Member States and third countries or between the European Union, Europol and other EU agencies and third countries and international organisations.
2022/03/24
Committee: JURI
Amendment 336 #
Proposal for a regulation
Recital 13
(13) In order to ensure a consistent and high level of protection of public interests as regards health, safety and fundamental rights, common normative standards for all high-risk AI systems should be established. Those standards should be consistent with the Charter of fundamental rights of the European Union (the Charter) and should be non-discriminatory and in line with the Union’s international trade commitments. Moreover, every Member State with a different legal tradition should be able to give priority to ensuring maximum protection for its citizens, particularly on the basis of its constitution.
2022/03/24
Committee: JURI
Amendment 342 #
Proposal for a regulation
Recital 15
(15) Aside from the many beneficial uses of artificial intelligence, that technology can also be misused and provide novel and powerful tools for manipulative, exploitative and social control practices. Such practices are particularly harmful and should be prohibited because they contradict Union values of respect for human dignity, freedom, equality, democracy and the rule of law and Union fundamental rights, including the right to non-discrimination, data protection and privacy and the rights of the child. Particular attention must be paid to AI systems from third countries to ensure that they are not used as a Trojan horse for non-European interests and that they do not lower our level of protection of data and fundamental rights.
2022/03/24
Committee: JURI
Amendment 362 #
Proposal for a regulation
Recital 27
(27) High-risk AI systems should only be placed on the Union market or put into service if they comply with certain mandatory requirements. If they come from third countries, they should be monitored extremely closely by the European supervisory authorities and by independent bodies working in that field. Those requirements should ensure that high-risk AI systems available in the Union or whose output is otherwise used in the Union do not pose unacceptable risks to important Union public interests as recognised and protected by Union law. AI systems identified as high-risk should be limited to those that have a significant harmful impact on the health, safety and fundamental rights of persons in the Union and such limitation minimises any potential restriction to international trade, if any.
2022/03/24
Committee: JURI
Amendment 372 #
Proposal for a regulation
Recital 33
(33) Technical inaccuracies of AI systems intended for the remote biometric identification of natural persons can lead to biased results and entail discriminatory effects. This is particularly relevant when it comes to age, ethnicity, sex or disabilities. Therefore, ‘real-time’ and ‘post’ remote biometric identification systems should be classified as high-risk. In view of the risks that they pose, both types of remote biometric identification systems should be subject to specific requirements on logging capabilities and human oversight. If they come from third countries, these systems, particularly those that use facial recognition and gather private data, such as Clearview AI, must be monitored extremely closely by the European supervisory authority and independent bodies.
2022/03/24
Committee: JURI
Amendment 374 #
Proposal for a regulation
Recital 34
(34) As regards the management and operation of critical infrastructure, it is appropriate to classify as high-risk the AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity, since their failure or malfunctioning may put at risk the life and health of persons at large scale and lead to appreciable disruptions in the ordinary conduct of social and economic activities. These systems must not be designed or manufactured in a third country and their components must be monitored extremely closely in order to prevent any extra-European control over the sensitive infrastructures of the Member States.
2022/03/24
Committee: JURI
Amendment 384 #
Proposal for a regulation
Recital 38
(38) Actions by law enforcement authorities involving certain uses of AI systems are characterised by a significant degree of power imbalance and may lead to surveillance, arrest or deprivation of a natural person’s liberty as well as other adverse impacts on fundamental rights guaranteed in the Charter. In particular, if the AI system is not trained with high quality data, does not meet adequate requirements in terms of its accuracy or robustness, or is not properly designed and tested before being put on the market or otherwise put into service, it may single out people in a discriminatory or otherwise incorrect or unjust manner. Furthermore, the exercise of important procedural fundamental rights, such as the right to an effective remedy and to a fair trial as well as the right of defence and the presumption of innocence, could be hampered, in particular, where such AI systems are not sufficiently transparent, explainable and documented. It is therefore appropriate to classify as high-risk a number of AI systems intended to be used in the law enforcement context where accuracy, reliability and transparency is particularly important to avoid adverse impacts, retain public trust and ensure accountability and effective redress. In view of the nature of the activities in question and the risks relating thereto, those high-risk AI systems should include in particular AI systems intended to be used by law enforcement authorities for individual risk assessments, polygraphs and similar tools or to detect the emotional state of a natural person, to detect ‘deep fakes’, for the evaluation of the reliability of evidence in criminal proceedings, for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons, or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups, for profiling in the course of detection, investigation or prosecution of criminal offences, as well as for crime analytics regarding natural persons. AI systems specifically intended to be used for administrative proceedings by tax and customs authorities should not be considered high-risk AI systems used by law enforcement authorities for the purposes of prevention, detection, investigation and prosecution of criminal offences. None of these systems for sensitive use should be allowed to store the data gathered outside the Union, and any links to third countries should be particularly transparent.
2022/03/24
Committee: JURI
Amendment 386 #
Proposal for a regulation
Recital 39
(39) AI systems used in migration, asylum and border control management affect people who are often in a particularly vulnerable position and who are dependent on the outcome of the actions of the competent public authorities. The accuracy, non-discriminatory nature and transparency of the AI systems used in those contexts are therefore particularly important to guarantee the respect of the fundamental rights of the affected persons, notably their rights to free movement, non-discrimination, protection of private life and personal data, international protection and good administration. It is therefore appropriate to classify as high-risk AI systems intended to be used by the competent public authorities charged with tasks in the fields of migration, asylum and border control management as polygraphs and similar tools or to detect the emotional state of a natural person; for assessing certain risks posed by natural persons entering the territory of a Member State or applying for visa or asylum; for verifying the authenticity of the relevant documents of natural persons; for assisting competent public authorities for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the objective to establish the eligibility of the natural persons applying for a status. Every Member State should have the right to exercise full control over the systems they choose and to store the data gathered on their territory. AI systems in the area of migration, asylum and border control management covered by this Regulation should comply with the relevant procedural requirements set by the Directive 2013/32/EU of the European Parliament and of the Council49, the Regulation (EC) No 810/2009 of the European Parliament and of the Council50 and other relevant legislation.
_________________
49 Directive 2013/32/EU of the European Parliament and of the Council of 26 June 2013 on common procedures for granting and withdrawing international protection (OJ L 180, 29.6.2013, p. 60).
50 Regulation (EC) No 810/2009 of the European Parliament and of the Council of 13 July 2009 establishing a Community Code on Visas (Visa Code) (OJ L 243, 15.9.2009, p. 1).
2022/03/24
Committee: JURI
Amendment 389 #
Proposal for a regulation
Recital 43
(43) Requirements should apply to high- risk AI systems as regards the quality of data sets used, technical documentation and record-keeping, transparency and the provision of information to users, human oversight, and robustness, accuracy and cybersecurity. The sovereignty of the Member States must be respected. The Member States must have control over the entire chain of these systems, particularly the data gathered that are not intended to be stored in a third country. Those requirements are necessary to effectively mitigate the risks for health, safety and fundamental rights, as applicable in the light of the intended purpose of the system, and no other less trade restrictive measures are reasonably available, thus avoiding unjustified restrictions to trade.
2022/03/24
Committee: JURI
Amendment 394 #
Proposal for a regulation
Recital 46
(46) Having information on how high- risk AI systems have been developed and how they perform throughout their lifecycle is essential to verify compliance with the requirements under this Regulation. This requires keeping records and the availability of a technical documentation, containing information which is necessary to assess the compliance of the AI system with the relevant requirements. Such information should include the general characteristics, capabilities and limitations of the system, algorithms, data, training, testing and validation processes used as well as documentation on the relevant risk management system. The technical documentation should be kept up to date. If they have been manufactured in third countries, all of these systems must be wholly controlled by the Member State using them, which must ensure continuous monitoring of the entire chain, including manufacture, repair and development.
2022/03/24
Committee: JURI
Amendment 396 #
Proposal for a regulation
Recital 47
(47) To address the opacity that may make certain AI systems incomprehensible to or too complex for natural persons, a certain degree of transparency should be required for high-risk AI systems. Users should be able to interpret the system output and use it appropriately. High-risk AI systems should therefore be accompanied by relevant documentation and instructions of use and include concise and clear information, including in relation to possible risks to fundamental rights and discrimination, where appropriate. All instructions and graphics must be drawn up in the language of the Member State using them, in addition to the usual languages.
2022/03/24
Committee: JURI
Amendment 403 #
Proposal for a regulation
Recital 54
(54) The provider should establish a sound quality management system, ensure the accomplishment of the required conformity assessment procedure, draw up the relevant documentation in the language of the Member State concerned and establish a robust post-market monitoring system. All elements, from design to future development, must be transparent for the user. Public authorities which put into service high-risk AI systems for their own use may adopt and implement the rules for the quality management system as part of the quality management system adopted at a national or regional level, as appropriate, taking into account the specificities of the sector and the competences and organisation of the public authority in question.
2022/03/24
Committee: JURI
Amendment 404 #
Proposal for a regulation
Recital 58
(58) Given the nature of AI systems and the risks to safety and fundamental rights possibly associated with their use, including as regard the need to ensure proper monitoring of the performance of an AI system in a real-life setting, it is appropriate to set specific responsibilities for users. Users should in particular use high-risk AI systems in accordance with the instructions of use, which must be drawn up in the user’s language in order to avoid any lack of understanding whatsoever, and certain other obligations should be provided for with regard to monitoring of the functioning of the AI systems and with regard to record- keeping, as appropriate.
2022/03/24
Committee: JURI
Amendment 412 #
Proposal for a regulation
Recital 68
(68) Under certain conditions, rapid availability of innovative technologies may be crucial for health and safety of persons and for society as a whole. It is thus appropriate that under exceptional reasons of public security or protection of life and health of natural persons and the protection of industrial and commercial property, Member States could authorise the placing on the market or putting into service of AI systems which have not undergone a conformity assessment. However, transparency regarding their design, use and possible dangers must be obligatory.
2022/03/24
Committee: JURI
Amendment 418 #
Proposal for a regulation
Recital 73
(73) In order to promote and protect innovation, it is important that the interests of small-scale providers and users of AI systems are taken into particular account. To this objective, Member States should develop initiatives, which are targeted at those operators, including on awareness raising and information communication. Moreover, the specific interests and needs of small-scale providers shall be taken into account when Notified Bodies set conformity assessment fees. Translation costs related to mandatory documentation and communication with authorities may constitute a significant cost for providers and other operators, notably those of a smaller scale. In addition to the usual languages, it is essential for all technical texts and instructions accompanying the system to be drawn up in the user’s language.
2022/03/24
Committee: JURI
Amendment 453 #
Proposal for a regulation
Article 2 – paragraph 3
3. This Regulation shall not apply to AI systems developed or used exclusively for military purposes.
2022/03/24
Committee: JURI
Amendment 457 #
Proposal for a regulation
Article 2 – paragraph 4
4. This Regulation shall not apply to public authorities in a third country nor to supranational organisations falling within the scope of this Regulation pursuant to paragraph 1, where those authorities or organisations use AI systems in the framework of international agreements for law enforcement and judicial cooperation with the Union or with one or more Member States.
2022/03/24
Committee: JURI
Amendment 499 #
Proposal for a regulation
Article 4 – title
4 Review clause regarding Annex I
2022/03/24
Committee: JURI
Amendment 501 #
Proposal for a regulation
Article 4 – paragraph 1
The list of techniques and approaches listed in Annex I is reviewed every three years, according to the normal legislative procedure, in order to guarantee full democratic oversight.
2022/03/24
Committee: JURI
Amendment 578 #
Proposal for a regulation
Article 7 – title
7 Review clause regarding Annex III
2022/03/24
Committee: JURI
Amendment 581 #
Proposal for a regulation
Article 7 – paragraph 1 – introductory part
1. The list of high-risk AI systems listed in Annex III is reviewed every three years, according to the normal legislative procedure, in order to guarantee full democratic oversight, where both of the following conditions are fulfilled:
2022/03/24
Committee: JURI
Amendment 590 #
Proposal for a regulation
Article 7 – paragraph 2 – introductory part
2. When assessing for the purposes of paragraph 1 whether an AI system poses a risk of harm to the health and safety or a risk of adverse impact on fundamental rights that is equivalent to or greater than the risk of harm posed by the high-risk AI systems already referred to in Annex III, the following criteria shall be taken into account:
2022/03/24
Committee: JURI
Amendment 596 #
Proposal for a regulation
Article 7 – paragraph 2 – point h a (new)
(ha) The share of public funding that the development of the AI system receives from third-country investors or public authorities.
2022/03/24
Committee: JURI
Amendment 633 #
Proposal for a regulation
Article 11 – paragraph 1 – introductory part
1. The technical documentation of a high-risk AI system shall be drawn up before that system is placed on the market or put into service and shall be kept up to date. It must be drawn up in the language of the system user, in addition to the usual languages, allowing it to be read by as many people as possible.
2022/03/24
Committee: JURI
Amendment 649 #
Proposal for a regulation
Article 13 – paragraph 2
2. High-risk AI systems shall be accompanied by instructions for use in an appropriate digital format or otherwise that include concise, complete, correct and clear information that is relevant, accessible and comprehensible to users and, in particular, that is drawn up in the user’s language.
2022/03/24
Committee: JURI
Amendment 695 #
Proposal for a regulation
Article 16 – paragraph 1 – point j a (new)
(ja) Provide an overview of all investors participating in the development, production and distribution of the AI system, whether through direct participation, venture capital or bank financing.
2022/03/24
Committee: JURI
Amendment 696 #
Proposal for a regulation
Article 18 – paragraph 1
1. Providers of high-risk AI systems shall draw up the technical documentation referred to in Article 11 in accordance with Annex IV. One of the languages used must always be the end user’s language in order to prevent any misunderstandings.
2022/03/24
Committee: JURI