Activities of Gianna GANCIA related to 2021/0106(COD)
Shadow opinions (1)
OPINION on the proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts
Amendments (44)
Amendment 137 #
Proposal for a regulation
Recital 3 a (new)
(3a) Technologies based on artificial intelligence are having a rapid and disruptive impact on the world of work. They have the potential to create new opportunities for gender equality, but at the same time they can reinforce stereotypes, sexism and gender discrimination in the labour market. It is becoming increasingly clear that automating some tasks will have a greater impact on the female workforce, because a higher number of women are employed in routine work. At the same time, AI can represent a major opportunity for reducing gender inequalities, but only if steps are taken to change regulations and policies to promote the equal representation of men and women in decision-making. Support by European institutions and Member States of an approach designed to encourage women to study STEM subjects will also be vital in combating gender stereotyping.
Amendment 145 #
Proposal for a regulation
Recital 6
(6) The notion of AI system must be clearly defined to ensure legal certainty, while providing the flexibility to accommodate future technological developments. The definition should be based on the key functional characteristics of AI software, distinguishing it from more traditional software systems and modelling approaches such as logistic regression and other techniques that are similarly transparent and capable of being explained and interpreted. In particular, for the purposes of this Regulation, AI systems should be understood as having the ability, on the basis of machine and/or human-based data and inputs, to deduce how to achieve a given set of human-defined objectives through learning, reasoning or modelling, to generate specific outputs in the form of content, for generative AI systems (such as text, video or images), and predictions, recommendations, or decisions which influence the environment with which the system interacts, in both a physical and digital dimension. For the purposes of this Regulation, AI systems can be designed that must follow an approach with limited explanations and operate with a very high level of autonomy. These systems may be used on an autonomous basis or as a component of a product, irrespective of whether the system is physically integrated into the product (embedded) or serves the functionality of the product without being integrated therein (non-embedded). The definition of AI system should be accompanied by a list of specific techniques and approaches used for its development, which should be kept up to date in the light of technological developments and developments in the market through the adoption of delegated acts by the Commission to amend that list.
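The carve-out in this recital turns on the fact that techniques such as logistic regression are fully transparent: every prediction can be traced back to explicit, inspectable weights. A minimal sketch of that property, assuming scikit-learn is available; the data and feature names are synthetic inventions:

```python
# Illustrative sketch only: shows why logistic regression is treated as a
# "transparent, explainable" modelling technique in the sense of this recital.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                  # two synthetic input features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # known ground-truth rule

model = LogisticRegression().fit(X, y)

# The entire decision rule is recoverable by inspection: each prediction is
# sigmoid(w . x + b), so every feature's exact weight can be stated.
for name, w in zip(["feature_0", "feature_1"], model.coef_[0]):
    print(f"{name}: weight {w:+.3f}")
print(f"intercept: {model.intercept_[0]:+.3f}")
```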
Amendment 152 #
Proposal for a regulation
Recital 13
(13) In order to ensure a consistent and high level of protection of public interests as regards health, safety and fundamental rights, common normative standards for all high-risk AI systems should be established. Those standards should be consistent with the Charter of Fundamental Rights of the European Union (the Charter) and should be non-discriminatory and in line with the Union’s international trade commitments. However, with regard to the risk management system for AI systems considered to be high-risk, the EU’s harmonisation legislation should focus on the essential requirements and leave their technical implementation to be governed by voluntary product-specific and cutting-edge standards, developed by the stakeholders. It is therefore desirable for European legislation to focus on the desired outcome of the risk management and evaluation systems, and to expressly leave industry the task of designing its systems and tailoring them to its internal operations and structures, particularly by developing cutting-edge standardisation systems.
Amendment 195 #
Proposal for a regulation
Recital 29
(29) As regards high-risk AI systems that are safety components of products or systems, or which are themselves products or systems falling within the scope of Regulation (EC) No 300/2008 of the European Parliament and of the Council39, Regulation (EU) No 167/2013 of the European Parliament and of the Council40, Regulation (EU) No 168/2013 of the European Parliament and of the Council41, Directive 2014/90/EU of the European Parliament and of the Council42, Directive (EU) 2016/797 of the European Parliament and of the Council43, Regulation (EU) 2018/858 of the European Parliament and of the Council44, Regulation (EU) 2018/1139 of the European Parliament and of the Council45, and Regulation (EU) 2019/2144 of the European Parliament and of the Council46, it is appropriate to amend those acts to ensure that the Commission takes into account, on the basis of the technical and regulatory specificities of each sector, and without interfering with existing governance, conformity assessment and enforcement mechanisms and authorities established therein, the mandatory requirements for high-risk AI systems laid down in this Regulation when adopting any relevant future delegated or implementing acts on the basis of those acts. In addition, effective standardisation rules are needed to make the requirements of this Regulation operational. The European institutions, and first and foremost the Commission, should, together with enterprises, identify the AI sectors where there is the greatest need for standardisation, to avoid fragmentation of the market and maintain and further strengthen the integration of our European Standardisation System (ESS) within the International Standardisation System (ISO, IEC). _________________ 39 Regulation (EC) No 300/2008 of the European Parliament and of the Council of 11 March 2008 on common rules in the field of civil aviation security and repealing Regulation (EC) No 2320/2002 (OJ L 97, 9.4.2008, p. 72). 40 Regulation (EU) No 167/2013 of the European Parliament and of the Council of 5 February 2013 on the approval and market surveillance of agricultural and forestry vehicles (OJ L 60, 2.3.2013, p. 1). 41 Regulation (EU) No 168/2013 of the European Parliament and of the Council of 15 January 2013 on the approval and market surveillance of two- or three-wheel vehicles and quadricycles (OJ L 60, 2.3.2013, p. 52). 42 Directive 2014/90/EU of the European Parliament and of the Council of 23 July 2014 on marine equipment and repealing Council Directive 96/98/EC (OJ L 257, 28.8.2014, p. 146). 43 Directive (EU) 2016/797 of the European Parliament and of the Council of 11 May 2016 on the interoperability of the rail system within the European Union (OJ L 138, 26.5.2016, p. 44). 44 Regulation (EU) 2018/858 of the European Parliament and of the Council of 30 May 2018 on the approval and market surveillance of motor vehicles and their trailers, and of systems, components and separate technical units intended for such vehicles, amending Regulations (EC) No 715/2007 and (EC) No 595/2009 and repealing Directive 2007/46/EC (OJ L 151, 14.6.2018, p. 1).
45 Regulation (EU) 2018/1139 of the European Parliament and of the Council of 4 July 2018 on common rules in the field of civil aviation and establishing a European Union Aviation Safety Agency, and amending Regulations (EC) No 2111/2005, (EC) No 1008/2008, (EU) No 996/2010, (EU) No 376/2014 and Directives 2014/30/EU and 2014/53/EU of the European Parliament and of the Council, and repealing Regulations (EC) No 552/2004 and (EC) No 216/2008 of the European Parliament and of the Council and Council Regulation (EEC) No 3922/91 (OJ L 212, 22.8.2018, p. 1). 46 Regulation (EU) 2019/2144 of the European Parliament and of the Council of 27 November 2019 on type-approval requirements for motor vehicles and their trailers, and systems, components and separate technical units intended for such vehicles, as regards their general safety and the protection of vehicle occupants and vulnerable road users, amending Regulation (EU) 2018/858 of the European Parliament and of the Council and repealing Regulations (EC) No 78/2009, (EC) No 79/2009 and (EC) No 661/2009 of the European Parliament and of the Council and Commission Regulations (EC) No 631/2009, (EU) No 406/2010, (EU) No 672/2010, (EU) No 1003/2010, (EU) No 1005/2010, (EU) No 1008/2010, (EU) No 1009/2010, (EU) No 19/2011, (EU) No 109/2011, (EU) No 458/2011, (EU) No 65/2012, (EU) No 130/2012, (EU) No 347/2012, (EU) No 351/2012, (EU) No 1230/2012 and (EU) 2015/166 (OJ L 325, 16.12.2019, p. 1).
Amendment 196 #
Proposal for a regulation
Recital 29 a (new)
(29a) To demonstrate that the characteristics of a high-risk AI system conform to the requirements set out in Chapter 2 of Title III, it must be possible to conduct internal controls and use harmonised standards based on agreement. It is desirable for the European institutions, and first and foremost the Commission, to do more to promote alignment with existing international standardisation activities and with the certifications issued as part of the EU information security scheme. However, unlike the procedure to assess product conformity, where assessment infrastructure is in place, the relevant competence for auditing autonomous AI systems is still being developed. Moreover, because of the specific technological features of AI, it is possible that the competent authorities may encounter difficulties in verifying the conformity of some AI systems with existing legislation. It is therefore necessary for conformity assessment mechanisms to be developed with flexibility, so that due account may be taken of the infrastructure gaps, and disparities in application may be avoided in the single market.
Amendment 199 #
Proposal for a regulation
Recital 33
(33) Technical inaccuracies of AI systems intended for the remote biometric identification of natural persons can lead to biased results and entail discriminatory effects. This is particularly relevant when it comes to age, ethnicity, sex or disabilities. Therefore, ‘real-time’ and ‘post’ remote biometric identification systems should be classified as high-risk. In view of the risks that they pose, both types of remote biometric identification systems should be subject to specific requirements on logging capabilities and human supervision.
Amendment 210 #
Proposal for a regulation
Recital 43
(43) Requirements should apply to high-risk AI systems as regards the quality of data sets used, technical documentation and record-keeping, transparency and the provision of information to users, human supervision, and robustness, accuracy and cybersecurity. Those requirements are necessary to effectively mitigate the risks for health, safety and fundamental rights, as applicable in the light of the intended purpose of the system, and no other less trade restrictive measures are reasonably available, thus avoiding unjustified restrictions to trade.
Amendment 217 #
Proposal for a regulation
Recital 48
(48) Human supervision must remain the basic ethical principle for the development and distribution of high-risk AI, since it guarantees transparency, confidentiality and protection of data and safeguarding against discrimination. However, it is vital to maintain a balance between meaningful human supervision and the efficiency of the system, in order not to compromise the benefits offered by these systems in sectors such as information security analysis, threat analysis and incident response processes. High-risk AI systems should be designed and developed in such a way that natural persons can oversee their functioning. For this purpose, appropriate human supervision measures should be identified by the provider of the system before its placing on the market or putting into service. In particular, where appropriate, such measures should guarantee that the system is subject to in-built operational constraints that cannot be overridden by the system itself and is responsive to the human operator, and that the natural persons to whom human supervision has been assigned have the necessary competence, training and authority to carry out that role.
Amendment 239 #
Proposal for a regulation
Recital 71
(71) Artificial intelligence is a rapidly developing family of technologies that requires novel forms of regulatory supervision and a safe space for experimentation, while ensuring responsible innovation and integration of appropriate safeguards and risk mitigation measures. To ensure a legal framework that is innovation-friendly, future-proof and resilient to disruption, national competent authorities from one or more Member States should be encouraged to establish artificial intelligence regulatory sandboxes to facilitate the development and testing of innovative AI systems under strict regulatory supervision before these systems are placed on the market or otherwise put into service.
Amendment 241 #
Proposal for a regulation
Recital 72
(72) The objectives of the regulatory sandboxes should be to foster AI innovation by establishing a controlled experimentation and testing environment in the development and pre-marketing phase with a view to ensuring compliance of the innovative AI systems with this Regulation and other relevant Union and Member States legislation; to enhance legal certainty for innovators and the competent authorities’ supervision and understanding of the opportunities, emerging risks and the impacts of AI use, and to accelerate access to markets, including by removing barriers for small and medium enterprises (SMEs) and start-ups. To permit effective participation by these categories in regulatory sandboxes, compliance costs must be kept to an absolute minimum. To ensure uniform implementation across the Union and economies of scale, it is appropriate to establish common rules for the regulatory sandboxes’ implementation and a framework for cooperation between the relevant authorities involved in the supervision of the sandboxes. This Regulation should provide the legal basis for the use of personal data collected for other purposes for developing certain AI systems in the public interest within the AI regulatory sandbox, in line with Article 6(4) of Regulation (EU) 2016/679, and Article 6 of Regulation (EU) 2018/1725, and without prejudice to Article 4(2) of Directive (EU) 2016/680. Participants in the sandbox should ensure appropriate safeguards and cooperate with the competent authorities, including by following their guidance and acting expeditiously and in good faith to mitigate any high risks to safety and fundamental rights that may arise during the development and experimentation in the sandbox. The conduct of the participants in the sandbox should be taken into account when competent authorities decide whether to impose an administrative fine under Article 83(2) of Regulation 2016/679 and Article 57 of Directive 2016/680.
Amendment 244 #
Proposal for a regulation
Recital 72
(72) The objectives of the regulatory sandboxes should be to foster AI innovation by establishing a controlled experimentation and testing environment in the development and pre-marketing phase with a view to ensuring compliance of the innovative AI systems with this Regulation and other relevant Union and Member States legislation; to enhance legal certainty for innovators and the competent authorities’ supervision and understanding of the opportunities, emerging risks and the impacts of AI use, and to accelerate access to markets, including by removing barriers for small and medium enterprises (SMEs) and start-ups. To ensure uniform implementation across the Union and economies of scale, it is appropriate to establish common rules for the regulatory sandboxes’ implementation and a framework for cooperation between the relevant authorities involved in the supervision of the sandboxes. This Regulation should provide the legal basis for the use of personal data collected for other purposes for developing certain AI systems in the public interest within the AI regulatory sandbox, in line with Article 6(4) of Regulation (EU) 2016/679, and Article 6 of Regulation (EU) 2018/1725, and without prejudice to Article 4(2) of Directive (EU) 2016/680. Participants in the sandbox should ensure appropriate safeguards and cooperate with the competent authorities, including by following their guidance and acting expeditiously and in good faith to mitigate any high risks to safety and fundamental rights that may arise during the development and experimentation in the sandbox. The conduct of the participants in the sandbox should be taken into account when competent authorities decide whether to impose an administrative fine under Article 83(2) of Regulation 2016/679 and Article 57 of Directive 2016/680.
Amendment 245 #
Proposal for a regulation
Recital 72 a (new)
(72a) It is desirable for the establishment of regulatory sandboxes, which is currently left to the discretion of Member States, to be made obligatory, with properly established criteria, to ensure both the effectiveness of the system and easier access for enterprises, particularly SMEs. It is also necessary for research enterprises and institutions to be involved in developing the conditions for the creation of regulatory sandboxes.
Amendment 248 #
Proposal for a regulation
Recital 74
(74) In order to minimise the risks to implementation resulting from lack of knowledge and expertise in the market as well as to facilitate compliance of providers and notified bodies with their obligations under this Regulation, the AI-on-demand platform, the European Digital Innovation Hubs and the Testing and Experimentation Facilities established by the Commission and the Member States at national or EU level and the national cybersecurity agencies should possibly contribute to the implementation of this Regulation. Within their respective mission and fields of competence, they may provide in particular technical and scientific support to providers and notified bodies.
Amendment 254 #
Proposal for a regulation
Recital 89 a (new)
(89a) As things currently stand, the AI sector has a strategic international dimension. In order to achieve the objectives and ambitions set out in this Regulation and strengthen the European approach to AI internationally, it is a matter of urgency that thinking in this area, including as a result of this legislation, should not remain solely within the European Union. If the EU wishes to be at the forefront of creating democratic and inclusive regulation that respects the rights of individuals, including those outside Europe’s borders, it should seek to be a benchmark in this sphere for non-EU countries too. That would serve to safeguard the competitiveness of the principal actors of the market and spread practices similar to those in this Regulation on a global scale. This Regulation’s effectiveness would be strengthened if the European Union were able to play a key role at international level too.
Amendment 271 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1
(1) ‘artificial intelligence system’ (AI system) means a system that (i) receives machine-based and/or human-based data and inputs, (ii) adopts an approach with limited explanations that infers how to achieve a given set of human-defined objectives through learning, reasoning or modelling implemented using the techniques and approaches listed in Annex I, and (iii) generates outputs with a very high level of autonomy in the form of content (generative AI systems), predictions, recommendations, or decisions influencing the environments they interact with;
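Purely as a reading aid, the three limbs of this amended definition can be written out as a checklist. A hypothetical sketch in Python; the field and function names are invented, not taken from the text:

```python
# Hypothetical reading aid only: the three limbs of the amended definition
# written as a checklist. Names are invented, not from the Regulation.
from dataclasses import dataclass

@dataclass
class SystemProfile:
    receives_machine_or_human_data: bool                         # limb (i)
    infers_objectives_via_learning_reasoning_or_modelling: bool  # limb (ii)
    generates_outputs_autonomously: bool                         # limb (iii)

def is_ai_system(profile: SystemProfile) -> bool:
    """All three limbs must be satisfied for a system to fall under the
    amended definition."""
    return (profile.receives_machine_or_human_data
            and profile.infers_objectives_via_learning_reasoning_or_modelling
            and profile.generates_outputs_autonomously)

# A fixed-rule spreadsheet macro fails limb (ii) and so falls outside scope:
print(is_ai_system(SystemProfile(True, False, True)))  # False
```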
Amendment 278 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1 – point a (new)
(a) ‘AI system used in an advisory capacity’ means an AI system in which the final decision is taken by a human.
Amendment 279 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1 – point b (new)
(b) ‘AI system with decision-making capacity’ means an AI system with the capacity to model decisions in a repeatable manner, without human supervision.
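Taken together, Amendments 278 and 279 distinguish systems that merely propose an outcome from systems that decide without human supervision. A hypothetical sketch of how a deployer might implement the two modes; all names (Recommendation, advisory_decide, autonomous_decide) are invented for illustration:

```python
# Hypothetical sketch of the two modes defined above. All names are invented.
from dataclasses import dataclass

@dataclass
class Recommendation:
    label: str        # the model's proposed outcome
    confidence: float

def model_recommend(application: dict) -> Recommendation:
    # Stand-in for any scoring model.
    score = 0.8 if application.get("income", 0) > 30_000 else 0.3
    return Recommendation("approve" if score > 0.5 else "reject", score)

def advisory_decide(application: dict, human_decision: str) -> str:
    """'Advisory capacity': the system proposes, but only the human's
    choice takes effect."""
    proposal = model_recommend(application)
    print(f"model proposes {proposal.label} ({proposal.confidence:.0%})")
    return human_decision  # the final decision is always the human's

def autonomous_decide(application: dict) -> str:
    """'Decision-making capacity': repeatable decisions, no human step."""
    return model_recommend(application).label

print(advisory_decide({"income": 40_000}, human_decision="reject"))
print(autonomous_decide({"income": 40_000}))
```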
Amendment 309 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 a (new)
Amendment 353 #
Proposal for a regulation
Article 6 – paragraph 2
2. In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall also be considered high-risk. In the event of uncertainty regarding the classification of the AI system, the supplier must deem the AI system to be high-risk if its use or application poses a risk of physical or non-physical harm to health and safety or a risk of an adverse impact to the fundamental rights of natural persons, groups of individuals or society as a whole, as set out in Article 7(2).
Amendment 370 #
Proposal for a regulation
Article 7 – paragraph 2 – point e
(e) the extent to which potentially harmed or adversely impacted persons are dependent on the outcome produced with an AI system, with no distinctions between AI systems with an advisory or decision-making purpose, in particular because for practical or legal reasons it is not reasonably possible to opt-out from that outcome;
Amendment 403 #
Proposal for a regulation
Article 10 – paragraph 2 – introductory part
2. The training, validation and testing of data sets and the AI applications based on them shall be subject to appropriate data governance and management practices. Those practices shall concern in particular,
Amendment 412 #
Proposal for a regulation
Article 10 – paragraph 2 – point f
(f) examination in view of possible biases that are likely to affect health and safety of persons, lead to discrimination prohibited by Union law or have some other impact on fundamental rights;
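One concrete form such a bias examination could take is a comparison of positive-outcome rates across a protected attribute. A minimal sketch; the demographic parity criterion and the synthetic data are illustrative choices, not requirements fixed by the Regulation:

```python
# Illustrative sketch of one bias examination of the kind this point calls
# for: comparing positive-outcome rates across a protected attribute.
import numpy as np

def parity_gap(outcomes: np.ndarray, groups: np.ndarray) -> float:
    """Largest difference in positive-outcome rate between any two groups."""
    rates = [outcomes[groups == g].mean() for g in np.unique(groups)]
    return float(max(rates) - min(rates))

outcomes = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # e.g. label "hired"
groups = np.array(["F", "F", "F", "M", "M", "M", "M", "M"])
gap = parity_gap(outcomes, groups)
print(f"demographic parity gap: {gap:.2f}")     # flag if above a set bound
```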
Amendment 422 #
Proposal for a regulation
Article 10 – paragraph 3
3. Training, validation and testing data sets shall be relevant, representative and complete, taking into account the degree of variability within data sets. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used. These characteristics of the data sets may be met at the level of individual data sets or a combination thereof.
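A provider could operationalise the "appropriate statistical properties" requirement by comparing each group's share in the data set with its share in the population the system will be used on. A minimal sketch assuming pandas; the reference shares and tolerance are invented example values:

```python
# Illustrative sketch of checking that a training set is representative of
# the groups on which the system is intended to be used.
import pandas as pd

def representation_report(df: pd.DataFrame, group_col: str,
                          reference_shares: dict[str, float],
                          tolerance: float = 0.05) -> pd.DataFrame:
    """Compare each group's share in the data set with a reference share
    (e.g. its share in the intended user population)."""
    observed = df[group_col].value_counts(normalize=True)
    rows = []
    for group, expected in reference_shares.items():
        share = float(observed.get(group, 0.0))
        rows.append({"group": group,
                     "observed_share": share,
                     "reference_share": expected,
                     "flag": abs(share - expected) > tolerance})
    return pd.DataFrame(rows)

data = pd.DataFrame({"sex": ["F", "M", "M", "M", "F", "M", "M", "M"]})
print(representation_report(data, "sex", {"F": 0.5, "M": 0.5}))
```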
Amendment 427 #
Proposal for a regulation
Article 10 – paragraph 5
5. To the extent that it is strictly necessary for the purposes of ensuring bias monitoring, detection and correction in relation to the high-risk AI systems, the providers of such systems may also process special categories of personal data referred to in Article 9(1) of Regulation (EU) 2016/679, Article 10 of Directive (EU) 2016/680 and Article 10(1) of Regulation (EU) 2018/1725, ensuring compliance with the highest security and privacy protection standards for data management. Such processing shall also be subject to appropriate safeguards for the fundamental rights and freedoms of natural persons, including technical limitations on the re-use and use of state-of-the-art security and privacy-preserving measures, such as pseudonymisation, or encryption where anonymisation may significantly affect the purpose pursued.
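Pseudonymisation of the kind this paragraph names can be as simple as replacing direct identifiers with keyed hashes, so that bias-monitoring records remain linkable without exposing identities. A minimal sketch using Python's standard library; the key handling is deliberately simplified:

```python
# Minimal sketch of pseudonymisation as a technical safeguard: identifiers
# are replaced by keyed hashes so that bias monitoring can link records
# without exposing the raw identity.
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-managed-secret"  # illustrative only

def pseudonymise(identifier: str) -> str:
    """Keyed (HMAC-SHA256) hash: stable under one key, not reversible
    without it, unlike a bare unsalted hash."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

record = {"subject_id": "jane.doe@example.org", "outcome": "rejected"}
record["subject_id"] = pseudonymise(record["subject_id"])
print(record)
```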
Amendment 432 #
Proposal for a regulation
Article 12 – paragraph 4 – subparagraph 1 (new)
The retention period must not exceed 10 years, unless specific regulations establish otherwise.
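Enforcing such a retention ceiling is straightforward in code. A minimal sketch; the ten-year figure comes from the amendment, while the log structure is invented:

```python
# Illustrative sketch of enforcing a maximum log-retention period of the
# kind proposed here.
from datetime import datetime, timedelta, timezone

MAX_RETENTION = timedelta(days=365 * 10)  # ten years, per the amendment

def purge_expired(logs: list[dict], now: datetime | None = None) -> list[dict]:
    """Keep only automatically generated log entries younger than the limit."""
    now = now or datetime.now(timezone.utc)
    return [entry for entry in logs
            if now - entry["timestamp"] <= MAX_RETENTION]

logs = [
    {"event": "inference", "timestamp": datetime(2013, 1, 1, tzinfo=timezone.utc)},
    {"event": "inference", "timestamp": datetime.now(timezone.utc)},
]
print(len(purge_expired(logs)))  # the 2013 entry is dropped
```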
Amendment 436 #
Proposal for a regulation
Article 13 – paragraph 3 – point d
(d) the human supervision measures referred to in Article 14, including the technical measures put in place to facilitate the interpretation of the outputs of AI systems by the users;
Amendment 437 #
Proposal for a regulation
Article 14 – title
Human supervision
Amendment 438 #
Proposal for a regulation
Article 14 – paragraph 1
1. High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which the AI system is in use. Human supervision should be proportionate to the task carried out by the system and should not compromise its efficiency or effectiveness.
Amendment 443 #
Proposal for a regulation
Article 14 – paragraph 2
2. Human supervision shall aim at protecting health, safety and fundamental human rights, preventing or minimising the risks that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, in particular when such risks persist notwithstanding the application of other requirements set out in this Chapter.
Amendment 444 #
Proposal for a regulation
Article 14 – paragraph 3 – introductory part
3. Human supervision shall be ensured through either one or all of the following measures:
Amendment 446 #
Proposal for a regulation
Article 14 – paragraph 4 – introductory part
4. The measures referred to in paragraph 3 shall enable the individuals to whom human supervision is assigned to do the following, as appropriate to the circumstances:
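Two of the supervision measures contemplated by Article 14, an in-built operational constraint that the system itself cannot override and an interrupt available to the supervising person, might look as follows in code. A hypothetical sketch; all names are invented:

```python
# Hypothetical sketch of two supervision measures: an in-built operational
# constraint the system cannot override, and a human-operated interrupt.
class StopRequested(Exception):
    pass

class SupervisedSystem:
    CONFIDENCE_FLOOR = 0.90  # in-built constraint, not settable at runtime

    def __init__(self):
        self._stopped = False

    def stop(self):
        """Callable by the human supervisor at any time."""
        self._stopped = True

    def decide(self, score: float) -> str:
        if self._stopped:
            raise StopRequested("halted by human supervisor")
        # Below the constraint the system must defer to a person rather
        # than act autonomously.
        if score < self.CONFIDENCE_FLOOR:
            return "refer_to_human"
        return "act"

system = SupervisedSystem()
print(system.decide(0.95))  # act
print(system.decide(0.70))  # refer_to_human
system.stop()
# system.decide(0.99) would now raise StopRequested
```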
Amendment 454 #
Proposal for a regulation
Article 15 – paragraph 1
Amendment 495 #
Proposal for a regulation
Article 29 – paragraph 2
2. The obligations in paragraph 1 are without prejudice to other user obligations under Union or national law and to the user’s discretion in organising its own resources and activities for the purpose of implementing the human supervision measures indicated by the provider.
Amendment 598 #
Proposal for a regulation
Article 57 – paragraph 4
4. The Board shall invite external experts and observers, including providers with appropriate skills and proven experience in supporting Member State authorities in the preparation and management of experimentation and test facilities, to attend its meetings and may hold exchanges with interested third parties to inform its activities to an appropriate extent. To that end the Commission may facilitate exchanges between the Board and other Union bodies, offices, agencies and advisory groups.
Amendment 615 #
Proposal for a regulation
Article 61 – paragraph 2 a (new)
2a. Since the sensitive nature of some high-risk AI systems, especially systems used by public authorities, agencies and institutions to prevent, investigate, detect or prosecute crimes, could result in significant restrictions on the collection and sharing of data between the end user and the provider, end users must involve the provider in the definition of aspects such as the nature of data made available for post-marketing monitoring and the degree of anonymisation of data. This should take place as early as the system design stage, in order to allow the provider to perform activities under the Regulation with a complete data set that has already been validated by the final user before the activity, and with a level of security that is proportionate to the task carried out by the system. The end user must remain responsible for the disclosure of data contained in such groups of data.
Amendment 631 #
Proposal for a regulation
Annex I – point b
Amendment 633 #
Proposal for a regulation
Annex I – point c
Amendment 634 #
Proposal for a regulation
Annex I – point c a (new)
(ca) Approaches based on the assessment of behavioural and psychological characteristics of individuals, including activities, interests, opinions, attitudes, values and lifestyles, recognised through automatic means;
Amendment 635 #
Proposal for a regulation
Annex III – paragraph 1 – introductory part
High-risk AI systems pursuant to Article 6(2) are the AI systems listed in any of the following areas, whose use or application poses a risk of harm to health and safety or a negative impact on the fundamental rights of natural persons, groups or society in general.
Amendment 643 #
Proposal for a regulation
Annex III – paragraph 1 – point 5 – point b
(b) Following the adoption of common specifications under Article 41 of this Regulation, AI systems intended to be used to evaluate the creditworthiness rating of natural persons or establish their credit score when granting access to credit or other essential services, with the exception of AI systems put into service by providers on a small scale for their own use and AI systems based on autonomous use under human supervision of linear regression, logistic regression, decision trees and other equally transparent, explicable and interpretable techniques;
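The carve-out rests on the idea that a decision tree's entire decision logic can be printed and audited. A minimal sketch assuming scikit-learn, with synthetic data and invented feature names:

```python
# Illustrative sketch of the exemption in this point: a credit model built
# on a decision tree, whose full decision logic can be printed and audited.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 2))                   # e.g. income, debt ratio
y = (X[:, 0] - X[:, 1] > 0).astype(int)         # synthetic repayment label

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

# The complete model is a finite set of human-readable if/else rules, which
# is what makes the technique "transparent, explicable and interpretable".
print(export_text(tree, feature_names=["income", "debt_ratio"]))
```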
Amendment 652 #
Proposal for a regulation
Annex III – paragraph 1 – point 8 a (new)
8a. Identification and categorisation of behaviour and cognitive bias of natural persons.
Amendment 653 #
Proposal for a regulation
Annex IV – paragraph 1 – point 2 – point b
(b) in so far as this is without prejudice to professional secrecy, and only when the request is proportionate to the scale of the interest being preserved, the design specifications of the system, namely the general logic of the AI system and of the algorithms; the key design choices including the rationale and assumptions made, also with regard to persons or groups of persons on which the system is intended to be used; the main classification choices; what the system is designed to optimise for and the relevance of the different parameters; the decisions about any possible trade-off made regarding the technical solutions adopted to comply with the requirements set out in Title III, Chapter 2;
Amendment 655 #
Proposal for a regulation
Annex IV – paragraph 1 – point 2 – point e
(e) assessment of the human supervision measures needed in accordance with Article 14, including an assessment of the technical measures needed to facilitate the interpretation of the outputs of AI systems by the users, in accordance with Article 13(3)(d);
Amendment 657 #
Proposal for a regulation
Annex IV – paragraph 1 – point 3
3. Detailed information about the monitoring, functioning and control of the AI system, in particular with regard to: its capabilities and limitations in performance, including the degrees of accuracy for specific persons or groups of persons on which the system is intended to be used and the overall expected level of accuracy in relation to its intended purpose; the foreseeable unintended outcomes and sources of risks to health and safety, fundamental rights and discrimination in view of the intended purpose of the AI system; the human supervision measures needed in accordance with Article 14, including the technical measures put in place to facilitate the interpretation of the outputs of AI systems by the users; specifications on input data, as appropriate;
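Reporting "degrees of accuracy for specific persons or groups" alongside overall accuracy can be done with a few lines of NumPy. A minimal sketch with invented group labels and synthetic data:

```python
# Minimal sketch of documenting per-group accuracy, as this point requires.
import numpy as np

def accuracy_by_group(y_true: np.ndarray, y_pred: np.ndarray,
                      groups: np.ndarray) -> dict[str, float]:
    """Overall and per-group accuracy, suitable for inclusion in the
    technical documentation."""
    report = {"overall": float((y_true == y_pred).mean())}
    for g in np.unique(groups):
        mask = groups == g
        report[str(g)] = float((y_true[mask] == y_pred[mask]).mean())
    return report

y_true = np.array([1, 0, 1, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 1])
groups = np.array(["A", "A", "B", "B", "B", "A"])
print(accuracy_by_group(y_true, y_pred, groups))
```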