
14 Amendments of Marina KALJURAND related to 2020/2216(INI)

Amendment 6 #
Draft opinion
Paragraph 1 a (new)
1 a. Recommends that Europe analyse the challenges created for consumers by AI and make the EU’s consumer rights standards fit for the 21st century; to that end, it must establish an AI European Certificate of Compliance with Ethical Principles to ensure European citizens’ trust in AI; this Certificate should be granted by an independent, public certification organisation after a thorough assessment of compliance with the Ethical Requirements put forward by the High-Level Expert Group on AI; the certification criteria and requirements for assessing compliance will be drawn up by this body in cooperation with the Commission and the Member States; suggests that certification and auditing mechanisms for automated data processing and decision-making techniques should be developed at both national and EU level to ensure their compliance with ethical principles and values; monitoring of compliance should be proportionate to the nature and degree of risk associated with the operation of the artificial intelligence application or system;
2020/12/21
Committee: ITRE
Amendment 38 #
Draft opinion
Paragraph 3
3. Emphasises that the COVID-19 crisis provides an opportunity to speed up digitalisation; calls for financial incentives for SMEs that want to enter new markets; calls for new and open frameworks for access to data for European SMEs and start-ups in order to support their growth by enabling the training, testing and development of AI-enabled systems and applications; calls for an inclusive digitisation of our societies that serves the interests of citizens by taking into account accessibility and affordability considerations; calls for coordinated action to address Europe’s digital divide, which has worsened as a result of the COVID-19 crisis, and for a fair and cooperative digital modernisation of the public sector aimed at a value-based digital transformation that promotes fundamental rights and democratic values;
2020/12/21
Committee: ITRE
Amendment 64 #
Draft opinion
Paragraph 5
5. Calls on the Commission to stop funding big companies and distributing the remaining funds by a shotgun approach; highlights that large technology companies and platforms with strategic market status in the DSM may leverage their positions not only in terms of the market but also in terms of access to and control of data, resulting in a possible concentration of AI innovation and future imbalances in the DSM; calls for winners to be picked and grown larger; suggests prioritising future areas for digital economic structures; highlights the need to support SMEs in mastering the twin transition to sustainability and digitalisation by safeguarding their access to the right skills, expertise and funding; highlights the need for this support to have a broad geographical coverage across Europe, including remote, rural and island areas, and to aim at strengthening the digital capabilities and infrastructure of smaller places at the periphery of Europe;
2020/12/21
Committee: ITRE
Amendment 67 #
Draft opinion
Paragraph 5 a (new)
5 a. Warns against the use of predictive technologies or perception manipulation techniques for market purposes by big tech companies and pledges to ensure that sensitive personal data, transaction data and metadata are not used for profit by big corporations without citizens’ awareness and clear consent; calls for these techniques to be classified in the highest category of the risk-level scale proposed by the Commission, given their specific and extremely sensitive nature as well as their potential for misuse; calls on the European Data Protection Board to issue guidelines on this issue and highlights the need to safeguard the algorithmic transparency of AI technologies and applications; stresses the need for the establishment of a thorough system of traceability of AI systems which is under human oversight, understandable by consumers and in line with data subjects’ reasonable expectations;
2020/12/21
Committee: ITRE
Amendment 88 #
Draft opinion
Paragraph 8 a (new)
8 a. Suggests that the EU must ensure minimum standards of fair working conditions for platform workers, in line with the European Pillar of Social Rights, as a requirement for platforms to be granted access to the EU digital single market; suggests that the EU should introduce rules governing the growing digitisation of workplace monitoring, as well as mechanisms and methodologies to assess the relevant risks, which have been heightened by the increasing blurring of the boundary between office and home environments; calls on the EU to establish collective bargaining agreements and umbrella protection mechanisms for all platform workers;
2020/12/21
Committee: ITRE
Amendment 99 #
Draft opinion
Paragraph 9
9. Recognises that AI deployment is key to European competitiveness in the digital era; highlights that, to facilitate the uptake of AI in Europe, a common European approach is needed to avoid internal market fragmentation, ensure the safety of Europeans’ data and guarantee that such data are not processed by non-EU bodies for profit-making and/or political purposes or used to train algorithms shared with authoritarian regimes;
2020/12/21
Committee: ITRE
Amendment 115 #
Draft opinion
Paragraph 10
10. Considers that access to big data is key for the development of AI; reiterates the need for a new approach to data ownership by data subjects in the context of AI-enabled systems in order to ensure privacy and control of aggregated data or metadata built on data points containing information including, but not limited to, time, location and transactions; calls for a new approach to data regulation; stresses that privacy and data protection must be guaranteed at all stages of the AI system’s life cycle and notes that any big data processing operation should be subject to an ex-ante and extensive Data Protection Impact Assessment;
2020/12/21
Committee: ITRE
Amendment 118 #
Draft opinion
Paragraph 10 a (new)
10 a. Suggests that public and private sector actors should develop and document internal processes to ensure that their design, development and ongoing deployment of algorithmic systems are transparent, explainable, auditable and continuously evaluated and tested, not only to detect possible technical errors but also to identify the possible legal, social and ethical impacts that such systems may generate;
2020/12/21
Committee: ITRE
Amendment 122 #
Draft opinion
Paragraph 10 b (new)
10 b. Demands that any artificial intelligence, robotics or related technology system be developed, deployed and used with "privacy by default" and in a manner that prevents both the possible identification of individuals from data that were previously processed on the basis of anonymity or pseudonymity and the generation, through automated means, of new, inferred and potentially sensitive data and forms of categorisation (metadata); calls on the Commission to develop robust anonymisation and pseudonymisation techniques and to identify best practices that meet the processing requirements of the GDPR;
2020/12/21
Committee: ITRE
Amendment 126 #
Draft opinion
Paragraph 10 c (new)
10 c. Strongly emphasises the need to protect consumers from microtargeting practices and suggests that such practices should be flagged and coupled with consumers’ right to request a report on the behavioural analytics used to target them; is of the opinion that targeted advertising practices should be explainable and should offer consumers the option of choosing the desired level of personalisation or percentage of microtargeting (e.g. on a scale of 0-100 %); strongly considers that the use of these practices should be subject to specific safeguards, such as the informed and explicit consent of the data owner, who should have the right to access effective remedies in case of misuse;
2020/12/21
Committee: ITRE
Amendment 129 #
Draft opinion
Paragraph 11
11. Warns against overregulating AI and discourages any "one-size-fits-all" approach to regulation; recalls that regulation must be balanced, agile, permanently evaluated and proportionate, and based on the current legislative instruments and best practices, except for high-risk areas, where a new regulatory approach should be devised;
2020/12/21
Committee: ITRE
Amendment 135 #
Draft opinion
Paragraph 11 a (new)
11 a. Recommends that determining the risk level and the classification of sectors as high- or low-risk should always derive from an impartial, regulated, inclusive, independent and external assessment that considers the ethical harms that can arise from artificial intelligence, robotics and related technologies in society, whether because of poor (unethical) design, inappropriate application or misuse; such an assessment needs to balance attention to abstract principles with specificity; strongly recommends a broad and inclusive debate and stakeholder consultation, which would contribute to creating trust among citizens regarding the assessment and classification of risks;
2020/12/21
Committee: ITRE
Amendment 141 #
Draft opinion
Paragraph 11 b (new)
11 b. Requests the Commission to determine the risk level of sectors by taking into account non-quantifiable risks and to pay particular attention to the identification and characterisation of the hazard, the assessment of the likelihood of its occurrence and the characterisation of the risk; asks the Commission to carefully evaluate all uncertainties and to report transparently on them, even when they cannot be modelled or expressed in quantitative terms; requests the Commission to apply the Ethical Requirements put forward by the High-Level Expert Group at the risk-management level and to consider the need to introduce a precautionary approach towards high-level or potentially irreversible risks;
2020/12/21
Committee: ITRE
Amendment 151 #
Draft opinion
Paragraph 12 a (new)
12 a. Calls on the Commission and the Member States to consider the creation of a European regulatory agency for AI and algorithmic decision-making tasked with:
1) auditing the AIAs of high-impact systems in order to approve or reject the proposed uses of algorithmic decision-making in highly sensitive and/or safety-critical application domains (private healthcare, for instance);
2) investigating suspected cases of rights violations by algorithmic decision-making systems, for both individual decision instances (singular aberrant outcomes, for example) and statistical decision patterns (discriminatory bias, for instance);
3) assessing compliance with the proposed Ethics Requirements and conducting periodic ethics reviews and audits;
2020/12/21
Committee: ITRE