
7 Amendments of Romeo FRANZ related to 2018/2088(INI)

Amendment 11 #
Draft opinion
Paragraph 1
1. Calls on the Commission to collaborate closely with technical and social science researchers to investigate, prevent, and mitigate potential harmful effects of malicious uses of AI, and to develop tools, policies, and norms appropriate to AI applications; notes that best practices should be identified in research areas with more mature methods for addressing dual-use concerns, such as security and privacy, and that they should be applied to the area of AI;
2018/11/09
Committee: LIBE
Amendment 18 #
Draft opinion
Paragraph 2
2. Highlights the fact that malicious use of AI could threaten digital security, physical security, and political security, as it could be used to conduct large-scale, finely targeted and highly efficient attacks on information society services and connected machinery, as well as disinformation campaigns;
2018/11/09
Committee: LIBE
Amendment 28 #
Draft opinion
Paragraph 4
4. Calls on the Commission to ensure that any EU framework on AI guarantees personal data protection, including the principles of lawfulness, fairness and transparency, data protection by design and by default, purpose limitation, data minimisation and storage limitation, in compliance with Union data protection law;
2018/11/09
Committee: LIBE
Amendment 39 #
Draft opinion
Paragraph 5
5. Stresses that European standards for AI must be based on the principles of digital ethics, human dignity, respect for fundamental rights, data protection and security, thus contributing to building trust among users; emphasises the importance of capitalising on the EU’s potential for creating a strong infrastructure for AI systems rooted in high standards of data protection and respect for humans;
2018/11/09
Committee: LIBE
Amendment 50 #
Draft opinion
Paragraph 7
7. Underlines that any AI system must be developed with respect for the principles of transparency and algorithmic accountability, allowing for human understanding of its actions; notes that, in order to build trust in and enable the progress of AI, users must be aware of how their data, together with other data and data inferred from it, is used, and of when they are communicating or interacting with an AI system or with humans supported by an AI system; believes that this will contribute to better understanding and confidence among users when dealing with machines; stresses that the explainability of decisions must be an EU standard in accordance with Articles 13, 14 and 15 of the GDPR; recalls that the GDPR already provides for a right to be informed about the logic involved in data processing; stresses that individuals have the right to have a final determination made by a person;
2018/11/09
Committee: LIBE
Amendment 64 #
Draft opinion
Paragraph 8
8. Stresses the importance of the quality of the data used in the development of algorithms, as the standard of AI systems depends on the data used to train them; notes that the use of low-quality, outdated, incomplete or incorrect data may lead to poor predictions and, in turn, to discrimination and bias, and that it is therefore important in the age of big data to ensure that algorithms are trained on representative samples of high-quality data in order to achieve statistical parity; stresses that even high-quality training data can perpetuate existing discrimination and injustice if not used carefully and consciously; emphasises that, even if such standards are met, predictive analysis based on AI can only offer a statistical probability and can by no means predict individual behaviour; recalls that under the GDPR the further processing of personal data for statistical purposes, including AI training, may only result in aggregate data which cannot be re-applied to individuals;
2018/11/09
Committee: LIBE
Amendment 67 #
Draft opinion
Paragraph 8 a (new)
8 a. Underlines that the following principles should be applied to overall strategies on AI and robotics:
a) Robots and artificial intelligence are multi-use tools. They should not be designed solely or primarily to kill or harm humans. Individual rights and fundamental freedoms must be guaranteed, in particular human integrity (physical and mental), human dignity and identity. We underline the primacy of the human being over the sole interest of science or society;
b) Humans are responsible agents. Lawmakers should ensure that emerging technologies comply with existing laws and fundamental rights;
c) Robots and artificial intelligence, as products, should be designed to be safe, secure and fit for purpose, like other products;
d) Robots and artificial intelligence are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead, their machine nature should be transparent;
e) Legal responsibility for a robot or an artificial intelligence should be attributed to a person. In cases of gross negligence regarding safety and security, manufacturers shall be held responsible even where non-liability clauses exist in user agreements;
f) In accordance with responsible research and innovation, the precautionary principle should be taken into account;
2018/11/09
Committee: LIBE