
5 Amendments of Nathalie LOISEAU related to 2020/2013(INI)

Amendment 15 #
Draft opinion
Paragraph 3
3. Considers in particular that the use of AI-enabled systems in armed conflicts must, as provided by the principles of the Martens Clause, abide by the general principles of IHL and must never breach or be permitted to breach the dictates of the public conscience and humanity; considers that this clause should guide the admissibility of an AI-enabled system in warfare; calls on the AI research community to integrate this principle in all AI-enabled systems intended to be used in warfare; considers that no authority can issue a derogation from those principles or certify an AI-enabled system;
2020/06/04
Committee: AFET
Amendment 37 #
Draft opinion
Paragraph 6
6. Stresses the need for robust testing and evaluation systems based on norms to ensure that, during the entire lifecycle of AI-enabled systems in the military domain, in particular during the phases of human-machine interaction, machine learning and adjusting and adapting to new circumstances, the systems do not go beyond the intended limits and must be used at all times in compliance with the applicable international law;
2020/06/04
Committee: AFET
Amendment 43 #
Draft opinion
Paragraph 7
7. Highlights that any AI-enabled system used in the military domain must, as a minimum set of requirements, be able to distinguish between combatants and non-combatants on the battlefield, not have indiscriminate effects, not cause unnecessary suffering to persons, not be biased or be trained on biased data, and be in compliance with the IHL general principles of military necessity and humanity, and the implementing principles of proportionality in the use of force and precaution prior to engagement;
2020/06/04
Committee: AFET
Amendment 48 #
Draft opinion
Paragraph 8
8. Stresses that in the use of AI-enabled systems in security and defence, comprehensive situational understanding of the operator, the ability to detect possible changes in circumstances and the ability to discontinue an attack are needed to ensure that IHL principles, in particular distinction, proportionality and precaution in attack, are fully applied across the entire chain of command and control; stresses that AI-enabled systems must allow the military leadership to assume its full responsibility throughout each of their uses;
2020/06/04
Committee: AFET
Amendment 62 #
Draft opinion
Paragraph 10
10. Calls on the HR/VP, in the framework of the ongoing discussions on the international regulation of lethal autonomous weapon systems by states parties to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons (CCW), to remain engaged and help streamline the global debate on core issues and definitions where consensus has not been reached, in particular as regards concepts and characteristics of AI-enabled lethal autonomous weapons and their functions in the identification, selection and engagement of a target, application of the concept of human responsibility in the use of AI-enabled systems in defence, and the degree of human/machine interaction, including the concept of human control and judgment, during the different stages of the lifecycle of an AI-enabled weapon.
2020/06/04
Committee: AFET