Progress: Procedure completed
Role | Committee | Rapporteur | Shadows
---|---|---|---
Lead | JURI | VOSS Axel (EPP) | GEBHARDT Evelyne (S&D), SCHREINEMACHER Liesje (Renew), LAGODINSKY Sergey (Verts/ALE), BECK Gunnar (ID), ZŁOTOWSKI Kosma (ECR), MAUREL Emmanuel (GUE/NGL)
Committee Opinion | IMCO | HAHN Svenja (Renew) | KOULOGLOU Stelios (GUE/NGL), KOLAJA Marcel (Verts/ALE), MALDONADO LÓPEZ Adriana (S&D), BOURGEOIS Geert (ECR)
Committee Opinion | TRAN | MONTEIRO DE AGUIAR Cláudia (EPP) | CUTAJAR Josianne (S&D), ROOS Robert (ECR)
Committee Opinion | LIBE | |
Legal Basis: RoP 47
Subjects
Events
The European Parliament adopted, by 626 votes to 25 with 40 abstentions, a resolution containing recommendations to the Commission on a civil liability regime for artificial intelligence (AI).
Parliament called on the Commission to propose a regulation laying down rules for the civil liability claims of natural and legal persons against operators of AI-systems.
Liability and artificial intelligence
For more than 30 years, the Product Liability Directive has proven to be an effective tool for obtaining compensation for damage caused by a defective product; it should nevertheless be revised to adapt it to the digital world and to meet the challenges posed by emerging digital technologies.
Members considered it necessary to ensure maximum legal certainty throughout the liability chain, including for the producer, operator, injured parties and any other third parties, in order to respond to the new legal challenges created by developments in artificial intelligence (AI) systems. Civil liability rules for AI should strike a balance between protecting citizens and supporting technological innovation.
Scope of application
The requested proposal for a Regulation should apply on the territory of the Union where a physical or virtual activity, device or process driven by an AI-system has caused harm or damage to the life, health or physical integrity of a natural person, to the property of a natural or legal person, or has caused significant immaterial harm resulting in a verifiable economic loss.
Parliament considered that operator liability rules should apply to all types of AI system operations, regardless of the location of the operation and whether it is of a physical or virtual nature.
Strict liability for high-risk AI-systems
Under the requested proposal, the operator of a high-risk AI-system should be strictly liable for any harm or damage that was caused by a physical or virtual activity, device or process driven by that AI-system. The operator should not be able to exonerate itself from liability by claiming that it acted with due diligence.
Although high-risk AI technologies are still rare, operators of high-risk AI-systems should take out liability insurance similar to that for motor vehicles. The compulsory insurance regime for high-risk AI-systems should cover the amounts and the extent of compensation. Uncertainty regarding risks should not make insurance premiums prohibitively high and thereby become an obstacle to research and innovation.
Compensation
Under the requested regulation, an operator of a high-risk AI-system that has been held liable for harm or damage under this Regulation should compensate:
- up to a maximum amount of EUR two million in the event of the death of, or in the event of harm caused to the health or physical integrity of, an affected person, resulting from an operation of a high-risk AI-system;
- up to a maximum amount of EUR one million in the event of significant immaterial harm that results in a verifiable economic loss or of damage caused to property.
Civil liability claims based on injury to life, health or physical integrity should be subject to a special limitation period of 30 years from the date on which the injury occurred. A limitation period of 10 years would apply from the date on which the property damage occurred or on which the verifiable economic loss resulting from the significant immaterial harm occurred.
Fault-based liability for other AI-systems
The operator of an AI-system that does not constitute a high-risk AI-system should be subject to fault-based liability for any harm or damage that was caused by a physical or virtual activity, device or process driven by the AI-system. The operator should not be liable if he or she can prove that the harm or damage was caused without his or her fault.
Monitoring developments
The Commission is called on to work closely with the insurance market to develop innovative insurance products that can fill the insurance gap. Any future changes to the Regulation should go hand in hand with the necessary revision of the Product Liability Directive, in order to revise it in a comprehensive and coherent manner and to ensure the rights and obligations of all parties involved throughout the liability chain.
Parliament recommended that an exhaustive list of all high-risk AI systems be set out in an annex to the proposed Regulation. In view of rapid technological developments, the Commission should review this annex at least every six months and, if necessary, amend it by means of a delegated act.
Documents
- Results of vote in Parliament
- Decision by Parliament: T9-0276/2020
- Debate in Parliament
- Committee report tabled for plenary, single reading: A9-0178/2020
- Committee opinion: PE646.911
- Committee opinion: PE648.381
- Amendments tabled in committee: PE652.460
- Amendments tabled in committee: PE652.518
- Committee draft report: PE650.556
Votes
A9-0178/2020 - Axel Voss - Resolution
Amendments | Dossier
---|---
588 | 2020/2014(INL)
2020/05/18
TRAN
70 amendments...
Amendment 1 #
Draft opinion Recital A A. whereas artificial intelligence (“AI”) and other emerging digital technologies have the potential to transform our societies and economies for the better; nonetheless, it is impossible to completely exclude the possibility of damage, injury or loss of life resulting from the
Amendment 10 #
Draft opinion Recital B d (new) B d. Whereas people are far less tolerant for errors caused by machines and algorithms than by people;
Amendment 11 #
Draft opinion Recital C
Amendment 12 #
Draft opinion Recital D D. whereas Union and national legislation should ensure high product safety and a sound system management both ex ante and throughout a
Amendment 13 #
Draft opinion Recital D D. whereas Union and national legislation should ensure high product and service safety both ex ante and throughout a product’s life cycle, while facilitating the compensation of victims ex post;
Amendment 14 #
Draft opinion Recital D a (new) D a. whereas technological development in AI should remain human-centric and products and applications using AI should be conducive to human growth and a good quality of life;
Amendment 15 #
Draft opinion Paragraph 1 1. Underlines that AI can be applied at different levels in vehicles and on transport infrastructure and has an important impact on their autonomy and consequently on civil liability; calls for EU wide clear definitions for all types of vehicles and infrastructure running AI software and a corresponding risk classification to support a liability mechanism in clarifying issues of responsibility.
Amendment 16 #
Draft opinion Paragraph 1 1. Underlines that AI
Amendment 17 #
Draft opinion Paragraph 1 1. Underlines that AI can be applied at different levels in
Amendment 18 #
Draft opinion Paragraph 1 a (new) 1 a. Calls on the Commission to develop an EU wide civil liability mechanism for AI applications in transport, with the objective of setting clear criteria for the establishment of liability to avoid a counterproductive fragmented case-by-case approach in different Member States.
Amendment 19 #
Draft opinion Paragraph 1 a (new) 1 a. Underlines that the transport sector constitutes one of the sectors where risks for human safety, health or life are higher and therefore considers that specific liability rules should apply to it to ensure the highest safety and security standards possible;
Amendment 2 #
Draft opinion Recital A a (new) A a. whereas facilitating the development of new AI-based transport technologies, products and services, as well as encouraging AI deployment and uptake in Europe, should be a top priority for Union and an underlying objective for developing the liability framework concerning them.
Amendment 20 #
Draft opinion Paragraph 1 a (new) 1 a. that AI should also incorporate blockchain and distributed ledger technologies (DLT) due to their importance in the transaction, communication and information-sharing;
Amendment 21 #
Draft opinion Paragraph 1 a (new) 1 a. Stresses that safety is of paramount importance in the transport sector and that safety and liability are two faces of the same coin;
Amendment 22 #
Draft opinion Paragraph 1 b (new) 1 b. Recalls that the transport sector has been integrating AI technologies for decades, in particular with the introduction of the automation of train operation (ATO), including in urban areas where fully automated, driverless operations have increased system availability, network capacity and operational efficiency;
Amendment 23 #
Draft opinion Paragraph 2 2. Underlines that automated functionalities can bring significant safety improvements in the medium and long term as well as unintended consequences (e.g. cybersecurity, data privacy); notes that AI could also be used for planning and guiding logistics chains, and for increasing efficiency, resilience, reliability, sustainability and flexibility;
Amendment 24 #
Draft opinion Paragraph 2 2. Underlines that automated functionalities can bring significant safety improvements in the medium and long term; notes that AI could also be used for planning and guiding logistics chains, and for increasing efficiency, resilience, reliability and flexibility. The coexistence of various levels of automation represents a challenge;
Amendment 25 #
Draft opinion Paragraph 2 2. Underlines that automated functionalities can bring significant safety improvements in the medium and long term; notes that AI could also be used for planning and guiding logistics and block chains, and for increasing efficiency, resilience, reliability and flexibility;
Amendment 26 #
2 a. Underlines the risks pertaining from mixed traffic (featuring both traditional and autonomous vehicles) that has shown to bear the highest accident risk, thus calling for more research and development by both public and private means and more testing, to enhance product safety and as a result road safety, but not least to also provide concrete data helping further development and also to adapt civil liability rules;
Amendment 27 #
Draft opinion Paragraph 2 a (new) 2 a. Notes that smart cities will manage traffic through AI based systems, constantly communicating with vehicles, drones, automated machinery, and infrastructure; deems essential that a new civil liability regime for AI is designed to factor in all possible risks emerging from such new digital interactions between infrastructure and vehicles of all categories.
Amendment 28 #
Draft opinion Paragraph 2 b (new) 2 b. Draws attention to the increased use of unmanned aerial vehicles (UAVs) for commercial uses including, but not limited to, surveillance, site inspection, photography and parcel delivery; notes that such increase in use, particularly in urban areas, will continuously test the civil liability regime in place.
Amendment 29 #
Draft opinion Paragraph 2 b (new) 2 b. Underlines that a possible solution to address the existing gaps and shortcomings of rules could be the setting up of a framework for no-fault insurance for damage resulting from autonomous vehicles or the eventual reassessment of the Motor insurance Directive;
Amendment 3 #
Draft opinion Recital A a (new) A a. whereas across different transport industries, a range of scales of automation and artificial intelligence have been applied;
Amendment 30 #
Draft opinion Paragraph 3 3. Stresses the importance of defining a clear division of responsibilities between software developers, manufacturers of various components, service providers
Amendment 31 #
Draft opinion Paragraph 3 3. Stresses the importance of defining a clear division of responsibilities between software developers, manufacturers of various components, service providers and operators and end users; stresses that ultimately a natural person must be responsible for the algorithm that guides ethical decisions of vehicles with high levels of automation;
Amendment 32 #
Draft opinion Paragraph 3 3. Stresses the importance of defining a clear division of responsibilities between software developers, manufacturers of various components, service providers and operators and end users, as well as upholding the rights of consumers and ensuring that they know exactly who to contact;
Amendment 33 #
Draft opinion Paragraph 3 3. Stresses the importance of defining a clear division of responsibilities between software developers, manufacturers
Amendment 34 #
Draft opinion Paragraph 3 a (new) 3 a. Highlights the fundamental role the precautionary principle, enshrined in article 191 of the Treaty on the Functioning of the European Union (TFEU), holds for risk analysis and risk management; urges for the respect of such principle to ensure the highest level of protection for citizens, consumers and users in the deployment of AI systems in high-risk sectors.
Amendment 35 #
Draft opinion Paragraph 3 a (new) 3 a. Stresses that AI systems should not damage nor hurt human physical and psychological integrity; calls therefore on AI systems to be technically robust in order for them not to be used for harmful purposes;
Amendment 36 #
Draft opinion Paragraph 3 a (new) 3a. Considers that end-users should be given relevant information and instructions for use of products incorporating AI;
Amendment 37 #
Draft opinion Paragraph 4 4. Underlines that for AI-related applications with a specific high-risk profile, there is a need for a risk-based approach depending on the levels of automation;
Amendment 38 #
Draft opinion Paragraph 4 4. Underlines that for AI-related applications with a specific high-risk profile, there is a need for a risk-based approach
Amendment 39 #
Draft opinion Paragraph 4 4. Underlines that for AI-related applications with a specific high-risk profile, there is a need for a risk-based approach depending on the levels of automation; stresses the need for a suitable approach to data protection also;
Amendment 4 #
Draft opinion Recital B B. whereas AI in transport is driving the evolution of the next generation of IT systems and its application involves using many types of technologies such as autonomous vehicles and traffic management solutions, particular attention also needs to be paid to interoperability;
Amendment 40 #
Draft opinion Paragraph 4 4. Underlines that for AI-related applications with a specific high-risk profile, there is a need for a risk-based approach depending on the levels of automation; risks relating to hacking and cybercrime need to be adequately addressed
Amendment 41 #
Draft opinion Paragraph 4 4. Underlines that for AI-related applications in the transport sector with a specific high-risk profile, there is a need for a risk-based approach depending on the levels of automation and self-learning of the system;
Amendment 42 #
Draft opinion Paragraph 4 a (new) 4 a. Emphasises the need of a swift compensation for victims regardless of the chain of liability. The first objective should be to allow victims to be taken care of and compensated, especially if it takes a long time to establish liability.
Amendment 43 #
Draft opinion Paragraph 5 5. Recommends that when an operator has a higher degree of control than the owner or user of an actual product or service equipped with AI, that operator is best positioned to manage the risks and should therefore be held liable; notes that each obligation should rest on the actor who is best placed to address the risk; stresses that fundamental principles for the development of AI transport services is the consent of the consumer and her anonymity, without any mandatory nature of using contact-tracing applications; urges therefore for AI services to be user-based; urges the Commission to set up means to certify these services in order to prevent the proliferation of harmful contact-tracing applications;
Amendment 44 #
Draft opinion Paragraph 5 5. Recommends that when an operator has a higher degree of control than the owner or user of an actual product or service equipped with AI, that operator is best positioned to manage the risks and should therefore be
Amendment 45 #
Draft opinion Paragraph 5 a (new) 5 a. Notes that the protection of EU citizens and businesses using those technologies requires the consideration of liabilities of the different involved parties irrespectively of the fact that those organisations are EU-based organizations or not (extra-territorial effect).
Amendment 46 #
Draft opinion Paragraph 6 6. Emphasises the need to guarantee at least the same level of product safety as that currently existing, also taking account of the EU vision zero target, to ease the remedy to victims of accidents, to avoid increasing current litigation costs; and to avoid legal uncertainty, especially for businesses that
Amendment 47 #
Draft opinion Paragraph 6 6. Emphasises the need to guarantee the
Amendment 48 #
Draft opinion Paragraph 6 6. Emphasises the need to guarantee at least the same level of product safety as that currently existing, to ease the remedy to
Amendment 49 #
Draft opinion Paragraph 6 a (new) 6 a. Considers that the proposal for a Regulation on promoting fairness and transparency for business users of online intermediation services (COM/2018/238 final) is a step towards a just level playing field for SMEs competing with large corporations in markets for digital services also implementing AI, and asks for its completion after being updated and aligned with the new Digital Strategy set up by the Commission in its Communication on Shaping Europe’s Digital Future of 19 February 2020 (COM(2020)67 final);
Amendment 5 #
Draft opinion Recital B B. whereas AI in transport is driving the evolution of the next generation of IT systems and its application involves using many types of technologies such as autonomous vehicles, unmanned aircraft systems (UAS) and traffic management solutions;
Amendment 50 #
Draft opinion Paragraph 6 a (new) 6 a. Believes that despite the level of automation and of integration of artificial intelligence of the transport systems and vehicles, liability should always lie with natural and legal persons in order to ensure legal certainty and to encourage investment and the correct uptake of the technology;
Amendment 51 #
Draft opinion Paragraph 7 7. Stresses the importance of ensuring that drivers are always fully aware of a vehicle’s level of automation and their level of liability: drivers should be informed about their vehicles’ AI systems and related limitations of such systems such as activation, deactivation, failure; moreover, in-vehicle features should periodically remind the driver that he or she is in charge of monitoring the vehicle status; stresses that drivers cannot be held liable when automatic driving assistance systems err if they are found to have lawfully used these systems.
Amendment 52 #
Draft opinion Paragraph 7 7. Stresses the importance of ensuring that drivers are always fully aware of a vehicle’s level of automation and their level of liability: drivers should be informed about their vehicles’ AI systems
Amendment 53 #
Draft opinion Paragraph 7 7. Stresses the importance of ensuring that drivers are properly trained and always fully aware of a vehicle’s level of automation and their level of liability: drivers should be informed about their vehicles’ AI systems and related limitations of such systems such as activation, deactivation, failure; moreover, in-vehicle features should periodically remind the driver that he or she is in charge of monitoring the vehicle status;
Amendment 54 #
Draft opinion Paragraph 8 8. Notes that there is a need to deploy event recorders for use in the event of severe accidents, in full respect of data protection and privacy law; These recorders should in no circumstances be usable/used as permanent tracing systems, therefore data collected must be deleted in the determined and strictly limited timeframe;
Amendment 55 #
Draft opinion Paragraph 8 8. Notes that there is a need to deploy event record
Amendment 56 #
Draft opinion Paragraph 8 a (new)
Amendment 57 #
Draft opinion Paragraph 9 9. Calls for further analysis of the need to adapt the European Driving Licence Directive due to automated functionalities of vehicles; urges moreover the Commission to carry out a periodic assessment of transport European regulatory framework to ensure it can respond to the safety and liability challenges related to the integration of AI technologies;
Amendment 58 #
Draft opinion Paragraph 9 a (new) 9 a. Believes human-centricity should be the basis for any update and development of regulatory framework related to the automation and AI- integration of transport;
Amendment 59 #
Draft opinion Paragraph 10 10. Underlines that liability schemes in the event of an accident or a violation of traffic legislation need to be carefully designed for each level of automation and AI integration and communicated in a clear way to the users in order to ensure a smooth transition between full driver liability to full manufacturer and
Amendment 6 #
Draft opinion Recital B B. whereas AI in transport is driving the evolution of the next generation of IT systems and its application involves using many types of technologies such as autonomous vehicles and smart traffic management solutions;
Amendment 60 #
Draft opinion Paragraph 11 11. Notes that
Amendment 61 #
Draft opinion Paragraph 11 11. Notes that automated vehicles deal with variable signals and conditions; calls as a result for a regular update of digital maps providing a compulsory minimum set of information about the road network and any hazards or obstacles that might be encountered;
Amendment 62 #
Draft opinion Paragraph 11 11. Notes that automated vehicles deal with variable signals and conditions; calls as a result for a regular update of digital maps, traffic management systems and data sharing rules providing a compulsory minimum set of information about the road network;
Amendment 63 #
Draft opinion Paragraph 11 a (new) 11 a. Highlights that AI is one of the most important applications of the data economy; recalls that AI-based systems have a strong data dependency, and rely on data accuracy and relevance; calls therefore the Commission to explore the possibility to include in the Union product safety legislation requirements addressing the risks to safety of faulty data;
Amendment 64 #
Draft opinion Paragraph 11 a (new) 11 a. Stresses that regional and local competences as regards to AI services, where existent, should be guaranteed and that notice-and-action mechanisms should be based on the principle of subsidiarity and therefore recognise these type of competences in order to guarantee that regional administrations do not lose competences;
Amendment 65 #
Draft opinion Paragraph 11 b (new) 11 b. Considers that AI should be sustainably respectful with the environment as well as it should promote research to achieve UN’s Sustainable Development Objectives;
Amendment 66 #
Draft opinion Paragraph 12 12.
Amendment 67 #
Draft opinion Paragraph 12 12. Asks the Commission to present guidelines to avoid fragmented regulatory approaches at national level, taking into consideration the Product Liability Directive and existing national liability regimes; emphasizes how the aforementioned fragmentation will be extremely damaging for the development of such technologies and for the competitiveness of EU businesses and SMEs.
Amendment 68 #
Draft opinion Paragraph 12 12. Asks the Commission to present guidelines to avoid fragmented regulatory approaches at national level, taking into consideration the Product Liability Directive and national liability regimes; stresses the need for a uniform European policy to be adopted by all Member States.
Amendment 69 #
Draft opinion Paragraph 12 a (new) 12 a. Notes the tremendous potential AI vehicles hold for persons with disability and reduced mobility, increasing their participation in individual road transport and improving their quality of life; stresses the need for high scrutiny under an EU civil liability regime for AI products in ensuring the safety of persons with disability and reduced mobility.
Amendment 7 #
Draft opinion Recital B a (new) B a. Whereas there are five levels of autonomy for automated driving systems, ranging from complete driver control to full autonomy;
Amendment 70 #
12 b. Calls for the establishment of a compensation fund pooled in by manufacturers, guaranteeing against damages, especially in relation to AI- related applications with a specific high- risk profile.
Amendment 8 #
Draft opinion Recital B b (new) B b. Whereas surveys have found that up to ninety percent of traffic accidents are caused at least in part by human error;
Amendment 9 #
Draft opinion Recital B c (new) B c. Whereas when vehicles are to be truly autonomous, they will need to replicate the human decision-making process; whereas some decisions are more than just a mechanical application and seem to require a sense of ethics;
source: 652.306
2020/05/27
IMCO
88 amendments...
Amendment 1 #
Draft opinion Recital A A. whereas
Amendment 10 #
Draft opinion Recital B B. whereas the use and development of AI applications in products might also present challenges to the existing legal framework on products
Amendment 11 #
Draft opinion Recital B B. whereas the use and development of
Amendment 12 #
Draft opinion Recital B B. whereas the use and development of AI applications in products might also present challenges to the existing legal framework on products and reduce their effectiveness, thus potentially undermining consumer trust
Amendment 13 #
Draft opinion Recital B B. whereas the use, deployment and development of AI applications in products might also present challenges to the existing legal framework on products and reduce the
Amendment 14 #
Draft opinion Recital B a (new) Ba. whereas the vulnerability to cybersecurity threats, software updates, limited predictability and self-learning operations of AI may hamper compensations for claims in cases where this seems justified;
Amendment 15 #
Draft opinion Recital C C. whereas robust liability mechanisms remedying damage contribute to better protection of c
Amendment 16 #
Draft opinion Recital C C. whereas robust liability mechanisms remedying damage contribute to better protection of consumers, creation of trust in new technologies integrated in products and acceptance for innovation while ensuring legal certainty for business, in particular micro, small and medium enterprises;
Amendment 17 #
Draft opinion Recital C C. whereas robust liability mechanisms remedying damage contribute to better protection of consumers, creation of trust in new technologies integrated in products and acceptance for innovation while ensuring legal certainty for business; underlines that in order to build acceptance, the theoretical benefits of artificial intelligence should also contribute effectively to wellbeing and development;
Amendment 18 #
Draft opinion Recital C a (new) Ca. whereas the Report from the Commission to the European Parliament, the Council and the European Economic and Social Committee on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics (COM (2020) 64) and the White Paper On Artificial Intelligence - A European approach to excellence and trust (COM(2020)65) should be considered as the basis of the future European legislation;
Amendment 19 #
Draft opinion Recital C a (new) Ca. whereas the complexity of AI applications can make it nearly impossible to prove fault or damage in certain cases, presents new challenges as regards the burden of proof;
Amendment 2 #
Draft opinion Recital A A. whereas Artificial Intelligence (AI) plays an increasing role in our everyday lives and has the potential to contribute to the development of innovations in many sectors and offer benefits for consumers through innovative products and services
Amendment 20 #
Draft opinion Recital C b (new) Cb. whereas the Product Liability Directive is the existing regulatory framework on the responsibility for the final product;
Amendment 21 #
Draft opinion Paragraph 1
Amendment 22 #
Draft opinion Paragraph 1 1. Welcomes the Commission’s aim, which is to make the Union legal framework fit the new technological uses, deployments and developments, ensuring a high level of protection for consumers from harm caused
Amendment 23 #
Draft opinion Paragraph 1 1. Welcomes the Commission’s aim, which is to make the Union legal framework fit the new technological developments, ensuring a high level of protection for consumers from possible harm caused by new technologies while maintaining the balance with the
Amendment 24 #
Draft opinion Paragraph 1 1. Welcomes the Commission’s aim, which is to make the Union legal framework fit the new technological developments, ensuring a high level of protection for consumers from possible harm caused by new technologies while maintaining the balance with the needs of technological
Amendment 25 #
Draft opinion Paragraph 1 a (new) 1a. Emphasises that the Product Liability Directive was adopted in 1985 and was revised in 1999 and since then products evolved a lot, therefore the Product Liability Directive is not fit for purpose anymore and needs to be updated;
Amendment 26 #
Draft opinion Paragraph 2 2.
Amendment 27 #
Draft opinion Paragraph 2 2. Stresses the need to assess to what extent the existing liability framework, and in particular the Council Directive 85/374/EEC1 (the Product Liability Directive), needs to be updated in order to guarantee effective consumer protection and legal clarity for
Amendment 28 #
Draft opinion Paragraph 2 2.
Amendment 29 #
Draft opinion Paragraph 2 a (new) 2a. Recognises the challenge of determining liability where consumer harm results from autonomous decision- making processes; calls on the Commission to review that directive and consider adapting concepts as ‘product’ ‘damage’ and ‘defect’, in a way that is coherent with product safety and liability legislation, as well as adapting the rules governing the burden of proof while stressing that the burden of proof shall by no means lie on the consumer;
Amendment 3 #
Draft opinion Recital A A. whereas the use of Artificial Intelligence (AI) plays an increasing role in our everyday lives and has the potential to contribute to the deployment and development of innovations in many sectors and offer benefits for consumers through innovative products and services and, for businesses, in particular micro, small and medium enterprises (SMEs) through optimised performance;
Amendment 30 #
Draft opinion Paragraph 2 a (new) 2a. Highlights that any update of the Product liability framework should go hand in hand with the update of Directive 2001/95/EC (the Product Safety Directive) in order to ensure that AI systems integrate safety and security by design principles;
Amendment 31 #
Draft opinion Paragraph 2 b (new) 2b. Further stresses the need to reassess the timeframe during which the producer is held liable for defects caused by the product, as AI driven products can become unsafe during their lifecycle due to a software update or the lack thereof; simultaneously, and in cases where the supplier cannot be held liable, it might be justified to hold the producer liable for non-supply of a software update, which can fix the safety hazard;
Amendment 32 #
Draft opinion Paragraph 2 b (new) 2b. Highlights incentivisation of increased ex-ante investment in security by developers of AI systems as a suggested approach in order to improve security; highlights that public source code disclosure would incentivise secure software development while making it economically and legally more attractive;
Amendment 33 #
Draft opinion Paragraph 2 c (new) 2c. Points out that the revision of the Product Liability Directive should be aligned with and built on the EU General Data Protection Regulation (GDPR);
Amendment 34 #
Draft opinion Paragraph 3 3. Emphasises that any revision of the existing liability framework should aim to further harmonise liability rules in order to avoid fragmentation of the single market;
Amendment 35 #
Draft opinion Paragraph 3 3. Emphasises that any revision of the existing liability framework should aim to further harmonise liability rules in order to avoid fragmentation of the single market; asks the Commission to assess whether a Regulation on general product liability could contribute to this aim; stresses, however, the importance of ensuring that Union regulation remains limited to clearly identified problems for which feasible solutions exist and leaves room for further technological developments;
Amendment 36 #
Draft opinion Paragraph 3 3. Emphasises that any revision of the existing liability framework should aim to further harmonise liability rules in order to avoid fragmentation of the single market; asks the Commission to assess whether a Regulation on general product liability could contribute to this aim; stresses, however, the importance of ensuring that Union regulation remains limited to clearly identified problems for which feasible solutions exist and leaves room for further technological developments;
Amendment 37 #
Draft opinion Paragraph 3 3. Emphasises that any revision of the existing liability framework should aim to further harmonise liability rules in order to avoid fragmentation of the single market; stresses, however, the importance of ensuring that Union regulation remains limited to clearly identified problems for which feasible solutions exist and leaves room for further technological developments, including free and open source software;
Amendment 38 #
Draft opinion Paragraph 3 3. Emphasises that any revision of the existing liability framework should aim to further harmonise liability, and consumer protection rules in order to avoid fragmentation of the single market; stresses, however, the importance of ensuring that Union regulation remains limited to clearly identified problems for which feasible solutions exist and leaves room for further technological developments;
Amendment 39 #
Draft opinion Paragraph 3 3. Emphasises that any revision of the existing liability framework should aim to further harmonise liability rules in order to
Amendment 4 #
Draft opinion Recital A A. whereas Artificial Intelligence (AI) plays an increasing role in our everyday lives and has the potential to contribute to the development of innovations in many sectors and offer benefits for consumers through innovative products and services and, for businesses, through optimised performance and increased competitiveness;
Amendment 40 #
Draft opinion Paragraph 4 4. Calls on the Commission to assess whether definitions and concepts in the product liability framework need to be updated due to the specific characteristics of AI applications
Amendment 41 #
Draft opinion Paragraph 4 4. Calls on the Commission to
Amendment 42 #
Draft opinion Paragraph 4 4. Calls on the Commission to
Amendment 43 #
Draft opinion Paragraph 4 4. Calls on the Commission to
Amendment 44 #
Draft opinion Paragraph 5 5. Urges the Commission to
Amendment 45 #
Draft opinion Paragraph 5 5. Urges the Commission to scrutinise whether it is necessary to include software in the definition of ‘products’ under the Product Liability Directive and to update concepts such as ‘producer’, ‘damage’ and ‘defect’, and if so, to what extent; recommends that the basic distinction between a producer and its product, in this case, an artificial intelligence application, should remain and AI should not be granted its own autonomous personality; asks the Commission to also examine whether the product liability framework needs to be revised in order to protect injured parties efficiently as regards products that are purchased as a bundle with related services;
Amendment 46 #
Draft opinion Paragraph 5 5. Urges the Commission to scrutinise whether it is necessary to include software in the definition of ‘products’ under the Product Liability Directive and to update concepts such as ‘producer’, ‘damage’ and ‘defect’, and if so, to what extent; asks the Commission to also examine whether the product liability framework needs to be revised in order to protect injured parties efficiently as regards products that are purchased as a bundle with related services; calls on the Commission to also include the liability of platforms operating as online market places in their proposal for an updated Product Liability Directive;
Amendment 47 #
Draft opinion Paragraph 5 5. Urges the Commission to scrutinise
Amendment 48 #
Draft opinion Paragraph 5 5. Urges the Commission to
Amendment 49 #
Draft opinion Paragraph 5 5. Urges the Commission to scrutinise
Amendment 5 #
Draft opinion Recital A a (new) Aa. whereas these emerging digital technologies are transforming the characteristics of many products and services, requiring in turn a clear safety and liability framework, ensuring both consumer protection and legal certainty for businesses;
Amendment 50 #
Draft opinion Paragraph 5 5. Urges the Commission to
Amendment 51 #
Draft opinion Paragraph 5 5. Urges the Commission to scrutinise whether it is necessary to include software in the definition of ‘products’ under the Product Liability Directive and asks the Commission to update concepts such as ‘producer’, ‘damage’ and ‘defect’
Amendment 52 #
Draft opinion Paragraph 5 a (new) 5a. Stresses that the Product Liability Directive considers the moment when products are put into circulation as the decisive moment for the producers liability and that for AI systems the producer retains to some degree control after the product has been put into circulation, therefore asks the Commission to update this concept in its revision of Product Liability Directive;
Amendment 53 #
Draft opinion Paragraph 5 a (new) 5a. Stresses that AI systems and the devices that use them are products and must remain subject to the rules on products and not be treated as an exception;
Amendment 54 #
Draft opinion Paragraph 5 a (new) 5a. Calls on the Commission to clarify that the scope of the new legislation or the update of the Product Liability Directive should apply to all tangible and non- tangible goods, including digital services;
Amendment 55 #
Draft opinion Paragraph 5 a (new) 5a. Asks the Commission to consider the liability of online marketplaces by qualifying them as 'supplier' under the Product Liability Directive;
Amendment 56 #
Draft opinion Paragraph 5 b (new) 5b. Calls on the Commission to consider, in close coordination with corresponding possible adjustments to the Union safety framework, whether the notion of 'time when the product was put into circulation' currently used by the Product Liability Directive, is fit for purpose for emerging digital technologies, taking into account that they may be changed or altered under the producer's control after they have been placed on the market;
Amendment 57 #
Draft opinion Paragraph 5 c (new) 5c. Asks the Commission to consider holding a producer of specific emerging digital technologies liable for unforeseeable defects, in cases where it was predictable that unforeseen developments might occur;
Amendment 58 #
Draft opinion Paragraph 6 6.
Amendment 59 #
Draft opinion Paragraph 6 6. Stresses the importance of ensuring a fair
Amendment 6 #
Draft opinion Recital A a (new) Aa. whereas for the framework to be appropriate, it must cover all AI-based products and their components, including algorithms, software, and data used or produced by them;
Amendment 60 #
Draft opinion Paragraph 6 6. Stresses the importance of ensuring a fair liability system in the chain of commercial transaction that makes it possible for consumers to prove that a defect in a product caused damage, even if third party software is involved or the cause of a defect is
Amendment 61 #
Draft opinion Paragraph 7 7. Calls on the Commission to evaluate whether and to what extent the burden of proof should be reversed in order to empower harmed consumers while preventing abuse and providing legal clarity for businesses, as well as to ensure fairness and to mitigate the informational asymmetries impairing the situation of injured parties;
Amendment 62 #
Draft opinion Paragraph 7 7. Calls on the Commission to evaluate whether and to what extent the burden of proof should be reversed in order to empower harmed consumers while preventing abuse and providing legal clarity for businesses; stresses that any such finding, where demonstrated necessary, should be limited in scope;
Amendment 63 #
Draft opinion Paragraph 7 7. Calls on the Commission to rev
Amendment 64 #
Draft opinion Paragraph 7 7. Calls on the Commission to
Amendment 65 #
Draft opinion Paragraph 7 7. Calls on the Commission to evaluate whether and to what extent the burden of proof should be reversed in order to empower harmed consumers to defend their rights while preventing abuse and providing legal clarity for businesses;
Amendment 66 #
Draft opinion Paragraph 7 7. Calls on the Commission to
Amendment 67 #
Draft opinion Paragraph 7 7. Calls on the Commission to rev
Amendment 68 #
Draft opinion Paragraph 7 7. Calls on the Commission to
Amendment 69 #
Draft opinion Paragraph 7 a (new) 7a. Asks the Commission to assess the introduction of a duty on producers of emerging digital technologies to equip their products with means of recording information about the operation of the technology, in accordance with applicable data protection provisions and the rules concerning the protection of trade secrets, taking into account, amongst others, the likelihood that a risk of the technology materialises, whether such a duty is appropriate and proportionate and the technical feasibility and costs of it; failing to comply with this duty or refusing to give the victim reasonable access to this information would trigger a rebuttable liability presumption of the producer;
Amendment 7 #
Draft opinion Recital A b (new) Ab. whereas a common framework for the development, deployment and use of artificial intelligence, robotics and related technologies within the Union should both protect consumers from their potential risks and promote the trustworthiness of such technologies;
Amendment 70 #
Draft opinion Paragraph 7 a (new) 7a. Highlights that the development- risk principle in line with point (e) of Article 7of Council Directive 85/374/EEC proved to be important and reasonable;
Amendment 71 #
Draft opinion Paragraph 7 b (new) 7b. Underlines that explainability, interpretability and traceability of AI systems are key to ensure that liability mechanisms offer an adequate, efficient and fair allocation of responsibilities; therefore asks the Commission to issue binding rules for companies to publish transparency reports including the existence, functionality, process, main criteria, the logic behind, the data sets used and possible outcome of algorithmic systems and efforts to identify, prevent and mitigate damage caused by AI systems in a timely, accurate, easily-readable, and accessible manner;
Amendment 72 #
Draft opinion Paragraph 7 c (new) 7c. Calls on the Commission to issue binding rules for companies and public bodies to document the development of AI systems; notes in this regard that it is essential for the risk assessment documentation, the software documentation, the algorithms and data sets used to be fully accessible to market surveillance authorities, while respecting Union law; additional prerogatives should be given to market surveillance authorities in this respect;
Amendment 73 #
Draft opinion Paragraph 8 8. Highlights the need for a risk based approach to AI within the existing liability framework, which takes into account different levels of risk for consumers in specific sectors and uses of AI; underlines that such an approach, that might encompass two or more levels of risk, should be based on clear criteria and provide for legal clarity; proposes that these differences in approach be translated into different obligations as regards the manufacture of products, resulting in different liability regimes with clear mechanisms and scope; recommends that these obligations be reflected in safeguards in the configuration of AI systems, particularly as regards their interactions with third-party systems, especially on-line, or with connected objects; calls on the Commission to consider in particular laying down different obligations and liability regimes depending on whether the consumer or user of the AI product is a private individual or a professional, as follows:
– As regards professional civil liability, including the provision by an undertaking of subcontracted staff or employees:
○ A simple presumption of liability should apply to the provider of the AI product, whether it is the manufacturer, seller or licensor, the determining factor being, in the case of a product transmission chain, the moment when the capacity or configuration linked to the event giving rise to the liability is defined;
○ The provider in question should be able to exonerate itself from such liability by demonstrating a fault attributable to the professional user, subject to the correct functioning of the technological safeguards in respect of which its liability has been invoked, and provided that the professional user was familiar with the conditions of use of the AI system and that those conditions are readily understandable, for which matters both the obligation and the burden of proof should rest with the provider;
– As regards personal civil liability, without prejudice to the liability of professional principals for their employees:
○ Providers, manufacturers or resellers should be required to employ safeguards and configurations commensurate with the highest level of risk when placing AI products intended for private individuals on the market, in particular as regards communication with other systems (such as social networks or the internet) or as regards connected objects (such as security or alarm systems), in accordance with the standards laid down;
○ Any civil damage attributable to an AI product should automatically trigger the liability of the provider, which may exonerate itself by demonstrating compliance with the standards applicable to its product;
○ Given the unforeseeable nature of the effects and damage which may be caused by AI products, consideration could be given to limiting the amount of damages which may be claimed against a person sued for no-fault civil liability in connection with an AI product, without prejudice to the rules applicable to insurance for the excess;
Amendment 74 #
Draft opinion Paragraph 8 8. Highlights the need for a risk based approach to AI within the existing liability framework, which takes into account different levels of risk for consumers in specific sectors and uses of AI; underlines that such an approach, that might encompass two or more levels of risk, should be based on clear criteria and provide for legal clarity; further considers that those involved in the different stages of the development, deployment and use of AI-based systems should be held into account in proportion of their liability; suggests the use of distributed ledger technologies, such as blockchain, to improve product traceability, in order to better identify those involved in the different stages;
Amendment 75 #
Draft opinion Paragraph 8 8. Highlights the need for a risk based approach to AI within the existing liability framework, which takes into account different levels of risk for consumers and society at large in specific sectors and uses of AI; algorithmic systems that may cause physical or material damage, breach fundamental rights and freedoms, impact an individual’s access to critical resources, or impact their participation in society shall not be deemed to be in the lowest risk category; underlines
Amendment 76 #
Draft opinion Paragraph 8 8. Highlights th
Amendment 77 #
Draft opinion Paragraph 8 8. Highlights the need for a risk based approach to AI within the existing liability framework, which takes into account different levels of risk for consumers in specific sectors and uses of AI; underlines that such an approach, that might encompass two or more levels of risk, should be based on clear criteria and provide for legal c
Amendment 78 #
Draft opinion Paragraph 8 a (new) 8a. Calls on the Commission to remove notion such “time at which a product is put on the market” which is no longer relevant given the dynamic features of digital goods; points out that currently the producer continues to have control over the product for a long time after having put it onto the market; urges to review the timelines for bringing a claim under the Product Liability Directive;
Amendment 79 #
Draft opinion Paragraph 8 b (new) 8b. Stresses that the producer shall bear the liability for products from the EU, and for the products from outside EU, that are sold through online marketplace and when the producer cannot be identified, the online marketplace shall be liable as a supplier due to the fact that online marketplaces are no longer a passive intermediary;
Amendment 8 #
Draft opinion Recital A b (new) Ab. whereas the Union's existing safety and liability framework might need to be adapted, as highlighted by the Commission's Report on the safety and liability implications for Artificial Intelligence, the Internet of Things and robotics;
Amendment 80 #
Draft opinion Paragraph 9 9.
Amendment 81 #
Draft opinion Paragraph 9 9. Asks the Commission to carefully assess the
Amendment 82 #
Draft opinion Paragraph 9 9. Asks the Commission to
Amendment 83 #
Draft opinion Paragraph 9 a (new) 9a. Urges that AI systems intended for private individuals should have a limited lifetime, which would not rule out the reinstallation of the same system with identical configurations when the lifetime of the system installed at the time of sale expires; suggests that, during this lifetime, the manufacturer should have an obligation to guarantee conformity, which would be enforceable by means of regular technical inspections, the performance of which would trigger a standard extension of the applicable guarantee;
Amendment 84 #
Draft opinion Paragraph 9 a (new) 9a. Notes that the new legislation about product liability should also address the challenges algorithms present in terms of ensuring non-discrimination, transparency and explainability, as well as liability; points out the need to monitor algorithms and to asses associated risks, to use high quality and unbiased datasets, as well as to help individuals acquire access to high quality products;
Amendment 85 #
Draft opinion Paragraph 9 a (new) 9a. Stresses that the Commission should consider tailored liability rules in sectors where significant risks are likely to arise, which may potentially undermine fundamental rights and result in high costs in both human and social terms, such as where AI applications are deployed for educational purposes;
Amendment 86 #
Draft opinion Paragraph 9 b (new) 9b. Calls on the Commission to study the suitability of compulsory liability insurance for AI applications, which could provide protection to third parties exposed to an increased risk of harm and better access to compensation for victims; notes, however, that insurance offers for certain risks might be difficult to calculate due to missing experience in the particular case of AI; considers therefore that any legal provisions in this regard should be introduced with careful analysis and be balanced enough not to impede the deployment of AI technology in the Single Market and to effectively foster innovation.
Amendment 87 #
Draft opinion Paragraph 9 b (new) 9b. Strongly recommends that the Member States recruit to their judicial services full-time experts to assist those services in establishing the technical materiality of the circumstances of the case in order to determine the applicable liability, so as to enable the judicial authorities to resolve disputes swiftly, in accordance with the proper administration of justice, and without being dependent on external expertise which, given the specialised nature of AI, may only be available from industry professionals.
Amendment 88 #
Draft opinion Paragraph 9 b (new) 9b. Calls on the Commission to propose concrete measures (such as a registry of product liability cases) to enhance transparency and to monitor defective products circulating in the EU; it is essential to ensure high consumer protection and a high degree of information about the products that could be purchased.
Amendment 9 #
Draft opinion Recital A c (new) Ac. whereas product safety and product liability are two complementary mechanisms pursuing the same policy goal of a functioning single market for goods and services, and this Opinion suggests possible adjustments to the Union liability frameworks in light of the increased importance of emerging digital technologies;
source: 652.384
2020/05/28
JURI
430 amendments...
Amendment 1 #
Motion for a resolution Citation 1 a (new) - having regard to Article 169 of the Treaty on the Functioning of the European Union,
Amendment 10 #
Motion for a resolution Recital A A. whereas the concept of ‘liability’ plays an important double role in our daily life: on the one hand, it ensures that a person who has suffered harm or damage is entitled to claim and receive compensation from the party proven to be liable for that harm or damage, and on the other hand, it provides the economic incentives for natural and legal persons to avoid causing harm or damage in the first place or price the risk of having to compensate into their behaviour;
Amendment 100 #
Motion for a resolution Paragraph 13 13. Rec
Amendment 101 #
Motion for a resolution Paragraph 13 13. Recognises that the type of AI-system the deployer is exercising control over is a determining factor regarding liability; notes that an AI-system that entails a high inherent risk potentially endangers the general public to a much higher degree; considers that, based on the legal challenges that AI-systems pose to the existing liability regimes, it seems reasonable to set up a strict liability regime for those high-risk AI-systems;
Amendment 102 #
Motion for a resolution Paragraph 13 13. Recognises that the type of AI-system the
Amendment 103 #
Motion for a resolution Paragraph 13 13. Recognises that the type of AI-
Amendment 104 #
Motion for a resolution Paragraph 13 13. Recognises that the type of AI-system the
Amendment 105 #
Motion for a resolution Paragraph 14 14.
Amendment 106 #
Motion for a resolution Paragraph 14 14. Believes that an AI-system presents a high risk when its autonomous operation involves a significant potential to cause harm to one or more persons, in a manner that is random and impossible to predict in advance; considers that when determining whether an AI-system is high-risk, the sector in which significant risks can be expected to arise and the nature of the activities undertaken must also be taken into account; considers that the significance of the potential depends on the interplay between the severity of possible harm, the likelihood that the risk materializes and the manner in which the AI-system is being used;
Amendment 107 #
Motion for a resolution Paragraph 14 14. Believes that an AI-system presents a high risk when its autonomous operation involves a significant potential to cause harm to one or more persons, in a manner that is
Amendment 108 #
Motion for a resolution Paragraph 14 14. Believes that an AI-system presents a high risk when its autonomous operation involves a significant potential to cause harm to one or more persons, in a manner that is random and
Amendment 109 #
Motion for a resolution Paragraph 14 14. Believes that an AI-system presents a high risk when its autonomous operation involves a significant potential to cause physical, psychological or mental harm to one or more persons, in a manner that is random and impossible to predict in advance; considers that the significance of the potential depends on the interplay between the severity of possible harm, the likelihood that the risk materializes and the manner in which the AI-system is being used;
Amendment 11 #
Motion for a resolution Recital A A. whereas the concept of ‘liability’ plays an important double role in our daily life: on the one hand, it ensures that a person who has suffered harm or damage is entitled to claim compensation from the party proven to be liable for that harm or damage, and on the other hand, it provides
Amendment 110 #
Motion for a resolution Paragraph 15 15. Recommends that all high-risk AI- systems be listed in an Annex to the proposed Regulation; recognises that, given the rapid technological change and the required technical expertise, it should be up to the Commission to review that Annex every six months and if necessary, amend it through a delegated act; believes
Amendment 111 #
Motion for a resolution Paragraph 15 15. Recommends that all high-risk AI-systems be exhaustively listed in an Annex to the proposed Regulation; recognises that, given the rapid technological change and the required technical expertise, it should be up to the Commission to review that Annex every six months and, if necessary, amend it through a delegated act; believes that the Commission should closely cooperate with a newly formed standing committee similar to the existing Standing Committee on Precursors or the Technical Committee on Motor Vehicles, which include national experts of the Member States and stakeholders; considers that the balanced membership of the ‘High-Level Expert Group on Artificial Intelligence’ could serve as an example for the formation of the group of stakeholders; is also of the opinion that the European Parliament should appoint consultative experts to advise the newly established standing committee.
Amendment 112 #
Motion for a resolution Paragraph 15 15. Recommends that all high-risk AI-systems be listed in an Annex to the proposed Regulation; recognises that, given the rapid technological change and the required technical expertise, it should be up to the Commission to review that Annex every six months and, if necessary, amend it through a delegated act; believes that the Commission should closely cooperate with a newly formed standing committee similar to the existing Standing Committee on Precursors or the Technical Committee on Motor Vehicles, which include national experts of the Member States and stakeholders; considers that the balanced membership of the ‘High-Level Expert Group on Artificial Intelligence’ could serve as an example for the formation of the group of stakeholders, with the addition of ethics experts and anthropologists, sociologists and mental-health specialists;
Amendment 113 #
Motion for a resolution Paragraph 15 15.
Amendment 114 #
Motion for a resolution Paragraph 15 15.
Amendment 115 #
Motion for a resolution Paragraph 15 15. Recommends that all high-risk AI- systems be listed in an Annex to the proposed Regulation; recognises that, given the rapid technological change and the required technical expertise, it should be up to the Commission to review that Annex every
Amendment 116 #
Motion for a resolution Paragraph 15 15. Recommends that all high-risk AI- systems be listed in an Annex to the proposed Regulation; recognises that,
Amendment 117 #
Motion for a resolution Paragraph 15 a (new) 15a. Notes that the development of technologies based on artificial intelligence is hugely dynamic and continuously accelerating; stresses that, to ensure adequate protection for users, a fast-track approach is needed to analyse new devices and systems using AI-systems that emerge on the European market for potential risks; recommends to the Commission that all procedures in this regard be simplified as much as possible;
Amendment 118 #
Motion for a resolution Paragraph 15 a (new) 15a. Acknowledges that the list thus created could not claim to be exhaustive, particularly with regard to final court rulings which may in the meantime have identified AI systems that do not appear on it;
Amendment 119 #
Motion for a resolution Paragraph 16 16. Believes that the proposed Regulation should set the limitation period and, in line with strict liability systems of the Member States,
Amendment 12 #
Motion for a resolution Recital A a (new) Aa. whereas Artificial Intelligence and algorithmic decision-making create new consumer and societal challenges and further amplify existing challenges (e.g. privacy, behaviour tracking) that deserve particular attention by policy makers, while liability rules play a key role in enabling trust of citizens in Artificial Intelligence technologies and in the business actors involved.
Amendment 120 #
Motion for a resolution Paragraph 16 16. Believes that in line with strict liability systems of the Member States, the
Amendment 121 #
Motion for a resolution Paragraph 16 16. Believes that
Amendment 122 #
Motion for a resolution Paragraph 16 16. Believes that
Amendment 123 #
Motion for a resolution Paragraph 16 16. Believes that in line with strict liability systems of the Member States, the proposed Regulation should only cover harm to the important legally protected rights such as life, health, physical integrity and property, and should set out the
Amendment 124 #
Motion for a resolution Paragraph 17
Amendment 125 #
Motion for a resolution Paragraph 17 17. Determines that all activities, devices or processes driven by AI-systems that cause harm or damage but are not listed in the Annex to the proposed Regulation should remain subject to fault-based liability; believes that the affected person should nevertheless benefit from a presumption of fault of the deployer; considers, in this connection, that the national law governing the amount and extent of the compensation and the deadline for claims against injury caused by the AI system remains applicable;
Amendment 126 #
Motion for a resolution Paragraph 17 17. Determines that all activities, devices or processes driven by AI-systems that cause harm or damage but are not listed in the Annex to the proposed Regulation should remain subject to fault-based liability; believes that the affected person should nevertheless benefit from a presumption of fault of the deployer and that, while that presumption of fault is rebuttable, the burden of proof lies with the deployer;
Amendment 127 #
Motion for a resolution Paragraph 17 17. Determines that
Amendment 128 #
Motion for a resolution Paragraph 17 17. Determines that all activities, devices or processes driven by AI-systems that cause harm or damage but are not listed in the Annex to the proposed Regulation should remain subject to fault- based liability; believes that the affected person should nevertheless benefit from a presumption of fault of the
Amendment 129 #
Motion for a resolution Paragraph 17 a (new) 17a. Stresses that the producer's liability for a system or device based on artificial intelligence solutions should be consistently linked to the impossibility of contractually excluding civil liability in this respect, including in business-to-business and business-to-administration relations;
Amendment 13 #
Motion for a resolution Recital B B. whereas any future-orientated liability framework has to instil confidence in the safety, reliability and consistency of products and services, including digital technology, in order to strike a balance between efficiently protecting potential victims of harm or damage and at the same time, providing enough leeway to make the development of new technologies, products or services possible, especially in the artificial intelligence sector; whereas ultimately, the goal of any liability framework should be to provide consumer protection and legal certainty for all parties,
Amendment 130 #
Motion for a resolution Paragraph 17 a (new) 17a. Requests the Commission to evaluate the need for regulation on contracts to prevent contractual non-liability clauses.
Amendment 131 #
Motion for a resolution Paragraph 18 18. Considers the liability risk to be one of the key factors that defines the success of new technologies, products and services; observes that proper risk coverage is also essential for assuring the public that it can trust the new technology despite the
Amendment 132 #
Motion for a resolution Paragraph 18 18. Considers the liability
Amendment 133 #
Motion for a resolution Paragraph 18 18. Considers th
Amendment 134 #
Motion for a resolution Paragraph 18 a (new) 18a. Is mindful of the fact that uncertainty regarding risks should not make insurance premiums prohibitively high and thus be an obstacle to research and innovation; proposes that a special mechanism between the Commission and the insurance industry should be developed to address the potential uncertainties in the insurance branch;
Amendment 135 #
Motion for a resolution Paragraph 19 19. Is of the opinion that,
Amendment 136 #
Motion for a resolution Paragraph 19 19. Is of the opinion that, based on the significant potential to cause harm and by taking Directive 2009/103/EC7 into account, all
Amendment 137 #
Motion for a resolution Paragraph 19 19. Is of the opinion that, based on the significant potential to cause harm and by taking Directive 2009/103/EC7 into
Amendment 138 #
Motion for a resolution Paragraph 19 19. Is of the opinion that, based on the significant potential to cause harm and by taking Directive 2009/103/EC7 into account, all
Amendment 139 #
Motion for a resolution Paragraph 20
Amendment 14 #
Motion for a resolution Recital B B. whereas any future-orientated civil liability legal framework has to strike a balance between efficiently protecting potential victims of harm or damage and, at the same time, providing enough leeway to make
Amendment 140 #
Motion for a resolution Paragraph 20 20. Believes that a European compensation mechanism, funded with public money, is not the right way to fill potential insurance gaps; considers that
Amendment 141 #
Motion for a resolution Paragraph 20 20. Believes that
Amendment 142 #
Motion for a resolution Paragraph 20 20.
Amendment 143 #
Motion for a resolution Paragraph 20 20. Believes that a European compensation mechanism, funded with public money, is not the right way to fill potential insurance gaps; considers that, notwithstanding the aforementioned mechanism between the Commission and the insurance branch, bearing the good experience with regulatory sandboxes in the fintech sector in mind, it should be up to the insurance market to adjust existing products or create new insurance cover for the numerous sectors and various different technologies, products and services that involve AI-
Amendment 144 #
Motion for a resolution Annex I – part A – paragraph 1 – introductory part This Report is addressing an important aspect of digitisation, which itself is shaped by cross-border activities
Amendment 145 #
Motion for a resolution Annex I – part A – paragraph 1 – indent 1 - A genuine Digital Single Market requires a level of full harmonisation
Amendment 146 #
Motion for a resolution Annex I – part A – paragraph 1 – indent 2 - New legal challenges posed by the de
Amendment 147 #
Motion for a resolution Annex I – part A – paragraph 1 – indent 2 - New legal challenges posed by the deployment of Artificial Intelligence (AI)- systems have to be addressed by establishing maximal legal certainty for the
Amendment 148 #
Motion for a resolution Annex I – part A – paragraph 1 – indent 2 - New legal challenges posed by the deployment of Artificial Intelligence (AI)- systems have to be addressed by establishing maximal legal certainty for the producer, the
Amendment 149 #
Motion for a resolution Annex I – part A – paragraph 1 – indent 3
Amendment 15 #
Motion for a resolution Recital B B. whereas
Amendment 150 #
Motion for a resolution Annex I – part A – paragraph 1 – indent 3 - There should be no over-regulation and more red tape must be prevented, as this would hamper European innovation in AI, especially if the technology, product or service is developed by SMEs or start-
Amendment 151 #
Motion for a resolution Annex I – part A – paragraph 1 – indent 3 - There should be no over-regulation nor legal uncertainty as this would hamper European innovation in AI, especially if the technology, product or service is developed by SMEs or start-
Amendment 152 #
Motion for a resolution Annex I – part A – paragraph 1 – indent 3 a (new) - Civil liability standards for artificial intelligence should seek to strike a balance between the protection of the public on the one hand and business incentives to invest in innovation, especially AI systems, on the other;
Amendment 153 #
Motion for a resolution Annex I – part A – paragraph 1 – indent 4 - Instead of replacing the well- functioning existing liability regimes, we should make
Amendment 154 #
Motion for a resolution Annex I – part A – paragraph 1 – indent 4 - Instead of replacing the well- functioning existing liability regimes, we should make
Amendment 155 #
Motion for a resolution Annex I – part A – paragraph 1 – indent 5 - This Report and the Product Liability Directive are two pillars of a common liability framework for AI-systems and require close coordination
Amendment 156 #
Motion for a resolution Annex I – part A – paragraph 1 – indent 5 - This Report and the Product Liability Directive are two pillars of a common liability framework for AI-systems and require close coordination and alignment between all political actors.
Amendment 157 #
Motion for a resolution Annex I – part A – paragraph 1 – indent 6 - Citizens need to be entitled to the same level of protection and rights, no matter if the harm is caused by an AI- system or not,
Amendment 158 #
Motion for a resolution Annex I – part A – paragraph 1 – indent 6 - Citizens need to be entitled to the same level of protection and rights, no matter if the harm is caused by an AI-system or not, or if it takes place physically or virtually, so that their confidence in this new technology is strengthened.
Amendment 159 #
Motion for a resolution Annex I – part A – paragraph 1 – indent 6 - Citizens need to be entitled to the same level of protection and rights, no matter if the harm is caused by an AI-system or not, or if it takes place physically, materially, immaterially or virtually.
Amendment 16 #
Motion for a resolution Recital B B. whereas any future-orientated liability framework has to
Amendment 160 #
Motion for a resolution Annex I – part B – citation 4 a (new) Having regard to Article 169 of the Treaty on the Functioning of the European Union,
Amendment 161 #
Motion for a resolution Annex I – part B – recital 1 (1) The concept of ‘liability’ plays an important double role in our daily life: on the one hand, it ensures that a person who has suffered harm or damage is entitled to claim compensation from the party proven to be liable for that harm or damage, and on the other hand, it provides the economic incentives for persons to avoid causing harm or damage in the first place.
Amendment 162 #
Motion for a resolution Annex I – part B – recital 1 (1) The concept of ‘liability’ plays an
Amendment 163 #
Motion for a resolution Annex I – part B – recital 1 (1) The concept of ‘liability’ plays an important double role in our daily life: on the one hand, it ensures that a person who has suffered harm or damage is entitled to claim compensation from the party proven to be liable for that harm or damage, and on the other hand, it provides the
Amendment 164 #
Motion for a resolution Annex I – part B – recital 1 (1) The concept of ‘liability’ plays an important double role in our daily life: on the one hand, it ensures that a person who has suffered harm or damage is entitled to claim compensation from the party proven to be liable for that harm or damage, and on the other hand, it provides the economic incentives for persons to avoid causing harm or damage, whether material or non-material, in the first place. Any liability framework should strive to strike a balance between efficiently protecting potential victims of damage and at the same time, providing enough leeway to make the development of new technologies, products or services possible.
Amendment 165 #
Motion for a resolution Annex I – part B – recital 1 (1) The concept of ‘liability’ plays an important double role in our daily life: on the one hand, it ensures that a person who has suffered harm or damage is entitled to claim and get compensation from the party
Amendment 166 #
Motion for a resolution Annex I – part B – recital 1 (2) Especially at the beginning of the life cycle of new products and services, there is a certain degree of risk for the user as well as for third persons that something does not function properly. This process of trial-and-error is at the same time a key enabler of technical progress without which most of our technologies would not exist. So far, the accompanying risks of new products and services have been
Amendment 167 #
Motion for a resolution Annex I – part B – recital 2 (2)
Amendment 168 #
Motion for a resolution Annex I – part B – recital 2 (2) Especially at the beginning of the life cycle of new products and services, after those were pre-tested, there is a certain degree of risk for the user as well as for third persons that something does not function properly. This process of trial-and-error is at the same time a key enabler of technical progress without which most of our technologies would not exist. So far, the accompanying risks of new products and services have been properly mitigated by strong product safety legislation and liability rules.
Amendment 169 #
Motion for a resolution Annex I – part B – recital 3 (3) The rise of Artificial intelligence (AI) and other emerging digital technologies, such as the Internet of Things or distributed ledger technologies however presents a significant
Amendment 17 #
Motion for a resolution Recital B B. whereas any future-orientated liability framework has to strike a balance between efficiently protecting potential victims of harm or damage and at the same time, providing enough leeway to make the development of new technologies, products or services possible; whereas ultimately, the goal of any liability framework should be to provide legal certainty for all parties, whether it be the producer, the deployer, the developer, the affected person or any other third party;
Amendment 170 #
Motion for a resolution Annex I – part B – recital 3 (3) The rise of Artificial intelligence (AI) however presents a significant challenge for the existing liability frameworks. Using AI-systems in our daily life will lead to situations in which their
Amendment 171 #
Motion for a resolution Annex I – part B – recital 3 (3) The rise of Artificial intelligence (AI) however presents a significant challenge for the existing liability frameworks. Using AI-systems in our daily life will lead to situations in which their opacity (“black box” element) and the multitude of actors who intervene in their life-cycle make
Amendment 172 #
Motion for a resolution Annex I – part B – recital 3 (3) The rise of Artificial intelligence (AI) however presents a significant challenge for the existing liability frameworks.
Amendment 173 #
Motion for a resolution Annex I – part B – recital 4 (4) At this point, it is important to point out that public and private stakeholders should endeavour to make the advantages of deploying AI-
Amendment 174 #
Motion for a resolution Annex I – part B – recital 4 (4) At this point, it is important to point out that to ensure that the advantages of deploying AI-
Amendment 175 #
Motion for a resolution Annex I – part B – recital 4 (4) At this point, it is important to point out that the advantages of deploying AI- systems will by far outweigh the disadvantages. They will help to fight climate change more effectively, to improve medical examinations, to better integrate disabled and ageing persons into the society and to provide tailor-made education courses to all types of students. To exploit
Amendment 176 #
Motion for a resolution Annex I – part B – recital 4 (4)
Amendment 177 #
Motion for a resolution Annex I – part B – recital 4 a (new) (4a) An adequate liability regime is also necessary to counterweight the breach of safety rules. However, the envisaged liability needs to take into consideration all interests at stake. A careful examination of the consequences of any new regulatory framework on small and medium-sized enterprises (SMEs) and start-ups is a prerequisite for further legislative steps. The crucial role that they play in the European economy justifies a strictly proportionate approach in order to enable them to develop and innovate. On the other hand, the victims of damages caused by AI-systems need to have a right to redress and full compensation of the damages and the harms that they have suffered.
Amendment 178 #
Motion for a resolution Annex I – part B – recital 4 a (new) (4a) The most vulnerable actors in the context of AI use and application are children; a possible comprehensive EU legal act concerning AI should therefore contain sound rules to ensure the protection of children and the safeguarding of the rights of the child.
Amendment 179 #
Motion for a resolution Annex I – part B – recital 5 (5) Any discussion about required changes in the existing legal framework should start with the clarification that AI- systems have neither legal personality nor human conscience, and that their sole task is to serve humanity. Many AI-systems are also not so different from other technologies, which are sometimes based on even more complex software. Ultimately, the large majority of AI- systems are used for handling trivial tasks without any risks for the society. There are however also AI-systems that are deployed in a critical manner and are based on neuronal networks and deep-learning processes. Their opacity and autonomy could make it very difficult to trace back specific actions to specific human decisions in their design or in their operation. A
Amendment 18 #
Motion for a resolution Recital B B. whereas any future-orientated liability framework has to strike a balance between efficiently and fairly protecting potential victims of harm or damage and at the same time, providing enough leeway to make the development of new technologies, products or services possible; whereas ultimately, the goal of any liability framework should be to provide legal certainty for all parties, whether it be the producer, the
Amendment 180 #
Motion for a resolution Annex I – part B – recital 5 (5) Any discussion about required changes in the existing legal framework should start with the clarification that AI- systems have neither legal personality nor human conscience, and that their sole task is to serve humanity. Many AI-systems are also not so different from other technologies, which are sometimes based on even more complex software, and developed by human intervention. Ultimately, the large majority of AI- systems are used for handling trivial tasks with
Amendment 181 #
Motion for a resolution Annex I – part B – recital 5 (5) A
Amendment 182 #
Motion for a resolution Annex I – part B – recital 5 (5) Any discussion about required changes in the existing legal framework should start with the clarification that AI-systems have neither legal personality nor human conscience, and that their sole task is to serve humanity. Many AI-systems are also not so different from other technologies, which are sometimes based on even more complex software. Ultimately, the large majority of AI-systems are used for handling trivial tasks without any risks for the society. There are however also AI-systems that are deployed in a critical manner and are based on neuronal networks and deep-learning processes. Their opacity and autonomy could make it very difficult to trace back specific actions to specific human decisions in their design or in their operation. A deployer of such an AI-system might for instance argue that the physical or virtual activity, device or process causing the harm or damage was outside of his or her control because it was caused by an autonomous operation of his or her AI-system.
Amendment 183 #
Motion for a resolution Annex I – part B – recital 5 (5) Any discussion about required changes in the existing legal framework should start with the clarification that AI-systems have neither legal personality nor human conscience, and that their sole task is to serve humanity. Many AI-systems are also not so different from other technologies, which are sometimes based on even more complex software.
Amendment 184 #
Motion for a resolution Annex I – part B – recital 5 (5) Any discussion about required changes in the existing legal framework should start with the clarification that AI- systems have neither legal personality nor human conscience, and that their sole task is to serve humanity. Many AI-systems are also not so different from other technologies, which are sometimes based on even more complex software. Ultimately, the large majority of AI- systems are used for handling trivial tasks without any risks for the society. There are however also AI-systems that are developed and deployed in a critical manner and are based on neuronal networks and deep-learning processes. Their opacity and autonomy could make it very difficult to trace back specific actions to specific human decisions in their design or in their operation. A
Amendment 185 #
Motion for a resolution Annex I – part B – recital 6 (6) Nevertheless, it should always be clear that whoever creates, maintains, controls or interferes with the AI-system, should be accountable for the harm or damage that the activity, device or process causes. This follows from general and widely accepted liability concepts of justice according to which the person that creates a risk for the public is accountable if that risk materializes. Consequently, the rise of AI-systems does not pose a need for a complete revision of liability rules throughout the Union. Specific adjustments of the existing legislation and very few new provisions would be sufficient to accommodate the AI-related challenges, with a view to preventing regulatory fragmentation and ensuring the harmonisation of Union civil liability legislation in connection with artificial intelligence.
Amendment 186 #
Motion for a resolution Annex I – part B – recital 6 (6)
Amendment 187 #
Motion for a resolution Annex I – part B – recital 6 (6) Nevertheless, it should always be clear that whoever creates, maintains, controls or interferes with the AI-system, should be accountable for the harm or damage that the activity, device or process causes. This follows from general and widely accepted liability concepts of justice according to which the person that creates or entertains a risk for the public is accountable if that risk materializes, and thus should ex-ante minimise or ex-post compensate that risk. Consequently, the rise of AI-systems does not pose a need for a complete revision of liability rules throughout the Union. Specific adjustments of the existing legislation and very few new provisions would be sufficient to accommodate the AI-related challenges.
Amendment 188 #
Motion for a resolution Annex I – part B – recital 6 (6) Nevertheless, it should always be clear that whoever creates, maintains, controls or interferes with the AI-system, should be accountable for the harm or damage that the activity, device or process causes. This follows from general and widely accepted liability concepts of justice according to which the person that creates a risk for the public is accountable if that risk materializes. Consequently, the rise of AI-systems does not pose a need for a complete revision of liability rules throughout the Union. Specific adjustments of the existing legislation and
Amendment 189 #
Motion for a resolution Annex I – part B – recital 6 (6) Nevertheless, it should always be clear that whoever creates, maintains, controls or interferes with the AI-system, should be accountable for the harm or damage that the activity, device or process causes. This follows from general and widely accepted liability concepts of justice according to which the person that creates a risk for the public is accountable if that risk materializes. Consequently, the
Amendment 19 #
Motion for a resolution Recital C a (new) Ca. whereas all legislative activities in the Union, related to the explicit assignment of responsibility as regards AI-systems, should be preceded by analysis and consultation with the Member States on the compliance of the proposed regulations with economic, legal and social conditions;
Amendment 190 #
Motion for a resolution Annex I – part B – recital 6 (6) Nevertheless, it should always be clear that whoever creates, maintains, controls or interferes with the AI-system, should be accountable for the harm or damage that the activity, device or process causes. This follows from general and widely accepted liability concepts of justice according to which the person that creates a risk for the public is accountable if that risk materializes.
Amendment 191 #
Motion for a resolution Annex I – part B – recital 7 (7) Council Directive 85/374/EEC3 (the Product Liability Directive) has proven to be an effective means of getting compensation for damage triggered by a defective product. Hence, it should also be used with regard to civil liability claims of a party who suffers harm or damage against the producer of a defective AI- system. In line with the better regulation principles of the Union, any necessary legislative adjustments should be discussed during a review of that Directive. The existing fault-based liability law of the Member States also offers in most cases a sufficient level of protection for persons that suffer harm or damages caused by an interfering third person, as that interference regularly constitutes a fault-based action subject to situations where the third-party uses the AI system to cause harm. Consequently, this Regulation should focus on claims against the
Amendment 192 #
Motion for a resolution Annex I – part B – recital 7 (7) Council Directive 85/374/EEC3 (the Product Liability Directive) has proven to be an effective means of getting compensation for damage triggered by a defective product. H
Amendment 193 #
Motion for a resolution Annex I – part B – recital 7 (7) Council Directive 85/374/EEC3 (the Product Liability Directive) has proven to be an effective means of getting compensation for damage triggered by a defective product. Hence, it should also be used with regard to civil liability claims of a party who suffers harm or damage against the producer of a defective AI- system. In line with the better regulation principles of the Union, any necessary legislative adjustments should be discussed during
Amendment 194 #
Motion for a resolution Annex I – part B – recital 7 (7) Council Directive 85/374/EEC3 (the Product Liability Directive) has
Amendment 195 #
Motion for a resolution Annex I – part B – recital 8 (8) The liability of the
Amendment 196 #
Motion for a resolution Annex I – part B – recital 8 (8) The liability of the
Amendment 197 #
Motion for a resolution Annex I – part B – recital 8 (8) The liability of the
Amendment 198 #
Motion for a resolution Annex I – part B – recital 8 a (new) (8a) The more sophisticated and autonomous a system is, the greater the impact that defining and influencing its algorithms, for example through continuous updates, could have compared with merely starting the system. As there is often more than one person who could, in a meaningful way, be considered as ‘operating’ the technology, both the backend provider and the frontend operator can be qualified as the ‘operator’ of the AI-system, depending on the degree of the exercised control.
Amendment 199 #
Motion for a resolution Annex I – part B – recital 8 b (new) (8b) Although, in general, the frontend operator appears as the person who ‘primarily’ decides on the use of the AI-system, the backend provider, who on a continuous basis defines the features of the technology and provides data and essential backend support services, could, for example, also have a high degree of control over the operational risks.
Amendment 2 #
Motion for a resolution Citation 3 a (new) - having regard to Directive (EU) 2019/770 of the European Parliament and of the Council of 20 May 2019 on certain aspects concerning contracts for the supply of digital content and digital services,
Amendment 20 #
Motion for a resolution Recital C b (new) Cb. whereas the issue of the civil liability regime for artificial intelligence should be the subject of a broad public debate, taking into account ethical, legal, economic and social aspects, to avoid misunderstandings and unjustified fears that this technology may cause among citizens;
Amendment 200 #
Motion for a resolution Annex I – part B – recital 8 c (new) (8c) When there is more than one operator, the strict liability should lie with the one who exercises the highest degree of control over the risks posed by the harmful operation.
Amendment 201 #
Motion for a resolution Annex I – part B – recital 9 (9) If a user, namely the person that utilises the AI-system, is involved in the harmful event, he or she should
Amendment 202 #
Motion for a resolution Annex I – part B – recital 9 (9) If a user, namely the person that utilises the AI-system, is involved in the harmful event, he or she should only be liable under this Regulation if the user also qualifies as a
Amendment 203 #
Motion for a resolution Annex I – part B – recital 9 (9) If a user, namely the person that utilises the AI-system, is involved in the harmful event, he or she should only be liable under this Regulation if the user also qualifies as a
Amendment 204 #
Motion for a resolution Annex I – part B – recital 10 (10) This Regulation should cover in principle all AI-systems, no matter where they are operating and whether the operations take place physically or virtually.
Amendment 205 #
Motion for a resolution Annex I – part B – recital 10 (10) This Regulation should cover in principle all AI-systems, no matter where
Amendment 206 #
Motion for a resolution Annex I – part B – recital 10 (10) This Regulation should cover in principle all AI-systems, no matter where they are operating and whether the operations take place physically or virtually. The majority of liability claims under this Regulation should however address cases of third party liability, where an AI-system
Amendment 207 #
Motion for a resolution Annex I – part B – recital 10 (10) This Regulation should cover in principle all AI-systems, no matter where they are operating and whether the operations take place physically or virtually. The majority of liability claims under this Regulation should however address cases of third party liability, where an AI-system operates in a public space and exposes many third persons to a risk. In that situation, the affected persons will often not be aware of the operating AI- system and will not have any contractual or legal relationship towards the
Amendment 208 #
Motion for a resolution Annex I – part B – recital 10 a (new) (10a) In order to protect consumers, ensure due transparency, analyse possible choices made by AI-systems and, lastly, determine the responsibility of the deployer, this Regulation should introduce a requirement to equip at least those AI-systems considered to be of high risk with a recorder similar to mandatory flight recorders for aeroplanes. Those recorders should be fully accessible to the public authority, allowing full verification of liability for damage or injury caused by an AI-system. Following an accident, serious incident or event identified by the investigating authority, the deployer of an AI-system should store the original data from those recorders for a period of 60 days or until otherwise agreed by the investigating authority. The deployer of an AI-system should carry out regular checks and operational assessments on the proper functioning of those recorders.
Amendment 209 #
Motion for a resolution Annex I – part B – recital 11 (11)
Amendment 21 #
Motion for a resolution Recital D D. whereas the legal system of a Member State can exclude liability for certain actors or can make it stricter for certain activities; whereas strict liability
Amendment 210 #
Motion for a resolution Annex I – part B – recital 11 (11) The type of AI-system the
Amendment 211 #
Motion for a resolution Annex I – part B – recital 11 (11) The type of AI-system the
Amendment 212 #
Motion for a resolution Annex I – part B – recital 12 (12)
Amendment 213 #
Motion for a resolution Annex I – part B – recital 12 (12) A
Amendment 214 #
Motion for a resolution Annex I – part B – recital 12 (12) All categories of AI-systems with a high risk
Amendment 215 #
Motion for a resolution Annex I – part B – recital 12 (12) All AI-systems with a high risk should be listed, in a way which does not claim to be exhaustive, in an Annex to this Regulation. Given the rapid technical and market developments as well as the technical expertise which is required for an adequate review of AI-systems, the power to adopt delegated acts in accordance with Article 290 of the Treaty on the Functioning of the European Union should be delegated to the Commission to amend this Regulation in respect of the types of AI-systems that pose a high risk and the critical sectors where they are used. Based on the definitions and provisions laid down in this Regulation, the Commission should review the Annex every six months and, if necessary, amend it by means of delegated acts.
Amendment 216 #
Motion for a resolution Annex I – part B – recital 12 (12) All AI-systems with a high risk should be exhaustively listed in an Annex to this Regulation. Given the rapid technical and market developments as well as the technical expertise which is required for an adequate review of AI-systems, the power to adopt delegated acts in accordance with Article 290 of the Treaty on the Functioning of the European Union should be delegated to the Commission to amend this Regulation in respect of the types of AI-systems that pose a high risk and the critical sectors where they are used. Based on the definitions and provisions laid down in this Regulation, the Commission should review the Annex every six months and, if necessary, amend it by means of delegated acts. To give businesses enough planning and investment security, changes to the critical sectors should only be made every 12 months. Developers are called upon to notify the Commission if they are currently working on a new technology, product or service that falls under one of the existing critical sectors provided for in the Annex and which later could qualify for a high
Amendment 217 #
Motion for a resolution Annex I – part B – recital 12 (12) All AI-systems with a high risk should be listed in an Annex to this Regulation. Given the rapid technical and market developments as well as the technical expertise which is required for an adequate review of AI-systems, the power to adopt delegated acts in accordance with Article 290 of the Treaty on the Functioning of the European Union should be delegated to the Commission to amend this Regulation in respect of the types of AI-systems that pose a high risk and the critical sectors where they are used. Based on the definitions and provisions laid down in this Regulation, the Commission should review the Annex every six months and, if necessary, amend it by means of delegated acts. To give businesses enough planning and investment security, changes to the critical sectors should only be made every
Amendment 218 #
Motion for a resolution Annex I – part B – recital 13
Amendment 219 #
Motion for a resolution Annex I – part B – recital 13 (13) It is of particular importance that the Commission carry out
Amendment 22 #
Motion for a resolution Recital D D. whereas the legal system of a Member State can
Amendment 220 #
Motion for a resolution Annex I – part B – recital 13 (13) It is of particular importance that the Commission carry out appropriate consultations during its preparatory work, including at expert level, and that those consultations be conducted in accordance
Amendment 221 #
Motion for a resolution Annex I – part B – recital 14 (14) In line with strict liability systems of the Member States, this Regulation should cover only harm or damage to life, health, physical
Amendment 222 #
Motion for a resolution Annex I – part B – recital 14 (14) In line with strict liability systems of the Member States, this Regulation should cover
Amendment 223 #
Motion for a resolution Annex I – part B – recital 14 (14)
Amendment 224 #
Motion for a resolution Annex I – part B – recital 14 (14)
Amendment 225 #
Motion for a resolution Annex I – part B – recital 14 (14) In line with strict liability systems of the Member States, this Regulation should cover only harm or damage to life, health, physical integrity and property. For the same reason, it should determine the
Amendment 226 #
Motion for a resolution Annex I – part B – recital 15
Amendment 227 #
Motion for a resolution Annex I – part B – recital 15 (15) All physical or virtual activities, devices or processes driven by AI-systems that are not
Amendment 228 #
Motion for a resolution Annex I – part B – recital 15 (15) All physical or virtual activities, devices or processes driven by AI-systems that are not listed as a high-risk AI-system in the Annex to this Regulation should
Amendment 229 #
Motion for a resolution Annex I – part B – recital 15 (15) All physical or virtual activities, devices or processes driven by AI-systems that are not listed as a high-risk AI-system in the Annex to this Regulation should remain subject to fault-based liability. The national laws of the Member States, including any relevant jurisprudence, with regard to the amount and extent of compensation as well as the limitation period should continue to apply. A person who suffers harm or damage caused by an AI-system should however benefit from the presumption of fault of the
Amendment 23 #
Motion for a resolution Recital D a (new) Da. whereas the notion of Artificial Intelligence(AI)-systems comprises a large group of different technologies, including simple statistics, machine learning and deep learning;
Amendment 230 #
Motion for a resolution Annex I – part B – recital 16 (16) The diligence which can be expected from a
Amendment 231 #
Motion for a resolution Annex I – part B – recital 16 (16) The diligence which can be expected from a
Amendment 232 #
Motion for a resolution Annex I – part B – recital 16 (16) The diligence which can be expected from a
Amendment 233 #
Motion for a resolution Annex I – part B – recital 17 (17) In order to enable the
Amendment 234 #
Motion for a resolution Annex I – part B – recital 17 (17) In order to enable the
Amendment 235 #
Motion for a resolution Annex I – part B – recital 17 (17) In order to enable the
Amendment 236 #
Motion for a resolution Annex I – part B – recital 18 (18) The legislator has to consider the liability risks connected to AI-systems during their whole lifecycle, from development to usage to end of life. The inclusion of AI-systems in a product or service represents a financial risk for businesses and consequently will have a heavy impact on the ability and options for small and medium-sized enterprises (SME) as well as for start-ups in relation to insuring and financing their projects based on new technologies.
Amendment 237 #
Motion for a resolution Annex I – part B – recital 18 (18) The legislator has to consider the liability risks connected to AI-systems during their whole lifecycle, from
Amendment 238 #
Motion for a resolution Annex I – part B – recital 18 (18) The legislator has to consider the liability risks connected to AI-systems during their whole lifecycle, from development to usage to end of life. The inclusion of AI-systems in a product or service represents a financial risk for businesses and consequently will have a
Amendment 239 #
Motion for a resolution Annex I – part B – recital 18 (18) The legislator has to consider the liability risks connected to AI-systems during their whole lifecycle, from development to usage to end of life. The inclusion of AI-systems in a product or service represents a financial risk for businesses and consequently will have a heavy impact on the ability and options for small and medium-sized enterprises (SME) as well as for start-ups in relation to insuring and financing their projects based on new technologies. The purpose of liability is, therefore, not only to safeguard
Amendment 24 #
Motion for a resolution Recital E E. whereas Artificial Intelligence (AI)- systems and other emerging digital technologies, such as the Internet of Things or distributed ledger technologies present significant legal challenges for the existing liability framework and could lead to situations, in which their opacity, co
Amendment 240 #
Motion for a resolution Annex I – part B – recital 19
Amendment 241 #
Motion for a resolution Annex I – part B – recital 19
Amendment 242 #
Motion for a resolution Annex I – part B – recital 19 (19) Insurance can help to ensure that victims can receive effective compensation as well as to pool the risks of all insured persons. One of the factors on which insurance companies base their offer of insurance products and services is risk assessment based on access to sufficient historical claim data. A lack of access to, or an insufficient quantity of high quality data could be a reason why creating insurance products for new and emerging technologies is difficult at the beginning. However, greater access to and optimising the use of data generated by new technologies, coupled with an obligation to provide well-documented information, will enhance insurers’ ability to model emerging risk and to foster the development of more innovative cover
Amendment 243 #
Motion for a resolution Annex I – part B – recital 20
Amendment 244 #
Motion for a resolution Annex I – part B – recital 20 (20) Despite missing historical claim data, there are already insurance products that are developed area-by-area and cover- by-cover as technology develops. Many insurers specialise in certain market
Amendment 245 #
Motion for a resolution Annex I – part B – recital 20 (20) Despite missing historical claim data, there are already insurance products that are developed area-by-area and cover- by-cover as technology develops. Many insurers specialise in certain market segments (e.g. SMEs) or in providing cover for certain product types (e.g. electrical goods), which means that there will usually be an insurance product available for the insured. If a new type of
Amendment 246 #
Motion for a resolution Annex I – part B – recital 20 (20) Despite missing historical claim data, there are already insurance products that are developed area-by-area and cover-by-cover as technology develops. Many insurers specialise in certain market segments (e.g. SMEs) or in providing cover for certain product types (e.g. electrical goods), which means that there will usually be an insurance product available for the insured. If a new type of insurance is needed, the insurance market will develop and offer a fitting solution and thus will close the insurance gap.
Amendment 247 #
Motion for a resolution Annex I – part B – recital 20 (20)
Amendment 248 #
Motion for a resolution Annex I – part B – recital 20 (20) Despite missing historical claim data for reasons such as updating algorithms or anonymising data, there are already insurance products
Amendment 249 #
Motion for a resolution Annex I – part B – recital 20 (20) Despite missing historical claim data, there are already insurance products that are developed area-by-area and cover- by-cover as technology develops. Many insurers specialise in certain market segments (e.g. SMEs) or in providing cover for certain product types (e.g. electrical goods), which means that there will usually be an insurance product available for the insured. If a new type of insurance is needed, the insurance market will develop and offer a fitting solution and thus, will close the insurance gap. In exceptional cases, such as an event incurring collective damages, in which the compensation significantly exceeds the maximum amounts set out in this
Amendment 25 #
Motion for a resolution Recital E E. whereas Artificial Intelligence (AI)- systems present significant legal challenges for the existing liability framework and
Amendment 250 #
Motion for a resolution Annex I – part B – recital 21 (21) It is of utmost importance that any future changes to this text go hand in hand with a necessary review of the PLD. The introduction of a new liability regime for the
Amendment 251 #
Motion for a resolution Annex I – part B – recital 21 (21) It is of utmost importance that any future changes to this text go hand in hand with a necessary review of the PLD, in order to review in a comprehensive and consistent manner the rights and obligations of all concerned parties throughout the liability chain. The introduction of a new liability regime for the
Amendment 252 #
Motion for a resolution Annex I – part B – recital 21 (21) It is of utmost importance that any future changes to this text go hand in hand with
Amendment 253 #
Motion for a resolution Annex I – part B – recital 21 a (new) (21a) In the liability stage, a risk-based approach to AI is not appropriate, since the damage has occurred and the product has proven to be a risk product. The so-called low-risk applications could equally cause severe harm or damage. Thus, the liability model for products containing AI applications should be approached in a two-step process. Firstly, providing a fault-based liability of the frontend operator against which the affected person should have the right to bring the claim for damages. The frontend operator should be able to prove his lack of fault by complying with the duty of care consisting in the regular installation of all available updates. If this obligation is fulfilled, due diligence should be presumed. Secondly, in the event where no fault of the frontend operator can be established, the producer or the backend operator should be held strictly liable. Such a two-step process is essential in order to ensure that victims are effectively compensated for damages caused by AI-driven systems.
Amendment 254 #
Motion for a resolution Annex I – part B – recital 22 (22) Since the objectives of this Regulation, namely to create a future- orientated and unified approach at Union level, which sets common European standards for our citizens and businesses and to ensure the consistency of rights and legal certainty throughout the Union, in order to avoid fragmentation of the Digital Single Market, which would hamper the goal of maintaining digital sovereignty
Amendment 255 #
Motion for a resolution Annex I – part B – recital 22 (22) Since the objectives of this Regulation, namely to create a
Amendment 256 #
Motion for a resolution Annex I – part B – Article 1 – paragraph 1 This Regulation sets out rules for the civil liability claims of natural and legal persons against the
Amendment 257 #
Motion for a resolution Annex I – part B – Article 1 – paragraph 1 This Regulation sets out rules for the civil liability claims of natural and legal persons against the
Amendment 258 #
Motion for a resolution Annex I – part B – Article 1 – paragraph 1 This Regulation sets out rules for the civil liability claims of natural and legal persons against the
Amendment 259 #
Motion for a resolution Annex I – part B – Article 2 – paragraph 1 1. This Regulation applies on the territory of the Union where a physical or virtual activity, device or process driven by an AI-system or autonomous decision- making (ADM) system has caused harm or damage to the life, health, physical integrity
Amendment 26 #
Motion for a resolution Recital E E. whereas Artificial Intelligence (AI)- systems present significant legal challenges for the existing liability framework and could lead to situations, in which their opacity could make it extremely
Amendment 260 #
Motion for a resolution Annex I – part B – Article 2 – paragraph 1 1. This Regulation applies on the territory of the Union where a physical or virtual activity, device or process driven by an AI-system has caused harm or damage to the life, health, physical integrity
Amendment 261 #
Motion for a resolution Annex I – part B – Article 2 – paragraph 1 1. This Regulation applies on the territory of the Union where a
Amendment 262 #
Motion for a resolution Annex I – part B – Article 2 – paragraph 1 1. This Regulation applies on the territory of the Union where a physical or virtual activity, device or process driven by an AI-system has caused harm or damage to the life, health, physical, mental or moral integrity or the property of a natural or legal person.
Amendment 263 #
Motion for a resolution Annex I – part B – Article 2 – paragraph 2 2. Any agreement between a
Amendment 264 #
Motion for a resolution Annex I – part B – Article 2 – paragraph 2 2. Any agreement between a
Amendment 265 #
Motion for a resolution Annex I – part B – Article 2 – paragraph 2 2. Any agreement between a
Amendment 266 #
Motion for a resolution Annex I – part B – Article 2 – paragraph 2 2. Any agreement between a deployer of an AI-system and a natural or legal person who suffers harm or damage because of the AI-system, which circumvents or limits the rights and obligations set out in this Regulation,
Amendment 267 #
Motion for a resolution Annex I – part B – Article 2 – paragraph 3 3. This Regulation is without prejudice to any additional liability claims resulting from contractual relationships, as well as from regulations on product liability, consumer protection, anti- discrimination, labour and environmental protection between the
Amendment 268 #
Motion for a resolution Annex I – part B – Article 2 – paragraph 3 3. This Regulation is without prejudice to any additional liability claims resulting from contractual relationships between the deployer and the natural or legal person who suffered harm or damage because of the AI-system and that may be brought against the deployer according to Union or national law.
Amendment 269 #
Motion for a resolution Annex I – part B – Article 2 – paragraph 3 3. This Regulation is without prejudice to any additional liability claims resulting from contractual relationships between the
Amendment 27 #
Motion for a resolution Recital E E. whereas
Amendment 270 #
Motion for a resolution Annex I – part B – Article 3 – point a (a) ‘
Amendment 271 #
Motion for a resolution Annex I – part B – Article 3 – point a (a) ‘AI-system’ means a system that displays
Amendment 272 #
Motion for a resolution Annex I – part B – Article 3 – point a (a) ‘AI-system’ means a system that displays intelligent behaviour by analysing
Amendment 273 #
Motion for a resolution Annex I – part B – Article 3 – point a a (new) (aa) ‘automated decision-making (ADM), decision-support or decision-informing system’ means the procedure in which decisions are initially, partly or completely, delegated to an operator by way of using a software or a service, who then in turn uses automatically executed decision-making models to perform an action;
Amendment 274 #
Motion for a resolution Annex I – part B – Article 3 – point b (b) ‘autonomous’ means an AI-system
Amendment 275 #
Motion for a resolution Annex I – part B – Article 3 – point c
Amendment 276 #
Motion for a resolution Annex I – part B – Article 3 – point c (c) ‘high risk’ means a significant potential in an autonomously operating AI- system to cause harm or damage to one or more persons in a manner that is
Amendment 277 #
Motion for a resolution Annex I – part B – Article 3 – point c (c) ‘high risk’ means a significant potential in an autonomously operating AI- system to cause harm or damage to one or more persons in a manner that is random and
Amendment 278 #
Motion for a resolution Annex I – part B – Article 3 – point c (c) ‘high risk’ means a significant potential in an autonomously operating AI-system to cause significant harm or damage to one or more persons in a manner that is random and impossible to predict in advance; the significance of the potential depends on the interplay between the severity of possible harm or damage, the likelihood that the risk materializes and the manner in which the AI-system is being used;
Amendment 279 #
Motion for a resolution Annex I – part B – Article 3 – point d (d) ‘
Amendment 28 #
Motion for a resolution Recital E a (new) Ea. whereas the diversity of AI applications and the diverse range of risks the technology poses complicate finding a single solution suitable for the entire spectrum of risks; whereas, in this respect, an approach should be adopted in which experiments, pilots and regulatory sandboxes are used to come up with proportional and evidence-based solutions that address specific situations and sectors where needed;
Amendment 280 #
Motion for a resolution Annex I – part B – Article 3 – point d (d) ‘deployer’ means
Amendment 281 #
Motion for a resolution Annex I – part B – Article 3 – point d (d) ‘deployer’ means the
Amendment 282 #
Motion for a resolution Annex I – part B – Article 3 – point d (d) ‘
Amendment 283 #
Motion for a resolution Annex I – part B – Article 3 – point d (d) ‘
Amendment 284 #
Motion for a resolution Annex I – part B – Article 3 – point d a (new) (da) 'control' means influence on the use and operation of the AI-system from start to finish and thus the extent to which it exposes third parties to its potential risks;
Amendment 285 #
Motion for a resolution Annex I – part B – Article 3 – point d a (new) (da) ‘backend operator’ means the person continuously defining the features of the relevant technology and providing essential and ongoing backend support;
Amendment 286 #
Motion for a resolution Annex I – part B – Article 3 – point e (e) ‘affected person’ means any person who suffers harm or damage caused by a physical or virtual activity, device or process driven by an AI-system, and who is not its
Amendment 287 #
Motion for a resolution Annex I – part B – Article 3 – point e (e) ‘affected person’ means any person who suffers harm or damage caused by a physical or virtual activity, device or process driven by an AI-system, and who is not its
Amendment 288 #
Motion for a resolution Annex I – part B – Article 3 – point e (e)
Amendment 289 #
Motion for a resolution Annex I – part B – Article 3 – point f (f)
Amendment 29 #
Motion for a resolution Recital E a (new) Ea. whereas the lack of clear provisions on risk limitation may create legal uncertainty for enterprises offering AI-systems on the EU market and pose a danger to the persons using them;
Amendment 290 #
Motion for a resolution Annex I – part B – Article 3 – point f (f) ‘harm or damage’ means an adverse impact affecting the life, health, physical integrity
Amendment 291 #
Motion for a resolution Annex I – part B – Article 3 – point f (f) ‘harm or damage’ means an
Amendment 292 #
Motion for a resolution Annex I – part B – Article 3 – point f (f) ‘harm or damage’ means an adverse impact affecting the life, health, physical, mental or moral integrity or property of a natural or legal person
Amendment 293 #
Motion for a resolution Annex I – part B – Article 3 – point g
Amendment 294 #
Motion for a resolution Annex I – part B – Article 3 – point g (g) ‘producer’ means the
Amendment 295 #
Motion for a resolution Annex I – part B – Article 3 – point g (g) ‘producer’ means the developer, servicer or the backend operator of an AI-system, or the producer as defined in Article 3 of Council Directive 85/374/EEC7. _________________ 7 Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products, OJ L 210, 7.8.1985, p. 29.
Amendment 296 #
Motion for a resolution Annex I – part B – Article 3 – point g (g) ‘producer’ means the developer
Amendment 297 #
Motion for a resolution Annex I – part B – Article 3 – point g a (new) (ga) ‘force majeure’ means, in accordance with national rules, exceptional and unforeseeable circumstances beyond the control of the deployer, the consequences of which could not have been avoided even if all due care had been exercised.
Amendment 298 #
Motion for a resolution Annex I – part B – chapter 2 – title
Amendment 299 #
Motion for a resolution Annex I – part B – Article 4 – title
Amendment 3 #
Motion for a resolution Citation 4 a (new) - having regard to the Interinstitutional Agreement of 13 April 2016 on Better Law-Making and the Better Regulations Guidelines,
Amendment 30 #
Motion for a resolution Recital F F. whereas this difficulty
Amendment 300 #
Motion for a resolution Annex I – part B – Article 4 – paragraph 1 1. The
Amendment 301 #
Motion for a resolution Annex I – part B – Article 4 – paragraph 1 1. The deployer of a high-risk AI-system shall be strictly liable for any harm or damage occasioned to the life, health, physical integrity or possessions of natural or legal persons that was caused by a physical or virtual activity, device or process driven by that AI-system.
Amendment 302 #
Motion for a resolution Annex I – part B – Article 4 – paragraph 1 1. The
Amendment 303 #
Motion for a resolution Annex I – part B – Article 4 – paragraph 1 1. The
Amendment 304 #
Motion for a resolution Annex I – part B – Article 4 – paragraph 2 – introductory part 2. The categories of high-risk AI-systems, as well as the crit
Amendment 305 #
Motion for a resolution Annex I – part B – Article 4 – paragraph 2 – introductory part 2.
Amendment 306 #
Motion for a resolution Annex I – part B – Article 4 – paragraph 2 – introductory part 2. The Commission shall be tasked with drawing up a list of high-risk AI-systems as well as the critical sectors where they are used
Amendment 307 #
Motion for a resolution Annex I – part B – Article 4 – paragraph 2 – introductory part 2. The high-risk AI-systems as well as the critical sectors where they are used shall be listed in the Annex to this Regulation. The Commission is empowered to adopt delegated acts in accordance with Article 13, to amend the
Amendment 308 #
Motion for a resolution Annex I – part B – Article 4 – paragraph 2 – point a
Amendment 309 #
Motion for a resolution Annex I – part B – Article 4 – paragraph 2 – point a
Amendment 31 #
Motion for a resolution Recital F a (new) Fa. whereas legal certainty is an essential condition for the dynamic development of AI-based technology and its practical application in everyday life; whereas the user needs to be sure that potential damage caused by systems using the AI is covered by adequate insurance and that there is a defined legal route for redress;
Amendment 310 #
Motion for a resolution Annex I – part B – Article 4 – paragraph 2 – point a
Amendment 311 #
Motion for a resolution Annex I – part B – Article 4 – paragraph 2 – point b
Amendment 312 #
Motion for a resolution Annex I – part B – Article 4 – paragraph 2 – point b
Amendment 313 #
Motion for a resolution Annex I – part B – Article 4 – paragraph 2 – point b
Amendment 314 #
Motion for a resolution Annex I – part B – Article 4 – paragraph 2 – point c
Amendment 315 #
Motion for a resolution Annex I – part B – Article 4 – paragraph 2 – point c
Amendment 316 #
Motion for a resolution Annex I – part B – Article 4 – paragraph 2 – point c
Amendment 317 #
Motion for a resolution Annex I – part B – Article 4 – paragraph 2 – subparagraph 2
Amendment 318 #
Motion for a resolution Annex I – part B – Article 4 – paragraph 2 – subparagraph 2
Amendment 319 #
Motion for a resolution Annex I – part B – Article 4 – paragraph 2 – subparagraph 2
Amendment 32 #
Motion for a resolution Recital G G. whereas sound ethical standards for AI-systems combined with solid and fair compensation procedures can help to address those legal challenges and eliminate the risk of users being less willing to accept emerging technology; whereas fair liability procedures means that each person who suffers harm caused by AI-systems or whose property damage is caused by AI-
Amendment 320 #
Motion for a resolution Annex I – part B – Article 4 – paragraph 3
Amendment 321 #
Motion for a resolution Annex I – part B – Article 4 – paragraph 3 3. The
Amendment 322 #
Motion for a resolution Annex I – part B – Article 4 – paragraph 3 3. The
Amendment 323 #
Motion for a resolution Annex I – part B – Article 4 – paragraph 3 a (new) 3a. When the operator is a frontend operator, he or she shall be able to prove his or her fault. He or she shall not be held liable if the harm or damage was caused by force majeure.
Amendment 324 #
Motion for a resolution Annex I – part B – Article 4 – paragraph 3 a (new) 3a. The burden of proof in respect of harm or damage caused by a high-risk AI activity shall lie with the deployer and not with the person concerned.
Amendment 325 #
Motion for a resolution Annex I – part B – Article 4 – paragraph 3 b (new) 3b. The deployer of a high-risk AI-system must ensure that the AI-systems are equipped with a data recorder that uses a digital method of recording and storing data and for which a method of readily retrieving that data from the storage medium is available. The deployer shall make available any data recorder recording that has been conserved, if so decided by the competent authority. The deployer must ensure that the data recorders operate efficiently at all times. Following an accident or an incident occasioning harm or damage to a person concerned, the deployer of a high-risk AI-system must conserve the relevant original data for a period of 60 days unless otherwise instructed by the investigating authority.
Amendment 326 #
Motion for a resolution Annex I – part B – Article 4 – paragraph 4
Amendment 327 #
Motion for a resolution Annex I – part B – Article 4 – paragraph 4 4. The deployer of a high-risk AI-system shall ensure they have liability insurance cover
Amendment 328 #
Motion for a resolution Annex I – part B – Article 4 – paragraph 4 4. The
Amendment 329 #
Motion for a resolution Annex I – part B – Article 4 – paragraph 4 4. The
Amendment 33 #
Motion for a resolution Recital G G. whereas
Amendment 330 #
Motion for a resolution Annex I – part B – Article 4 – paragraph 4 a (new) 4a. The liability insurance system shall be supplemented by a fund in order to ensure that damages can be compensated for in cases where no insurance cover exists.
Amendment 331 #
Motion for a resolution Annex I – part B – Article 4 – paragraph 5
Amendment 333 #
Motion for a resolution Annex I – part B – Article 4 – paragraph 5 5. This Regulation shall prevail over national liability regimes in the event of conflicting strict liability classification of AI-systems, insofar as this Regulation provides for the more favourable rules to the affected persons and to consumer rights.
Amendment 333 #
Motion for a resolution Annex I – part B – Article 4 – paragraph 5 a (new) 5a. The rules provided for in Article 4 shall not be overridden by contract.
Amendment 334 #
Motion for a resolution Annex I – part B – Article 5 – title Amount and extent of compensation
Amendment 335 #
Motion for a resolution Annex I – part B – Article 5 – title
Amendment 336 #
Motion for a resolution Annex I – part B – Article 5 – paragraph 1 – introductory part 1. A
Amendment 337 #
Motion for a resolution Annex I – part B – Article 5 – paragraph 1 – introductory part 1.
Amendment 338 #
Motion for a resolution Annex I – part B – Article 5 – paragraph 1 – introductory part 1. A
Amendment 339 #
Motion for a resolution Annex I – part B – Article 5 – paragraph 1 – introductory part 1. A
Amendment 34 #
Motion for a resolution Recital G G. whereas sound ethical standards for AI-systems combined with solid and fair compensation procedures can help to address those legal challenges; whereas fair liability procedures means that each person who suffers
Amendment 340 #
Motion for a resolution Annex I – part B – Article 5 – paragraph 1 – point a
Amendment 341 #
Motion for a resolution Annex I – part B – Article 5 – paragraph 1 – point a
Amendment 342 #
Motion for a resolution Annex I – part B – Article 5 – paragraph 1 – point a
Amendment 343 #
Motion for a resolution Annex I – part B – Article 5 – paragraph 1 – point a (a) up to a maximum total amount of EUR ten million in the event of death or of harm caused to the health or physical or mental integrity of one or several persons as the result of the same operation of the same high-risk AI-system;
Amendment 344 #
Motion for a resolution Annex I – part B – Article 5 – paragraph 1 – point a (a) up to a
Amendment 345 #
Motion for a resolution Annex I – part B – Article 5 – paragraph 1 – point a (a) up to a maximum total amount of EUR t
Amendment 346 #
Motion for a resolution Annex I – part B – Article 5 – paragraph 1 – point b
Amendment 347 #
Motion for a resolution Annex I – part B – Article 5 – paragraph 1 – point b
Amendment 348 #
Motion for a resolution Annex I – part B – Article 5 – paragraph 1 – point b
Amendment 349 #
Motion for a resolution Annex I – part B – Article 5 – paragraph 1 – point b (b) up to a maximum total amount of EUR two million in the event of damage – or moral prejudice – caused to property, including when several items of property of one or several persons were damaged as a result of the same operation of the same high-risk AI-system; where the affected person also holds a contractual liability claim against the deployer, no compensation shall be paid under this Regulation if the total amount of the damage to property is of a value that falls below EUR 500.
Amendment 35 #
Motion for a resolution Recital G G. whereas sound ethical standards for AI-systems combined with solid and fair compensation procedures can help to address those legal challenges; whereas fair
Amendment 350 #
Motion for a resolution Annex I – part B – Article 5 – paragraph 1 – point b (b) up to a maximum total amount of EUR
Amendment 351 #
Motion for a resolution Annex I – part B – Article 5 – paragraph 1 – point b (b) up to a
Amendment 352 #
Motion for a resolution Annex I – part B – Article 5 – paragraph 1 – point 2
Amendment 353 #
Motion for a resolution Annex I – part B – Article 5 – paragraph 1 – point 2
Amendment 354 #
Motion for a resolution Annex I – part B – Article 5 – paragraph 1 – point 2
Amendment 355 #
Motion for a resolution Annex I – part B – Article 5 – paragraph 1 – point 2
Amendment 356 #
Motion for a resolution Annex I – part B – Article 5 – paragraph 1 – point 2
Amendment 357 #
Motion for a resolution Annex I – part B – Article 6
Amendment 358 #
Motion for a resolution Annex I – part B – Article 6
Amendment 359 #
Motion for a resolution Annex I – part B – Article 6 – paragraph 1 – introductory part 1. Within the amount set out in Article 5(1)(a), compensation to be paid by the
Amendment 36 #
Motion for a resolution Recital G a (new) Ga. whereas the future regulatory framework needs to take into consideration all the interests at stake; whereas careful examination of the consequences of any new regulatory framework on all actors in an impact assessment should be a prerequisite for further legislative steps; whereas the crucial role of SMEs and start-ups especially in the European economy justifies a strictly proportionate approach to enable them to develop and innovate;
Amendment 360 #
Motion for a resolution Annex I – part B – Article 6 – paragraph 1 – introductory part 1. Within the amount set out in Article 5(1)(a), compensation to be paid by
Amendment 361 #
Motion for a resolution Annex I – part B – Article 6 – paragraph 1 – paragraph 1 If at the time of the incident that caused the harm leading to his or her death, the affected person was in a relationship with a
Amendment 362 #
Motion for a resolution Annex I – part B – Article 6 – paragraph 1 – paragraph 1 If at the time of the incident that caused the harm leading to his or her death, the affected person was in a relationship with a third party and had a legal obligation to support that third party, the
Amendment 363 #
Motion for a resolution Annex I – part B – Article 6 – paragraph 2 2. Within the amount set out in Article 5(1)(b), compensation to be paid by the deployer held liable in the event of harm to the health or the physical or mental integrity of the affected person shall include the reimbursement of the costs of the related medical treatment as well as the payment for any pecuniary prejudice sustained by the affected person, as a result of the temporary suspension, reduction or permanent cessation of his or her earning capacity or the consequent, medically certified increase in his or her needs.
Amendment 364 #
Motion for a resolution Annex I – part B – Article 6 – paragraph 2 2. Within the amount set out in Article 5(1)(b), compensation to be paid by the
Amendment 365 #
Motion for a resolution Annex I – part B – Article 6 – paragraph 2 2. Within the amount set out in Article 5(1)(b), compensation to be paid by the
Amendment 366 #
Motion for a resolution Annex I – part B – Article 7 – paragraph 1 1. Civil liability claims, brought in accordance with Article 4(1),
Amendment 367 #
Motion for a resolution Annex I – part B – Article 7 – paragraph 1 1. Civil liability claims, brought in accordance with Article 4(1), concerning harm to life, health or physical integrity, shall be subject to a
Amendment 368 #
Motion for a resolution Annex I – part B – Article 7 – paragraph 2
Amendment 369 #
Motion for a resolution Annex I – part B – Article 7 – paragraph 2 – introductory part 2. Civil liability claims, brought in accordance with Article 4(1), concerning damage to property or significant immaterial damage shall be subject to a special limitation period of:
Amendment 37 #
Motion for a resolution Recital G a (new) Ga. whereas civil liability standards for artificial intelligence must seek to strike a balance between the protection of the public and business incentives to invest in innovation, especially AI systems;
Amendment 370 #
Motion for a resolution Annex I – part B – Article 7 – paragraph 2 – introductory part 2. Civil liability claims, brought in accordance with Article 4(1), concerning damage to property and other rights shall be subject to a special limitation period of:
Amendment 371 #
Motion for a resolution Annex I – part B – Article 7 – paragraph 2 – introductory part 2. Civil liability claims, brought in accordance with Article 4(1), concerning damage to property shall be subject to a
Amendment 372 #
Motion for a resolution Annex I – part B – Article 7 – paragraph 2 – point a (a) 1
Amendment 373 #
Motion for a resolution Annex I – part B – Article 7 – paragraph 2 – point b (b) 30 years from the date on which the presumed causal event of the operation of the high-risk AI-system that subsequently caused the property damage took place.
Amendment 374 #
Motion for a resolution Annex I – part B – Article 7 – paragraph 2 – point b (b) 30 years from the date on which the operation of the high-risk AI-system that subsequently caused the property or significant immaterial damage took place.
Amendment 375 #
Motion for a resolution Annex I – part B – Article 7 – paragraph 2 – point b (b)
Amendment 376 #
Motion for a resolution Annex I – part B – Article 7 – paragraph 2 – subparagraph 1
Amendment 377 #
Motion for a resolution Annex I – part B – Article 7 – paragraph 3
Amendment 378 #
Motion for a resolution Annex I – part B – chapter 3 – title
Amendment 379 #
Motion for a resolution Annex I – part B – Article 8 – title
Amendment 38 #
Motion for a resolution Recital G b (new) Gb. whereas, on the other hand, the victims of damages caused by AI-systems need to have a right to redress and full compensation of the damages and the harms that they have suffered;
Amendment 380 #
Motion for a resolution Annex I – part B – Article 8 – paragraph 1 1. The
Amendment 381 #
Motion for a resolution Annex I – part B – Article 8 – paragraph 1 1. The
Amendment 382 #
Motion for a resolution Annex I – part B – Article 8 – paragraph 1 1. The
Amendment 383 #
Motion for a resolution Annex I – part B – Article 8 – paragraph 2 – introductory part 2. The
Amendment 384 #
Motion for a resolution Annex I – part B – Article 8 – paragraph 2 – introductory part 2. The
Amendment 385 #
Motion for a resolution Annex I – part B – Article 8 – paragraph 2 – introductory part 2. The
Amendment 386 #
Motion for a resolution Annex I – part B – Article 8 – paragraph 2 – point a
Amendment 387 #
Motion for a resolution Annex I – part B – Article 8 – paragraph 2 – point a (a) the AI-system was activated without his or her knowledge while all reasonable and necessary measures to avoid such activation outside of the operator’s control were taken, or
Amendment 388 #
Motion for a resolution Annex I – part B – Article 8 – paragraph 2 – point b (b) due diligence was observed by performing all the following actions: selecting a suitable AI-system for the right task and skills, putting the AI-system duly into operation, monitoring the activities, providing well-documented information and maintaining the operational reliability by regularly installing all available updates.
Amendment 389 #
Motion for a resolution Annex I – part B – Article 8 – paragraph 2 – point b (b) due diligence was strictly observed by selecting a suitable AI-system for the right task and skills, putting the AI-system duly into operation, monitoring the activities and maintaining the operational reliability by regularly installing all available updates.
Amendment 39 #
Motion for a resolution Paragraph 1 1. Considers that the challenge related to the introduction of AI-systems into society and the economy is one of the most important questions on the current political agenda, in addition to bolstering the single market in goods and services and providing better safeguards to guarantee risk minimisation and provide adequate compensation for damages sustained; whereas technologies based on AI could improve our lives in almost every sector, from the personal sphere (e.g. personalised education, fitness programs) to global challenges (e.g. climate change, hunger and starvation);
Amendment 390 #
Motion for a resolution Annex I – part B – Article 8 – paragraph 2 – paragraph 1 The deployer shall not be able to escape liability by arguing that the harm or damage was caused by an autonomous activity, device or process driven by his or her AI-system.
Amendment 391 #
Motion for a resolution Annex I – part B – Article 8 – paragraph 2 – subparagraph 2 The
Amendment 392 #
Motion for a resolution Annex I – part B – Article 8 – paragraph 2 – subparagraph 2 The
Amendment 393 #
Motion for a resolution Annex I – part B – Article 8 – paragraph 2 – subparagraph 2 The
Amendment 394 #
Motion for a resolution Annex I – part B – Article 8 – paragraph 3 3. Where the harm or damage was caused by a third party that interfered with the AI-system by modifying its functioning, the
Amendment 395 #
Motion for a resolution Annex I – part B – Article 8 – paragraph 3 3. Where the harm or damage was caused by a third party that interfered with the AI-system by modifying its functioning
Amendment 396 #
Motion for a resolution Annex I – part B – Article 8 – paragraph 3 3. Where the harm or damage was caused by a third party that interfered with the AI-system by modifying its functioning, the
Amendment 397 #
Motion for a resolution Annex I – part B – Article 8 – paragraph 4 4.
Amendment 398 #
Motion for a resolution Annex I – part B – Article 8 – paragraph 4 4. At the request of the
Amendment 399 #
Motion for a resolution Annex I – part B – Article 8 – paragraph 4 4. At the request of the
Amendment 4 #
Motion for a resolution Citation 23 a (new) - having regard to Judgment of the Court (First Chamber) of 4 June 2009, Moteurs Leroy Somer v Dalkia France and Ace Europe (Case C-285/08),
Amendment 40 #
Motion for a resolution Paragraph 1 1. Considers that the challenge related to the introduction of AI-systems into society, the workplace and the economy is one of
Amendment 400 #
Motion for a resolution Annex I – part B – Article 9
Amendment 401 #
Motion for a resolution Annex I – part B – Article 10 – paragraph 1 1. If the harm or damage is caused both by a physical or virtual activity, device or process driven by an AI-system and by the actions of an affected person or of any person for whom the affected person is responsible, the
Amendment 402 #
Motion for a resolution Annex I – part B – Article 10 – paragraph 1 1. If the harm or damage is caused both by a physical or virtual activity, device or process driven by an AI-system and by the actions of an affected person or of any person for whom the affected person is responsible, the
Amendment 403 #
Motion for a resolution Annex I – part B – Article 10 – paragraph 1 1. If the harm or damage is caused both by a physical or virtual activity, device or process driven by an AI-system and by the actions of an affected person or of any person for whom the affected person is responsible, the deployer’s extent of liability under this Regulation shall be reduced accordingly. The deployer shall not be liable if the affected person or the person for whom he or she is responsible is solely
Amendment 404 #
Motion for a resolution Annex I – part B – Article 10 – paragraph 1 1. If the harm or damage is caused both by a physical or virtual activity, device or process driven by an AI-system and by the actions of an affected person or of any person for whom the affected person is responsible, the
Amendment 405 #
Motion for a resolution Annex I – part B – Article 10 – paragraph 2 2. A
Amendment 406 #
Motion for a resolution Annex I – part B – Article 10 – paragraph 2 2. A
Amendment 407 #
Motion for a resolution Annex I – part B – Article 10 – paragraph 2 2. A
Amendment 408 #
Motion for a resolution Annex I – part B – Article 11 – paragraph 1 If there is more than one
Amendment 409 #
Motion for a resolution Annex I – part B – Article 11 – paragraph 1 If there is more than one
Amendment 41 #
Motion for a resolution Paragraph 1 1. Considers that the challenge related to the introduction of AI-systems into society
Amendment 410 #
Motion for a resolution Annex I – part B – Article 11 – paragraph 1 If there is more than one
Amendment 411 #
Motion for a resolution Annex I – part B – Article 12 – paragraph 1 1. The
Amendment 412 #
Motion for a resolution Annex I – part B – Article 12 – paragraph 1 1. The
Amendment 413 #
Motion for a resolution Annex I – part B – Article 12 – paragraph 1 1. The
Amendment 414 #
Motion for a resolution Annex I – part B – Article 12 – paragraph 2 2. In the event that the
Amendment 415 #
Motion for a resolution Annex I – part B – Article 12 – paragraph 2 2. In the event that the
Amendment 416 #
Motion for a resolution Annex I – part B – Article 12 – paragraph 2 2. In the event that the
Amendment 417 #
Motion for a resolution Annex I – part B – Article 12 – paragraph 3 3. In the event that the
Amendment 418 #
Motion for a resolution Annex I – part B – Article 12 – paragraph 3 3. In the event that the
Amendment 419 #
Motion for a resolution Annex I – part B – Article 12 – paragraph 3 3. In the event that the
Amendment 42 #
Motion for a resolution Paragraph 1 1. Considers that the challenge related to the introduction of AI-systems into society and the economy is one of the most important questions on the current political agenda; whereas technologies based on AI could improve our lives in almost every sector, from the personal sphere
Amendment 420 #
Motion for a resolution Annex I – part B – Article 12 – paragraph 4 4. In the event that the insurer of the
Amendment 421 #
Motion for a resolution Annex I – part B – Article 12 – paragraph 4 4. In the event that the insurer of the
Amendment 422 #
Motion for a resolution Annex I – part B – Article 12 – paragraph 4 4. In the event that the insurer of the
Amendment 423 #
Motion for a resolution Annex I – part B – Article 13
Amendment 424 #
Motion for a resolution Annex I – part B – Article 14 – subparagraph 1 By 1 January 202X [5 years after the date of application of this Regulation], and every three years thereafter, the Commission shall present to the European Parliament, the Council and the European Economic and Social Committee a detailed report reviewing this Regulation in the light of the further development of Artificial Intelligence. In the context of this report, the Commission shall examine, inter alia, whether the scope of this Regulation should be extended to include economic damage.
Amendment 425 #
Motion for a resolution Annex I – part B – Article 14 – subparagraph 1 By 1 January 202X [
Amendment 426 #
Motion for a resolution Annex I – part B – Article 14 – subparagraph 1 By 1 January 202X [
Amendment 427 #
Motion for a resolution Annex I – part B – Article 14 – subparagraph 2 When preparing the report referred to in the first subparagraph, the Commission shall request relevant information from Member States relating to case law, court settlements as well as accident statistics, such as the number of accidents, damage done, AI applications involved, compensation paid by insurance companies, but also an assessment of the number of claims brought by affected persons, either individually or collectively, and of the delays in which these claims are treated in court.
Amendment 428 #
Motion for a resolution Annex I – part B – Article 14 – subparagraph 3 The Commission’s report shall be accompanied, where appropriate, by legislative proposals, meant to address the identified gaps.
Amendment 429 #
Motion for a resolution Annex I – part B – Annex
Amendment 43 #
Motion for a resolution Paragraph 1 1. Considers that the challenge related to the introduction of AI-systems into society and the economy is one of the most important questions on the current political agenda; whereas technologies based on AI could improve our lives in almost every sector, from the personal sphere (e.g. personalised education, fitness programs, credit provision and court orders) to global challenges (e.g. climate change, hunger and starvation);
Amendment 430 #
Motion for a resolution Annex I – part B – Annex
Amendment 44 #
Motion for a resolution Paragraph 1 1. Considers that the challenge related to the introduction of AI-systems into society and the economy is one of the most important questions on the current political agenda; whereas technologies based on AI could improve our lives in almost every sector, from the personal sphere (e.g. personalised education, fitness programs) to global challenges (e.g. climate change, healthcare, hunger and starvation);
Amendment 45 #
Motion for a resolution Paragraph 2 2. Firmly believes that in order to efficiently exploit the advantages and prevent potential misuses, principle-based and future-proof legislation across the EU for all AI-systems is crucial; is of the opinion that, while sector specific regulations for the broad range of possible applications are preferable, a horizontal legal framework based on common principles seems necessary to establish equal standards across the Union
Amendment 46 #
Motion for a resolution Paragraph 2 2. Firmly believes that in order to
Amendment 47 #
Motion for a resolution Paragraph 2 2. Firmly believes that in order to efficiently exploit the advantages and prevent potential misuses and avoid regulatory fragmentation in the Union, principle-based and future-proof legislation across the EU for all AI-systems is crucial; is of the opinion that, while sector specific regulations for the broad range of possible
Amendment 48 #
Motion for a resolution Paragraph 2 2. Firmly believes that in order to efficiently exploit the advantages and prevent potential misuses, uniform, principle-based and future-proof legislation across the EU for all AI-systems is crucial; is of the opinion that, while sector specific regulations for the broad range of possible applications are preferable, a horizontal legal framework based on common principles seems necessary to establish equal standards across the Union and effectively protect our European values and citizens’ rights;
Amendment 49 #
Motion for a resolution Paragraph 2 2. Firmly believes that in order to efficiently exploit the advantages and prevent potential misuses, principle-based
Amendment 5 #
Motion for a resolution Citation 23 a (new) - having regard to Council Directive 2000/78/EC of 27 November 2000 establishing a general framework for equal treatment in employment and occupation1a, ____________________ 1a OJ L 303, 2.12.2000, p. 16.
Amendment 50 #
Motion for a resolution Paragraph 3 3. States that the Digital Single Market needs to be fully harmonized and constantly updated since the digital sphere is characterized by rapid cross-border dynamics and international data flows; considers that the Union will only achieve the objectives of maintaining EU’s digital sovereignty and of boosting digital innovation made in Europe with consistent and common rules in line with a culture of innovation;
Amendment 51 #
Motion for a resolution Paragraph 3 3. States that the Digital Single Market needs to be fully harmonized since the digital sphere is characterized by rapid
Amendment 52 #
Motion for a resolution Paragraph 3 a (new) 3a. Notes that the global Artificial Intelligence race is already underway and that the Union should play in it a leading role by exploiting its scientific and technological potential; strongly emphasises that technology development must not come at the expense of protecting users from damage that can be caused by devices and systems using the AI; encourages the promotion at international level of the standards on civil liability in the context of the AI developed in the Union;
Amendment 53 #
Motion for a resolution Paragraph 3 a (new) 3a. Takes the view that artificial intelligence will create unprecedented opportunities and advantages for society and that the objective of EU decision makers should be to make Europe a world leader in AI; highlights the need in this regard to establish a clear, predictable legal framework that meets technological challenges effectively without hampering innovation;
Amendment 54 #
Motion for a resolution Paragraph 3 b (new) 3b. Considers that, in the global context of digitalisation and emerging digital technologies, international cooperation for the purpose of standardisation is particularly relevant to the competitiveness of European businesses;
Amendment 55 #
Motion for a resolution Paragraph 4 4.
Amendment 56 #
Motion for a resolution Paragraph 4 a (new) 4a. Underlines the key importance of the principle of transparency in the context of liability rules;
Amendment 57 #
Motion for a resolution Paragraph 5 5. Believes that there is no need for a complete revision of the well-functioning liability regimes but that the complexity, connectivity, opacity, vulnerability and autonomy of AI-systems nevertheless represent a significant challenge to the effectiveness of national and Union liability framework provisions; considers that specific and coordinated adjustments are necessary to avoid a situation in which persons who suffer harm or whose property is damaged end up without compensation;
Amendment 58 #
Motion for a resolution Paragraph 5 5. Believes that there is no need for a complete revision of the well-functioning liability regimes but that the complexity, connectivity, opacity, vulnerability, modification through updates, self-learning and autonomy of AI-systems nevertheless represent a significant challenge; considers that specific adjustments are necessary to avoid a situation in which persons who suffer
Amendment 59 #
Motion for a resolution Paragraph 5 5. Believes that there is no need for a complete revision of the well-functioning liability regimes but that the complexity, connectivity, opacity, vulnerability and autonomy of AI-systems, as well as the multitude of actors involved, nevertheless represent a significant challenge; considers that specific adjustments are necessary to avoid a situation in which persons who suffer harm or whose property is damaged end up without compensation;
Amendment 6 #
Motion for a resolution Citation 23 b (new) - having regard to the directives on equal treatment of men and women with regard to employment and access to goods and services,
Amendment 60 #
Motion for a resolution Paragraph 5 5. Believes that there is no need for a complete revision of the well-functioning liability regimes but that the complexity, connectivity, opacity, vulnerability and autonomy of AI-systems nevertheless represent a significant challenge; considers that specific adjustments, not only to the current legal framework, are necessary to avoid a situation in which persons who suffer harm or whose property is damaged end up without compensation;
Amendment 61 #
Motion for a resolution Paragraph 5 5. Believes that there is no need for a complete revision of the well-functioning liability regimes but that the complexity, connectivity, opacity, vulnerability and autonomy of AI-systems nevertheless represent a significant challenge; considers that specific adjustments are necessary to avoid a situation in which persons who suffer physical, psychological or mental harm or whose property is damaged end up without compensation;
Amendment 62 #
Motion for a resolution Paragraph 5 5. Believes that
Amendment 63 #
Motion for a resolution Paragraph 5 5. Believes that there is no need for a complete revision of the well-functioning liability regimes but that the complexity, connectivity, opacity, vulnerability and potential autonomy of AI-systems nevertheless represent a significant challenge; considers that specific adjustments are necessary to avoid a situation in which persons who suffer harm or whose property is damaged end up without compensation;
Amendment 64 #
Motion for a resolution Paragraph 6 6. Notes that all physical or virtual activities, devices or processes that are driven by AI-systems may technically be the direct or indirect cause of harm or damage, yet are always the result of someone building, deploying or interfering with the systems; is of the opinion that the opacity and autonomy of AI-systems could make it in practice very difficult or even impossible to trace back specific harmful actions of the AI-systems to specific human input or to decisions in the design; recalls that this constraint has an even greater impact on the affected person for whom it is impossible to establish causality between the damage and a prior act or omission; stresses that, in accordance with widely-
Amendment 65 #
Motion for a resolution Paragraph 6 6. Notes that all physical or virtual activities, devices or processes that are driven by AI-systems may technically be the direct or indirect cause of harm or damage, yet are always the result of someone building, deploying or interfering with the systems; notes in this respect that it is not necessary to give legal personality to AI-systems; is of the opinion that the opacity and autonomy of AI-systems could make it in practice very difficult or even
Amendment 66 #
Motion for a resolution Paragraph 6 6. Notes that all physical or virtual activities, devices or processes that are driven by AI-systems may technically be the direct or indirect cause of harm or damage, yet are always the result of someone building, deploying or interfering with the systems; is of the opinion that the opacity, connectivity and autonomy of AI-systems could make it in practice very difficult or even impossible to trace back specific harmful actions of the AI-systems to specific human input or to decisions in the design; recalls that, in accordance with widely-
Amendment 67 #
Motion for a resolution Paragraph 6 6. Notes that all physical or virtual activities, devices or processes that are driven by AI-systems may technically be the direct or indirect cause of harm or damage, yet are always the result of someone building, deploying or interfering with the systems; is of the opinion that
Amendment 68 #
Motion for a resolution Paragraph 6 6. Notes that all physical or virtual activities, devices or processes that are driven by AI-systems may technically be the direct or indirect cause of harm or damage, yet are
Amendment 69 #
Motion for a resolution Paragraph 7 7. Considers that the Product Liability Directive (PLD) has proven to be an effective means of getting compensation for harm triggered by a defective product; hence, notes that it should also be used with regard to civil liability claims against the producer of a defective AI-system, when the AI-system qualifies as a product under that Directive;
Amendment 7 #
Motion for a resolution Citation 23 c (new) - having regard to various consumer protection rules such as the Unfair Commercial Practices Directive (Directive 2005/29/EC) and the Consumer Rights Directive (Directive 2011/83/EC),
Amendment 70 #
Motion for a resolution Paragraph 7 7. Considers that the Product Liability Directive (PLD) has proven to be an effective means of getting compensation for harm triggered by a defective product; hence, notes that it should also be used with regard to civil liability claims against the producer of a defective AI-system, when the AI-system qualifies as a product under that Directive; if legislative adjustments to the PLD are necessary, they should be discussed during a review of that Directive; is of the opinion that, for the purpose of legal certainty throughout the Union, the ‘backend operator’ should fall under the same liability rules as the producer, manufacturer and developer, notwithstanding its proportionate liability according to their contribution of risk to the harm regulated under these provisions;
Amendment 71 #
Motion for a resolution Paragraph 7 7. Considers that the Product Liability Directive (PLD)
Amendment 72 #
Motion for a resolution Paragraph 7 7. Considers that the Product Liability Directive (PLD) has for over 30 years proven to be an effective means of getting compensation for harm triggered by a defective product; hence, notes that because it should also be used with regard to civil liability claims against the producer of a defective AI-system,
Amendment 73 #
Motion for a resolution Paragraph 7 7. Considers that the Product Liability
Amendment 74 #
Motion for a resolution Paragraph 8 8. Considers that the existing fault-based tort law of the Member States offers in most cases a sufficient level of protection for persons that suffer harm caused by an interfering third person like a hacker or whose property is damaged by such a third person, as the interference regularly constitutes a fault-based action; notes that only for cases in which the third person is untraceable or impecunious, additional liability rules seem necessary; nuances, however, that this is notwithstanding a malicious intent or gross negligence on behalf of the user of the application, and it must be accounted for in addition to the strict liability of the operator or manufacturer;
Amendment 75 #
Motion for a resolution Paragraph 8 8. Considers that the existing fault-based tort law of the Member States offers in most cases a sufficient level of protection for persons that suffer harm caused by an interfering third person like a hacker or whose property is damaged by such a third person, as the interference regularly constitutes a fault-based action; notes that only for cases in which the third person is untraceable or impecunious, or cases in which it would be disproportionately difficult for the injured party to bear the burden of proof, additional liability rules seem necessary;
Amendment 76 #
Motion for a resolution Paragraph 8 8. Considers that the existing fault-based tort law of the Member States offers in most cases a sufficient level of protection for persons that suffer harm caused by an interfering third
Amendment 77 #
Motion for a resolution Paragraph 9 9. Considers it, therefore, appropriate for this report to focus on civil liability claims against the
Amendment 78 #
Motion for a resolution Paragraph 9 9. Considers it, therefore, appropriate
Amendment 79 #
Motion for a resolution Paragraph 9 9. Considers it, therefore, appropriate for this report to focus on civil liability claims against the
Amendment 8 #
Motion for a resolution Citation 23 d (new) - having regard to Directive 2009/48/EC on the safety of toys,
Amendment 80 #
Motion for a resolution Subheading 3 Liability of the
Amendment 83 #
Motion for a resolution Paragraph 10 (10
Amendment 84 #
Motion for a resolution Paragraph 10 10. Opines that liability rules involving the
Amendment 85 #
Motion for a resolution Paragraph 10 10. Opines that liability rules involving the deployer should in principle cover all operations of AI-systems, no matter where the operation takes place and whether it happens physically or virtually; remarks that operations in public spaces that expose many third persons to a risk constitute, however, cases that require further consideration; considers that the potential victims of harm or damage are often not aware of the operation and regularly do not have contractual liability claims against the deployer; notes that when harm or damage materialises, such third persons would then only have a fault-liability claim, and they might find it difficult to prove the fault of the deployer of the AI-system and thus corresponding liability claims might fail;
Amendment 86 #
Motion for a resolution Paragraph 10 10. Opines that liability rules involving the
Amendment 87 #
Motion for a resolution Paragraph 10 10. Opines that liability rules involving the
Amendment 88 #
Motion for a resolution Paragraph 10 10. Opines that liability rules involving the deployer should
Amendment 89 #
Motion for a resolution Paragraph 11 11. Considers it appropriate to define the deployer as the person who decides on the use of the AI-system, who exercises control over the risk and who benefits from its operation; considers that exercising control means any action of the deployer that affects the manner of the operation from start to finish or that changes specific functions or processes within the AI- system; takes the view that those tasked with deployment should monitor the good intentions of the developers throughout the value chain in order to ensure the protection of consumers through trustworthy AI;
Amendment 9 #
Motion for a resolution Citation 23 e (new) - having regard to European Council Decision 2017/745 on medical devices amending Directive 2001/83/EC and applicable from 26 May 2020,
Amendment 90 #
Motion for a resolution Paragraph 11 11. Considers it appropriate to define the
Amendment 91 #
Motion for a resolution Paragraph 11 11. Considers it appropriate to define the deployer as the
Amendment 92 #
Motion for a resolution Paragraph 11 11. Considers it appropriate to define the
Amendment 93 #
Motion for a resolution Paragraph 11 11. Considers it appropriate to define the
Amendment 94 #
Motion for a resolution Paragraph 12 12. Notes that there could be situations in which there is more than one
Amendment 95 #
Motion for a resolution Paragraph 12 12. Notes that there could be situations in which there is more than one deployer; considers that in that event, all deployers should be jointly and severally liable while having the right to recourse proportionally against each other in line with the level of operational risk control;
Amendment 96 #
Motion for a resolution Paragraph 12 12. Notes that there could be situations in which there is more than one
Amendment 97 #
Motion for a resolution Paragraph 12 12. Notes that there could be situations in which there
Amendment 98 #
Motion for a resolution Paragraph 12 a (new) 12a. Notes that giving legal personality to artificial intelligence is neither necessary nor desirable, as the damage caused by the functioning of this new technology can and should be attributed to the responsible producer or deployer;
Amendment 99 #
Motion for a resolution Subheading 4
source: 652.518
History
(these entries mark the time of scraping, not the official date of the change)
docs/0 to docs/4: document URLs updated from europarl.europa.eu/sides/getDoc.do references (PE650.556, PE652.460, PE652.518, PE648.381, PE646.911) to the corresponding europarl.europa.eu/doceo pages (JURI-PR-650556_EN.html, JURI-AM-652460_EN.html, JURI-AM-652518_EN.html, IMCO-AD-648381_EN.html, TRAN-AD-646911_EN.html)
events/0/type: Old "Committee referral announced in Parliament, 1st reading/single reading" New "Committee referral announced in Parliament"
events/1/type: Old "Vote in committee, 1st reading/single reading" New "Vote in committee"
events/2/type: Old "Committee report tabled for plenary, single reading" New "Committee report tabled for plenary"
procedure/Modified legal basis: "Rules of Procedure EP 159" removed; procedure/Other legal basis: "Rules of Procedure EP 159" added
procedure/stage_reached: Old "Awaiting Parliament's vote" New "Procedure completed"
forecasts/0/title: Old "Indicative plenary sitting date, 1st reading/single reading" New "Debate in plenary scheduled"
procedure/stage_reached: Old "Awaiting committee decision" New "Awaiting Parliament's vote"
docs/1/date: Old 2020-05-26T00:00:00 New 2020-05-28T00:00:00
committees/3/opinion: False