Activities of Sergey LAGODINSKY related to 2020/2014(INL)
Legal basis opinions (0)
Amendments (110)
Amendment 1 #
Motion for a resolution
Citation 1 a (new)
- having regard to Article 169 of the Treaty on the Functioning of the European Union,
Amendment 2 #
Motion for a resolution
Citation 3 a (new)
- having regard to Directive (EU) 2019/770 of the European Parliament and of the Council of 20 May 2019 on certain aspects concerning contracts for the supply of digital content and digital services,
Amendment 10 #
Motion for a resolution
Recital A
A. whereas the concept of ‘liability’ plays an important double role in our daily life: on the one hand, it ensures that a person who has suffered harm or damage is entitled to claim and receive compensation from the party proven to be liable for that harm or damage, and on the other hand, it provides the economic incentives for natural and legal persons to avoid causing harm or damage in the first place or price the risk of having to compensate into their behaviour;
Amendment 12 #
Motion for a resolution
Recital A a (new)
Aa. whereas Artificial Intelligence and algorithmic decision-making create new consumer and societal challenges and further amplify existing challenges (e.g. privacy, behaviour tracking) that deserve particular attention by policy makers, while liability rules play a key role in enabling trust of citizens in Artificial Intelligence technologies and in the business actors involved.
Amendment 18 #
Motion for a resolution
Recital B
B. whereas any future-orientated liability framework has to strike a balance between efficiently and fairly protecting potential victims of harm or damage and, at the same time, providing enough leeway to make the development of new technologies, products or services possible; whereas ultimately, the goal of any liability framework should be to provide legal certainty for all parties, whether it be the producer, the operator, the affected person or any other third party;
Amendment 26 #
Motion for a resolution
Recital E
E. whereas Artificial Intelligence (AI)-systems present significant legal challenges for the existing liability framework and could lead to situations in which their opacity could make it extremely difficult or even impossible to identify who was in control of the risk associated with the AI-system or which code, input or data has ultimately caused the harmful operation, which should be avoided thanks to relevant transparency and information obligations to be borne by the accountable persons;
Amendment 30 #
Motion for a resolution
Recital F
F. whereas this difficulty results from the fact that AI-systems and non-AI-systems are subject to imperfection, resulting in the possibility for vulnerability to cybersecurity breaches, as well as from the design of increasingly autonomous AI-systems using, inter alia, machine-learning and deep-learning techniques;
Amendment 35 #
Motion for a resolution
Recital G
G. whereas sound ethical standards for AI-systems combined with solid and fair compensation procedures can help to address those legal challenges; whereas fair compensation procedures mean that each person who suffers harm caused by AI-systems, or whose property is damaged by AI-systems, should have the same level of protection as in cases without involvement of an AI-system;
Amendment 40 #
Motion for a resolution
Paragraph 1
1. Considers that the challenge related to the introduction of AI-systems into society, the workplace and the economy is one of the most important questions on the current political agenda; whereas technologies based on AI could, and should endeavour to, improve our lives in almost every sector, from the personal sphere (e.g. personalised education, assistance to vulnerable persons, fitness programs), to the working environment (e.g. alleviation from tedious and repetitive tasks) and to global challenges (e.g. the climate emergency, hunger and starvation);
Amendment 51 #
Motion for a resolution
Paragraph 3
3. States that the Digital Single Market needs to be fully harmonized since the digital sphere is characterized by rapid cross-border dynamics and international data flows; considers that the Union can contribute to objectives such as building capacity and capability within the EU and boosting digital innovation made in Europe with consistent and common rules;
Amendment 63 #
Motion for a resolution
Paragraph 5
5. Believes that there is no need for a complete revision of the well-functioning liability regimes but that the complexity, connectivity, opacity, vulnerability and potential autonomy of AI-systems nevertheless represent a significant challenge; considers that specific adjustments are necessary to avoid a situation in which persons who suffer harm or whose property is damaged end up without compensation;
Amendment 67 #
Motion for a resolution
Paragraph 6
6. Notes that all physical or virtual activities, devices or processes that are driven by AI-systems may technically be the direct or indirect cause of harm or damage, yet are always the result of someone building, deploying or interfering with the systems; is of the opinion that AI-systems designed to be auditable and open for the agency of a human operator at any time can support the possibility to trace back specific harmful actions of the AI-systems to specific human input or to decisions in the design; recalls that, in accordance with widely-accepted liability concepts, obstacles to this can nevertheless be circumvented by making the persons who create, maintain or control the risk associated with the AI-system accountable;
Amendment 70 #
Motion for a resolution
Paragraph 7
7. Considers that the Product Liability Directive (PLD) has proven to be an effective means of getting compensation for harm triggered by a defective product; hence, notes that it should also be used with regard to civil liability claims against the producer of a defective AI-system, when the AI-system qualifies as a product under that Directive; if legislative adjustments to the PLD are necessary, they should be discussed during a review of that Directive; is of the opinion that, for the purpose of legal certainty throughout the Union, the ‘backend operator’ should fall under the same liability rules as the producer, manufacturer and developer, notwithstanding its proportionate liability according to its contribution of risk to the harm regulated under these provisions;
Amendment 74 #
Motion for a resolution
Paragraph 8
8. Considers that the existing fault-based tort law of the Member States offers in most cases a sufficient level of protection for persons that suffer harm caused by an interfering third person like a hacker, or whose property is damaged by such a third person, as the interference regularly constitutes a fault-based action; notes that only for cases in which the third person is untraceable or impecunious do additional liability rules seem necessary; nuances, however, that this is notwithstanding malicious intent or gross negligence on the part of the user of the application, which must be accounted for in addition to the strict liability of the operator or manufacturer;
Amendment 79 #
Motion for a resolution
Paragraph 9
9. Considers it, therefore, appropriate for this report to focus on civil liability claims against the operator of an AI-system; affirms that the operator’s liability is justified by the fact that he or she is controlling a risk associated with the AI-system, comparable to an owner of a car or pet; considers that due to the AI-system’s complexity and connectivity, the operator will be in many cases the first visible contact point for the affected person;
Amendment 82 #
Motion for a resolution
Subheading 3
Liability of the operator
Amendment 84 #
Motion for a resolution
Paragraph 10
10. Opines that liability rules involving the operator should in principle cover all operations of AI-systems, no matter where the operation takes place and whether it happens physically or virtually; remarks that operations in public spaces that expose many third persons to a risk, however, require further consideration, while not overlooking other kinds of risk potentially caused by AI-systems; considers that the potential victims of harm or damage are often not aware of the operation and regularly do not have contractual liability claims against the operator; notes that when harm or damage materialises, such third persons would then only have a fault-liability claim, and they might find it difficult to prove the fault of the operator of the AI-system;
Amendment 93 #
Motion for a resolution
Paragraph 11
11. Considers it appropriate to define the operator as the person who decides on the use of the AI-system, who exercises control over the risk and who benefits from its operation; considers that exercising control means any action of the operator that affects the manner of the operation from start to finish or that changes specific functions or processes within the AI-system;
Amendment 96 #
Motion for a resolution
Paragraph 12
12. Notes that there could be situations in which there is more than one operator; considers that in that event, all operators and, if applicable, users should be jointly and severally liable while having the right to recourse proportionally against each other;
Amendment 102 #
Motion for a resolution
Paragraph 13
13. Recognises that the type of AI-system the operator is exercising control over is a determining factor; notes that an AI-system that entails a high risk and acts autonomously potentially endangers the general public to a much higher degree; considers that, based on the legal challenges that AI-systems pose to the existing liability regimes, it seems reasonable to set up a strict liability regime for those high-risk autonomous AI-systems;
Amendment 107 #
Motion for a resolution
Paragraph 14
14. Believes that an AI-system presents a high risk when its autonomous operation involves a significant potential to cause harm to one or more persons, in a manner that is based on the autonomous decision-making of the technology and thus impossible to predict in advance; considers that the significance of the potential depends on the interplay between the severity of possible harm, the likelihood that the risk materializes and the manner in which the AI-system is being used;
Amendment 123 #
Motion for a resolution
Paragraph 16
16. Believes that in line with strict liability systems of the Member States, the proposed Regulation should only cover harm to important legally protected rights such as life, health, physical integrity and property, and should set out the amounts and extent of compensation as well as the limitation period;
Amendment 127 #
Motion for a resolution
Paragraph 17
17. Determines that the criteria defining the level of risk of harm or damage caused by AI-systems but not listed in the Annex to the proposed Regulation should remain subject to fault-based liability; believes that the affected person should nevertheless benefit from a presumption of fault of the operator, who can exculpate itself by proving that it abided by its duty of care;
Amendment 130 #
Motion for a resolution
Paragraph 17 a (new)
17a. Requests the Commission to evaluate the need for regulation on contracts to prevent contractual non- liability clauses.
Amendment 134 #
Motion for a resolution
Paragraph 18 a (new)
18a. Is mindful of the fact that uncertainty regarding risks should not make insurance premiums prohibitively high and thus be an obstacle to research and innovation; proposes that a special mechanism between the Commission and the insurance industry should be developed to address the potential uncertainties in the insurance branch;
Amendment 136 #
Motion for a resolution
Paragraph 19
19. Is of the opinion that, based on the significant potential to cause harm and by taking Directive 2009/103/EC7 into account, all operators of high-risk AI-systems listed in the Annex to the proposed Regulation should hold liability insurance; considers that such a mandatory insurance regime for high-risk AI-systems should cover the amounts and the extent of compensation laid down by the proposed Regulation; is mindful of the fact that such technology is currently still very rare, since it presupposes a high degree of autonomous decision making and that, thus, the current proposals are mostly future oriented; _________________ 7 OJ L 263, 7.10.2009, p. 11.
Amendment 143 #
Motion for a resolution
Paragraph 20
20. Believes that a European compensation mechanism, funded with public money, is not the right way to fill potential insurance gaps; considers that, notwithstanding the aforementioned mechanism between the Commission and the insurance branch, bearing the good experience with regulatory sandboxes in the fintech sector in mind, it should be up to the insurance market to adjust existing products or create new insurance cover for the numerous sectors and various different technologies, products and services that involve AI- systems;
Amendment 144 #
Motion for a resolution
Annex I – part A – paragraph 1 – introductory part
This Report is addressing an important aspect of digitisation, which itself is shaped by cross-border activities, global competition and core societal considerations. The following principles should serve as guidance:
Amendment 145 #
Motion for a resolution
Annex I – part A – paragraph 1 – indent 1
- A genuine Digital Single Market requires a level of harmonisation with the objective of not lowering the legal protection of citizens throughout the Union.
Amendment 146 #
Motion for a resolution
Annex I – part A – paragraph 1 – indent 2
- New legal challenges posed by the development of Artificial Intelligence (AI)-systems have to be addressed by establishing legal certainty throughout the liability chain, including the producer, the operator, the affected person and any other third party.
Amendment 151 #
Motion for a resolution
Annex I – part A – paragraph 1 – indent 3
- There should be neither over-regulation nor legal uncertainty, as this would hamper European innovation in AI, especially if the technology, product or service is developed by SMEs or start-ups.
Amendment 155 #
Motion for a resolution
Annex I – part A – paragraph 1 – indent 5
- This Report and the Product Liability Directive are two pillars of a common liability framework for AI- systems and require close coordination between all political actors, at Union and national levels.
Amendment 159 #
Motion for a resolution
Annex I – part A – paragraph 1 – indent 6
- Citizens need to be entitled to the same level of protection and rights, no matter if the harm is caused by an AI- system or not, or if it takes place physically, materially, immaterially or virtually.
Amendment 160 #
Motion for a resolution
Annex I – part B – citation 4 a (new)
Having regard to Article 169 of the Treaty on the Functioning of the European Union,
Amendment 165 #
Motion for a resolution
Annex I – part B – recital 1
(1) The concept of ‘liability’ plays an important double role in our daily life: on the one hand, it ensures that a person who has suffered harm or damage is entitled to claim and get compensation from the party held liable for that harm or damage, and on the other hand, it provides the economic incentives for persons to avoid causing harm or damage in the first place. Any liability framework should strive to strike a balance between efficiently protecting potential victims of damage and, at the same time, providing enough leeway to make the development of new technologies, products or services possible.
Amendment 168 #
Motion for a resolution
Annex I – part B – recital 2
(2) Especially at the beginning of the life cycle of new products and services, and even after they have been pre-tested, there is a certain degree of risk for the user as well as for third persons that something does not function properly. This process of trial-and-error is at the same time a key enabler of technical progress without which most of our technologies would not exist. So far, the accompanying risks of new products and services have been properly mitigated by strong product safety legislation and liability rules.
Amendment 172 #
Motion for a resolution
Annex I – part B – recital 3
(3) The rise of Artificial intelligence (AI) however presents a significant challenge for the existing liability frameworks. Some regulatory safeguards should prevent that AI-systems’ complexity leads to opacity for their users, and should make it possible to identify who was in control of the risk of using the AI-system in question or which code or input has caused the harmful operation, while acknowledging the difficulty in this task. This difficulty results from the fact that AI-systems and non-AI-systems are subject to imperfection, resulting in the possibility for vulnerability to cybersecurity breaches, as well as from the design of increasingly autonomous AI-systems using, inter alia, machine-learning and deep-learning techniques. Besides these complex features and potential vulnerabilities, AI-systems could also be used to cause severe harm, such as compromising our values, rights and freedoms by tracking individuals against their will, by introducing Social Credit Systems or by constructing lethal autonomous weapon systems.
Amendment 173 #
Motion for a resolution
Annex I – part B – recital 4
(4) At this point, it is important to point out that public and private stakeholders should endeavour to make the advantages of deploying AI-systems by far outweigh the disadvantages. To this purpose, any regulatory intervention should ensure that AI-systems abide by Union law and ethical standards, help to fight the climate emergency more effectively, improve human well-being, notably with respect to medical examinations and working conditions, better integrate disabled persons into society and provide tailor-made education courses to all types of students. To exploit the various technological opportunities and to boost people’s trust in the use of AI-systems, while at the same time preventing harmful scenarios, sound ethical standards combined with solid and fair compensation are the best way forward.
Amendment 180 #
Motion for a resolution
Annex I – part B – recital 5
(5) Any discussion about required changes in the existing legal framework should start with the clarification that AI-systems have neither legal personality nor human conscience, and that their sole task is to serve humanity. Many AI-systems are also not so different from other technologies, which are sometimes based on even more complex software, and developed by human intervention. Ultimately, the large majority of AI-systems are used for handling trivial tasks with minimal risks for society. There are however also AI-systems that are deployed in a critical manner and are based on technologies such as neuronal networks and deep-learning processes. Their complexity and autonomy could make it very difficult to trace back specific actions to specific human decisions in their design or in their operation. The person expected to be in control of such an AI-system, like the manufacturer or the operator, might for instance argue that the physical or virtual activity, device or process causing the harm or damage was outside of his or her control because it was caused by an autonomous operation of his or her AI-system. The mere operation of an autonomous AI-system should at the same time not be a sufficient ground for admitting the liability claim if not brought in connection with other elements supporting such a claim. As a result, there might be liability cases in which a person who suffers harm or damage caused by an AI-system cannot prove the fault of the producer, of an interfering third party or of the operator and ends up without compensation.
Amendment 187 #
Motion for a resolution
Annex I – part B – recital 6
(6) Nevertheless, it should always be clear that whoever creates, maintains, controls or interferes with the AI-system, should be accountable for the harm or damage that the activity, device or process causes. This follows from general and widely accepted liability concepts of justice according to which the person that creates or maintains a risk for the public is accountable if that risk materializes, and thus should ex-ante minimise or ex-post compensate that risk. Consequently, the rise of AI-systems does not pose a need for a complete revision of liability rules throughout the Union. Specific adjustments of the existing legislation and very few new provisions would be sufficient to accommodate the AI-related challenges.
Amendment 191 #
Motion for a resolution
Annex I – part B – recital 7
(7) Council Directive 85/374/EEC3 (the Product Liability Directive) has proven to be an effective means of getting compensation for damage triggered by a defective product. Hence, it should also be used with regard to civil liability claims of a party who suffers harm or damage against the producer of a defective AI-system. In line with the better regulation principles of the Union, any necessary legislative adjustments should be discussed during a review of that Directive. The existing fault-based liability law of the Member States also offers in most cases a sufficient level of protection for persons that suffer harm or damage caused by an interfering third person, as that interference regularly constitutes a fault-based action, subject to situations where the third party uses the AI-system to cause harm. Consequently, this Regulation should focus on claims against the operator of an AI-system. _________________ 3 Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products, OJ L 210, 7.8.1985, p. 29.
Amendment 194 #
Motion for a resolution
Annex I – part B – recital 7
(7) Council Directive 85/374/EEC3 (the Product Liability Directive) has, for over 30 years, provided a valuable safety net to protect consumers from harm caused by defective products and needs to be updated to take account of civil liability claims of a party who suffers harm or damage against the producer of a defective AI-system. In line with the better regulation principles of the Union, all necessary legislative adjustments should be discussed during a review of that Directive. The existing fault-based liability law of the Member States also offers in most cases a sufficient level of protection for persons that suffer harm or damage caused by an interfering third person, but does not necessarily take account of technological developments. Consequently, this Regulation should focus on claims against the frontend operator and backend operator of an AI-system. _________________ 3 Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products, OJ L 210, 7.8.1985, p. 29.
Amendment 197 #
Motion for a resolution
Annex I – part B – recital 8
(8) The liability of the operator under this Regulation is based on the fact that he or she controls a risk by operating an AI-system. Comparable to an owner of a car or pet, the operator is able to exercise a certain level of control over the risk that the item poses. Exercising control thereby should be understood as meaning any process of the operator that can affect how the risk materialises, such as the manner of the operation from start to finish, or that can change specific functions or processes within the AI-system.
Amendment 201 #
Motion for a resolution
Annex I – part B – recital 9
(9) If a user, namely the person that utilises the AI-system, is involved in the harmful event, he or she should only be liable under this Regulation if the user also qualifies as an operator; otherwise, the extent of the user’s grossly negligent or intentional contribution to the risk will lead to the user’s fault-based liability to the claimant. Applicable consumer rights of the user should remain unaffected.
Amendment 204 #
Motion for a resolution
Annex I – part B – recital 10
(10) This Regulation should cover in principle all AI-systems, no matter where they are operating and whether the operations take place physically or virtually. It should contribute to bring legal certainty, inasmuch as possible and without pre-empting future technological developments, to the different liability claims that the affected persons can bring throughout the liability chain and throughout the lifecycle of an AI-system.
Amendment 210 #
Motion for a resolution
Annex I – part B – recital 11
(11) The type of AI-system the operator is exercising control over is a determining factor. An AI-system that entails a high risk potentially endangers the user or the public to a much higher degree and in a manner that is random and difficult to predict in advance. This means that at the start of the autonomous operation of the AI-system, it is impossible to predict and probably to control the possible malicious behaviour of the software, while its impact can be extremely high due to the extent of harm or the nature of the goods or rights that are exposed to the risk. Determining how significant the potential to cause harm or damage by a high-risk AI-system is should depend on the interplay between the purpose of use for which the AI-system is put on the market, the manner in which the AI-system is being used, the severity of the potential harm or damage, the likelihood that the risk materialises and the degree of autonomy of decision-making that can result in harm. The degree of severity determining the level of compensation of the affected persons should be assessed based on the extent of the potential harm resulting from the operation, the number of affected persons, the total value, material and immaterial, of the potential damage as well as the harm to society as a whole. The likelihood for the harm or damage to occur should be determined based on the role of the algorithmic calculations in the decision-making process, the possibility of human intervention in this process, the complexity of the decision and the reversibility of the effects. Ultimately, the manner of usage should depend, among other things, on the context in which the AI-system operates, whether it could have legal or factual effects on important legally protected rights and goods of the affected person, whether the effects can reasonably be avoided, and on the level of information provided to the users.
Amendment 214 #
Motion for a resolution
Annex I – part B – recital 12
(12) All categories of AI-systems with a high risk, as well as the criteria allowing to assess such risk, should be determined according to a structured process involving a Commission expert committee and a constant consultation between the Commission and all involved stakeholders, including researchers, individual experts, free software community members, scientists, engineers, as well as the competent supervisory authorities. Given the rapid technical and market developments worldwide, as well as the technical expertise which is required for an adequate review of AI-systems, a list of categories of AI-systems based on criteria able to define their autonomy and to assess the level of risk over time should be updated on a regular basis, while giving businesses and research organisations enough planning and investment security.
Amendment 219 #
Motion for a resolution
Annex I – part B – recital 13
(13) It is of particular importance that the Commission carry out regular consultations of all relevant stakeholders during its preparatory work, including at expert level, and that those consultations be conducted in accordance with the principles laid down in the Interinstitutional Agreement of 13 April 2016 on Better Law-Making4. A standing committee called 'Technical Committee – high-risk AI-systems' (TCRAI) should support the Commission in its regular review under this Regulation. That standing committee should comprise representatives of the Member States as well as a balanced selection of stakeholders, including consumer organisations, associations representing the victims of harm or damage, businesses representatives from different sectors and sizes, as well as individual experts, researchers and scientists. _________________ 4 OJ L 123, 12.5.2016, p. 1
Amendment 222 #
Motion for a resolution
Annex I – part B – recital 14
(14) In line with strict liability systems of the Member States, this Regulation should cover harm or damage to life, health, physical integrity and property, including the affected person’s data and digital environment, as well as other erga omnes legal goods, provided their infringement can result in material damage.
Amendment 224 #
Motion for a resolution
Annex I – part B – recital 14
(14) Due to the special characteristics of AI-systems, the proposed Regulation should cover material as well as non-material harm, including damage to intangible property and data, such as loss or leak of data, and should ensure that damage is fully compensated in compliance with the fundamental right of redress for the damage suffered.
Amendment 227 #
Motion for a resolution
Annex I – part B – recital 15
(15) All physical or virtual activities, devices or processes driven by AI-systems that are not qualified as posing a high risk, under criteria to be determined through a clear process under the auspices of the Commission, should be subject to fault-based liability, notwithstanding stricter national laws and consumer protection legislation in force. The national laws of the Member States, including any relevant jurisprudence, with regard to the amount and extent of compensation as well as the limitation period should continue to apply. A person who suffers harm or damage caused by an AI-system needs to prove that the claimed harm has been caused by the AI-system, but should benefit from the presumption of fault of the operator.
Amendment 230 #
Motion for a resolution
Annex I – part B – recital 16
(16) The diligence which can be expected from an operator should be commensurate with (i) the nature of the AI-system, (ii) the information on the nature of the AI-system provided to the operator and to the public, (iii) the legally protected right potentially affected, (iv) the potential harm or damage the AI-system could cause and (v) the likelihood of such damage. Thereby, it should be taken into account that the operator might have limited knowledge of the algorithms and data used in the AI-system, even though a sufficient level of information should be ensured, providing for the relevant documentation on the use and design instructions, including the source code and the data used by the AI-system, made easily accessible through a mandatory legal deposit. It should be presumed that the operator has observed due care in selecting a suitable AI-system, if the operator has selected an AI-system which has been certified under [the voluntary certification scheme envisaged on p. 24 of COM(2020) 65 final]. It should be presumed that the operator has observed due care during the operation of the AI-system, if the operator can prove to have actually and regularly monitored the AI-system during its operation and to have notified the manufacturer about potential irregularities during the operation. It should be presumed that the operator has observed due care as regards maintaining the operational reliability, if the operator installed all available updates provided by the producer of the AI-system according to the conditions laid down in Directive (EU) 2019/770.
Due account should be given to the possibility for the largely volunteer- based free software community to produce software for the general public to use which can be integrated, in whole or in part, into AI systems, without automatically becoming subject to obligations designed for businesses providing digital content in a professional capacity.
Amendment 234 #
Motion for a resolution
Annex I – part B – recital 17
(17) In order to enable the operator to prove that he or she was not at fault, the producers should have the duty to collaborate with the operator, including by providing well-documented information. European as well as non-European producers should furthermore have the obligation to designate an AI-liability-representative within the Union as a contact point for replying to all requests from operators, taking similar provisions set out in Article 37 GDPR (data protection officers), Articles 3(41) and 13(4) of Regulation 2018/858 of the European Parliament and of the Council5 and Articles 4(2) and 5 of Regulation 2019/1020 of the European Parliament and of the Council6 (manufacturer's representative) into account. _________________ 5 Regulation (EU) 2018/858 of the European Parliament and of the Council of 30 May 2018 on the approval and market surveillance of motor vehicles and their trailers, and of systems, components and separate technical units intended for such vehicles, amending Regulations (EC) No 715/2007 and (EC) No 595/2009 and repealing Directive 2007/46/EC (OJ L 151, 14.6.2018, p. 1). 6 Regulation (EU) 2019/1020 of the European Parliament and of the Council of 20 June 2019 on market surveillance and compliance of products and amending Directive 2004/42/EC and Regulations (EC) No 765/2008 and (EU) No 305/2011 (OJ L 169, 25.6.2019, p. 1).
Amendment 237 #
Motion for a resolution
Annex I – part B – recital 18
(18) The legislator has to consider the liability risks connected to AI-systems during their whole lifecycle, from development to usage to end of life, including waste and recycling management. The inclusion of AI-systems in a product or service represents a financial risk for businesses and consequently will have a heavy impact on the ability and options of small and medium-sized enterprises (SMEs) as well as of start-ups in relation to insuring and financing their research and development projects based on new technologies. The purpose of liability is, therefore, not only to safeguard important legally protected rights of individuals but also a factor which determines whether businesses, especially SMEs and start-ups, are able to raise capital, innovate, research, and ultimately offer new products and services contributing to the well-being of society, as well as whether the customers trust in such products and services and are willing to use them despite the potential risks and legal claims being brought against them. In consideration of the dynamic nature of the risk pertaining to AI-systems, to which the whole public is potentially exposed, civil liability rules should be governed by a high level of protection of potentially affected persons and public goods.
Amendment 242 #
Motion for a resolution
Annex I – part B – recital 19
(19) Insurance can help to ensure that victims can receive effective compensation as well as to pool the risks of all insured persons. One of the factors on which insurance companies base their offer of insurance products and services is risk assessment based on access to sufficient historical claim data. A lack of access to, or an insufficient quantity of, high-quality data could be a reason why creating insurance products for new and emerging technologies is difficult at the beginning. However, greater access to and optimised use of data generated by new technologies, coupled with an obligation to provide well-documented information, will enhance insurers’ ability to model emerging risks and to foster the development of more innovative cover.
Amendment 244 #
Motion for a resolution
Annex I – part B – recital 20
(20) Despite missing historical claim data, there are already insurance products that are developed area-by-area and cover-by-cover as technology develops. Many insurers specialise in certain market segments (e.g. SMEs) or in providing cover for certain product types (e.g. electrical goods), which means that there will usually be an insurance product available for the insured. If a new type of insurance is needed, the insurance market will develop and offer a fitting solution and thus will close the insurance gap. Member States should be encouraged to set up a special compensation fund to supplement the liability insurance cover in order to ensure that damages can be effectively compensated for in cases where no insurance cover exists. In order to ensure legal certainty and to fulfil the obligation to inform all potentially affected persons, the existence of the relevant insurance and fund shall be made publicly visible by an individual registration number appearing in a specific Union register, which would allow anyone interacting with the AI-system to be informed about the ways of action when a harm or a damage occurs, the limits of liability attached to it, the names and the functions of the operator and all other relevant details.
Amendment 251 #
Motion for a resolution
Annex I – part B – recital 21
(21) It is of utmost importance that any future changes to this text go hand in hand with a necessary review of the PLD, in order to review in a comprehensive and consistent manner the rights and obligations of all concerned parties throughout the liability chain. The introduction of a new liability regime for the operator of AI-systems requires that the provisions of this Regulation and the review of the PLD be closely coordinated in terms of substance as well as approach, so that they together constitute a consistent liability framework for AI-systems, balancing the interests of the producer, the operator, the consumer and the affected person as regards the liability risk and the relevant compensation modalities. Adapting and streamlining the definitions of AI-system, operator, producer, developer, defect, product and service throughout all pieces of legislation is therefore necessary.
Amendment 254 #
Motion for a resolution
Annex I – part B – recital 22
(22) Since the objectives of this Regulation, namely to create a future-orientated and unified approach at Union level, which sets common European standards for our citizens and businesses, and to ensure the consistency of rights and legal certainty throughout the Union, in order to avoid fragmentation of the Digital Single Market, which would hamper the goal of maintaining digital sovereignty, of fostering digital innovation and of ensuring a high level of protection of citizen and consumer rights in Europe, require that the liability regimes for AI-systems be fully harmonised, and since this cannot be sufficiently achieved by the Member States due to the rapid technological change, the cross-border development as well as the usage of AI-systems and, eventually, the conflicting legislative approaches across the Union, but can rather, by reason of the scale or effects of the action, be achieved at Union level, the Union may adopt measures in accordance with the principle of subsidiarity as set out in Article 5 of the Treaty on European Union. In accordance with the principle of proportionality as set out in that Article, this Regulation does not go beyond what is necessary in order to achieve those objectives.
Amendment 258 #
Motion for a resolution
Annex I – part B – Article 1 – paragraph 1
This Regulation sets out rules for the civil liability claims of natural and legal persons against the operator of AI-systems.
Amendment 259 #
Motion for a resolution
Annex I – part B – Article 2 – paragraph 1
1. This Regulation applies on the territory of the Union where a physical or virtual activity, device or process driven by an AI-system or automated decision-making (ADM) system has caused harm or damage to the life, health, physical integrity and property of a natural or legal person, including the affected person’s data and digital environment, as well as other erga omnes legal rights, provided their infringement can result in material damage.
Amendment 263 #
Motion for a resolution
Annex I – part B – Article 2 – paragraph 2
2. Any agreement between an operator of an AI-system and a natural or legal person who suffers harm or damage because of the AI-system, which circumvents or limits the rights and obligations set out in this Regulation, whether concluded before or after the harm or damage has been caused, shall be deemed ineffective regarding the rights and obligations set out under this Regulation.
Amendment 267 #
Motion for a resolution
Annex I – part B – Article 2 – paragraph 3
3. This Regulation is without prejudice to any additional liability claims resulting from contractual relationships, as well as from regulations on product liability, consumer protection, anti-discrimination, labour and environmental protection, between the operator and the natural or legal person who suffered harm or damage because of the AI-system.
Amendment 271 #
Motion for a resolution
Annex I – part B – Article 3 – point a
(a) ‘AI-system’ means a system that displays behaviour simulating intelligence by analysing certain input and taking action, with some degree of autonomy, to achieve specific goals. AI-systems can be purely software-based, acting in the virtual world, or can be embedded in hardware devices;
Amendment 273 #
Motion for a resolution
Annex I – part B – Article 3 – point a a (new)
(aa) ‘automated decision-making (ADM), decision-support or decision-informing system’ means a procedure in which decisions are initially, partly or completely, delegated to an operator by way of using software or a service; the operator then in turn uses automatically executed decision-making models to perform an action;
Amendment 274 #
Motion for a resolution
Annex I – part B – Article 3 – point b
(b) ‘autonomous’ means an AI-system or ADM system that operates on its owner’s behalf but without any interference from that ownership entity, perceiving certain input and going beyond a set of pre-determined instructions, despite its behaviour being constrained by, and targeted at fulfilling, the goal it was given and other relevant design choices made by its developer;
Amendment 276 #
Motion for a resolution
Annex I – part B – Article 3 – point c
(c) ‘high risk’ means a significant potential in an autonomously operating AI-system to cause harm or damage to one or more persons in a manner that is random and difficult to predict in advance; the significance of the potential depends on the interplay between the severity of possible harm or damage, the degree of autonomy of decision-making, and the manner and context in which the AI-system is being used, especially the value of the goods and rights exposed to the risk;
Amendment 279 #
Motion for a resolution
Annex I – part B – Article 3 – point d
(d) ‘operator’ means the person who decides on the use of the AI-system, exercises control over the associated risk and benefits from its operation, including the frontend and the backend operator, the latter being the person continuously defining the features of the relevant technology and providing essential and ongoing backend support;
Amendment 288 #
Motion for a resolution
Annex I – part B – Article 3 – point e
(e) ‘affected person’ means any person whose goods or rights are injured by a physical or virtual activity, device or process driven by an AI-system, and who is not its operator;
Amendment 289 #
Motion for a resolution
Annex I – part B – Article 3 – point f
(f) ‘harm or damage’ means an adverse impact on the aforementioned goods and rights;
Amendment 296 #
Motion for a resolution
Annex I – part B – Article 3 – point g
(g) ‘producer’ means the developer or the backend operator of an AI-system, or the producer as defined in Article 3 of Council Directive 85/374/EEC7. _________________ 7 Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products, OJ L 210, 7.8.1985, p. 29.
Amendment 303 #
Motion for a resolution
Annex I – part B – Article 4 – paragraph 1
1. The operator of a high-risk AI-system shall be strictly liable for any harm or damage that was caused by a physical or virtual activity, device or process driven by that AI-system.
Amendment 304 #
Motion for a resolution
Annex I – part B – Article 4 – paragraph 2 – introductory part
2. The categories of high-risk AI-systems, as well as the criteria used to define such high risk, shall be determined under a structured consultation process between the Commission, competent supervisory authorities and all involved stakeholders, including civil society representatives, with regular updates in order to reflect the rapid pace of technological change.
Amendment 309 #
Motion for a resolution
Annex I – part B – Article 4 – paragraph 2 – point a
Amendment 313 #
Motion for a resolution
Annex I – part B – Article 4 – paragraph 2 – point b
Amendment 315 #
Motion for a resolution
Annex I – part B – Article 4 – paragraph 2 – point c
Amendment 318 #
Motion for a resolution
Annex I – part B – Article 4 – paragraph 2 – subparagraph 2
Amendment 321 #
Motion for a resolution
Annex I – part B – Article 4 – paragraph 3
3. The operator of a high-risk AI-system shall not be able to exonerate himself or herself by arguing that he or she acted with due diligence or that the harm or damage was caused by an autonomous activity, device or process driven by his or her AI-system. The operator shall not be held liable if the harm or damage was caused by force majeure.
Amendment 323 #
Motion for a resolution
Annex I – part B – Article 4 – paragraph 3 a (new)
3a. Where the operator is a frontend operator, he or she shall be able to prove the absence of his or her fault. He or she shall not be held liable if the harm or damage was caused by force majeure.
Amendment 328 #
Motion for a resolution
Annex I – part B – Article 4 – paragraph 4
4. The operator of a high-risk AI-system shall ensure that he or she has liability insurance cover that is adequate in relation to the amounts and extent of compensation provided for in Articles 5 and 6 of this Regulation. If compulsory insurance regimes already in force pursuant to other Union or national law are considered to cover the operation of the AI-system, the obligation to take out insurance for the AI-system pursuant to this Regulation shall be deemed fulfilled, as long as the relevant existing compulsory insurance covers the amounts and the extent of compensation provided for in Articles 5 and 6 of this Regulation.
Amendment 330 #
Motion for a resolution
Annex I – part B – Article 4 – paragraph 4 a (new)
4a. The liability insurance system shall be supplemented by a fund in order to ensure that damages can be compensated for in cases where no insurance cover exists.
Amendment 332 #
Motion for a resolution
Annex I – part B – Article 4 – paragraph 5
5. This Regulation shall prevail over national liability regimes in the event of conflicting strict liability classification of AI-systems, insofar as this Regulation provides for rules more favourable to affected persons and to consumer rights.
Amendment 333 #
Motion for a resolution
Annex I – part B – Article 4 – paragraph 5 a (new)
5a. The rules provided for in Article 4 shall not be overridden by contract.
Amendment 334 #
Motion for a resolution
Annex I – part B – Article 5 – title
Amount and extent of compensation
Amendment 336 #
Motion for a resolution
Annex I – part B – Article 5 – paragraph 1 – introductory part
1. An operator of a high-risk AI-system that has been held liable for harm or damage under this Regulation shall compensate in accordance with the national rules for the calculation of damages.
Amendment 341 #
Motion for a resolution
Annex I – part B – Article 5 – paragraph 1 – point a
Amendment 346 #
Motion for a resolution
Annex I – part B – Article 5 – paragraph 1 – point b
Amendment 354 #
Motion for a resolution
Annex I – part B – Article 5 – paragraph 1 – point 2
Amendment 358 #
Motion for a resolution
Annex I – part B – Article 6
Amendment 370 #
Motion for a resolution
Annex I – part B – Article 7 – paragraph 2 – introductory part
2. Civil liability claims, brought in accordance with Article 4(1), concerning damage to property and other rights shall be subject to a special limitation period of:
Amendment 373 #
Motion for a resolution
Annex I – part B – Article 7 – paragraph 2 – point b
(b) 30 years from the date on which the presumed causal event of the operation of the high-risk AI-system that subsequently caused the property damage took place.
Amendment 376 #
Motion for a resolution
Annex I – part B – Article 7 – paragraph 2 – subparagraph 1
Amendment 381 #
Motion for a resolution
Annex I – part B – Article 8 – paragraph 1
1. The operator of an AI-system that is not defined as a high-risk AI-system in accordance with Article 3(c) shall be subject to fault-based liability for any harm or damage that was caused by a physical or virtual activity, device or process driven by the AI-system.
Amendment 383 #
Motion for a resolution
Annex I – part B – Article 8 – paragraph 2 – introductory part
2. The fault of the operator shall be presumed, unless he or she can prove that the harm or damage was caused without his or her fault, relying on either of the following grounds:
Amendment 387 #
Motion for a resolution
Annex I – part B – Article 8 – paragraph 2 – point a
(a) the AI-system was activated without his or her knowledge, while all reasonable and necessary measures to avoid such activation outside of the operator’s control were taken, or
Amendment 388 #
Motion for a resolution
Annex I – part B – Article 8 – paragraph 2 – point b
(b) due diligence was observed by performing all the following actions: selecting a suitable AI-system for the right task and skills, putting the AI-system duly into operation, monitoring the activities, providing well-documented information and maintaining the operational reliability by regularly installing all available updates.
Amendment 392 #
Motion for a resolution
Annex I – part B – Article 8 – paragraph 2 – subparagraph 2
The operator shall not be able to escape liability by arguing that the harm or damage was caused by an autonomous activity, device or process driven by his or her AI-system. The operator shall not be liable if the harm or damage was caused by force majeure.
Amendment 395 #
Motion for a resolution
Annex I – part B – Article 8 – paragraph 3
3. Where the harm or damage was caused by a third party that interfered with the AI-system by modifying its functioning or its effects, the operator shall nonetheless be liable for the payment of compensation if such third party is untraceable or impecunious.
Amendment 398 #
Motion for a resolution
Annex I – part B – Article 8 – paragraph 4
4. At the request of the operator, the producer of an AI-system shall have the duty of collaborating with and providing information to the operator, to the extent warranted by the significance of the claim, in order to allow the operator to prove that he or she acted without fault.
Amendment 401 #
Motion for a resolution
Annex I – part B – Article 10 – paragraph 1
1. If the harm or damage is caused both by a physical or virtual activity, device or process driven by an AI-system and by the actions of an affected person or of any person for whom the affected person is responsible, the operator’s extent of liability under this Regulation shall be reduced accordingly. The operator shall not be liable if the affected person or the person for whom he or she is responsible is solely or predominantly accountable for the harm or damage caused.
Amendment 405 #
Motion for a resolution
Annex I – part B – Article 10 – paragraph 2
2. An operator held liable may use the data generated by the AI-system to prove contributory negligence on the part of the affected person, in accordance with Regulation (EU) 2016/679 and other relevant data protection laws. The affected person may also use such data as a means of proof or clarification in the liability claim.
Amendment 408 #
Motion for a resolution
Annex I – part B – Article 11 – paragraph 1
If there is more than one operator of an AI-system, they shall be jointly and severally liable. If any of the operators is also the producer of the AI-system, this Regulation shall prevail over the Product Liability Directive, provided that the level of protection of the consumer and of the affected person is not lower than that provided under the Product Liability Directive.
Amendment 413 #
Motion for a resolution
Annex I – part B – Article 12 – paragraph 1
1. The operator shall not be entitled to pursue a recourse action unless the affected person, who is entitled to receive compensation under this Regulation, has been paid in full.
Amendment 415 #
Motion for a resolution
Annex I – part B – Article 12 – paragraph 2
2. In the event that the operator is held jointly and severally liable with other operators in respect of an affected person and has fully compensated that affected person, in accordance with Article 4(1) or 8(1), that operator may recover part of the compensation from the other operators, in proportion to his or her liability. Operators that are jointly and severally liable shall be obliged in equal proportions in relation to one another, unless otherwise determined. If the contribution attributable to a jointly and severally liable operator cannot be obtained from him or her, the shortfall shall be borne by the other operators. To the extent that a jointly and severally liable operator compensates the affected person and demands adjustment of advancements from the other liable operators, the claim of the affected person against the other operators shall be subrogated to him or her. The subrogation of claims shall not be asserted to the disadvantage of the original claim.
Amendment 418 #
Motion for a resolution
Annex I – part B – Article 12 – paragraph 3
3. In the event that the operator of a defective AI-system fully indemnifies the affected person for harm or damage in accordance with Article 4(1) or 8(1), he or she may take action for redress against the producer of the defective AI-system according to Directive 85/374/EEC and to national provisions concerning liability for defective products.
Amendment 421 #
Motion for a resolution
Annex I – part B – Article 12 – paragraph 4
4. In the event that the insurer of the operator indemnifies the affected person for harm or damage in accordance with Article 4(1) or 8(1), any civil liability claim of the affected person against another person for the same damage shall be subrogated to the insurer of the operator to the amount the insurer of the operator has compensated the affected person.
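Paragraph 4 caps the insurer's subrogated claim at the amount it actually paid out to the affected person. A minimal sketch of that cap, with illustrative names and figures that are not part of the amendment:

```python
def subrogated_amount(insurer_paid, affected_persons_claim):
    """Illustrative reading of Art. 12(4): the insurer steps into the
    affected person's claim against another person, but only up to the
    amount the insurer compensated the affected person."""
    return min(insurer_paid, affected_persons_claim)

# The insurer paid EUR 50 000; the affected person's claim against a
# third party for the same damage is EUR 80 000. The insurer may
# pursue only 50 000 of that claim.
print(subrogated_amount(50_000, 80_000))  # 50000
```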
Amendment 423 #
Motion for a resolution
Annex I – part B – Article 13
Amendment 426 #
Motion for a resolution
Annex I – part B – Article 14 – subparagraph 1
By 1 January 202X [3 years after the date of application of this Regulation], and every two years thereafter, the Commission shall present to the European Parliament, the Council and the European Economic and Social Committee a detailed report reviewing this Regulation in the light of the further development of Artificial Intelligence.
Amendment 427 #
Motion for a resolution
Annex I – part B – Article 14 – subparagraph 2
When preparing the report referred to in the first subparagraph, the Commission shall request relevant information from Member States relating to case law and court settlements, as well as accident statistics such as the number of accidents, the damage done, the AI applications involved and the compensation paid by insurance companies, together with an assessment of the number of claims brought by affected persons, whether individually or collectively, and of the time taken for those claims to be dealt with in court.
Amendment 428 #
Motion for a resolution
Annex I – part B – Article 14 – subparagraph 3
The Commission’s report shall be accompanied, where appropriate, by legislative proposals intended to address the identified gaps.
Amendment 430 #
Motion for a resolution
Annex I – part B – Annex