3 Amendments of Giusi PRINCI related to 2024/0035(COD)

Amendment 41
Proposal for a directive
Recital 11
(11) Research has shown that limiting the dissemination of child sexual abuse material is not only crucial to avoid the re-victimisation linked to the circulation of images and videos of the abuse but is also essential as a form of offender-side prevention, as accessing child sexual abuse material is often the first step towards hands-on abuse, regardless of whether it depicts real or simply realistic abuse and exploitation. The ease with which artificial intelligence (AI) can now be used to generate such material has heightened the urgency of addressing this issue. AI image-generators, trained on datasets that may have contained child sexual abuse imagery, are likely to have enabled the production of ‘new’ child sexual abuse material. The ability to create such content with minimal technical expertise has led to a scenario where child sexual abuse material can potentially be produced on an industrial scale. Moreover, with the ongoing development of artificial intelligence applications capable of creating realistic images that are indistinguishable from real images, the number of so-called ‘deep-fake’ images and videos depicting child sexual abuse is expected to grow exponentially in the coming years. In addition, augmented, extended and virtual reality settings making use of avatars, including sensory feedback, e.g. through devices providing a perception of touch, are not fully covered by the existing definition. To ensure the regulation of generative AI technologies that could create harmful content, including child sexual abuse material, strict oversight mechanisms should be considered, including safety protocols implemented at both developer and deployer levels. The inclusion of an explicit reference to ‘reproductions and representations’ should ensure that the definition of child sexual abuse material covers these and future technological developments in a sufficiently technology-neutral and hence future-proof way.
2025/01/20
Committee: FEMM
Amendment 48
Proposal for a directive
Recital 11 a (new)
(11 a) A risk-based regulatory approach, as foreseen by the AI Act, has the potential to address the misuse of generative AI to produce child sexual abuse material by establishing enforceable liability obligations and implementing safeguards such as transparency measures, risk assessment processes, and watermarking of generated content. However, the success of this approach relies on the effective implementation of these regulations, the development of common standards, and the capacity to adapt to emerging risks. A complementary approach that should be considered is the establishment of principle-based frameworks that guide the ethical development and deployment of AI technologies, emphasising core principles such as human rights and gender equality, and aiming to ensure that AI technologies are developed and used in ways that promote societal well-being while mitigating potential harms.
2025/01/20
Committee: FEMM
Amendment 51
Proposal for a directive
Recital 11 b (new)
(11 b) Children are increasingly connected from a young age, and girls are particularly vulnerable to encountering and being subjected to cyber violence. Studies show that one in 10 women has experienced some form of gender-based cyber violence since the age of 15, with 58% of girls reporting having faced online harassment. The digital sphere presents disproportionate risks for girls and women, who are especially impacted by gender-based cyber violence. Online sexual violence, including sexual harassment, abuse, and grooming, has reached unprecedented levels, disproportionately affecting girls and young women. Women and girls are more likely to be the targets of cyber violence on digital platforms, experiencing significant physical, sexual, and psychological distress as well as financial difficulties as a result. Child sexual abuse is largely an expression of gender-based violence targeting girls and young women. Therefore, it is crucial to integrate a gender perspective into all measures designed to prevent and combat online child sexual abuse and to intercept the online solicitation of children, while also addressing the root causes of gender-based violence. However, digital platforms’ standards and trust and safety policies make limited provision for keeping users safe from gender-related cyber violence online, despite the high incidence of this phenomenon. Existing standards often lack references to relevant human rights instruments or to recent legislative advances in combating gender-based and cyber violence. Digital platforms face significant challenges in addressing acts and behaviours of cyber violence, and greater collaboration across platforms is essential. Such collaboration could enable cross-platform reporting and the harmonisation of diverse definitions of cyber violence, ensuring a more coherent and effective response. However, the lack of data disaggregated by sex in incident reporting, response, and follow-up practices hinders a comprehensive understanding of the true scale of cyber violence against women and girls. Additionally, more transparency is needed regarding moderation and follow-up practices to ensure accountability. This lack of a gender-sensitive approach in reporting, recording, and responding to cyber violence renders much of the cyber violence against women and girls invisible. To combat these issues effectively, a gender-sensitive framework must be prioritised, ensuring that digital platforms incorporate robust measures to protect vulnerable users while addressing the systemic and disproportionate risks faced by girls and women.
2025/01/20
Committee: FEMM