    INFLUXIO – Raphaël Molina

    Bill to distinguish AI-generated content on social media.

    Analysis of the bill requiring the labelling of AI-generated content.

    The rise of generative artificial intelligence technologies is disrupting the legal framework applicable to digital content, particularly on social networks. The increasing use of tools capable of producing artificial images, videos, and texts raises major concerns regarding transparency, manipulation of public opinion, and the responsibility of the actors involved.

    In this context, several legislative texts have recently been adopted or proposed to regulate these practices. On the one hand, a bill introduced in the National Assembly on December 3, 2024, aims to impose an obligation to explicitly mention images generated or modified by artificial intelligence on social networks.

    On the other hand, the European Artificial Intelligence Regulation (AI Act), some provisions of which will come into force in August 2026, provides a harmonized framework within the European Union, notably through its Article 50 which establishes transparency obligations for artificial content.

    A national framework anticipating European law: the bill aimed at identifying AI content on social networks.

    Bill no. 675 of December 3, 2024, introduced in the National Assembly with the aim of identifying images generated by artificial intelligence published on social networks, is part of a legislative movement to combat disinformation and enhance transparency regarding digital content. Its main objective is to impose on social network users an explicit obligation to flag images generated or modified by artificial intelligence.

    The explanatory memorandum highlights the need for such regulation in the face of the proliferation of _deepfakes_ and manipulated content, which can be used for political, economic, or social disinformation. It also emphasizes the role of large technology companies in the dissemination of such content and the need to hold them accountable by imposing detection measures.

    The text of the bill provides for the introduction of an Article 6-6 into the Law of June 21, 2004, on confidence in the digital economy [1].

    This article would require any person publishing an image generated or modified by an artificial intelligence system to explicitly mention its origin, through a clear and visible warning. Furthermore, it would require online platform services to implement technical means for detecting such content and verifying the conformity of their labeling. A reporting mechanism is also planned to allow users to report suspicious content.

    The penalties for non-compliance with these obligations are significant, particularly for users who fail to mention the artificial origin of their images, who would incur a fine of 3,750 euros.

    This proposal is consistent with other measures already adopted in French law.

    The Law of June 9, 2023 [2] imposes on influencers an obligation to label as "_virtual images_" any images that have undergone AI processing modifying their face or silhouette.

    Furthermore, the Penal Code now prohibits non-consensual sexual _deepfakes_ [3].

    However, the December 2024 bill goes further by extending the scope of obligations to any social network user, regardless of their influencer status.

    The European framework for regulating AI content: Article 50 of the AI Regulation.

    The European Regulation on Artificial Intelligence (Regulation (EU) 2024/1689 of the European Parliament and of the Council of June 13, 2024) [4], some provisions of which will come into force in August 2026, introduces a harmonized framework for the transparency of AI-generated content.

    Its Article 50 imposes several obligations on providers and deployers of AI systems, with a view to ensuring better traceability and identification of artificial content.

    The first part of Article 50 requires providers of generative AI to ensure that the content produced by their systems is marked in a machine-readable format, in order to allow its identification as artificial content. This obligation aims notably to limit the spread of misleading content and to ensure that platforms and other actors in the digital ecosystem can implement appropriate detection tools.

    The second part of the article requires deployers of AI systems generating _deepfakes_ to explicitly indicate that such content has been artificially generated or manipulated. An exception is however provided for artistic, satirical or fictional works, for which the transparency obligation must not hinder the dissemination or reception of the work.

    Article 50 does not impose any explicit obligation on social platforms such as Instagram or Facebook in their capacity as hosts of user-published content.

    However, it allows the European Commission to adopt delegated acts to harmonize the obligations for detecting and reporting AI content. This suggests the possibility that platforms may eventually be required to implement mechanisms similar to those envisaged by the French bill.

    A still uncertain articulation between national and European law.

    The French bill and the European regulation pursue a common objective of transparency and combating disinformation but present some notable differences.

    From a temporal perspective, the bill anticipates the entry into force of European provisions by establishing obligations upon its promulgation, whereas Article 50 of the AI Regulation will only apply from August 2026.

    This anticipation is justified by France's desire to take a leading position in AI regulation and to offer a legal framework before European harmonization.

    From a substantive perspective, the French bill goes beyond the AI Regulation in several aspects.

    While Article 50 primarily imposes obligations on AI providers and deployers, the French text establishes a direct obligation for end-users and provides specific penalties for non-compliance.

    Furthermore, it requires platforms to implement tools for detecting and reporting AI content, a requirement that, as it stands, is not directly imposed by the European regulation but could be in the future through implementing acts.

    In the event of a divergence between the two frameworks, France might be compelled to adapt its national legislation to avoid any conflict with upcoming European obligations, as was recently the case with the modification of the influencer law of June 9, 2023, by ordinance [5].

    Conclusion.

    The regulation of artificial intelligence-generated content on social networks is now a legislative priority at both the national and European levels.

    The French bill of December 2024 anticipates certain obligations of the AI Regulation but may require adjustments once the European framework is fully defined.

    France seems willing to gain a regulatory head start, but the supremacy of European law could impose future modifications.

    Raphaël Molina

    About the author

    Raphaël Molina

    Partner

    Admitted to the Paris Bar, Maître Raphaël MOLINA is a co-founding partner of INFLUXIO and has specialized in intellectual property law and digital law for several years.
