    INFLUXIO – Raphaël Molina

    New York bill: a new legal liability for chatbots.

    Analysis of an American bill on AI chatbot liability.

    Conversational AI is now ubiquitous. Tools like ChatGPT (OpenAI), Gemini (Google), Claude (Anthropic), and even banking and legal chatbots are used by millions of people every day.

    However, what happens when a chatbot provides incorrect, misleading, or harmful information, leading to financial losses, risky medical decisions, or psychological harm?

    Faced with these risks, the State of New York has introduced bill A00222, aiming to impose legal liability on companies operating chatbots.

    1. Introduction to New York Bill A00222

    New York Bill A00222 [1] would apply to any business, organization, or governmental entity that:

    **1.** Owns, operates, or deploys a chatbot to interact with users in New York.

    **2.** Uses a chatbot as an alternative to a human representative for customer service, information service, or automated decision-making.

    The text introduces several precise definitions:

    - Chatbot: an artificial intelligence system designed to interact with users by text or voice.
    - Companion chatbot: a conversational AI that simulates an interpersonal relationship, including friendly or romantic relationships.
    - Affected user: any person using a chatbot in the State of New York.
    - Owner: any entity operating a chatbot, excluding third-party developers who license the technology.

    The owner of a chatbot cannot disclaim liability when its AI provides materially misleading or incorrect information that causes demonstrable financial or other harm to the user.

    Any affected company must:

    - Correct errors and remedy the harm suffered within 30 days of being informed.
    - Ensure the information provided by the chatbot complies with the company's official policies and terms of service.
    - Not circumvent this responsibility merely by informing users that they are interacting with an AI.

    Furthermore, chatbots designed to simulate a relational interaction with the user, "_companion_" chatbots, will be subject to specific obligations:

    - Self-harm prevention: the company must actively prevent the AI from encouraging, facilitating, or normalizing self-harm behaviors.
    - Temporary interaction ban in case of suicidal risk: if a chatbot detects that the user is expressing suicidal thoughts, it must suspend the interaction for 24 hours and provide the contact details of a suicide prevention organization.
    - Parental control obligation: any companion chatbot that detects a minor user must obtain verifiable parental consent before continuing the interaction.
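    From an engineering standpoint, the suspension obligation above amounts to an interaction policy that a deployer would have to enforce in code. As a minimal sketch, and nothing more: the marker list, function names, and message wording here are hypothetical illustrations, not language from the bill (988 is the real US Suicide & Crisis Lifeline number):

    ```python
    from datetime import datetime, timedelta

    # Illustrative placeholders; a real system would use a trained classifier,
    # persistent storage, and wording reviewed by counsel.
    SELF_HARM_MARKERS = {"suicide", "kill myself", "self-harm"}
    SUSPENSION = timedelta(hours=24)  # duration named in the bill
    suspensions: dict[str, datetime] = {}  # user_id -> suspension start time

    def handle_message(user_id: str, text: str, now: datetime) -> str:
        """Apply the 24-hour suspension rule before any normal chatbot reply."""
        started = suspensions.get(user_id)
        if started and now - started < SUSPENSION:
            # User is within an active suspension window: no normal reply.
            return "Interaction suspended. If you need help, call 988 (Suicide & Crisis Lifeline)."
        if any(marker in text.lower() for marker in SELF_HARM_MARKERS):
            suspensions[user_id] = now  # start the 24-hour suspension
            return "Interaction suspended for 24 hours. Please call 988 (Suicide & Crisis Lifeline)."
        return "(normal chatbot reply)"
    ```

    Even this toy version shows why the obligation is operationally demanding: the deployer must detect risk reliably, persist the suspension across sessions, and surface crisis contacts, all before any ordinary model output is returned.
    
    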

    2. Legal and Economic Implications for Businesses

    Businesses should strengthen their chatbot control mechanisms to avoid liability, especially in sensitive sectors such as:

    - Banking and finance (financial advisory chatbots, loan simulators).
    - Health and medicine (diagnostic chatbots, therapeutic assistants).
    - Law and taxation (legal aid or tax declaration chatbots).

    They should implement monitoring and rapid correction protocols to avoid litigation and sanctions.

    Companies would thus be exposed to:

    - Lawsuits in case of financial or personal harm caused by a chatbot.
    - Obligations to filter and audit the content generated by their AIs.
    - Stricter regulation of interactions with minors.

    3. Conclusions

    New York Bill A00222 marks a turning point in the regulation of conversational technologies and raises fundamental questions about the future of artificial intelligence.

    If adopted, it could redefine the responsibilities of companies operating chatbots, obliging them to guarantee the accuracy of provided information and to prevent self-harm risks.

    But this legislative initiative goes far beyond the strict framework of New York. It opens a broader debate on how societies should regulate artificial intelligence:

    - How far should we go in holding AIs accountable? Should they be considered mere tools, or should the companies deploying them be held responsible for the harm they cause?

    - How should we balance innovation and consumer protection? Conversational AI offers immense economic and technological opportunities, but potential abuses (misinformation, manipulation, psychological risks) call for increased vigilance.

    - What regulatory harmonization is achievable at the international level? The New York legislation is part of a global trend toward increased regulation: the AI Act in Europe, reforms in California and China, and OECD recommendations all show that governments are trying to establish common frameworks to prevent abuses.

    The evolution of this bill will be a crucial test to measure the ability of legislators to anticipate and regulate technological innovations, without hindering their development.

    The outcome of this debate will influence the entire AI sector, far beyond the borders of New York State.

    Raphaël Molina

    About the author

    Raphaël Molina

    Partner

    Admitted to the Paris Bar, Maître Raphaël MOLINA is a co-founding partner of INFLUXIO and has specialized in intellectual property law and digital law for several years.
