
    Deaths linked to chatbots show we must urgently revisit what counts as ‘high-risk’ AI

    Chatbot personas can offer emotional support and companionship, but they also pose high risks to vulnerable users.

    Henry Fraser, Research Fellow in Law, Accountability and Data Science, Queensland University of Technology
    The Conversation


Last week, the tragic news broke that US teenager Sewell Setzer III took his own life after forming a deep emotional attachment to an artificial intelligence (AI) chatbot on the Character.AI website.

As his relationship with the companion AI became increasingly intense, the 14-year-old began withdrawing from family and friends and getting into trouble at school.

In a lawsuit filed against Character.AI by the boy’s mother, chat transcripts show intimate and often highly sexual conversations between Sewell and the chatbot Dany, modelled on the Game of Thrones character Daenerys Targaryen. They discussed crime and suicide, and the chatbot used phrases such as “that’s not a reason not to go through with it”.

A screenshot of a chat exchange between Sewell and the chatbot Dany. (Source: ‘Megan Garcia vs. Character AI’ lawsuit)

    This is not the first known instance of a vulnerable person dying by suicide after interacting with a chatbot persona. A Belgian man took his life last year in a similar episode involving Character.AI’s main competitor, Chai AI. When this happened, the company told the media they were “working our hardest to minimise harm”.

In a statement to CNN, Character.AI said they “take the safety of our users very seriously” and have introduced “numerous new safety measures over the past six months”.

    In a separate statement on the company’s website, they outline additional safety measures for users under the age of 18. (In their current terms of service, the age restriction is 16 for European Union citizens and 13 elsewhere in the world.)

    However, these tragedies starkly illustrate the dangers of rapidly developing and widely available AI systems anyone can converse and interact with. We urgently need regulation to protect people from potentially dangerous, irresponsibly designed AI systems.

    How can we regulate AI?

    The Australian government is in the process of developing mandatory guardrails for high-risk AI systems. A trendy term in the world of AI governance, “guardrails” refer to processes in the design, development and deployment of AI systems. These include measures such as data governance, risk management, testing, documentation and human oversight.

    One of the decisions the Australian government must make is how to define which systems are “high-risk”, and therefore captured by the guardrails.

    The government is also considering whether guardrails should apply to all “general purpose models”. General purpose models are the engine under the hood of AI chatbots like Dany: AI algorithms that can generate text, images, videos and music from user prompts, and can be adapted for use in a variety of contexts.

    In the European Union’s groundbreaking AI Act, high-risk systems are defined using a list, which regulators are empowered to regularly update.

    An alternative is a principles-based approach, where a high-risk designation happens on a case-by-case basis. It would depend on multiple factors such as the risks of adverse impacts on rights, risks to physical or mental health, risks of legal impacts, and the severity and extent of those risks.

    Chatbots should be ‘high-risk’ AI

    In Europe, companion AI systems like Character.AI and Chai are not designated as high-risk. Essentially, their providers only need to let users know they are interacting with an AI system.

    It has become clear, though, that companion chatbots are not low risk. Many users of these applications are children and teens. Some of the systems have even been marketed to people who are lonely or have a mental illness.

    Chatbots are capable of generating unpredictable, inappropriate and manipulative content. They mimic toxic relationships all too easily. Transparency – labelling the output as AI-generated – is not enough to manage these risks.

    Even when we are aware that we are talking to chatbots, human beings are psychologically primed to attribute human traits to something we converse with.

    The suicide deaths reported in the media could be just the tip of the iceberg. We have no way of knowing how many vulnerable people are in addictive, toxic or even dangerous relationships with chatbots.

    Guardrails and an ‘off switch’

    When Australia finally introduces mandatory guardrails for high-risk AI systems, which may happen as early as next year, the guardrails should apply to both companion chatbots and the general purpose models the chatbots are built upon.

    Guardrails – risk management, testing, monitoring – will be most effective if they get to the human heart of AI hazards. Risks from chatbots are not just technical risks with technical solutions.

    Apart from the words a chatbot might use, the context of the product matters, too. In the case of Character.AI, the marketing promises to “empower” people, the interface mimics an ordinary text message exchange with a person, and the platform allows users to select from a range of pre-made characters, which include some problematic personas.

The front page of the Character.AI website for a user who has entered their age as 17. (Source: C.AI)

    Truly effective AI guardrails should mandate more than just responsible processes, like risk management and testing. They also must demand thoughtful, humane design of interfaces, interactions and relationships between AI systems and their human users.

    Even then, guardrails may not be enough. Just like companion chatbots, systems that at first appear to be low risk may cause unanticipated harms.

Regulators should have the power to remove AI systems from the market if they cause harm or pose unacceptable risks. In other words, we don’t just need guardrails for high-risk AI. We also need an off switch.

    If this article has raised issues for you, or if you’re concerned about someone you know, call Lifeline on 13 11 14.


    Henry Fraser receives funding from the Australian Research Council.

    This article is republished from The Conversation under a Creative Commons license.
© 2024 The Conversation, NZCity
