News | Politics
19 Sep 2024 14:21
NZCity News

    Global powers are grappling with ‘responsible’ use of military AI. What would that look like?

    At a recent global summit, 2,000 government officials and experts met to discuss the responsible development and use of AI by militaries.

    Zena Assaad, Senior Lecturer, School of Engineering, Australian National University, Lauren Sanders, Adjunct Associate Professor, The University of Queensland, Rain Liivoja, Professor, School of Law, The University of Queensland
    The Conversation


    Last week, some 2,000 government officials and experts from around the world met for the REAIM (Responsible Artificial Intelligence in the Military Domain) summit in Seoul, South Korea. This was the second event of its kind, with the first one held in the Netherlands in February 2023.

    During this year’s summit, 61 countries endorsed a “Blueprint for Action” for governing the development and use of artificial intelligence (AI) by the military.

    However, 30 countries that sent government representatives to the summit, including China, did not endorse the blueprint.

    The blueprint is an important, if modest, development. But there is still a gap in the understanding of what constitutes responsible use of AI and how this translates into concrete actions in the military domain.

    How is AI currently used in military contexts?

    Military use of AI has increased over the last few years, notably in the Russia-Ukraine and Israel-Palestine conflicts.

    Israel has used AI-enabled systems known as “Gospel” and “Lavender” to help it make key military decisions, such as which locations and people to target with bombs. The systems use large amounts of data, including people’s addresses, phone numbers and membership of chat groups.

    The “Lavender” system in particular made headlines earlier this year when critics questioned its efficacy and legality. There was particular concern around its training data and how it classified targets.

    Both Russia and Ukraine also use AI to support military decision making. Satellite imagery, social media content and drone surveillance are just some of the many information sources which generate copious volumes of data.

    AI can analyse this data much more quickly than humans could. The results are incorporated into existing “kill chains” – the process of locating, tracking, targeting and engaging an adversary.

    This means military officials can make faster decisions during active armed conflict, providing tactical advantages. However, misuse of AI systems can also cause harm.

    Civil society and non-governmental organisations such as the International Committee of the Red Cross have warned about the risks. For example, algorithmic bias can exacerbate the risk to civilians during active warfare.

    What is responsible AI in the military domain?

    There is no consensus on what constitutes “responsible” AI.

    Some researchers argue the technology itself can be responsible. In this case, “responsible” would mean having built-in fairness and freedom from bias.

    Other studies refer to the practices around AI – such as design, development and use – being responsible. This would mean practices that are lawful, traceable, reliable and focused on mitigating bias.

    The blueprint endorsed at the recent summit in Seoul aligns with the latter interpretation. It advocates that anyone using AI in the military must comply with relevant national and international laws.

    It also highlights the importance of human roles in the development, deployment and use of AI in the military domain. This includes ensuring human judgement and control over the use of force are responsibly and safely managed.

    This is an important distinction, because many narratives around AI falsely imply an absence of human involvement and responsibility.

    What can governments do to use military AI responsibly?

    Discussions at the summit focused heavily on concrete steps governments can take to support responsible use of military AI.

    As military AI use is currently increasing, we need interim steps to deal with it. One suggestion was to strike AI regulation agreements within different regions, rather than taking longer to reach a global, universal consensus.

    To improve global cooperation on military AI, we could also heed lessons from previous global challenges – such as nuclear non-proliferation, saving the ozone layer and keeping outer space and Antarctica demilitarised.

    In the 18 months since the inaugural summit, governments and other responsible parties have started putting in place risk-mitigation processes and toolkits for military AI.

    The blueprint reflects the progress since then, and the ideas discussed at the summit. It proposes a number of tangible steps, which include:

    • universal legal reviews for AI-enabled military capabilities
    • promoting dialogue on developing measures to ensure responsible AI in the military domain at the national, regional and international levels
    • maintaining appropriate human involvement in the development, deployment and use of AI in the military domain.

    However, progress is slow because we still don’t have a universal understanding of what responsible military AI actually looks like.

    The need to cut through these issues is now putting pressure on the next summit (not yet announced). The Netherlands has also set up an expert body to further a globally consistent approach to military AI.

    Humanity can benefit from AI tools. But we urgently need to ensure the risks they pose don’t proliferate, especially in the military domain.

    The Conversation

    Lauren Sanders has previously received funding from the Australian Government's Next Generation Technologies Fund, through Trusted Autonomous Systems, a Defence Cooperative Research Centre. She is affiliated with the Asia-Pacific Institute for Law and Security, and a legal practice that provides international law advice. The views in this article are her own and do not reflect those of any institutions she is affiliated with.

    Rain Liivoja has previously received funding from the Australian Government's Next Generation Technologies Fund, through Trusted Autonomous Systems, a Defence Cooperative Research Centre. He is affiliated with the Asia-Pacific Institute for Law and Security.

    Zena Assaad does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    This article is republished from The Conversation under a Creative Commons license.
    © 2024 The Conversation, NZCity
