News | Business
7 Feb 2025 2:50

    ‘AI agents’ promise to arrange your finances, do your taxes, book your holidays – and put us all at risk

    AI systems that can autonomously make decisions on our behalf will be a huge time saver – but we must deploy them with care.

    Uri Gal, Professor in Business Information Systems, University of Sydney
    The Conversation


    Over the past two years, generative artificial intelligence (AI) has captivated public attention. This year signals the beginning of a new phase: the rise of AI agents.

    AI agents are autonomous systems that can make decisions and take actions on our behalf without direct human input. The vision is that these agents will redefine work and daily life by handling complex tasks for us. They could negotiate contracts, manage our finances, or book our travel.

    Salesforce chief executive Marc Benioff has said he aims to deploy a billion AI agents within a year. Meanwhile, Meta chief Mark Zuckerberg predicts AI agents will soon outnumber the global human population.

    As companies race to deploy AI agents, questions about their societal impact, ethical boundaries and long-term consequences grow more urgent. We stand on the edge of a technological frontier with the power to redefine the fabric of our lives.

    How will these systems transform our work and our decision-making? And what safeguards do we need to ensure they serve humanity’s best interests?

    AI agents take control away

    Current generative AI systems react to user input, such as prompts. By contrast, AI agents act autonomously within broad parameters. They operate with unprecedented levels of freedom – they can negotiate, make judgement calls, and orchestrate complex interactions with other systems. This goes far beyond simple command–response exchanges like those you might have with ChatGPT.
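
    To make that distinction concrete, here is a minimal Python sketch contrasting a one-shot prompt-response call with an agent that keeps acting toward a goal on its own. It is not any vendor's actual API – every function and name is a hypothetical placeholder.

```python
# Hypothetical sketch only: every function here is a stand-in, not a real API.

def chat_model(prompt: str) -> str:
    """A reactive generative system: one prompt in, one reply out."""
    return f"Reply to: {prompt}"

def plan_next_step(goal, history, tools):
    """Placeholder planner: a real agent would use a model to pick the next tool."""
    if not history and "search_quotes" in tools:
        return "search_quotes", {"query": goal}
    return None, {}          # the agent decides it is finished

def agent_loop(goal: str, tools: dict, max_steps: int = 5) -> list:
    """An autonomous agent: it plans, calls tools and keeps acting toward
    a goal without a human prompting each individual step."""
    actions = []
    for _ in range(max_steps):
        tool_name, args = plan_next_step(goal, actions, tools)
        if tool_name is None:
            break
        result = tools[tool_name](**args)
        actions.append((tool_name, args, result))
    return actions

if __name__ == "__main__":
    # Reactive use: the human drives every exchange.
    print(chat_model("Summarise this insurance policy"))

    # Agentic use: the system drives itself within broad parameters.
    tools = {"search_quotes": lambda query: f"3 quotes found for '{query}'"}
    print(agent_loop("find life insurance under $50 a month", tools))
```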

    For instance, imagine using a personal “AI financial advisor” agent to buy life insurance. The agent would analyse your financial situation, health data and family needs while simultaneously negotiating with multiple insurance companies’ AI agents.

    It would also need to coordinate with several other AI systems: your medical records’ AI for health information, and your bank’s AI systems for making payments.

    The use of such an agent promises to reduce manual effort for you, but it also introduces significant risks.

    The AI might be outmanoeuvred by more advanced insurance company AI agents during negotiations, leading to higher premiums. Privacy concerns arise as your sensitive medical and financial information flows between multiple systems.

    The complexity of these interactions can also result in opaque decisions. It might be difficult to trace how various AI agents influence the final insurance policy recommendation. And if errors occur, it could be hard to know which part of the system to hold accountable.

    Perhaps most crucially, this system risks diminishing human agency. When AI interactions grow too complex to comprehend or control, individuals may struggle to intervene in or even fully understand their insurance arrangements.

    A tangle of ethical and practical challenges

    The insurance agent scenario above is not yet fully realised. But sophisticated AI agents are rapidly coming onto the market.

    Salesforce and Microsoft have already incorporated AI agents into some of their corporate products, such as Copilot Actions. Google has been gearing up for the release of personal AI agents since announcing its latest AI model, Gemini 2.0. OpenAI is also expected to release a personal AI agent in 2025.

    The prospect of billions of AI agents operating simultaneously raises profound ethical and practical challenges.

    These agents will be created by competing companies with different technical architectures, ethical frameworks and business incentives. Some will prioritise user privacy, others speed and efficiency.

    They will interact across national borders where regulations governing AI autonomy, data privacy and consumer protection vary dramatically.

    This could create a fragmented landscape where AI agents operate under conflicting rules and standards, potentially leading to systemic risks.

    What happens when AI agents optimised for different objectives – say, profit maximisation versus environmental sustainability – clash in automated negotiations? Or when agents trained on Western ethical frameworks make decisions that affect users in cultural contexts for which they were not designed?

    The emergence of this complex, interconnected ecosystem of AI agents demands new approaches to governance, accountability, and the preservation of human agency in an increasingly automated world.

    How do we shape a future with AI agents in it?

    AI agents promise to be helpful, to save us time. To navigate the challenges outlined above, we will need to coordinate action across multiple fronts.

    International bodies and national governments must develop harmonised regulatory frameworks that address the cross-border nature of AI agent interactions.

    These frameworks should establish clear standards for transparency and accountability, particularly in scenarios where multiple agents interact in ways that affect human interests.

    Technology companies developing AI agents need to prioritise safety and ethical considerations from the earliest stages of development. This means building in robust safeguards that prevent abuse – such as manipulating users or making discriminatory decisions.

    They must ensure agents remain aligned with human values. All decisions and actions made by an AI agent should be logged in an “audit trail” that’s easy to access and follow.
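
    As an illustration of what such an audit trail could look like, here is a minimal Python sketch in which every agent decision or action is appended to a log a human can later read and follow. The field names and the hash-chaining scheme are assumptions for illustration, not an existing standard; chaining each record to the previous one simply makes silent edits or deletions in the trail detectable.

```python
# Illustrative sketch of an agent "audit trail"; field names are assumptions.

import json, hashlib, time

def log_agent_action(log_path: str, agent_id: str, action: str,
                     inputs: dict, outcome: str) -> None:
    """Append one audit record; each record hashes the previous line so
    gaps or edits in the trail are detectable."""
    try:
        with open(log_path) as f:
            prev_hash = hashlib.sha256(f.readlines()[-1].encode()).hexdigest()
    except (FileNotFoundError, IndexError):
        prev_hash = "genesis"

    record = {
        "timestamp": time.time(),
        "agent_id": agent_id,
        "action": action,
        "inputs": inputs,
        "outcome": outcome,
        "prev_hash": prev_hash,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: a hypothetical insurance-buying agent records a negotiation step.
log_agent_action(
    "agent_audit.log",
    agent_id="finance-advisor-01",
    action="request_quote",
    inputs={"insurer": "ExampleCo", "cover": "life", "term_years": 20},
    outcome="quote received: $42.10/month",
)
```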

    Importantly, companies must develop standardised protocols for agent-to-agent communication. Conflict resolution between AI agents should happen in a way that protects the interests of users.
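
    A minimal sketch of what such a shared protocol might standardise is shown below. The schema and field names are purely illustrative assumptions, not an existing specification; the point is that agents from different vendors would exchange messages in a common, inspectable format that records whose interests each agent represents and which decisions need a human sign-off.

```python
# Illustrative agent-to-agent message schema; not an existing specification.

from dataclasses import dataclass, asdict
import json

@dataclass
class AgentMessage:
    sender: str                     # which agent sent this
    recipient: str                  # which agent it is addressed to
    intent: str                     # e.g. "request_quote", "counter_offer"
    payload: dict                   # the substantive content of the offer or request
    on_behalf_of: str               # the human whose interests the sender represents
    requires_human_approval: bool   # flag crucial decisions for a person

ALLOWED_INTENTS = {"request_quote", "counter_offer", "accept", "decline"}

def validate(message: AgentMessage) -> None:
    """Reject messages that do not follow the shared protocol."""
    if message.intent not in ALLOWED_INTENTS:
        raise ValueError(f"Unknown intent: {message.intent}")

# Example exchange between a consumer-side and an insurer-side agent.
offer = AgentMessage(
    sender="finance-advisor-01",
    recipient="exampleco-sales-agent",
    intent="request_quote",
    payload={"cover": "life", "sum_insured": 500_000},
    on_behalf_of="user-123",
    requires_human_approval=True,
)
validate(offer)
print(json.dumps(asdict(offer), indent=2))
```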

    Any organisation that deploys AI agents should also have comprehensive oversight of them. Humans should still be involved in any crucial decisions, with a clear process in place to do so. The organisation should also systematically assess the outcomes to ensure agents truly serve their intended purpose.

    As consumers, we all have a crucial role to play, too. Before entrusting tasks to AI agents, you should demand clear explanations of how these systems operate, what data they share, and how decisions are made.

    This includes understanding the limits of agent autonomy. You should have the ability to override agents’ decisions when necessary.

    We shouldn’t surrender human agency as we transition to a world of AI agents. But it’s a powerful technology, and now is the time to actively shape what that world will look like.

    Uri Gal does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    This article is republished from The Conversation under a Creative Commons license.
    © 2025 The Conversation, NZCity
