News | National
27 Feb 2026 16:44
NZCity News

    AI can slowly shift an organisation’s core principles. How to spot ‘value drift’ early

    When new technology can produce the texts companies or departments use to explain themselves, core values can shift incrementally without anyone really noticing.

    Guy Bate, Professional Teaching Fellow, Management and International Business, University of Auckland, Waipapa Taumata Rau, Rhiannon Lloyd, Senior Lecturer, Management and International Business, University of Auckland, Waipapa Taumata Rau
    The Conversation


    The steady embrace of artificial intelligence (AI) in the public and private sectors in Australia and New Zealand has come with broad guidance about using the new technology safely and transparently, with good governance and human oversight.

    So far, so sensible. Aligning AI use with existing organisational values makes perfect sense.

    But here’s the catch. Most references to “responsible AI” assume values are like a set of house rules you can write down once, translate into checklists and enforce forever.

    Generative AI (GenAI), however, does not simply follow the rules of the house. It changes the house. GenAI's distinctive power is not that it automates calculations, but that it automates plausible language.

    It writes the summary, the rationale, the email, the policy draft and the performance feedback. In other words, it produces the texts organisations use to explain themselves.

    When a system can generate confident, professional-sounding reasons instantly, it can quietly change what counts as a “good reason” to do something.

    This is where “value drift” begins – a gradual shift in what feels normal, reasonable or acceptable as people adapt their work to what the technology makes easy and convincing.

    Invisible ethical shifts

    In the workplace, for example, a manager might use GenAI to draft performance feedback to avoid a hard conversation. The tone is smoother, but the judgement is harder to locate, as is the accountability.

    Or a policy team uses GenAI to produce a balanced justification for a contested decision. The prose is polished, but the real trade-offs are less visible.

    For small businesses, the appeal of GenAI lies in speed and efficiency. A sole trader can use it to respond to customers, write marketing copy or draft policies in seconds.

    But over time, responsiveness may come to mean instant, AI-generated replies rather than careful, human judgement. The meaning of good service quietly shifts.

    None of this requires an ethical breach. The drift happens precisely because the new practice feels helpful.

    The biggest ethical effects of GenAI rarely show up as a single shocking scandal. They are slower and quieter. A thousand small decisions get made a little differently.

    Explanations get a little smoother. Accountability becomes a little harder to point to. And before long, we are living with a new normal we did not consciously choose.

    If responsible AI use is about more than good intentions and tidy documentation, we need to stop treating values as fixed targets. We need to pay attention to how values shift once AI becomes part of everyday work.

    Hidden assumptions

    Much of today’s responsible-AI guidance follows a straightforward model: identify the values you care about, embed them in GenAI systems and processes, then check compliance.

    This is necessary but also incomplete. Values are not “fixed” once written down in strategy documents or policy templates. They are lived out in practice.

    They show up in how people talk, what they notice, what they prioritise and how they justify trade-offs. When technologies change those routines, values get reshaped.

    An emerging line of research on technology and ethics shows that values are not simply applied to technologies from the outside. They are shaped from within everyday use, as people adapt their practices to what technologies make easy, visible or persuasive.

    In other words, values and technologies shape each other over time, each influencing how the other develops and is understood.

    We have seen this before. Social media did not just test our existing ideas about privacy. It gradually changed them. What once felt intrusive or inappropriate now feels normal to many younger users.

    The value of privacy did not disappear, but its meaning shifted as everyday practices changed. Generative AI is likely to have similar effects on values such as fairness, accountability and care.

    In our research on leadership development, we are exploring how we teach emerging leaders to recognise and reflect on these shifts.

    The challenge is not only whether leaders apply the right values to AI, but whether they are equipped to notice how working with these systems may gradually reshape what those values mean in practice.

    Constant vigilance

    The emphasis in New Zealand and Australia on responsible AI guidance is sensible and pragmatic. It covers governance, privacy, transparency, skills and accountability.

    But it still tends to assume that once the right principles and processes are in place, responsibility has been secured.

    If values move as AI reshapes practice, though, responsible AI needs a practical upgrade. Principles still matter, but they should be paired with routines that keep ethical judgement visible over time.

    Organisations should periodically review AI-mediated decisions in high-stakes areas such as hiring, performance management or customer communication.

    They should pay attention not just to technical risks, but to how the meaning of fairness, accountability or care may be changing in practice. And they should make it clear who owns the reasoning behind AI-shaped decisions.

    Responsible AI is not about freezing values in place. It is about staying responsible as values shift.


    Guy Bate is Chair of the AI in Education Technology Stewardship Group of EdTechNZ.

    Rhiannon Lloyd does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    This article is republished from The Conversation under a Creative Commons license.
    © 2026 The Conversation, NZCity

    © 2026 New Zealand City Ltd