News | National
13 Sep 2024 3:35
NZCity News


    A new ‘AI scientist’ can write science papers without any human input. Here’s why that’s a problem

    AI systems mass-producing cheap research would be bad news for an already struggling scientific ecosystem.

    Karin Verspoor, Dean, School of Computing Technologies, RMIT University
    The Conversation


    Scientific discovery is one of the most sophisticated human activities. First, scientists must understand the existing knowledge and identify a significant gap. Next, they must formulate a research question and design and conduct an experiment in pursuit of an answer. Then, they must analyse and interpret the results of the experiment, which may raise yet another research question.

    Can a process this complex be automated? Last week, Sakana AI Labs announced the creation of an “AI scientist” – an artificial intelligence system they claim can make scientific discoveries in the area of machine learning in a fully automated way.

    Using generative large language models (LLMs) like those behind ChatGPT and other AI chatbots, the system can brainstorm, select a promising idea, code new algorithms, plot results, and write a paper summarising the experiment and its findings, complete with references. Sakana claims the AI tool can undertake the complete lifecycle of a scientific experiment at a cost of just US$15 per paper – less than the cost of a scientist’s lunch.

    These are some big claims. Do they stack up? And even if they do, would an army of AI scientists churning out research papers with inhuman speed really be good news for science?

    How a computer can ‘do science’

    A lot of science is done in the open, and almost all scientific knowledge has been written down somewhere (or we wouldn’t have a way to “know” it). Millions of scientific papers are freely available online in repositories such as arXiv and PubMed.

    LLMs trained on this data capture the language of science and its patterns. It is therefore perhaps unsurprising that a generative LLM can produce something that looks like a good scientific paper – it has ingested many examples it can copy.

    What is less clear is whether an AI system can produce an interesting scientific paper. Crucially, good science requires novelty.

    But is it interesting?

    Scientists don’t want to be told about things that are already known. Rather, they want to learn new things, especially new things that are significantly different from what is already known. This requires judgement about the scope and value of a contribution.

    The Sakana system tries to address interestingness in two ways. First, it “scores” new paper ideas for similarity to existing research (indexed in the Semantic Scholar repository). Anything too similar is discarded.

    Second, Sakana’s system introduces a “peer review” step – using another LLM to judge the quality and novelty of the generated paper. Here again, there are plenty of examples of peer review online on sites such as openreview.net that can guide how to critique a paper. LLMs have ingested these, too.
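    Sakana has not published the details of its similarity check, but the general technique – embed each idea and discard anything too close to an existing paper – is standard. The sketch below is purely illustrative: the function names, embedding vectors and threshold are hypothetical, not taken from Sakana's system.

    ```python
    import math

    def cosine_similarity(a, b):
        """Cosine similarity between two equal-length embedding vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(y * y for y in b))
        return dot / (norm_a * norm_b)

    def filter_novel_ideas(idea_embeddings, corpus_embeddings, threshold=0.85):
        """Keep only ideas whose closest match in the indexed corpus
        falls below the similarity threshold ('not too similar')."""
        novel = []
        for name, vec in idea_embeddings.items():
            max_sim = max(cosine_similarity(vec, c) for c in corpus_embeddings)
            if max_sim < threshold:
                novel.append(name)
        return novel

    # Toy 2-dimensional embeddings standing in for real paper vectors.
    corpus = [[1.0, 0.0], [0.0, 1.0]]
    ideas = {"near_duplicate": [1.0, 0.1], "fresh_idea": [0.7, -0.7]}
    print(filter_novel_ideas(ideas, corpus))  # → ['fresh_idea']
    ```

    Note what such a filter can and cannot do: it rejects ideas that merely restate indexed work, but a low similarity score does not make an idea significant – which is why judging "interestingness" remains the hard part.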

    AI may be a poor judge of AI output

    Feedback is mixed on Sakana AI’s output. Some have described it as producing “endless scientific slop”.

    Even the system’s own review of its outputs judges the papers weak at best. This is likely to improve as the technology evolves, but the question of whether automated scientific papers are valuable remains.

    The ability of LLMs to judge the quality of research is also an open question. My own work (soon to be published in Research Synthesis Methods) shows LLMs are not great at judging the risk of bias in medical research studies, though this too may improve over time.

    Sakana’s system automates discoveries in computational research, which is much easier than in other types of science that require physical experiments. Sakana’s experiments are done with code, which is also structured text that LLMs can be trained to generate.

    AI tools to support scientists, not replace them

    AI researchers have been developing systems to support science for decades. Given the huge volumes of published research, even finding publications relevant to a specific scientific question can be challenging.

    Specialised search tools make use of AI to help scientists find and synthesise existing work. These include the above-mentioned Semantic Scholar, but also newer systems such as Elicit, Research Rabbit, scite and Consensus.

    Text mining tools such as PubTator dig deeper into papers to identify key points of focus, such as specific genetic mutations and diseases, and their established relationships. This is especially useful for curating and organising scientific information.

    Machine learning has also been used to support the synthesis and analysis of medical evidence, in tools such as Robot Reviewer. Summaries from Scholarcy that compare and contrast the claims made in different papers help researchers perform literature reviews.

    All these tools aim to help scientists do their jobs more effectively, not to replace them.

    AI research may exacerbate existing problems

    While Sakana AI states it doesn’t see the role of human scientists diminishing, the company’s vision of “a fully AI-driven scientific ecosystem” would have major implications for science.

    One concern is that, if AI-generated papers flood the scientific literature, future AI systems may be trained on AI output and undergo model collapse. This means they may become increasingly ineffectual at innovating.

    However, the implications for science go well beyond impacts on AI science systems themselves.

    There are already bad actors in science, including “paper mills” churning out fake papers. This problem will only get worse when a scientific paper can be produced with nothing more than US$15 and a vague initial prompt.

    The need to check for errors in a mountain of automatically generated research could rapidly overwhelm the capacity of actual scientists. The peer review system is arguably already broken, and dumping more research of questionable quality into the system won’t fix it.

    Science is fundamentally based on trust. Scientists emphasise the integrity of the scientific process so we can be confident our understanding of the world (and now, the world’s machines) is valid and improving.

    A scientific ecosystem where AI systems are key players raises fundamental questions about the meaning and value of this process, and what level of trust we should have in AI scientists. Is this the kind of scientific ecosystem we want?


    Karin Verspoor receives funding from the Australian Research Council, the Medical Research Future Fund, the National Health and Medical Research Council, and Elsevier BV. She is affiliated with BioGrid Australia and is a co-founder of the Australian Alliance for Artificial Intelligence in Healthcare.

    This article is republished from The Conversation under a Creative Commons license.
    © 2024 TheConversation, NZCity
