AI in the classroom is hard to detect – time to bring back oral tests
A chatbot can produce text, but it can’t sustain a probing conversation about your reasoning. Our work suggests oral assessment has a role in the age of AI.
30 September 2025
News that several New Zealand universities have given up using detection software to expose student use of artificial intelligence (AI) underlines the challenge higher education is facing.
With AI tools such as ChatGPT now able to produce essays, reports and case studies in seconds, the old assessment model is breaking down. For decades, that model was valued for testing not just knowledge, but also analysis, argumentation and communication.
Now, however, its reliability is under pressure. If a machine can generate a plausible essay on demand, how can we be sure we are assessing a student’s own understanding and reasoning?
We have been exploring another way forward. Instead of doubling down on plagiarism software, we have gone back to something surprisingly simple: talking to students.
For the past two years, we have been running “interactive oral assessments” (IOAs). They are proving to be one of the most effective and authentic ways to see what students really know in the age of AI.
Think of it as a structured conversation. Students meet with a lecturer or tutor, individually or in a small group, and answer questions about work they have already submitted.
Examiners do not just check for memorised facts. Using the Socratic method of questioning, they probe the reasoning behind students’ answers, drawing out genuine understanding rather than rehearsed responses.
It is not a performance or a speech. Because the questions are tailored to each student and unfold in real time, IOAs are difficult to outsource: a chatbot may produce text, but it cannot sustain a probing conversation about your own work.
Face-to-face assessment
We first trialled IOAs in a postgraduate marketing course with 42 students. Each student sat a seven-minute conversation based on their coursework. The grading guide covered both content (do they understand the concepts?) and communication (can they explain clearly and logically?).
The results were encouraging. Where grades had previously skewed toward the upper range under written assessment, likely reflecting increased AI assistance on take-home assignments, IOAs produced a more balanced spread of marks across grade bands.
Students reported the process felt fairer, and lecturers heard richer demonstrations of understanding and critical thinking. One lecturer put it neatly:
The dialogue revealed what students actually understood, rather than what they could memorise or outsource.
To ensure nerves did not get in the way, we built practice runs into tutorials during the semester so expectations were clear long before the final assessment.
Running one-on-one conversations for hundreds of students isn’t realistic, so we adapted the format. In larger undergraduate courses with over 200 students, we run IOAs in group settings: students attend together, but each answers individually.
We also use multiple assessors running simultaneous IOA sessions. This lets us assess large cohorts in the same timeframe as a traditional exam without overloading a single lecturer or tutor.
This model has two big advantages: logistics are manageable and anxiety is reduced. Seeing peers go through the same process normalises the experience. The group format still preserves the essence of the IOA.
Back to the future?
Two years in, clear patterns have emerged. IOAs reveal qualities written exams and essays often mask. Students must explain, apply and defend their ideas in real time, so we can see whether they truly grasp the material, not just whether they can structure an essay or reproduce text.
Importantly, IOAs also develop work-ready skills: clear communication, critical thinking and defending a position under questioning. These abilities are needed in interviews, client meetings and professional discussions. As one student said:
It felt like a job interview, not just an exam.
IOAs are not effort-free. Examiners benefit from training in how to ask probing yet fair questions, and in applying grading guides consistently, especially when student and session numbers increase.
Scheduling and recording at scale require careful planning, from coordinating rooms to managing examiner availability and recording options. With the right support, however, these challenges are manageable.
IOAs are not a silver bullet, but they are a promising response to the realities of AI. They make it harder to outsource work, help staff see genuine understanding, and give students practice in the kinds of discussions that dominate modern workplaces.
In many ways, IOAs take us back to the future: they revive the oldest form of examination, the oral, reimagined for today’s classrooms. Crucially, they do more than safeguard academic integrity: they build the capabilities employers expect.
If universities want to prepare students for the real world while protecting the credibility of their courses, it may be time to do what seems counterintuitive: stop writing and start talking.
The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointments.
© 2025 The Conversation, NZCity