Our universities have a cheating problem – it’s time to bin online exams

As a lawyer who represents students accused of cheating, ChatGPT worries me. If we want to maintain the credibility of our universities and the weight of a degree, we must get back to in-person assessments

Daniel Sokol
Saturday 31 December 2022 07:27 EST

The Covid pandemic provoked a major change in the way universities assessed their students: most exams moved online. This trend continues, despite the abatement of the pandemic. As a lawyer who represents students accused of cheating, I have noticed that since this change, the number of queries I deal with has increased. Clients tell me repeatedly that cheating in universities is rife. Some, at risk of failure or expulsion, complain of the unfairness of their predicament when so many of their peers cheat without getting caught.

Understandably, reputation-conscious universities are reluctant to investigate the prevalence of cheating in their own institutions. Eager to discover more, a few months ago I conducted a study of academic misconduct, surveying over 900 undergraduate students in the UK.

The study, which was based on anonymous questionnaires, revealed that one in six students admitted to cheating in online assessments this academic year. Over half of the students surveyed knew people who had cheated.

Of those who admitted to cheating, only 5 per cent said they had been caught. Significantly, 79 per cent of students believed it was easier to cheat in online exams than in exam halls. This is not surprising given the simple methods of cheating described in the study, which included calling or messaging a friend, asking parents for help, or using a search engine on a separate device.

Now, the advent of ChatGPT and other AI models has added a potent weapon to the cheater’s arsenal. These models, some of which cost nothing to use, can produce plagiarism-free answers of occasionally decent quality to a wide range of questions. To illustrate, take the question that formed the basis of my PhD: should doctors always tell patients the truth? I asked ChatGPT this question, limiting its answer to 100 words.

This was the answer: “It is generally considered ethical for doctors to be truthful with their patients and to provide them with accurate and complete information about their health and treatment options. This allows patients to make informed decisions about their care and to feel trust and respect for their healthcare providers. However, there may be rare situations in which it is necessary for a doctor to withhold certain information from a patient in order to protect their well-being or to respect their autonomy. In such cases, the decision to withhold information should be made with careful consideration and with the best interests of the patient in mind.”

This is a decent effort, produced in seconds. I then asked ChatGPT to answer this question in 1000 words and it did so in less than a minute. The result was a little dry and lacked nuance, but it was probably sufficient to obtain a low 2:1.

These models will doubtless become even more sophisticated in the near future. The brighter students will surely use AI to produce a first draft, and then modify it prior to submission to minimise the risk of detection.


Some companies have developed software that claims to identify AI-generated text, and even text that mixes AI and human writing. One such tool, CrossPlag, features a coloured spectrum with “human” at one end (text most probably written by a human) and “AI” at the other (text most probably generated by AI), with “mix” in the middle (text most likely a blend of the two). Input the text and CrossPlag will indicate where on the spectrum it falls. There is no information about how reliable it is. It is not difficult to imagine innocent students being “caught” by the software and punished, with potentially life-changing consequences.

As a lawyer who advises students accused of cheating and an employer who hires graduates, I firmly believe that universities should conduct all important assessments in person. That is the only way to maintain standards and reassure employers and others that a candidate’s degree was obtained honestly. It is now too easy to cheat in online exams and too many students are currently doing so, undetected. This trend will only worsen with the development of AI. Sometimes, old-fashioned methods are the best.

Daniel Sokol is a former university lecturer and a barrister specialising in academic and non-academic misconduct. He heads the team of barristers at Alpha Academic Appeals.
