Is AI turning students into dullards?
"I think one of my main concerns is there's increasing research which shows that it enables a cognitive offloading, instead of reading a complex text and analysing it, it's much easier to put a complex text into generative AI and get that generative AI to do that complex work for them," says Prof Sian Bayne of the University of Edinburgh. For all of its great prospects, AI has continued to unsettle the academic world as it is perceived as turning students into academic dullards and learning into a superficial engagement.
The Scottish report
Professor Bayne was reacting to a recently released report of a significant increase in the number of Scottish students using generative artificial intelligence to cheat in their school work. The report showed that 600 Scottish students misused AI in their school work in 2024. Against Scotland's student population of 292,240, 600 may seem insignificant, but the 2024 figure represents a 121% increase over 2023, and that trend is what makes the situation grave. What the report termed "misuse" is simply a euphemism for cheating. Prof Bayne's fear that students are trading academic and intellectual rigour for quick-fix AI solutions thus appears justified. This is not a new debate; it has been ongoing since the 2022 release of OpenAI's ChatGPT. The recent Scottish report shows that the trend is becoming pervasive and may require urgent action from academics and other key stakeholders.
Global phenomenon
And the Scottish situation is not an isolated case. Worldwide, from Australia and the Americas to Europe, Asia and Africa, students are increasingly using AI to short-circuit the good old habit of bending over academic materials in classrooms, libraries and homes to gain invaluable knowledge and insight. If not quickly addressed, this trend calls into question the integrity of academic certifications.
In February this year, for instance, Prof. Isaac Nwaogwugwu of the University of Lagos expressed his frustration that AI use is making students lazy and less analytical and creative in their thinking. "The benefits of AI may be peripheral, but it is making students dependent and less analytical," Prof. Nwaogwugwu says. "Many students copy from ChatGPT and submit polished assignments, but when asked basic questions, they go blank. It's disappointing because education is about learning, not just passing courses," says Dr Felix Echekoba of the Nnamdi Azikiwe University. Dr John Ereke of the Ebonyi State University calls the problem a cycle of laziness, with both students and lecturers guilty in many cases.
According to him, "It's not just students using AI lazily. Some lecturers, out of their own laziness, generate lesson notes, course outlines, marking schemes, and even exam questions with AI without reviewing them. Students, in turn, use AI to generate answers. It's a cycle of laziness, and it is killing real learning." Institutions such as Oxford University forbid their students from using AI for assignments and other writing projects. New York City's public schools also frown on their students using ChatGPT or other AI tools, and school districts in Washington, Los Angeles, and elsewhere in the US are similarly inclined to prohibit their students from using OpenAI's tools.
Dorcas Akintade, a cybersecurity expert, believes that academics' resistance to AI tools may stem from plagiarism and security risks. According to her, AI responses, which are based on pre-existing data, may not align with educators' expectations and often lack proper attribution. She cited cases of "hallucination", where an AI fabricates information when it lacks clear data to draw answers from. Students who are not diligent enough may miss these hallucinations.
Government regulation of AI
The United Nations Educational, Scientific and Cultural Organization (UNESCO) has called on learning institutions to regulate AI use in education. The body proposed auditing algorithms and data sources to ensure ethical standards. It also encouraged governments to regulate AI in education and to establish standards for assessing the long-term impact of AI use on creativity and critical thinking.
The sky isn’t falling
While acknowledging the AI trend in education, many experts believe it is not as critical as some feared, citing data from Turnitin's AI detection tool to back their claim. Annie Chechitelli of Turnitin, for instance, admitted that "there are students who are leaning on AI too much. But it's not pervasive. It wasn't this, 'the sky is falling.'" Turnitin reports that of the over 200 million writing assignments it reviewed in 2024, the tool flagged 1 in 10 assignments as using AI, while only 3 in 100 were largely AI-generated. Turnitin seemed to suggest that the change from 2023 to 2024 is negligible and no cause for alarm. Researchers at Stanford University supported this view: the university reported that the percentage of students who cheated had remained flat from the release of ChatGPT through 2024.
Is AI truly making students dull, uncreative, less critical?
It is too early to make this call. Data on the effect of AI on cognition, creativity, and critical thinking are still too scanty to support any real conclusions. However, policymakers, governments, school administrators and boards must recognise that AI is now a part of modern living and will impact education one way or another. As such, they need to devise policies and strategies to optimise the immense possibilities of AI tools while safeguarding against unethical uses. Nigeria's Minister of Education, Dr Tunji Alausa, stressed the need to maintain human oversight as the country integrates AI into its education system. In August 2024, the government released a draft National AI Strategy intended to guide the ethical use of AI, including in education, and to ensure transparency, privacy, and fairness.
How to ensure AI and education integrate seamlessly
Experts have suggested that AI and education can function very well together to enhance learning outcomes and have urged academics to drop their antagonism. They warned that educators' fixation on plagiarism, cheating and AI detection tools could create friction and distrust between educators and learners, hampering robust engagement and, ultimately, learning itself.
Experts have also proposed AI literacy programmes that highlight how AI can benefit learning and how it can be detrimental. Schools are likewise encouraged to develop clear guidelines and policies on AI use by their students; some have suggested including students in the drafting of those guidelines. They also argued against reverting to traditional assessment methods to counter AI use, as that may not be sustainable in the long run.
Forward-looking governments, institutions strategising on AI
Governments and education institutions are already running with some of these proposals by remodelling their education sectors to incorporate artificial intelligence. For instance, schools in Asia are already integrating AI into their curricula. In Hong Kong, the International Baccalaureate, which governs a network of international schools there, said it would "not ban" the use of AI in its schools but would "support their students" to use AI ethically and with integrity.
AI deployment has become widespread because of its huge potential to positively transform society, and organisations worldwide continue to spend billions of dollars enhancing the technology. By resisting AI, educators and the education sector risk losing out on the potential gains of generative AI, with dire consequences. The way forward is for education and AI to find common ground and build from there.
By Afolabi Abiodun.