The key challenge for both students and universities is to integrate AI technologies thoughtfully and responsibly into learning and assessment, says Dr Thomas Lancaster
As a computer scientist, I see that incorporating competent use of AI more formally into higher education brings advantages and opportunities for students. However, this also introduces risks that extend beyond students simply gaining an unfair advantage. If left unchecked, students could become deskilled in core areas like writing and critical analysis. Having researched academic integrity and misconduct for 25 years, albeit with no legal background, I recognise the complex policy challenges that any new technology presents. It is not easy for universities to craft effective policy while AI systems develop at a breakneck pace.
AI assistance appeals to students and educators facing workload pressures. Educators might use generative AI to draft communications or generate lesson ideas, while students might want help with structuring assignments or understanding complex research. Tools like ChatGPT can offer efficiency gains to all, but they risk becoming shortcuts that bypass critical thinking. Students and educators alike need careful guidance to use these tools effectively.
Crucially, however, AI tools are unreliable. Their probabilistic outputs, shaped by training data, can be biased, inconsistent and inaccurate. Asking the same question twice may lead to different responses. Thus, while undeniably a helpful additional resource, AI cannot replace the need for students to develop fundamental academic skills and ethical working practices.
The core, and perhaps most vexing, challenge for education policy lies in differentiating legitimate, expected and even encouraged use of AI from uses that constitute academic misconduct. Drawing this line clearly and fairly is proving exceptionally difficult across all disciplines.
Plagiarism, traditionally understood as using another’s words or ideas without acknowledgement, becomes conceptually harder to define when the ‘source’ is an algorithm bringing together information from potentially millions of uncredited online texts. Furthermore, AI often struggles with accurate sourcing in an academic sense, sometimes fabricating references or misrepresenting information from sources.
How should we define unacceptable assistance when AI offers legitimate support? Using AI to check grammar and style may well be acceptable. After all, such support is already available in popular word processing software. What about using AI to generate ideas? Is this akin to a back-and-forth discussion with another human? How about checking a written document for logical fallacies? Used to support learning, such help seems reasonable.
A greater challenge arises when AI is used to write substantive sections of an essay in place of a human. That is almost certainly unacceptable if not acknowledged. What if the AI system was not writing from scratch, but merely rewording? To me, it depends largely on whether learning goals were met during the process.
Perhaps the intent behind AI use needs consideration. Was AI used in a deceptive manner or to bypass effort? But intent is difficult to prove reliably without intrusive surveillance or interrogation methods, which themselves raise ethical concerns. The lack of clear, consistent definitions creates anxiety for students unsure of the rules, and significant risk for institutions facing potential appeals and complaints based on ambiguous regulations. Clear policies are essential, alongside assessment methods that require students to demonstrate their process and understanding.
UK university responses to AI are fragmented, lacking a unified strategy. Initial bans on tools like ChatGPT proved unworkable and unhelpful, as students need AI competency for future careers. Ignoring the technology is not a viable option.
AI detection software presents further complexity. Its reliability is questionable, since tools offer probabilistic indications, not definitive proof. It is also difficult to assess the accuracy of AI detection tools when the generation methods themselves are in a continual state of development.
Crucially, AI detection engines cannot prove AI use. The ‘black box’ nature of many tools hinders fair procedures, risks false allegations, and challenges the ‘balance of probabilities’ standard for misconduct findings based on opaque scores. This necessitates greater reliance on resource-intensive investigation methods like vivas, increasing institutional strain.
The implications for shaping university policies are profound and urgent. Institutions must proactively define, and likely redefine, academic integrity in the age of AI, clearly outlining acceptable and unacceptable uses of these powerful tools. Generic, university-wide statements may be insufficient. Further clarity might be needed down to the module or even assessment level. Disciplinary expectations will necessarily vary. There may be a stage during their studies when a lawyer-to-be has to show they can direct an AI agent to find legal precedents, but there may be other times when a student must demonstrate that they can work unaided.
Academic misconduct policies themselves require careful review and updating to ensure investigation processes remain fundamentally just, transparent, and fair in light of new technological challenges. Policies must be written in plain, accessible English, ensuring they are clearly understood by students who may be unfamiliar with UK academic norms, not obscured by dense legal or technical jargon.
Crucially, developing effective, practical, and defensible policies requires genuine consultation with all stakeholders. Students bring the essential user perspective, academic staff offer pedagogical expertise, and experts in academic integrity and law provide guidance on best practice and regulatory alignment. Furthermore, given the speed of AI development, policies cannot be static. They require regular review and adaptation to remain relevant and effective.
Assessment strategy is a key lever for managing AI use and ensuring academic integrity. The idea of assessment security in education focuses on making sure that assessments are fit for purpose, reliable and fair, whilst minimising the potential for cheating. Methods of in-person assessment, such as supervised exams, practical assessments, and vivas, provide higher levels of assessment security, but no assessment type is ever completely immune to academic misconduct.
Beyond immediate assessment changes lie wider complex issues. There may be copyright infringement risks from AI training data and output. The Skills and Post-16 Education Act 2022 made it illegal to offer contract cheating services to students in England, but consideration needs to be given to how this applies to automated AI providers offering similar services. The potential digital divide, where some students have access to more powerful AI tools than others, raises issues of equity. Professional body accreditation requirements may also have to change to reflect AI’s role in future practice.
What is clear is that the educational landscape must adapt, and quickly. Institutions must find a route through the policy maze. For university degrees to retain their value, credibility and public trust, assessment frameworks must evolve to ensure students genuinely demonstrate individual achievement, critical understanding, and core competencies within a system that is fundamentally fair and equitable to all. Institutions failing to proactively and thoughtfully address AI misuse risk not only damage to their academic reputation and potential legal challenges but also a significant loss of student confidence in the value of their education.
Ultimately, there’s a profound risk that students, caught between ambiguous expectations and restrictive policies, may themselves feel unfairly treated. In a world where responsible, critical, and ethical AI use is fast becoming an essential skill for life and work, the challenge to university infrastructure is immense, but universities can and must now lead on how AI is ethically integrated into learning and assessment.