
Why you need to include AI writing in your honor code and curriculum

Amanda De Amicis
Content Marketing Lead
Turnitin


Talk of AI writing generators has exploded in the media since ChatGPT’s release in late 2022, and the conversation is only gaining traction. Nowhere is AI writing’s dual nature as threat and opportunity more significant than in education, where knowledge and lifelong skills are developed according to traditional markers of authorship and originality.

The notion that auto-generated text could soon become a fixture of students’ daily work has prompted a cavalcade of questions: will written essays be rendered useless as an assessment tool by artificial intelligence, or will they take on new life as a more authentic form of assessment? Might students fail to develop the capacity to write well on their own, or will offloading cognitive work to AI make room for greater higher-order thinking?

In this post, we explore why incorporating AI into your honor code and curriculum matters: it establishes responsible AI use and offsets risks to teaching and learning, allowing AI to become a beneficial learning tool rather than a form of misconduct.

Creating a space for AI writing

AI is poised to be pivotal in the future of work, and because educators have a responsibility to prepare students for what’s expected of them in their professional lives, it is now generally accepted that student use of AI writing generators is no longer a question of ‘if’, but of ‘when’ and under what conditions. Embracing degrees of AI writing needn’t signal the demise of academic integrity, but structures are needed to identify its presence, govern its use, and ensure it doesn’t compound existing drivers of misconduct.

The current incarnation of ChatGPT is already leaving its mark as educators and students test its limits (the bemusement over its fake references is a case in point). Consequently, there is urgency for educators to take decisive steps in defining both permitted and unauthorized uses of AI writing technology in their courses and at the department level, so they are on the front foot in getting students to use AI writing responsibly. Putting clear expectations and standards in place around AI assistance means stipulating them in your honor code, giving students explicit direction on how and when AI can be used so that boundaries for academic integrity are established. This will also help ensure that students still learn ideation and the other key elements of producing written work.

AI writing offers a number of benefits in relation to cognitive offloading; in theory, students could harness it to free up study time and move further up the learning taxonomy. Another opportunity is to assist learners with additional needs, such as those with language processing or expression disabilities, or those for whom English is a second language. When creating a space for AI writing, inclusivity is key, as there is a risk that it could deepen access and equity gaps among student cohorts, particularly as AI generators become pay-to-play.

The case for reimagining assessment

As we have witnessed in recent years, technology has been a driver of pedagogical change, and the pillars of pedagogy will need to adapt once more to manage AI-related disruption. We know that a key driver of academic misconduct is students’ uncertainty about what is acceptable versus unacceptable behavior. Use of AI falls squarely into this bucket, as educators themselves are still negotiating what role it will play in learning and assessment and what that will look like. Whether AI writing constitutes cheating will ultimately be up to educators and institutions to determine, but knee-jerk reactions that ban AI writing entirely are not sustainable.

AI writing requires educators to reevaluate their mechanisms for proof of learning and how students’ originality and critical thinking skills are measured through assessment. As one commentator from Plagiarism Today predicts: “AI won’t be the death of the essay, but it may change it. It may change the prompts that are used, the receivables that need to be graded, and the general approach to the concept.” As such, knowing when and where a student has deployed an AI writing generator will be important for preserving assessment integrity and for shoring up an institution’s reputation and the credibility of the certifications its students graduate with.

The task at hand is how to challenge and motivate students amidst the big ‘leg-up’ of AI writing. The Academic Integrity Office at UC San Diego, for example, has made public its decision-making matrix for responding to developments in AI. One question it floats: if AI can do a task, should we be asking students to do it? If so, why, and how could the task be tweaked to include a portion that requires human ingenuity? To this point, educators would be well placed to anticipate an ideal scenario in which students use AI writing as a basis or starting point, and then put their own spin on the content.

AI and student ownership

Although AI is a game changer for education, it presents some familiar considerations in getting students to take ownership of their work. Consider the well-documented success of honor codes in reducing students’ motivation to cheat, and how they helped give direction during the disruption of remote learning; updating your honor code for AI should be no exception. Continuing to teach the value of integrity will mean getting students to acknowledge and own the output of AI, while applying the principles of citation and referencing.

To start your students on the right foot with AI writing, consider the following:

  1. Be transparent with your students about the potential of AI writing and position it as a ‘helper’ that still requires human expertise to guide the tool and elicit the right output.
  2. Discuss the flaws of current AI models (such as bias) with students; knowing the fallibility of AI generators will help instill due diligence and accountability for their work.
  3. Provide practical exercises for students to test the limits of AI writing tools and fine-tune the prompts they give AI.
  4. Design assessments that use AI purposefully and play to its strengths and weaknesses.
  5. Include AI writing in class syllabi and go over it clearly at the beginning of a term, to develop responsible use of AI that preserves academic integrity.

We realize it is challenging to codify AI writing into policy at this early stage, but it ought to be referenced in your honor code to inform students that it is firmly on the radar. Ultimately, an academic integrity policy or honor code should be viewed as a work in progress that reflects the dynamism of AI. That means renegotiating some of the attachments we have formed around authorship and what constitutes original student work, while applying checks and balances to uphold assessment security and the authenticity of learning in a new era.

AI writing detection and Turnitin’s technology

As AI writing models grow in sophistication to mimic human output, the question on everybody’s lips is: how can educators reliably identify what is human- versus machine-written? While the complex world of AI writing may be new to many, Turnitin has been preparing for its adoption for the past two and a half years, researching and developing technology to recognize the signature of AI-assisted writing. Combine this with the fact that we’ve been at the forefront of academic writing technology for nearly 25 years, and it puts us in a unique position to offer an AI writing detection solution tailored to the needs of institutions.

Turnitin’s soon-to-be-released tool for detecting AI-assisted writing, including AI writing via ChatGPT, is not intended as a punitive or prohibitive measure. Rather, it offers educators a way to uphold responsible use of AI and, like our other solutions, to gain a deeper understanding of their students’ learning. It extends our AI philosophy of leaving the determination of unoriginal work or misconduct in the hands of educators, who act on the information and insights we provide.

We anticipate that the ability to detect the presence of AI in submitted work will bring transparency to your institution’s AI use policy, while serving to uphold your honor code and the broader culture of integrity that education depends upon.