
How an AI checker can support quality, original writing

Amanda De Amicis
Content Marketing Lead
Turnitin


AI writing tools are fast becoming a fixture of modern writing practices, so what is an AI checker and how can it help education and research communities adapt to a changing landscape?

We know that AI presents a huge opportunity to advance learning and productivity. At the same time, generative AI is blurring the lines of authorship and has implications for the integrity, accuracy, and quality of written output. The rise of AI writing detection technology reflects the demand for insight into how and when these tools are being used, both to thwart misuse and to uphold academic and research writing standards.

The aim of an AI checker is not to discourage use of AI writing, nor is it to police users when engaging with this breakthrough technology. Rather, it serves to empower students, educators, and diverse organizations as they navigate expectations and parameters for its responsible use.

In this blog post, we’ll define what an AI checker is (and isn’t), its role in the academic integrity workflow, its capacity to inform teaching and learning, and its benefits across both formative and high-stakes writing.

What is an AI checker and why is it used?

AI writing detection involves the application of artificial intelligence and machine learning technologies to analyze and identify text produced by generative AI. By extension, ‘AI writing detector’ or ‘AI checker’ are interchangeable terms to describe the tool that harnesses this technology to help users evaluate what proportion of text may be human- or AI-generated in any given submission.

There are plenty of reasons why people may need to identify the presence of AI-generated text, with two primary use cases concerning education and research. In an education setting, institutions are establishing AI policies and guidelines so as to avoid students’ over-reliance on the technology to the point that it impedes rather than assists in the learning process. When assessing learning and mastery of subject matter, educators must ensure AI is not unfairly influencing achievement and that students can stand on their own merit.

Is it possible to accomplish this AI writing identification manually? Research reveals that this method alone is unreliable and prone to inconsistencies. For example, in a 2024 study investigating educators’ ability to spot AI text in student essays, teachers identified only 37.8% of AI-generated texts correctly, compared to 73.0% of student-written texts, and were less confident in their judgment when they assumed the text to be AI-generated. With rapid iterations of large language models (LLMs) on the horizon, an AI checker can conduct specialized, scalable analysis in a way that is not feasible for humans.

In a research setting, where publication is high stakes, policies and guidelines on AI use and its acknowledgement in submissions help protect the authenticity and validity of researchers' work. By flagging AI-generated portions through use of an AI checker, institutions and publishers can help verify authorship and uncover possible inaccuracies and bias, thereby minimizing the risk of research misconduct and of a tainted scientific record.

In either use case, it's important to recognize that an AI checker does not serve as a replacement for human judgment and intuition. Whether it's an educator accustomed to the capabilities of their students, or a research author who understands the context in which source material has been woven into a paper, human insight remains essential to ensure authenticity, integrity, and context that algorithms alone cannot grasp.

How does an AI checker support responsible AI use and integrity?

In response to generative AI, we've begun to see a move towards measuring the process of learning rather than just the product, in order to produce proof of learning and preserve academic integrity as assessment evolves. Education technology is proving instrumental in helping institutions achieve this through pedagogical scaffolds and learning analytics, and an AI checker is an extension of this, applied to responsible engagement with AI.

Let's rewind a little to establish that no single method is infallible in preventing students' unauthorized AI use, whether it's designing a so-called ‘cheat-proof’ assignment or relying solely on AI detection to deter misuse. Multiple methods and assessment tools that together foster a culture of integrity are an educator's greatest defense.

Prior to the emergence of AI, we had a long-held and tightly defined concept of authorship, which may have appeared to unravel upon the release of ChatGPT. Now that the dust has settled, we know that human and AI-generated content can co-exist, but ambiguity remains, especially as AI grows more sophisticated, over which content we can trust as written by a student or researcher and to what degree we can allow AI contributions to fly under the radar.

While researchers are trained in transparent record-keeping to reasonably distinguish author contributions, undergraduate students are not typically held to this standard as they come to grips with academic writing standards and integrating source material through accurate attribution. In either case, AI is a new frontier, and an AI checker helps foster accountability.

The value of an AI checker even holds weight with organizations that are grappling with how to maintain trust and integrity in content creation amongst employees.

Making the case for transparent AI use

The rise of AI paraphrasing—using AI technology to rewrite text while retaining the original meaning—is worth highlighting as an emerging threat for students as they develop paraphrasing techniques to show knowledge of existing ideas while distinguishing their own perspectives. A form of academic misconduct, AI paraphrasing is one offshoot of AI technology that can lead students astray.

Furthermore, excessive and/or indiscriminate use of AI-generated text can obscure student voice and bypass critique that is crucial to avoiding plagiarism, falsehoods, or bias in writing. These risks call for greater transparency as student writing evolves.

But where do a student's writing and AI writing begin and end? Or more pointedly, where is the line between a student's unique expression and the input of AI in their writing? An AI checker helps remove ambiguity around AI-generated text in a number of ways:

  1. It flags potentially unauthorized AI writing that humans may otherwise miss in the review process.
  2. It acts as a trigger for educator intervention when unhealthy patterns of AI use emerge.
  3. It makes evaluation of students' AI use a more intuitive and scalable exercise.
  4. It creates an opportunity to foster open conversations with students about appropriate AI use.

Using an AI checker to inform student progress

As society looks to develop AI literacy in anticipation of the technology's ubiquitous use, it is the integration of AI into pedagogy that demands extra attention. For starters, reconciling human mastery of writing with technology interdependence is a building block that begins in the domain of education. The task at hand is to cultivate students who are capable of operating and writing independently of AI while, at the same time, being willing to harness the technology to boost skills and productivity in preparation for the workforce.

It's important to recognize that AI writing tools can empower students with:

  1. Instant, personalized feedback (when initiated by prompts) to help students correct and refine as they write.
  2. Idea generation to expand brainstorming efforts and provide a starting point for overcoming writer’s block.
  3. Self-paced learning (best facilitated by educators) to promote higher-order thinking and metacognition.
  4. Guidance on structure by creating outlines and providing a model for good writing flow.
  5. Language support to enhance students’ vocabulary and grammar; especially helpful for non-native speakers.

Of course, embracing AI writing technology doesn't mean that guardrails aren't needed to govern use. Fundamentally, AI writing tools can support self-regulated learning to hone students' writing skills, but they also have the potential to breed overreliance. Ultimately, we want to ensure that AI writing tools are enhancing rather than undermining student writing quality and the critical thinking and creativity skills that underpin it. By extension, we also need visibility into and evaluation of AI use to shore up that outcome.

AI checkers offer a way to deconstruct a student's submission that goes beyond assessment of the final product, deeper into the composition of that piece. By determining when and where AI text appears, they equip educators with insights to help assess the application of learning and the authentic student voice that uphold quality, original writing. Perhaps a student has used AI writing reasonably as part of fact-finding, packaging of ideas, and polishing text. Alternatively, if AI has been found to dominate the paper, it may be a sign that a student is outsourcing their learning. An AI checker empowers educators to prevent human writing ability from taking a backseat to technology.

Making space to experiment with AI writing

A conduit between human-AI collaboration and the teaching process, an AI checker can safeguard academic integrity, inform educators on students' progress relative to AI output, and trigger formative learning opportunities to improve student writing. It's especially helpful for teaching assistants, who are engaged to grade student work but are typically less familiar with students' performance history.

Annie Chechitelli, Chief Product Officer at Turnitin, has previously commented on the need for students to have a safe space to experiment with AI writing. Is your institution supplying students with an acceptable AI use policy and the chance to apply AI writing responsibly? If not, what are the obstacles? It may be that assessable student work contributes to accreditation requirements, and uncertainty or concerns about accurately identifying AI-generated content could undermine credibility. The stakes are high, and no educator wants to pass or graduate a student who has not achieved learning on their own merit.

Indeed, Tyton Partners' 2024 edition of their Time for Class report found that users who reported an increase in overall workload due to AI attributed it to spending more time monitoring academic integrity and enforcing policies, and/or redesigning assessments to counter AI usage. It's becoming clear that educators are seeking a workflow where reliable, scalable monitoring of AI writing is made possible, both during the ‘first pass’ and when validating suspected use.

How does Turnitin’s AI checker work?

If you were in any doubt about generative AI's prevalence in student work, consider that since the launch of Turnitin's AI writing detection tool in April 2023, over 250 million submissions have been reviewed, with 8.4 million flagged as having at least 80% potential AI writing (as of September 2024).

Now, let’s take a look at Turnitin’s AI checker, in a nutshell.

Firstly, a paper submitted through Turnitin is broken down into multiple text segments of a few hundred words each, which overlap so that each sentence is captured in context. Our AI detection model then gives every sentence within a segment a score between 0 and 1, indicating whether the text in question was written by AI or a human.

If our AI checker determines that a sentence was not AI-generated, it receives a score of 0; if it determines the entire sentence was likely generated by AI, it receives a score of 1. Finally, the model averages the scores across every segment to generate an overall prediction of how much of the text is believed to have been generated by AI. Our AI checker has a 1% false positive rate for documents with over 20% likely AI-generated content.
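
For readers who think in code, here's a minimal sketch of that segment-and-average workflow. It is purely illustrative: the sentence splitting, segment sizes, and the score_sentence function are assumptions standing in for Turnitin's actual model and pipeline, which are not public.

```python
# Illustrative sketch only: segment a submission, score each sentence, and average.
# `score_sentence` is a placeholder for a hypothetical per-sentence classifier
# returning a value in [0, 1]; it is not Turnitin's model or API.

from typing import Callable, List


def split_sentences(text: str) -> List[str]:
    """Very rough sentence splitter, good enough for demonstration."""
    normalized = text.replace("?", ".").replace("!", ".")
    return [s.strip() for s in normalized.split(".") if s.strip()]


def make_segments(sentences: List[str], size: int = 10, overlap: int = 5) -> List[List[str]]:
    """Group sentences into overlapping segments so each one is seen in context."""
    step = max(size - overlap, 1)
    return [sentences[i:i + size] for i in range(0, len(sentences), step)]


def estimate_ai_proportion(text: str, score_sentence: Callable[[str], float]) -> float:
    """Average per-sentence scores (0 = human-like, 1 = AI-like) across all segments."""
    segments = make_segments(split_sentences(text))
    if not segments:
        return 0.0
    segment_means = [
        sum(score_sentence(s) for s in segment) / len(segment) for segment in segments
    ]
    return sum(segment_means) / len(segment_means)


if __name__ == "__main__":
    # Dummy scorer: pretend long sentences look AI-generated. Real detectors use
    # trained language models, not word counts.
    dummy_scorer = lambda s: 1.0 if len(s.split()) > 25 else 0.0
    sample = ("This is a short, human-looking sentence. " * 4
              + "Here is one deliberately long sentence " + "with extra filler words " * 6 + ".")
    print(f"Estimated proportion of AI writing: {estimate_ai_proportion(sample, dummy_scorer):.2f}")
```

The key idea is that the overall percentage emerges from many small, per-sentence judgments made in context, rather than from a single pass over the whole document.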

Crucially, Turnitin does not make a determination of misconduct in our similarity checking or AI writing detection technology. Rather, we provide data for educators to make an informed decision based on their academic and institutional policies. We’re committed to safeguarding students’ interests and ensuring students are not falsely accused of misconduct. Therefore, we view AI writing detection as “a signaling tool and one piece of an investigative puzzle”, meaning the purpose of the AI writing score is not to make a definitive call, but to assess in conjunction with other factors in student performance and facilitate formative conversations with students.

Overview: The role of an AI checker in modern writing practices

The aim of using an AI checker is not to pass judgment on the concentration of AI-generated text in a piece of writing, but to gain visibility of AI engagement and support healthy, intentional use of the technology that serves learning objectives and elevates human potential.

It's inevitable that AI will continue to shape writing, and student writing practices must evolve with it. The ability to steer AI-generated output relies on foundational skills such as critical thinking and creativity that cannot be substituted by technology, and accountability for high-quality, original writing is key.

An AI checker can help offset the risks of students' experimentation with AI writing and discourage indiscriminate or excessive use, by equipping educators with the insights they need to identify and assess student-AI collaboration. Serving an important function in safeguarding proof of learning and academic integrity, high-accuracy AI writing detection courtesy of Turnitin's AI checker can form a seamless part of your review process, empowering educators and students alike to navigate the future of writing.