While academic misconduct, particularly plagiarism, has long been a concern for educators, the methods students use have evolved significantly. In 2020, research by the International Center for Academic Integrity (ICAI) revealed that over 60% of university students admit to some form of cheating, with 13.8% confessing to word-for-word copying or uncited paraphrasing, and 26% acknowledging improper collaboration on individual assignments.
However, the emergence of new technologies and methods has intensified these issues, posing fresh challenges for maintaining academic standards. In this post, we’ll shed light on both the traditional forms of academic misconduct and newer trends—from student collusion to contract cheating and generative AI misuse. We’ll examine the broader implications these behaviors have on academic standards and ethical practices. Our goal is to provide insights into these evolving risks and help you protect the integrity of your institution.
How are new and emerging trends reshaping academic misconduct?
Recent trends are intensifying risks to institutional reputation, raising critical questions about how institutions define and safeguard originality in student work. The rapid integration of AI tools is reshaping how educators teach, design assessments, and encourage their students to uphold the fundamental values of academic integrity.
As we explore these evolving challenges, we will examine various emerging forms of misconduct—from generative AI misuse and automated text modification to contract cheating and sophisticated exam tactics. Understanding these trends is crucial for adapting our approaches to maintain academic standards and integrity in this dynamic environment.
Generative AI misuse
The advent of generative AI has introduced new dimensions to education, and to academic misconduct in particular. As institutions embrace AI to enhance teaching and learning, they also face challenges. AI tools, designed to generate and refine text, can support legitimate learning, including research, idea generation, and tutoring. However, generative AI can be misused by students to produce and submit work that appears original but is actually created by algorithms.
The sophistication of generative AI tools makes it difficult for assessors to distinguish between human-written and AI-generated content, complicating the task of maintaining academic standards—particularly without the support of AI writing detection technology.
A 2024 Turnitin study conducted in partnership with Tyton Partners found that 50% of students are likely to continue using AI tools even if banned (a 21% increase from spring 2023), highlighting the need for clear, assignment-specific AI policies that embrace, rather than prohibit, the use of AI writing tools in academia.
As students become more reliant on AI-generated content, there is concern that this may hinder the development of essential academic skills. Writing for Forbes, leadership expert Ron Carucci says, “No clever prompt we type into an AI tool will ever be able to replace human critical thinking … In reality, critical thinking becomes even more necessary in the age of AI, both to use it properly, and to do the necessary work behind the scenes to make it a more reliable tool.”
The ethical and cultural dynamics surrounding the use of generative AI differ globally. Research by Jin et al. (2024) highlights geographic differences in AI adoption. Their study of 40 universities found that while five institutions in the UK and Australia focus on maintaining academic integrity and originality, eight universities in the US and Hong Kong emphasize leveraging AI to enhance teaching and learning. This disparity poses a challenge for developing universal standards and policies. Institutions must navigate these varied approaches to create strategies that not only address the misuse of generative AI but also harness its potential to improve learning, ensuring a balanced approach that upholds academic standards worldwide.
Automated text modification
Automated text modification involves using software to alter existing content to evade plagiarism detection systems. One basic method is text spinning, where words or phrases are replaced with synonyms, and sentence structures are adjusted using simple algorithms. This often results in text that remains somewhat recognizable as derived from the original source.
A more sophisticated form of automated text modification is AI paraphrasing. AI paraphrasing tools employ advanced algorithms to understand and rephrase text while preserving its original meaning, creating content that appears more original and coherent. While serving different purposes, both text spinning and AI paraphrasing challenge traditional methods of assessing the authenticity of academic work, making it increasingly difficult for instructors to identify unoriginal submissions.
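To illustrate how rudimentary basic text spinning can be, here is a minimal, hypothetical Python sketch (the dictionary and function names are invented for illustration; real spinning tools use much larger synonym databases and more elaborate rules):

```python
# Naive "text spinning": swap known words for synonyms from a small,
# hard-coded dictionary. Real tools are far more elaborate, but the
# core mechanism is this simple.
SYNONYMS = {
    "important": "crucial",
    "results": "findings",
    "shows": "demonstrates",
}

def spin(text: str) -> str:
    """Replace each recognized word with a synonym, keeping punctuation."""
    spun = []
    for token in text.split():
        core = token.strip(".,;:")          # separate trailing punctuation
        suffix = token[len(core):]
        spun.append(SYNONYMS.get(core.lower(), core) + suffix)
    return " ".join(spun)

print(spin("The study shows important results."))
# The study demonstrates crucial findings.
```

Because only surface vocabulary changes, the sentence structure and argument remain identical to the source, which is why spun text often stays recognizable to similarity-checking tools, while AI paraphrasing, which restructures whole sentences, is harder to trace.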
Contract cheating
Also known as ghostwriting, contract cheating generally involves students hiring third parties to complete their assignments or exams on their behalf. This practice is facilitated by the widespread availability of affordable online services that offer quick assistance. Despite some countries implementing laws to combat contract cheating by banning the advertisement of essay mills, illegal cheating websites and social media accounts persistently find their way to students.
Education reporter Caitlin Cassidy notes that since the Australian government introduced its essay mill law in 2021, “just shy of 300 illegal cheating websites have been blocked and 841 social media accounts, posts or adverts have been removed.” Although contract cheating might seem like an “old school” form of academic misconduct, it remains a significant concern. The emergence of generative AI has only refined and expanded these services, making them more sophisticated and harder to detect.
Contract cheating can also manifest itself in the form of impersonation, where a student hires someone else to take a test or even their entire university course. They may even use third-party tutors to provide answers during a live (usually online) exam.
What are the new and emerging trends in exam misconduct?
In addition to plagiarism, exam misconduct has evolved as students adopt new technologies and methods to gain unfair advantages in high-stakes settings. Traditional forms of exam cheating, such as using unauthorized notes or copying from peers, are now complemented by more sophisticated tactics.
A notable trend is the use of digital devices and apps to gain access to unauthorized information. For instance, students might use hidden earpieces or smartwatches to receive answers during exams. The proliferation of online forums and social media platforms has also enabled the sharing of exam questions and answers, further complicating efforts to maintain exam integrity.
Another emerging trend is the use of generative AI tools to assist in real-time exam cheating. Students may use AI to quickly generate responses, exploiting the technology’s ability to produce coherent and contextually relevant text. This has led to increased scrutiny of AI tools and a call for updated exam policies that address these new challenges. A study conducted by Scarfe et al. (2024) at the University of Reading revealed that ChatGPT-generated exam answers went undetected in 94% of cases, achieving higher grades than actual student submissions on average. This highlights the growing challenge of detecting AI-assisted exam misconduct.
To counter these trends, institutions are adopting advanced proctoring technologies and updating their academic integrity policies. Enhanced monitoring tools, such as secure exam software offering AI-driven surveillance, are becoming more common in the effort to detect and prevent misconduct.
Which more traditional forms of academic misconduct should educators continue to look out for?
While new and emerging trends in academic misconduct are important to address, it is crucial for educators to also remain vigilant about traditional forms of misconduct that have persisted over time. Among these, word-for-word plagiarism stands out as the most basic and widely recognized form. This involves directly copying text from a source without proper citation and remains a fundamental concern in academic integrity.
However, the landscape of academic misconduct is diverse, and several traditional forms continue to pose significant challenges…
Paraphrase plagiarism
Paraphrasing itself is not inherently misconduct; it becomes problematic when students rephrase someone else’s ideas without citation, producing content that closely resembles the original source despite appearing different. While paraphrase plagiarism may seem similar to automated text modification, the core issue is not the rephrasing itself but the failure to credit the original source: paraphrasing crosses into misconduct when students present borrowed ideas as their own without appropriate attribution.
Self-plagiarism
Self-plagiarism—sometimes known as “duplicate plagiarism”—occurs when a student or researcher recycles their own previously submitted or published work and presents it as new, without proper acknowledgment. While it may seem harmless—after all, the individual is reusing their own original content—self-plagiarism is still considered a form of academic misconduct because presenting old work as new can mislead educators, peers, or the public about the person’s current efforts or insights.
Mosaic plagiarism
Mosaic plagiarism involves stitching together phrases or ideas from various sources to create a new piece that is presented as original work. As such, this type of plagiarism is more subtle and complex than straightforward word-for-word copying, making it harder to detect.
According to De Amicis (2023), “It blurs the boundaries between defined authored work and existing ideas, so it can be difficult to pin down by educators evaluating students’ written work; especially at scale with a large cohort of students.” Educators may struggle to identify this form of misconduct, particularly when reviewing large volumes of student submissions, thus challenging the integrity of academic assessments and requiring more nuanced strategies for detection and prevention.
Student collusion
Collusion involves students working together inappropriately, sharing answers, or collaborating on assignments meant to be completed individually. It’s more common in group work or assessments where students may feel pressure to help each other. However, student collusion can also manifest itself when students divide up tasks on an individual assessment or simply share completed assignments with one another.
The University of Leeds explains that “The ability to work together is an important part of academic life and a skill that many employers value. However, it can raise integrity issues if it prevents markers from assessing your individual efforts and understanding.”
In today’s learning environments, particularly with digital communication tools, collusion has become easier to facilitate. Students can quickly share answers via messaging apps or online platforms, and without proper oversight, educators may struggle to detect it. Group chats, shared documents, or online forums may also be misused to collaborate on individual assignments, further blurring the line between collaboration and collusion.
Data plagiarism
Considered one of the most severe forms of academic misconduct, data plagiarism undermines the integrity of academic findings and the trust placed in them. It can take the form of copying data sets from other sources without proper attribution, fabricating data, or manipulating existing data to fit desired outcomes. Plagiarized data also amounts to a form of fabrication, since data is only ever valid in its original context (Dougherty, 2020).
In an analysis of over 7,500 articles, Phogat et al. (2023) found that data fabrication was most prevalent in non-self-reported studies, indicating that much of this misconduct may go unreported or undisclosed by those involved, underscoring the challenge of detecting and addressing it.
For students, data plagiarism often manifests in assignments that require original research or analysis, where students might copy data from published papers, websites, or other students’ work and present it as their own. In research, however, the consequences reach far wider. When researchers plagiarize or falsify data, it creates a ripple effect of misinformation that can have catastrophic outcomes, particularly in fields like medical research.
Source-based plagiarism
Source-based plagiarism involves the improper use or misrepresentation of sources in academic work. Unlike direct or word-for-word plagiarism, which involves copying text verbatim, source-based plagiarism is more nuanced. It occurs when students selectively use or distort information from sources to support their arguments without proper attribution, or when they rely heavily on a single source while presenting the work as their own original thought.
This type of plagiarism often arises in research-heavy assignments where students are required to synthesize and integrate information from various sources. It can include misleading citation practices, selective use of sources, or distorting information.
Generative AI tools, such as ChatGPT, can further complicate this issue by providing students with fabricated or inaccurate sources. While these tools can generate text and suggest sources, they may also invent references or create plausible-sounding but non-existent sources, leading to potential source-based plagiarism. As AI-generated content becomes more sophisticated, the risk of students inadvertently or deliberately using fabricated sources increases, making it crucial for educators to stay vigilant and implement robust citation practices.
Overview: Considerations in the face of new and emerging trends in academic misconduct
Understanding and addressing new and emerging trends in academic misconduct is crucial for maintaining the integrity of academic institutions. From generative AI misuse and automated text modification to contract cheating and data plagiarism, these evolving challenges require proactive measures and updated policies.
As academic environments adapt to technological advancement, vigilance remains key. Technology can both facilitate and combat misconduct, making it a central battleground for academic integrity. But educators still have the opportunity to foster integrity and a genuine passion for learning through relationship-building, constructive feedback, and thoughtful exam design. It is essential to select tools that align with these pedagogical principles and support a fair and ethical academic environment.