
Six ways to prepare writing assignments in the age of AI

Karen Smith
Senior Teaching and Learning Specialist
Turnitin
Kristin Van Gompel, Ed.D.
Senior Instructional Innovations Specialist
Teaching and Learning Innovations Team


Every day, new articles are published about AI. While ChatGPT was once the main focal point, new models continue to emerge and become part of everyone's online interactions. Every day, educators face challenges related to these tools as search engines and social media attempt to harness AI's power. Academic integrity issues such as student collusion, copy-paste plagiarism, text spinners, and contract cheating have expanded. Questions about the veracity of AI-powered online searches must now be considered, too. Every day. We're in deep now, as educators have begun to understand the threat, the responsibility, and the promise of AI. Our Turnitin Teaching and Learning team, all former and current educators, is there too, and one thing we've learned is that AI isn't going away… and that's okay.

More practitioners are beginning to realize that AI in education can offer real benefits when implemented with intention. It is a balance, though, as not every use of AI tools will support teaching and learning. For example: What if AI is used to replace actual student thinking? What if it's used to complete an entire assignment? That type of usage is the threat we as educators (yes, I am still teaching!) are working tirelessly to avoid. But WHAT IF there are things we can do today to protect our writing assignments against student misuse?

Let's pull a tool from a therapist's toolkit: instead of reacting to AI after a student has potentially misused it, we can proactively respond by putting some guardrails in place.

We recently shared guidelines that focused on eleven strategies for approaching AI-generated text in your classroom. Today, we’re going to expand on six specific tactics for educators:

1. Update academic integrity policy to inform instruction and assessment practices

2. Communicate new policy and assignment guidelines with students

3. Review and revise writing assignments and associated scoring tools (rubrics, etc.)

4. Employ the writing process; live in a formative space

5. Direct students to use writing platforms where multiple drafts can be saved for review

6. Institute opportunities for students to discuss their work

Notice that these six strategies focus on careful planning and approaching AI proactively. While time is a luxury educators do not have, these tactics may save time later when responding to potential AI misuse cases. Let's dig into the tactics:

1. Update academic integrity policy to inform instruction and assessment practices

How and to what extent is the use of a generative AI tool acceptable within academic environments? Changes to classroom practice rely on answering this question and updating academic integrity policies accordingly. Now is the time to research, discuss, and decide how institutions will respond to the rapidly evolving technology of AI. Our guide for updating academic integrity policies in the age of AI walks through steps for getting started. We share ideas such as establishing a common lexicon like an AI vocabulary glossary and determining ethical use of AI.

While making institutional changes first is ideal, it may not happen fast enough. Most educators feel they're already playing catch-up and need tools to respond to AI advances right now. For those educators, we recommend answering these questions for their own classrooms even as they work with colleagues and leaders toward institutional change. Let's be agile and develop best practices for AI in our classrooms.

Determining acceptable use of AI will inform changes to instruction and assessment. But what does acceptable use look like? That may vary vastly by institution, department, classroom, and even assignment, but let's look at some specific examples. As I suggested earlier, AI can benefit education. Perhaps an educator decides that it's acceptable for students to use AI writing tools during prewriting (see #3 below) to brainstorm ideas or gain other points of view on a topic. Or maybe the educator decides to allow students to submit a draft to AI to get formative feedback on their work. If educators decide to go down this path, they must choose the right AI tool and personally test it in order to put parameters in place.

2. Communicate new policy and assignment guidelines with students

Introduce updated policies to students and talk to them about AI. The policy should be easily accessible to all stakeholders, particularly students. Consider asking students to lead activities that paraphrase the policies and present them to peers. One valuable exercise that Turnitin advocates have suggested is a classic "This… Not That" activity with scenarios that students sort or label based on their understanding of the policy. Simply create a list of a few scenarios and have students sort which are acceptable and which are not. The activity itself is fantastic, but what is even more powerful is the discussion around why some uses are acceptable and others are not.

3. Review and revise writing assignments and associated scoring tools (rubrics, etc.)

Developing best practices for crafting writing assignments that are resistant to student misuse of AI is imperative. As we've all likely read in the media, AI is proficient at some things, but not so proficient at others. If we, as educators, familiarize ourselves with the "answers" an AI tool produces, then in theory, we should be able to modify our writing prompts to work around the technology. While this strategy isn't foolproof, it will certainly help put some of those guardrails in place.

Let's take an element that some generative AI writing tools struggle with today. In their current iteration, AI writing tools have been found to list sources that don't actually exist. When prompted, the tool might provide references, but the sources may be fictitious. Therefore, requiring students to use verifiable sources with a reference list would help combat this issue. Beyond sources and citations, there are additional criteria educators can consider when revisiting their assignments. Our team of veteran educators created an AI misuse rubric to help with just that. This rubric proposes four traits: student voice, critical thinking/reasoning, sources and citations, and personalization.

Consider comparing tried-and-true writing prompts against the rubric. Start by identifying which traits are relevant to an assignment and then assess how the prompt stacks up. Use the weaker areas to consider how the prompt might be modified to better safeguard the assignment against AI misuse. The closer a prompt gets to Advanced/Proficient, the less vulnerable to AI misuse it will be.

As a final step, educators should also update their scoring guides or rubrics to reflect new demands of the prompt. Early in my teaching career, a mentor of mine used to say, “Measure what matters most.” As we shift assignment/assessment design, our evaluation tools must be aligned. If scoring guides and rubrics heavily emphasize aspects of writing at which AI tools are skilled, the potential for misuse increases.

4. Employ the writing process; live in a formative space

The writing process isn't for novice writers only. Educators everywhere keep hearing that the existence of AI will force us to revisit our teaching practices; for those who may have stepped away from implementing the writing process, start there. Preparing writing assignments to include steps like prewriting and drafting brings visibility to students' work before a final submission. Leverage that process: require students to submit a draft, leave feedback on their work, and have them make revisions based on that feedback.

Creating a writing culture with open dialogue between student and educator makes students much less likely to misuse AI. Additionally, research has shown that specific types of feedback have a quantifiably positive impact on student growth. A portfolio approach is one more way to improve visibility into not only the student's work, but also their process, all of which adds up to protection against AI misconduct.

5. Direct students to use writing platforms where multiple drafts can be saved for review

Take the writing process a step further by teaching students how to maintain a record of their work. Visibility is more important now than ever, as students may find their work challenged. Consider instructing students to write in a single document that preserves revision history, such as Microsoft Word or Google Docs, as a form of verification; if questions about the document's originality surface, there will be a recorded trail to clarify the origins of a student's work.

6. Institute opportunities for students to discuss their work

Will students use ChatGPT if they must discuss their work with a teacher or classmate? Maybe, but the probability certainly decreases. While requiring discussion may not eliminate the risk, it does provide another guardrail. Plan for assignments to include peer reviews, writing conferences, or reflections delivered live or in video format, creating another layer of visibility and open dialogue. Additionally, students are often more invested when they know they'll need to share their work. Not to mention, research shows that having students share their work boosts confidence and motivation toward the task (Simonsmeier et al., 2020).

While not exhaustive, these six strategies can at least serve as a place to begin, and combining them has an even greater impact. If educators are unsure exactly where to start, our AI misuse checklist lists these principles (and more!) to help guide them down the path of preparing writing assignments in the age of AI.

Further resources

  1. Updating your academic integrity policy in the age of AI
  2. AI misuse checklist
  3. AI misuse rubric
  4. AI vocabulary glossary
  5. Source credibility pack to help evaluate both traditional and AI sources