Researching the Nuts and Bolts Behind Revision Assistant

At Turnitin, we’re big fans of research—user research, market research, scholarly research. We’ll take it all. While a number of internal teams conduct research, the New Technologies team plays a unique research role at the company. The team consists of our machine learning experts, who investigate potential future uses of machine learning in edtech as well as the effectiveness of our existing products.

The researchers on the New Technologies team recently published a paper titled “Formative Essay Feedback Using Predictive Scoring Models” at the Conference on Knowledge Discovery and Data Mining (KDD), an annual research meeting that brings together thousands of computer scientists from around the world. In it, the team examines the performance of the predictive essay scoring models that power Revision Assistant and the use of those models to provide actionable feedback to student writers. In short, we’re looking under the hood to make sure the technology powering the tool is working as well as it can.

This paper served three primary purposes:

1. To establish the state-of-the-art performance of our machine learning models.

We want to make sure that Revision Assistant continues to be the “ultimate teaching assistant.” That means the scores students receive when using Revision Assistant should be just as reliable as the scores they would receive from expert human raters. To measure this, we compared our scoring models to other approaches from the academic literature, and we’re happy to report that our models are more accurate and represent the current state of the art. (A simplified sketch of one common way to measure this kind of agreement appears after this list.)

2. To introduce the use of predictive models to provide feedback.

Revision Assistant supports teachers by providing immediate, relevant feedback to student writers. To keep that feedback effective and digestible, Revision Assistant can’t simply comment on every sentence. Instead, we needed a way to identify the most influential parts of an essay: the sentences where students are most successful, or where they could make the greatest improvements. Using the predictions from our models and a large set of example essays, our machine learning experts created a new method for finding and commenting on essay sentences that are particularly strong or weak. (A toy illustration of the general idea appears after this list.)

3. To evaluate the innovative feedback generation process.

With Revision Assistant, students write an average of 7.7 drafts in response to the feedback provided, and by the seventh draft they have improved their essay scores by an average of 2.6 points on a 13-point scale. The data not only demonstrate that Revision Assistant improves essay quality; in-product ratings tell us that students appreciate the feedback, too. When students rated individual comments, they marked 88% of positive comments and 72% of critical comments as helpful.
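The paper reports the formal evaluation, but for readers curious what “as reliable as expert human raters” looks like in practice, a common measure in the automated essay scoring literature is quadratic weighted kappa, which summarizes how closely two sets of rubric scores agree. The sketch below is purely illustrative—it is not Turnitin’s code, and the sample scores are made up.

```python
# Illustrative only: quadratic weighted kappa (QWK), a standard agreement
# measure in essay-scoring research. 1.0 means perfect agreement between the
# two sets of scores; 0.0 means no better than chance. The scores at the
# bottom are made-up examples, not data from the paper.
import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b, min_score, max_score):
    """Agreement between two sets of integer rubric scores."""
    rater_a = np.asarray(rater_a) - min_score
    rater_b = np.asarray(rater_b) - min_score
    n = max_score - min_score + 1

    # Observed confusion matrix between the two sets of scores.
    observed = np.zeros((n, n))
    for a, b in zip(rater_a, rater_b):
        observed[a, b] += 1

    # Expected matrix if the two sets of scores were independent.
    hist_a = np.bincount(rater_a, minlength=n)
    hist_b = np.bincount(rater_b, minlength=n)
    expected = np.outer(hist_a, hist_b) / len(rater_a)

    # Quadratic penalty: large disagreements count more than near misses.
    weights = np.array([[(i - j) ** 2 for j in range(n)] for i in range(n)])
    weights = weights / (n - 1) ** 2

    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

# Hypothetical rubric scores (1-4) from a human rater and a scoring model.
human = [3, 2, 4, 1, 3, 2, 4, 3]
model = [3, 2, 3, 1, 3, 2, 4, 4]
print(round(quadratic_weighted_kappa(human, model, 1, 4), 3))
```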
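And to give a flavor of how a predictive scoring model can be used to locate influential sentences, here is a deliberately simplified sketch: score the essay with each sentence removed in turn, and treat the sentences whose removal changes the predicted score the most as candidates for praise or for revision suggestions. This is not the method described in the paper; the scoring model in the example is a toy placeholder.

```python
# A simplified illustration of sentence-level influence, NOT the paper's
# method: sentences whose removal lowers the predicted score are strengths,
# and sentences whose removal raises it are candidates to revise.
from typing import Callable, List, Tuple

def rank_sentence_influence(
    sentences: List[str],
    score_essay: Callable[[str], float],  # stand-in for a trained scoring model
) -> List[Tuple[str, float]]:
    """Rank sentences by how much removing each one changes the essay's score."""
    full_score = score_essay(" ".join(sentences))
    influence = []
    for i, sentence in enumerate(sentences):
        ablated = " ".join(sentences[:i] + sentences[i + 1:])
        # Positive delta: the essay scores worse without this sentence.
        # Negative delta: the essay scores better without it.
        delta = full_score - score_essay(ablated)
        influence.append((sentence, delta))
    return sorted(influence, key=lambda item: item[1], reverse=True)

# Usage with a toy stand-in for a real scoring model.
toy_model = lambda text: min(4.0, text.lower().count("because") + 1.0)
essay = [
    "School lunches should be longer.",
    "Students rush to eat because periods are only twenty minutes.",
    "I like pizza.",
]
for sentence, delta in rank_sentence_influence(essay, toy_model):
    print(f"{delta:+.1f}  {sentence}")
```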

To learn more about this research, you can read “Formative Essay Feedback Using Predictive Scoring Models” in its entirety here.