Look out, educators. You’re about to confront a pernicious new challenge that is spreading, kudzu-like, into your student writing assignments: papers augmented with artificial intelligence.
The first online article generator debuted in 2005. Now, AI-generated text can be found in novels, fake news articles and real news articles, marketing campaigns and dozens of other written products. The tech is free or cheap to use, which puts it within anyone’s reach. And it’s probably burrowing into classrooms right now.
Using an AI program is not plagiarism in the traditional sense – there’s no previous work for the student to copy, and thus no original for teachers’ plagiarism detectors to catch. Instead, the student feeds text from one or more sources into the program to begin the process. The program then generates content on the topic according to a set of parameters, which the writer can adjust to his or her specifications. With a little practice, a student can use AI to produce a paper in a fraction of the time it would normally take to write an essay.
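To give a sense of how little technical skill this takes, here is a minimal sketch of that feed-text-in, get-text-out loop. It assumes the freely available Hugging Face transformers library and the small, open GPT-2 model; the seed sentence is invented for illustration. This shows the general technique, not Sudowrite or any particular commercial product.

```python
# A minimal sketch of AI text generation, for illustration only.
# Assumes the open-source Hugging Face "transformers" library and the
# small GPT-2 model -- not Sudowrite or any other commercial product.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Seed the model with a few lines on the assigned topic, as a student
# might. (This seed sentence is a hypothetical example.)
seed = "The causes of the First World War were complex and interconnected."

# Ask for a continuation; the student would then lightly edit the output
# to match their own voice before turning it in.
result = generator(seed, max_new_tokens=120, do_sample=True, temperature=0.9)
print(result[0]["generated_text"])
```

A dozen lines of boilerplate like this, with no understanding of the subject required, is roughly the entire barrier to entry.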
I wanted to see what would happen if I fed an AI text generator like Sudowrite some sample texts for assignments. (Sudowrite is just one such program; I could name more, but I’d rather not make it even easier for students to cheat.) The results showed that teachers must be made aware of what is coming to their classrooms.
Say a student wanted to write an essay about US-China policy and decided to use an AI program to speed along the process. I fed it the first few lines from a recent Pentagon report: “The PRC is increasingly clear in its ambitions and intentions. Beijing seeks to reshape the international order to better align with its authoritarian system and national interests, as a vital component of its strategy to achieve the ‘great rejuvenation of the Chinese nation.’” The AI text generator provided the following sentences:
It is increasingly willing to disregard – or to actively undermine – existing international rules and norms when it feels its vital interests are at stake. The US and its allies face a rapidly modernising military, whose capabilities are growing rapidly, especially in the air and at sea. China has also made great strides in its development of cyber capabilities, and it is focused on the ability to degrade or disrupt critical US military technology, as well as on targeting civilian infrastructure. In addition to being a challenge to the US military and its allies, China’s activities in cyberspace have the potential to degrade US political and economic institutions, and the global information infrastructure is at risk.
If a high school student turned that in verbatim, their teacher would probably suspect that there was something amiss. But if this were turned in at the undergraduate or graduate level, and especially if the student were to edit the text, it would be much harder to detect.
Or let’s say they need to write a paper about climate change. Here’s what came up when I fed Sudowrite the first few lines from an article on climate change impacts from the US National Oceanic and Atmospheric Administration website:
Though we often think about human-induced climate change as something that will happen in the future, it is an ongoing process. Ecosystems and communities in the United States and around the world are being impacted today.
It’s not particularly insightful, but it provides the AI program some basic building blocks. The artificial intelligence continued along this slightly edited line of reasoning:
The consensus view emerging from the Intergovernmental Panel on Climate Change (IPCC) and other respected assessments is that the planet is warming because of the accumulation of heat-trapping gases in the atmosphere caused by human activity. The level of scientific consensus that human activities are primarily responsible for climate change is like the consensus finding on smoking causing lung cancer.
The program had likely already been trained to associate the term “climate change” with the IPCC; by making certain inferences, it could generate text that seemed like a reasonable extension of the idea.
What about ninth-grade English? Channeling my 14-year-old self, I wrote these lines:
“Romeo and Juliet” is a play that explores many different topics such as romance, betrayal, and tragedy. It has interesting characters that make it such a good story, even 400 years later.
The computer spat back this take (which I’ve edited slightly):
Romeo, the main character, is a very passionate and immature young man who is torn between his love for Rosaline and his love for Juliet. He is also very rash and desperate to get Juliet and will do almost anything to win her. His desperation for Juliet is an interesting trait and is one that the younger kids can relate to, as well as older adults with first crushes. Romeo is an interesting character and is very different from how kids would imagine him to be. But he is also a great inspiration, showing that love is worth fighting for.
Again, the AI program had previously been trained to know that Romeo and Juliet contained certain characters, themes, and plot elements. But what stood out to me was that the text was crafted at an appropriate writing level for a high schooler, who would be reading about these star-crossed lovers for the first time. The awkward phrasing and occasional syntax errors in the output only make it more convincing as a student’s work.
I’ve struggled to find the right analogy to describe this phenomenon. Is using AI to write graded papers like athletes taking performance-enhancing drugs? As a society and as a sporting culture, we’ve decided certain drugs are forbidden because they give the user an unfair advantage. Further, the cocktail of drugs administered by unscrupulous sports programs can cause real physical and psychological harm to the athletes themselves. Are students who use AI in their writing in the same boat – gaming the system for an undue advantage while harming themselves in the long run by stunting their writing skills?
Or might using AI be more like using performance-enhancing gear in sports, which is both acceptable and encouraged? Even beginner tennis players today use high-performance carbon-composite rackets instead of 1960s-era wooden ones. Swimmers wear nylon and elastane suits and caps to reduce drag. Cyclists ride stronger, lighter bicycles than their counterparts did a generation ago. Baseball bats evolved from wood to aluminum and gained better grips; baseball mitts have grown more specialised over the decades.
Numerous educators assert that AI is more like the former. They consider using these programs a violation of academic integrity. Georgetown University professor Lise Howard told me, “I do think it’s unethical and an academic violation to use AI to write paragraphs, because academic work is all about original writing.”
Written assignments have two purposes, argues Ani Ross Grubb, part-time faculty member in the Carroll School of Management at Boston College: “First is to test the learning, understanding, and critical thinking skills of students. Second is to provide scaffolding to develop those skills. Having AI write your assignments would go against those goals.”
Certainly, one can argue that this territory is already covered by university academic integrity codes, and that using AI might open students to serious charges. For instance, American University’s code states, “All papers and materials submitted for a course must be the student’s original work unless the sources are cited,” while the University of Maryland similarly prohibits acting dishonestly to “gain an unfair advantage, and/or using or attempting to use unauthorised materials, information, or study aids in any academic course or exercise.”
But some study aids are generally considered acceptable. When writing papers, it is perfectly fine to use the grammar- and syntax-checking tools standard in Microsoft Word and other document-creation programs. Other AI programs, like Grammarly, help users write better sentences and fix errors. Google Docs will even finish sentences in drafts and emails.
So the border between using those kinds of assistive computer programs and full-on cheating remains fuzzy. Indeed, as Jade Wexler, associate professor of special education at the University of Maryland, noted, AI could be a valuable tool to help level the playing field for some students. “It goes back to teachers’ objectives and students’ needs,” she said. “There’s a fine balance making sure both of those are met.”
Thus there are two intertwined questions at work. First: Should institutions permit AI-enhanced writing? If the answer is no, then the second question is: How can professors detect it? After all, it’s unclear whether there’s a technical solution for keeping AI from worming its way into student papers. An educator’s up-to-date knowledge of relevant sources will be of limited use, since the verbiage has not been swiped from preexisting texts.
Still, there may be ways to minimise these artificial enhancements. One is to codify at the institutional level what is acceptable and what is not; in July the Council of Europe took a few small steps, publishing new guidelines that begin to grapple with how these new technologies enable fraud in education. Another is to keep classes small and give students individual attention.
As Jessica Chiccehitto Hindman, associate professor of English at Northern Kentucky University, noted, “When a writing instructor is in a classroom situation where they are unable to provide individualised attention, the chance for students to phone it in – whether this is plagiarism, AI, or just writing in a boring, uninvested way – goes up.”
More in-class writing assignments – no screens allowed – could also help. Virginia Lee Strain, associate professor of English and director of the honors program at Loyola University Chicago, further argued, “AI is not a problem in the classroom when a student sits down with paper and pencil.”
But in many settings, more one-on-one time simply isn’t realistic, especially at high schools or colleges with large classes. Educators juggle multiple courses and sections, and getting to know every student every semester isn’t going to happen.
A more aggressive stance would be for high schools and universities to declare explicitly that using AI is an academic violation – or at least to update their honor codes to reflect where they believe the line of academic integrity lies. That said, absent a mechanism to enforce such a rule, it might paradoxically introduce students to a new way to generate papers faster.
Educators realise that some large percentage of students will cheat or try to game the system. But perhaps, as Hindman says, “if a professor is concerned that students are using plagiarism or AI to complete assignments, the assignments themselves are the problem, not the students or the AI.”
If an educator is convinced that students are using these forbidden tools, he or she might consider alternative means of assessment, such as oral exams, group projects, and class presentations. Of course, as Hindman notes, “these types of high-impact learning practices are only feasible if you have a manageable number of students.”
AI is here to stay, whether we like it or not. Give unscrupulous students the ability to use these shortcuts with little capacity for educators to detect them, add other crutches like outright plagiarism and the companies that sell papers, homework, and test answers, and you have a recipe for – well, not disaster, but the further degradation of a type of assignment that has been around for centuries.
This piece was originally published on Future Tense, a partnership between Slate magazine, Arizona State University and New America.