Technology Fails Plagiarism, Citation Tests

My home university has taken another step in our quest to provide our students more effective and intentional first-year and sustained writing instruction during their time at our liberal arts institution. Once we moved away from the traditional English Department-based approach to first-year composition and committed to a first-year seminar format, just under a decade ago, we opened the door to having professors in any department teach first-year writing.

We are currently re-thinking the first-year seminar model, but we are also taking steps to better support professors who have content expertise in the disciplines and experience as researchers and writers, but little or no formal background in teaching writing or composition research. This summer, then, we have begun a year-long faculty seminar on teaching writing.

Coincidentally, on the day this week we were scheduled to address plagiarism and citation, a session I was leading, I came across Carl Straumsheim’s What Is Detected? in my Twitter feed:

Plagiarism detection software from vendors such as Turnitin is often criticized for labeling clumsy student writing as plagiarism. Now a set of new tests suggests the software lets too many students get away with it.

The data come from Susan E. Schorn, a writing coordinator at the University of Texas at Austin. Schorn first ran a test to determine Turnitin’s efficacy back in 2007, when the university was considering paying for an institutionwide license. Her results initially dissuaded the university from paying a five-figure sum to license the software, she said. A follow-up test, conducted this March, produced similar results.

I have been resisting the use of Turnitin, or any plagiarism detection software, but my university and many professors remain committed to the technology. The growing body of research discrediting the software points to a troubling contradiction:

“We say that we’re using this software in order to teach students about academic dishonesty, but we’re using software we know doesn’t work,” Schorn said. “In effect, we’re trying to teach them about academic dishonesty by lying to them.”

My general skepticism about technology was confirmed years ago when I was serving on my university’s Academic Discipline Committee, where faculty often debated whether students flagged for plagiarism by Turnitin had actually plagiarized (see Thomas, 2007, below). That debate turned on many of the issues being raised by the failure of plagiarism detection software, as highlighted at the University of Texas-Austin based on Schorn’s research:

  • Despite industry claims to the contrary, most plagiarism detection software fails to accurately detect plagiarism.
  • The Conference on College Composition and Communication and the Council of Writing Program Administrators do not endorse plagiarism detection software and have issued statements warning of its limitations.
  • Plagiarism detection software can have substantial unintended effects on student learning. It perpetuates a very narrow definition of originality and does little to teach students about the complex interplay of voices required in dialogic academic writing.
  • Plagiarism detection software transfers the responsibility for identifying plagiarism from a human reader to a non-human process. This runs counter to the Writing Flag’s concern for “careful reading and analysis of the writing of others” as part of the learning process.
  • Plagiarism detection software raises potential legal and ethical concerns, such as the use of student writing to construct databases that earn a profit for software companies, the lack of appeals processes, and potential violations of student privacy and FERPA protections.
  • Plagiarism detection software does not, by itself, provide sufficient evidence to prove academic dishonesty; it should not serve as the sole grounds for cases filed with Student Judicial Services.
  • Instructors who choose to use plagiarism detection software should include a syllabus statement about the software and its use, establish appeals processes, and plan for potential technological failures.

The fourth bullet above, where the authority for both teaching citation and detecting plagiarism is shifted from the professor/teacher to the technology, is the core problem for me because of two key issues: (1) many professors/teachers resist recognizing or practicing that teaching citation (and all aspects of writing) is an ongoing process, not a one-shot act; and (2) warning students about plagiarism and suspecting all students as potential plagiarizers (teaching plagiarism, a negative, instead of citation, a positive) are part of a larger and corrupt deficit view of scholarship, students, and human nature.

While plagiarism detection software is being unmasked as less effective than free browser search engines, we must admit that even if such software or technology worked as advertised, best practice always dictates that professors/teachers and students recognize technology as only one tool within the larger obligations of teaching and scholarship.
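To see why such software misses so much, consider a minimal sketch, my own simplification and not Turnitin’s actual method, of the kind of n-gram overlap matching detection tools generally rely on:

```python
# A minimal sketch (assumptions mine, not any vendor's actual algorithm)
# of n-gram matching: detection software of this kind flags verbatim
# copying but misses even light paraphrasing -- a false negative.

def ngrams(text: str, n: int = 3) -> set:
    """Break a text into overlapping word n-grams."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(submission: str, source: str, n: int = 3) -> float:
    """Fraction of the submission's n-grams that also appear in the source."""
    sub = ngrams(submission, n)
    return len(sub & ngrams(source, n)) / len(sub) if sub else 0.0

source = "plagiarism detection software fails to accurately detect plagiarism"
copied = "plagiarism detection software fails to accurately detect plagiarism"
paraphrased = "detection programs for plagiarism often fail at accurately finding it"

print(overlap(copied, source))       # 1.0 -- verbatim copying is flagged
print(overlap(paraphrased, source))  # 0.0 -- a light rewording escapes entirely
```

A student who swaps a few words and reorders a phrase can drop the overlap score to zero, which is consistent with the false negatives Schorn documents.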

Another related example of the folly of placing too much faith in technology when teaching students scholarly citation is citation software, such as NoodleBib and the more recent app RefMe.

Despite many universities (typically through library services) once again uncritically encouraging students to use citation software, I discourage the practice because the generated bibliographies my students submit are almost always incorrect, partly because some aspects of APA are hard for the software to handle (capitalization and lower-case issues, for example) and partly because students exercise little oversight after generating the bibliographies.
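A minimal sketch, hypothetical code rather than any real citation tool’s implementation, shows why APA capitalization trips up software: sentence case requires knowing which words are proper nouns, and no mechanical rule can supply that knowledge.

```python
# A hypothetical sketch of why automated APA sentence casing goes wrong:
# software cannot know which words are proper nouns, so the generated
# bibliography entry needs a human editor.

def naive_apa_sentence_case(title: str) -> str:
    """Lowercase everything except the first word and the first word
    after a colon -- the mechanical rule APA sentence case suggests."""
    words = title.split()
    result = []
    capitalize_next = True  # first word, and first word after a colon
    for word in words:
        result.append(word.capitalize() if capitalize_next else word.lower())
        capitalize_next = word.endswith(":")
    return " ".join(result)

title = "Plagiarism Detection at the University of Texas: Testing Turnitin"
print(naive_apa_sentence_case(title))
# "Plagiarism detection at the university of texas: Testing turnitin"
# The proper nouns are wrongly lowercased -- exactly the sort of error
# students must be taught to catch by hand.
```

The proper nouns come out lowercased, precisely the kind of error that no amount of student trust in the tool will fix.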

Ultimately, then, if we treat citation software as a tool, and if students can be taught to review and edit generated bibliographies, the technology has promise (setting aside that some citation software embeds formatting that can be problematic once inserted into a document).

Both plagiarism detection and citation software are harbingers of the dangers of seeking shortcuts for teaching students any aspect of writing. Spending school or university funds on these inadequate technologies, I think, is hard to defend, but the greater pedagogical problem is how technology often serves to impede, not strengthen, our roles as educators, especially as teachers of writing.

Some lessons from these failures of technology include the following:

  1. Be skeptical of technology, especially if there are significant costs involved. Are there free or cheaper alternatives, and could that funding be better spent in the name of teaching and learning?
  2. Be vigilant about teacher agency, resisting the abdication of that agency to technology and instead incorporating technology in ways that enhance it.
  3. Recognize that teaching writing, and subsets such as citation, is an ongoing and developmental commitment that takes years of intentional instruction.
  4. Resist deficit thinking about students/humans (address citation, a positive, rather than primarily plagiarism, a negative).

Straumsheim draws a key conclusion from Schorn’s research: “In addition to the issues of false negatives and positives, plagiarism detection software fits into a larger ethical debate about how to teach writing.”

The ethics of teaching writing, I believe, demand that we set aside technology that mis-serves our teaching and our students, returning instead to our own obligations as teachers.

See Also

Thomas, P.L. (2007, May). Of flattery and thievery: Reconsidering plagiarism in a time of virtual information. English Journal, 96(5), 81-84.

Statement on Plagiarism Detection Software (UT-Austin)

Preventing Plagiarism (UT-Austin)

Teaching with Plagiarism Detection Software (UT-Austin)

Plagiarism Detection Software: Limitations and Responsibilities (UT-Austin)

Results of the Plagiarism Detection System Test 2013

Defining and Avoiding Plagiarism: The WPA Statement on Best Practices (WPA)

CCCC Resolution on Plagiarism Detection Services (#3)

Why I Won’t Use TurnItIn to Check My PhD Thesis, Travis Holland

Comments

  1. howardat58

    Two things:
    1. Get your students to find out how plagiarism software works. A description of the algorithms, not the actual coding.
    2. Expect more problems when “they” want to introduce computer grading of essays/writing/… Pearson and Co are doing this for Common Core English (ELA), and the results are hilarious, or sad if you are on the receiving end.

