It’s no secret that educators have reached new levels of exasperation with students submitting work created by generative artificial intelligence. A quick scan of the r/professors subreddit routinely turns up dozens of posts along the lines of “why do they keep using AI?” and “any suspected AI usage is an automatic zero.”
As a graduate instructor (read: baby professor), I didn’t quite know where I came down on the AI problem until it finally infected one of my classrooms. I’ve been lucky to date: I teach classes in public health and communication, where the topics are very relevant to current events, and I’ve been blessed so far with attentive students who employ decent critical thinking and react positively to my no-test, no-tech classroom approach. My teaching philosophy has always prioritized thinking through problems - not just demonstrating an understanding of concepts, but using those concepts to explain the world around them.
In a previous semester, I taught a class about misinformation in which students had to dig into their personal media habits to produce longform research papers on the media literacy patterns affecting their communities. In another class, I taught students about transgressive media and challenged them to think deeply about why humans create and consume disturbing art; these students were tasked with delivering four short essays and participating in debates. In both classes, I was delighted to find inquisitive young minds who desperately wanted to critically examine the world, think about their place in it, and come up with ways to bring the world in line with their own ideals. I have loved teaching, and consider it a calling (which unfortunately makes it easier to exploit me for cheap labor).
So when I finally had a student submit AI-generated work, I lost it. I’ve had disengaged students in the past, and have handed out my share of failing grades as well, but I was never upset about it. In fact, I always saw giving out F’s as another form of education - it’s a message to the student that they need to do some reflection. Maybe they took a class they weren’t ready for, or maybe they overestimated their ability to succeed without attending class. Whatever the case may be, failing a class is never the end of the world. Failure is a great teacher, and AI is no teacher at all.
The student in question was already failing my course. I had checked in on them a couple of times throughout the term but came to understand that they simply weren’t approaching college with the mature mindset needed to pass. In the past, I have emailed students like this and let them know the specific changes they should reflect on so they don’t fail another course. It usually goes along these lines: (1) show up to class and participate more regularly, because you’re missing out on face time with the professor that you’re paying for; (2) find a way to organize yourself so you don’t lose track of deadlines, like using a calendar app or a personal planner; and (3) try to be honest with yourself throughout the semester about whether you’re struggling - the sooner you realize you’re in trouble, the sooner you can get in touch with the professor and get some help.
Usually the students don’t respond to that email right away, but at least a couple have come back in subsequent terms to let me know they appreciated the come-to-Jesus moment and are doing better now. But this student chose to go another way, and submitted a piece of AI-generated work.
Now, it’s historically been hard to definitively prove that a student used AI to write something. There are pieces of circumstantial evidence: you can check the metadata to see whether the file’s creation date and last-modification date are suspiciously close together, indicating a copy-paste job, or run the paper through one of the AI checkers (which are generally accepted to be less than reliable). But there’s also a litmus test teachers feel in their gut: the paper is super vague, doesn’t match the tone of the student’s previous work, doesn’t reference ideas from class, or uses oddly professional vocabulary. Some professors even rely on heuristics like odd formatting, the use of em dashes, or bullet-pointed lists to flag AI use.
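For the curious, here is a minimal sketch of that timestamp check in Python, assuming the submission is a .docx file and the python-docx package is installed; the file name and the ten-minute threshold are illustrative assumptions, not established standards, and as noted above, this kind of evidence is circumstantial at best.

```python
# A minimal sketch of the metadata check described above.
# Assumes a .docx submission and python-docx (pip install python-docx).
# The 10-minute threshold is an arbitrary illustration, not a standard.
from datetime import timedelta

from docx import Document


def looks_like_copy_paste(path: str, threshold_minutes: int = 10) -> bool:
    """Flag files whose creation and last-modification timestamps
    are suspiciously close together."""
    props = Document(path).core_properties
    if props.created is None or props.modified is None:
        return False  # metadata missing; nothing to conclude
    gap = abs(props.modified - props.created)
    return gap <= timedelta(minutes=threshold_minutes)


if __name__ == "__main__":
    # Hypothetical file name, for illustration only.
    print(looks_like_copy_paste("student_essay.docx"))
```

A short gap can mean a paste job, but it can also mean the student drafted elsewhere and pasted their own work in, which is exactly why no one should treat this check as proof on its own.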
But when you know, you know. Especially when you spend so much time trying to get to know your students on a personal level, you just know. And once I knew, for the first time in my teaching career, I found myself truly and terrifyingly mad at a student. I’ve been sad, exasperated, disappointed, and worried about the future of our society, but I’ve never actually been angry at a student before (again, baby professor here).
I was angry because I put so much of myself into my teaching. Maybe it’s naive of me, but I’ve never reused someone else’s syllabus, opting instead to design my own courses based on my experience and my understanding of pedagogical theory. I’ve always taught textbook-less courses to reduce the cost burden, and I use a wide variety of materials - videos, comic books, cheeky prompts - to keep my classroom environment fresh. I provide my students with narrative evaluations of their work that are often as long as their papers, and I try to offer one-on-one meetings at the end of term to discuss what the students will do next. I care about teaching; it’s one of the things I sincerely love and am always working to get better at. And so even when a student doesn’t recognize how disrespectful submitting AI-generated work is, I can’t help but see red.
And I tried to think about why the student would use AI. I tried to see their perspective - maybe they crumbled under the pressure, maybe they had no confidence in their own work after getting so much seemingly harsh feedback from me on previous submissions, or maybe they thought AI would help with the language barrier. But I can’t see my way from any of those premises to a logical justification for using a text-autocomplete program in a college course whose whole point is to build your critical thinking and writing skills.
The understanding that professors are now arriving at is that AI use is just a symptom of a broader problem. Education isn’t a public good or a path to personal uplift anymore. It’s a means to an end - a commodity to leverage into a job and a place in society. When I was in college, I heard people whisper that cheating was okay if you got away with it, because you were just demonstrating problem-solving skills that would serve you well in the “real world.” But college, to me at least, was just as much the real world as my jobs have been.
My anger about AI use ultimately derives from my personal philosophy on the role of education. When you see education as a goal-oriented activity, then yes, cheating, plagiarism, AI - these are all acceptable, I suppose. But where does that really get us? Are we able to build a society of people who can employ critical thinking to assess when they’re being taken advantage of? Are people empowered to reflect on their lives and identify ways to feel more fulfilled and contribute to their communities? It’s an age-old complaint of professors, but it’s real - the actual value of education has never been its ability to help you get a job (and let’s be real, it doesn’t help much these days).
University is not the path for every student, but truly educating yourself is important for every person. Whether through university or simply getting involved in your community, staying open-minded, making good use of a library card, and asking a lot of questions, the process of cherishing education creates good people. It creates people who understand how broad the world is, and in turn how tiny the world is compared to the cosmos. That understanding is necessary if we want to hold onto our species’ identity as a civilized society.
It’s not like radical reforms in education will suddenly improve our society. It doesn’t work like that, after all - a strong education system is the result of a caring society, not the precursor to it. Furthermore, combating the new belligerent attitudes toward education is an uphill battle that the academy is clearly losing at the moment. But we still need to try to reclaim the narrative, and remind people whenever possible that education empowers them to figure out who they want to be. That’s why the AI problem bothers me so much: it tells me my student has decided who they want to be - someone who outsources thinking to a machine.
---