The robots are taking over. Plagiarism is no longer the greatest threat to academic integrity. The world is in a flurry.
I’ve written before about what I think are the deeper issues around AI, but today I wanted to come back to that, thinking specifically about ChatGPT in the academic context, where I think capitalism has yet again run amok.
This post was inspired by this really excellent opinion piece by Julia McKenzie Munemo. Her argument is about how intimately the process of writing is tied to the process of thinking, and she thinks that students will be horribly deprived of that process if we individually or collectively turn to AI to write things for us. I agree. But I think the problem is deeper and older than ChatGPT.
As Munemo’s tagline states, “In a world where students are taught to write like robots, it’s no surprise that a robot can write for them.” I would take Munemo’s reasoning even further: students, being taught to write like robots, are also taught to think like robots and to value themselves as robots.
What’s Gone Wrong?
I vividly remember a moment from my days as a teaching assistant during my Master’s degree. I forget the exact class or essay topic, in part because, since this first experience, I have had many similar conversations with students. This first conversation, though, I remember relatively clearly.
The student had come to my office hours after receiving a respectable B grade on a paper. They had clearly laid out the authors’ arguments as they were supposed to, but their own argumentation section was weak; I could not tell whether, or where, their own original ideas entered, separate from those of the authors they were assigned to discuss. I told them this.
The student asked me, confused, “But what do you mean you need to know what my ideas are? Why would my ideas matter?” Reader, my heart broke. I stuttered for a moment, and then, impassioned, told them that everybody’s ideas mattered, and made some gestures towards how rewarding creativity and originality could be, based on my own experiences. I reassured them that as graders we didn’t simply want our own views and opinions, or those studied in the course, parroted back at us. I told them that we hoped to learn something new from our students.
They seemed stunned at this revelation, and to this day I don’t know if my emotional response was enough to overcome their skepticism at the idea that their thoughts mattered.
In my experience, students, by and large, are not particularly interested in learning. And who can blame them? In some sense, learning has become a luxury for those with economic cushions to fall back on. It is exorbitantly expensive to engage in post-secondary education, especially if students live away from home, and the pressure to get a job that pays a survival wage immediately upon graduation is intense. Students may not have adequate time for learning around the job(s) they need to work to pay for their living and study expenses. Students are highly motivated to get the grades they need for the opportunities or further education they need to impress employers. The goal of education becomes not one of learning but of being properly credentialed and networked. The use of ChatGPT, along with many other strategies I have seen students execute in my time as an instructor, is perfectly aligned with that goal.
Asking students to work outside of this structure is certainly possible. Some professors (I assume most of them tenured or otherwise in possession of job security) will suggest techniques like inverting the classroom, or eliminating grading as much as possible within the confines of your institution (potentially by having students assign their own grades, with rationale, at the end of the course). But in the absence of these techniques on the side of professors (and most will admit that these sorts of techniques are much more time-consuming, especially in mega-classrooms), students’ actions are in alignment with the incentives they have been given.
I think that in creating and sustaining this incentive structure, we have harmed students.
Identifying and Healing the Harm
As Munemo points out, the benefit of writing is the process, and I would argue that it’s not just the mechanized, robotic process of inputting bullet points into an outline and then forming grammatically correct sentences and transitions around them (although this method of writing might be helpful for some as a part of their own individual process).
In incentivizing a product over a process, we have told students that we do not care about them or their inner lives, their creativity, their struggle; instead, we care only about their outputs. In effect, we are treating them as a means to an end, not an end in and of themselves. We are also failing to demonstrate care for them as people, as potential or developing thinkers, knowers, and creators, by refusing to engage with, reward, or encourage the process in any meaningful way. Finally, this approach erodes the trust that students have in teachers, and, I think, in adults and society more generally. In being taught to parrot back the answers they are told are correct, and to colour within the lines to get an A+, students are being taught to accept the status quo without questioning, and also that those who are supposed to be responsible for their growth and development are not truly interested in it, as long as students can robotically comply within a certain percentage of “perfection.”
This is not just happening at the university level, either; it extends all the way through the education system.
We say that students are becoming catastrophically afraid of failure because they don’t have enough opportunities to fail. Yet we never think about rewarding failure as a part of the process, thereby celebrating both the failure and the process itself.
But until and unless we stop expecting and incentivizing students to behave like robots, can we blame them for thinking that perhaps it’s better for everyone if they just have the robots do the work instead?
If you’re interested in my work and would like to reach out for any hiring opportunities, please contact me at Type-Driven Consulting.
Or if you like the work I do here, consider supporting it by buying me a ko-fi!