Don’t ban ChatGPT: Teach students to do what it can’t do
In the 1970s, a new technology that was faster than the human brain and cheaper and more portable than a transistor radio encroached on classrooms. It threatened to disrupt education as we knew it.
This magical new tool was the handheld calculator.
Today, generative artificial intelligence (AI) poses a new threat, albeit one that is even more portable, inexpensive and powerful for the end user. So when OpenAI made ChatGPT, its generative AI chatbot built on the GPT-3.5 model, freely available in November 2022, some school districts and universities moved swiftly to ban it.
But banning ChatGPT from educational systems is just as shortsighted as banning the calculator, and just as contrary to the mission of education. As a professor of research methods and statistics, and as a former English teacher, I acknowledge that ChatGPT threatens three pillars of our education system: measurement, information accuracy, and the market value of skills. This makes it even more important that we incorporate AI into our teaching in ways that mitigate the threats and empower students.
First, much like handheld calculators, ChatGPT and its ilk present a measurement challenge. We cannot assess students’ writing skills when a chatbot does the work for them. And AI detection tools can easily lead to false accusations of plagiarism, especially for weaker writers or students whose first language is not English. As such, the rise of generative AI may indeed require that some writing assessments be administered in class offline.
Second, ChatGPT presents an information accuracy challenge. To see why, we must understand that it generates text by mimicking the language patterns of the existing, human-generated texts on which it was trained. These include millions of texts from digital libraries and from the internet, culled through September 2021. ChatGPT writes answers by predicting which words and phrases have the highest probability of coming next, based on similar text. Each word and sentence is generated probabilistically, based on common syntax and usage, not on conceptual understanding by some underlying “intelligence.”
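To make that prediction process concrete, here is a minimal sketch of next-word prediction from counted word pairs. The tiny corpus and all names here are illustrative assumptions; real systems like ChatGPT predict over sub-word tokens using neural networks, not simple count tables, but the sampling idea is the same:

```python
import random
from collections import defaultdict, Counter

# Toy stand-in for training text (an assumption for illustration).
corpus = (
    "the model predicts the next word the model writes text "
    "the next word is chosen by probability"
).split()

# Count how often each word follows each other word (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev, rng=random):
    """Sample the next word in proportion to how often it followed `prev`."""
    counts = following[prev]
    words = list(counts)
    weights = [counts[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

# Generate a short continuation starting from "the".
word, output = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

Note that nothing in this procedure checks whether the generated sentence is true; it only tracks which words tend to follow which, which is exactly why fluent-sounding output can be factually hollow.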
That being the case, ChatGPT can hallucinate answers, meaning it can write things that sound reasonable but have no grounding in fact or logic. It also has the potential to produce biased or discriminatory answers, though this depends to an extent on what the user is requesting. If prompted to write essays on literary works or historical events concerning race, for instance, ChatGPT can produce nuanced, well-informed responses, but because it uses probabilities to anticipate what the user is looking for, it may respond differently to incendiary prompts.
In terms of its potential for inaccuracies, ChatGPT bears similarities to Wikipedia, which educators also initially resisted. But Wikipedia’s success lies in its reliance on citations of primary sources, which ChatGPT does not produce. As such, educators must teach students to read AI-generated texts with a critical eye, using them only as a starting point, never as an endpoint, in their work.
Third, ChatGPT presents a skill-devaluation challenge. Without question, AI turbo-charges writing tasks. For many of us, this can free up time for more complex and hard-to-automate tasks. But jobs that are heavily composed of writing and communication (e.g., copywriting, content-aggregation journalism, data reporting, even routine computer programming), especially at the entry-to-mid level, may grow scarcer in the long run.
AI did not start this trend. The digital revolution has decimated local newspapers, and many organizations now outsource the ghostwriting of uncredited promotional content. This does not mean we should stop teaching writing, which is a core skill of human communication across time and place. Instead, we must do so while teaching higher-order communication skills — creativity, tone, inclusivity — which lie more firmly in the domain of the human.
ChatGPT does at least three things faster than people — summarizing texts, aggregating information and adhering to genre conventions. If we teach students to harness this power, they can take these skills to new levels of sophistication that AI’s mimicry cannot easily replicate.
First, when teaching difficult texts, such as the research articles I often teach to my graduate students, we might ask students to critique textual summaries provided by ChatGPT. I find, for instance, that when ChatGPT summarizes research articles, it cannot parse graphs and tables — a skill with which many students also struggle — and it conflates the background literature review with the findings of the study. Students can identify and reflect upon why these limitations occur, fortifying their own comprehension skills in the process.
Second, we can teach students to use ChatGPT for a quick gloss of a new topic, just as they might use a Wikipedia article. For instance, we might ask them to quickly learn the history of the Every Student Succeeds Act of 2015, a federal law that updated the 1965 Elementary and Secondary Education Act and replaced the No Child Left Behind Act of 2001. The key would be to ask them to read with a critical eye and then to ask themselves important questions: What surprises or confuses them? What do they need to fact-check or investigate? Where should they go to learn more?
Third, we can use ChatGPT to teach writing conventions in a way that conveys that they are social norms, not inalienable laws of the universe. This is true whether we are talking about writing policy briefs, research papers, resumes, newspaper articles or even computer programs.
By prompting ChatGPT to write in a particular genre and for a particular audience, and then tweaking those specifications, we can help students pay attention to what changes. They can notice how ChatGPT structures its responses — the number of paragraphs, sentence lengths and section attributes. In this way, students learn not only how to follow genre conventions but how to identify, analyze and adapt them.
As a writer and educator, I understand the impulse to bury our heads in the sand in the face of generative AI. But banning AI serves our own fears of obsolescence, not the needs of our students.
As a virtual assistant, ChatGPT can narrow gaps in applied skills, much in the same way that handheld calculators made us all less dependent on mental arithmetic. If we want young people to be able to use AI rather than be manipulated by it, we must teach the evolving strengths and limitations of these powerful tools.
Jennifer L. Steele is a professor of education at American University, where she studies policies that advance equity in education systems and the labor market. She can be reached at steele@american.edu.
Copyright 2024 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.