On Wednesday, April 17th, the WRD Department welcomed Dr. Antonio Byrd, an Assistant Professor of English at the University of Missouri-Kansas City, to present “Practicing Linguistic Justice with Large Language Models.” In the hour-and-a-half-long presentation, Dr. Byrd discussed the need for a critical AI literacy that supports students who speak non-standard forms of English.
With over 20 attendees in person at Arts & Letters Hall and many more attending via Zoom, the event was a success that drew interest to the department from students, staff, and faculty alike.
In the interactive presentation, Dr. Byrd began by establishing that generative artificial intelligence (AI) large language models (LLMs), with ChatGPT as one notable example, mediate both language and how we teach, especially because they influence how students conceptualize writing. Because LLMs emulate habits of white language, Dr. Byrd argued that they flatten out the uniqueness present in non-standard forms of English like African American Vernacular English (AAVE), and in turn also flatten cultures and identities. Rather than teaching students to replicate the habits of white language by using LLMs, Dr. Byrd argued for treating multiple languages and dialects as a “valuable rhetorical choice,” one that students should be asked to articulate in their writing decisions.
In the second part of the presentation, Dr. Byrd discussed LLMs as linguistic pleasure tools. He explained that technology designers try to appeal to pathos by applying cultural knowledge or one’s sense of self to the technological design. For instance, ChatGPT’s designers built its user interface to appeal to consumers’ philosophical values; its chatbot interface is familiar to users and emulates social interaction with someone friendly, helpful, and invested in us. And, by using habits of white language, LLMs project a stance of rational objectivity. Yet LLMs’ training datasets reflect bias in language because they predominantly use data containing “standard” white English.
Dr. Byrd discussed LLMs beyond ChatGPT as well, referencing Latimer, Le Chat Mistral, BLOOM, and Lex.Page. What these disparate LLMs have in common is that they replicate habits of white language to lend themselves a sense of credibility. That is because LLMs don’t exist in a vacuum: machines are trained by humans, and they reflect our own linguistic biases.
Tying his discussion of AI back to the classroom, Dr. Byrd stated that universities that create partnerships with AI companies may endanger linguistic justice efforts because when students see LLMs generating “standard” white English, they may see it as the “correct” way to think and write. However, there is no single correct way to think or write. Dr. Byrd advocates for a “partnership for discovering” in the classroom, where students can draw on their own unique linguistic backgrounds and have them be perceived as legitimate. In first-year writing classrooms in particular, Dr. Byrd explained, “a lot of unlearning has to happen.” It is professors’ responsibility to help students move past visions of “standard” white English as the ultimate goal, perhaps even by moving beyond the essay and using multi-genre assignments to get students to think creatively and in their own voices.
Ultimately, Dr. Byrd’s presentation provided an informative and enlightening perspective that will spark new conversations about what it means to think and write authentically and how we can support that effort in the classroom.
To learn more about other events and speakers the WRD Department has hosted, check out Department News & Events on the WRD Blog.