Speaker: Valentin Hofmann, University of Washington and Ai2
Date: April 16, 1pm
Location: DL 480
Remote Zoom Link: Join our Cloud HD Video Meeting
Meeting Sign Up Link: Valentin Hofmann Ohio State Visit: April 16
Website: https://valentinhofmann.github.io/
Abstract: Language models are known to perpetuate systematic racial prejudices, causing them to make problematically biased judgments about groups such as African Americans. While prior research has focused on overt racism in language models, social scientists have argued that a more subtle form of racism has developed over time. It is unknown whether this covert racism manifests in language models.
In this talk, I will present recent research showing that language models embody covert racism in the form of dialect prejudice: they exhibit raciolinguistic stereotypes about speakers of African American English that are more negative than any human stereotypes about African Americans ever experimentally recorded. I will discuss how these covert stereotypes relate to the attitudes that language models overtly display about African Americans, what harmful consequences they can have, and whether they are addressed by existing methods for alleviating racial bias in language models, such as human feedback training. Finally, I will discuss how dialect prejudice affects the reasoning capabilities of language models.
Bio: Valentin Hofmann is a postdoc at the Allen Institute for AI and the University of Washington. His work broadly focuses on the intersection of NLP, linguistics, and computational social science, with specific interests in tokenization and socially aware language models. Previously, he was a PhD student at the University of Oxford and a research assistant at LMU Munich. During his PhD, he also spent time as a research intern at DeepMind and as a visiting scholar at Stanford University.