A paper coauthored by PhD student Sullam Jeoung, Associate Professor Jana Diesner, and Associate Professor Halil Kilicoglu was named Best Long Paper at TrustNLP: Third Workshop on Trustworthy Natural Language Processing, which was held in conjunction with the Annual Conference of the Association for Computational Linguistics (ACL 2023). In their paper, "Examining the Causal Impact of First Names on Language Models: The Case of Social Commonsense Reasoning," the researchers discuss how the use of first names in language models can impact their trustworthiness.
"Language models are increasingly used and deployed across various applications and domains that engage with users, ranging from some recent popular models such as ChatGPT and Bard to applications such as AI counseling," explained Jeoung. "As language models are used in circumstances where social intelligence and commonsense reasoning are becoming important, it is imperative to ensure the trustworthiness of language models."
Through a controlled experiment measuring the causal effect of first names on social commonsense reasoning, the researchers were able to distinguish model predictions that arise by chance from those caused by the factors of interest. Their results indicate that the frequency with which a first name appears has a direct effect on a model's predictions.
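The core idea of such a controlled experiment can be illustrated with a minimal sketch: construct prompt pairs that differ only in the first name, then measure how the prediction changes. Everything below is a hypothetical illustration, not the authors' code; `toy_score`, the template, and the name-frequency figures are all invented stand-ins for a real language model and corpus statistics.

```python
# Hypothetical sketch of a name-substitution ("counterfactual") probe.
# Each prompt pair differs only in the first name, so any systematic
# change in the output can be attributed to the name itself.

TEMPLATE = "{name} said something rude at the party. Was that acceptable?"

# Invented name-frequency counts (stand-ins for corpus statistics).
NAME_FREQUENCY = {"Emma": 50000, "Lakisha": 800}

def toy_score(prompt: str, name: str) -> float:
    """Stand-in scorer for a real language model: here, a more frequent
    name yields a higher score, mimicking a frequency-driven shift."""
    base = 0.5
    return base + 0.000001 * NAME_FREQUENCY[name]

def causal_effect(name_a: str, name_b: str) -> float:
    """Difference in prediction when only the first name is swapped."""
    score_a = toy_score(TEMPLATE.format(name=name_a), name_a)
    score_b = toy_score(TEMPLATE.format(name=name_b), name_b)
    return score_a - score_b

# A nonzero effect signals that the name alone moved the prediction.
effect = causal_effect("Emma", "Lakisha")
print(round(effect, 4))
```

In an actual study, `toy_score` would be replaced by queries to a real model, and the differences would be aggregated over many templates and names so that chance variation can be separated from a systematic effect.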
"Our findings suggest that to ensure model robustness, it is essential to augment datasets with more diverse first names during the configuration stage," said Jeoung.