PhD student Sullam Jeoung will present her proposal defense, "EXAMINING LARGE LANGUAGE MODELS FOR SAFETY AND ROBUSTNESS THROUGH THE LENS OF SOCIAL SCIENCE." Jeoung's preliminary examination committee includes Associate Professors Jana Diesner and Halil Kilicoglu and Assistant Professors Nigel Bosch and Haohan Wang.
Large language models have exhibited outstanding performance, at times reaching levels akin to human proficiency, and thereby exert a significant influence on our daily lives. However, they are also known to exhibit harmful stereotypes and biases associated with socio-demographic representations. These models can inadvertently perpetuate or even amplify biases present in the data they are trained on, which can lead to the generation of biased or discriminatory content that harms individuals and communities. Given the potential risks to the broader public, it is imperative to prioritize the safety and robustness of these models by identifying and rectifying such harmful stereotypes. This thesis proposes comprehensive methodologies for achieving that goal. It integrates insights from social science, psychology, and cognitive studies to discern the extent to which these models align with or diverge from human responses. In doing so, this thesis provides researchers with a more nuanced understanding of large language models (LLMs) through a comprehensive approach rooted in the social sciences.
Questions? Contact Sullam Jeoung.