Responsible DS + AI Speaker Series: Byron Wallace, Northeastern University
Byron Wallace, associate professor in the Khoury College of Computer Sciences at Northeastern University, will present "Methods to Aid Model Debugging: From Rationales to Influence."
Abstract: Modern deep learning models for natural language processing achieve state-of-the-art predictive performance but are notoriously opaque. I will discuss recent work looking to address this limitation by providing varieties of "interpretability" for specific predictions, including "rationales" and important training samples. With respect to the latter, I will focus on techniques intended to aid "dataset debugging" by surfacing potentially problematic training examples.
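To give a flavor of the influence-based "dataset debugging" mentioned in the abstract, the sketch below scores each training example by how strongly its loss gradient aligns with the loss gradient of a test prediction, then surfaces the most extreme examples for manual review. This is only an illustrative, single-checkpoint gradient-dot-product approximation on a toy model, not the speaker's actual method; the model, data, and names are placeholders.

```python
# Minimal sketch of influence-style dataset debugging (illustrative only):
# rank training examples by the dot product of their loss gradients with the
# loss gradient of a test prediction, then inspect the most extreme ones.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy setup: a linear classifier on random 2-D features (placeholder data).
model = nn.Linear(2, 2)
loss_fn = nn.CrossEntropyLoss()
X_train = torch.randn(100, 2)
y_train = torch.randint(0, 2, (100,))
x_test = torch.randn(1, 2)
y_test = torch.tensor([1])

def loss_grad(x, y):
    """Flattened gradient of the loss at (x, y) w.r.t. model parameters."""
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, list(model.parameters()))
    return torch.cat([g.reshape(-1) for g in grads])

g_test = loss_grad(x_test, y_test)

# Influence-style score: large negative values flag training examples that
# pull the model away from the test prediction and are worth inspecting
# (e.g., for label noise or annotation artifacts).
scores = torch.stack([
    torch.dot(loss_grad(X_train[i:i + 1], y_train[i:i + 1]), g_test)
    for i in range(len(X_train))
])

# Surface candidate "problematic" examples for human review.
worst = torch.argsort(scores)[:5]
print("Training examples to inspect:", worst.tolist())
```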
Byron Wallace is an associate professor and the director of the BS in Data Science program in the Khoury College of Computer Sciences at Northeastern University. His research is primarily in natural language processing (NLP) methods, with an emphasis on their application in health informatics. He develops language technologies to automate (or semi-automate) biomedical evidence synthesis, with a methodological focus on model interpretability; learning with limited supervision from diverse sources; human-in-the-loop/hybrid systems; and trustworthiness of model outputs.
Questions? Contact Janet Eke or Kanyao Han.
The Responsible Data Science and AI Speaker Series discusses topics such as equity, fairness, biases, ethics, and privacy. The presentations and discussions take place on Fridays, 9-10 am Central Time, on Zoom. This series is organized by Associate Professor Jana Diesner and supported by the Center for Informatics Research in Science and Scholarship (CIRSS) and the School of Information Sciences at the University of Illinois Urbana-Champaign.
If you are interested in this speaker series, please subscribe to our speaker series calendar: Google Calendar or Outlook Calendar.
This event is sponsored by the Center for Informatics Research in Science and Scholarship (CIRSS).