While traditional closed captions transcribe the spoken portion of a video, important visual and audio content may go unexpressed, to the detriment of audiences who depend on captions to understand the material being presented. With the increasing reliance on videos in online learning, this problem becomes even more pressing. A new collaborative project led by Assistant Professor Yun Huang will focus on explanatory captions, which give insight into a video's visual and audio content as well as the spoken word. Her project, "Advancing STEM Online Learning by Augmenting Accessibility with Explanatory Captions and AI," has received a three-year, $526,006 grant (totaling $849,994 with two collaborators at Gallaudet University and the University of Notre Dame) from the National Science Foundation (NSF).
"Explanatory captions have the potential to play a new role in STEM learning," said Huang. "This project will work to devise effective Q/A mechanisms and interaction designs, such as chatbots, that enable students and instructors to generate explanatory captions for STEM videos in a collaborative manner."
The proposed technologies will make videos more accessible to the deaf and hard-of-hearing (DHH) community and to non-native English speakers. Evaluation sites will include Gallaudet University, the world's only liberal arts university dedicated exclusively to educating DHH learners, and the University of Illinois Urbana-Champaign, which has the largest international student population among U.S. public universities.
Huang's research areas include social computing, human-computer interaction, mobile computing, and crowdsourcing. She received her PhD in information and computer science from the University of California, Irvine.