Paul Liang Presentation
Paul Liang, a PhD student in machine learning at CMU, will present "Foundations of Multisensory Artificial Intelligence."
Abstract: Building multisensory AI systems that learn from multiple sensory inputs—text, speech, video, real-world sensors, wearable devices, and medical data—holds great promise for impact in scientific areas with practical benefits, such as supporting human health and well-being, enabling multimedia content processing, and enhancing real-world autonomous agents. In this talk, I will discuss my research on the machine learning principles of multisensory intelligence, as well as practical methods for building multisensory foundation models across modalities and tasks. In the first half, I will present a theoretical framework formalizing how modalities interact with one another to give rise to new information for a task. These interactions are the basic building blocks of all multimodal problems, and quantifying them enables users to understand their multimodal datasets and design principled approaches to learn these interactions. In the second half, I will present my work on cross-modal attention and the multimodal transformer architectures that now underpin many of today's multimodal foundation models. Finally, I will discuss our collaborative efforts in scaling AI to many modalities and tasks for real-world impact in mental health, cancer prognosis, and robot control.
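For attendees unfamiliar with cross-modal attention, the sketch below illustrates the general idea in PyTorch: tokens from one modality (e.g., text) query representations from another (e.g., audio) so that each token can pull in relevant information from the other stream. This is a minimal illustrative example, not the specific architecture from the talk; the dimensions, class name, and modalities are placeholders.

# Minimal sketch of cross-modal attention (illustrative only).
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        # Queries come from modality A; keys and values come from modality B.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # a: (batch, len_a, dim), e.g., text token embeddings
        # b: (batch, len_b, dim), e.g., audio frame embeddings
        attended, _ = self.attn(query=a, key=b, value=b)
        return self.norm(a + attended)  # residual connection, then layer norm

# Example: fuse 10 text tokens with 50 audio frames.
text = torch.randn(2, 10, 256)
audio = torch.randn(2, 50, 256)
fused = CrossModalAttention()(text, audio)  # shape: (2, 10, 256)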
Bio: Paul Liang is a PhD student in machine learning at CMU, advised by Louis-Philippe Morency and Ruslan Salakhutdinov. He studies the machine learning foundations of multisensory intelligence to design practical AI systems that integrate, learn from, and interact with a diverse range of real-world sensory modalities. His work has been applied in affective computing, mental health, pathology, and robotics. He is a recipient of the Siebel Scholars Award, the Waibel Presidential Fellowship, the Facebook PhD Fellowship, the Center for ML and Health Fellowship, Rising Stars in Data Science, and three best paper or honorable mention awards at ICMI and NeurIPS workshops. Outside of research, he received the Alan J. Perlis Graduate Student Teaching Award for teaching courses on multimodal ML and advising students around the world in directed research.