Ismini Lourentzou presentation
Ismini Lourentzou, an assistant professor in the Department of Computer Science at Virginia Tech, will present "Advancements and Challenges in Multimodal Learning."
Abstract: As the field of Artificial Intelligence (AI) continues to evolve at an unprecedented pace, there is an urgent need to address several critical shortcomings of current computer vision and natural language understanding methods, spanning robustness, data privacy and scarcity, bias, explainability, and human-AI collaboration. These challenges have significant implications for the effectiveness and responsible deployment of AI systems, especially in safety-critical specialized domains such as healthcare and manufacturing. In this talk, I will cover some of my recent work in multimodal machine learning, highlighting advancements in representation learning, medical image analysis, and privacy-preserving data sharing. Finally, I will outline open research directions in human-agent collaboration and embodied intelligence.
Bio: Ismini Lourentzou is an assistant professor in the Department of Computer Science at Virginia Tech, where she leads the Perception and Language (PLAN) Lab. She is also a faculty member of the Sanghani Center for Artificial Intelligence and Data Analytics and an affiliate faculty member of the National Security Institute and the Center for Advanced Innovation in Agriculture. Her primary research focus is multimodal machine learning, particularly the intersection of vision and language in settings with limited supervision, with applications in healthcare, embodied AI, and other fields. She served on the NeurIPS'22 organizing committee, currently serves on the NeurIPS'23 organizing committee, and holds editorial and area chair roles for top-tier AI journals and conferences. Her research has received support from NSF, DARPA, CCI, and Amazon.
To attend virtually, email Christine Hopper for the Zoom link.