He receives grant to study how risk of foreign influence on media can be mitigated

Jingrui He, Professor and MSIM Program Director

The Department of Homeland Security has awarded Associate Professor Jingrui He a two-year, $319,568 grant to study how the risk of foreign influence on news media can be mitigated. Her project, "Towards a Computational Framework for Disinformation Trinity: Heterogeneity, Generation, and Explanation," will lead to a new suite of algorithms and software tools to detect, predict, generate, and understand disinformation dissemination. Hanghang Tong, associate professor of computer science at Illinois, will serve as co-principal investigator.

"As the 2020 decade unfolds, there is great optimism on what technology will emerge and how it can make daily life easier. However, the greater the technology, the greater risk foreign influence can have on that technology," He said.

For her project, He will study foreign influence through the lens of disinformation in news media, approached from a computational perspective. She will use Explainable Heterogeneous Adversarial Machine Learning (EXHALE) to address the limitations of current techniques in comprehension, characterization, and explainability.
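Adversarial machine learning, one ingredient named in the EXHALE framework, trains a model against deliberately perturbed inputs so that it remains reliable when content is crafted to evade detection. The sketch below is a generic, hypothetical illustration of that idea only: a minimal FGSM-style adversarial training loop for a toy disinformation classifier in PyTorch. The data, dimensions, and model are invented and do not represent the project's actual algorithms or software.

# Hypothetical illustration of adversarial training for a toy disinformation
# classifier; all names, dimensions, and data are invented for this sketch.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for document embeddings (e.g., averaged word vectors) and labels.
X = torch.randn(256, 64)            # 256 "articles", 64-dimensional features
y = torch.randint(0, 2, (256,))     # 0 = genuine, 1 = disinformation

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
epsilon = 0.1                       # perturbation budget

for epoch in range(5):
    # Forward/backward pass on clean inputs to obtain input gradients.
    X.requires_grad_(True)
    loss = loss_fn(model(X), y)
    opt.zero_grad()
    loss.backward()

    # FGSM-style perturbation: nudge inputs in the direction that increases loss.
    X_adv = (X + epsilon * X.grad.sign()).detach()
    X = X.detach()

    # Train on clean plus adversarial examples so the classifier stays robust.
    opt.zero_grad()
    total = loss_fn(model(X), y) + loss_fn(model(X_adv), y)
    total.backward()
    opt.step()
    print(f"epoch {epoch}: loss {total.item():.4f}")

Training on both the clean and the perturbed copies is what distinguishes adversarial training from ordinary supervised learning: the model is rewarded for classifying an article correctly even after it has been minimally altered to fool it.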

"The proposed techniques are expected to advance state-of-the-art in machine learning and AI. They are also expected to enhance the national resilience to foreign influence operations from multiple aspects, and thus help to mitigate the risk of foreign influence through the identification of messaging, tactics, target audience, and outreach," she said.

He's research focuses on heterogeneous machine learning, rare category analysis, active learning, and semi-supervised learning, with applications in social network analysis, healthcare, finance, and manufacturing processes. She earned her PhD in machine learning from Carnegie Mellon University.

Related News

Ocepek and Sanfilippo co-edit book on misinformation

Assistant Professor Melissa Ocepek and Assistant Professor Madelyn Rose Sanfilippo have co-edited a new book, Governing Misinformation in Everyday Knowledge Commons, which was recently published by Cambridge University Press. An open access edition of the book is available, thanks to support from the Governing Knowledge Commons Research Coordination Network (NSF 2017495). The new book explores the socio-technical realities of misinformation in a variety of online and offline everyday environments. 

Faculty receive support for AI-related projects from new pilot program

Associate Professor Yun Huang, Assistant Professor Jiaqi Ma, and Assistant Professor Haohan Wang have received computing resources from the National Artificial Intelligence Research Resource (NAIRR), a two-year pilot program led by the National Science Foundation in partnership with other federal agencies and nongovernmental partners. The goal of the pilot is to support AI-related research with particular emphasis on societal challenges. Last month, awardees presented their research at the NAIRR Pilot Annual Meeting.

iSchool participation in iConference 2025

The following iSchool faculty and students will participate in iConference 2025, which will be held virtually from March 11-14 and physically from March 18-22 in Bloomington, Indiana. The theme of this year's conference is "Living in an AI-gorithmic world."

Carboni joins the iSchool faculty

The iSchool is pleased to announce that Nicola Carboni has joined the faculty as an assistant professor. He previously served as a postdoctoral researcher and lecturer in digital humanities at the University of Geneva.

Youth-AI-Safety named a winning team in international hackathon

A team of researchers from the SALT (Social Computing Systems) Lab has been selected as a winner in an international hackathon hosted by the Berkeley Center for Responsible, Decentralized Intelligence. The LLM Agents MOOC Hackathon brought together over 3,000 students, researchers, and practitioners from 127 countries to build and showcase innovative work in large language model (LLM) agents, grow the AI agent community, and advance LLM agent technology.