New study shows LLMs respond differently based on user’s motivation


A new study by PhD student Michelle Bak and Assistant Professor Jessie Chin, recently published in the Journal of the American Medical Informatics Association (JAMIA), reveals how large language models (LLMs) respond to users in different motivational states. In their evaluation of ChatGPT, Google Bard, and Llama 2, three LLM-based generative conversational agents (GAs), the researchers found that while GAs can identify users' motivational states and provide relevant information when individuals have established goals, they are less likely to provide guidance when users are hesitant or ambivalent about changing their behavior.

Bak provides the example of an individual with diabetes who is resistant to changing their sedentary lifestyle.  

"If they were advised by a doctor that exercising would be necessary to manage their diabetes, it would be important to provide information through GAs that helps them increase an awareness about healthy behaviors, become emotionally engaged with the changes, and realize how their unhealthy habits might affect people around them. This kind of information can help them take the next steps toward making positive changes," said Bak.

Current GAs lack specific information about these processes, which puts such individuals at a health disadvantage. Conversely, for individuals who are already committed to changing their physical activity levels (e.g., those who have joined personal fitness training to manage chronic depression), GAs are able to provide relevant information and support.

"This major gap of LLMs in responding to certain states of motivation suggests future directions of LLMs research for health promotion," said Chin.

Bak's research goal is to develop digital health solutions that use natural language processing and psychological theories to promote preventive health behaviors. She earned her bachelor's degree in sociology from the University of California, Los Angeles.

Chin's research aims to translate social and behavioral science theories into the design of technologies and interactive experiences that promote health communication and behavior across the lifespan. She leads the Adaptive Cognition and Interaction Design (ACTION) Lab at the University of Illinois. Chin holds a BS in psychology from National Taiwan University, as well as an MS in human factors and a PhD in educational psychology, with a focus on cognitive science in teaching and learning, from the University of Illinois.

