School of Information Sciences

Illinois researchers examine teens’ use of generative AI, safety concerns


Teenagers use generative artificial intelligence for many purposes, including emotional support and social interactions. A study by University of Illinois Urbana-Champaign researchers found that parents have little understanding of GAI, how their children use it and its potential risks, and that GAI platforms offer insufficient protection to ensure children’s safety.

The research paper by Professor Yang Wang, the co-director of the Social Computing Systems Lab, and doctoral student Yaman Yu is one of the first published sources of data on the uses and risks of GAI for children. Wang and Yu will present their findings in May 2025 at the IEEE Symposium on Security and Privacy, the flagship computer security conference.

Wang and Yu said that while teens often use GAI platforms, little is known about how they use them, and their perceptions of the risks and their strategies for coping with them have not previously been explored by scholars.

The researchers analyzed the content of 712 posts and 8,533 comments on Reddit that were relevant to teenagers' use of GAI. They also interviewed seven teenagers and 13 parents to understand their perceptions of safety and how parents attempted to mitigate risk.

They found that teenagers often use GAI chatbots as therapy assistants or confidants to provide emotional support without judgment and help them cope with social challenges. AI chatbots are embedded into social media platforms such as Snapchat and Instagram, and teens incorporate them into group chats, use them to learn social skills, and sometimes treat them as romantic partners. They use GAI for academic purposes such as essay writing, rephrasing text, and generating ideas. Teens also posted on Reddit about requesting sexual or violent content and bullying AI chatbots.

"It's a very heated topic, with a lot of teenagers talking about Character AI and how they are using it," Yu said, referring to a platform for creating and interacting with character-based chatbots.

Wang and Yu reported that both parents and children had significant misconceptions about generative AI. Parents had little to no understanding of their children's use of GAI, and their exposure to the tools was limited. They were unaware of their children's use of tools such as Midjourney and DALL-E for image generation and Character AI. They viewed AI as a tool for homework and as functioning like a search engine, while children primarily used it for personal and social reasons, the researchers said.

Teenagers reported their concerns included becoming overly dependent or addicted to chatbots to fill a void in personal connections, the use of chatbots to create harassing content, unauthorized use of their personal information, and the spread of harmful content, such as racist remarks. They also were concerned about AI replacing human labor and about intellectual property infringement.

Parents perceived that AI platforms collect extensive data, such as user demographics, conversation history, and browser history, and they were concerned about children sharing personal or family information. However, parents "did not fully appreciate the extent of sensitive data their children might share with GAI … including details of personal traumas, medical records and private aspects of their social and sexual lives," the researchers wrote. Parents also were concerned about children inadvertently spreading misinformation and worried that overreliance on AI would lead their children to avoid critical thinking.

Parents said they want child-specific AI that is trained only with age-appropriate content or a system with embedded age- and topic-control features. Children said their parents don't advise them on specific uses of GAI and they want parents to discuss its ethical use rather than restrict it, the researchers reported.

GAI platforms provide limited protection to children, focus on restricting explicit content, and do not offer parental control features tailored to AI. Wang and Yu said both the risks to children and the strategies to mitigate them are more complex and nuanced than simply blocking inappropriate content. One key challenge in identifying and preventing inappropriate content on GAI platforms is their dynamic nature: unlike static online content, they generate unique content in real time, the researchers said.

The researchers said it is critical that the platforms provide transparent explanations of the security and privacy risks identified by experts, and they recommended that platforms offer content filters that can be tailored to individual families' needs and their children's developmental stages.

However, safety strategies can't be purely technical; they need to go beyond filtering and restrictions and recognize the tension between children's autonomy and parental control in managing online risks. Wang and Yu said adults first need to understand the motivations behind children's behaviors on GAI. They suggested a support chatbot that could provide a safe environment to explain potential risks, enhance resilience, and offer coping strategies to teenage users.

"AI technologies are evolving so quickly, and so are the ways people use them," Wang said. "There are some things we can learn from past domains, such as addiction and inappropriate behavior on social media and online gaming."

Wang said their research is a first step in addressing the problem. He and Yu are creating a taxonomy of risk categories that can be used to have conversations about the risks and about interventions to help mitigate them. It also will help identify early signals of risky behavior, such as the amount of time spent on a GAI platform, the content of conversations, and usage patterns like the time of day that children are using the platforms, Wang said.

He and Yu are working with Illinois psychology professor Karen Rudolph, the director of the Family Studies Lab whose research focuses on adolescent development, to establish age-appropriate interventions.

"This is a very cross-disciplinary topic, and we're trying to solve it in cross-disciplinary ways involving education, psychology and our knowledge of safety and risk management. It has to be a technical and a social interaction solution," Yu said.

The paper "Exploring Parent-Child Perceptions on Safety in Generative AI: Concerns, Mitigation Strategies, and Design Implications" is available online. DOI: 10.48550/arXiv.2406.10461


