Illinois researchers examine teens’ use of generative AI, safety concerns


Teenagers use generative artificial intelligence (GAI) for many purposes, including emotional support and social interactions. A study by University of Illinois Urbana-Champaign researchers found that parents have little understanding of GAI, how their children use it, and its potential risks, and that GAI platforms offer insufficient protections to ensure children's safety.

The research paper by Professor Yang Wang, the co-director of the Social Computing Systems Lab, and doctoral student Yaman Yu is one of the first published sources of data on the uses and risks of GAI for children. Wang and Yu will present their findings in May 2025 at the IEEE Symposium on Security and Privacy, the flagship computer security conference.

Wang and Yu said that while teens often use GAI platforms, little is known about how they use them, and scholars have not previously explored teens' perceptions of the risks or how they cope with them.

The researchers analyzed the content of 712 posts and 8,533 comments on Reddit that were relevant to teenagers' use of GAI. They also interviewed seven teenagers and 13 parents to understand their perceptions of safety and how parents attempted to mitigate risk.

They found that teenagers often use GAI chatbots as therapy assistants or confidants to provide emotional support without judgment and help them cope with social challenges. AI chatbots are embedded into social media platforms such as Snapchat and Instagram, and teens incorporate them into group chats, use them to learn social skills, and sometimes treat them as romantic partners. They use GAI for academic purposes such as essay writing, rephrasing text, and generating ideas. Teens also posted on Reddit about requesting sexual or violent content and bullying AI chatbots.

"It's a very heated topic, with a lot of teenagers talking about Character AI and how they are using it," Yu said, referring to a platform for creating and interacting with character-based chatbots.

Wang and Yu reported that both parents and children had significant misconceptions about generative AI. Parents had little to no understanding of their children's use of GAI, and their own exposure to the tools was limited. They were unaware of their children's use of image generation tools such as Midjourney and DALL-E, or of Character AI. They viewed AI as a tool for homework that functions like a search engine, while children primarily used it for personal and social reasons, the researchers said.

Teenagers reported concerns that included becoming overly dependent on or addicted to chatbots to fill a void in personal connections, the use of chatbots to create harassing content, unauthorized use of their personal information, and the spread of harmful content, such as racist remarks. They also were concerned about AI replacing human labor and about intellectual property infringement.

Parents perceived that AI platforms collect extensive data, such as user demographics, conversation history, and browser history, and they were concerned about children sharing personal or family information. However, parents "did not fully appreciate the extent of sensitive data their children might share with GAI … including details of personal traumas, medical records and private aspects of their social and sexual lives," the researchers wrote. Parents also were concerned about children inadvertently spreading misinformation and worried that overreliance on AI would lead their children to avoid critical thinking.

Parents said they want child-specific AI that is trained only with age-appropriate content or a system with embedded age- and topic-control features. Children said their parents don't advise them on specific uses of GAI and they want parents to discuss its ethical use rather than restrict it, the researchers reported.

GAI platforms provide limited protections for children, focusing on restricting explicit content, and do not offer parental control features tailored to AI. Wang and Yu said both the risks to children and the strategies to mitigate them are more complex and nuanced than simply blocking inappropriate content. A key challenge in identifying and preventing inappropriate content on GAI platforms is their dynamic nature: unlike static online content, they generate unique content in real time, the researchers said.

The researchers said it is critical that the platforms provide transparent explanations of the security and privacy risks identified by experts, and they recommended that platforms offer content filters that can be tailored to individual families' needs and their children's developmental stages.
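
To make that recommendation concrete, the sketch below shows one way a family-tailored filter could work. Because a chatbot's reply is generated fresh each time, the check has to run on the model's output rather than against a static blocklist of known-bad pages. The FamilyPolicy format, the topic labels, and the classify_topics helper are illustrative assumptions for this sketch, not features of any existing GAI platform or part of the researchers' design.

    # A minimal sketch of a family-tailored, age-aware content filter.
    # Policy format, topic labels, and classifier are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class FamilyPolicy:
        child_age: int
        blocked_topics: set[str] = field(default_factory=set)

        def allows(self, topics: set[str]) -> bool:
            # Age-tiered defaults layered under per-family choices.
            age_blocked: set[str] = set()
            if self.child_age < 18:
                age_blocked |= {"explicit"}
            if self.child_age < 13:
                age_blocked |= {"romance", "violence"}
            return not (topics & (age_blocked | self.blocked_topics))

    def classify_topics(text: str) -> set[str]:
        # Stand-in for a real-time classifier: generated replies are
        # unique, so they must be checked after generation.
        keyword_map = {"romantic": "romance", "fight": "violence"}
        return {label for word, label in keyword_map.items()
                if word in text.lower()}

    def filter_response(text: str, policy: FamilyPolicy) -> str:
        if policy.allows(classify_topics(text)):
            return text
        return "[response withheld under this family's safety settings]"

    # Example: a family that also blocks romantic role-play for a 12-year-old.
    policy = FamilyPolicy(child_age=12, blocked_topics={"romance"})
    print(filter_response("Here is a romantic story for you...", policy))

The design point of the sketch is the one the researchers make: the filter is a per-family policy applied to freshly generated text at response time, not a one-size-fits-all list of banned pages.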

However, safety strategies can't be purely technical; they need to go beyond filtering and restrictions, recognizing the tension between children's autonomy and parental control in managing online risks. Wang and Yu said adults first need to understand the motivations behind children's behaviors on GAI platforms. They suggested a support chatbot that could provide a safe environment for explaining potential risks, building resilience, and offering coping strategies to teenage users.

"AI technologies are evolving so quickly, and so are the ways people use them," Wang said. "There are some things we can learn from past domains, such as addiction and inappropriate behavior on social media and online gaming."

Wang said their research is a first step in addressing the problem. He and Yu are creating a taxonomy of risk categories that can be used to structure conversations about the risks and the interventions that help mitigate them. The taxonomy also will help identify early signals of risky behavior, including the amount of time spent on a GAI platform, the content of conversations, and usage patterns such as the time of day that children are using the platforms, Wang said.
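
As a rough illustration of how such early signals might be computed, the sketch below flags heavy daily use and late-night sessions from a log of chat sessions. The Session record and the thresholds are assumptions made for this illustration; they are not the researchers' taxonomy or any published model.

    # A minimal sketch of early-warning signals from GAI usage logs:
    # time on platform and time of day. Thresholds are hypothetical.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Session:
        start: datetime   # when the chat session began
        minutes: int      # how long it lasted

    def risk_signals(sessions: list[Session]) -> list[str]:
        signals = []
        total = sum(s.minutes for s in sessions)
        if total > 180:  # assumed threshold for heavy daily use
            signals.append(f"high time on platform: {total} minutes")
        late = [s for s in sessions if s.start.hour >= 23 or s.start.hour < 5]
        if late:
            signals.append(f"{len(late)} late-night session(s)")
        return signals

    day = [
        Session(datetime(2025, 5, 1, 23, 30), 120),
        Session(datetime(2025, 5, 2, 0, 45), 90),
    ]
    print(risk_signals(day))  # flags both total time and time of day

A production system would of course need signals the sketch omits, such as the content of conversations, which is harder to summarize than timing data.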

He and Yu are working with Illinois psychology professor Karen Rudolph, director of the Family Studies Lab, whose research focuses on adolescent development, to establish age-appropriate interventions.

"This is a very cross-disciplinary topic, and we're trying to solve it in cross-disciplinary ways involving education, psychology and our knowledge of safety and risk management. It has to be a technical and a social interaction solution," Yu said.

The paper "Exploring Parent-Child Perceptions on Safety in Generative AI: Concerns, Mitigation Strategies, and Design Implications" is available online. DOI: 10.48550/arXiv.2406.10461
