School of Information Sciences

Frank Stinar Preliminary Examination


PhD candidate Frank Stinar will present his dissertation proposal, "Modernizing and Decentralizing Ethics in Educational AI." His preliminary examination committee includes Associate Professor Nigel Bosch (Chair), Professor Dong Wang, Assistant Professor Ge Wang, and Assistant Professor Alison Duncan Kerr.

Abstract

Artificial intelligence (AI) systems are increasingly deployed in educational contexts, from adaptive learning platforms to predictive systems that identify at-risk students. While these technologies promise to enhance learning outcomes, they also raise critical ethical concerns about bias, fairness, and potential harm. Educational AI systems sit at the intersection of technical and social domains. Technical solutions for bias detection and mitigation exist, but they often fail to account for the diversity of scenarios in which educational AI systems are applied, operating within narrow definitions of fairness that may not align with how stakeholders understand these concepts. Purely technical or purely social solutions are therefore insufficient to mitigate possible harms. Without proper stakeholder agency and processes to translate social concerns into technical solutions, valuable insights from those most affected by educational AI systems are lost or ignored.

This proposal argues that ethical practices in educational AI must be both modernized and decentralized. Modernization entails adopting state-of-the-art methods for bias detection and mitigation and making these tools accessible to educational AI practitioners. Decentralization involves increasing stakeholder input and agency in the design, deployment, and evaluation of educational AI systems. I propose to move beyond technical solutions toward a more inclusive and participatory approach to ethics, placing ethical educational AI research on a continuum from technical solutions to social, stakeholder-centered approaches.

I begin by evaluating state-of-the-art bias mitigation methods for educational AI. I then examine the impacts of these technical solutions, revealing that unfairness mitigation methods can alter educational data in ways that raise concerns about procedural fairness. Shifting focus toward student perspectives, I analyze the real-world consequences of bias by examining how students who are English language learners experience different outcomes in adaptive learning software compared to non-English language learners. Finally, I promote stakeholder agency by surveying students and teachers about their data-sharing preferences and views on fairness across different educational AI contexts and levels of impact. The proposal concludes with a timeline for four studies to complete the collection of stakeholder perspectives on data-sharing preferences.

Questions? Contact Frank Stinar

School of Information Sciences

501 E. Daniel St.

MC-493

Champaign, IL

61820-6211

Voice: (217) 333-3280

Email: ischool@illinois.edu
