Clara Belitz's Dissertation Defense
PhD candidate Clara Belitz will present her dissertation defense, “Fair for whom? Investigating learning identity, algorithmic fairness, and educational technologies.” Belitz's dissertation committee includes Associate Professor Nigel Bosch (Chair), School of Information Sciences; Professor Jana Diesner, Technical University of Munich; Associate Professor Toby Beauchamp, Gender and Women's Studies; and Associate Professor Jessie Chin, School of Information Sciences.
Abstract
Artificial intelligence (AI), as a subset of information and communication technologies, is rapidly proliferating across every sector of society. In educational spaces, data-driven systems frequently take the form of adaptive tutoring systems and online learning environments. While such technologies can enhance human teaching practices, their integration raises critical questions about ethics, efficacy, and accountability. Debate around values is not new; goals often conflict with one another and must be balanced in optimization tasks (efficiency cannot come at the expense of ethical conduct, for example). Balancing the complementary goals of “accuracy,” measured by the (mis)match between observational labels and predicted outcomes, and “fairness,” measured by one or more statistical metrics, requires thoughtful and reflexive decisions about how researchers measure these outcomes.
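To make the measurement question concrete, the sketch below shows one common way these two quantities are operationalized for a binary classifier: accuracy as agreement between observed labels and predictions, and fairness via demographic parity difference, one standard statistical metric among many. The metric choice and the toy data are illustrative assumptions, not the specific measures used in the dissertation.

```python
import numpy as np

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the observed labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(y_true == y_pred))

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates across groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rates = [np.mean(y_pred[group == g]) for g in np.unique(group)]
    return float(max(rates) - min(rates))

# Toy example: observed labels, model predictions, and a group indicator.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

print(accuracy(y_true, y_pred))                      # 0.75
print(demographic_parity_difference(y_pred, group))  # 0.50
```

Here the model is equally accurate overall yet predicts positive outcomes for group “a” three times as often as for group “b,” illustrating how a single accuracy number can mask group-level disparities.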
This dissertation addresses questions of how best to measure fairness and equity, which are inherently malleable human constructs, focusing on intelligent tutoring systems. Employing a mixed-methods approach that draws on interviews, surveys, and interaction logs, I investigate fairness through three studies. First, I present evidence that students who described a learning identity in free-response survey data made more progress in AI-driven mathematics education software across the academic year. Because the students who benefited most were those who already saw themselves as learners, these results highlight how intelligent tutoring systems may exacerbate existing educational inequities. Second, I introduce a novel adaptation of algorithmic bias metrics that accounts for classroom-level statistical dependencies; these adapted metrics improve fairness assessments in group settings. Third, I share the results of interviews and a design activity with fifteen middle- and high-school students. Students described mixed experiences: while some valued AI-driven educational tools, others reported frustrations with both the technology itself and broader educational contexts. Students also broadly rejected the use of their demographic information for predictive purposes in an educational setting. The variety of student experiences underscores disparities in AI’s benefits and harms. I find that AI-driven educational technologies support some, but not all, learners. I conclude by advocating for the integration of fairness and justice principles into future research and development of AI-driven educational systems.
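As a rough illustration of why classroom-level dependencies matter for the second study’s motivation, the sketch below contrasts a pooled fairness estimate with classroom-by-classroom estimates. It is a hypothetical construction for intuition only; it is not the adapted metrics introduced in the dissertation, and the data are randomly generated.

```python
import numpy as np

def dp_difference(y_pred, group):
    """Gap in positive-prediction rates across groups (as defined above)."""
    rates = [np.mean(y_pred[group == g]) for g in np.unique(group)]
    return float(max(rates) - min(rates))

rng = np.random.default_rng(0)
# Hypothetical data: 6 classrooms of 20 students each, with random
# group membership and random binary predictions.
classroom = np.repeat(np.arange(6), 20)
group = rng.choice(["a", "b"], size=120)
y_pred = rng.integers(0, 2, size=120)

# A pooled estimate ignores that students are clustered within classrooms.
pooled = dp_difference(y_pred, group)

# Per-classroom estimates respect the clustering; their spread shows how
# much a single pooled number can hide.
per_class = [dp_difference(y_pred[classroom == c], group[classroom == c])
             for c in np.unique(classroom)]

print(f"pooled: {pooled:.2f}")
print(f"per classroom: mean {np.mean(per_class):.2f}, "
      f"range {min(per_class):.2f} to {max(per_class):.2f}")
```

Because students in the same classroom share a teacher, curriculum, and context, their outcomes are not statistically independent, which is the general problem the dissertation’s adapted metrics are described as addressing.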
Questions? Contact Clara Belitz.