Doctoral candidate Zachary Kilhoffer successfully defended his dissertation, "Human Factors in the Standardization of AI Governance: Improving the Design of Risk Management Standards for Ethical AI," on January 24, 2025.
His committee included Professor Yang Wang, Assistant Professor Madelyn Sanfilippo, Associate Professor Masooda Bashir, and Assistant Professor Jiaqi Ma.
Abstract:

This dissertation explores the standardization of AI governance, focusing on bridging the gap between academic research and practitioner needs. It examines two emerging standards, the NIST AI Risk Management Framework and ISO/IEC 42001, both of which adopt a risk management approach to AI governance. The research addresses three key questions: the system-level requirements for standardization to succeed in AI governance, the unit-level requirements for individual AI standards to be effective, and the design principles that enable AI risk standards to achieve better outcomes in human-centered AI systems. Through three empirical contributions, this work aims to enhance AI standards by considering practitioners, organizational contexts, and the standards themselves. It provides practical guidance for implementing AI standards and offers recommendations for refining them. By addressing these AI governance challenges, this dissertation contributes to the development of more effective, human-centered AI systems, aligning theoretical principles with practical implementation to aid practitioners in developing AI responsibly.