Child welfare agencies in the United States have looked to structured risk assessment tools over the past three decades as a means to achieve consistent, evidence-based, objective, unbiased, and defensible decision-making. These structured risk assessments act as data collection mechanisms and have further evolved into algorithmic decision-making tools in recent years. However, several of these algorithmic tools have uncritically reinforced biased theoretical constructs and predictors simply because the underlying data was readily available. For instance, algorithms embed pseudo-predictors that use a parent's response to interventions, rather than the nature and effectiveness of the interventions themselves, to assess the likelihood of maltreatment. There are also significant disparities between algorithmic assessments and the narrative coding from case notes; that is, algorithms do not even mirror caseworkers' notes from their interactions with families. Algorithms create a mirage of evidence-based and unbiased practice, yet they can easily embed underlying power structures, conceal biases, and introduce ambiguity into decision-making, which has serious implications for child-welfare practice. Why haven't these algorithms lived up to expectations? And how might we be able to improve them? A human-centered approach to algorithm design can help us evaluate algorithmic decision-making systems in complex socio-political domains and uncover such disparities.
Co-organizers: MS/LIS student Sarah Appedu and Affiliate Professor Lisa Janicke Hinchliffe, professor and coordinator for information literacy services and instruction in the University Library
Questions? Contact Sarah Appedu.
This event is sponsored by the HRI Research Cluster AI & Society: Privacy, Ethics and (Dis)Information.