Professor and MS/IM Program Director Michael Twidale will give the talk, "'Don’t Go There': Human Algorithms, Transparency, Tacitness, the Enfant Terrible, and Conventions of Not Talking About Certain Things."
Abstract: There is a growing concern about how algorithms are playing a role throughout our lives – and that we don’t really know what they are, what they do, or even how they work. Researchers are exploring the idea of algorithmic transparency and self-explaining systems. That is all very wise and good - but too difficult for me. So I’ve decided to do something a bit simpler. Just like a beginning programmer writing a program that says "Hello World," I have decided to explore some rather simple algorithms that get uploaded into neural networks and executed by the information processing units called humans. These humans can then be interrogated about how, why, when, and whether they run their algorithms – and when they secretly switch algorithms. Just as in debugging a program, we can run the algorithm on test datasets. And just as in good debugging, the real art is to devise good test datasets that explicitly probe for exceptions and edge cases that may cause an embarrassing crash. The results can be rather thought-provoking. We may make transparent things that people would rather leave unsaid. Too bad: once we start needing to teach ethics to robots, we need to de-tacit our algorithms. And I rather fear that in the process we may find our robots have higher ethical standards than we humans. Are you ready to be disapproved of by a robot?
Twidale's research interests include computer-supported cooperative work, computer-supported collaborative learning, human-computer interaction, search-based technical problem solving, museum informatics, and sociotechnical systems design. His hobbies include complaining about simplistic ethics posturing, asking awkward questions, and futilely attempting to apply logical rigor to human activity.
This event is sponsored by CIRSS.