Sharad Goel
Thursday, September 13, 2018

Quantifying Bias in Machine Decisions

(Cosponsored with the Department of Information Science and CRDDS)

Abstract: Machine learning algorithms are increasingly used to guide decisions by human experts, including judges, doctors, and managers. Researchers and policymakers, however, have raised concerns that these systems might inadvertently exacerbate societal biases. To measure and mitigate such potential bias, there has recently been an explosion of competing mathematical definitions of what it means for an algorithm to be fair. I'll argue, however, that the most prominent of these definitions suffer from subtle shortcomings that can lead to serious adverse consequences when used as design objectives. I'll instead advocate for a simple alternative framework for equitable decision making, drawing on ideas from statistics, economics, and legal theory.

Bio: Sharad Goel is a researcher in computational social science, an emerging discipline at the intersection of computer science, statistics, and the social sciences. He strives in particular to apply modern computational and statistical techniques to understand and improve public policy. His work spans topics including stop-and-frisk policing, algorithmic fairness, swing voters, and filter bubbles, and has appeared in venues including KDD, the Annals of Applied Statistics, and Public Opinion Quarterly.