Algorithmic Bias: The hard problem
Published May 23, 2018
Machine learning is used to make consequential decisions about people: in hiring, criminal justice, insurance, and other domains. By default, these algorithmic systems will learn — and reproduce — the societal biases found in their training data. This concern animates the emerging field of fairness in machine learning. Ensuring algorithmic fairness will be hard. The first reason is that there are many intuitively appealing fairness criteria, and recently discovered mathematical theorems show that these criteria cannot all be satisfied simultaneously. The second reason comes from domains such as natural-language processing and computer vision: machine learning models have proven surprisingly accurate at extracting gender, racial, and other biases found in language and image corpora, and we lack a good way to characterize which biases are undesirable and which constitute valuable knowledge. In this talk I’ll explain these “hard problems” and discuss ways to make progress.
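As a concrete illustration of the incompatibility the abstract mentions (not taken from the talk itself), consider the well-known identity relating a classifier’s false positive rate (FPR), false negative rate (FNR), positive predictive value (PPV), and a group’s base rate p. If two groups have different base rates, equalizing PPV and FNR across them forces their FPRs to differ. The numbers below are purely hypothetical:

```python
# Identity (assumed here for illustration):
#   FPR = p / (1 - p) * (1 - PPV) / PPV * (1 - FNR)
# It implies that with unequal base rates, a classifier cannot
# equalize PPV, FNR, and FPR across groups at the same time.

def implied_fpr(p, ppv, fnr):
    """FPR forced by the identity, given base rate p, PPV, and FNR."""
    return p / (1 - p) * (1 - ppv) / ppv * (1 - fnr)

# Hypothetical settings: same PPV and FNR for both groups,
# but different base rates (30% vs. 50%).
ppv, fnr = 0.7, 0.2
fpr_a = implied_fpr(0.3, ppv, fnr)  # group A, base rate 0.3
fpr_b = implied_fpr(0.5, ppv, fnr)  # group B, base rate 0.5
print(round(fpr_a, 3), round(fpr_b, 3))  # → 0.147 0.343
```

Holding PPV and FNR fixed, the group with the higher base rate necessarily incurs a higher false positive rate — the arithmetic leaves no room to equalize all three quantities at once.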
Arvind Narayanan, Assistant Professor in the Department of Computer Science and the Center for Information Technology Policy at Princeton University, presents this talk at the Data Science Institute’s Data for Good series.