Algorithmic Bias: The Hard Problem

Published on 23 May 2018

Created on 23 May 2018

Machine learning is used to make consequential decisions about people: in hiring, criminal justice, insurance, and other domains. By default, these algorithmic systems will learn — and reproduce — the societal biases found in their training data. This concern animates the emerging field of fairness in machine learning. Ensuring algorithmic fairness will be hard. The first reason is that there are many intuitively desirable fairness desiderata, and recently discovered mathematical theorems show that these criteria, however desirable, are incompatible with each other. The second reason comes from domains such as natural-language processing and computer vision: machine learning models have proven surprisingly accurate at extracting gender, racial, and other biases found in language and image corpora, and we lack a good way to characterize which biases are undesirable and which ones constitute valuable knowledge. In this talk I’ll explain these “hard problems” and discuss ways to make progress.
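To make the first point concrete, here is a minimal numeric sketch in Python (not taken from the talk) of one such incompatibility. It relies on the identity FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR), which ties together a group's base rate p, precision (PPV), false negative rate (FNR), and false positive rate (FPR), as highlighted in Chouldechova's 2017 analysis of recidivism prediction: if two groups have different base rates, a classifier cannot satisfy predictive parity (equal PPV) and error-rate balance (equal FNR and FPR) at the same time. The base rates and error rates below are made-up illustrative values.

# Minimal sketch: predictive parity (equal PPV) and error-rate balance
# (equal FPR and FNR) cannot coexist when base rates differ.
# Uses the identity FPR = p / (1 - p) * (1 - PPV) / PPV * (1 - FNR).
# All numbers are illustrative assumptions, not real data.

def implied_fpr(base_rate: float, ppv: float, fnr: float) -> float:
    """False positive rate forced by a group's base rate, PPV, and FNR."""
    return base_rate / (1 - base_rate) * (1 - ppv) / ppv * (1 - fnr)

# Same PPV and FNR for both groups, but different base rates.
ppv, fnr = 0.7, 0.3
fpr_group_a = implied_fpr(0.4, ppv, fnr)  # base rate 40% -> FPR = 0.200
fpr_group_b = implied_fpr(0.2, ppv, fnr)  # base rate 20% -> FPR = 0.075

print(f"Group A FPR: {fpr_group_a:.3f}")
print(f"Group B FPR: {fpr_group_b:.3f}")
# Making the FPRs equal as well would force the base rates to be equal,
# so all three criteria cannot hold simultaneously.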

Arvind Narayanan, Assistant Professor in the Department of Computer Science and the Center for Information Technology Policy at Princeton University, presents this talk as part of the Data Science Institute's Data for Good series.

AUTHORS

Arvind Narayanan

Assistant Professor, Princeton

Posted by

Nozha Boujemaa

Research Director (DR), Inria

Add a resource

You too can take part in advancing the transparency of algorithms and data by adding resources.

Technical biases (experimental evaluation, code verification, data veracity, etc.)

Data, Responsibly

A Survey of Collaborative Filtering Techniques

Battling Algorithmic Bias
