Designing AI Systems that Obey Our Laws and Values

Published on July 6, 2018

Created on July 6, 2018

Operational AI systems (for example, self-driving cars) need to obey both the law of the land and our values. We propose AI oversight systems ("AI Guardians") as an approach to addressing this challenge, and to respond to the potential risks associated with increasingly autonomous AI systems. These AI oversight systems serve to verify that operational systems do not stray unduly from the guidelines of their programmers, and to bring them back into compliance if they do stray. The introduction of such second-order, oversight systems is not meant to suggest strict, powerful, or rigid (from here on "strong") controls. Operational systems need a great degree of latitude in order to follow the lessons of their learning from additional data mining and experience, and to be able to render at least semi-autonomous decisions (more about this later). However, all operational systems need some boundaries, both in order not to violate the law and to adhere to ethical norms. Developing such oversight systems, AI Guardians, is a major new mission for the AI community.
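To make the division of labor between the two layers concrete, here is a minimal sketch in Python. It is not from the article; all names (OperationalSystem, Guardian, speed_limit_kmh, and so on) are hypothetical. It only illustrates the idea of a second-order system that leaves the operational system wide latitude and intervenes solely when a proposed action crosses an explicit legal boundary.

```python
"""Illustrative sketch of an "AI Guardian" wrapping an operational system.

All class and parameter names here are hypothetical; the example shows a
second-order system that checks proposed actions against an explicit bound
(a legal speed limit) and corrects them only when they stray.
"""

from dataclasses import dataclass


@dataclass
class Action:
    """An action proposed by the operational system: a target speed in km/h."""
    target_speed_kmh: float


class OperationalSystem:
    """First-order system: decides freely, e.g. from a learned driving policy."""

    def propose(self, desired_speed_kmh: float) -> Action:
        # In a real system this would come from the system's own learned policy.
        return Action(target_speed_kmh=desired_speed_kmh)


class Guardian:
    """Second-order oversight system: enforces boundaries, not detailed control."""

    def __init__(self, speed_limit_kmh: float):
        self.speed_limit_kmh = speed_limit_kmh

    def review(self, action: Action) -> Action:
        if action.target_speed_kmh > self.speed_limit_kmh:
            # Bring the operational system back into compliance.
            return Action(target_speed_kmh=self.speed_limit_kmh)
        return action  # Within bounds: no interference.


if __name__ == "__main__":
    car = OperationalSystem()
    guardian = Guardian(speed_limit_kmh=130.0)

    proposed = car.propose(desired_speed_kmh=150.0)
    approved = guardian.review(proposed)
    print(f"proposed {proposed.target_speed_kmh} km/h, "
          f"approved {approved.target_speed_kmh} km/h")
```

The design point of the sketch is that the guardian never dictates what the operational system should do; it only clips actions that fall outside an explicitly stated boundary, which is the "boundaries rather than strong controls" relationship the abstract describes.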

AUTHORS

Amitai Etzioni

University Professor of Sociology at The George Washington University

Oren Etzioni

CEO of the Allen Institute for Artificial Intelligence, Seattle, WA, and a Professor of Computer Science at the University of Washington

Posted by

Nozha Boujemaa

Research director
