Designing AI Systems that Obey Our Laws and Values
Published on July 6, 2018
Operational AI systems (for example, self-driving cars) need to obey both the law of the land and our values. We propose AI oversight systems ("AI Guardians") as an approach to addressing this challenge, and to respond to the potential risks associated with increasingly autonomous AI systems. These AI oversight systems serve to verify that operational systems did not stray unduly from the guidelines of their programmers and to bring them back into compliance if they do stray. The introduction of such second-order, oversight systems is not meant to suggest strict, powerful, or rigid (from here on 'strong') controls. Operational systems need a great degree of latitude in order to follow the lessons of their learning from additional data mining and experience and to be able to render at least semi-autonomous decisions (more about this later). However, all operational systems need some boundaries, both in order to not violate the law and to adhere to ethical norms. Developing such oversight systems, AI Guardians, is a major new mission for the AI community.
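The separation described above can be illustrated with a minimal sketch: a first-order operational system proposes actions freely, while a second-order guardian only reviews those proposals against a legal bound and brings out-of-bounds proposals back into compliance. All names here (`OperationalSystem`, `Guardian`, the speed-limit rule) are hypothetical assumptions for illustration, not part of the article.

```python
from dataclasses import dataclass, field

@dataclass
class OperationalSystem:
    """First-order system: proposes actions autonomously, unconstrained."""
    def propose_speed(self, desired: float) -> float:
        return desired  # wide latitude; no bounds checked here

@dataclass
class Guardian:
    """Second-order oversight system: it does not plan actions itself.
    It only verifies proposals against a bound (here, a legal speed
    limit) and corrects proposals that stray, logging each override."""
    speed_limit: float
    log: list = field(default_factory=list)

    def review(self, proposed: float) -> float:
        if proposed > self.speed_limit:
            # bring the operational system back into compliance
            self.log.append(("override", proposed, self.speed_limit))
            return self.speed_limit
        return proposed  # within bounds: latitude is preserved

car = OperationalSystem()
guardian = Guardian(speed_limit=65.0)
print(guardian.review(car.propose_speed(80.0)))  # capped at 65.0
print(guardian.review(car.propose_speed(50.0)))  # unchanged, 50.0
```

Note that the guardian is deliberately weak rather than 'strong': it never generates behavior, it only enforces the boundary, which matches the article's point that operational systems retain latitude within limits.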