Governance and auditability of algorithms

Mobilitics, Season 2: Smartphones and their apps under the microscope of the CNIL and Inria

PDF Posted by: Nozha Boujemaa

The CNIL and Inria have been working for three years now on an ambitious research and innovation project named Mobilitics. Its goal: to better understand smartphones, devices used daily by tens of millions of French people yet which remain genuine black boxes for users, researchers, and regulators alike. And yet these "friends who mean us well" are extraordinary producers and consumers of personal data. From a research standpoint, they perfectly embody the issues at the heart of the work of Inria's Privatics team: understanding the technical mechanisms surrounding personal data and designing privacy-preserving technical solutions. A tool capable of detecting access to personal data on these devices (location, photos, address book) was therefore developed, refined, and put to the test. After a first wave of tests in 2013, a "second season" of Mobilitics took place during the summer of 2014. The initial results presented in this newsletter illustrate the value of the partnership between Inria and the CNIL: tools imagined and built together are used by both institutions, each in its own role. For the CNIL, the aim is to better understand what actually happens when these devices are used, in order to set priorities for action and issue recommendations. For Inria, the aim is also to push the technical investigations and analyses further and to develop solutions that better protect users. This work thus gives both institutions an opportunity to share their analyses and open questions. Indeed, while these technologies offer extraordinary services to individuals and benefit society, they can only develop in a way that respects privacy and individual liberties.
Making technology more transparent and more understandable to citizens is a challenge shared by researchers and the regulator.

Statement on Algorithmic Transparency and Accountability

PDF Posted by: Nozha Boujemaa

Computer algorithms are widely employed throughout our economy and society to make decisions that have far-reaching impacts, including their applications for education, access to credit, healthcare, and employment. The ubiquity of algorithms in our everyday lives is an important reason to focus on addressing challenges associated with the design and technical aspects of algorithms and preventing bias from the outset.

Big Data: A Tool for Inclusion or Exclusion?

PDF Posted by: Nozha Boujemaa

We are in the era of big data. With a smartphone now in nearly every pocket, a computer in nearly every household, and an ever-increasing number of Internet-connected devices in the marketplace, the amount of consumer data flowing throughout the economy continues to increase rapidly. The analysis of this data is often valuable to companies and to consumers, as it can guide the development of new products and services, predict the preferences of individuals, help tailor services and opportunities, and guide individualized marketing. At the same time, advocates, academics, and others have raised concerns about whether certain uses of big data analytics may harm consumers, particularly low-income and underserved populations. To explore these issues, the Federal Trade Commission ("FTC" or "the Commission") held a public workshop, Big Data: A Tool for Inclusion or Exclusion?, on September 15, 2014. The workshop brought together stakeholders to discuss both the potential of big data to create opportunities for consumers and to exclude them from such opportunities. The Commission has synthesized the information from the workshop, a prior FTC seminar on alternative scoring products, and recent research to create this report. Though "big data" encompasses a wide range of analytics, this report addresses only the commercial use of big data consisting of consumer information and focuses on the impact of big data on low-income and underserved populations. Of course, big data also raises a host of other important policy issues, such as notice, choice, and security, among others. Those, however, are not the primary focus of this report. As "little" data becomes "big" data, it goes through several phases. The life cycle of big data can be divided into four phases: (1) collection; (2) compilation and consolidation; (3) analysis; and (4) use.
This report focuses on the fourth phase and discusses the benefits and risks created by the use of big data analytics; the consumer protection and equal opportunity laws that currently apply to big data; research in the field of big data; and lessons that companies should take from the research. Ultimately, this report is intended to educate businesses on important laws and research that are relevant to big data analytics and to provide suggestions aimed at maximizing its benefits and minimizing its risks.

Learning to trust artificial intelligence systems

PDF Posted by: Nozha Boujemaa

For more than 100 years, we at IBM have been in the business of building machines designed to help improve the effectiveness and efficiency of people. And we've made measurable improvements to many of the systems that facilitate life on this planet. But we've never known a technology that can have a greater benefit to all of society than artificial intelligence. At IBM, we are guided by the term "augmented intelligence" rather than "artificial intelligence." This vision of "AI" is the critical difference between systems that enhance, improve and scale human expertise, and those that attempt to replicate human intelligence. The ability of AI systems to transform vast amounts of complex, ambiguous information into insight has the potential to reveal long-held secrets and help solve some of the world's most enduring problems. AI systems can potentially be used to help discover insights to treat disease, predict the weather, and manage the global economy. It is an undeniably powerful tool. And like all powerful tools, it demands great care in its development and deployment. To reap the societal benefits of AI systems, we will first need to trust them. The right level of trust will be earned through repeated experience, in the same way we learn to trust that an ATM will register a deposit, or that an automobile will stop when the brake is applied. Put simply, we trust things that behave as we expect them to. But trust will also require a system of best practices that can help guide the safe and ethical management of AI systems, including alignment with social norms and values; algorithmic responsibility; compliance with existing legislation and policy; assurance of the integrity of the data, algorithms and systems; and protection of privacy and personal information. We consider this paper to be part of the global conversation on the need for safe, ethical and socially beneficial management of AI systems.
To facilitate this dialogue, we are in the process of building an active community of thoughtful, informed thinkers who can evolve the ideas herein. There is too much to gain from AI systems to let myth and misunderstanding steer us off course. And while we don't have all the answers yet, we're confident that together we can address the concerns of the few to the benefit of many.

The National artificial intelligence research and development strategic plan

PDF Posted by: Nozha Boujemaa

Artificial intelligence (AI) is a transformative technology that holds promise for tremendous societal and economic benefit. AI has the potential to revolutionize how we live, work, learn, discover, and communicate. AI research can further our national priorities, including increased economic prosperity, improved educational opportunities and quality of life, and enhanced national and homeland security. Because of these potential benefits, the U.S. government has invested in AI research for many years. Yet, as with any significant technology in which the Federal government has interest, there are not only tremendous opportunities but also a number of considerations that must be taken into account in guiding the overall direction of Federally-funded R&D in AI. On May 3, 2016, the Administration announced the formation of a new NSTC Subcommittee on Machine Learning and Artificial Intelligence, to help coordinate Federal activity in AI. This Subcommittee, on June 15, 2016, directed the Subcommittee on Networking and Information Technology Research and Development (NITRD) to create a National Artificial Intelligence Research and Development Strategic Plan. A NITRD Task Force on Artificial Intelligence was then formed to define the Federal strategic priorities for AI R&D, with particular attention on areas that industry is unlikely to address. This National Artificial Intelligence R&D Strategic Plan establishes a set of objectives for Federally-funded AI research, both research occurring within the government as well as Federally-funded research occurring outside of government, such as in academia. The ultimate goal of this research is to produce new AI knowledge and technologies that provide a range of positive benefits to society, while minimizing the negative impacts. To achieve this goal, this AI R&D Strategic Plan identifies the following priorities for Federally-funded AI research:
Strategy 1: Make long-term investments in AI research. Prioritize investments in the next generation of AI that will drive discovery and insight and enable the United States to remain a world leader in AI.
Strategy 2: Develop effective methods for human-AI collaboration. Rather than replace humans, most AI systems will collaborate with humans to achieve optimal performance. Research is needed to create effective interactions between humans and AI systems.
Strategy 3: Understand and address the ethical, legal, and societal implications of AI. We expect AI technologies to behave according to the formal and informal norms to which we hold our fellow humans. Research is needed to understand the ethical, legal, and social implications of AI, and to develop methods for designing AI systems that align with ethical, legal, and societal goals.
Strategy 4: Ensure the safety and security of AI systems. Before AI systems are in widespread use, assurance is needed that the systems will operate safely and securely, in a controlled, well-defined, and well-understood manner. Further progress in research is needed to address this challenge of creating AI systems that are reliable, dependable, and trustworthy.
Strategy 5: Develop shared public datasets and environments for AI training and testing. The depth, quality, and accuracy of training datasets and resources significantly affect AI performance. Researchers need to develop high-quality datasets and environments and enable responsible access to high-quality datasets as well as to testing and training resources.
Strategy 6: Measure and evaluate AI technologies through standards and benchmarks. Essential to advancements in AI are standards, benchmarks, testbeds, and community engagement that guide and evaluate progress in AI. Additional research is needed to develop a broad spectrum of evaluative techniques.
Strategy 7: Better understand the national AI R&D workforce needs. Advances in AI will require a strong community of AI researchers. An improved understanding of current and future R&D workforce demands in AI is needed to help ensure that sufficient AI experts are available to address the strategic R&D areas outlined in this plan.
The AI R&D Strategic Plan closes with two recommendations:
Recommendation 1: Develop an AI R&D implementation framework to identify S&T opportunities and support effective coordination of AI R&D investments, consistent with Strategies 1-6 of this plan.
Recommendation 2: Study the national landscape for creating and sustaining a healthy AI R&D workforce, consistent with Strategy 7 of this plan.

A Survey of Collaborative Filtering Techniques

PDF Posted by: Dominique Cardon

As one of the most successful approaches to building recommender systems, collaborative filtering (CF) uses the known preferences of a group of users to make recommendations or predictions of the unknown preferences of other users. In this paper, we first introduce CF tasks and their main challenges, such as data sparsity, scalability, synonymy, gray sheep, shilling attacks, privacy protection, etc., and their possible solutions. We then present three main categories of CF techniques: memory-based, model-based, and hybrid CF algorithms (which combine CF with other recommendation techniques), with examples of representative algorithms in each category, and analysis of their predictive performance and their ability to address the challenges. From basic techniques to the state of the art, we attempt to present a comprehensive survey of CF techniques, which can serve as a roadmap for research and practice in this area.
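To make the memory-based family surveyed here concrete, the sketch below implements user-based neighborhood CF: it predicts a missing rating as a similarity-weighted average over the most similar users who rated the item. The rating matrix, function names, and neighborhood size `k` are invented for illustration; this is a minimal sketch of the general technique, not code from the paper.

```python
from math import sqrt

# Toy user-item rating matrix (0 = unrated); rows are users, columns are items.
# The data are illustrative only.
RATINGS = [
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 4, 5],
    [1, 0, 0, 4],
    [0, 1, 5, 4],
]

def cosine_sim(a, b):
    """Cosine similarity computed over co-rated items only."""
    pairs = [(x, y) for x, y in zip(a, b) if x > 0 and y > 0]
    if not pairs:
        return 0.0
    dot = sum(x * y for x, y in pairs)
    na = sqrt(sum(x * x for x, _ in pairs))
    nb = sqrt(sum(y * y for _, y in pairs))
    return dot / (na * nb) if na and nb else 0.0

def predict(user, item, k=2):
    """Predict a missing rating as the similarity-weighted average of the
    ratings given by the k most similar users who rated the item."""
    neighbours = sorted(
        ((cosine_sim(RATINGS[user], RATINGS[other]), other)
         for other in range(len(RATINGS))
         if other != user and RATINGS[other][item] > 0),
        reverse=True,
    )[:k]
    num = sum(sim * RATINGS[other][item] for sim, other in neighbours)
    den = sum(abs(sim) for sim, _ in neighbours)
    return num / den if den else 0.0

print(round(predict(user=0, item=2), 2))  # -> 4.56
```

The data-sparsity challenge named in the abstract is visible even here: similarities are computed only over co-rated items, so users with little overlap contribute weak or zero evidence. Model-based CF addresses this by fitting latent factors (e.g. matrix factorization) instead of comparing raw rows.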

EU regulations on algorithmic decision-making and a “right to explanation”

PDF Posted by: Dominique Cardon

We summarize the potential impact that the European Union's new General Data Protection Regulation will have on the routine use of machine learning algorithms. Slated to take effect as law across the EU in 2018, it will restrict automated individual decision-making (that is, algorithms that make decisions based on user-level predictors) which "significantly affect" users. The law will also create a "right to explanation," whereby a user can ask for an explanation of an algorithmic decision that was made about them. We argue that while this law will pose large challenges for industry, it highlights opportunities for machine learning researchers to take the lead in designing algorithms and evaluation frameworks which avoid discrimination.

Toward an Ethics of Algorithms: Convening, Observation, Probability, and Timeliness

PDF Posted by: Dominique Cardon

Part of understanding the meaning and power of algorithms means asking what new demands they might make of ethical frameworks, and how they might be held accountable to ethical standards. I develop a definition of networked information algorithms (NIAs) as assemblages of institutionally situated code, practices, and norms with the power to create, sustain, and signify relationships among people and data through minimally observable, semiautonomous action. Starting from Merrill's prompt to see ethics as the study of "what we ought to do," I examine ethical dimensions of contemporary NIAs. Specifically, in an effort to sketch an empirically grounded, pragmatic ethics of algorithms, I trace an algorithmic assemblage's power to convene constituents, suggest actions based on perceived similarity and probability, and govern the timing and timeframes of ethical action.

Algorithms, Governance, and Governmentality: On Governing Academic Writing

PDF Posted by: Dominique Cardon

Algorithms, or rather algorithmic actions, are seen as problematic because they are inscrutable, automatic, and subsumed in the flow of daily practices. Yet, they are also seen to be playing an important role in organizing opportunities, enacting certain categories, and doing what David Lyon calls "social sorting." Thus, there is a general concern that this increasingly prevalent mode of ordering and organizing should be governed more explicitly. Some have argued for more transparency and openness, others have argued for more democratic or value-centered design of such actors. In this article, we argue that governing practices—of, and through algorithmic actors—are best understood in terms of what Foucault calls governmentality. Governmentality allows us to consider the performative nature of these governing practices. They allow us to show how practice becomes problematized, how calculative practices are enacted as technologies of governance, how such calculative practices produce domains of knowledge and expertise, and finally, how such domains of knowledge become internalized in order to enact self-governing subjects. In other words, it allows us to show the mutually constitutive nature of problems, domains of knowledge, and subjectivities enacted through governing practices. In order to demonstrate this, we present attempts to govern academic writing with a specific focus on the algorithmic action of Turnitin.

Can an Algorithm be Agonistic? Ten Scenes from Life in Calculated Publics

PDF Posted by: Dominique Cardon

This paper explores how political theory may help us map algorithmic logics against different visions of the political. Drawing on Chantal Mouffe's theories of agonistic pluralism, this paper depicts algorithms in public life in ten distinct scenes, in order to ask the question, what kinds of politics do they instantiate? Algorithms are working within highly contested online spaces of public discourse, such as YouTube and Facebook, where incompatible perspectives coexist. Yet algorithms are designed to produce clear "winners" from information contests, often with little visibility or accountability for how those contests are designed. In isolation, many of these algorithms seem the opposite of agonistic: much of the complexity of search, ranking, and recommendation algorithms is nonnegotiable and kept far from view, inside an algorithmic "black box." But what if we widen our perspective? This paper suggests agonistic pluralism as both a design ideal for engineers and a provocation to understand algorithms in a broader social context: rather than focusing on the calculations in isolation, we need to account for the spaces of contestation where they operate.

Bearing Account-able Witness to the Ethical Algorithmic System

PDF Posted by: Dominique Cardon

This paper explores how accountability might make otherwise obscure and inaccessible algorithms available for governance. The potential import and difficulty of accountability is made clear in the compelling narrative reproduced across recent popular and academic reports. Through this narrative we are told that algorithms trap us and control our lives, undermine our privacy, have power and an independent agential impact, at the same time as being inaccessible, reducing our opportunities for critical engagement. The paper suggests that STS sensibilities can provide a basis for scrutinizing the terms of the compelling narrative, disturbing the notion that algorithms have a single, essential characteristic and a predictable power or agency. In place of taking for granted the terms of the compelling narrative, ethnomethodological work on sense-making accounts is drawn together with more conventional approaches to accountability focused on openness and transparency. The paper uses empirical material from a study of the development of an "ethical," "smart" algorithmic video-surveillance system. The paper introduces the "ethical" algorithmic surveillance system, the approach to accountability developed, and some of the challenges of attempting algorithmic accountability in action. The paper concludes with reflections on future questions of algorithms and accountability.

Why Map Issues? On Controversy Analysis as a Digital Method

PDF Posted by: Dominique Cardon

This article takes stock of recent efforts to implement controversy analysis as a digital method in the study of science, technology, and society (STS) and beyond, and outlines a distinctive approach to address the problem of digital bias. Digital media technologies exert significant influence on the enactment of controversy in online settings, and this risks undermining the substantive focus of controversy analysis conducted by digital means. To address this problem, I propose a shift in thematic focus from controversy analysis to issue mapping. The article begins by distinguishing between three broad frameworks that currently guide the development of controversy analysis as a digital method, namely, demarcationist, discursive, and empiricist. Each has been adopted in STS, but only the last one offers a digital "move beyond impartiality." I demonstrate this approach by analyzing issues of Internet governance with the aid of the social media platform Twitter.

Governing Algorithms: Myth, Mess, and Methods

PDF Posted by: Dominique Cardon

Algorithms have developed into something of a modern myth. On the one hand, they have been depicted as powerful entities that rule, sort, govern, shape, or otherwise control our lives. On the other hand, their alleged obscurity and inscrutability make it difficult to understand what exactly is at stake. What sustains their image as powerful yet inscrutable entities? And how to think about the politics and governance of something that is so difficult to grasp? This editorial essay provides a critical backdrop for the special issue, treating algorithms not only as computational artifacts but also as sensitizing devices that can help us rethink some entrenched assumptions about agency, transparency, and normativity.
