Learning to trust artificial intelligence systems

PDF Posted by: Nozha Boujemaa

For more than 100 years, we at IBM have been in the business of building machines designed to help improve the effectiveness and efficiency of people. And we’ve made measurable improvements to many of the systems that facilitate life on this planet. But we’ve never known a technology with greater potential to benefit all of society than artificial intelligence. At IBM, we are guided by the term “augmented intelligence” rather than “artificial intelligence.” This vision of “AI” marks the critical difference between systems that enhance, improve and scale human expertise, and those that attempt to replicate human intelligence.

The ability of AI systems to transform vast amounts of complex, ambiguous information into insight has the potential to reveal long-held secrets and help solve some of the world’s most enduring problems. AI systems can potentially be used to help discover insights to treat disease, predict the weather, and manage the global economy. It is an undeniably powerful tool. And as with all powerful tools, great care must be taken in its development and deployment.

To reap the societal benefits of AI systems, we will first need to trust them. The right level of trust will be earned through repeated experience, in the same way we learn to trust that an ATM will register a deposit, or that an automobile will stop when the brake is applied. Put simply, we trust things that behave as we expect them to. But trust will also require a system of best practices that can help guide the safe and ethical management of AI systems, including alignment with social norms and values; algorithmic responsibility; compliance with existing legislation and policy; assurance of the integrity of the data, algorithms and systems; and protection of privacy and personal information.

We consider this paper to be part of the global conversation on the need for safe, ethical and socially beneficial management of AI systems. To facilitate this dialogue, we are building an active community of thoughtful, informed thinkers who can evolve the ideas herein. There is too much to gain from AI systems to let myth and misunderstanding steer us off course. And while we don’t have all the answers yet, we’re confident that together we can address the concerns of the few to the benefit of many.

A Survey of Collaborative Filtering Techniques

PDF Posted by: Dominique Cardon

As one of the most successful approaches to building recommender systems, collaborative filtering (CF) uses the known preferences of a group of users to make recommendations or predictions of the unknown preferences of other users. In this paper, we first introduce CF tasks and their main challenges, such as data sparsity, scalability, synonymy, gray sheep, shilling attacks, and privacy protection, along with their possible solutions. We then present three main categories of CF techniques: memory-based, model-based, and hybrid CF algorithms (which combine CF with other recommendation techniques), with examples of representative algorithms in each category and an analysis of their predictive performance and their ability to address the challenges. From basic techniques to the state of the art, we attempt to present a comprehensive survey of CF techniques, which can serve as a roadmap for research and practice in this area.
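To make the memory-based category concrete, here is a minimal sketch of user-based collaborative filtering with cosine similarity, one representative of the family the survey describes. The toy ratings matrix, function names, and the zero-means-unrated convention are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of memory-based (user-based) collaborative filtering.
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two rating vectors; 0 if either is all zeros."""
    norm = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / norm) if norm else 0.0

def predict_rating(ratings, user, item, k=2):
    """Predict ratings[user, item] from the k most similar users who rated item.

    ratings: users x items matrix, with 0 meaning "not rated" (a toy convention).
    """
    sims = np.array([
        cosine_similarity(ratings[user], ratings[other])
        if other != user and ratings[other, item] > 0 else 0.0
        for other in range(ratings.shape[0])
    ])
    neighbors = np.argsort(sims)[::-1][:k]   # top-k similar users who rated `item`
    weights = sims[neighbors]
    if weights.sum() == 0:
        return 0.0                           # data sparsity: no usable neighbor
    return float(weights @ ratings[neighbors, item] / weights.sum())

# Toy example: 4 users x 4 items, rows are users, 0 = unrated.
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [1, 0, 5, 4],
])
print(predict_rating(R, user=1, item=1))  # estimate user 1's rating of item 1
```

A weighted average over the most similar users is the simplest instance of the memory-based family; model-based methods would instead learn parameters (for example, latent factors) from the same matrix.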

Algorithmic Ideology

Link Posted by: Dominique Cardon

This article investigates how the new spirit of capitalism gets inscribed in the fabric of search algorithms by way of social practices. Drawing on the tradition of the social construction of technology (SCOT) and 17 qualitative expert interviews, it discusses how search engines and their revenue models are negotiated and stabilized in a network of actors and interests, website providers and users first and foremost. It further shows how corporate search engines and their capitalist ideology are solidified in a socio-political context characterized by a techno-euphoric climate of innovation and a politics of privatization. This analysis contributes to contemporary search engine critique, which has mainly focused on search engines' business models and societal implications. It shows that a shift of perspective is needed, from the impacts search engines have on society towards the social practices and power relations involved in their construction, if search engines and their algorithmic ideology are to be renegotiated in the future.

Dynamics and Biases of Online Attention: The Case of Aircraft Crashes

PDF Posted by: Dominique Cardon

Researchers have used Wikipedia data as a source to quantify attention on the web. One way to do so is by analysing the editorial activity on, and visitors' views of, a set of Wikipedia articles. In this paper, we study attention to aircraft incidents and accidents using two language editions of Wikipedia, English and Spanish. We analyse how attention varies over several dimensions, such as the number of deaths, airline region, locale, date, and date of first edit. Several patterns emerge with regard to these dimensions and articles. For example, we find evidence that the attention given by Wikipedia editors to pre-Wikipedia aircraft incidents and accidents depends on the region of the airline in both the English and Spanish editions; North American airline companies, for instance, receive more prompt coverage in English Wikipedia. We also observe that the attention given by Wikipedia visitors is influenced by the airline region, but only for events with a high number of deaths. Finally, we show that the rate and time span of the decay of attention are independent of the number of deaths and the airline region. We discuss the implications of these findings in the context of attention bias.
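As an illustration of the kind of attention signal the paper analyses, the sketch below pulls daily page views for one aircraft-accident article from the public Wikimedia Pageviews REST API in two language editions. The article titles and date range are examples chosen here; the paper's own data collection, editorial-activity measures, and decay analysis are considerably more involved.

```python
# A minimal sketch of measuring visitor attention via the Wikimedia
# Pageviews REST API (data available from mid-2015 onward).
import requests

def daily_views(project, article, start, end):
    """Return {date: views} for an article. Dates are YYYYMMDD strings."""
    url = (
        "https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
        f"{project}/all-access/user/{article}/daily/{start}/{end}"
    )
    resp = requests.get(url, headers={"User-Agent": "attention-demo/0.1"})
    resp.raise_for_status()
    # Each item carries a timestamp like "2016010100"; keep the YYYYMMDD part.
    return {item["timestamp"][:8]: item["views"] for item in resp.json()["items"]}

# Compare attention to the same event across two language editions.
en = daily_views("en.wikipedia", "US_Airways_Flight_1549", "20160101", "20160131")
es = daily_views("es.wikipedia", "Vuelo_1549_de_US_Airways", "20160101", "20160131")
print(sum(en.values()), sum(es.values()))
```

From series like these, one could then fit a decay curve to the post-event views to compare the rate and time span of declining attention across events, in the spirit of the paper's analysis.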

Toward an Ethics of Algorithms: Convening, Observation, Probability, and Timeliness

PDF Posted by: Dominique Cardon

Part of understanding the meaning and power of algorithms means asking what new demands they might make of ethical frameworks, and how they might be held accountable to ethical standards. I develop a definition of networked information algorithms (NIAs) as assemblages of institutionally situated code, practices, and norms with the power to create, sustain, and signify relationships among people and data through minimally observable, semiautonomous action. Starting from Merrill’s prompt to see ethics as the study of “what we ought to do,” I examine ethical dimensions of contemporary NIAs. Specifically, in an effort to sketch an empirically grounded, pragmatic ethics of algorithms, I trace an algorithmic assemblage’s power to convene constituents, suggest actions based on perceived similarity and probability, and govern the timing and timeframes of ethical action.

The Trouble with Algorithmic Decisions: An Analytic Road Map to Examine Efficiency and Fairness in Automated and Opaque Decision Making

PDF Posted by: Dominique Cardon

We are currently witnessing a sharp rise in the use of algorithmic decision-making tools, and with it a new wave of policy concerns. This article strives to map out these issues, separating the wheat from the chaff. It aims to provide policy makers and scholars with a comprehensive framework for approaching these thorny issues in their various capacities. To achieve this objective, the article sets out a general analytical framework, which is then applied to a specific subset of the overall discussion. The analytical framework reduces the discussion to two dimensions, each of which addresses two central elements. These four factors call for a distinct discussion, which is at times absent from the existing literature. The two dimensions are the specific and novel problems the process assumedly generates and the specific attributes which exacerbate them. While the problems are articulated in a variety of ways, they can most likely be reduced to two broad categories: efficiency-based and fairness-based concerns. In the context of this discussion, such problems are usually linked to two salient attributes of algorithmic processes: their opaque and automated nature.

Big Data - A Tool for Inclusion or Exclusion?

Link Posted by: Nozha Boujemaa

Executive Summary

We are in the era of big data. With a smartphone now in nearly every pocket, a computer in nearly every household, and an ever-increasing number of Internet-connected devices in the marketplace, the amount of consumer data flowing throughout the economy continues to increase rapidly. The analysis of this data is often valuable to companies and to consumers, as it can guide the development of new products and services, predict the preferences of individuals, help tailor services and opportunities, and guide individualized marketing. At the same time, advocates, academics, and others have raised concerns about whether certain uses of big data analytics may harm consumers, particularly low-income and underserved populations.

To explore these issues, the Federal Trade Commission (“FTC” or “the Commission”) held a public workshop, Big Data: A Tool for Inclusion or Exclusion?, on September 15, 2014. The workshop brought together stakeholders to discuss both the potential of big data to create opportunities for consumers and its potential to exclude them from such opportunities. The Commission has synthesized the information from the workshop, a prior FTC seminar on alternative scoring products, and recent research to create this report. Though “big data” encompasses a wide range of analytics, this report addresses only the commercial use of big data consisting of consumer information and focuses on the impact of big data on low-income and underserved populations. Of course, big data also raises a host of other important policy issues, such as notice, choice, and security. Those, however, are not the primary focus of this report.

As “little” data becomes “big” data, it goes through several phases. The life cycle of big data can be divided into four phases: (1) collection; (2) compilation and consolidation; (3) analysis; and (4) use. This report focuses on the fourth phase and discusses the benefits and risks created by the use of big data analytics; the consumer protection and equal opportunity laws that currently apply to big data; research in the field of big data; and lessons that companies should take from the research. Ultimately, this report is intended to educate businesses about important laws and research that are relevant to big data analytics, and to provide suggestions aimed at maximizing its benefits and minimizing its risks.
