Can an Algorithm be Disturbed?
Posted by: Dominique Cardon
Within literary and cultural studies there has been a new focus on the "surface," as opposed to the "depth," of a work as the proper object of study. This interest manifests itself in what appears to be the return of prior approaches, including formalist reading practices and attention to the aesthetic dimensions of a text, as well as in new methodologies drawn from the social sciences that are interested in modes of description and observation. In arguing for the adoption of these methodologies, critics have advocated an end to what Paul Ricoeur termed "the hermeneutics of suspicion" and to the various forms of ideological critique that have been the mainstay of criticism for the past few decades.[2] While these "new" interpretations might begin with what was once repressed through prior selection criteria, they all shift our attention away from an understanding of a "repressed" or otherwise hidden object by treating textual features less as signifiers, arrows to follow to some hidden depths, than as interesting objects in their own right. Computer-aided approaches to literary criticism, or "digital readings" (to be sure, not an unproblematic term), have been put forward as one way of breaking from the deeply habituated reading practices of the past, but their advocates risk overstating the case and, in giving up on critique, remain blind to the untheorized dimensions of these computational methods. While digital methods enable one to examine radically larger archives than those assembled in the past, a transformation that Matthew Jockers characterizes as a shift from micro- to "macroanalysis," the fundamental assumptions about texts and meaning implicit in these tools, and in the criticism that results from their use, belong to a much earlier period of literary analysis.