
The TechnoCultural Dimensions of Meaning:

Towards a Mixed Semiotics of the World Wide Web

GANAELE LANGLOIS

A DISSERTATION SUBMITTED TO THE FACULTY OF GRADUATE STUDIES
IN PARTIAL FULFILMENT OF THE REQUIREMENTS

FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

GRADUATE PROGRAM IN COMMUNICATION AND CULTURE
YORK UNIVERSITY/RYERSON UNIVERSITY

TORONTO, ONTARIO

MAY 2008

THE TECHNOCULTURAL DIMENSIONS OF MEANING:

TOWARDS A MIXED SEMIOTICS OF THE WORLD WIDE WEB

by Ganaele Langlois

a dissertation submitted to the Faculty of Graduate Studies of York University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

©

Permission has been granted to: a) YORK UNIVERSITY LIBRARIES to lend or sell copies of this dissertation in paper, microform or electronic formats, and b) LIBRARY AND ARCHIVES CANADA to reproduce, lend, distribute, or sell copies of this dissertation anywhere in the world in microform, paper or electronic formats and to authorize or procure the reproduction, loan, distribution or sale of copies of this dissertation anywhere in the world in microform, paper or electronic formats.

The author reserves other publication rights, and neither the dissertation nor extensive extracts from it may be printed or otherwise reproduced without the author’s written permission.


ABSTRACT

This dissertation argues that the study of meaning-making practices on the Web, and particularly the analysis of the power relations that organize communicational practices, needs to involve an acknowledgement of the importance of communication technologies. The project assesses the technocultural impact of software that automatically produces and dynamically adapts content to user input through a case study analysis of amazon.com and of the MediaWiki software package. It offers an interdisciplinary theoretical framework that borrows from communication studies (discourse analysis, medium theory, cultural studies of technology), from new media studies (software criticism), and from actor-network theory and Felix Guattari’s mixed semiotics. In so doing, the research defines a new methodological framework through which the question of semiotics and discourse can be analyzed via an exploration of the technocultural conditions that create communicative possibilities.

The analysis of amazon.com examines how the deployment of tools to track, shape and predict the cultural desires of users raises questions related to the imposition of specific modes of interpretation. In particular, I highlight the process through which user-produced meanings are incorporated within software-produced semiotic systems so as to embed cultural processes within a commercial imperative. While amazon.com is an instance of the commercial use of dynamic content production techniques on the Web, Wikipedia stands as a symbol of non-commercial knowledge production. The Wikipedia model is not only cultural but also technical, as mass collaborative knowledge production depends on a suite of software tools - the MediaWiki architecture - that enables new discursive practices. The Wikipedia model is the result of a set of articulations between technical and cultural processes, and the case study examines how this model is captured, modified and challenged by other websites using the same wiki architecture as Wikipedia. In particular, I examine how legal and technical processes on the Web appropriate discursive practices by capitalizing on user-produced content as a source of revenue.


Acknowledgements

I am greatly indebted to my supervisor, Dr. Barbara Crow, for being an astonishing mentor whose generous support, guidance and encouragement throughout the years have made my experience of graduate school truly fulfilling. My heartfelt thanks go to my other committee members, Dr. Greg Elmer and Dr. Steve Bailey, for their generous feedback and encouragement. I am extremely grateful for Dr. Wendy Hui Kyong Chun’s advice and support. This project would not have been half as interesting without the support of the members of the Infoscape Lab at Ryerson University. Our conversations and collaborative work have been a central source of inspiration. My thanks also go to Dr. Fred Fletcher and Diane Jenner for their help over the years. This dissertation would not have been written without the financial support of the Social Sciences and Humanities Research Council, the Ontario Graduate Studies Program, the Infoscape Lab at Ryerson University and the Canadian Media Research Consortium. Finally, my thanks go to my family for their support throughout the years and to my partner, Michael Gayner, for his infinite patience.


TABLE OF CONTENTS

COPYRIGHT .......................................................................................................... ii

CERTIFICATE ....................................................................................................... iii

ABSTRACT ............................................................................................................ iv

ACKNOWLEDGEMENTS ..................................................................................... vi

TABLE OF CONTENTS ........................................................................................ vii

LIST OF FIGURES ................................................................................................. xi

LIST OF TABLES .................................................................................................. xiv

Introduction: The Technocultural Dimensions of Meaning: Software Studies and the World Wide Web .......... 1

Chapter 1. Technology and Media: Towards a Technocultural Approach to the World Wide Web .......... 24

    1. Towards a Material Approach to Media Analysis: Medium Theory and Materialities of Communication .......... 27
    2. Technologies as Actors: Actor-Network Theory, Cultural Studies and Medium Theory .......... 32
    3. Analyzing Web Technologies: The Problem with Essentializing Medium Characteristics .......... 38
    4. The Web as a Layered Technocultural Entity .......... 44
    5. Technologies of the Web and the Question of Representation .......... 47
    6. Towards a Technocultural Approach to the Politics of Representation on the Web .......... 51

Chapter 2. Web Technologies, Language and Mixed Semiotics .......... 60

    1. The Technocultural Dimensions of Discourse .......... 61
    2. Reconsidering Linguistics .......... 67
    3. Mixed Semiotics .......... 73
    4. Mixed Semiotics and the Web .......... 94
    5. Introducing the Case Studies .......... 100
        Case Study 1: Adaptive Interfaces and the Production of Subjectivities - The Case of Amazon .......... 102
        Case Study 2: Mixed Semiotics and the Economies of the MediaWiki Format .......... 103

Chapter 3. Cultural Objects and Software-Assisted Meaning Creation - The Case of Books on Amazon.com .......... 105

    1. Amazon.com and Mixed Semiotics .......... 105
    2. The Architecture of Amazon.com: Data Processing as A-semiotic Encoding .......... 118
    3. Signifying Semiologies on Amazon.com: Shaping the Cultural Perception of Meaning .......... 122
    4. User-Produced Content: Meaning Proliferation and Cultural Homogeneity .......... 149
    5. Amazon.com’s A-Signifying Semiologies: Shaping Sociality and Individuality within a Commercial Space .......... 167

Chapter 4. Mixed Semiotics and the Economies of the MediaWiki Format .......... 183

    1. Technodiscursive Mediations and the Production of Wikipedia as a Technocultural Form .......... 185
    2. The Circulation of the MediaWiki Software and the Rearticulation of Technical, Discursive and Cultural Domains .......... 205
        2.1 Cultural Formatting as the Rearticulation of Discursive Rules .......... 211
        2.2 A-signifying Processes and the Channeling of Cultural Formats .......... 226

Chapter 5. Conclusion: Meaning, Subjectivation and Power in the New Information Age .......... 239

    1. Rethinking the Divide between Information and Meaning Production and Circulation through Mixed Semiotics Networks .......... 242
    2. Mixed Semiotics and the Politics of Usership .......... 253
    3. Mixed Semiotics and Software Studies .......... 264

Bibliography .......... 269


LIST OF FIGURES

Figure 1: The Web Stalker __________ 10

Figure 2: The IssueCrawler (Govcom.org) __________ 12

Figure 3: Amazon.com Cookies - Screen Capture of Mozilla Firefox Cookie Window __________ 107

Figure 4: The Amazon.com Interface (Cookies Enabled) __________ 108

Figure 5: Personalization on Amazon.com __________ 109

Figure 6: The Empire of Fashion __________ 116

Figure 7: Harry Potter and the Deathly Hallows __________ 117

Figure 8: A-semiotic and Signifying Processes on Amazon.com __________ 121

Figure 9: Recommendations Featured on The Empire of Fashion Page __________ 127

Figure 10: Recommendations by Items Bought for Harry Potter and the Deathly Hallows __________ 128

Figure 11: “My Profile” Page on Amazon.com __________ 129

Figure 12: Personalized Recommendations Based on Items Rated __________ 130

Figure 13: Recommendations Based on Items Viewed for The Empire of Fashion __________ 131

Figure 14: Recommendations Based on Items Viewed for Harry Potter and the Deathly Hallows __________ 133

Figure 15: Recommendation Network for The Empire of Fashion (depth 1), 28 March 2007 __________ 137

Figure 16: Recommendation Network for The Empire of Fashion (depth 1 - subjects), 28 March 2007 __________ 138

Figure 17: Recommendation Network for The Empire of Fashion (depth 2), 28 March 2007 __________ 139

Figure 18: Recommendation Network for The Empire of Fashion (depth 2 - subjects), 28 March 2007 __________ 140

Figure 19: Recommendation Network for Harry Potter and the Deathly Hallows (depth 1), 27 March 2007 __________ 141

Figure 20: Recommendation Network for Harry Potter and the Deathly Hallows (depth 1 - subjects), 27 March 2007 __________ 142

Figure 21: Recommendation Network for Harry Potter and the Deathly Hallows (depth 2), 27 March 2007 __________ 143

Figure 22: Recommendation Network for Harry Potter and the Deathly Hallows (depth 2 - subjects), 27 March 2007 __________ 144

Figure 23: Customer Reviews for Lipovetsky’s Empire of Fashion __________ 154

Figure 24: Customer Discussions for Harry Potter and the Deathly Hallows __________ 154

Figure 25: Listmanias and So You’d Like To Guides - Harry Potter and the Deathly Hallows __________ 157

Figure 26: Harry Potter Tags __________ 160

Figure 27: Editorial Reviews for The Empire of Fashion __________ 163

Figure 28: Product Placement on Amazon.com Homepage __________ 164

Figure 29: Harry Potter Product Placement on the Harry Potter and the Deathly Hallows Page __________ 164

Figure 30: The Wikipedia Homepage __________ 194

Figure 31: Power Struggles on Wikipedia (Herr and Holloway, 2007) __________ 202

Figure 32: Largest MediaWikis - Format __________ 213

Figure 33: Wikimocracy.com __________ 215

Figure 34: A Wikipedia Skin Clone __________ 216

Figure 35: A Mixed Skin Model __________ 216

Figure 36: A MediaWiki Site with a Different Skin than Wikipedia __________ 217

Figure 37: Largest MediaWikis - Focus __________ 221

Figure 38: Largest MediaWikis - Intellectual Property Regimes __________ 228

Figure 39: Largest MediaWikis - Intellectual Property Regimes Breakdown __________ 229

Figure 40: Largest MediaWikis - Advertising Breakdown __________ 234


LIST OF TABLES

Table 1: Glossematics ………………………………………………………… 80

Table 2: Guattari and Glossematics …………………………………………… 84

Table 3: Mixed Semiotics ……………………………………………………… 87

Table 4: Amazon.com’s Signifying Semiologies ……………………………… 114

Table 5: Mixed Semiotics on Amazon.com …………………………………… 115

Table 6: Mixed Semiotics and the Recommendation System on Amazon.com … 147

Table 7: Surfing Paths on Amazon.com ……………………………………… 151

Table 8: Mixed Semiotics and Users on Amazon.com ………………………… 161


Introduction

The Technocultural Dimensions of Meaning: Software Studies and the World Wide Web

Our mundane engagement with the Web is marked by the magic of instantaneous communication, by the possibility of having access to a wealth of information via screens and interfaces that mimic well-known cultural tropes (e.g. the “home” button, the page) as well as introduce new ways of accessing information that are unique to new media (e.g. hyperlinks and search boxes). Part of the “magic” of the Web is that it requires less and less computer literacy. The trend, exacerbated by the rise of the World Wide Web, has been to demand less and less computer know-how through the proliferation of software capable of translating computer processes into recognizable cultural signs and commands. Meaning, then, becomes a problematic site of analysis in this new technocultural environment, as it is mediated by software and circulates as both informational input and cultural sign. While the range of fields of study (linguistics, literature, cultural studies) as well as theories and methods (structural linguistics, literary analysis, discourse analysis, content analysis) available for the study of meaning seem to cover all the angles (linguistic, cultural, socio-political) through which meaning is shaped and communicated, there is a gap when it comes to recognizing the contribution of information and communication technologies, and of software in particular, to the constitution of the communicative conditions within which practices of meaning-making and representation can take place. Indeed, a Web interface bridges the technical and cultural dimensions of communication by translating data input into recognizable cultural forms and meanings. When a server breaks down, or when an Internet browser is outdated or missing a software component, we are forced to acknowledge that the production of meaning is not simply a cultural practice, but a technocultural one that also involves a specific set of materials and techniques.

The interfaces that are presented to us on the Web are built through the conjunction of transmission protocols and software programs. This conjunction defines specific conditions of meaning-making, that is, specific possibilities of expression. This research project focuses on examining how specific possibilities of expression and practices of meaning-making arise within online spaces, and argues that while there exists a healthy body of research on the political, economic and legal importance of the protocols and codes that regulate processes of transmission (Lessig, 2005; Benkler, 2006; Galloway, 2004), more needs to be done with regard to examining how software-mediated semiotic processes serve to order specific communicational, cultural and social values and practices. As such, this research demonstrates how the technical, commercial and social forces that define online semiotic processes establish rules of meaning-making according to a set of power formations.

Examining processes of expression and practices of meaning-making online is important because they are not as direct and simple as taking a pen and writing on a piece of paper, even though it oftentimes feels like typing a letter (Fuller, 2003). Although it seems to users that blogs can be created with a few clicks, and that image and sound can be easily uploaded onto websites such as Flickr or YouTube, the processes of expression on the Web engage a complex system of software - a series of commands given to a computer either by human users or by other software programs - in order to translate data input into meanings. This is made even more complex by the growing popularity of the Web as a form of cultural expression. Increasingly, using the Web does not simply mean reading a Web page or uploading media onto a Web page, but having software give meaningful feedback to users, for instance in the form of tailored content and targeted advertising. It is these changes in both the modes of expression and in the new forms of software-mediated communication available to users that are the focus of this study, through an analysis of software that supports content production. These changes cannot be captured by conventional theories focused on the study of meaning, as they result from the introduction of new software systems whose processes are always hidden behind a cultural interface. The interface, then, is double-edged: on the one hand, it is a product of software; on the other, it hides some software processes and highlights others. For instance, the process of surveillance and analysis through which targeted advertising takes place is not always visible to the user. Rather, targeted advertising appears on the screen as another form of the magic of instantaneous communication. The first step of this project is to make these hidden software processes apparent in order to examine the cultural, political and economic assumptions embedded in them. These assumptions, in turn, shape what types of meaning-making practices can be used on the Web, and how users can interact with meanings and with each other.

By arguing for a technocultural conception of meaning on the Web, that is, for an analysis of the production and circulation of meaning that takes into account the specific material, technical and cultural milieu of the Web, and in particular the role played by software in the production and circulation of meaning, this research aims to renew and expand a general concern, within the field of communication studies, with the articulation between meanings and their social and cultural context. This research inscribes itself within the larger problematic, originating with Foucault, of the necessity to examine texts not only for the meanings that are embedded in them, but also for the ways in which the economies of meaning production and circulation reveal, create, influence and are influenced by social relations of power. Power, in this context, can be defined not simply as a repressive force, but as a historically located set of dynamics and force relations that constitute a “productive network which runs through the whole social body” (Foucault, 1980b, p. 119). The main point is that texts and meanings do not simply express ideas and ideologies; they also existentialize modes of being, subjectivities, and identities. Such modes of existentialization take place at different levels: texts can participate in existentializing, in making real, specific subjectivities through the process of representation, and texts can also existentialize specific relations between the producers and consumers of texts - between, for instance, authors and readers. In so doing, the production of texts reinforces a social order that defines who has the right to speak about specific topics and how, as well as the proper ways to read a text, that is, the proper way of interpreting texts according to a specific cultural milieu.

This research will show that meanings, as shown by Foucault’s analysis of discourse, power and knowledge, and as further investigated by Deleuze and Guattari’s examination of collective assemblages of enunciation and diagrammatic machines, are not simply worth studying at the level of representation. Texts also participate in the shaping of a social order. In that sense, the specific milieu within which texts are produced and consumed, and the political, economic, social and cultural actors that make specific textual conditions possible, all need to be examined in order to assess the articulations between the textual and the social, between meanings and culture. By no means are these articulations simple, and the present research argues that the examination of the milieu or context within which meanings are produced and put into circulation cannot limit itself to the social, especially in the case of the Web. As Kittler points out in Gramophone, Film, Typewriter (1997), the limit of Foucault’s approach to discourse is that it fails to acknowledge the role played by technologies of communication through their intervention in the production, circulation and storing of meanings. Communication technologies directly intervene in the production and circulation of meanings by presenting a set of material limits and possibilities. As Harold Innis (1951), Marshall McLuhan (1996) and Elizabeth Eisenstein (1979) argued, the possibilities and limits of communication technologies have profound impacts on the organization of a social and political order. The present research argues that there is a link to be made between the study of the limits and potentials of communication technologies and the analysis of the articulation between the textual and a social order. This type of technocultural analysis is all the more central to the study of the Web as software constantly mediates all processes of signification. It is therefore important to examine how software is shaped by other technical, economic and political processes, and how it participates in shaping the conditions of meaning production and circulation that define uses of the Web and, in so doing, establish new subjectivities and new relations of power. Thus, the software being used for meaning production and circulation on the Web is central for examining the specific power relations that are formed on the Web.

There already exists a body of research on the cultural impact of software known as “software studies,” a term originally coined by Lev Manovich in his book The Language of New Media (2001). As a nascent field involving an interdisciplinary range of sources, software studies has garnered international attention, with an edited book entitled Software Studies: A Lexicon forthcoming in 2008 and a new research initiative in Software Studies directed by Manovich at the University of California - San Diego. Software studies is interdisciplinary and encompasses a wide range of approaches from philosophy to cultural studies, new media theory, communication studies and other social science approaches. Software studies is about defining new methods, theories and tools for the study of the cultural, political, economic and social impact of software. In terms of its history, Manovich’s original call for the development of a field of software studies stemmed from the recognition that:

    New media calls for a new stage in media theory whose beginnings can be traced back to the revolutionary works of Harold Innis in the 1950s and Marshall McLuhan in the 1960s. To understand the logic of new media, we need to turn to computer science. It is there that we may expect to find the new terms, categories, and operations that characterize media that became programmable. From media studies, we move to something that can be called “software studies” - from media theory to software theory. (2001, p. 48, italics in the text)

Manovich indicates that software studies cannot be considered a subfield of media studies, but rather a new field making use of the central theoretical questions in media studies in order to develop new approaches to the study of software. Furthermore, as indicated by the direct reference to, on the theoretical level, the works of the Toronto school and medium theory and, on the practical level, the importance of the computer as a technical and material object, the research questions that define software studies go further than the analysis of the psychological, social and cultural contents present on the interface. Rather, software studies encompasses both the interface and the unseen layers of programs that make the interface possible, in order to explore the hermeneutic, social and cultural realities that appear as a consequence of new modes of representation, expression and meaning-making.

There are strong links between software studies and media studies, particularly in the contention that technologies of communication play an important role in shaping cultural perceptions and in allowing new forms of social relationships to emerge, as Manovich’s acknowledgement of the importance of Innis and McLuhan’s approach demonstrates. The characteristic of software studies, according to Matthew Kirschenbaum (2003), is a focus on the material environment - the imbrication of technical apparatuses - in order to understand the rise of new cultural and social phenomena. For instance, as Kirschenbaum argues, rather than examining virtuality as a fully-fledged concept, a software studies approach would examine virtuality as a phenomenon that is the product of the articulation between material (i.e. technical) processes and cultural norms and practices. Software embodies the articulation between the cultural and the material, as well as the imbrication of culture and technology, in that it includes the technical apparatus that enables and mediates new cultural representations and social relations. Software studies, as exemplified in the work of Matthew Fuller (2003) and Wendy Hui Kyong Chun (2005), attempts to account both for the construction of software - its cultural, political and technical economy - in order to examine what is culturally enabled or disabled by software, and for the ways in which software is further articulated within cultural, economic and political processes so as to create new technocultural environments. The study of software, in that sense, is the study of the technoculture produced when software systems are deeply embedded in and constantly mediate culture (Williams, 1961), that is, ways of life, meanings, norms and practices. The inseparability of techniques and culture, of material apparatuses and the norms, values, meanings, identities and ways of relating to each other, is at the core of software studies. The aim of software studies, then, is to offer a critical account of software by deconstructing the usually unquestioned economic, political and cultural logic embedded within software systems. This, as Fuller (2003) argues, allows for a critical reflexivity on “the condition of being software - to go where it is not supposed to go, to look behind the blip; to make visible the dynamics, structures, regimes and drives of each of the little events it connects to” (32). The reappropriation of software through this critical reflexivity includes experimentation with new forms of software to highlight the technocultural assumptions embedded in technical systems.

In that sense there is a link between software studies and other approaches to studying the Internet, and the World Wide Web in particular, that focus on the question of the verticality of the Web (Elmer, 2006) and on information politics (Rogers, 2004) to analyze cultural content on the Web. According to these approaches, the processes of transmission and mediation that take place through information networks need to be studied not only at the level of the front-end, that is, the human-understandable signs appearing on a screen, but also at the level of the back-end: the many layers of software that are needed, from transmission protocols to computer languages and programs, to transform data into signs (Rogers, 2004). The acknowledgment of the role played by technical specificities in making communication on the Web and the Internet possible has led to further attention to the visual regimes produced by specific technical characteristics of the Web, and to the ways in which these characteristics can be deconstructed. For instance, alternative ways of exploring the potential of the Web through the creation of alternative modes of surfing have been at the core of Geert Lovink and Mieke Gerritzen’s Browser Day Project1 and Matthew Fuller’s Web Stalker. Fuller’s experimental Web Stalker (2003) - a Web browser that deconstructs the conventions embedded in popular Web browsers - represents a first attempt to overcome the page metaphor and to represent Web browsing in spatial terms, where URLs are represented as circles and hyperlinks as lines, with text and images collected in a separate window.

1 http://www.waag.org/project/browser1998


Figure 1: The Web Stalker

Fuller’s exploration, through the Web Stalker, of the cultural conventions embedded in

software - how websites are usually perceived as a collection of pages and hyperlinks -

finds an echo in other social sciences and cultural studies approaches to the Web, which

focus on examining the technical mediations of content on the Web in order to see the

technodiscursive and technocultural rules that create specific flows of content. Such

approaches were originally focused on hyperlinks, with the contention that hyperlinks are

indicators of the absence or presence of relationships among entities on the Web (Park &

Thelwall, 2005). For instance, Halavais and Garrido’s work (2003) on the hyperlink

network of the Zapatista movement shows how the examination of linking patterns

among websites gives strong clues as to the new relationships between the local and the

global and as to how social movements can be both focused on a single cause and exist

in a decentralized manner.

In a similar way, Rogers’ information politics argues for the tracking of

hyperlinked content on the Web as a way of examining the deployment of issue networks

on the Web (2004). The IssueCrawler developed by the Govcom.org foundation directed

by Richard Rogers functions by allowing researchers to enter a list of URLs that are

then crawled for their hyperlinks. The IssueCrawler departs from other hyperlink network

analysis tools in that it looks for co-linkage rather than all the hyperlinks. That is, the goal

of the IssueCrawler is to identify, starting from the list of hyperlinks generated from

crawling the original URLs, which other organizations or URLs are linked to by at least

two of the original URLs. Such an approach identifies the organizations serving as

reference points for other organizations and thus allows for the visualization of which

issue nodes, or URLs, are the most important in a given issue network. Furthermore,

the IssueCrawler can be used to identify which domains are linked to - whether

educational (.edu), governmental (.gov), NGO (.org) or commercial (.com), as well as the

geographic relationship between an event and the ways in which issues surrounding an

event are discussed by organizations potentially located in other countries.
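The co-link logic described above can be sketched in a few lines. The following is a minimal illustration only, not the IssueCrawler's actual implementation; the seed organizations and their outlinks are hypothetical.

```python
# A minimal sketch of co-link analysis as described above (not the
# IssueCrawler's actual code). Seed sites and outlinks are hypothetical.
from urllib.parse import urlparse

def colinked(outlinks_by_seed, threshold=2):
    """Return URLs linked to by at least `threshold` different seed sites."""
    counts = {}
    for seed, targets in outlinks_by_seed.items():
        for target in set(targets):
            counts[target] = counts.get(target, 0) + 1
    return {url for url, n in counts.items() if n >= threshold}

def domain_type(url):
    """Classify a URL by its top-level domain, as the paragraph above notes."""
    tld = urlparse(url).hostname.rsplit(".", 1)[-1]
    return {"edu": "educational", "gov": "governmental",
            "org": "NGO", "com": "commercial"}.get(tld, "other")

seeds = {
    "http://ngo-a.org": {"http://un-example.org", "http://report.example.gov"},
    "http://ngo-b.org": {"http://un-example.org"},
    "http://ngo-c.org": {"http://un-example.org", "http://report.example.gov"},
}

# Only targets referenced by two or more seeds count as issue nodes.
for url in sorted(colinked(seeds)):
    print(url, "-", domain_type(url))
```

The co-link threshold is what distinguishes this approach from plain hyperlink counting: a site becomes an issue node only when several members of the starting set treat it as a shared reference point.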

Figure 2: The IssueCrawler (Govcom.org)

The analysis of the front-end and back-end of information politics has evolved further

with the notion of code politics, where content needs to be located as a product of and as

embedded within a series of technical mediations that express cultural, commercial and

political processes. The Infoscape Lab at Ryerson University has focused on developing

software tools to examine the code politics of the Web. Code politics involves not only

hyperlinks, but also other markers, such as metatags and other HTML code which give

information as to how web designers want their website to be perceived by search

engines, as well as indications as to how information within websites is structured and

through what means (i.e. open-source software rather than proprietary software). As such,

a code politics approach aims to further integrate questions related to content with the

political economy of layers of codes, both within websites and on the broader Web

(Langlois and Elmer, 2007).
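The markers mentioned above can be read programmatically. The following is a minimal sketch using only the Python standard library (not the Infoscape Lab's actual tooling); the sample page header is hypothetical, of the kind a wiki installation might emit.

```python
# A minimal sketch of reading code-politics markers: <meta> tags signal
# how a site wants to be perceived by search engines, and the "generator"
# tag often reveals which software structured the page. The sample page
# below is hypothetical.
from html.parser import HTMLParser

class MetaReader(HTMLParser):
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        # Collect name/content pairs from every <meta> tag encountered.
        if tag == "meta":
            a = dict(attrs)
            if "name" in a and "content" in a:
                self.meta[a["name"].lower()] = a["content"]

page = """<html><head>
<meta name="keywords" content="open source, wiki">
<meta name="generator" content="MediaWiki">
</head><body></body></html>"""

reader = MetaReader()
reader.feed(page)
print(reader.meta["generator"])  # which software produced the page
print(reader.meta["keywords"])   # self-presentation to search engines
```

Even this trivial example surfaces both dimensions the paragraph above names: the site's intended presentation to search engines and the means, here open-source software, through which its content is structured.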

A software studies approach to the Web involves a technocultural examination of

the materiality of Web technologies in order to see how they articulate with cultural,

political and economic processes, that is, how they translate, support and challenge new

practices and power relationships. The present research belongs to a software studies

framework in that it examines the materiality of software in charge of transmitting,

mediating and producing human-understandable meanings appearing at the interface

level. In so doing, the present research re-formulates the long-standing question about

what constitutes the language of new media, and in particular, the language, or languages,

of the Web. This question is not new, and there are a number of contributions within the

field of software studies. Manovich’s Language of New Media (2001) identified some

unique characteristics, or principles, of new media: numerical representation, modularity,

automation, variability and transcoding. These principles serve as a basis on which to

examine processes of cultural manipulation of objects through software as well as to

distinguish software from the older media forms. In particular, these principles of

software highlight the ease of manipulation of data offered by software - the seeming

simplicity of manipulating images, sounds and videos so that any user can do tasks that

used to be delegated to specialists within the old media context. Software, in that sense,

does not simply mimic the old, and there is a tension between software as remediation of

other media (Bolter & Grusin, 1999) and the radically new modes of technical production

that are needed to give the impression that software is embedded within traditional

mediated forms of perception. The limits of Manovich's (2001) approach paradoxically

lie in the size of its scope. Manovich identifies principles across new media, that is,

across a range of technologies from the Internet, through the Web to video games and

digital art forms. While invaluable in providing some core principles of new media, this

approach is limited in that there is a need to examine, in turn, the communicative

characteristics, cultural practices and context that differentiate between, for instance,

communication across computer networks, as opposed to watching a digital movie.

Furthermore, the finalities of new media forms, whether they are inscribed within an

artistic, open-source or commercial logic, have to be acknowledged, as they participate in

the shaping of technocultural contexts. In the same vein, Bolter and Grusin’s exploration

of the processes of remediation (1999) between new media and old media is central in

establishing a genealogy of new media in terms of the continuities and ruptures with old

media, but it remains focused on establishing characteristics at a general level.

The transition from a general examination of new media to a more focused

examination of software - of the deployment of technical components that are unique to

new media - constitutes a first step in refining our understanding of the ways in which

software shapes a technocultural milieu. Wendy Chun’s (2005) exploration of the historic

and theoretical genealogy of software highlights the ways in which the analysis of

software includes both a focus on the materiality of software as well as a consideration of

its social and cultural consequences. As a departure from new media approaches, Chun’s

exploration of software as a technical and material process through which social

relationships are redefined and new modes of cultural perceptions are developed involves

not only an exploration of what is embedded in software, but also of the ways in which

software is articulated with other social, cultural, political, economic and legal processes.

As Chun (2005) demonstrates, these two levels of analysis - the social relationships and

the cultural perceptions organized through the deployment of software - are not separate.

It is possible to focus one’s analysis primarily on the ways in which political,

economic and legal processes are articulated with software in order to produce new social

relationships, such as the reorganization of the workplace. For instance, Adrian

MacKenzie’s (2006) exploration of the Java programming language highlights the many

levels at which the deployment of a particular software program involves a multitude of

actors, from other software programs to industry incentives and programmers.

Alternatively, it is possible to analyze the cultural perceptions embedded in software

technologies, as Manovich (2001) does. However, the acknowledgment that software

both produces and is produced by other cultural and social processes, that is, that

software technocultural milieus are the products of many articulations rather than

resulting from either technological or social determinism, highlights the complexity of

software and poses new theoretical and methodological challenges. In that sense, it

becomes difficult to explore changes in cultural perceptions without examining how the

capacities embedded in software are in turn articulated with social, political and

economic practices. For instance, it is difficult to understand the importance of data

manipulation through remixing and sampling without examining how new practices of

producing sounds, images and texts challenge traditional conceptions of the work of art,

authorship and intellectual property as well as create new artistic processes and new

processes of circulation and consumption of cultural objects (Lessig, 2005). Software,

including the cultural perceptions embedded in software, mobilizes a range of processes

and both shapes and is shaped by these processes.

The theoretical complexity of software as both product of and producing a

technocultural context is further enhanced by the difficulty of examining tightly

imbricated software programs. Software, in a sense, has also become too broad a term.

Software encompasses different levels, from what happens when objects are manipulated

at the interface level to the automated programming processes that are invisible to the

users and take place without user intervention. The difficulty of identifying software

(Chun, 2005, p. 28) lies in this very imbrication, in the process through which a software

program relies on another software program in order to function. Furthermore, while

software was originally associated with one communication device (i.e., the computer), it

has now been deployed onto other communication devices that are used within vastly

different contexts. While the cell phone is turning more and more into a mini-computer,

its context of use is still different from a desktop or a laptop. There are thus several

technocultural contexts within which software can be deployed, and it becomes

increasingly difficult to be able to encompass all of these contexts, with their specific

technical, social and cultural processes. The present research focuses on a specific layer

of software within the technocultural context of the World Wide Web. The main research

question is about how software components in charge of promoting and producing

content and of enabling users to produce their own content on the Web create new

discursive and cultural practices and meanings, and how they are articulated with and captured by

other technical, economic and political processes that are present on the Web. In so

doing, the scope of the research is not on software in general, or even software on the

Web, but rather on the software components that have a linguistic purpose so that they

are primarily designed to produce content or facilitate content production. Accordingly,

this research is focused on exploring the relationships between software and users, that is,

on how the user as a category with a specific field of cultural values, norms and practices

is produced through specific Web technocultural contexts.

As Chun (2005) argues, software is ideology in that the interface produces

specific modes of representations that shape modes of activity, and thus users:

Software, or perhaps more precisely operating systems, offer us an imaginary relationship to our hardware: they do not represent transistors, but rather desktops and recycling bins. Software produces “users.” Without OS there would be no access to hardware; without OS no actions, no practices, and thus no user. (p. 43)

Accounts on the importance of operating systems and desktop programs such as

Microsoft Word have been invaluable in pointing out how software allows for the

cloaking of data processes and signals through the use of common metaphors of desktop

and file folders (Kittler, 1997; Fuller, 2003; Johnson, 1999; Chun, 2005). The World

Wide Web has come under the same scrutiny, although most research from a software

studies paradigm has become outdated in that it focuses on the HTML environment.

While HTML is still used for website development, it is increasingly replaced by other

languages or included within other software programs (e.g. XHTML, ASP, PHP).2 These

technical developments that have taken place over the past few years need to be taken

into account, especially as the majority of these new languages to support content on the

Web have been focused on making it easier for users with no HTML knowledge to post

content on the Web and thus to participate in the shaping of online spaces through, for

instance, blogs and wikis. Another characteristic of these languages that is central to this

study is the capacity for websites to update content in a dynamic way, that is, to evolve

and change content depending on the surfing behaviour of a specific user. The

customization of content - customized news, customized shopping suggestions - has also

been a growing trend on the Web, and there is thus a vast amount of software studies

research to do on these new technological, economic and cultural trends. While there is

an ever-growing amount of studies on the social use of blogs, wikis, participatory

websites, news groups, and social networks,3 there are comparatively fewer software

studies analyses of the changes in Web content creation over the past few years. As such,

there is a need to address the role played by software that supports content production in

shaping new signifying practices.

The present study aims to offer a first step towards such an analysis by focusing

on two case studies of two popular formats that embody differing cultural, technological,

economic and political conceptions of the Web and of Web users. The first format is the

2 For a history of Web languages, particularly those related to dynamic content production, see: http://royal.pingdom.com/?p=222

3 See for instance, a 2007 special issue on the social uses of social networks from the Journal of Computer-Mediated Communication: http://jcmc.indiana.edu/vol13/issue1/

one offered by amazon.com, and the second one is based on the MediaWiki software

package, which has been used to produce popular open-source collaborative online

spaces such as Wikipedia. Both formats are vastly different in terms of the Web

languages and software they use and in terms of their economic model, as amazon.com is

a for-profit enterprise whereas MediaWiki is part of an open-source, not-for-profit model.

However, they are similar in that they rely on user-produced content, either to have an

extensive database of user-produced reviews and ratings about cultural products in the

case of amazon.com, or to produce a large repository of knowledge, in the case of

Wikipedia and other wikis. This requires the use of software to facilitate content creation

and to update website content in an almost instantaneous manner. Furthermore, the

popularity of both amazon.com and MediaWiki spaces such as Wikipedia does not

simply lie in their capacity to attract users and be a popular shopping space or a popular

informational space, but also in the ways through which they have developed specific

models that are used elsewhere on the Web. Amazon and MediaWiki exist in different

languages and local versions. Amazon Inc. has also been developing solutions for online

shopping that are used by other online retailers. It also makes use of advertising networks

and has developed an online presence on other popular Web spaces, such as Facebook.

The MediaWiki software package is being used by countless wikis, both on the Web and

by private Intranet networks. Following Chun’s (2005) exploration of software as

ideology, the starting point of this research lies in the examination of the types of cultural

and communicational practices that are enabled by the software used by these formats

and, in turn, of how these software layers shape the role of users as agents and actors

within commercial and non-commercial representational spaces. As will be explained in

this study, the idea is not to consider software as an unproblematic entity, but to

recognize that it is itself the product of complex technical, commercial and political

articulations. This study argues for a way of locating the interactions and articulations

between cultural, technical, social, political and economic actors in the shaping of

representational spaces and their users. To summarize, examining the technocultural

dimensions of meaning requires an acknowledgement of the networks of power that

traverse specific Web spaces and shape cultural, discursive and technical agents such as

software layers and users.

There are several difficulties in realizing such an analysis. The first challenge lies

in finding a theoretical framework to take into account the technocultural role played by

technical devices such as software, but in a way that recognizes that software both shapes

and is shaped by a range of processes, and therefore that the cultural representations and

discursive rules that are mediated by software are embedded within complex power

networks. The goal, in short, is to avoid falling into either technological determinism or

social constructionism. As will be argued in Chapter One, a problem in analyzing the

effects of software stems from an over-reliance on medium theory. While medium theory

is useful, particularly with its focus on the materiality of communication technologies as

central for understanding the social and cultural changes introduced by the development

of a new medium, the framework it offers is not adapted to the specificity of software,

which I argue is built through a system of layers involving different types of technical

tools and cultural logics. Furthermore, the complexity of software as both shaping and

shaped by technocultural processes cannot be explained through a medium theory

framework, which too often attempts to identify one essential feature of a medium rather

than acknowledge, in the case of software, its composition as layers. As will be argued in

the first chapter, a starting point for further enriching software theory is Actor-network

theory (ANT), which offers ways to explore how technical and human actors are

deployed through their articulations with each other within actor-networks. Actor-

network theory has gained popularity in the field of the cultural studies of technology and

communication technologies (Slack & Wise, 2002) and within the field of software

studies (MacKenzie, 2006). Its framework for exploring the constitution of networks of

human and non-human actors with delineated spheres of agency offers a robust model

with which to explore the Web as constructed through layers of software.

Technocultural analyses of meanings from a software studies perspective can

benefit from the framework developed by actor-network theory. However, ANT falls

short of offering a model through which to study networks of actors at the semiotic and

discursive levels, which is what the study aims to achieve. As ANT has traditionally not

been concerned with techno-semiotic systems, there needs to be a new theoretical and

methodological framework to complement it. Chapter Two argues that Felix Guattari’s

mixed semiotics framework can be used to examine the technocultural formation of

semiotic systems. Guattari’s mixed semiotics framework was developed in reaction to

structuralist linguistic frameworks, and argues that processes of meaning formation

cannot be simply studied through an analysis of signs, but also through an exploration of

processes that are beyond and below signs, such as material processes and a whole field

of power relations. Adapted to the study of software-assisted meaning production and

circulation, such a framework allows for the identification of the technocultural and

technocommercial processes that make use of and participate in the shaping of the

cultural Web interface.

Chapter Three and Chapter Four identify some of these processes through a case

study analysis of amazon.com and wikipedia.org. In particular, the processes that are

identified within the mixed semiotics framework concern the encodings of the material

intensities of the Web - particularly users’ surfing patterns - so as to develop new semiotic

systems and new a-signifying systems that impose discursive and signifying rules and

modes of subjectivation onto users. ANT complements this framework by allowing for a

mapping of the shaping of the agency of both commercial and non-commercial human

and non-human actors that participate in the deployment of specific software layers and

are in turn embedded and redefined within software-produced mixed semiotics. The

analysis of amazon.com shows how the deployment of software tools to track, shape and

predict the desires of users raises questions related to the automated production of

identities and subjectivities. In particular, the analysis developed in Chapter Three

highlights the process through which user-produced meanings are incorporated within

software-produced semiotic systems so as to embed cultural processes within a

commercial imperative. The analysis of the circulation of the MediaWiki software in

Chapter Four shows how the spread of the MediaWiki software package through

Wikipedia and other websites is not only cultural, but also technical, as mass

collaborative knowledge production depends on a suite of software tools - the wiki

architecture - that enables new discursive practices. In particular, Wikipedia is the result

of a set of articulations between technical and cultural processes, and the case study

shows that this model is captured, modified and challenged by other websites using the

same wiki architecture as Wikipedia. The chapter also highlights how legal and technical

processes on the Web appropriate discursive practices by capitalizing on user-produced

content as a source of revenue.

Chapter Five synthesizes the research by highlighting the relevance of mixed

semiotics and ANT in identifying some of the power formations that make use of cultural

meanings and the semiotic systems within which these cultural meanings can be shaped

and communicated. The shaping of a cultural horizon through the deployment of a

specific set of techniques is one of the central concerns in the development of the Web,

particularly as it involves a complex set of relationships between the front-end of the

interface and the back-end of data gathering, storing and processing. The study of the

interface through mixed semiotics and ANT thus reveals the ways in which the interface

can be used to both shape the category of the user and hide the power formations and

informational processes that intervene directly in this process of shaping. The use of the

mixed semiotics framework allows for a reassessment of the articulations between

informational process and cultural dynamics that intervene in defining a horizon of

subjectivation - that is, a set of practices with which human actors are forced to articulate

themselves in order to exist as users.

Chapter 1

Technology and Media: Towards a Technocultural Approach to the World Wide Web

Examining the technocultural dimensions of meaning, and in particular the role

played by software in creating specific technocultural conditions and relations of power

to regulate meaning circulation and production requires a detour by way of defining

what I mean by a technocultural approach to the Web. It is necessary to examine the Web

as a technoculture, that is, as a site that is defined through the imbrication and

articulations of technical possibilities and constraints within cultural practices and power

formations. The aim of this chapter is to understand the particularities of the Web as a

technocultural context.

The main challenge in studying the Web is best problematized in Lev Manovich’s

statement that new media appear when the “computer is not just a calculator, control

mechanism or communication device, but becomes a media processor, a media

synthesizer and manipulator” (2001, p. 25-26). That is, the Web is not simply a

technology, even though it relies on a complex set of techniques, from hardware to

software. As a medium, it also deploys a cultural process that mobilizes users,

languages and representations. The main question for this chapter is about taking into

account the relationship between technology and culture as they surface through the Web.

There is a need to develop a theoretical framework capable of taking into account the

articulations that define technocultural networks and shape a medium such as the World

Wide Web.

There have been numerous studies of the Web as a medium. Political economy

approaches have been useful in demonstrating the shaping of the Internet and the World

Wide Web by the market and the state (Lessig, 1999; McChesney, 2000; Mosco, 2004).

At the level of content, methodologies such as content analysis and discourse analysis

have been adapted to examine the meanings propagated through websites and Web

spheres and new methodologies such as hyperlink analysis (Garrido and Halavais, 2003)

have been developed to examine Web-specific textual characteristics. Methodologies

drawing on ethnography have been reworked to analyze the social consequences and uses

of those textual characteristics (Hine, 2000; Schneider and Foot, 2004).

This non-comprehensive list of the types of research being undertaken in

the study of the World Wide Web shows that scholars have partly adapted Stuart Hall’s classic

definition of a cultural studies approach to communication (1980). Indeed, Hall’s focus

on the encoding and decoding of messages invites us to explore the relationships between

frameworks of knowledge, relations of production and the technical infrastructures that

shape media messages. Cultural Studies approaches to technology have successfully

demonstrated that the study of content cannot be separated from the social, political and

economic context of communication. In turn, it is necessary to acknowledge that

technologies of communication are not simply carriers of content, but are parts of

complex technocultural entities that participate in the material constitution of discourse.

As is suggested by Jennifer Daryl Slack’s invitation to focus on “the interrelated

conditions within which technologies exist” (1989, p. 329), the analysis of the Web as a

social space of representation and discourse requires an examination of its material and

technological basis.

This chapter argues that the challenge in understanding the Web as a medium lies

in the acknowledgement of the Web as a layered technocultural entity, as an assemblage

of diverse technical tools, from the hardware, electric signals, algorithms, and protocols

of communication to the software, interface and sites of representation they offer. The

analysis of these layers needs to go beyond the categories of hardware and software

shaped by users. In order to see how these layers are made to act as a medium, it is

necessary to not treat them as separate, hierarchized entities. Rather, this research calls

for considering the links between these layers through an analysis of the junctures where

technocultural agencies are negotiated and mediated.

The research uses a variety of theoretical approaches in order to acknowledge the

complexity of the Web as a medium, a set of technologies, a cultural form and a space

where discursive formations are produced. A comparison between medium theory (Innis

1951, McLuhan 1995, Meyrowitz 1993, 1994) and “material” analyses of communication

(Kittler 1990; 1997, Hayles 1993; 2003; 2004, Hansen 2000, Gumbrecht 2004, Galloway

2004) highlights the general problematic in studying the characteristics of a medium: the

roles played by technologies not only in shaping new modes of representation, but more

importantly in shaping cultural changes and new social relations. The problem, then,

becomes one of analyzing the Web as a complex technocultural assemblage. Actor-

network theory provides the theoretical basis through which cultural studies, medium

theory and material analyses of new media and the Internet can be re-evaluated and

adapted to take into account the cultural effects of Web technologies.

1. Towards a Material Approach to Media Analysis: Medium Theory and Materialities

of Communication

The present research examines the World Wide Web as a medium, and not as a

set of information and communication technologies. This is not meant to deny the

importance of technologies. On the contrary, treating the Web as a medium means

conceptualizing it as an entity, as an agent and not as a neutral tool that faithfully mirrors

social and cultural processes without in some ways distorting and changing them.

In order to examine the relationships between media, technology and discourse, it

is necessary to establish a working definition of the concept of medium. Ian Angus

usefully argues that a “medium is not simply a technology, but the social relations within

which a technology develops and which are re-arranged around it” (1998). A medium,

then, is the space where technology, social relations and cultural processes are

articulated. A medium is a communication system, that is, an information delivery system

that, according to Meyrowitz, creates new social environments and is thus active in

bringing about social change (1986, p. 15). Perhaps the most satisfying definition for the

purpose of this study is to adopt Kittler’s scientific definition of media as forms of data

storage and transmission (1996). This definition expresses a theoretical shift in the

examination of the cultural impacts of media systems. By referring to “data storage and

transmission”, this definition calls upon a transmission model of communication rather

than the more accepted “ritualistic” model of communication (Carey, 1975) used for

cultural analyses of communication. This definition highlights the importance of the

technical capacities of a medium, and links them to the qualitative question of the “form”

of the medium. This definition shows that the cultural characteristics of a medium are

linked to its technical capacities. What the medium can or cannot transmit and how it

transmits information is crucial for understanding the kinds of social relationships and

power dynamics that can be developed through a specific media system.

Medium theory is a central reference for examining the impact of media systems

within the field of communication and cultural studies. Medium theory has its roots in the

works of the Toronto school, particularly those of Harold Innis and Marshall McLuhan,

who “developed a way of writing about western civilization by focusing on the media not

only as technical appurtenances to society, but as crucial determinants of the social

fabric” (Carey, 1968, p. 271). Innis and McLuhan illustrate two ways of focusing on the

cultural impact of media: while Innis (1951) paid attention to the social and political

transformations brought about by media technologies, McLuhan (1995) interrogated the

physical and psychological impact of media. Innis (1951) argued that writing and print

create the possibility of greater territorial expansion and control, thus allowing for the

creation of political and economic empires that control vast expanses of space. This space

bias of writing and print also has consequences for the ways in which knowledge is

defined in terms of abstraction and universality. McLuhan (1995) argued that the sensory

imbalance towards sight that is created by writing produces a cultural definition of

knowledge based on continuity, sequence, and rationalization. Both Innis and McLuhan

focused on the technical capacities of a medium such as print in order to flesh out some

of its cultural consequences. Such an approach that reintegrates technologies within the

study of media can be found in works focusing on new media and new information


technologies. Paul Virilio (2000), as well as Arthur Kroker and Michael Weinstein

(1994), can be considered affiliated with medium theory, as their analyses of

communication networks and of the proliferation of media highlight the rise of an

ideology of speed and virtualization, where the boundaries between reality and virtual

reality are constantly blurred.

Other scholars who do not particularly affiliate themselves with medium theory

nevertheless reintegrate an analysis of different technological components for

understanding the cultural impact of new media. Galloway (2004), for instance, examines

the protocols of the Internet, such as TCP/IP, with regard to the power dynamics they

create, comparing the technological concept of protocol with Deleuze’s societies

of control (1992) and Foucault’s bio-politics. Galloway’s Protocol (2004) departs from

more common analyses of information technologies focused on social uses. It participates

in the resurgence of the material turn in communication studies, especially in its analysis

of new media that have been described as dematerialized and dematerializing (Kitzmann,

2005). The theoretical need for an analysis of the material aspects of new media is best

expressed by Sean Cubitt, who argues that “in the half-acceptance of a view that they in

some way effectively dematerialize the older media, we have intellectually betrayed the

digital media” (2000, p. 87). The concept of “materiality” echoes and expands the main

concern developed by medium theory in that it allows for a focus on the technical and

material characteristics of a medium in order to assess its diverse impacts, from the

question of embodiment (Hayles 1993, 1994, 1999) to that of political and cultural

practices (Kittler, 1990, 1997; Galloway, 2004). One of the common features between


medium theory and material analyses is that the medium is seen as an agent of change.

Kittler’s reference to Nietzsche’s comment about his use of a typewriter, “our writing

tools are also working on our thoughts” (1997, p. 200), illustrates the importance of

specific technologies of writing.

typewriter changed the nature of writing. In the same vein, Hayles’ work is characterized

by the recognition that the arrival of new media and electronic literature undermines

the supremacy of print (2003). By challenging print, new media also point at the limits of

scholarly approaches that took the print media ecology as a given.

Medium theory and material analyses of communication both offer a common set

of research questions on the cultural impact of media. Furthermore, a common

assumption, or theoretical move, is their distanciation from the question of content. As

Meyerowitz describes it, medium theory is focused on examining the “relatively fixed

characteristics of a medium that make it physically, psychologically, and sociologically

different from other media, regardless of content and grammar choice” (1993, p. 61), that

is, regardless of the message being transmitted and the rhetorical effects being used. In so

doing, the approach developed by medium theory tends to ignore questions of content. As

McLuhan famously declared, “the medium is the message”: what is actually transmitted

is not content but psychological, physiological and social effects produced by the

capacities of different media.

In the same vein, material analyses that are derived from the Humanities (Kittler,

Hayles, Hansen) perform a similar distanciation between media and their content.

Theoretically, such a move corresponds to a reaction against hermeneutics. As Wellbery


describes it in the preface to Kittler’s Discourse Networks, hermeneutic theory

“conceives of interpretation as our stance in being: we cannot but interpret, we are what

we are by virtue of acts of interpretation. Hence the universality claim of hermeneutics,

its claim to the position of queen of the sciences” (1990, p. ix). In that sense, the material

turn in communication points out the limits of studying meanings in order to understand

cultural processes. Gumbrecht’s Production of Presence (2003) offers a historical

account of the beginnings of the “materialities of communication” movement within the

humanities in the 1970s. As Gumbrecht recalls, concepts such as “materiality” and the

“non-hermeneutic” were developed against the universality claim of interpretation (p. 2).

Rather, by repositioning the role of technologies of communication, material analysis

aims to demonstrate that the possibility of meaning production is contingent on the

technologies of communication available.

Hansen (2000) goes a step further in separating technology (including

technologies of communication) from text and discourse. Hansen posits technology as a

radical other, a second nature that impacts us on a primordial psychological and physical

level. In so doing, Hansen rejects culturalist approaches to technology by arguing for an

understanding of the experiential impact of technology. Hansen’s argumentation rests

upon a critique of the equation between technology and discourse. His key concept of

technesis is meant to represent the “putting into discourse of technology” (p. 20). For

Hansen, technology does not belong to the discursive, but to the real. Hansen not only

shares a view similar to McLuhan’s in that he dismisses the study of the content of media

messages, but more importantly, by rejecting the equation of technology as discourse,


that is, as a carrier of cultural meanings, Hansen attempts to avoid the reduction of

technology to an extension of the human mind. For Hansen, looking only for the

meanings and cultural values carried by technologies means positing that technology is,

to some extent, a social construction. On the contrary, Hansen argues that the impact of

technology takes place before the formation of discourse, at the experiential level. Such a

rejection of interpretation and discourse to understand technology is not only meant to

define new theoretical and research questions, but more importantly offers a critique of

the philosophical treatment of technology through a new definition of technology as a

second nature (p. 234). Technology is not a tool anymore, but a material force and an

agent.

As the various authors cited above show, the theoretical move deployed through

the focus on the question of materiality consists of extending the field of the Humanities

through dealing with research questions about the role of technology that were usually

the focus of other fields of research, such as science and technology studies (STS), while

abandoning traditional concerns with the question of content. Materiality studies and

medium theory see technologies as active in bringing social, cultural and psychological

change. The subsequent question is about how to trace these agencies, and this is where

Actor-network theory can bring some useful theoretical contributions.

2. Technologies as Actors: Actor-Network Theory, Cultural Studies and Medium

Theory

The development of an analysis of the technocultural materiality of the World


Wide Web stems from the recognition that in order to examine the Web as a medium, it is

necessary to focus on the technical mediations that make discourse possible. This

demands a conceptualization of the role played by technologies. The methods developed

by Actor-network theory are central to this conceptualization. As indicated by its name,

Actor-network theory examines the relationships among the actors that form socio-

technical networks. The term “actor” designates not only the human actors in charge of

implementing these systems, but also non-human actors, such as technologies,

institutions and discourses (Latour, 1999, pp. 174-215). ANT was developed as a form of

ethnomethodology within the field of Science and Technology Studies, and has been

mainly used to describe scientific processes (Latour, 1999) and the implementation of

technologies, from transportation systems (Latour, 1996) to information systems

(Avgerou et al., 2005). Cultural Studies approaches to technology have engaged with

ANT, especially in the works of Wise and Slack (Wise, 1997; Slack, 1989; Slack & Wise

2002). These types of cultural studies of technology and ANT share a common set of key

theoretical inspirations, among which the rejection of the modernist paradigm that

establishes hermetic and essentialist categories; i.e. technology vs. society, nature vs.

culture (Latour, 1993; Wise, 1997). The framework offered by ANT recognizes the

complexity of assemblages of human and non-human actors through an

acknowledgement of the limits of modernist categories. Latour (1993) uses the concept of

hybrid to show the impossibility of separating technology, science, nature, culture and

society. To use the example of the Web, the concept of hybridity underlines that the Web

is not simply a technology, but is also a cultural artifact and a political and economic


entity. The usefulness of ANT within a Cultural Studies framework lies in the

development of analytical tools to account for the multiple facets of this socio-technical

network.

Furthermore, ANT’s insistence that technological entities should be considered as

actors alongside human and other non-human actors leads to a critical assessment of the

concept of causality (Wise, 1987, pp. 9-13). One of the most provocative examples in the

examination of human and non-human actors is Latour’s Aramis, or the Love of

Technology (1996), which consists not only of a description of the relationship between

the different institutional bodies and human actors that were in charge of implementing a

failed transportation system, but also of giving voice to the technology itself. What

Latour suggests is that the relationship between social agents and technical actors is not

mono-causal, but reciprocal and multicausal, thus echoing the concept of articulation as

developed by cultural studies (Slack, 1996; Grossberg, 1987, 1996). What we see as mere

technological objects offer constraints and possibilities, and as such are best defined as

actors that develop spaces of agency. For ANT, the risk in focusing solely on the social

agents and cultural processes that shape technologies is to fall into some form of social

determinism, where technology is seen as a “receptacle for social processes” (Latour,

1993, p. 55).

The kind of analysis that ANT promotes seeks to open the black box - the “many

elements that are made to act as one” (Latour, 1987, p. 131) - in order to examine the

network of the many actors that constitute it. ANT invites us to see the Web not as a

computer network, but as a socio-technical network that assembles human and non-


human actors; computer developers, hardware, technical standards and protocols,

institutional bodies that regulate the architecture of the Web, software and software

developers, and users. As Latour and Callon (1981, p. 286, cited in Slack and Wise, 2002,

p. 489) argue, the concept of actor encompasses “any elements which bends space around

itself, makes other elements dependent upon itself and translate their will into a language

of its own.” In tracing the flows of agency that define the actors and their space of agency

within a network (Slack & Wise, 2002, p. 489), the approach developed by ANT is

reminiscent of the cultural studies concept of articulation as the “nonnecessary

connections of different elements that, when connected in a particular way, form a

specific unity” (Slack, 1989, p. 331).

Furthermore, ANT enriches the concept of articulation by defining it as a process

of mediation and translation (Latour, 1999, pp. 174-215). These terms are used to

describe the process of distortion of cultural, social and political ideals when they are

embodied through a specific technology. A whole socio-technical network is composed

through the processes of delegating tasks to non-human actors. Translation does not mean

direct correlation: while the technology being created is supposed to answer to these

specific cultural, social and political ideals, there is no guaranteed equivalence between

technique and ideals. The characteristics of the technology itself, and the setting into which

it is introduced, make these equivalencies problematic. Through the translation of ideals into

technologies, meanings change and evolve. The socio-technical hybrid that is being

produced represents a process of mediation, where the original meaning is changed

through its material implementation. Latour uses the example of the speed bump to


illustrate the process through which, by delegating a goal (slow down traffic) to a non-

human actor (the speed bump), the original meaning is changed from “slow down so as

not to endanger people” to “slow down and protect your car suspension” (Latour, 1999,

pp. 186-87). The effect might be the same, but the meaning has changed.

At the same time, Cultural Studies can complement ANT by reincorporating the

question of power into ANT’s analytical framework (Wise, 1987, pp. 33-36), through a

focus on the broad ideological, economic and political matrix or context within which an

actor network is being developed. ANT was developed partly in reaction to

macro-analyses that give all agency to ideology and the economy, to the extent that it has

failed to recognize the large-scale effects that are created through the stabilization of power

relations.

ANT needs to be adapted to answer questions related to the rise of media. Indeed,

one of the central questions that remains to be examined concerns what happens once Web

technologies, which are extremely standardized and automated, are deployed throughout

society so that they no longer solely belong to their creators, but materialize in specific

cultural processes. How are these technological layers made to act as a specific

medium? A comparison between ANT and medium theory becomes necessary. There is a

similarity between ANT and medium theory in the acknowledgement that technologies

are not neutral or passive, but rather active in promoting change and establishing new

social relationships. However, there are strong differences between ANT and medium

theory. Medium theory has been focused on large-scale social change through the

deployment of different media technologies, while ANT, an ethnomethodology (Latour,


1997), has traditionally been focused on more localized phenomena. Rather than

attempting to establish a broad picture of the social impact of the Internet, research that

uses ANT has focused on the development of a particular information system within a

specific organization (see, for instance, Avgerou et al. 2005). Furthermore, ANT is also

characterized by its rejection of pre-existing frameworks and categories in favour of

learning “from the actors without imposing on them an a priori definition of their world-

building capacities” (Latour, 1997, p. 20).

One of the fundamental differences between ANT and medium theory lies in the

problem of technological determinism. While Innis’ work has been critically assessed as

focusing on the cultural consequences of the conjunctures of media technologies and

social, political and economic processes (Buxton, 1998; Wernick, 1999), medium theory,

particularly the work of McLuhan, has been criticized for the ways in which it ignores the

institutional and cultural context that foster the development of specific media forms to

the detriment of others (Williams, 1975; Meyrowitz, 1994, p. 70). In the case of a

medium theory approach to the Internet, charges of technological determinism have

surfaced against the idea that computer networks have ushered in a new cultural order.

McLuhan’s global village, for instance, has been revived to express some of the

potentialities of information technologies in terms of reorganizing not only modes of

communication, but also social relationships and knowledge.

These types of utopian and dystopian discourses have been rightly criticized for

their failure to take into account the context within which new technologies are

developed (Mosco, 2004). Common criticisms, however, do not so much deny that


communication technologies have an impact, but rather show that there is a need to

distinguish between the ideological discourses that are constructed around technologies,

the ways in which technologies are appropriated by social and economic forces, and the

ways in which technologies sometime resist these appropriations and create new

possibilities. ANT’s invitation to examine in detail and without a priori the relationships

that form the networks within which new information technologies are located opens the

way for a recognition of the complex and paradoxical effects of media technologies. For

instance, the Web might be seen as yet another outlet for media giants (McChesney,

2000), at the same time as it offers the possibility for people to express themselves and

reach audiences through online forums and blogging. Through the mapping of the flows

of agency that circulate between human and technological actors within specific contexts,

ANT helps us recognize that there might not be a simple pattern to the relationships

between media and culture.

3. Analyzing Web Technologies: The Problem with Essentializing Medium

Characteristics

It seems difficult, then, to establish any links between ANT and medium theory,

which can be characterized as an attempt to find the essential features of media

technologies regardless of their contexts of use and deployment. However, going back to

the limits of ANT with regards to the acknowledgement of broader structures of power

mentioned above, there is a need to recognize that although there are problems with

essentializing approaches to media, there are some stable features that are established

over time. For instance, the uses of the Web might be paradoxical, but representational


forms on the Web are fairly harmonized through Web protocols and design conventions.

This leads us back to the question of the ways in which the technological layers of the

Web offer a certain range of possibilities and delineate the fields of agencies within

which they are articulated.

A medium theory approach to the World Wide Web calls for an examination of

the technical characteristics of the World Wide Web and the ways in which these

characteristics offer new cultural possibilities. Medium theory invites us to explore not

only what is beyond the surface of Web representations--the social context--but also what

is below--the hardware and software that shape what we see on our computer screens as

mediated messages. For instance, Kittler’s concept of discourse network invites us to

consider the technical processes of specific forms of communication in order to uncover

the ways in which the machinery of computer networks shapes knowledge according to

specific technico-cultural rules. Adapting such an analytical framework to the Web, then,

demands an acknowledgement of the complex processes of computing in terms of their

ability to create specific possibilities of communication.

The examination of these possibilities as they are expressed on and through the

Web raises the question of how we should apprehend the problem of technological layers.

The Web is, after all, only a service allowing for hypertextual communication that makes

use of the communication infrastructure offered by the Internet. The Internet, in turn,

allows for communication between computers through the definition of specific

protocols. At its basis, the Internet makes use of common computing principles that

transform electric signals into binary code, which is then processed through algorithms


and Boolean algebra. In that sense, it is possible to see the Web as the extension of basic

computing principles at a global level. A medium theory approach to the World Wide

Web can then be defined as focused on the cultural and philosophical values embedded in

these basic computing principles. For instance, Kien underlines that the philosophical

works on logic by Leibniz and Boole are at the basis of computer engineering (2002, pp.

29-30). The computer, as the embodiment of Leibniz’s and Boole’s scientific methods for

assessing truth through the use of monist rationality, propagates a new way of seeing the

world by transforming everything into arbitrary symbolic signs that can be subjected to a

series of calculations. As Bolter argues, the translation of information into digital code

that can be manipulated through algorithms tends to erase the “dividing line between

nature and the artificial” (1984, p. 218). Computing principles, then, invite us to conceive of

the world and ourselves as data that can be logically analyzed.
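The reduction of heterogeneous signs to a calculable symbolic substrate can be illustrated with a short sketch (a hypothetical illustration, not a description of any actual system; the choice of UTF-8 encoding is an assumption made here for concreteness):

```python
# Illustrative sketch: once encoded as binary symbols, any sign becomes
# subject to uniform Boolean calculation, regardless of what it signifies.

def to_bits(text):
    """Encode a string as a flat list of bits (8 bits per UTF-8 byte)."""
    return [int(b) for byte in text.encode("utf-8")
            for b in format(byte, "08b")]

def bitwise_and(a, b):
    """Boolean conjunction applied position by position."""
    return [x & y for x, y in zip(a, b)]

# Two culturally distinct signifiers reduced to the same symbolic substrate:
nature = to_bits("nature")
device = to_bits("device")

# A pure calculation on those symbols; meaning plays no role in it.
print(nature[:8])
print(bitwise_and(nature, device)[:8])
```

The point of the sketch is that the conjunction operates identically on any input whatsoever, which is precisely the monist rationality the paragraph above describes.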

As Lister et al. argue, the principle of digitization is “important since it allows us

to understand how the multiple operations involved in the production of media texts are

released from existing only in the material realm of physics, chemistry and engineering

and shift into a symbolic computational realm” (2003, p. 16). This can be seen as an

extension of Bolter’s argument that computing fosters a blurring of the natural and the

artificial. The idea of dematerialization, which is often referred to in characterizing

digitization, offers an illustration of this. Dematerialization can be taken as problematic

and paradoxical in that it does not mean the absence of material supports for

representation, but rather points to the new material relationships that are established

between content and hardware. Computing dematerializes representations through a


series of calculations in order to make them readable on a range of devices (computers, PC

or Mac, PDAs). A digital picture, then, is not something produced by a camera through

chemical processes, but a representation that samples the “real” and that can be easily

manipulated. What the concept of dematerialization highlights is that the status of

images--and similarly videos, audio pieces and texts--is different when mediated by

computers. Manovich offers an illustration of the importance of the binary code when he

defines the five principles of new media as numerical representation, modularity,

automation, variability and transcoding (1999, pp. 27-48). What these principles suggest

is that the production of representations through computers makes representations

malleable through the artifice of the binary code.

Consequently, the question that is raised relates to the status of ordinary language

itself as it is processed through and mediated by computer languages. An illustration of

this is the new problematic that is raised by the production of signs through computing.

Following Saussure (1983), the sign is made up of a signifier--i.e. a string of letters--and

a signified--the concept that is attached to that specific string of letters. Processing signs

through computers requires another layer of mediation, in that the signifier itself needs to

be mediated. A word processor, for instance, allows users to create signifiers by

recognizing that different keys on the keyboard are associated with specific letters. This

operation requires that the act of typing be converted into binary code that is then

processed in order for the output--the word on the screen--to appear. In that sense, the

seemingly instantaneous act of typing, and by extension, the seemingly instantaneous act

of recording sound with an MP3 player or having a picture displayed on the screen of a


digital camera, is actually a complex process that requires a mediation of the signifiers.
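The extra layer of mediation described above can be made visible in a few lines (a hypothetical sketch; the UTF-8 encoding and the function name are illustrative assumptions, not a claim about how any particular word processor works):

```python
# Illustrative sketch: the path from a keystroke to a glyph on screen
# passes through a numeric, then binary, mediation of the signifier itself.

def mediate(char):
    """Trace one character through its computational mediation."""
    code_point = ord(char)           # the letter becomes a number
    encoded = char.encode("utf-8")   # the number becomes bytes
    bits = " ".join(format(b, "08b") for b in encoded)  # bytes as binary
    decoded = encoded.decode("utf-8")  # and back into a displayable sign
    return {"char": char, "code_point": code_point,
            "bits": bits, "decoded": decoded}

step = mediate("A")
# The typist experiences "press A, see A"; the machine performs the
# round trip sign -> number -> bits -> sign in between.
print(step)
```

The seemingly instantaneous appearance of the letter thus conceals the round trip through binary code that the paragraph above identifies.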

As Kittler suggests in Discourse Networks (1990), the area of exploration that emerges

from this focuses on the ways in which changes in the material basis for representation

change the cultural concept of representation itself, and by extension relations of power

and what we understand as knowledge.

It thus becomes necessary to explore the ways in which the World Wide Web

extends these principles of malleability, artificiality and mediation through binary code.

While regular users never see the strings of zeros and ones that are processed by the

computer, these operations are essential in that they shape the representations that appear

on the computer screen. In that sense, the mediated texts that circulate on the Web can be

seen as extensions of monist rationality, Boolean logic and algorithmic formulas mixed

with the electric signals of the hardware. This allows us to reconsider Manovich’s remark

about the transition from calculation to representation in a new way. Whereas Manovich

considers this transition in historical and genealogical terms, it also appears that this

problematic bridging of mathematics and culture is one of the omnipresent (that is,

always necessary) characteristics of new media, including Web communication. In

particular, the necessary involvement of mathematical formulas in the production of

cultural representations raises questions as to the relationships between information and

its material support, as discussed previously with the question of dematerialization. More

generally, in the computing process, the mathematical layer becomes a new mediator that

encodes physical input and decodes it as a string of signifiers. This inclusion within the

semiotic process was absent in pre-computer forms of printing and writing.


Kittler (1995) goes a step further in examining the relationship between code and

culture by declaring that “there is no software.” Kittler focuses our attention onto the

hardware of the computer in that “all code operations (...) come down to absolutely local

string manipulations and that is, I am afraid, to signifiers of voltage differences.” In

particular, Kittler usefully points out the ways in which the software layer is constructed

so that computer language can appear as everyday language. This prevents us from

focusing our attention on the effects of the computer as a medium. As Kittler declares:

“What remains a problem is only the realization of these layers which, just as modern

media technologies in general, have been explicitly contrived in order to evade all

perception. We simply do not know what our writing does.”

Kittler’s conclusion regarding the unique characteristics of computer

communication presents us with several insights as well as unresolved questions. By

focusing on the technical conventions of computer communication, Kittler usefully points

out that the study of computer-mediated texts does not consist simply of studying the

interface, but more importantly of rediscovering the hidden processes that make the

existence of text and discourse possible. As Hayles (2004) argues, we need to recognize

that whereas print is flat, code is deep. However, one limit of Kittler’s approach, which is by

extension a problem in finding out the unique characteristics of a medium, is a tendency

to reduce a complex technical system to one essential operation--i.e. the production of

electric signals. This is where ANT can be used to investigate the relationships between

the elements that form the technical materiality of the Web.


4. The Web as a Layered Technocultural Entity

While the layers of hardware and software that encode knowledge as electric

signals and data are invisible to the user, they actually promote specific ways of using

and interacting with messages and offer new cultural definitions of knowledge and

discourse--as malleable representations, for instance--that are medium-specific. However,

it must also be acknowledged that the algorithmic processing of electric signals is only

one of the elements that construct something such as the World Wide Web. That is, if the

World Wide Web establishes rules to transmit and represent data, it should then be

looked at in terms of the kinds of principles it propagates. The process, then, is not one of

peeling back the layers to get at some core essential feature, but one of studying their

interactions. As Galloway (2004) describes it in his analysis of protocol, the exploration

of technical standards must take into account the multiple layers of technical encoding

that are used to build the World Wide Web:

...the content of every protocol is always another protocol. Take, for example, a typical transaction on the World Wide Web. A Web page containing text and graphics (themselves protocological artifacts) is marked up in the HTML protocol. The protocol known as Hypertext Transfer Protocol (HTTP) encapsulates this HTML object and allows it to be served by an Internet host. However, both client and host must abide by the TCP protocol to ensure that the HTTP object arrives in one piece. Finally, TCP is itself nested within the Internet Protocol, a protocol that is in charge of actually moving data packets from one machine to another. Ultimately the entire bundle (the primary data object encapsulated within each successive protocol) is transported according to the rules of the only “privileged” protocol, that of the physical media itself (fibre-optic cables, telephone lines, air waves, etc.). (pp. 10-11)
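The encapsulation Galloway describes can be sketched schematically (the dictionary framing below is an illustrative assumption; it does not reproduce the actual wire formats of HTTP, TCP or IP):

```python
# Schematic sketch: each protocol layer wraps the previous one, so that
# "the content of every protocol is always another protocol".

def wrap(protocol, payload):
    """Encapsulate a payload under the name of a given protocol."""
    return {"protocol": protocol, "payload": payload}

html = "<html><body>Hello</body></html>"   # the HTML-marked-up page
http = wrap("HTTP", html)                  # served as an HTTP object
tcp = wrap("TCP", http)                    # delivered in one piece by TCP
ip = wrap("IP", tcp)                       # moved packet by packet by IP

# Unwrapping reveals the nesting: IP -> TCP -> HTTP -> HTML.
layer, names = ip, []
while isinstance(layer, dict):
    names.append(layer["protocol"])
    layer = layer["payload"]
print(names, "->", layer)
```

The unwrapping loop makes Galloway's point concrete: what each layer treats as content is itself another protocol, down to the innermost marked-up page.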

What Galloway suggests is that an examination of the physical and mathematical

structure of the World Wide Web is not enough. A representation on the World Wide


Web is produced through the interaction between different layers of codes and protocols,

and different layers of hardware and software. The question, then, is not so much one of

finding the fundamental technical characteristic of a medium so as to draw some essential

cultural characteristic, but to examine its technical formation in genealogical terms. As

the “network” approach developed by ANT suggests, there is a need to problematize the

description of the Web as layers of technical processes. That is, it is necessary to

investigate not only what forms these layers of hardware and software, but also how they

are related to each other and how they potentially influence each other. ANT’s invitation

to treat technological objects as actors becomes all the more relevant when dealing with a

complex automated system such as the Web, and examining the relationships,

articulations and translations among these actors could lead to a better understanding of

the characteristics of Web communication.

A starting point for examining the layers that constitute the Web is the analysis of

the different cultural values that are encoded within the technical objects and processes

that form the Web. In that regard, electric signals, algorithms, binary representations and

the Leibnizian and Boolean logic they embody are but one part of the problem. What also

needs to be considered, as mentioned earlier, is hypertext as a connection device that was

supposed to be an extension of the mind but was also reshaped according to specific

power relations (Moulthrop, 1994). Also of importance is the conception of the Internet

as a distributed network, an anti-hierarchical structure that seems to embody Deleuze and

Guattari’s concept of the rhizome (1987). In so doing, the goal is to examine the

conjuncture of different technocultural processes and the hybrids they produce.


This cultural level of analysis, however, is not enough by itself. Another way of

analyzing the layers that form the Web is to consider the rules of transmission they

propagate. At the level of transmission over the Internet, the works of Lessig and

Galloway offer a first foray into the space of agency of computer networks defined as

networks of struggles and power relationships. In particular, Galloway critically assesses

distributed networks such as the Internet by examining the ways in which protocols--the

sets of “recommendations and rules that outlines specific standards” (2004, p. 6)--have

become sites of struggle. For Galloway, the equation between distributed networks, in

which there are “no chains of command, only autonomous agents operating according to

pre-agreed rules, or protocols”, and the concept of a free, uncontrolled and uncontrollable

network does not hold. While the concept of protocol is anti-hierarchical, it is still a

“massive control apparatus” (2004, p. 243). As protocol defines the rules of data

exchange, its potential can be dangerous. For instance, the protocols that make the

Internet an open network are also the ones which allow for something like surveillance to

exist. Furthermore, the actors in charge of defining the protocols and rules of Internet

communication can also be criticized for representing specific interests. ICANN (Internet

Corporation for Assigned Names and Numbers), for instance, has come under fire for

privileging US interests (Wikipedia, 2005a, 2005c). Thus, regulatory bodies can serve

specific interests and can reintroduce hierarchical and centralized relationships within

networks that were previously distributed (Galloway, 2004; Lessig, 1999). The

fact that protocol is not only a set of technical standards but also the site of power

struggles thus illustrates the ways in which a study of the underlying technical structure

of the World Wide Web is important.

Such approaches to the rules of transmission over the Internet need to be extended

to the rules of transmission of the Web and other processes that make computer

communication possible. To expand on Galloway’s comments on the layers that form the

Internet, it is not really a question of “privileging” one protocol over another, but rather

of examining the ways in which physical signals get encoded and translated through

different protocols. ANT’s concept of mediation as the examination of the ways in which

an original meaning, goal or intent is changed and distorted through the process of

technical translation is important here (Latour, 1999, pp. 176-193). That is, there might

be a need to use ANT’s concept of mediation not only with regard to the relationships

between the human and the non-human, but also with regard to the relationships and

negotiations between a set of automated non-human actors. In the final instance, the

examination of these relationships should operate not only in terms of transmission, but

also in terms of the politics of representation.

5. Technologies of the Web and the Question of Representation

At the beginning of this chapter, it was pointed out that one of the reasons why

computer communication is important for media studies is that the computer is not

simply a transmission device, but also a device for representation. The question of layers,

then, concerns not only the protocols that are used for ensuring communication between

computers, but also requires a consideration of the ways in which technical elements

participate in the construction of representations, that is, the ways in which they enable

specific practices of meaning-making. There is a need to understand the relationships

between the layer of transmission and the layer of representation. The layer of

representation brings us back to the most visible layer of the Web--the interface. While

the interface is designed to be manipulated by users, I would like to focus on treating the

technological elements that form this layer as actors, and not simply as tools to be used.

The reason for this is to highlight the space of agency of these software actors in order to

examine their deployment as communicative agents.

Web standards should not only be analyzed as transmission devices, but also as

representational devices. In order to carry out this shift, it is also important to consider

technical standards and computer processes not only in terms of the control and limits

they express, but also in regard to the cultural environments they create. The agency of

software needs to be fully acknowledged, as software becomes not only the actor with

which users have to interact but also the mediator that defines the conditions and cultural

richness of these interactions. This recasts the analysis of computer code in terms of

exploring the ways in which cultural experiences are constructed through a series of

encodings that “encapsulate information inside various wrappers, while remaining

relatively indifferent to the content of information contained within” (Galloway, 2004, p.

xiii). This opens the way for a reassessment of the relationship between meaning and

computing and between media representation and artificial calculation. Positions such as

the one developed by Kittler (1995) when he declares that “there is no software”, that is,

that software is just an illusion masking specific technical processes, need to be critically

assessed. If the software layer is that which creates the connection between the ordinary

languages that are used to articulate representations and the hardware layer of electric

connections, its role as yet another mediator needs to be taken into account. If we start

from the premise that computer networks such as the World Wide Web become

important only when they develop the capacities to encourage the production of meaning,

we have to focus on software as being that which fabricates these cultural opportunities

out of the hardware.

It is first necessary to further define the differences and relationships between

software and hardware. While the hardware refers to the physical parts of the computer--

from the motherboard to the modem--the software refers to the computer programs that

are stored in a computer. The system software is defined as the set of computer programs

that help run the hardware, for instance, operating systems such as Windows. Earlier

parts of this chapter reviewed the role of system software in terms of its implementation

of a specific form of logic, but it is also necessary to focus on software in terms of its

signifying function. In that regard, it is important to look at the application software,

which is developed so that users can accomplish several tasks (Wikipedia, 2005b). There

are thus several layers of software, and each of these computer programs has specific

goals. For instance, the range of software needed in order to produce a website includes

programs to create and edit graphics, an editor that can add the HTML, XML or

JavaScript descriptors, and sometimes a program to create animations such as Flash

animations. The kind of application software needed for the user is a Web browser. A

Web browser is capable of translating data, text, codes and protocols into a Graphical

User Interface (GUI) that uses ordinary language and representations to make data

intelligible and thus allows the user to draw meanings from what appears on the screen.
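To illustrate this translation with a constructed fragment (not drawn from any specific website), the HTML descriptors mentioned above are instructions that a browser turns into a visual layout:

```html
<!-- A minimal, constructed HTML fragment: the tags are descriptors
     that a browser translates into a graphical representation -->
<html>
  <head><title>Example page</title></head>
  <body>
    <h1>A heading rendered in large type</h1>
    <p>Body text, with a <a href="http://www.example.org">hypertext link</a>
       that the browser renders as a clickable element.</p>
  </body>
</html>
```

Read as code, the same file reveals the layer of encoding that precedes the visible representation; read through the browser, it appears only as formatted text and links.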

Software is important in the study of the World Wide Web as it is that which

“forges modalities of experience--sensoriums through which the world is made and

known” (Fuller, 2003, p. 63). There has been a great interest in software in terms of the

legal issues that are raised through the copyrighting of software, and the alternatives

offered by the open software movement. What these issues illustrate is that software does

not simply raise commercial issues, but also cultural ones. As software is the technical

means through which one can express oneself, it is in some ways akin to the alphabet

(Mezrich, 1998). However, software is not simply a means to an end, but also a computer

program that defines the ways in which users can interact with texts. To reformulate the

agency of software in Foucauldian terms, software is part of the assemblage that defines

the rules of discourse, and thus a specific range of activities for users. In that sense, it is

interesting to notice that the field of software analysis seems to have been mostly ignored

by cultural studies. Web design programs such as Dreamweaver are interesting objects in

that they embed some of the conventions of Web presentation by giving the user a

predetermined range of choices in how to organize information in Web format, such as pre-

designed pages, framesets, and CSS styles. Design conventions are embedded in the

software, and propagate specific ways in which information should be packaged. As

such, web design software participates in the development of specific rhetorical strategies

for Web texts.

The aesthetic examination of software reveals some of the specific cultural

representations that surface on the World Wide Web. Manovich (2001), for instance, sees

software in a historical perspective, by arguing for an understanding of the similarities

between the avant-garde movement and the representations that are made through

computer software. The presentation of hyperlinked and coexisting multimedia elements,

for instance, is reminiscent of “visual atomism”, and the windowed presentation of the

Graphical User Interface (GUI) can be traced back to the montage experiments carried

out by Dziga Vertov, among others. In some ways, then, there is an intriguing evolution

of the concept of the artifice of the virtual as it changes from the artificial coding of

information, to the creation of representations that acknowledge their artificial combining

of disparate elements in order to foster meanings that can be communicated to users.

Consequently, the form of the Web--its technical structure--influences the content

that is being propagated. The software layer allows for the representation of information

and data through metaphors which, as Fuller explains, generate “a way for users to

imaginarily map out in advance what functional capacity a device has by reference to a

pre-existing apparatus” (2003, p. 100). In the case of the World Wide Web, it is

interesting to notice that the experience of surfing is actually an act of “invocation”

(Chesher, 1997) or of “calling up HTML files on a computer” (Fuller, 2003, p. 87). The

spatial imagery of surfing and of cyberspace in general is but “an effect of the bricolage

of digital images, text, and other elements linked together by hypertext references”

(Shields, 2000, p. 145). The “bricolage” is important here as the process whereby the

technocultural assemblages that form the World Wide Web act to represent data and in

that sense establish some of the rules of discourse on the Web.

6. Towards a Technocultural Approach to the Politics of Representation on the Web

The main argument put forward is that the Web should be analyzed as an

assemblage of technocultural layers in order to examine how the conjuncture of these

layers constitutes and creates the Web as a medium. The question of layers points to the

need to not reject any of the technocultural components of the World Wide Web as

irrelevant. Thus, it is not a question of privileging medium over message, or transmission

over representation. Rather, considering the Web as an actor-network allows for a

recognition of the complex relationships between technology, language and culture.

The broad analysis of different approaches to media and to the World Wide Web

in the previous sections of this chapter helps delineate three types of technical layers that

constitute the World Wide Web. The first layer is the hardware layer, or the set of

technical agents allowing for the actual production and circulation of electric signals over

networks. The second layer is the code layer that defines the rules for data exchange. The

third layer is the representational layer and includes the interface. These layers are

difficult to separate - could protocols and software even exist without the hardware?

Would a piece of hardware do anything without some system of representation?

However, such categorization is useful for further locating the purpose of this research as

an attempt to understand the interaction between these layers. Furthermore, the ways

different aspects of these layers have been treated in the literature on the Internet and the

World Wide Web underlines some of the research gaps that the present project aims to

fill in. In general, analyses from the social sciences that have included Internet and Web

technologies have mostly focused on the question of transmission. While there are

analyses focused on the question of representation, they have been focused on new media

in general rather than the everyday Web.

The question of transmission is found in research focused on the effects produced

by new information and communication technologies. An important body of research can

be found in this area, for instance, Virilio’s work on the ideology of speed in the

information age, and also sociological analyses of ICTs focused on the new social

relationships that are produced through computer networks. Castells (2000) can be

considered as affiliated with this, as his goal is to account for the social and economic

changes and processes created in the information age. In particular, Castells’ argument

that information technologies allow for the extension of communication networks that

reorganize economic and social flows is based on a consideration of the properties of the

hardware and not so much on the content carried by these networks. Thus, who is

connected to the network and who is not reveals the new power relationships and social

inequalities produced by information capitalism. Dyer-Witherford (1999) and Robins and

Webster (1998) develop a similar kind of analysis through a political economy

perspective. For Dyer-Witherford, capital can integrate new social sites and activities

through information technology - for instance, the biopolitical system created through the

informatization of life. Information and communication technologies ensure the smooth

flow of the circuit of capital into all spheres of life. In a similar vein, Robins and Webster

argue that the communication revolution is not simply economic and technological, but

also social in that it restructures everyday life, in particular through the abolition of the

difference between work time and free time and through the rise of new systems of

surveillance. Analyses of the political consequences of networks on the weakening of

democratic ideals have been done from various angles - not only political economy, but

also through using Marxism and phenomenology (Barney, 2000), and critical theory

(Dean, 2002). For Barney, the invocation of humans through network technologies as

“standing reserves of bits” (2000, p. 192) does not foster democratic reflection and

exchange, but rather data mining at the global level. For Dean (2002), the proliferation of

computer-mediated communication and information exchange does not foster a public

sphere, but rather serves to create a form of communicative capitalism that limits the

horizon of public discussions through the matrix of publicity/secrecy. The question of

networks can be covered through multiple angles, some technologically deterministic,

others focused on the social integration of technology. The common research point,

though, is to examine what the possibility of instantaneous electric communication allows

for in terms of new power relationships. These types of analysis do not focus on the

content transmitted through computer networks and do not distinguish between different

computer networks (such as the Internet, intranets, and private networks).

With analyses focused on the code and protocols of Internet, there is a greater

focus on specific technologies of transmission and their social effects or social shaping.

Lessig’s and Galloway’s works represent such an approach, especially as they focus on

the technical infrastructure of the Internet as a site of power struggle. Whereas for

Galloway, the protocols of the Internet create new forms of control, Lessig focuses more

on the kind of freedom offered by the Internet, and the role played by government and

market regulations in ensuring or destroying that freedom of information exchange. In

Code, and Other Laws of Cyberspace (1999) and in parts of The Future of Ideas (2001),

Lessig focuses not on the physical layer (the hardware) or the content layer (the message)

but on the code layer - that which makes the hardware run. In particular, Lessig focuses

on the code of the Internet as a site that influences what kind of communication is

available on the Internet. As Lessig (2001) recalls, the protocols of the Internet

encouraged innovation and minimized control. Thus, code is law in that it defines the

possibilities of communication on the Internet. However, the architecture of the Internet

is now changing, and governments and commerce are increasing their ability to control

cyberspace (2001, pp. 236-237). Thus, there is a growing concern with the growth of

control technologies that regulate the content, code and applications used on the Internet.

Code is law, but it can also be controlled by the market and the state - the potential of

codes can be articulated to fulfill specific needs that, in the end, pervert democratic ideals

of communication. In the same vein, Galloway’s examination of protocol engages in a

multi-faceted analysis. Protocol is not simply a physical science; it also has formal

qualities and is “a type of distributed management system for both human and non-

human agents.” Galloway’s analysis consists not only of examining the potentials of

protocol, but also the ways in which it is articulated through technocultural networks.

That is, technological potentials have to be examined in terms of the ways in which they

are realized or not through their embedding within actor-networks. In particular,

Galloway concludes that there have been failures of protocol, in that its potential has

been limited by organizations such as ICANN. However, Galloway does not conclude that

institutions have the upper hand in defining what a technology is. Rather, the possibilities

embedded in protocol can be both beneficial and dangerous. For that reason, protocol can

be rearticulated to fulfill political goals other than the transformation of the Internet and

the Web as commercialized and privatized spaces of surveillance. Both Lessig and

Galloway show that there is no simple way of analyzing the relationship between

technology and culture. One has to proceed through a mapping of the agencies that are

distributed within technocultural networks, and among institutional actors, physical

objects, codes and protocol. Their political projects of redeploying code and protocol to

serve more progressive political ideals aim at developing new networks to redistribute

flows of agencies and potential.

There is thus a broad body of research focused on the question of transmission,

although the focus has been primarily on computer communication in general and the

Internet in particular, with the World Wide Web being considered as a subset of the

Internet. While this is a valid approach to the Web from a transmission perspective,

examining the Web as a medium also requires an analysis of its specific characteristics.

While there is an extremely relevant body of work on the interface (Manovich, 2000;

Bolter and Grusin, 1999; Johnson, 1997) and on software (Fuller, 2003) from a cultural

perspective, these works tend to focus on new media in general and not specifically on

the World Wide Web. For instance, Bolter and Grusin’s Remediation (1999) represents

an attempt to study the cultural characteristics of new media through a medium theory

approach. In particular, their concept of hypermediacy as the effects through which new

media are made present rather than transparent offers a critical vocabulary for

understanding the aesthetic presence of new media. Bolter and Grusin draw on

McLuhan’s argument that the content of a medium is another medium in order to develop

a genealogical approach to the aesthetic of new media (1999, p. 47). Their approach is to

examine the ways in which old and new media imitate and compete with each other (p.

189). Video games can thus refashion cinema, while the TV screen of CNN, for instance,

mimics the multimediality of the Web. In so doing, one of Bolter and Grusin’s

conclusions is that “new media are old and familiar”, that “they promise the new by

remediating what has gone on before” (p. 270). Bolter and Grusin efficiently demonstrate

the ways in which new media rely on the form of older media in order to be accessible.

At the same time, Bolter and Grusin’s approach tends to fall into another extreme; that of

erasing the particular characteristics of new media through a constant focus on the

presence of older media behind the new.

This research aims to fill in some of the gaps in the analysis of the Web interface,

but also has a broader scope. With regard to the question of transmission, the literature

review presented above shows that there is already a rich body of work. Thus, questions

about the effects of networks, the concept of instantaneous communication, the conjuring

of the world as data, for instance, are not central research points, but are rather considered

as part of the context within which the Web is developed as a medium. It is expected that

these questions will surface, but they are not the primary object of investigation. The

main research proposition is that there has not been sufficient attention paid to the

articulation between the code layer and the interface layer - between the different

languages of the Web and the software that allows for the representation of information.

Thus, the case studies explore the genealogy of specific discursive characteristics of the

Web that are encoded and decoded by Web software in order to examine the rise of new

communicational and cultural practices, and of the new social realities for the human and

non-human actors involved in these communicational practices.

However, undertaking such an analysis poses theoretical and methodological

challenges, particularly as the focus is not simply on examining the relationship between

technology and culture, but on examining how specific technologies of expression

interact with cultural processes to create the Web as a medium establishing new social,

hermeneutic and cultural realities. In short, the focus is on software that shapes data and

informational input into representations, and on the cultural dynamics that emerge

through this process. As argued throughout this chapter, ANT offers a first step towards

examining the role of technologies in the cultural process. As a methodology, ANT is

designed to focus on specific technico-social events, and proceeds by following the actors

of a specific actor-network. ANT’s descriptive process aims at analyzing how intentions

are materially transcribed through technical actors. In so doing, the description of

technical actors proceeds by examining how these actors act either as intermediaries that

faithfully transmit original goals and meanings, or as mediators that translate, distort and

change these meanings. As Latour puts it, ANT is focused on examining material objects

when they are visible, that is, when social ties are built through them (2005, p. 80).

However, ANT focuses on communication technologies as technological actors but not as

media actors promoting both specific rules of transmission and specific

representational, semiotic systems. Thus, whereas ANT is extremely useful in revisiting

medium theory so as to correct its essentializing tendency and offer an analytical

framework that takes technocultural articulations into account, it needs to be

supplemented with a framework capable of examining the relationships between

technology and language. Even medium theory fails in offering a framework for

analyzing these relationships, as it has been established on the assumption that the

medium and the message should be separated, and that the medium is more important

than the message. In so doing, questions related to language and how technologies of

communication have an influence on the shaping of meaning are ignored. There is a need

to turn to theories of language and meaning in order to define a framework capable of

encompassing the technocultural elements that participate in the shaping of meaning.

Chapter 2

Web Technologies, Language and Mixed Semiotics

The first chapter argued for an understanding of the role played by technologies in

cultural contexts that departs from a model founded on the tacit separation between the

technological and the cultural. Revisiting medium theory’s focus on examining the

characteristics of a medium through Actor-network theory allows for the deployment of an

analytical framework taking into account the continuity between technology and culture -

the technocultural junctures, articulations, mediations and translations where

characteristics and agencies are defined. What is the kind of framework needed for the

examination of technocultural networks at the level of language? Neither medium

analysis nor material analysis answers this question, as both tend to focus on the

effects of media, be they physiological, social, political or psychological, but not on the

impacts of media technologies on language, and on the processes through which

meanings can be constituted and communicated. The separation between medium and

content, as well as that between medium theory and semiotic analyses, needs to be

critically assessed in order to establish a semiotic framework that goes beyond an

analysis of signs and meanings. There is a need to incorporate an analysis of what

Deleuze and Guattari call “regimes of signs” that constitute “semiotic systems” that are

based on assemblages and networks that are not primarily linguistic and involve an entire

field of power relations that cross through technological, social, political and economic

domains (1987, p. 111).

It is then possible to analyze the production and circulation of meanings through

technocultural processes with a methodological framework that takes into account not

only linguistic processes, but also the non-linguistic processes from which

representations, and the meanings carried by these representations, can emerge.

Examining the semiotic process can thus lead to a mapping of knowledge and power

relationships to discover not only the rules of expression, but also how they are

constructed through technocultural flows of agency present in the process of mediation.

This requires a critique of methodologies focused on language, meaning and discourse,

particularly through the work of Deleuze and Guattari on mixed semiotics and the

development of an a-semiotic model.

1. The Technocultural Dimensions of Discourse

While material analyses of communication and medium theory are useful for

pointing out some of the blind spots in current research on the Web, it is arguable as to

whether technology and the message being transmitted through the use of technology can

be separated. Wise (1997) offers an important argument when he states that the problem

with technologies of communication is that:

they appear to embody both technology (the device) and language (the content being broadcast or transmitted). This makes them often difficult to analyze in these terms - though crucially important - because they seem to slip to one side or the other like a watermelon seed. (p. 72)

That is, the problem with communication technologies is that technology and language

are not distinct spheres, but part of a continuum - the message being produced involves

the deployment of a technological and technocultural apparatus and is at the same time a carrier

of cultural meanings. In that sense, the point made by Hansen (2000) in separating

technology from technesis, or the putting into discourse of technology, becomes

problematic in the case of communication technologies. There is a difference to be made

between technologies to produce objects and technologies that are designed to produce

and transmit information and meanings. Arguing for a separation between the

experiential impact of communication technologies and the cultural meanings transmitted

through messages fails to acknowledge the continuum through which this experiential

impact resonates in the circulation of meanings and discourses. The message, in that

sense, carries traces of the medium. By drawing on Deleuze and Guattari, Wise aims to

abolish the difference that can be felt in other theoretical approaches to communication

between technology as having real, material consequences and content as taking place on

an insubstantial plane of meaning. On the contrary for Wise, both technology and

language have material effects. Technology concerns “the direct manipulation of real

elements” (the use of tools), and language (the use of symbols) “refers to a regime of

signs” and by extension to the “distribution of certain discourses in social space” (1997,

pp. 62-63). Both technology and language have material effects in that they manipulate

and establish relations between social actors (1997, p. 63). The question, then, lies in the

examination of how technology and language are articulated.

Wise’s discussion of language as the distribution of discourses is based on a

Foucauldian definition of discourse. Discourse understood in that sense is “the ability of

distributing effects at a distance, not just meaning and signification” (1997, p. 63).

Following Foucault, discourse analysis is not only focused on what meanings are

propagated in specific sets of texts, but more importantly on the ways in which these texts

embody, transmit, produce and materialize social relations of power. Whereas

posthermeneutic, material critics cited in the previous chapter criticized the focus on the

purely linguistic aspects of texts, Foucault’s definition of discourse allows for a

reintegration of the material within the space of language. For Foucault, discourse is the

space where “power and knowledge are joined together” (1980a, p. 100). By power,

Foucault means a “productive network” (1980b, p. 119) through which are defined

the relations that establish specific roles for, and relationships between, subjects.

Analyzing specific sets of texts to examine their discourse means defining discourse as a

set of practices that define subjects and “form the objects of which they speak”

(1993, p. 48). Discourse produces and defines objects of knowledge, the legitimate

methodology through which one can talk meaningfully of objects and construct

representations, and the subjects who can legitimately transmit discourse. The point of

discourse analysis, following Foucault’s framework, consists of studying “not only the

expressive value and formal transformation of discourse, but its mode of existence”, and

“the manner in which discourse is articulated on the basis of social relationships” (1977,

p. 137). As Kittler (1990) puts it, Foucault aimed to analyze discourse “from the outside

and not merely from a position of interpretive immanence” and defined discourse

analysis “as a reconstruction of the rules by which the actual discourses of an epoch

would have to have been organized in order not to be excluded as was, for example,

insanity” (p. 369). Discourse is material in that it creates social relations of power.

Through discourse analysis, it becomes possible to examine the ways in which

hermeneutic frameworks come into being.


The articulation between technologies of communication and discourse is

explored in Kittler’s Discourse Networks (1990) and Gramophone, Film, Typewriter

(1997). Kittler defines discourse networks as “networks of technologies and institutions

that allow a given culture to select, store and process relevant data” (1990, p. 369). Kittler

expands Foucault’s concern with the processes of establishing social relations through

language, but reintroduces media technologies as key components in the establishment of

discursive formations. His critique is that Foucault fails to take into account the

specificities of media systems as modes through which information, knowledge, values

and identities are mediated and therefore shaped. Kittler’s approach does not separate the

medium from the message. His framework offers a complement to the question of

interpretation in that it allows for an extension of the effects of media to the formation of

subjectivities, and it offers ways to examine how specific conditions of meaning

production are created through the assemblage of communication technologies, cultural

processes and institutions (Gane, 2005, p. 29). Kittler’s theoretical richness lies in his

detailed analyses of discourse networks as complex formations. Technical analysis,

discourse analysis, historical consideration and textual analysis are all combined to

examine the ways in which technological possibilities, subjectivities and specific

meanings are circulated through networks of discourses. Wellbery uses the term

“mediality” to describe Kittler’s approach. Mediality is “the general condition within

which, under specific circumstances, something like poetry or literature can take place”

(1990, p. xiii).

Kittler’s approach to the role played by technologies of communication in


defining discourse and participating in the construction of media systems is of

methodological importance in that it reconciles the content being transmitted and

technologies of communication by conceptualizing them as part of the same network.

Furthermore, Kittler demonstrates in Discourse Networks (1990) and Gramophone, Film,

Typewriter (1997) that the analysis of texts can take place alongside social, political and

technocultural analyses. Throughout numerous analyses of specific texts produced

through different media, Kittler expands the concept of mediality through a detailed

analysis of the traces of specific media present in the texts being analyzed. The text, then,

becomes a valuable tool for defining the characteristics of a medium - characteristics that

are not only aesthetic or cultural, but also experiential. The analysis of a discourse

network, including the texts produced by that network, allows for a critical reflection on

the genealogies of media systems.

Kittler’s approach can be seen as a happy medium that reintroduces the question

of meaning, and particularly of meaning formation, in the analysis of the technocultural

deployment of media systems. In particular, Kittler's analyses extend Foucault's concerns

with the production and circulation of specific regimes of power, knowledge and

subjectivity by arguing for a greater attention to the processes of information storage,

transmission and manipulation that create new subject positions, new power dynamics

and new hermeneutic horizons. Furthermore, Kittler's mix of technical, archival and

textual analyses provides a kind of multi-methodological framework allowing for a

recognition of the multiple imbrications and articulations between media and culture.

Kittler's approach is thus extremely useful, but it falls short of offering a satisfying


analysis of new media. As argued in the previous chapter, Kittler's essentialist move

towards a reduction of new media as pure electrical signals and his erasure of the human

within these new communication channels tends to ignore the complexity of technical

layers on which new media, and the World Wide Web, are built.

Manifestations of the media can thus be found in the content being transmitted. In

that sense, the point is not to reject content as useless for understanding the impact of a

medium, but to focus research questions about the manifest characteristics and properties

of a medium onto texts that have usually been analyzed within a hermeneutic framework.

Gumbrecht’s approach is theoretically similar to that of Kittler, in that it aims to define a

posthermeneutic analysis of texts that “would be complementary to interpretation” (2003,

p. 2). As Gumbrecht (2003) recalls:

Our main fascination came from the question of how different media - different materialities - of communication would affect the meaning that they carried. We no longer believed that a meaning complex could be kept separated from its mediality, that is, from the difference of appearing on a printed page, or a computer screen, or in a voice message. (p. 11)

How meaning can emerge is, however, only one of the questions that needs to be asked

by a posthermeneutic framework. For Gumbrecht, the material aspect of any

communication is that it produces presence, that it produces “effects of tangibility”

(2003, p. 16). It is not simply a question of analyzing meaning-effects anymore, but one

of analyzing the oscillation of meaning effects and presence effects (2003, p. 2).

Gumbrecht’s argument that media produce specific presences is important on several

counts, and particularly as another way of seeing the inseparability of content and

medium. Gumbrecht defines production of presence in spatial terms - presence is what is


before our eyes. There are several presences that are produced through acts of

communication. One effect of presence is what McLuhan (1995) and Hansen (2000)

would describe as the physical and psychological impact of media at the experiential

level. Another type of presence could be the production of subjects and subjectivities as

defined by Foucault, and by extension, social, economic and political relations. Finally,

the production of presence can also be taken in a self-reflective manner as the presence of

the medium itself. The production of presence understood in that sense helps refocus the

concept of mediality as the feedback loop between text and technology through which

specific characteristics of the medium are called forth.

2. Reconsidering Linguistics

There is a need for a reconsideration of media as producers of meanings. The

question is not about examining the meanings of media as objects regardless of the

representation they produce. On the contrary, it becomes necessary to develop another

way of analyzing the representations, or what comes to be called content. Media are

primarily focused on the production and transmission of signs. It is this specific role of media, as part of specific economies of meanings and significations transmitted through representations, that needs to be further analyzed. In so doing, the purpose of this

study is not to undertake an actor-network analysis of the two case studies by examining

the different assemblages and networks within which specific codes and languages are

deployed. Rather, the goal of this study is to examine the ways in which specific codes

and languages participate in the production of specific sign-systems. Thus, the question

that drives the two case studies is about what a sign is in different media contexts – how


it is presented and how users are supposed to interpret and use it. In so doing, the goal of

this study is to look at the deployment of regimes of signs that are based on

technocultural processes. That is, I am interested in looking at how processes of

signification are created through specific technocultural environments, and what their

effects are in this technocultural and discursive context.

Developing a methodological framework to answer this question demands a

critique of mainstream linguistic analysis as it has been developed in communication and

cultural studies. Without doubt, the most popular linguistic theory stems from Saussure's

Cours de linguistique générale, which established linguistics as a discipline and a

science. The most popular element of Saussure's work is the analysis of signs as the

elements through which language can exist. Saussure presents the sign as being made up of a concept (the signified) and a sound-image (the signifier). The sign is thus that which

bridges a universal (the concept) and a specific (the word), a meaning and the sound-

image that comes to be associated with it (Burns, 2001, p. 8). Furthermore, as Samuel

Weber (1976) recalls in his presentation of Saussure's linguistic theory, Saussure was

interested in studying la langue, that is, the homogeneous system of rules within which

signs can be deployed. As Weber explains, the move to study la langue (the system of

language) rather than le langage (language in general) or la parole (speech) represents a

move towards establishing the legitimacy of a scientific approach to language (1976, p.

915). La langue, then, is homogeneous; it represents the social aspect of speech in that it is

created through collective consent, and it is concretized as a system of signs (1976, p.


916).

At the theoretical level, the important aspect of Saussure's linguistic theory lies in

the move towards establishing the independence of linguistic processes from other

processes. At a general level, Weber usefully underlines that such a move is part of a

structuralist “conviction that the laws which govern the functioning of a sign system are

independent both of the individual subjects participating in it and of the specific material

embodiment of the sign” (1976, p. 917). Weber traces the implications of this structuralist

move for the actual theory of the sign that Saussure develops. First of all, Saussure moves

away from the “representation-denominational conception of language” (1976, p. 920).

Seeing language as a process of representation implies putting more importance on the

signified (that which is being represented) than on the signifier. The signifier thus exists

as a means to refer to a reality, concept or object that is outside of language. Thus, “the

signified, which is being represented, enables us to delimit the signifier, which represents

it. Meaning is ontologically and linguistically prior to the linguistic entity, which it

'authorizes'” (1976, p. 920). Saussure's conception of language throughout the Cours de

linguistique générale progressively departs from the model of language as representation

to a model of language as a self-referential, closed and autonomous system (1976, p.

925).

In order to arrive at this conclusion, Saussure introduces a distinction between the

concept of signification and that of value. As Weber explains, signification “designates

the representational, denominational, referential and semantic aspect of language” (1976,

p. 926). The value of a sign, on the other hand, is not based in representation or in


relation to something outside of language, but on the differences that exist between a sign

and other signs. Thus, “mouton” has the same signification as “sheep”, but its value is

different in that there is the word “mutton” in English, which does not have any

equivalent in French. For Saussure, the question of linguistic value points out a new

relationship between the signifier and the signified that is not covered in the framework

of language as representation. As Weber argues, Saussure's radical conclusion is that “the

identity of a sign is a function of the position of a sign with regards to other signs” (1976,

p. 920). Saussure thus reverses the assumption that meaning exists outside of language by

concluding that meaning is produced through the semiotic process itself without

references to an outside reality. Thus, “there are no preestablished ideas and nothing is

distinct before the apparition of the language system” (Saussure, in Weber, 1976, p. 922).

Saussure's theory of linguistics plays a central role in explaining the divide in

communication studies between medium and content. According to Saussure, the study

of the production and circulation of meaning can only be made through the study of

signs. Furthermore, these systems of signs are cut off from a reality out there: the referent

– the actual object designated through a sign – disappears completely, as well as the

signified, which links the object to its conceptual representation. Meaning appears

through the play of signifiers – through the relationships and differences that delineate

the meaning of signifiers. Furthermore, questions related to the materiality of the medium

(the sound of a word, for instance), are evacuated from Saussure's linguistic theory. The

linguistic value of a sign is rooted in conceptual differences, not in material ones. Finally,

Saussure's theory of linguistics is established as an autonomous, self-sufficient system.


Any questions related to the relationship between linguistics and the social are thus

ignored. While Saussure's model can be considered as the foundational model for

analyzing the production of meanings, its limits have been pointed out. In particular, as

Klaus Bruhn Jensen argues, “the problem with Saussurean semiology in communication

studies has been a tendency to give much attention to signs as such, less to society, and

hardly any to the 'life' of signs in social practices” (1995, p. 3). Saussurean linguistics

fails to focus on the social context within which signs are deployed and meanings

constructed. In some ways, discourse analysis, especially the kind of discourse analysis

stemming from the works of Foucault, can be seen as a way to correct this shortcoming.

While discourse analysis helps contextualize the production of meaning and allows for a

mapping out of its articulations with more general social phenomena, pragmatic

approaches to language have allowed for a reconnection between sign and the social by

presenting signs as not only shaped by social and cultural norms, but also as having an

impact, an effect on these norms. Recognizing that signs have a social life (Jensen, 1995)

demands an exploration of the ways in which signs exist not in absolute, conceptual

modes, as Saussurean linguistics would have it, but circulate through everyday life. The

study of signs, then, requires seeing the deployment of specific signs and representations

as instances of social action, as acts conveyed with specific purposes in specific contexts

(Jensen, 1995, p. 11). For Jensen, Peirce's pragmatic approach to the study of signs

through semeiosis offers a way of bridging questions regarding content and meaning with

the problematic of the audience as active participants and builders of meanings and signs.

However, it is not simply a question of reassessing the links between language


and the social, but to see how technologies participate in the shaping of language. Thanks

to Foucault, as Kittler argues, there is a decentering of the human through the refusal of

an instrumentalist view of language. Thus, for Kittler, so-called man is but a production

of a specific discursive situation that is undermined by the appearance of new electronic

media. Language as discourse does not simply transmit; it shapes our relationship to the

world and positions us within a specific knowledge/power system. However, language is

not the only actor in producing discursive change – it itself is influenced by material,

technological and cultural conditions. Thus, the challenge lies not simply in considering

the relationships between categories of the social, the linguistic, the cultural, and the

technological, but to examine how these aspects emerge in relation to each other.

Furthermore, the question that drives this research is not only about how specific

discourse networks on the Web shape and translate power/knowledge relationships, but

also to see how the status of language itself is affected by these technocultural contexts.

This stems from the consideration that language should not be considered as an “abstract

differential system” such as Saussure's concept of langue (Bishop and Phillips, 2006, p.

53), but as a lived, evolving system that is articulated on specific technocultural

processes. This demands a reassessment of the categories of signification. In particular,

this conception of language raises the question as to what exactly a sign is depending on

the medium and cultural contexts within which it is deployed. Radio, film, television, the

Web all use something that we would call language, but their different materialities

(sound, image, electric signals) cannot be considered to be creating the same language. In

that sense, it becomes necessary to examine how signs are formed through media


technologies. Such an analysis of language would allow for a finer exploration of the

question of discourse networks through an examination of how specific media languages

play a role in shaping relationships of power and knowledge. In so doing, one of the

central questions is about representation. Representation, in that sense, is not to be

understood as the presentation of some reality through language, but as the shaping of a

so-called reality through a specific media situation. Such a research question echoes some

of the analyses developed by renowned new media scholars such as Manovich, Bolter

and Grusin, Cubitt and Fuller. However, the present work argues that such focus on the

question of the relationship between medium, language and representation, can benefit

from a reconsideration of the problem of linguistics and of the problem of language. By

first asking what a sign is in the context of the case studies, the present study aims to examine how the assumption that language is connected to processes that are not strictly linguistic can help define the significative and discursive impact of a medium. The

remainder of this chapter argues that a robust methodological framework can be

developed from Deleuze and Guattari’s work on linguistics and glossematics.

3. Mixed Semiotics

The influence of Deleuze and Guattari on communication and cultural studies is

far-reaching and impossible to present in a few lines. As Seigworth and Wise put it: “just

pick up many of the writings by Lawrence Grossberg, Dick Hebdige, Meaghan Morris,

Stephen Muecke, Elspeth Probyn, McKenzie Wark, and others and you will find an

ongoing and active engagement with the work of Deleuze and Guattari” (2000, p. 139).

Furthermore, Deleuze and Guattari are far from being unknown in the field of new media.


Deleuze's “Postscript on the Societies of Control” (1992) has been of great influence for

understanding the new power dynamics that are made possible through new technologies

of information and communication and that mark a shift from disciplinary societies to

societies of control. Deleuze and Guattari's work on the rhizome and on concepts of

territorialization and deterritorialization have also been used for examining the impact of

new technologies and new media, from hypertext to protocol and decentralized networks

(for instance, Galloway, 2004). The influence of Deleuze and Guattari is thus far-

reaching, in that it concerns not only the formulation of a theory and practice of cultural

analysis (i.e. the relationship between practice and theory through Deleuze's notion of

concept, the call for a pragmatic approach to culture rather than an interpretative one), but

also, in the field of new media, the search for equivalencies between new technologies

and Deleuze and Guattari's concepts of control, rhizome, territorialization and

deterritorialization. There is a certain risk associated with the search for equivalences

between concepts that were developed in a pre-Internet period and new media

technologies, but this tension is useful for pushing forward the production of new

theoretical and methodological frameworks. However, Deleuze and Guattari's work on

linguistics, particularly their critique of Saussure's structural linguistics and the

development of a-signifying semiotics to understand the construction of meaning does

not seem to be widely known in the field of Internet and new media.4

Before attempting to present Deleuze and Guattari's work on semiotics, it is

4 See Configurations 10(3), 2002, a special issue on the relationship between software and the body, for examples of the use of Deleuze and Guattari’s mixed semiotics.


necessary to point out some of the key concepts in their works that are needed to

understand the novelty of their approach to semiotics. As Brian Massumi explains it in

his foreword to Thousand Plateaus (1987), the work of Deleuze and Guattari is a

rebellion against traditional, modernist Western thought and philosophy that aims to not

only critique the impact of capitalism and capitalist power relationships on subjectivity

and the human psyche and to denounce the failure of Western philosophy, state

philosophy or logos-centered thought, but also to undertake a “positive exercise” in

developing new ways of understanding and undermining these power relationships (1987,

p. xi). As such, one can find in Deleuze and Guattari's work a series of oppositional

keywords: the rhizome versus the tree as a new model for building and distributing

knowledge, the striated, hierarchized space of the state as opposed to the smooth, open-

ended nomad space that “does not respect the artificial division between the three

domains of representation, subject, concept, and being; [that] replaces restrictive analogy

with a conductivity that knows no bounds” (1987, p. xii). Thus, Deleuze and Guattari

develop a “smooth space of thought”, or a “schizoanalysis” and “pragmatics” whose goal is “the invention of concepts that do not add up to a system of belief or an architecture

of propositions that you either enter or you don't, but instead pack a potential in the way a

crowbar in a willing hand envelops an energy of prying” (1987, p. xv).

With regard to contextualizing Deleuze and Guattari's work on language, one of its main characteristics is a critique of Saussurean linguistics that is rooted in

the refusal of a hierarchized, compartmentalized approach to language. Deleuze and Guattari's critique of Saussure's linguistics is particularly developed in Anti-Oedipus (1983) and Thousand Plateaus (1987). A central aspect of their critique concerns the

tyranny of the signifier, that is, the problematic centrality of the signifier in structural

linguistics for explaining meaning formations (1983, p. 242-243). Deleuze and Guattari

attack the transcendental model developed by Saussure by arguing that meaning does not

come from some sort of transcendental idea, but rather is immanent, that is, developed

through multiple material, social and linguistic flows, conjunctures and relays. In so

doing, Deleuze and Guattari proceed by reconnecting language to other non-linguistic

processes. Furthermore, Deleuze and Guattari's work is not concerned with meaning in

the traditional sense, in that they have a pragmatic approach to language. In particular,

Deleuze and Guattari focus on the concept of order-word in the sense that “language is

the transmission of the word as order-word, not the communication of sign as information”

(1987, p. 77). As Porter and Porter explain it, the concept of “order-word” “is meant to

signify the immediate, irreducible and pragmatic relation between words and orders”

(2003, p. 139). Deleuze and Guattari's concept of order-word departs from the traditional

research question of structural linguistics in that it argues that language is not simply

about what things mean but the ways in which they order – shape, hierarchize – the world

through words. Thus, for Deleuze and Guattari, “a rule of grammar is a power marker

before it is a syntactical marker” (1987, p. 76). As Porter and Porter further argue,

Deleuze and Guattari's pragmatic approach to language, their examination of the

relationship between words and orders can be understood as being “implicated in a social

order or in forms of (...) social obligation that presuppose imperatives” and as


“performing an ordering function (by changing) the circumstances in which they are

formulated” (2003, p. 139). The first aspect of order-word refers to “the social-

institutional setting in which a communicative exchange takes place”, which defines

specific roles and actions for this communicative exchange to function (2003, p. 139). The

second aspect of the order-word is illustrated by words imperatively ordering, or creating

new circumstances (i.e. “You are free to go”). This kind of pragmatic approach to

language – of focusing on the effects of language – represents a departure from the kind

of research questions that are at the core of Saussure's structuralist linguistics. As

Guattari declares: “We're strict functionalists: what we're interested in is how something

works, functions – finding the machine. But the signifier is still stuck in the question

'What does it mean?'” (Cited in Elmer, 2003, p. 243). Deleuze and Guattari's approach to

language is thus similar to Foucault's approach to discourse as the space where power and

knowledge meet. As Wise describes it, language for Deleuze and Guattari is about “the

ability to have effects at a distance” (1997, p. 63).

Deleuze and Guattari's starting point is that a pragmatic approach is of central

importance in that “linguistics is nothing without a pragmatics (semiotic or political) to

define the effectuation of the condition of possibility of language and the usage of

linguistic elements” (1987, p. 85). In that sense, Deleuze and Guattari's project is radically

different from Saussure, as their study of language does not attempt to establish a self-

sufficient, autonomous linguistic category, but to connect language to its specific uses,

that is, to specific contexts. At the same time, their framework for undertaking such an

analysis demands a “high level of abstraction” in order to “pursue (...) unusual if not


unnatural connective syntheses, generalizable in structural terms as unrestricted and

unpoliced passages, meetings, and alliances at all levels and places” (Genosko, 1998, pp.

177-178). The image used by Deleuze and Guattari to express their strategy for analysis

is that of the abstract machine. As Wise describes it, Deleuze and Guattari's machine is

“what perceived regularities in the material are attributed to” (1997, p. 64). The abstract

machine helps map regularities without calling forth a macro-structure that

determines all phenomena. However, there is a need to acknowledge regularities: “what

we then posit is an abstraction (that does not exist in the actual) that is machinelike in its

function in that it produces regularities. We call this generally an abstract machine” (p. 64). In terms of a study of semiotics and language, the abstract machine “connects a

language to the semantic and pragmatic content of statements, to collective assemblages

of enunciation, to a whole micropolitics of the social field” (1987, p. 7). In so doing, the

main innovation in Deleuze and Guattari's approach is to present an analytical framework

for the analysis of the “conditions of possibility of language and the usage of specific

linguistic elements” anchoring language in non-linguistic processes – material, social,

technological ones (1987, p. 85).

Deleuze and Guattari argue for a multiplicity of sites and processes of meaning-

making so as to free the question of meaning from the purely linguistic domain. Deleuze

and Guattari offer a framework that is based on the reconciliation between the material

and linguistic aspects of communication. In so doing, they offer a way to develop an

analysis of language that answers the question “of how (if at all) media and materialities

of communication could have an impact on the meanings that they were carrying”


(Gumbrecht, 2004, p. 15). Finally, the semiotic analysis developed by Deleuze and

Guattari allows for a redefinition of the concept of meaning itself. Semiotics is still the

study of meaning formation and circulation. However, divorcing meaning from

Saussurean linguistics allows for departing from a strict focus on the concepts that are

associated with words (the process of signification). The kind of semiotics developed by

Deleuze and Guattari allows for a redefinition of meaning as the effects of language,

effects that are not simply linguistic but also social, cultural and psychological.

Consequently, the new semiotics that is developed by Deleuze and Guattari allows for a

“redefinition of the question of meaning and signification as not coming down from

above or emerging from the nature of things, but as resulting from the conjunction of and

friction between different semiotic systems” (1977, p. 299).5

What kind of analytical framework can be used to study “the crystallization of

power in the field of linguistics” (1996c, p. 141)? Deleuze and Guattari offer a new

linguistic framework to understand semiotic-pragmatic processes, one that is deeply

influenced by Hjelmslev's linguistic theory – glossematics. As Genosko describes it,

Hjelmslev's glossematics consists of developing an “algebra of language” to “calculate

the general system of language in relation to which particular languages would reveal

their characteristics” (Genosko, 2002, pp. 155-157). What is a sign according to

glossematics? As Hjelmslev explains it, a sign is not an object but a semiotic function

that establishes a connection between two planes: the plane of expression and the plane of

5 “Il s'agit de redéfinir la question du sens et de la signification, non comme tombant du ciel ou de la nature des choses, mais comme résultant de la conjonction de systèmes sémiotiques confrontés les uns aux autres.”


content (Hjelmslev, 1971, p. 72). There are two levels at which content and expression

can be analyzed: that of substance and that of form. Furthermore, the process of

signification, as Genosko summarizes it, involves first an “unformed amorphous mass

common to all languages called purport (matter) [that is] formed into substance”

(Genosko, 2002, p. 161). Once a substance of expression and a substance of content are

formalized, they can be further translated into a form of expression and a form of content

through the semiotic function of the sign, which establishes a link between these two

categories. The process of signification in glossematics can be represented as follows:6

Table 1: Glossematics

             Matter (purport)          Substance                  Form

Expression   Unformed amorphous        Materials available        Actual assemblage of
             mass (unknowable          for manifesting            materials used to
             until it is formed        content                    structure content
             into a substance)

Content      Unformed amorphous        Content of the human       Content of the human
             mass (unknowable          mind before any            mind in a structured
             until it is formed        structuring                form
             into a substance)         intervention

An example of the process of signification as presented through glossematics is a stop

sign on the road. The substance of content “stop” could be expressed through different

substances of expression (such as written letters, sounds, and colours). In order to

structure the concept of “stop” into a form of content that is understandable by all, a form

of expression that can be associated with it is the colour red.

6 The definition of “matter” is taken from Genosko (2002, p. 161). The definitions of expression and content are adapted from Gumbrecht (2004, p. 15).



As Deleuze and Guattari recall, a common understanding of expression and

content associates them with the Saussurean concept of signifier and signified. However,

for Deleuze and Guattari, Hjelmslev's glossematics (1983) is radically opposed to

Saussurean structuralism as it is immanent rather than transcendent, and as it allows for a

mapping of flows that goes beyond the relationships between signifier and signified.

Thus:

Far from being an overdetermination of structuralism and of its fondness for the signifier, Hjelmslev's linguistics implies the concerted destruction of the signifier, and constitutes a decoded theory of language about which one can also say – an ambiguous tribute – that it is the only linguistics adapted to the nature of both the capitalist and the schizophrenic flows: until now, the only modern – and not archaic – theory of language. (p. 243)

Hjelmslev's glossematics is one of the central components of Guattari's psychoanalytic

focus on understanding the formation of subjectivity as developed in Chaosmosis, La

révolution moléculaire and other essays (Genosko, 2002). Central to Guattari's approach

to semiotics is the notion that language has to be analyzed through an examination of

power formations (1977, p. 308). Thus, for Guattari, linguistics cannot be separated from

the study of political and social issues. One has to integrate the question of power with

the problematic of meaning-making and representation (1977, p.242). Thus, the

relationship between expression and content is not arbitrary – it is realized through

political and social structures (1977, p.241). What Saussure defined as an arbitrary

relationship between signifier and signified in the process of representation is a

manifestation of specific power forces. One of Guattari's main research questions

concerns the examination of the many levels at which content and expression are

articulated. This requires a redefinition of the categories of substance of expression and substance of content. In


particular, the category of substance of expression involves not only “semiotics and

semiology”, but also “domains that are extra-linguistic, non-human, biological,

technological, aesthetic, etc.” (1995, p. 24). The substance of content also needs to be

further developed to include not only the broad label of concepts, but also the social

values, rules and the kind of thoughts that emerge from social processes. Thus, the

process of signification intervenes through the articulation between a formalization of the

content of a social field (social values and rules) and a machinery of expression that

ultimately serves to “automatize the behaviours, interpretations, and meanings

recommended by the system” (1977, p. 307).7 The links between expression and content

are organized through social and political structures.

What is involved in the production of a homogeneous field of signification that corresponds to the social, economic and moral dimensions of a specific power structure?

From what Guattari suggests in Révolution moléculaire (1977, p. 307-308), the process of

signification relies on two types of formalizations, one of which takes place at the level of

content and the other at the level of expression. At the level of expression, the first type

of formalization is a linguistic one, in that all the possibilities of language, of expression

are reduced to specific syntaxes – the proper rules for using language. The type of

formalization that takes place at the level of content involves a recentering of power

formations to establish semiotic and pragmatic equivalencies and significations in order

7 “La signification, c'est toujours la rencontre entre la formalisation du champ social donné de système de valeurs, de systèmes de traductibilité, de règles de conduite, et d'une machine d'expression qui par elle-même n'a pas de sens, qui est, disons-nous, a-signifiante, qui automatise les conduites, les interprétations, les réponses souhaitées par le système” (1977, p. 307).


to produce signified content. Furthermore, Deleuze and Guattari see form and substance

as part of the same continuum in that they are “not really distinct”, while content and

expression are distinct and articulated so that “between them (...), there is neither a

correspondence nor a cause-effect relation nor a signified-signifier relation: there is real

distinction, reciprocal presupposition, and only isomorphy” (1987, pp. 502-503, cited in

Wise, 1997, p. 61). What happens is that an abstract semiotic machine allows for the

articulation of the linguistic machine (the proper language rules) with the structuration of

specific power formations. For Guattari, this meeting point is important as it potentially

allows for the reinforcement of a broader structure of power that goes beyond the

production of specific, contextualized significations. Who has the right and legitimacy to

articulate the linguistic machine with power formations is of crucial importance here, as

Guattari argues that it is the centralization of that articulation within a broad economic

and social machine (i.e. the state) that allows for the production of a system where the

field of signification corresponds to social, economic and moral dimensions of broad

power formations (1977, p. 308). For Guattari, then, there is no arbitrary relationship in

signification, that is, between the categories of signifier and signified. On the contrary,

the relationship between signifier and signified is a power manifestation, inasmuch as

language is not any language, but the language of a dominant class or group (1977,

p.272). Thus, the table representing the process of signification could be redesigned as

follows:


Table 2: Guattari and Glossematics

Expression:
  Substance: ensemble of expressive materials. Linguistic: signifying chain, batteries of signs, sound, image, etc. (PS, 148). Extra-linguistic domains: biological, political, social, technological, etc.
  Form: specific syntax; proper language rules.
  Linguistic machine: harnessing of expressive materials.

Content:
  Substance: social values, social rules.
  Form: signified contents: establishment of specific equivalencies and significations; legitimization of specific semiotic and pragmatic interpretations; specific rhetoric.
  Recentering, rearticulation and hierarchization of power formations.

Abstract semiotic machine: process of articulation of the linguistic machine with power formations. Production of an ordered world: homogeneity of the field of production with the social, economic and moral dimensions of power.


Guattari's presentation of the process of signification as a process where power relations

are defined and stabilized through a linguistic machine thus multiplies the sites where

power processes take place. Power formations at the level of content are crucial in terms

of determining how to properly interpret texts and the meanings they carry. At the same

time, the level of expression is also a site of power struggle in that the processes at stake

shape expressive materials into a set of rules. An example of the power struggles that can

take place at the level of expression would be the invention of new techniques of using

expressive materials. The impressionist movement in painting introduced a new way of

using the material of paint and canvas, a revolution at the level of expression that went

counter to the agreed-upon, legitimate model of expression that focused on precise

description and mirroring of the object being painted. To go back to the main topic of this

research – the semiotics of the Web – Guattari's model for understanding signifying

semiotics is useful for defining some of the roles played by codes and protocols. At the

level of expression, the harnessing of technical potential into specific codes and protocols

echoes the kind of research questions defined by ANT regarding the relationships

between human and non-human actors through processes of translation and mediation

that are far from being neutral. Guattari's analytical framework makes it possible to

reintegrate these questions within a semiotic framework. The level of expression thus

allows for a reconciliation between the concepts of technology and language. Who

defines the proper uses of technologies is the central question in the analysis of the role

played by technology at the level of expression.


The above table, however, does not mention the category of matter, which plays a

central role in Guattari’s semiotic model. The kind of processes that take place between

content and expression at the levels of substance and form are but one part of the

problem. These relationships shape the signifying process. However, Guattari also

defines an a-signifying process that involves matter, content and expression. The a-

signifying process is part of Guattari's broader reworking of Hjelmslev's glossematics.

Indeed, Guattari's innovations are not limited to a redefinition of the levels of expression

and content and an analysis of the processes through which the transition from substance

to form is established. As Genosko summarizes it: “Guattari defined signification as an

encounter between diverse semiotic systems of formalization (a-signifying and

signifying) on the planes of expression and content imposed by relations of power”

(2002, p. 161). For Guattari, the semiotic process that takes place at the level of

expression and content between substance and form relies on signifying semiologies –

semiologies which are focused principally on the production of signs, or, as Guattari calls

them, “semiotically formed substances” (1996b, p. 149). There are other processes at

stake, and those involve a redefinition of the category of matter. For Hjelmslev, matter is

defined as an amorphous mass that can only be known through its formalization as

substance. For Guattari, on the contrary, matter can manifest itself without being

transformed into a substance (Genosko, 2002, p. 166). This new understanding of matter

is crucial for Guattari's model of mixed semiotics, as it allows for an examination of

matter “in terms of unformed, unorganized material intensities” (Genosko, 2002, p. 166).

In that sense, and as suggested by the multiple translations of the original Danish “mening” as both “matter” and “purport”, and especially as the French “sens”, matter makes sense, but this

sense is not created through a process of representation – it does not stand for something

other than what it is. As Dawkins (2003) argues: “Since matter is real, it does not

presuppose form for its expression. In this respect, Guattari is not doing away with form

completely, but he is reversing its precedence over matter” (p. 156).

As Guattari explains it, matter can also be divided along the lines of expression

and content, with sens or purport as matter of expression and the continuum of material

fluxes as matter of content. It now becomes possible to study the relationships between

the five criteria of matter-substance-form and expression-content. These modes of

semiotization are presented in table 3. Guattari's (1996b, p. 149-151) classification of

modes of semiotization is as follows:

Table 3: Mixed Semiotics

Matter of expression: purport (sens).
Matter of content: continuum of material fluxes.

Modes of semiotization:
  a-semiotic encodings (operating at the level of matter, without semiotic substance)
  signifying semiologies (operating through semiotically formed substances of expression and content)
  a-signifying semiotics (connecting form and matter)


1. A-semiotic encodings: an a-semiotic encoding is non-semiotically

formed matter, that is, it is matter that “functions independently of the constitution of a

semiotic substance” (1996b, p. 149). Guattari's example is that of genetic encoding,

which is the formalization of material intensities into a code that is not an “écriture”

(1996b, p. 149), or a signifying system. As Guattari further explains, a-semiotic

encodings such as DNA are composed of a biological level, and an informational one.

The biological - the material intensities - are encoded into an informational code that

thus acts as a support of expression for these material intensities. As Genosko (2002,

p. 167) further explains, genetic encodings can be transposed into signifying

substances and in that sense can be semiotically captured and disciplined, but they are

not in themselves formalized through semiotic substances. That is to say, DNA

encodings can be captured by different interests that can impose interpretations of genes with regard to, for instance, their desirability. The industry of genetic modification, in that sense, imposes a discipline onto encodings that originally do not

signify anything, a discipline that is guided by specific interests and power relations.

2. Signifying semiologies: this category concerns “sign systems with

semiotically formed substances on the expression and content planes” (Genosko,

2002, p. 167). They are divided into two kinds. Symbolic semiologies involve several

types of substances. Guattari refers to gestural semiotics, semiotics of sign language

and ritual semiotics among others as examples of symbolic semiologies, as their

substance of expression is not linguistic but gestural. Semiologies of signification, on

the contrary, rely on one unique substance of expression – a linguistic one, be it made


of sound, images, or other substances. Guattari defines this category as the

“dictatorship of the signifier” (1996b, p. 150), in that the articulations that are

established within semiologies of signification establish processes of semiotization

that rely on representation that cuts signs off from the real and from material

intensities, thus creating a “signifying ghetto” where a “despotic signifier (...) treats

everything that appears in order to represent it through a process of repetition which

refers only to itself” (Guattari, in Genosko, 2002, p. 168). Semiologies of signification

involve the processes defined in table 2.

3. A-signifying semiotics. As Guattari describes them, a-signifying

semiotics involve “a-signifying machines (that) continue to rely on signifying

semiotics, but they only use them as a tool, as an instrument of semiotic

deterritorialization allowing semiotic fluxes to establish new connections with the

most deterritorialized material fluxes” (1996b, p. 150). That is, a-signifying machines

circulate across the planes of expression and content and create relationships between matter,

substance and form that are not primarily signifying. Guattari gives the example of

“physico-chemical theory”, arguing that its goal is not to offer “a mental

representation of the atom or electricity, even though, in order to express itself, it must

continue to have recourse to a language of significations and icons.” This kind of

abstract machine comes to create sign machines to support the setting up of “an

assemblage of experimental complexes and theoretical complexes” (1996b, p. 151).

As Genosko further explains, a-signifying semiotics establishes connections at the

levels of form and matter (material intensities) that “escape the overcoding functions of signifying semiological systems” (2002, p. 169) and are “unmediated by

representation.” In that sense, a-signifying semiotics “produce another organization of

reality” (Seem and Guattari, 1974, p. 39). As Guattari describes it:

The machines of mathematical signs, musical machines, or revolutionary collective set-ups might in appearance have a meaning. But what counts, in the theory of physics for example, is not the meaning to be found at a given link in the chain, but rather the fact that there is what Charles Sanders Peirce calls an effect of diagrammatization. Signs work and produce within what is Real, at the same levels as the Real, with the same justification as the Real. (...) In other words, what is real and what is sign short-circuits systems of representation, systems of mediation, let's call the systems of referential thought, whether they be called “images”, “icons”, “signified,” or “mental representations”, there is little difference. (1974, p. 40)

Thus, a-signifying semiotics requires the deployment of a system of signs that is used to

harness material intensities to shape what comes to be called reality.

Different modes of semiotization are not mutually exclusive. For Guattari, there

are mixed semiotics, that is, semiotics which participate in both a-signifying semiotics

and signifying semiologies (1974, p. 40). That is, it is not so much that a given process

corresponds to one or the other mode of semiotization, but rather that a process involving

the formalization of material intensities and the deployment of signifying machines can

be examined through these different perspectives. In La révolution moléculaire (1977, p.

294-295), Guattari gives an analysis of money according to the three kinds of encoding.

The example of money as a phenomenon that involves multiple articulations between

material intensities and signifying machines is useful for illustrating the novelty of

Guattari's approach:


1. A monetary system involves a-semiotic encodings through the

mobilization of “matters of expression that possess their own modes of encoding”8,

such as demographic fluxes, reserves of raw materials and geographic constraints (p.

294).

2. In terms of signifying semiologies, a monetary system deploys symbolic

semiologies in that it “functions as an imaginary means of subjection”9 (1977, p. 295).

Being rich, for instance, can be expressed through non-linguistic substances of

expression that act at the level of perception – specific clothing and behaviours that

differentiate between the haves and have-nots. These substances of expression are

linked to specific formalized content – they come to denote prestige and social status.

Money is an imaginary means of subjection in that the symbolic semiologies that

come to be linked with it codify relations of power.

3. As encompassing semiologies of signification, money “interacts with

linguistic signifying encodings, for instance through a system of laws and regulations”

(1977, p. 295)10. A monetary system deploys machines of signification that impose

specific interpretations of money. It is not only that, for instance, state regulations

impose a definition of who is rich and who is poor, but also that they literally define

what money is worth. A five-dollar bill is only worth five dollars because an institutional machine has engraved that specific meaning onto a piece of paper.

8 “Elle (l'économie monétaire) met en jeu des matières d'expression qui ont leur propre mode d'encodage” (294).
9 “L'argent fonctionne comme un moyen d'asservissement imaginaire” (295).
10 “L'économie monétaire interagit constamment avec les encodages signifiants du langage, notamment à travers le système des lois et des réglementations” (295).


4. When it is deployed as an a-signifying machine, money is not a means for

payment anymore, but a means for credit and financing (1977, p. 295).11 At a broad

level, the a-signifying money machine allows for the shaping of specific lifestyles that

are dictated by different institutional actors acting for state and market interests, for

instance. Money as an a-signifying machine harnesses material intensities in the sense

that it shapes a social and economic landscape. It is not that such a process is

meaningless, but that signifying machines support the connections between material

intensities and social and economic meanings and create a new reality.

There are several levels at which power relations are deployed within a mixed

semiotics. The first level of power relationships takes place at the level of signifying

semiologies, and was explained above. More importantly for Guattari, the “authority” or

dominant system also makes use of a-signifying semiotics in order to function. Science

and monetary economy, for instance, as a-signifying semiotics are “alone capable of

putting to the use of the system of Power, the metabolism of signs, within the economy of

material flows” (1974, p. 40).

Guattari's mixed semiotics allows for the examination of the abstract machine that

shapes the “actualization of the diagrammatic conjunctions between sign systems and

systems of material intensities” (1977, p. 261).12 The image of the abstract machine as a

diagram is central in Deleuze and Guattari's thought, as the diagram is not only a map of

11 “L'inscription monétaire fonctionne, en partie sur le mode d'une machine sémiotique a-signifiante, lorsqu'elle est utilisée non plus comme moyen de paiement, mais comme moyen de crédit et de financement” (295).
12 “Ce machinisme abstrait 'précède', en quelque sorte, l'actualisation des conjonctions diagrammatiques entre les systèmes de signes et les systèmes d'intensités matérielles” (261).


power relations, “a cartography that is coextensive with the whole social field”, but more

importantly, it is “an abstract machine (...) defined by its informal functions and matter

and in terms of form makes no distinction between content and expression, a discursive

formation and a non-discursive formation” (Deleuze, 1988, p. 34). By examining how

different semiotic machines function, Guattari's work aims towards a critique of power

that is also based on “the pivotal point between semiotic representation and the

pragmatics of 'existentialization'”, to quote one of Guattari’s comments on the influence

of Foucault (1996a, p. 181). By recasting linguistic phenomena through a framework

allowing for an analysis of their conjunctions and articulations with non-linguistic

processes, Guattari's model of mixed semiotics reconciles questions regarding content

and questions regarding media. Guattari's model thus allows for a technocultural

framework to bridge questions linked with the issue of representation and material

analyses that express a dissatisfaction with the central role played by language in cultural studies (Hansen, 2000; Kitzmann, 2004). While Kitzmann is right in arguing that

“language is not the only medium for cultural analysis, and technology does more than

just influence modes of representation” (2004, p. 4), this should not necessarily lead to

the abandoning of linguistic modes of analysis. The mixed semiotics model makes it

possible to analyze technologies of communication not only in terms of the content they

produce, but also in terms of their shaping of the real through the mobilization of actors

and machinic processes. However, the mixed semiotics model, as it is developed by

Guattari, is not particularly adapted to the study of communication technologies and new



media. Herein lies the theoretical and methodological challenge: adapting the model of

mixed semiotics to analyze the relationship between materialities of communication and

processes of signification in some specific case studies of the World Wide Web.

4. Mixed Semiotics and the Web

Guattari's analysis of semiotic encodings was primarily developed within a

specific psychoanalytic framework focused on critiquing the limits of traditional

structuralist analyses and on shaping a new form of analysis – schizoanalysis – that could

unearth new forms of resistance, new subjectivities that would resist the territorializing

systems put in place by dominant power forces. However, Guattari seems to make few

references to the media as such, except to point out that they can be analyzed through

mixed semiotics. Cinema and television, for instance, “put all sorts of materials of

expression into play, independently of a production of meaning”, with the overlapping of

“the semiotic of the image, a semiology of speech, a semiology of sounds, of noises,

semiotics of corporal expression and then, on another side, these mixed semiotics are also

signifying semiologies” (1974, p. 40). Furthermore, “technological machines of

information and communication operate at the heart of human subjectivity, not only

within its memory and intelligence, but within its sensibility, affects and unconscious

fantasm” (1995, p. 4). As such, media operate at different a-semiotic, signifying and a-

signifying levels, and their effects on the shaping of subjectivities are not only at the level

of the production of signification, but also at the level of harnessing the formation of

subjectivities through the flow of “diverse components of subjectivation” (1995, p. 16).

Thus, watching television not only means being caught up in the signifying flows of “the


narrative content of the program”, but also the experience of “a perceptual fascination

provoked by the screen's luminous animation which borders on the hypnotic” (1995, p.

16). The flows of subjectivation expressed through television thus involve both material

intensities (the animations on the screen) and signifying semiologies (narrative content).

How can Guattari's framework be used to analyze the semiotics of the World

Wide Web? Guattari's model of mixed semiotics is useful for avoiding the divide

between content and medium and for further analyzing the Web as a technocultural

entity. The examination of a-semiotic, signifying and a-signifying processes allows for

the mapping of the articulation of technologies, signifying spaces and cultural processes

and as such makes it possible to analyze in detail the power formations expressed through

a medium that give rise to specific organizations of reality - specific modes of

existentialization of cultural practices, relations of power, subjectivities and identities.

The mixed semiotics model is useful for furthering the problematic of the layer approach

to the Web that was presented in the first chapter by making it possible to examine how

technical components and cultural processes give rise to specific signifying and a-

signifying processes. By allowing for an examination of the elements constituting the

interface as a semiotic space of interaction, the mixed semiotics model allows for a multi-

faceted analysis of the different technocultural levels that create the experience of the

Web.

At the a-semiotic level, the question of materialities that are encoded as non-

signifying information can be used to analyze specific forms of data processing on the

Web. This, however, raises one central issue. It is necessary to acknowledge that encoded


data cannot be equated with a-semiotic encodings in a simple manner. As Guattari points

out, the transmission of information through different channels, such as the transmission

of a visual signal via cable that is then visually reconstituted on the television screen, is

not an a-semiotic encoding (1977, p. 253). Signifying semiologies are involved in the

process, which is one of translation from one mode of expression to another. What we

understand as digitization, then, is not a form of a-semiotic encoding. Guattari describes

the a-semiotic process as one through which material intensities are encoded as

information. Guattari further adds that a-semiotic encodings cannot directly be transposed

within another encoding process. Within the framework for this research, I suggest that

these characteristics - material intensities transformed into a specific informational code

that is not directly transformable into a signifying system - offer a new way of looking at

the informational dynamics of the Web. Indeed, it is important to realize that the Web is

not simply a representational space, but functions through the circulation of information

that is captured and reshaped by signifying and a-signifying systems. While there are

physical materials involved in the shaping of the Web, such as hardware and electric

signals, there are also the informational fluxes of content and users that represent a

category of a-semiotic encodings worth studying, especially in their relation with

adaptive software. Informational fluxes are not simply data circulating through computer

networks, but processes put in place to measure the movements of users and information

as they circulate on the Web. The movements of users and information are a-semiotic in

the sense that the processes of tracking and measuring these movements do not directly

lead to signification, or meaning. Rather, as will be shown in the case studies,


these processes are captured within specific signifying and a-signifying power

formations. These movements are the very materials through which dynamic software,

and software that supports content production, can be deployed thanks to processes of

interpretation of those material intensities.
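The distinction drawn here between a-semiotic capture and its later signifying interpretation can be sketched in a few lines of code. The following Python fragment is purely illustrative: the events, the threshold, and the labels are invented for the purpose of the example, and do not describe any actual Web tracking system.

```python
# Illustrative sketch: user movements recorded as raw, non-signifying counts,
# and a separate interpretive layer that captures those counts within a
# signifying category. All names and values here are hypothetical.

from collections import Counter

# A-semiotic level: raw clickstream events, recorded without any meaning.
raw_events = [
    ("user1", "/page-a"), ("user2", "/page-a"),
    ("user1", "/page-b"), ("user3", "/page-a"),
]

# The tracking process merely counts movements; the counts signify nothing yet.
visit_counts = Counter(url for _, url in raw_events)

# Signifying capture: an arbitrary, interest-driven threshold turns raw
# intensities into a culturally readable label.
def interpret(url, counts, threshold=3):
    return "popular" if counts[url] >= threshold else "marginal"

print(visit_counts["/page-a"])             # raw intensity: 3
print(interpret("/page-a", visit_counts))  # signifying capture: "popular"
```

The point of the sketch is that the counting and the labeling are distinct operations: the same raw counts could be captured by a different rule serving different interests.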

With regards to the signifying level, the mixed semiotics model offers a way of

mapping processes of transcoding (Manovich, 2001) as the translation of a message

across different modes of expression, and from computer code to cultural signs. This is

central to understanding how cultural meanings are translated and mediated onto Web

interfaces through their reconfiguration within different signifying systems. Combined

with an actor-network approach, the mixed semiotics framework allows for a mapping of

the agency of different signifying actors - in particular software and users - as they are

articulated on the levels of content and expression. Guattari’s mixed semiotics framework

is useful for examining the articulation between different substances and forms of content

and expression. This is not limited to analyzing the ways in which different programming

languages come to be formalized at the level of expression, or the ways in which

preferred readings and textual subject positions are deployed at the level of content; it also extends to examining how the articulation between expression and content gives rise to

specific cultural perceptions of the signs that make up the Web interface. As such, the

mixed semiotics framework allows for further examination of the knowledge processes

present on the Web - the ways in which users’ understanding of content is shaped through

the definition of specific technocultural modes of perception. As will be made clearer in

the case studies, the question of the cultural perception of signs highlights the need to


examine the specific values attributed to signs. The construction of these signifying

values (social distinction, cultural attributes) results from the articulation of technical and

cultural processes at the levels of content and expression.
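The notion of transcoding as the translation of the same content across different modes of expression can also be sketched schematically. The Python fragment below is an invented illustration: the temperature example, the class name and the labels are hypothetical, not drawn from Manovich or from any actual Web implementation.

```python
# Illustrative sketch: one piece of content translated across different
# substances of expression, from machine encoding to a culturally readable
# sign on the interface. All names here are hypothetical.

temperature = -5  # the "content" to be expressed

# Mode of expression 1: machine-level encoding (bytes on the wire).
wire_form = str(temperature).encode("utf-8")

# Mode of expression 2: markup, a syntax legible to the browser.
html_form = f'<span class="cold">{temperature}&deg;C</span>'

# Mode of expression 3: a cultural sign, produced by a convention-bound rule
# linking a form of expression (colour coding) to a form of content (cold).
cultural_sign = "blue/cold warning" if temperature < 0 else "neutral display"

print(wire_form)      # b'-5'
print(html_form)      # <span class="cold">-5&deg;C</span>
print(cultural_sign)  # blue/cold warning
```

Each step preserves the same underlying datum while reconfiguring it within a different signifying system, which is the sense of transcoding invoked above.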

The a-signifying level allows for an examination of power formations on the Web

that make use of the data gathered at the a-semiotic level and of the regimes of

signification present on Web interfaces so as to produce specific modes of

existentialization. The organization of reality through a-signifying processes thus makes

it possible to see how the technologies of the Web are articulated within specific contexts

to define specific modes of communication, cultural roles and subjectivities. For instance,

an a-signifying analysis can be used to answer questions related to the subjectivities that

are created on Web interfaces and the processes of actualization and existentialization of

Web users within the specific technical, commercial, political and cultural constraints of

a web space. The analysis of a-signifying processes also allows for an exploration of the

articulations that allow for the definition of specific technocultural formats that actualize commercial, cultural and political interests and power formations.

The purpose of this research is to examine the role played by Web codes,

languages and protocols in allowing for the deployment of a-semiotic, signifying and a-

signifying processes on the Web. Thus, processes, programs and protocols devoted to the

question of transmission or hardware will be ignored. The focus of the research is on the

software that supports content production by allowing for the shaping of data into

culturally readable information. The mixed semiotics model can be used to further

understand the informational dynamics of the Web, that is, the ways in which content is

99

embedded within specific informational modes that articulate themselves onto power

formations. The goal of this research is to show how the mixed semiotics model can be

used to enrich current research questions in the field of Internet studies. In particular, the

broader research concerns expressed by Rogers’ (2004) informational politics model

can benefit from a mixed semiotics approach. The mixed semiotics framework, by

allowing for the mapping of technical and cultural processes that shape a-semiotic,

signifying and a-signifying encodings, makes it possible to identify the processes that

make use of the front end and the back end in order to actualize specific perceptions and

uses of the Web.

In that sense, the mixed semiotics model allows for an extension of actor-network

theory to questions related to media and semiotics at the signifying and a-signifying

levels. Indeed, the two methodologies are complementary: Latour, for instance, defines

the concept of network as similar to the concept of the rhizome. At the same time, the

concept of the machine allows for an understanding of regularities that produce

homogeneity through the stabilization of power relations – something that, as Wise

recalls, is missing in ANT (1997, p. 70). Furthermore, the diagrammatic processes

through which material intensities are harnessed and shape realities through a-signifying

machines allow for a deeper understanding of the effects of media as that question has

been framed by medium theory. This includes not only the physiological effects of

media, but also the ways in which the conjuncture of different technical components

allows for the shaping of new sensitivities and affects. Furthermore, the examination of a-

signifying processes allows for the mapping of power relations as they capture the


material intensities present in media and reshape them into dominant models of

communication, such as commercial television, radio as a one-way mass communication

system, and the Internet as a data-minable source of large amounts of information.

Finally, the mixed semiotics model creates a robust framework for the analysis of

questions related to discourse and discourse networks: what are the characteristics of

subjects and objects as they are mobilized through flows of signifying semiologies and a-

signifying semiotics? How are users defined and created through the articulation between

cultural norms and technical artifacts (Chesher, 2003)?

5. Introducing the Case Studies

Examining the role played by technology in creating conditions for meaning

production on the Web is a task that is too broad for the purpose of this research. As a

way of testing theoretical frameworks such as Guattari’s mixed semiotics, it is necessary

to proceed by focusing on specific case studies. The approach to the case studies of this

research proceeds from an “instrumental” (Stake, 2005, p. 437) perspective, in order to

provide the grounds for more in-depth theorizing about the production of discursive

machines on the Web. That is, there is no pre-defined theory as to the characteristics of

the World Wide Web as a medium that will be proven through the case studies. Rather,

the case studies, through testing this new analytical framework, will serve to build

theories for future research. Case studies have traditionally been used to analyze a

specific event through an examination of “the interplay of all variables in order to provide


as complete an understanding of an event or situation as possible.”13 As such, case studies

strive to be holistic. What is different about the case studies in this research is that they are

focused primarily on technological actors, not human ones.

In terms of methodologies, the approach to the case studies will follow Stake’s

argument (2005) that “a case study is not a methodological choice, but a choice of what

is to be studied. By whatever method, we choose to study that case” (p. 435). The choice

of having multiple case studies to analyze the role played by web representational

technologies in the development of regimes of signs on the Web involves the use of

several methodologies. The mixed semiotics model provides a framework within which

research questions stemming from various methodologies such as ANT and Foucauldian

discourse analysis, and that focus on the relationships between technology and language, can

be examined:

1. The shaping of the agencies of software within specific assemblages of human

and non-human actors creating the conditions for the production and circulation of

meaning.

2. The role played by software in the processes of formalization to create specific

regimes of signs. It is not simply a question of studying the rules of communication in

specific web environments, but more importantly of tracing how specific rules emerge

from technocultural potentialities.

3. The discursive and material relationships suggested through the deployment of

regimes of signs, among which are the delimitation of the agency of users, and the ways

13 http://writing.colostate.edu/guides/research/casestudy/com2a1.cfm


in which content is supposed to be approached, not only at an interpretational level, but

also at the level of affect.

4. The ways in which these regimes of signification delineate the possibilities

offered by the medium, that is, the communicational and cultural characteristics of

representation on the Web.

5. The ways processes of signification circulating through mixed semiotics

processes give way to specific modes of existentialization of power relations and

subjectivities.

Whereas the first case study examines the relationship between the interface and

the production of consumer subjectivities through adaptive technologies on amazon.com,

the second case study examines how techno-discursive rules are rearticulated with

regards to the use of the MediaWiki software package by Wikipedia and other Wiki

websites.

Case study 1: Adaptive interfaces and the production of subjectivities - the case of

Amazon

The first case study examines the strategies put in place to represent consumers

and commercial goods through the production of adaptive and automated hyperlinked

and hypermediated environments. Founded in 1994, the Amazon website

(www.amazon.com) demonstrates the ways in which technical tools, which automatically

process users’ surfing and reading preferences, aim to create a qualitative environment

through quantitative, statistical analysis. The automatization and quantification of the

traditionally qualitative process of recommending books and other cultural products


highlights the interplay between different technocultural layers. At the social level, the

experience is both commercial (buying books) and cultural (as it is a search for

meaningful artifacts). The hypertextual characteristics of the website add a multi-level experience that is specific to the Web: the user can search by using trails of association that can follow a specific theme, author, Amazon’s recommendations, or other users’ recommendations. The technical layers register users’ clicks, enabling this entire cultural

experience to be increasingly customized the longer the user surfs on the website. In the

end, the user is interacting only with a set of machines processing both personal and

social behaviours so as to produce something culturally relevant. The software processes

surfing behaviour in order to define the correct cultural answer. In that sense, the

software processes users in order to represent them within the cultural space of the

website. It thus becomes necessary to analyze these different technical processes as actors

and mediators that construct objects and subjectivities by mimicking qualitative

processes. The technical layers are not simply the tools that allow for interactivity

among human actors, but become the silent actor with which human actors have to

dialogue.

Case Study 2: Mixed Semiotics and the Economies of the MediaWiki Format

While amazon.com is an instance of the commercial use of dynamic content

production techniques on the Web, MediaWiki (initially released in 2002), and Wikipedia

(founded in 2001) as its most popular embodiment, stand as symbols of a non-

commercial model of collaborative knowledge creation. While the Amazon.com case

study focuses on the circulation of the book as a cultural object as a starting point of


analysis, the MediaWiki case study explores the circulation of a technocultural format:

the Wiki format. The Wikipedia model is not only cultural, but also technical as

collaborative knowledge production relies on a suite of software tools - the wiki

architecture - that enable these new discursive practices. At the same time, the Wikipedia

model relies on the cultural shaping of technologies through active intervention by human

actors in order to define the proper uses of technological tools. The mutual shaping

of technological capacities and cultural ideals and practices puts into question any model

that would attempt to explain the Wikipedia technoculture as the simple transposability of

culture into technology. The Wikipedia model is the result of a set of articulations

between technical and cultural processes, and the case study examines how this model is

captured, modified and challenged by other websites using the same wiki architecture -

MediaWiki - as Wikipedia. In particular, the case study examines how legal and technical

processes capitalize on user-produced content as a source of revenue, thus revealing how

technical and commercial processes on the Web appropriate discursive practices.


Chapter 3

Cultural Objects and Software-Assisted Meaning Creation - The Case of Books on Amazon.com

1. Amazon.com and Mixed Semiotics

Amazon.com is often referred to as one of the most important success stories of

the Web. As a pioneer in e-commerce, Amazon has managed to survive the dot-com crash of the late 1990s and is ranked as one of the top 50 most visited sites on the Web (www.alexa.com). The reasons for its success are multiple, from the size of its catalogue

to its lower prices and fast delivery system. Yet, the reasons for the success of

amazon.com are not simply linked to its commercial infrastructure. What distinguishes

the online experience of amazon.com, in comparison to other online bookstores such as

Barnes & Noble in the United States or Chapters-Indigo in Canada, is that it is also a

unique cultural space where users are offered ways to make sense of the many books and

other cultural items that are presented to them. Amazon.com articulates the cultural and

the commercial as the experience of surfing on the website is one of exploring the

meanings of books so as to select the ones that are most appropriate to one’s interests.

Indeed, the experience of searching on the Amazon website cannot be compared with the

experience of using a search engine such as Google, because the core of the experience

on amazon.com is one of browsing. That is, while it is possible to search for specific

titles on amazon.com, the main experience is one of exploring, of broadening one’s

horizon of cultural expectations rather than narrowing it down to a limited selection.

Furthermore, the uniqueness of the amazon.com model is that this process of finding


meanings is not done by users only, but requires that users interact with recommendation software. The more users interact with the recommendation software on the amazon website, the more the software can respond to users with customized and

tailored suggestions. In so doing, the recommendation software sends back not only

meanings to users, but also, through its specific modes of translating information about

users into cultural meanings, shapes subjectivities and consumer identities.

The circulation of meanings on amazon needs to be acknowledged through the

analysis of the articulations and exchanges between users and software on the website.

The networks of users and software need to be further described by taking into account

the interactions between users and software, and the ways in which the software can be

used by users at the same time that it shapes specific user agencies that are unique to

amazon.com. It is of particular interest to examine how these articulations and exchanges

translate, in Latour’s sense of the word, the cultural search for meanings into a

commercial incentive. The goal of the present chapter is to analyze the actor-networks on

amazon.com involved in the production of meanings at the interface level through

Guattari’s mixed semiotics framework.

In terms of looking at the a-semiotic, signifying and a-signifying processes of

content production on amazon.com, there are three central articulations between users

and software that can be identified. At the a-semiotic level, the information gathered

about books and users constitutes the basis for a-semiotic encodings. A-semiotic

encodings concern the processes for gathering, storing and formalizing data. In that

sense, tools used to gather data, such as cookies (Figure 3), are sites of analysis, along with other processes for transforming data into usable information as they are defined through the amazon.com architecture.

Figure 3: Amazon.com Cookies - Screen Capture of Mozilla Firefox Cookie Window
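The mechanics of such a cookie-based a-semiotic encoding can be sketched in a few lines. This is an illustrative sketch only, not Amazon’s actual implementation: the cookie name, the session identifier and the profile store below are invented for the example.

```python
# Illustrative sketch: how a site ties a browser session to a stored profile
# via a cookie. All names ("session-id", the profiles dict) are hypothetical.
from http import cookies

# Server side: issue a cookie identifying the session.
c = cookies.SimpleCookie()
c["session-id"] = "104-2345678-0123456"
c["session-id"]["path"] = "/"
set_cookie_header = c.output(header="Set-Cookie:")

# On later requests the browser sends the cookie back; the server parses it
# and looks up whatever data has accumulated under that identifier.
incoming = cookies.SimpleCookie("session-id=104-2345678-0123456")
profiles = {"104-2345678-0123456": {"viewed": ["The Empire of Fashion"]}}
profile = profiles[incoming["session-id"].value]
```

The point of the sketch is that the cookie itself carries no meaning: it is a bare identifier whose only function is to let surfing behaviour be accumulated as data, prior to any signifying capture.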

At the signifying level, the amazon.com interface can be analyzed as resulting from a

process of capturing a-semiotic encodings within signifying semiologies, and of

articulating signifying rules and discourses with broader a-signifying power formations.

In that sense, the processes that shape the amazon.com interface are a central site of

analysis (Figure 4).


Figure 4: The Amazon.com Interface (Cookies Enabled)

The central site of analysis at the a-signifying level concerns the existentialization of

users. At that level, the “Hello, Ganaele” (Figure 5) appearing each time I log onto the

website does not simply acknowledge successful connection, but also recognizes me as a

user within a specific framework. The work of the software, then, is not only to offer

meanings, but also to interpret which meanings are the most appropriate for my profile.

In that sense, the recommendation software, along with other features present on the

website, is in charge of shaping the cultural perception of users. That is, in the process of

articulation between software and human actors, the software shapes the identities and

subjectivities of users. It becomes indispensable, then, to analyze how the software,

through the existentialization of the category of the user, serves to translate economic

goals as cultural subjectivities and practices within the commercial environment of


Amazon.

Figure 5: Personalization on Amazon.com

The remediation of books in an online environment such as amazon.com

represents a fundamental change of status in that books, on the website, are textualized.

As such, the process of selling books on Amazon requires a temporary transformation of

books into Web pages that act as repositories of cultural meanings and associations about

the content of the book itself and the broader cultural context within which the book is

inscribed. It is this transition from the physical to the virtual through textualization that

allows for the deployment of multiple ways of creating meanings, and for the definition

of specific techniques for exploring the meanings associated with books. In ANT terms, it

is the translation of books from cultural objects to textualized online pages that is the

starting point for analyzing the a-semiotic, signifying and a-signifying networks present

on the amazon.com interface.

There are several modalities for analysis that need to be examined. Following

Guattari’s framework for examining the process of signification, one has to acknowledge

that the actors participating in the production and circulation of books as signs on

amazon.com shape a machinery of signification. Thus, the starting premise for the

analysis is that the amazon.com platform is an abstract semiotic machine as it allows for

the articulation of linguistic, software and cultural processes to form a coherent space of


cultural and commercial consumption. As Table 4 shows, Guattari’s framework can be

used to represent the process of signification on amazon.com. At the level of expression,

the shaping of expressive materials to formulate signifying practices specific to

amazon.com involves the creation of an interface with set elements with which users can

interact (e.g. hyperlinks, search boxes, rating boxes, review spaces). These linguistic

elements allowing for the formulation of representations are articulated with specific

extra-linguistic domains, in particular the software layers in charge of processing user

behaviour (i.e. the recommendation system and the profiling system), as well as

commercial interests. At the level of content, the production of cultural meanings that are

associated with specific book titles is dictated by discursive values that delineate the sphere

of legitimate activity for users, as well as broader values related to the formal production

and consumption of meaning. As will be explained in this chapter, these cultural

sensitivities towards meaning production and consumption can be explained through

Lipovetsky’s analysis of the different processes of signification, and the different cultural

perceptions of meanings as described in his book The Empire of Fashion.

While the representation of the process of signification is useful for understanding

the specific status of books as cultural signs on amazon.com, there are some new

categories of discourse and new power relations that need to be explored through

Guattari’s mixed semiotics framework. The goal of analyzing amazon.com is not only to

understand the translation of books into cultural signs that articulate users with specific

textual and social values, but also to understand how this process of signification reflects new discursive

relations as well as new power relations. The figure of the user of the amazon.com


website is central, as the amazon.com machine creates links between users and books as

signs at both the level of meaning production and the level of meaning circulation. That

is, there is a dynamic process whereby users create meanings and are further shaped

through the processing of their behaviours by a software machine. Furthermore, users

represent a new discursive category, as they are present in both the sphere of authorship

and that of readership. Conventional discursive categories have then to be revisited in

order to examine users and their practices as instances of articulation between cultural

and software processes. Thus, while Table 4 represents the processes at stake in

developing signifying semiologies on amazon.com, an analysis of the production of the

category of the user requires the deployment of a-semiotic encodings and a-signifying

semiologies (Table 5). The level of a-semiotic encoding represents the articulation

between user behaviour, book information and the layers of software in charge of

creating databases. That is, the a-semiotic encoding stage represents the transformation of

different kinds of information into data. These databases are then captured by signifying

semiologies and a-signifying semiotics through the processing of data by the

recommendation system and the profiling system. At the level of signifying semiologies,

data is captured by amazon.com’s recommendation system and is subsequently translated

into meaningful recommendations for a selection of book titles. The profiling system

assists the recommendation system in identifying the cultural interests of users and in

further defining meaningful suggestions. At the level of a-signifying semiotics, the

production of the cultural category of the user is made through the formulation of a whole

series of disciplinary and cultural processes designed to shape the sphere of activity of


users. In that sense, user behaviour is processed into new behaviours that are further

integrated within the amazon.com machine. This shaping of practice takes place not only

at the level of imposing rules on users, but also at the more productive level of channeling

practices and actions within a specific sphere: the production of signifying semiologies

through collaborative filtering.
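The logic of collaborative filtering invoked here can be illustrated with a minimal sketch. Amazon has publicly described its recommendation system as item-to-item collaborative filtering; the code below shows only the general principle of that family of techniques, using invented purchase data, and is not Amazon’s actual algorithm.

```python
# Minimal sketch of item-to-item collaborative filtering: the general class of
# technique behind "customers who bought X also bought Y" recommendations.
# The purchase histories are invented for illustration.
from itertools import combinations
from collections import Counter

purchases = {
    "u1": {"Empire of Fashion", "Distinction", "Harry Potter 7"},
    "u2": {"Empire of Fashion", "Distinction"},
    "u3": {"Harry Potter 7", "Harry Potter 6"},
    "u4": {"Distinction", "Empire of Fashion", "Harry Potter 6"},
}

# Count how often each pair of titles co-occurs in a purchase history.
co_occurrence = Counter()
for titles in purchases.values():
    for a, b in combinations(sorted(titles), 2):
        co_occurrence[(a, b)] += 1
        co_occurrence[(b, a)] += 1

def recommend(title, k=2):
    """Titles most often bought together with `title`."""
    scores = Counter({b: n for (a, b), n in co_occurrence.items() if a == title})
    return [t for t, _ in scores.most_common(k)]
```

What the sketch makes visible is the claim made above: the user’s qualitative search for meaning is channeled into a quantitative process, as recommendations are derived purely from counts of aggregated behaviour.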

The above framework offers a starting point for examining the technocultural

processes at stake in the production of meanings on amazon.com. This chapter builds on

Guattari’s framework by examining the production of books as cultural objects, the

production of the articulation between users and books through social personalization,

and the production of users through a process of profiling and collaborative filtering. By

way of anchoring the analysis of the articulation between signifying and a-signifying

semiologies, two examples will be used throughout this study. The first one is Gilles

Lipovetsky’s Empire of Fashion (Figure 6). Lipovetsky’s analysis of fashion serves as a

basis for analyzing how cultural interpretations of signs and objects are shaped by

specific ideals that are not only related to social status, but also to the individualist ethos

of contemporary society. The Empire of Fashion serves a dual purpose in this study, as

the articulation between signifying processes and cultural perception described by

Lipovetsky can, as will be argued in this chapter, be successfully applied to understand

the types of cultural interpretations that are created through Amazon’s recommendation

process. This book will serve as a point of comparison with the second title used - Harry

Potter and the Deathly Hallows (Figure 7). Not only is Harry Potter a different genre (a

novel) and a different category (children’s fantasy) than Empire of Fashion, it is also


subject to intense marketing. Importantly too, the book did not exist as a mass publication

when the data collection was done for this study, yet it was the number one bestseller on

amazon.com. It was therefore entirely virtual and its presence on the amazon.com

website illustrates the new cultural practices around book consumption that are developed

online.


Table 4: Amazon.com’s Signifying Semiologies

Expression

- Substance - Ensemble of expressive materials:
  - Linguistic domain: the amazon.com interface, including a range of visual and auditory signs (words, images, numbers, symbols such as stars, podcasts).
  - Extra-linguistic domains: recommendation software, commercial forces (i.e. advertising, sponsored recommendations, authoritative reviews), profiling tools.
- Form - Specific syntax and language rules:
  - Range of signifying practices available to users (i.e. write a review, rate items, tag items).
  - Range of signifying practices available to the Web Service layer (i.e. hyperlinks).

Content

- Substance - Social values and rules:
  - Amazon.com’s rules of discourse - what can be said by whom, as expressed in Amazon.com’s guidelines and in the design of the interface.
  - Broader values related to the consumption of meaning, in particular the articulation of meanings with cultural desires (Lipovetsky’s Empire of Fashion).
- Form - Signified contents:
  - Production of books as signs that channel cultural meanings.
  - Legitimization of specific semiotic and pragmatic interpretations: the recommendation system interprets the behaviours of users as cultural meanings.

The amazon.com platform is the abstract semiotic machine that articulates the linguistic machine (expression) with the ordering of discursive power formations (content).


Table 5: Mixed Semiotics on amazon.com

- Matter - Purport (sens): continuum of material fluxes:
  - User behaviour
  - Book information
- Substance - A-semiotic encoding: creation of a database through the processing of user behaviour into data.
- Form (expression and content):
  - Signifying semiologies:
    - Symbolic semiologies: definition of specific practices and repetitive gestures following the articulation between discursive, social and cultural rules.
    - Semiologies of signification: the book as cultural sign.
  - A-signifying semiotics: production of the user as a discursive category with a circumscribed range of actions and expressions that articulate themselves on discursive and non-discursive rules.

The a-semiotic encoding is captured by signifying semiologies (through the recommendation system and profiling system) to create new cultural meanings.


Figure 6: The Empire of Fashion


Figure 7: Harry Potter and the Deathly Hallows


2. The Architecture of Amazon.com: Data Processing as A-semiotic Encoding

With regard to building a cultural experience, the most important feature of the

amazon.com bookstore is not the millions of titles that its catalogue offers, but the ways

in which users are assisted by software programs in their search for books so that they are

not inundated by the volume of information available on the website. That is, the core of

the amazon.com process lies in deploying techniques so that order can emerge and

meaningful links can be established to answer users’ cultural interests through the

production of recommendations. At the technical level, this requires a specific

architecture that makes it possible to process a large amount of data - not only book titles,

price information and order processing forms (e.g. adding items to a shopping cart), but

also the different categories of meanings as expressed through texts (e.g. customer

reviews) as well as actions (e.g. click-through rate). The structure of amazon.com is what

is called a service oriented architecture composed of two levels: a back-end, offline level

that includes databases and the systems in charge of processing data to find links and

correlations, and an online service level using software components. The software

components process data from the databases to produce interfaces and services. As

Amazon CTO Werner Vogels explains it, the development of amazon.com as a service

oriented architecture was necessary to process data quickly: “the big

architectural change that Amazon went through in the past five years was to move from a

two-tier monolith to a fully distributed, decentralized services platform serving many


different applications.”14 This includes the services and applications that make up the

amazon platform, the services that create an interface with retail partners, and Amazon

Web Services, which are software components that amazon sells to its network of

affiliates and to other websites15.
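The service-oriented pattern just described - an interface produced by many small software components, each drawing on back-end data, rather than by one monolithic program - can be sketched as follows. The service functions, their names and their outputs are invented stand-ins for illustration, not Amazon’s actual services.

```python
# Sketch of a service-oriented page build: independent services each return
# one fragment of data, and a gateway aggregates them into a page description.
# Every service name and return value here is a hypothetical stand-in.

def title_service(item_id):
    return {"title": "The Empire of Fashion"}

def price_service(item_id):
    return {"price": "$24.95"}

def recommendation_service(item_id):
    return {"also_bought": ["Harry Potter and the Deathly Hallows"]}

SERVICES = [title_service, price_service, recommendation_service]

def render_page(item_id):
    """Aggregate every service's fragment into one page description."""
    page = {}
    for service in SERVICES:
        page.update(service(item_id))
    return page

page = render_page("example-item")
```

The design choice this illustrates is decentralization: each service can be changed or scaled independently, and the page a user sees is only the momentary sum of their outputs.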

The process for publishing content on the amazon website is complex. As Vogels

explains: “If you hit the Amazon.com gateway page, the application calls more than 100

services to collect data and construct the page for you.” Thus, it is not simply a question

of human or commercial actors entering comments about a book and of the technical

architecture of the website being able to publish these comments in almost real time.

Rather, the production of content on amazon, and in particular the production of

recommendations, requires several steps. First, data needs to be collected.

Information about books such as price, availability, etc. is required in order to create

Amazon web pages that can be updated and customized in almost real time. Information

about users is also necessary, and includes several aspects such as age, geographic

location, past items bought or consulted. Surfing patterns are also recorded through

surveillance devices such as cookies. In reference to Guattari’s mixed semiotics

framework, the collecting of data constitutes a first step in the formation of a-semiotic

encodings. Information stored in databases is formalized through data processing by

different services and is then used by specific applications to produce customized

14 http://acmqueue.com/modules.php?name=Content&pa=showpage&pid=403
15 http://www.acmqueue.com/modules.php?name=Content&pa=showpage&pid=388


recommendations and Web pages.16 The service applications capture a-semiotic

encodings in order to produce signifying semiologies. Information about users and the

books they have bought, for instance, serves as a basis on which to create meaningful and

culturally relevant recommendations. The service layer in charge of formalizing content

uses specifications such as WSDL (Web Services Description Language), which, using

XML (Extensible Markup Language), “allows developers to describe the ‘functional’

characteristics of a web service - what actions or functions the service performs in terms

of the messages it receives and sends” (Weerawarana, 2005). The amazon interface that

users have access to is thus the product of numerous services and applications that adapt

web pages to the preferences of users. Those services use a language (WSDL) that

describes functions, not semantics: “a WSDL document tells, in syntactic or structural

terms, what messages go in and come out of a service. It does not provide

information on what the semantics of that exchange are.” That is, while those services

give form to a Web page, they do not in themselves perform any interpretation of the

content of that page. Services and applications serve as delegates, in Latour’s words

(Latour, 1999, p. 187) that can process vast amounts of information - that is, material

intensities - through algorithmic processing. This type of processing is designed to

translate the qualitative search for meaning into quantitative processes. Results from the

data processing are transformed into representations on the amazon.com interface. Thus,

the Amazon services and applications articulate a-semiotic encodings and signifying

semiologies. As we will see later in the chapter, the service layer plays an important role

16 ibid.


in stabilizing the cultural experience of amazon.com by providing a discursive and

practical framework (through defining the types of interactions users can engage in

among themselves and with the software layer) that ensures the experiential stability

needed for the deployment of amazon.com’s meaning production machine.

Figure 8: A-semiotic and Signifying Processes on Amazon.com


3. Signifying Semiologies on Amazon.com: Shaping the Cultural Perception of

Meaning

The production of content is realized through the interactions between three

different categories of actors. The first category includes users who, for instance, write

reviews and tag and rate items. The second category of actors includes commercial

actors, for instance those using sponsored advertising and paid placements. The third

category of actors includes software, for instance programs designed to produce content

through mining databases. Those programs include amazon.com’s own recommendation

system, which is a central component of the cultural experiences created by the

amazon.com interface. To understand the machinery of meaning production and

circulation on amazon.com, it is necessary to examine how the three categories of actors

can intervene in the signifying process. In particular, one has to acknowledge the

omnipresence of software as a technical mediator of user-produced and commercial-

produced content and as an active participant in the production of meanings. In that way,

it is useful to first look at the different software actors active in the signifying process in

order to understand the technocultural shaping of users’ perception of the meanings that

are offered to them.

The search for meaningful cultural objects on amazon.com represents both a link

and a point of departure between the existence of books in physical bookstores and on

amazon.com, especially as amazon.com deploys a new category of technical actors to

produce content. As a starting point, it is useful to reflect on the difference between the

cultural practice of finding books online and that of finding books in a physical


bookstore. The problem online commercial environments are faced with is that they can

only partially approximate tangible cultural objects through the virtual representation of

books, such as title web pages on amazon.com. There is a need to make up for the loss of

physicality of the book as an object and of the practices associated with it - holding a

book, flipping through the pages - through the implementation of processes that are

designed to mimic these physical practices. Thus, it is possible on the amazon website to

browse sample pages - to look at a table of contents and read excerpts. However, the

innovative features of the amazon.com website are not so much related to how best it can

imitate a physical bookstore as they are focused on assisting users in defining the

meanings of a book within a discursive network. Some of these features literally surpass

the possibilities offered in the physical world. For instance, amazon.com’s “search inside

the book” feature is “transcendent of print books insofar as it can deliver salient content

that would have otherwise been unnoticed” (Marinaro, 2003, p. 4). Thus, amazon.com

imitates and reproduces existing practices of looking for and buying books but also

creates new ones, and, in the process, redefines what books stand for. From an ANT

perspective, it could be said that the mediation of books in a virtual environment requires

a detour in that the physicality of the book is replaced by informational practices that are

supposed to stand for specific practices. This detour, however, also creates a change with

regards to goals. As Latour (1999, p. 179) explains it, the translation of one set of

practices through technologies results in goal translation. In the case of Amazon, the goal

is not simply to imitate physical books and the practice associated with them, but also to

create new practices of searching for content.


Books are not simply remediated on amazon.com; they undergo a change from

being a particular type of referent that contains multiple signs and signification about a

range of topics to becoming signs in that they are transformed into web pages.

Furthermore, the process of turning books into signs does not simply mean that web

pages represent a physical object, but that they express the cultural meanings associated

with that object. These meanings are related to the position of a book in a network of

other books. This positioning is produced through the articulation between users’

practices and software processes. The representation of books in online environments

reveals a shift in the status of books so that they become “nodes on a network consisting

of other books, commentaries and various kinds of meta-information” (Esposito, 2003).

The new possibilities offered by the digitization of books are related to the possibility of

creating and searching for information and, by extension, to the cultural meanings of

specific books. The formation of these cultural meanings, in the case of amazon.com, is

co-constitutive with the new status of books as not only signs, but also as mediators

between a selling entity (amazon.com) and users.

There are several components of the amazon.com architecture that are devoted to

producing content and meanings. Some of those are conventional systems of ordering

information into categories, for instance, through themes. Of the two book titles

discussed in this chapter, J.K. Rowling’s Harry Potter and the Deathly Hallows belongs

to the Science Fiction, Fantasy, Mystery and Horror section of the Children’s books

category, while Gilles Lipovetsky’s Empire of Fashion can be found under the

Cultural Anthropology section of the Social Sciences category. This categorization


indicates that both titles are related to all the other titles in their respective categories as

they focus on similar topics (e.g. fashion for the Empire of Fashion), or genres (e.g.

children’s literature for Harry Potter). Furthermore, amazon.com offers multiple ways of

searching for items, including common online features such as a search box to search by

title, author or keywords, and the option to browse through categories.

Most importantly, amazon.com has developed its own recommendation system,

called “item-to-item collaborative filtering” (Linden, Smith and York, 2003). The

principles of item-to-item collaborative filtering have been patented by amazon.com.17

The difference between amazon.com’s recommendation system and other filtering and

collaborative systems is that “rather than matching the user to similar customers, item-to-

item collaborative filtering matches each of the user’s purchased and rated items to

similar items, then combines those items into a recommendation list” (Linden, Smith and

York, 2004, p. 78). That is, amazon.com’s recommendation system proceeds by

establishing correlations through the analysis of purchasing, viewing, rating and search

patterns. In its official documentation, amazon.com asserts that this recommendation

17 For recommendations based on items bought, see: Hanks, Steve and Spils, Daniel. (2006). For recommendations based on items viewed, see: Linden, Gregory; Smith, Brent; Zada, Nida. (2005). For recommendations based on actions recorded during a browsing session, see: Smith, Brent; Linden, Gregory and Zada, Nida. (2005) and Bezos, Jeffrey; Spiegel, Joel; McAuliffe, Jon. (2005). For recommendations based on shopping cart content, see: Jacobi, Jennifer; Benson, Eric; Linden, Gregory. (2001). For recommendations based on ratings, see: Jacobi, Jennifer; Benson, Eric. (2000). For recommendations based on terms searched see: Whitman, Ronald; Scofield, Christopher. (2004), Ortega, Ruben; Avery, John and Robert, Frederick. (2003), Bowman, Dwayne; Ortega, Ruben; Linden, Greg; Spiegel, Joel. (2001), Bowman, Dwayne; Ortega, Ruben; Hamrick, Michael; Spiegel, Joel; Kohn, Timothy. (2001), Bowman, Dwayne; Ortega,


system produces better results than other collaborative filtering techniques: “our

algorithm produces recommendations in real-time, scales to massive data sets, and

generates high-quality recommendations” (2004, p. 77). Furthermore, according to

amazon.com, “the click-through and conversion rates - two important measures of Web-

based and email advertising effectiveness - vastly exceed those of untargeted content such

as banner advertisements and top-seller lists” (2004, p. 79). Item-to-item collaborative

filtering works by analyzing the similarities between items. These similarities can be

defined through identifying which items customers buy together, which items are placed

in a shopping cart, which items are viewed in the same browsing session and which items

are similarly rated. In so doing, amazon.com can provide a seemingly infinite number of

recommendations, because those recommendations change as the browsing patterns and

list of available titles change on the amazon.com website. Recommendations based on items bought and recommendations based on items viewed are supposed to be complementary:

Another benefit to using viewing histories is that the item relationships identified include relationships between items that are pure substitutes for each other. This is in contrast to purely purchase based relationships, which are typically exclusively between items that are complements of one another. (Smith, Brent; Linden, Gregory and Zada, Nida, 2005).

Thus, any item purchased, placed in a shopping cart or viewed is accompanied by a list of

recommendations (Figure 9).
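The general mechanics of item-to-item collaborative filtering can be sketched in a few lines of code. The following is a minimal illustration on invented toy data, not Amazon’s patented implementation: item similarity is reduced here to a cosine measure over co-purchases, and all function names, titles and purchase baskets are assumptions made for the example.

```python
from collections import defaultdict
from itertools import combinations
from math import sqrt

def item_similarities(baskets):
    """Build item-to-item cosine similarities from customer baskets.

    `baskets` maps a customer id to the set of items they bought.
    Treating each item as a binary vector over customers, cosine
    similarity reduces to co-purchases / sqrt(count_a * count_b).
    """
    counts = defaultdict(int)   # item -> number of buyers
    co = defaultdict(int)       # (item_a, item_b) -> co-purchase count
    for items in baskets.values():
        for item in items:
            counts[item] += 1
        for a, b in combinations(sorted(items), 2):
            co[(a, b)] += 1
    sims = defaultdict(dict)
    for (a, b), n in co.items():
        s = n / sqrt(counts[a] * counts[b])
        sims[a][b] = sims[b][a] = s
    return sims

def recommend(sims, purchased, top_n=3):
    """Combine the items most similar to each purchased item."""
    scores = defaultdict(float)
    for item in purchased:
        for other, s in sims.get(item, {}).items():
            if other not in purchased:
                scores[other] += s
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Toy purchase histories (invented for illustration).
baskets = {
    "u1": {"Harry Potter 7", "Eldest", "Lemony Snicket"},
    "u2": {"Harry Potter 7", "Eldest"},
    "u3": {"Empire of Fashion", "Hypermodern Times"},
    "u4": {"Harry Potter 7", "The Secret"},
}
sims = item_similarities(baskets)
print(recommend(sims, {"Harry Potter 7"}))  # "Eldest" ranks first
```

Note that, as in the dissertation’s description, the sketch matches items to items rather than users to users: the user only enters at the final step, when the similarity lists of their purchased items are combined.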

Ruben; Hamrick, Michael; Spiegel, Joel; Kohn, Timothy. (1999) and Bowman, Dwayne;


Figure 9: Recommendations featured on the Empire of Fashion page.

These recommendation features include links to pages listing what customers who have

bought or viewed an item have also bought or viewed (Figure 10), recommendations on

the shopping cart webpage, as well as recommendations on one’s personal amazon.com.

Linden, Greg; Ortega, Ruben; Spiegel, Joel. (2006).


Figure 10: Recommendations by Items Bought for Harry Potter and the Deathly

Hallows.

The personal amazon.com (Figure 11) serves as a personalized entryway to the website

where users can refine their recommendations by rating and tagging items, as well as create wish lists, post a profile and find communities of users with

the same interests. These pages encourage users to be proactive in getting more up-to-

date recommendations through rating and tagging items.


Figure 11: “My Profile” page on amazon.com.

These ratings and tags, as well as information about which items a user currently owns,


allow for an extremely personalized set of recommendations to emerge (Figure 12).

Figure 12: Personalized Recommendations Based on Items Rated.

The example in Figure 12 shows that the recommendation software establishes a link

between Deleuze and Guattari’s Thousand Plateaus and Jameson’s Postmodernism, or

the Cultural Logic of Late Capitalism. As other users have bought or viewed and highly

rated both books, the recommendation software considers them as meaningfully linked

with each other and therefore complementary. The link between Thousand Plateaus and

Jameson’s Postmodernism is easy to see from a conventional perspective. Both books can

be categorized as cultural studies works focused on developing a post-Marxist critique of capitalism. In the same way, the recommendations that are listed from the two examples of this study include titles where the cultural link is readily apparent. The recommendations

based on items bought for Empire of Fashion include Hypermodern Times, another of

Lipovetsky’s books. Most of the other recommended titles focus on a cultural analysis of

fashion, such as, for instance, Roland Barthes’s Language of Fashion. The

recommendation list based on items bought for Harry Potter and the Deathly Hallows

lists related Harry Potter material (e.g. the Harry Potter and the Goblet of Fire DVD) as well as fantasy and children’s fiction (e.g. Eldest and Lemony Snicket).

Figure 13: Recommendations Based on Items Viewed for The Empire of Fashion

The affiliations between recommended items seem to be fairly straightforward. However,

amazon.com’s recommendation system differs from traditional recommendation systems

in that by processing the buying and viewing patterns surrounding an item, it aims at

measuring the probabilities of an item being similar to another one regardless of the


categories within which these items are placed. As amazon.com founder Jeff Bezos

argues, amazon.com’s recommendation system offers radically novel suggestions:

We not only help readers find books, we also help books find readers, and with personalized recommendations based on the patterns we see. I remember one of the first times this struck me. The main book on the page was on Zen. There were other suggestions for Zen books, and in the middle of those was a book on how to have a clutter-free desk. That’s not something a human editor would have ever picked. But statistically, the people who were interested in the Zen books also wanted clutter-free desks. The computer is blind to the fact that these things are dissimilar in some way that’s important to humans. It looks right through that and says yes, try this. And it works. (Wired, January 2005)

Bezos suggests that there is an element of meaningful incongruity that is at stake in the

recommendation process in that the recommended titles might not make sense from a

conventional perspective, but could potentially bridge different types of interests by

transcending cultural categorization. For instance, the list of recommendations based on

items viewed includes items that should not be related to the original item from a

conventional perspective. According to the recommendation list based on items viewed,

there is a link between Lipovetsky’s Empire of Fashion and the DVD of a theatre

adaptation of Jane Eyre (Figure 13). The list of recommendations based on items viewed for Harry Potter and the Deathly Hallows lists Harry Potter-related titles and items, but also Rhonda Byrne’s The Secret, another bestseller on amazon.com (Figure 14).


Figure 14: Recommendations Based on Items Viewed for Harry Potter and the

Deathly Hallows.

The production of recommendations that ignore cultural categorization is more visible

with an in-depth visualization of the recommendation network. The images presented

below were provided by Touchgraph (www.touchgraph.com) - visualization software

that maps the recommendations linked to a specific title on amazon.com. The

Touchgraph software is useful for providing a software-based perspective rather than a

user-based perspective. That is, the Touchgraph visualization represents potential

hyperlink paths from one recommendation to the next in their totality. The Touchgraph

visualizations do not represent the surfing pattern of a specific user, but rather depict the

overall organizing pattern of the recommendation software. The Touchgraph

visualization software thus offers a way to examine the informational architecture of the


recommendation system. As is made apparent with the visualizations, the

recommendation system, by bypassing thematic boundaries, operates through a cultural

logic of ever-expanding inclusion. Unfortunately, the Touchgraph visualization software

does not include all the recommendations as it does not crawl the different

recommendation pages but only looks for the titles mentioned under the “customers who

bought this item also bought...” box on a title page. The process for producing the

networks of recommendations for Empire of Fashion and Harry Potter involved doing a

search for both “Empire of Fashion” and “Harry Potter and the Deathly Hallows” on the

Touchgraph interface. The two titles were then crawled for their recommendations. There

were 11 recommendations for Empire of Fashion and 10 recommendations for “Harry

Potter and the Deathly Hallows” (Figures 15 and 19). Those recommendations were then

crawled, thus going to a depth of two (Figures 17 and 21). The visualization software also

maps the links between all these items and is therefore useful for identifying to what

extent a cluster of recommendations is circular and the degree to which, on the contrary,

it reaches beyond the original cluster of recommendations. As can be seen, the

Touchgraph software also automatically identifies thematic clusters by using different

colours. As the network visualization shows, the recommendation system allows for the

existence of clusters of tightly linked items as well as cross-cluster links. The

recommendations for the Harry Potter book, for instance, include other Harry Potter

material, but also related fantasy clusters such as the one surrounding Christopher

Paolini’s Eldest, as well as items that are not as obviously related to the series,

such as a mystery novel from Janet Evanovich and the Casino Royale DVD. The first


layer of the recommendation network around Empire of Fashion is mostly made of

cultural analyses of fashion, but there are also other clusters of titles focused on a

historical approach to fashion, graphic arts as well as new economy related books (such

as Wikinomics) in the cluster around Hypermodern Times. The set of visualizations by

subjects also shows the ways in which the recommendation network extends itself

outward (Figures 16, 18, 20, 22). For instance, the recommendation network around the

Empire of Fashion includes economics, while the Harry Potter and the Deathly Hallows

network includes feature films.
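The crawling procedure described above amounts to a breadth-limited traversal of the recommendation network. The sketch below is an approximation of that procedure under stated assumptions: `get_recs` is a hypothetical stand-in for fetching a title’s “customers who bought this item also bought” list, and the toy network is invented for illustration.

```python
from collections import deque

def crawl_recommendations(seed, get_recs, max_depth=2):
    """Breadth-first crawl of a recommendation network.

    Starting from `seed`, follow recommendation lists up to
    `max_depth` hops and return the set of directed edges
    (title, recommended_title) discovered along the way.
    """
    edges = set()
    seen = {seed}
    queue = deque([(seed, 0)])
    while queue:
        title, depth = queue.popleft()
        if depth >= max_depth:
            continue  # do not expand beyond the crawl depth
        for rec in get_recs(title):
            edges.add((title, rec))
            if rec not in seen:
                seen.add(rec)
                queue.append((rec, depth + 1))
    return edges

# Toy recommendation lists standing in for amazon.com data.
recs = {
    "Empire of Fashion": ["Hypermodern Times", "The Language of Fashion"],
    "Hypermodern Times": ["Wikinomics"],
    "The Language of Fashion": ["Empire of Fashion"],
    "Wikinomics": ["The Wealth of Networks"],
}
edges = crawl_recommendations("Empire of Fashion", lambda t: recs.get(t, []))
print(sorted(edges))
```

Because the crawl records every edge, the resulting graph captures both circular clusters (titles that recommend each other back) and outward links that carry the network into new subject areas, which is exactly the distinction the Touchgraph visualizations make visible.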

The Touchgraph visualizations reveal that some recommendations are going to be

culturally relevant to a specific title as they are thematically related to that title. However,

there are new suggestions that might seem to be completely unrelated to the original title

from a conventional perspective, but because they are analyzed by a recommendation

system that looks for similarities, they are presented as culturally relevant in a new way.

The important aspect of the recommendation system is that it is suggestive rather than

authoritative. For instance, the new kinds of recommendations it produces might indeed

be the result of different users using the same computer with the same IP address.

However, because there is the possibility of an actual match, the recommendation

software will present what could be anomalies as culturally linked to each other.

Furthermore, the production of cultural meanings through amazon.com’s

recommendation software proceeds through suggestions. Rather than providing a set of

authoritative explanations about why a title can be linked to another one, amazon.com

suggests affiliations. The rationale for these affiliations is, to some extent, quite artificial


in that it is based on the assumption that there are always potential links, but in so doing,

it also provides the basis for infinite surfing and viewing possibilities.


Figure 15: Recommendation Network for the Empire of Fashion (depth 1). 28 March 2007.

This visualization shows the recommendations associated with The Empire of Fashion. The recommendations at this level are thematically linked to the original title as they include mostly books on fashion, as well as another of Gilles Lipovetsky’s books.


Figure 16: Recommendation Network for the Empire of Fashion (depth 1 - subjects). 28 March 2007.

This visualization represents the same

network as the previous visualization. It also

lists the subjects under which all the

recommendations are categorized. Those subjects are also relatively homogenous, covering the category of fashion (Fashion, Grooming, Beauty, etc.), as well as the social sciences and humanities (e.g. cultural studies, history, social history, sociology, art).


Figure 17: Recommendation Network for The Empire of Fashion (depth 2). 28 March 2007.

This visualization shows the recommendations for Empire of Fashion, as well as their recommendations. The recommendations are not as thematically linked as in the previous visualization. The red and turquoise clusters are about fashion, while the blue and green clusters show a more eclectic selection, from books on the Internet and economics (e.g. Wikinomics and The Wealth of Networks) to movies (e.g. Marie-Antoinette).


Figure 18: Recommendation Network for The Empire of Fashion (depth 2 - subjects). 28 March 2007.

This visualization shows the subject

categories in the recommendation

network. The recommendation

software works by extending the

network of recommendations, and

thus there is a greater variety of

subjects.


Figure 19: Recommendation Network for Harry Potter and the Deathly Hallows (depth 1). 27 March 2007.

This visualization represents the

recommendation network for Harry Potter

and the Deathly Hallows. Compared with

The Empire of Fashion, we see a greater

range of cultural products, including DVDs

(Happy Feet, Cars). While there are some

thematic links with Harry Potter, in

particular other Harry Potter books and

children’s fantasy (Lemony Snicket, Eldest),

the recommendations also include items that

have little thematic relevance (e.g. Plum Lovin’, Casino Royale).


Figure 20: Recommendation Network for Harry Potter and the Deathly Hallows (depth 1; subjects). 27 March 2007.

The subjects for the recommendations

for Harry Potter and the Deathly

Hallows cover a broader range of

categories (movies, action, children,

adventure). In comparison with Empire

of Fashion, the Harry Potter network

is less thematically coherent. The

recommendation software does not

offer any qualitative differentiation - it

cannot understand that some items

might have been purchased for

different purposes (e.g. gifts for

different people). In so doing though, the recommendation system suggests to users that there is a possibility that what might at

first seem like disparate items have a cultural link.


Figure 21: Recommendation Network for Harry Potter and the Deathly Hallows (depth 2). 27 March 2007.

With this second-level recommendation

visualization, it is possible to identify in the

red cluster some items that are directly

related to Harry Potter and the Deathly

Hallows, such as other Harry Potter and

Children’s books. The purple and olive

clusters also list fantasy titles. It is more

difficult to see the conventional links

between Harry Potter and the items listed

in the other clusters.


Figure 22: Recommendation Network for Harry Potter and the Deathly Hallows (depth 2; subjects). 27 March 2007.

Visualizing the subject categories further highlights that the recommendation system works through a logic of never-ending expansion rather than trying to narrow down a search to specific titles. In so doing, the recommendation system aims to engage users in browsing rather than searching. The recommendation software multiplies the possibilities of consumption.


The omnipresence of a list of recommendations when surfing on amazon.com

creates a sense of infinite possibilities, especially as the more pages are viewed, the more

the recommendations can change. In that sense, it is useful to analyze the type of

meanings produced through amazon.com’s recommendation system by using

Derrida’s concept of différance. The concept of différance expands Saussure’s argument

that the meaning of a sign is not established through a process of reference to something

out there, but through a process of differentiation among signs. As Derrida (2002)

explains it, the concept of différance is useful for further examining the ways in which

meaning emerges through the differences among signs: “Essentially and lawfully, every

concept is inscribed in a chain or in a system within which it refers to the other, to other

concepts, by means of the systematic play of differences. Such a play, différance, is thus

no longer simply a concept, but rather the possibility of conceptuality, of a conceptual

process and a system in general” (p. 148). Using such a concept for this case study does

not mean that there can be a direct equation between différance and meaning production

through amazon.com’s recommendation software, but that the recommendation system

operates by looking for differentiations among book titles that are at the same time

complementary. Thus, the recommendation system does not use radical differences, but

small differences within a continuum of similarities. One could understand différance as

encompassing the play of opposites. For instance, “good” is the opposite of “bad” and

takes its meaning from radically differentiating itself from the concept of “bad.” The

system of differentiation on amazon.com is not one that makes use of the play of

opposites, but one that mainly articulates similarities and chain connections. That is, the


differences between items are delineated and circumscribed by the similarities that exist

among items as they stand for similar users’ interests. For instance, a title that is

recommended when viewing Harry Potter and the Deathly Hallows is Christopher

Paolini’s Eldest. The similarity between these two items is easy to identify - both are

usually recommended for children and both belong to the fantasy/magic genre. While

these two items are not identical, their differences are circumscribed within the notion

that they complement each other. In the same way, most of the titles that are

recommended for Empire of Fashion are not only books on fashion, but also academic

books on fashion. It is this process of differentiation within similarity that constitutes the

horizon of cultural expectations on amazon.com and that serves to rationalize

recommendations that could seem incongruous from a conventional perspective.

It is through this experience of differentiation within similarities that

recommended items that at first do not seem to be related to a selected title can be

interpreted and presented as linked. In that sense, the recommendation system imposes a

specific signifying semiology that shapes the meanings of books and, in that process,

suggests a specific process to users in terms of how they can construct the cultural

meaning of books. This process can be represented as follows:


Table 6: Mixed Semiotics and the Recommendation System on Amazon.com

Expression
- Matter: ensemble of expressive materials.
- Substance: existing data about a book (title, price, order, reviews from publishers and users); data on user behaviour (pages viewed, items bought, searched for and rated), collected through the profiling software.
- Form (syntax and rules): similarities are established in terms of interchangeability (items viewed) and/or complementarity (items bought); the recommendation system only expresses differentiations within similarities.

Content
- Matter: user behaviour.
- Substance: values and rules embedded in the recommendation software. Relationships among books: the software looks for similarities among book titles regardless of traditional cultural categories. Interpretation of user behaviour: users who share some similar consumption and viewing patterns have similar interests and therefore similar cultural desires.
- Form: signified content: the list of recommendations.

The continuum of material fluxes on which the recommendation software is built is user

behaviour, which is captured and categorized in terms of pages viewed and items bought,

searched for and rated at the level of substance of expression. The correlation of a catalog of titles with user behaviour proceeds by following specific rules of


interpretation. For instance, the recommendation software focuses on finding similarities

regardless of whether the books belong to different cultural categories and starts with the

premise that users who have some titles in common in terms of their buying and viewing

patterns share the same cultural interests. These rules are embedded at the level of

substance of content. The syntax and rules at the level of form of expression refer to the

algorithmic processes whereby links are established following the differentiation within

similarities rule. Thus, users are invited to interpret a list of recommendations in a

specific manner, acknowledging that the items presented have the possibility of

expressing cultural desires that were previously untapped or unseen by other

recommendation systems and, perhaps, unrecognized by users themselves.

Consequently, amazon.com’s recommendation system does not only concern the

production of cultural meanings, but also the shaping of the perceptions of users by

producing a specific kind of meaningful link which works by suggesting differentiations

through similarities, be it in the form of interchangeable or complementary items. It is

useful to consider the recommendation system as a signifying actor with which users

have to interact. There is a communicative exchange that takes place between users and

the recommendation software as the software attempts to create new meaningful links

and therefore new cultural desires. Thus, it is not simply the recommendation system that

is at the core of the cultural experience of amazon.com, but the interactions between a

non-human actor - a signifying system that embodies both a cultural and commercial

imperative - and human actors. Describing this particular actor-network and its effects

requires a consideration of how the signifying semiologies produced by the


recommendation software are articulated with and encapsulated into other kinds of

signifying and a-signifying semiologies, in particular the ones that involve users.

4. User-Produced Content: Meaning Proliferation and Cultural Homogeneity

The system of differentiation put in place by amazon.com is one that is delineated

by similarity. The question that is raised, in turn, is about how this form of meaning

production shapes modes of interpretation and decoding. The play of difference - of

suggesting new meanings that are similar to each other - can be further examined through

a comparison with Gilles Lipovetsky’s argument in the Empire of Fashion that

contemporary Western society can be characterized by its “infatuation with meaning.”

While Lipovetsky’s arguments were developed before the rise of the Internet, his analysis

is nevertheless helpful in that it describes how mass consumption (the universe of

fashion) produces a “graduated system made up of small distinctions and nuances” so

that “the consumer age coincides with (a) process of permanent formal renewal, a process

whose goal is the artificial triggering of a dynamic aging and market revitalization”

through “a universe of products organized in terms of micro-differences” (2002, pp. 137-

139). Lipovetsky’s analysis echoes some of the processes at stake on amazon.com,

especially those that were identified through the analysis of the recommendation network

of The Empire of Fashion. The “small distinctions and nuances” among titles are similar

to the play of differentiation within similarity that constitutes the amazon.com

recommendation system. The “permanent formal renewal” does not only include the

addition of new titles, but also the algorithmic processing of the countless actions of users

in terms of pages viewed and items bought and commented upon. An example of this


feature of the website is the patented “Increases in Sales Rank as a Measure of Interest.”

This patent document argues that the increase or decrease in sales rank of an item can be

interpreted as an increase or decrease in interest about that particular item. The document

compares this new measure of interest to traditional best-selling lists and argues that sales

rank lists are better because they reflect “real-time or near-real-time change”, whereas

bestseller lists are “slow to change.” According to amazon.com, the sales rank list makes

it possible to identify “popular items earlier than conventional bestseller lists.” This is

clearly seen as an advantage for Amazon.com, in that by constantly adjusting the

representation of the actions of users to users, users are encouraged to regularly visit the

site. The perpetual novelty of the site is not limited to lists of popular items, and is also

generalized through Amazon.com’s recommendation system, where recommendations

are always changing since they are based on processing the actions of users.
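The patent’s reading of sales-rank movement as a proxy for interest can be illustrated with a deliberately simple scoring rule. The formula below is an assumption made for illustration, not the patented method; the titles and rank histories are invented.

```python
def interest_score(rank_history):
    """Score an item by its recent improvement in sales rank.

    `rank_history` is a list of sales-rank samples for one item,
    oldest first. A *lower* rank means more sales, so a drop in
    rank between the two most recent samples is read as a surge
    of interest. (Illustrative reading of the patent's idea, not
    its actual formula.)
    """
    if len(rank_history) < 2:
        return 0
    return rank_history[-2] - rank_history[-1]

# Invented rank histories for three items.
ranks = {
    "Harry Potter 7": [120, 80, 5],     # climbing fast
    "Empire of Fashion": [4000, 4100],  # slipping
    "Steady Seller": [10, 10, 10],      # high rank, but unchanging
}
movers = sorted(ranks, key=lambda t: interest_score(ranks[t]), reverse=True)
print(movers[0])  # -> Harry Potter 7
```

The contrast with a conventional bestseller list is visible in the toy data: the “Steady Seller” tops a static ranking, but the score surfaces the fast-climbing title first, which is the near-real-time sensitivity the patent claims as an advantage.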

Amazon.com offers a space where users are actively involved in meaning creation

through new kinds of practices and actions, such as writing customer reviews and rating

and tagging items. Furthermore, the combination and analysis of these new verbal (e.g.

writing reviews, tagging) and non-verbal practices (clicking through, putting items in a

shopping cart) through the deployment of algorithms to find similarities creates new sites

and new networks of meaning production. That is, amazon.com works by processing and

analyzing the actions of users and in so doing creates a new form of software-assisted

sociality, that is, a network of social actors where the cultural meanings of books are partly

constructed by a software layer. While both the software system and human actors can

also engage in the production of meaningful links, the anchoring of these meanings into


something more articulate and in-depth than a hyperlink is entirely within the sphere of

human activity. Within the circulation of meanings on amazon.com, certain types of

customer actions serve to create a sense of depth. As opposed to the image of the

network, which was used to represent the recommendation system at work on

amazon.com, perhaps this type of user activity can be best defined by its verticality.

Whereas the action that can be identified with the recommendation software is that of

clicking on a link, accessing the content produced by users is essentially an act of

scrolling down a page to get to more detailed meanings. Thus, the particularity of the

amazon.com system is that it offers multiple ways - both horizontal and vertical - of

exploring the cultural relationship among books. An Amazon product page contains up to

31 categories to provide more information about a title and support users:

Table 7: Surfing Paths on Amazon.com

Search Functions:
• Search Box
• A9 Search Box
• Search Listmania
• Search Guides
• Askville

Information about the selected title:
• Book information (author, price, availability)
• Editorial reviews
• Product details
• Look inside feature
• Amazon Connect
• Spotlight reviews
• Customer reviews
• Customer discussion
• Product Wiki (became Amapedia as of February 2007)

Recommendations based on the selected title:
• Better together/Buy this book with...
• Customers who bought this item also bought
• Citations: books that cite this book
• What customers ultimately buy after viewing this item
• Help others find this item
• Tag this product
• Rate this item to improve your recommendations
• Listmania: Products you find interesting
• So you’d like to... (guide)
• Your recently viewed items
• Look for similar items by category
• Look for similar items by subject
• Your recent history

Other:
• Make it available as an ebook (if you are the publisher or the author)
• Sponsored links
• Sponsored advertising
• Feedback (customer service)

Of these 31 categories, 18 are designed to create networks to identify titles that could be

of potential interest to users. These categories are placed within the “Search Functions”

column and the “Recommendations Based on Selected Title” column of the above table.

The nine categories in the “Information about the Selected Title” column correspond to potential

ways of accessing more in-depth information about the book.

Three of the categories in the “Information about the selected title” (customer

reviews, customer discussion and product wiki) are sites of user activity in that, for

instance, users can both read customer reviews and write a review themselves. While

there was no review of Harry Potter and the Deathly Hallows at the time of the study

because the book had not been published, there were two reviews for Lipovetsky’s

Empire of Fashion (Figure 23). The first review tries to summarize the main argument in


the book: “The basic idea of his thought is that fragmentation of society does not, in the

way it is thought commonly, means destruction of morals or democracy. On the contrary,

democracy is formed by the powers that are able to join fragmentation and continuity.”

The second review provides a critical context for the book by arguing that Empire of

Fashion is “all and all, an outstanding and entertaining rejection of the tedious, reductive

Marxist explanations of fashion.” Both reviews thus give more in-depth information

about the content of a book. The Harry Potter and the Deathly Hallows page features a list

of discussion topics (Figure 24) focused on the potential content of the book (for

instance, which character is going to die next) and on the Harry Potter series in general

(for instance, one discussion title is “Top 119 moments in Harry Potter”) and about author J.K. Rowling. The study of the customer reviews of both the Empire of Fashion and the Harry Potter and the Deathly Hallows recommendation networks reveals that there was

no correlation between the list items produced by the recommendation software and the

content of the customer reviews in that none of the customer reviews mentioned The

Empire of Fashion or Harry Potter and the Deathly Hallows. This does not mean that

customers never compare items, but rather indicates that the practice of writing customer reviews seems to be geared mostly toward analyzing the content of a selected title.


Figure 23: Customer Reviews for Lipovetsky’s Empire of Fashion

Figure 24: Customer Discussions for Harry Potter and the Deathly Hallows.

Another sphere of activity for users concerns the production of recommendations


through tagging, rating and producing listmanias and “So you’d like to” guides. These

features work alongside the recommendation system and follow the same pattern of

creating networks of recommendations. However, whereas the recommendation system

cannot spell out the links between items other than as the processing of browsing

patterns, the recommendations produced by users have a more explicit approach,

especially in the case of creating listmanias and “So you’d like to” guides. As explained

on the amazon.com website, a listmania:

... Includes products you find interesting (...) Each list can cover all kinds of categories, and can be as specific (“Dorm Room Essentials for Every Freshman”) or as general (“The Best Novels I’ve Read This Year”) as you like.18

“So you’d like to” guides are similar to listmanias, but are described as:

... A way for you to help customers find all the items and information they might need for something they are interested in. Maybe there is an indispensable set of reference materials that you’d recommend to a new college freshman wishing to study literature. Maybe there are several items you think are necessary for the perfect barbecue.19

The listmania and “So you’d like to” guides allow for the formulation of meanings that

are designed to help users choose products by explaining the functions these products

fulfill and the kind of consumer group (i.e. the college freshman) they are most relevant

for. In that sense, these two features allow for the positioning of a title within a network

of other titles, whereas the recommendation software can only shape the network within

which a title is embedded. That is, the listmania and “So you’d like to” guides allow for

18 http://www.amazon.com/gp/help/customer/display.html/002-5666753-9443228?ie=UTF8&nodeId=14279651
19 http://www.amazon.com/gp/help/customer/display.html/002-5666753-9443228?ie=UTF8&nodeId=14279691


the positioning of items according to a range of cultural variables defined by the users

producing those recommendation lists. The listmanias on the day in which the Harry

Potter and the Deathly Hallows page was recorded (Figure 25) include a list of “Great

Sci/Fi Fantasy for Teens, Young Adults”, thus placing the Harry Potter book within the

broader category of science fiction. Similar to the recommendation software, though,

listmanias can go beyond categorization. This includes, for Harry Potter, lists such as

“Books I’ve Read or Plan to Read” and “Cracking Good Read for 2007.” The same process of multiple positioning of the Harry Potter books appears in the “So you’d like to” guides, which include the general “Turn the Pages Late into the Night” and “Enjoy

Powerful Writing! Mixed Genres!” as well as the more category-focused “Read Books

Featured on TWEEN TIME bookshelf.” There were no “So you’d like to” guides

associated with Empire of Fashion, but there were three listmanias that focused on some

of the central themes of the book. “The Unabashed Social Climber” lists books on the

question of social mobility and social status. “Corpography II” focuses on the question of

embodiment, and “Ultimate Secrets to French Fashion and Style” lists items related to

French fashion. The recommendation lists produced by users thus represent instances

where users themselves bring a sense of cultural order by positioning a book within a

network of other books through the use of a range of cultural variables. The paradoxical

aspect of those recommendation lists is that they are produced by specific individuals,

yet, at the same time, they intend to reflect general interests, such as an interest in fashion

or an interest in fantasy for teenagers. In that sense, the user-generated recommendations



are inscribed within a continuum between the individual and the community.

Figure 25: Listmanias and So You’d Like To guides - Harry Potter and the Deathly

Hallows

The cultural position of a selected item is not only reflected in the title of a

listmania, but can also be present in the ways in which the author of the listmania

presents him/herself. For instance, the listmania “Books I’ve Read or Plan to Read” looks

fairly general in its scope at first sight, but the author describes herself as a “3RS business

solution owner.” This type of identification serves to further position a list of items

within a more specific social field. The general scope of the list is thus narrowed down

through the identification of social status and class, and the list of items can be

interpreted as representing the interest of a particular social group. In that sense, there are

two processes of signification at stake on amazon.com. The overarching process of

signification is the one in which meanings are produced through the dynamic of


differentiation within similarity. Within this overall process, users can assign what

Baudrillard in For a Critique of the Political Economy of the Sign (1981) described as the

sign-value of an object, that is the social status and social rank associated with an object.

This form of signification can potentially contradict the dynamic of differentiation within

similarities in that it inscribes books within social boundaries. At the same time, those

instances of differentiation through opposition are integrated within the differentiation

through similarity system, in that the recommendation software is omnipresent

and, at the level of the interface, literally wraps the content of user-generated

recommendations. While users can attribute sign-values to objects, they are not limited to

this type of signification. As the listmania titles show, the dimensions of pleasure (“Turn

the Pages Late into the Night”) and practicality of use (“Great Sci/Fi Fantasy for Teens”)

are also present. This type of signification is one of the main arguments in Lipovetsky’s

Empire of Fashion. Furthermore, while Lipovetsky sees a historical difference between

Baudrillard’s concept of sign value and a new “trend in which consumption has been

desocialized, in which the age-old primacy of the status value of objects has given way to

the dominant value of pleasure (for individuals) and use (for objects)” (2002, p. 145), it

appears that on amazon.com, those two systems of signification can coexist because they

are articulated and inscribed within a broader system of small differentiations within

similarity.

Whereas the recommendation guides produced by users reintroduce more

conventional cultural and social aspects into the production of culturally meaningful links

among items on amazon.com, the other two categories of rating and tagging items


operate through different channels. Rating and tagging are brief labels imposed on items,

as opposed to the more verbal practices of producing guides and reviews. Rating on

amazon.com is presented as useful for users in that they can get a quick visual clue about

the perceived quality of a book. Rating is also used for the personalization and

customization of recommendations on the amazon.com website in that the ratings

submitted by a user are then correlated with other rating, buying and viewing patterns so

as to produce a list of recommendations. Thus, as seen previously, Jameson’s

Postmodernism is recommended to users who have highly rated Deleuze and Guattari’s A Thousand Plateaus. Tags are described by amazon.com as “keyword or category labels”

that “can both help you find items on the Amazon site as well as provide an easy way for

you to “remember” or classify items for later recall.”20 Tagging thus enables a new form of recommendation process through the creation of networks of items sharing a common descriptor defined by users. Tagging as a

semiological practice allows for the imposition of meanings onto titles. While Empire of

Fashion did not have any tags associated with it, Harry Potter and the Deathly Hallows

had 118 tags (Figure 26). Some of those tags, such as “harry potter” or “harry potter book

7” are descriptive. Others, such as “snape is innocent” express the opinion of a reader

about future plot development. Tags can also be used in a critical manner as, for instance,

when the Harry Potter book is tagged as “overpriced.” Tags not only inscribe a title

within different discursive spheres and cultural interpretation about the title itself, but

20 http://www.amazon.com/gp/help/customer/display.html/102-2699882-3855309?ie=UTF8&nodeId=16238571


also position the title in relation to other elements. A common tagging practice involves

creating recommendations by using the title of a book as a tag. The Harry Potter book,

for instance, is tagged as “eragon” and “abacar the wizard”, and these tags refer to two

fantasy titles. The idea is to suggest to users that Harry Potter and Eragon or Abacar the

Wizard share common features and thus answer to similar cultural interests. Out of the

250 items that have the tag “harry potter”, for instance, 50 are not Harry Potter related

and include, apart from other fantasy novels, candy, cosmetics and toys.
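The mechanics behind such tag-based networks are not documented by Amazon, but a minimal sketch of the underlying structure, an inverted index from tags to items, can illustrate how a shared descriptor links otherwise unrelated products; all of the taggings below are hypothetical:

```python
from collections import defaultdict

# Hypothetical (item, tag) pairs submitted by users.
taggings = [
    ("harry_potter_7", "fantasy"),
    ("harry_potter_7", "snape is innocent"),
    ("harry_potter_7", "eragon"),
    ("eragon", "fantasy"),
    ("eragon", "dragons"),
    ("chocolate_frogs", "harry potter"),
]

# Inverted index: each tag groups the items that share it.
items_by_tag = defaultdict(set)
for item, tag in taggings:
    items_by_tag[tag].add(item)

def related_by_tag(item):
    """All items linked to `item` through at least one shared tag."""
    related = set()
    for items in items_by_tag.values():
        if item in items:
            related |= items - {item}
    return related

print(related_by_tag("harry_potter_7"))
```

A descriptive tag (“fantasy”) and a title-used-as-tag (“eragon”) feed the same index, which is why candy or cosmetics tagged “harry potter” end up listed under that tag alongside the books.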

Figure 26: Harry Potter Tags.

Users are thus offered a range of semiological activities on the

amazon.com website. Those activities can differ from the recommendation system as in


the case of customer reviews in that they are meant to formulate the cultural meaning of a

specific title in more depth in terms of how the book fits within a range of cultural

considerations. Others, such as user-created recommendation guides, ratings and tags,

complement the recommendation software in that they produce different types of

networks of titles that use a range of cultural factors, from literary genres to social status.

Those networks translate users’ interpretations of the status of a book. These different

types of practices can be summarized using Guattari’s framework in the following

manner:

Table 8: Mixed Semiotics and Users on Amazon.com

Expression
Substance (Ensemble of Expressive Materials):
• The spaces on the amazon.com interface devoted to user expression.
Form (Syntax and Language Rules):
• Range of signifying practices available to users: rate, tag, write a review, start a discussion, contribute to the Wiki.

Content
Substance (Social Values and Rules):
• Rules of discourse as directed by Amazon.com.
• The user’s experience of a book and the cultural interpretations that accompany it and can be dictated by the broader cultural context within which the user and a specific title are located.
• The content produced by authoritative sources, such as editorial reviews.
• The content produced by the recommendation software.
• The content produced by commercial forces.
Form (Signified Content):
• Imposition on a specific book of multiple meanings that reflect a range of users’ cultural interpretations about a book.
• This leads to the production of books as signs and channels for cultural meanings.


As the table shows, the level of expression in terms of the semiological practices

available to users is fairly straightforward in that it involves the spaces on amazon.com

designated for user expression and a range of syntactic tools, such as numbers (for rating)

or words. At the level of content, it is important to notice that the process of producing

content is not simply one of a user expressing his or her interpretation of a book as

indicative of a specific cultural conjuncture, but involves the circling of users by the

amazon.com machine. There are rules of discourse on amazon.com in terms of, for

instance, how long a review can be. There are also editorial reviews, located above the

customer reviews on a product page, that, because they come from institutional sources

such as Publishers Weekly or, in the case of Empire of Fashion, the Library Journal

(Figure 27), act as a more authoritative set of information and cultural meanings about a

selected title. Furthermore, because the recommendation software is omnipresent on the

amazon.com website, it is possible that users are influenced by it in terms of how they

further interpret the content of a book in relation to a network of other titles. The

commercial forces present on amazon.com also play an important role in attempting to

shape users’ cultural preferences. Amazon.com allows publishers to buy paid placements

for titles that replace the “better together” section of a product page with a

“best value” section. The paid placement works in the following manner: if title A is a

bestseller and title B is related to title A, the publisher of title B can pay amazon.com to

say that titles A and B are a “best value” pair that can be bought at a discount. In this

way, commercial interests can override the recommendation software. Another form of

commercialization on the amazon.com website concerns the placement of products on the


amazon.com homepage and sub-category homepages (Figure 28). This increases the

chance of users clicking on those titles, thus making those titles more prominent in the

recommendation lists produced by the recommendation software. Finally, and this was

particularly prominent with Harry Potter and the Deathly Hallows, the marketing of a

bestseller involves the marketing of other titles and items as well (Figure 29). All the

Harry Potter books are automatically listed on the Harry Potter and the Deathly Hallows

page, thus increasing the chance that those items will be viewed. As well, there is a

section on the page devoted to J.K. Rowling’s favourite books.

Figure 27: Editorial Reviews for The Empire of Fashion


Figure 28: Product Placement on Amazon.com Homepage

Figure 29: Harry Potter Product Placement on the Harry Potter and the Deathly

Hallows Page

In terms of the signifying practices that users have access to, it is also important to notice

that the signified content produced does not only consist of a range of meanings and


interpretations, but also concerns the production of books as a particular type of sign that

can be defined as channels of cultural meanings and discourses. This last characteristic is

not only produced by users, but also through the articulation between users’ signifying

semiologies and the signifying semiologies produced by the recommendation software.

This process of creating multiple channels for meanings works to undermine any sense of

authoritative meaning production on amazon.com. While there are editorial reviews that

are authoritative, these do not stand comparison with the multitude of meanings

circulating through the recommendation software and user-produced recommendations.

The amazon.com interface thus offers suggestive paths of signification. In that sense, the

amazon.com website does not seem to have any boundaries in terms of the possibilities of

following hyperlinks of recommendations. The circulation of meanings on the amazon.com website might seem infinite, but it is far from chaotic. The paradox of

amazon.com is that the openness of meanings it provides is accompanied by processes

designed to foster a sense of stability and closure. Those processes partly belong to a

commercial imperative, in that users are constantly encouraged to buy items or to at least

place items in wish lists and shopping carts, especially as those features are located next

to the product information at the top of a page. But those processes of designing stability

and closure are also part of the very specific signification that Amazon.com produces - the idea that differentiation happens within similarities, that items can always

potentially be linked to each other through the very proliferation of meanings. This

seeming contradiction between openness and closure can be best explained by

Lipovetsky’s analysis of the paradox of the multi-channel TV universe (2002).


Lipovetsky (2002) argues that audience fragmentation and mass homogenization are not

incompatible, but rather the result of the interplay between the form and the content of

TV as a medium:

If we grant that the media individualize human beings through the diversity of their contents but that they recreate a certain cultural unity by the way their messages are presented, we may be able to clarify the current debate on the social effects of ‘fragmented television’. (p. 194)

Lipovetsky argues that the fragmentation of the audience through the proliferation of

content and therefore cultural meanings is stabilized through a common formatting. This

analysis can be applied to the case of amazon.com in that the proliferation of meanings

on amazon.com is expressed by the overall format of differentiation within similarity.

Oppositions and negations are never expressed through the amazon.com recommendation

software, and the user practices of tagging, rating and producing recommendations follow

the same format, in that users can only express links between items. This leaves only

customer reviews as potential sites of disagreement about the merit and quality of a book.

The overall format, then, is one that is always inclusive and where exclusion is relegated

to the margins. The formalization and homogenization of meaning formations is but one

of the processes at stake in the stabilization of the website. In order to examine those

other processes, it is necessary to look at the a-signifying semiotics that delineate and

articulate user-generated and software-generated signifying semiologies.


5. Amazon.com’s A-Signifying Semiologies: Shaping Sociality and Individuality within

a Commercial Space

As Guattari argues, a-signifying semiotics involve “a-signifying machines (that)

continue to rely on signifying semiotics, but they only use them as a tool, as an

instrument of semiotic deterritorialization allowing semiotic fluxes to establish new

connections with the most deterritorialized material fluxes” (1996b, p. 150). In that sense,

a-signifying semiotics “produce another organization of reality” (1974, p. 39) by using

signifying semiologies and harnessing material intensities to create new economic, social,

cultural and political dynamics and relations of power. In that sense, a-signifying

machines are not separate from signifying semiologies, but they organize signifying

semiologies to create relations of power. In the case of amazon.com as described in figure

2, the a-signifying semiologies represent processes of shaping users and their sphere of

activity through constant profiling (the harnessing of material fluxes) and the delineation

of signifying semiologies (the articulation between software-generated and user-

generated content and practices). The core of the a-signifying dynamics on amazon.com

is to articulate the cultural search for meanings with a commercial imperative. On that

level, there is a process of composition (Latour, 1999, p. 181) whereby the human actors

on the website have to delegate their search for cultural meanings to specific software

layers. In this process, the goal of looking for cultural meaning is articulated with the

broader purpose of the amazon website of selling items. A-signifying processes represent

a site of articulation between signifying processes and the shaping of consumer practices

and subjectivities so that the cultural and the commercial are inseparable on the


amazon.com website. The a-signifying semiologies of amazon.com operate at two levels:

at the level of locating users through a process of restriction and at the level of granting a

specific site of agency to users that is centered exclusively on the production of

meanings.

Within the scope of this case study, a-signifying semiologies can be seen as

operating at the level of the articulation between discourse, technology and social and

cultural relations of power. Foucault’s notion of discourse is useful for examining the

rules that govern the activities of authors and readers and, by extension, users. Discursive

rules establish legitimacy - who has the right to write, use or read a text - as well as

specific methodologies - how to write, read and interpret a text. Through these discursive

rules, texts can be seen as expressing the articulations between broader narratives or

ideologies and power relations among actors within a specific context. The advantage of

Guattari’s framework is that it allows for a strong analytical grounding that does not

separate a broader context from a particular textual situation, but rather shows how social

relations of power are defined through the disciplining of human and non-human actors

and through the shaping of specific materialities and signifying systems. Furthermore,

Guattari’s framework makes it possible to see that signifying semiologies are inscribed

within technocultural a-signifying machines in that meaning production systems are

dependent upon specific cultural sensitivities, affinities and disciplines. In the case of

amazon.com, the cultural shaping of users is mediated through cultural as well as

technological factors. The a-signifying framework is therefore useful for examining the

“hidden pedagogies”, that is, “the law-like codes regulating online behaviour and access


to information” (Longford, 2005, p. 69).

The a-signifying semiologies deployed by amazon.com to control the practices of

users on the amazon.com website require constant surveillance. The shaping of users

requires the deployment of a system for tracking user behaviour. In that sense, as

Humphreys argues, “consumer agency (is) shaped by techniques of surveillance and

individuation” (2006, p. 296). Some of the more common forms of surveillance on

amazon.com include the use of cookies to store information about users so that web

pages can be customized. At the time of this study, amazon.com installed five cookies on

users’ computers, two of which had an extremely distant expiry date (1 January 2036), thus ensuring the tracking of users over the long term.

customization on amazon.com are such that, as Elmer argues, “consumer surveillance is

predicated on the active solicitation of personal information from individuals in exchange

for the promise of some sort of reward” (2004, p. 74). The reward offered by

amazon.com is a customized access to the website and the cultural experience it provides.

At the same time, maintaining a level of privacy by refusing to accept, for instance,

cookies, is described on the amazon.com website as detrimental to users. As Amazon

declares: “you might choose not to provide information, even if it might be needed to

make a purchase or take advantage of such amazon.com features as Your Profile, Wish

Lists, Customer Reviews, and Amazon Prime.”21 The freedom of choice offered to users

here is quite illusory, in that it becomes impossible to use the website without accepting

21 http://www.amazon.com/gp/help/customer/display.html/103-3604327-1223045?ie=UTF8&nodeId=468496


amazon.com’s surveillance tools.
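The mechanism behind such long-term tracking can be sketched with Python’s standard http.cookies module; the cookie name and value below are hypothetical, and the 2036 date simply mirrors the expiry observed in the study:

```python
from http.cookies import SimpleCookie

# A hypothetical persistent identifier with a distant expiry,
# comparable to the 1 January 2036 expiry noted above.
cookie = SimpleCookie()
cookie["ubid-main"] = "123-4567890-1234567"
cookie["ubid-main"]["expires"] = "Tue, 01 Jan 2036 00:00:00 GMT"
cookie["ubid-main"]["path"] = "/"

# The header a server would send; the browser then returns this
# identifier on every visit until 2036, enabling long-term profiling.
header = cookie.output(header="Set-Cookie:")
print(header)
```

The expiry date is the entire mechanism: until it passes, every request the browser makes to the site silently carries the same identifier, linking years of browsing into one profile.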

Amazon.com tracks geographic, demographic, psychographic and consumer

behaviour data (Elmer, p. 79) through cookies, invitations to give information on the “My

Profile” pages of the website, and the recording of items bought and viewed. As

explained in the amazon.com’s privacy notice, amazon.com collects different kinds of

data on users, including information given by users through, for instance, their wish lists

and profile pages; what amazon.com calls “automatic information” that is collected by

the website without asking the permission of users (i.e. cookies); e-mail communications,

including the capacity to know whether a user opens e-mails received from amazon.com;

and finally information from other sources such as merchants with which amazon.com

has agreements and amazon.com subsidiaries (i.e. Alexa Internet).22 Amazon.com was

also criticized in 2005 for its proposal to track not only users, but also item recipients

through the recording of gift-giving habits. In particular, patent 6,865,546: “Methods and

Systems of Assisting Users in Purchasing Items” offered a method for “determining the

age of an item recipient, such as a gift recipient” so as to, a year later, remind users of an

impending birthday and offer recommendations based on the age of the recipient.

Users are thus constantly monitored on amazon.com, and this monitoring is

accompanied by a set of rules on how to behave on the website. For instance,

amazon.com has a limit of 1000 words on customer reviews with a recommended length

of 75 to 300 words and does not accept reviews or discussion posts with “profanities,

22 http://www.amazon.com/gp/help/customer/display.html?nodeId=468496


obscenities or spiteful remarks.”23 Altogether, surveillance tools and the rules of

participation present on amazon.com serve to not only transform the user into an object

of knowledge, as Humphreys argues, but also to discipline users into adopting specific

behaviours. At the same time, the a-signifying machine on amazon.com does not simply

employ processes for restricting user activity within specific frameworks of discourse,

but also creates channels through which users can be productive. That is, amazon.com

cannot simply be seen as a repressive system, but also as a creative and productive

system that fosters specific kinds of user activities as well as new cultural practices and

values.

Acknowledging that a-signifying semiologies on amazon.com stabilize a cultural

and commercial experience based on specific signifying semiologies makes it possible

to further examine the paradox of the homogenization of the proliferation of meanings.

This takes place in particular at the level of the shaping of the cultural affinities of users.

That is, the amazon.com web architecture distributes spheres of activity for users, while

software machines act as agents of cultural stabilization. This process of stabilizing the

experience of users requires the definition of a specific horizon of expectations.

Lipovetsky’s argument that the plurality of meanings that circulate within Western

democracies is made possible through acceptance of specific principles is useful here. As

Lipovetsky (2002) argues:

Here is the paradox of consummate fashion: whereas democratic society is more and more capricious in its relation to collectively intelligible discourses,

23 http://www.amazon.com/gp/customer-reviews/guidelines/review-guidelines.html/103-3604327-1223045


at the same time it is more and more balanced, consistent and firm in its ideological underpinnings. Parodying Nietzsche, one might say that homo democraticus is superficial by way of depth: the securing of the principles of individualist ideology is what allows meanings to enter into their merry dance (p. 204).

Lipovetsky usefully points out that the play of meanings expressed in contemporary

consumer society is dependent on accepted and unquestioned cultural values, among them the claim to individuality. The pursuit of individualism as expressed by Lipovetsky

includes not only the quest for social status and social legitimacy, but also the pursuit of

“personal pleasure” through “psychological gratification” (2002, p. 145). As seen above,

assigning meanings to books on amazon.com represents an instance where those elements

of individuality are expressed. Processes of individualization on amazon.com are

included within a process of cultural homogenization and stabilization. That is, the

individuality of users as expressed through reviews, listmanias, etc. is always inscribed in

a process of homogenizing individualities within the amazon.com community.

Individualism can exist on amazon.com only through the homogenization and careful

definition of the channels through which individualities can be expressed.

The legitimacy of individuality is partly expressed on amazon.com through

processes of personalization and customization. In particular, the recording of surfing and

viewing patterns on amazon.com is made for the purpose of identifying the interests and

desires of users so as to produce lists of items that might correspond to those desires and

interests. In that sense, the cultural experience provided by amazon.com proceeds through

a dual dynamic of not only supporting users in their search for meaningful items, but also

of predicting desires (Humphreys, 2006, p. 299) through the software machine. The


recommendation software, for instance, interprets users’ behaviours and translates them

into interests and desires through personalized recommendations that are, in turn, further

inscribed within user-generated networks of meanings. In that sense, there is a “(self)

revolutionary and spiritual power of consumer profiling technologies - the ability of

hypercustomized products and services to unearth the real self” (Elmer, 2004, p. 7). In

“The Consumer as a Foucauldian Object of Knowledge”, Humphreys argues that the

process of individuation through the documentation of the user’s every move “serves the

purpose of chronicling past and future tendencies and essentializing them to the

individual, in the service of predicting future tendencies” (2006, p. 298). Humphreys

underlines that the individuality promoted on amazon.com is one that is centered on a

commercial imperative. Indeed, cultural tastes and interests are always expressed through

lists of either software-generated or user-generated recommendations - through

commodities to be bought. Furthermore, the process of recommendation as the constant

production of new meanings is delineated on amazon.com by a commercial imperative.

The omnipresence of the shopping cart on the amazon.com interface, the encouragement

to buy several books in order to receive discounts on the price of the books or on the

shipping costs, the push towards buying items within a certain time frame in order to

have them delivered within 24 hours all act as reminders to users that temporary closure in

the form of buying a book is the goal of the experience of the proliferation of meanings.

Last but not least, not all users of the website can write customer reviews - only users

who have previously bought something on the amazon.com website are allowed to

participate. Legitimizing oneself as a producer of content on the website requires active


consumption.
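The recommendation dynamic described above, in which recorded behaviours are translated into suggested commodities, can be sketched in its most generic form. This is a hedged illustration only: the item names and co-purchase counts are invented, and amazon.com's actual algorithm is not publicly specified in this detail.

```python
from collections import Counter

def recommend(user_history, co_purchases, n=3):
    # Score candidate items by how often they were bought together with
    # items already in the user's history, excluding items already owned.
    scores = Counter()
    for item in user_history:
        for other, count in co_purchases.get(item, {}).items():
            if other not in user_history:
                scores[other] += count
    # Translate the scores into a ranked list of suggestions.
    return [item for item, _ in scores.most_common(n)]

# Hypothetical co-purchase data for illustration.
co = {"book_a": {"book_b": 4, "book_c": 1}}
print(recommend(["book_a"], co))  # -> ['book_b', 'book_c']
```

The sketch makes the chapter's point concrete: the output is always a list of further commodities, whatever the content of the user's tastes.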

Furthermore, any form of community exchange and communication created on

amazon.com serves as a reinforcement of the shaping of users as consumers. This process

can be best understood as social personalization, which involves both the process of

shaping a user’s individuality through constant comparison with other users and the

process of individualizing any form of sociality. The user profile page on amazon.com,

for instance, looks like a typical social networking page with a few amazon.com add-ons.

A user can post his/her picture and keep track of and get in touch with friends. As

amazon.com declares:

Your Profile page is where your friends and other people find you and learn more about you. It’s also where you access and manage your community content, recent purchases, reminders, friends and other people on Amazon.com. You can see what your friends are up to, add new friends and favorites, create new content, and update your public information.24

This description seems at first to provide users with forms of sociality that are commonly

offered on Web 2.0 sites such as MySpace or Facebook. However, the kind of

information that users can provide to their network of friends on amazon.com includes

what amazon.com calls “public activities” such as reviews, search suggestions, product

tags, important dates, listmanias and Wish Lists. The network of sociality offered on

amazon.com is therefore one that is exclusively centered on objects already bought or to

be bought, either for oneself or for one’s friends. For instance, as Humphreys argues, the

Wish List “represents the sum and essence of the individual” on amazon.com (2006, p.

24 http://www.amazon.com/gp/help/customer/display.html/ref=cm_pdp_whatsThis_moreHelp/002-1675217-0578427?ie=UTF8&nodeId=16465241


298). Publicity on amazon.com therefore means the representation of oneself as a

consuming actor. In the same way, any form of sociality on amazon.com is one that is

directed towards the consumption of objects. For instance, the rules for writing customer

reviews strongly encourage users to focus on objects rather than on the content of other

reviews. Thus, one should not comment “on other reviews visible on the page” because

“other reviews and their position on the page are subject to change without notice.” Thus,

a customer review “should focus on specific features of the item and your experience

with it.” 25 The signifying semiologies offered to users on amazon.com do not simply

deal with the production of meanings but are designed, through an a-signifying system of

discursive rules and social conventions, to promote books as objects of consumption and

users as consuming actors.

Users are not only tracked on amazon.com, they are also encouraged to participate

in their own individualization and socialization as consuming agents through writing

comments, tagging, rating, etc. Thus, the stabilization of signifying semiologies on

amazon.com is done, at the a-signifying level, through the development of commonalities

not at the level of content, but at the level of form. For all its diversity of content, the

amazon.com interface offers a narrow range of interaction to its users: search for titles,

build content so that titles are inscribed within a process of consumption, or buy items.

The seemingly infinite activity of users at the level of content is thus counterbalanced by a

narrow set of practices offered to users. In that sense, users are integrated within a

25 http://www.amazon.com/gp/help/customer/display.html/002-1675217-0578427?ie=UTF8&nodeId=16465311


commercial model and it becomes impossible to conceptualize a sphere of activity for

users that is not already articulated with a software machine that translates commercial

imperatives into a quest for individuality.

The integration of users within a commercial system is not limited to amazon.com

but is also central to any online commercial system that uses user-generated content to

build cultural meanings. Thus, this process is not only related to book reviews on a

website such as amazon.com, but also to information about social networks on sites such

as Facebook, or user-generated gaming content on spaces such as Second Life. As

Coombe, Herman and Kaye declare: “participatory culture exists in an uneasy but

dynamic relationship with ‘commodity culture.’ The former continually appropriates and

remakes what is produced and articulated by media corporations, while media

corporations continually try to incorporate consumer productivity and creativity into

profitable commodity forms” (2006, p. 193). The difference between amazon.com and

the gaming space described by Coombe et al. is that the dynamic relationship between

user productivity and commercial forces is integrated within the amazon.com machine so

that any form of resistance such as, for instance, poaching, or using the software for other

means, is impossible. Similarly, any kind of ironic use of content on amazon.com would

be limited, insofar as the overall process of meaning production on the website operates

through the integration of specific users within a larger social group. Individual

resistance, for instance, would not be immediately visible on a website that proceeds by

examining similarities and excludes strong differentiations from its internal logic. Along



with a strong history of patenting all aspects of its architecture, amazon.com’s conditions

of use grant users a “limited license to access and make personal use of this site and not to

download (other than page caching) or modify it, or any portion of it, except with express

written consent of amazon.com.”26 Furthermore, users do not even own the intellectual

property of the content they produce through interaction with the recommendation

software or through writing customer reviews and producing other forms of

communication. As the conditions of use on the amazon website state: “Amazon.com has

the right but not the obligation to monitor and edit or remove any activity or content” and

“takes no responsibility and assumes no liability for any content posted by you or any

third party.”27 Users are therefore made responsible for the user-produced content posted

on the website, but are deprived of their intellectual property of that very content:

If you do post or submit material, and unless we indicate otherwise, you grant Amazon.com and its affiliates a nonexclusive, royalty-free, perpetual, irrevocable, and fully sublicensable right to use, reproduce, modify, adapt, publish, translate, create derivative works from, distribute, and display such content throughout the world in any media.28

The shaping of user practices and the commodification of user-generated content thus

transform users into delegates of the amazon.com a-signifying machine. This, in some

ways, is a reversal of Latour’s definition of delegates as objects that stand in for actors,

that is, of delegates as “technical delegates” (2004, p. 189). In the case of amazon.com,

users as human actors are folded within a system; they are shaped so that they translate a

26 http://amazon.com/gp/help/customer/display.html/102-5610241-3194509?ie=UTF8&nodeId=508088
27 Ibid.


commercial imperative into action, so that they become not only the subjects but also the

agents of the process of commodification on amazon.com. This process of delegation also

operates through the dynamic of disciplining users as well as granting them open spaces

of agency through access to signifying semiologies. As Humphreys suggests (2006, p.

304), there is a process of internalization of the marketing gaze (i.e. the profiling and

recommendation software) so that users internalize the discipline of consuming that is

imposed on them by being constantly encouraged to gaze at objects of consumption and

to gaze at other users through engaging with user-produced content. However, this process

is accompanied by a more productive one whereby users can fulfill their sense of

individualization - their quest for social status and well-being. It is only by

acknowledging the forms of freedom allowed on amazon.com that it is possible to

understand the attraction of a space built on the erosion of privacy and the

commodification of intellectual property. As Humphreys usefully points out (2006, p.

300), amazon.com does not evaluate user-produced meanings - it simply translates them

into commodities. That is, amazon.com does not judge the user-produced content. On the

contrary, it is designed to plug that content into the appropriate channels so that cultural

tastes can be realized through the consumption of commodities. In that sense,

amazon.com provides freedom from cultural and social evaluation, thus not only shaping

users as consumers, but also as “free” individuals liberated from the social gaze.

The process of defining users is not simply limited to the space of amazon.com,

but is also extended through a network of affiliates. Amazon.com has partnerships with

28 Ibid.


giant off-line and online retailers such as Target and Office Depot. As well, amazon.com

has developed a network of associates so that websites can feature links to “amazon

products and services” and “receive up to 10% in referral fees in doing so.”29 The

associate network thus serves to further advertise amazon.com on the Web. More

recently, amazon.com has been marketing the services that constitute the level of

expression of the amazon.com platform as well as licensing the data (or content) recorded

on amazon.com. Amazon Web Services was launched in 2006 and operates by selling

services developed for the amazon platform for a fee. Amazon Web Services are similar

to APIs (Application Programming Interfaces), which are smaller programs and functions

developed primarily using XML. For instance, amazon.com’s Simple Storage Service is

designed to store and retrieve data and “gives any developer access to the same highly

scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its

own global network of websites.”30 In the same way, amazon offers solutions for building

a shopping cart and order forms. In terms of selling content, amazon.com not only

provides the possibility for third-party websites to use the content of the amazon.com

catalogue,31 but also offers, through the amazon.com associates network, tailored

amazon.com content.32 For instance, when a user goes on an amazon.com associate

website, the amazon.com cookies on that user’s computer are activated so as to offer

personalized content. The marketing of both the layer of expression and the layer of

29 http://affiliate-program.amazon.com/gp/associates/join/ref=sv_hp_2/102-5610241-3194509
30 http://www.amazon.com/gp/browse.html?node=16427261
31 http://www.amazon.com/E-Commerce-Service-AWS-home-page/b/ref=sc_fe_l_2/102-5610241-3194509?ie=UTF8&node=12738641&no=3435361&me=A36L942TSJ2AJA


content by amazon.com on the broader Web serves as a means to export amazon.com’s a-

signifying semiologies, and therefore the specific cultural affinities (of individualization

through consumption) that are associated with it. Through those strategies, amazon.com

adds another mode of being for Web users, which, for instance, departs from the kinds of

use that are not centered on notions of social status acquisition or well-being. This

multiplication of modes of being for users is thus expressed through the juxtaposition of

multiple networks - not only a network of websites produced by user surfing, but also

networks of commercialization that superimpose themselves onto Web flows.
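The service model described above, in which the Simple Storage Service exposes Amazon's storage layer to third-party developers, can be illustrated schematically. This is a hedged sketch only: in an S3-style service each object lives at a bucket/key URL and is written with a plain HTTP PUT; the bucket and key names here are invented, and real requests also carry AWS authentication headers, omitted for brevity.

```python
def build_put_request(bucket, key, body):
    # Shape of an S3-style object write: the object is addressed by a
    # bucket/key URL (hypothetical names used here for illustration).
    url = "http://%s.s3.amazonaws.com/%s" % (bucket, key)
    headers = {
        "Content-Length": str(len(body)),
        "Content-Type": "application/octet-stream",
    }
    return url, headers, body

url, headers, _ = build_put_request("example-bucket", "notes/hello.txt", b"hello")
print(url)  # -> http://example-bucket.s3.amazonaws.com/notes/hello.txt
```

The point of the sketch is the chapter's argument in miniature: what is exported is not content but a format, a standardized way of addressing and moving material.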

Applying Guattari’s mixed semiotics framework to the case of meaning

production on amazon.com thus reveals how signifying semiologies operate through a

specific cultural mode (the meaning of books is established through differentiation within

similarity) and shape, through their articulation within a-signifying semiologies, the

cultural affinities of users as consuming individuals. The amazon.com platform thus

homogenizes the production of meanings not at the level of the content of the meanings

being produced, but at the level of the format of expression of those meanings, that is,

through the shaping of the cultural perception of meanings. While the consequences of

these new types of power relationships will be examined in more detail in the synthesis

chapter of the dissertation, it is important to underline here the need to reconsider the role

of user activity in an environment such as amazon.com. As seen throughout the last

section of this chapter, amazon.com does not simply restrict users, it also offers some

sense of freedom to pursue meanings and explore cultural tastes. In that sense, there is a

32 http://rcm.amazon.com/e/cm/privacy-policy.html?o=1


level of indeterminacy on the website in that, for instance, users are not forced to buy

products, but simply encouraged to buy products. Such a space of indeterminacy allows

one, for instance, to simply search for a reference or compile a list of books to be

borrowed from the public library or ordered from an independent bookstore. These

instances of indeterminacy, however, should not be confused with possibilities of

resistance, as users are always interpellated as consuming individuals on the amazon

website. Rather, it is necessary to examine the multiple layers at which political and

cultural interventions could take place but do not. This will be done in the final chapter of

this dissertation.

The examination of the remediation of the book as a cultural object within an online

environment reveals the complexity of the networks that allow for a specific mixed

semiotics dynamic to emerge. Examining amazon.com as an actor-network requires

acknowledging the complexity of a-semiotic, signifying and a-signifying articulations as

they redefine the agency of actors. The agency of human actors in particular is located

within the signifying sphere, with specific a-signifying constraints, in the form of the

commercial imperative, put upon them. In that sense the production and circulation of

meanings on amazon.com can only be studied through an actor-network analysis of the

software, commercial and human actors that remediate books within an online

environment. The tracing of the roles played by different actors in the remediation of

books as the circulation of meanings on the website leads to acknowledging the role

played by a-semiotic and a-signifying processes in the shaping of specific signifying

semiologies. The articulation between the cultural search for meanings and a commercial


imperative on amazon.com can thus be studied through a focus on the circulation of

cultural objects, such as books. However, the expansion of Amazon onto the broader

Web highlights the need to examine the circulation of not only cultural objects within

specific online spaces, but also of specific a-signifying formats within the broader Web.

While Amazon expands itself through making its services available and giving some

restricted access to its database to third-party sellers, it is not the only model of format

expansion that exists on the Web. The case of Wikipedia, in that sense, provides another

perspective through which one can examine other articulations between a-semiotic,

signifying and a-signifying processes.


Chapter 4

Mixed Semiotics and the Economies of the MediaWiki Format

The amazon.com case study revealed how cultural practices within a commercial

environment are shaped through their mediation by layers of software and systems of

signification. The discursive and cultural practices of being a user on amazon.com are

important not only because amazon.com is one of the most popular online retailers of

cultural entertainment, but also because it has been extremely active in promoting its

model on the Web, both in terms of exporting a technocultural format (the amazon.com

Web services) and a brand (i.e. amazon search boxes that can be integrated into a

website). The duplication of web architectures on the Web highlights the importance of

analyzing the relationships between cultural forms and technical formats.

While the circulation of the amazon.com format is focused solely on a

commercial model, other technical platforms exist that can be adapted to a broader range of

cultural goals. The Wikipedia model is one of those. Wikipedia makes use of a specific

wiki architecture to produce content. Wikis first appeared in 199533 and were designed to

allow multiple users to add, delete and edit content. Wikipedia has been developed by the

Wikimedia Foundation as one of its projects to “empower and engage people around

the world to collect and develop educational content under a free license or in the public

domain, and to disseminate it effectively and globally.”34 To achieve this goal, the

Wikimedia Foundation has not only created free-content projects, but also developed the

33 http://en.wikipedia.org/wiki/Wiki


wiki platform to support those projects - MediaWiki. MediaWiki is an open-source

project licensed under the GNU General Public License (GPL) and as such can be

downloaded freely and can be modified and distributed under the same licensing

conditions.

How a cultural ideal of creating and storing knowledge and a technical platform

such as MediaWiki can be articulated with each other to create a cultural form such as

Wikipedia is the starting question for this case study. The technical layer enables multiple

users to participate in the building of content, and thus creates new discursive practices of

collaborative authorship. It is important to ask, in turn, how these technical features

enabling specific types of discursive practices are articulated with the broader technical,

commercial and cultural networks of the Web to become cultural forms. Wikipedia is an

exemplar of the articulation of a Wiki platform with a cultural desire to create the largest

repository of the world’s knowledge. While there is a significant body of research on

Wikipedia as a new cultural form, there has not been much in the way of a critical

exploration of the adoption of the MediaWiki software package on the Web. This case

study is intended to examine the circulation of the MediaWiki Web architecture and its

articulation with commercial, cultural and discursive values and practices. This requires

an examination of the links between Wikipedia’s technical features, discursive practices

and cultural form. This serves as the basis of comparison for examining how other online

projects use the MediaWiki platform. Furthermore, using Guattari’s mixed semiotics

framework allows for an analysis of not only the changes in discursive rules and cultural

34 http://wikimediafoundation.org/wiki/About_Wikimedia


practices in a sample of MediaWiki websites, but also for an exploration of the ways in

which MediaWiki’s technocultural capacities are captured and channeled within

commercial and non-commercial webs.

1. Technodiscursive Mediations and the Production of Wikipedia as a Technocultural

Form

The examination of the articulation between technical features, discursive rules

and cultural forms in the case of Wikipedia first requires an acknowledgement of the

complementariness between Guattari’s mixed semiotics framework and Foucault’s notion

of discourse. Foucault’s notion of discourse encompasses a body of thoughts, writings

and institutions that have a shared object of study. Discourse is also to be understood as

the space where power and knowledge are joined together (1990, p. 100). This includes

the relations among subjects and between subjects and objects as well as the legitimate

methodology or rules through which one can talk meaningfully about objects and

construct representations. In that sense, discourse is the ensemble of processes and

dynamics through which a “reality” is created. In relation to Guattari’s analysis of

signifying semiologies, discursive rules can be seen as articulating the levels of

expression and content - defining the proper rules of expression and who can use these

rules (i.e. the rules of authorship and readership), as well as the values to be expressed. In

that sense, examining discursive rules means mapping out the agents and processes

through which the linguistic machine is articulated with power formations, that is, how

the field of signification is articulated with the social, economic and moral dimensions of

power in order to shape a homogeneous “reality”, or, in the case of Wikipedia, a


homogeneous cultural form. As seen in the first chapter, the missing dimension in

Foucault’s examination of discourse is the technical dimension - the role played by media

in enabling and restricting discursive rules and roles. Subsequently, the question that is

raised is about how to reconsider the question of discourse through its shaping within

technocultural processes. In relation to Guattari’s mixed semiotics framework, Foucault’s

notion of discourse does not explicitly recognize the importance of the category of

matter, and as such the a-semiotic and a-signifying processes that involve matter.

Nevertheless, the question of discursive rules constitutes a starting point for examining

the constitution of cultural forms on the Web.

The circulation of a cultural form such as Wikipedia requires a critical

reexamination of the theoretical framework behind the notion of discourse. This was

made clear by a study conducted by Greg Elmer and me on the circulation of Wikipedia

content on the Web (2007). The idea behind this study was to assess the legitimacy of the

Wikipedia model by tracking how Wikipedia content was being used on the Web -

whether it was cited as a source, criticized or plagiarized. In so doing, our expectations

were to gather texts using Wikipedia content in order to do a discourse analysis of how

content was reframed through being re-inscribed within new textual environments. We

entered two sentences lifted from two Wikipedia articles in the Google search engine and

analyzed the top eleven returns. Our findings showed that Wikipedia content was used for

purposes of information and argumentation only in a minority of sources (three out of

22). The rest of the time, Wikipedia content was copied identically into websites that

generally presented themselves as encyclopedias. Undertaking a discourse analysis of


those websites was unnecessary, since the content of the websites was made up of free

content from Wikipedia and other open source content sites. The textual context within

which Wikipedia content was relocated thus did not vary. However, this did not mean

that there were no changes in the discursive status of Wikipedia content. Our findings

revealed that Wikipedia content was used first for purposes of commercialization through

the framing of content with advertising, and second for purposes of search engine

optimization, where Wikipedia content was used so that a website could appear in

search engine results and thus attract traffic to be redirected to networks of advertising.
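The sorting step behind these findings can be sketched in simplified form. This is a minimal illustration, not the study's actual instrument: given a page returned by the search engine and the probe sentence lifted from Wikipedia, it asks whether the sentence was reused verbatim and whether the page attributes the text to Wikipedia at all.

```python
def classify_reuse(page_text, probe_sentence):
    # Verbatim reuse: the probe sentence appears unchanged in the page.
    verbatim = probe_sentence in page_text
    # Crude attribution check: any mention of Wikipedia on the page.
    attributed = "wikipedia" in page_text.lower()
    if verbatim and attributed:
        return "attributed reuse"
    if verbatim:
        return "unattributed copy"
    return "citation or paraphrase"
```

A real analysis would of course need fuzzier matching and a finer-grained notion of attribution; the sketch only makes explicit the distinction the study draws between citation and wholesale reduplication.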

This change in the discursive status of Wikipedia from freely accessible

knowledge to traffic magnet within a commercial network is done through a series of

technical interventions to reframe content. The websites under study used some form of

dynamic content creation, so as to automatically format the content and form of a

website, and thus automatically frame content with advertising. With regard to

Guattari’s mixed semiotics framework, such a discursive change takes place through an

intervention at the level of expression. That is, the ensemble of expressive materials

available on the Wikipedia website is replaced by another ensemble composed of

programs acting at the linguistic level - programs that shape data into a user-readable

Web interface - and programs intervening in an extra-linguistic dimension - for instance,

commercial software to insert sponsored and targeted advertising. In turn, these series of

interventions at the expression level change the content of the Wikipedia text not in terms

of what is being signified, but in terms of the value of the signified content, from free

content to commercialized content. This operation is part of a broader a-signifying


machine that involves not only materials for signification, but also the material intensities

contained within the category of matter. In particular, the reduplication of Wikipedia

content onto other websites serves to manipulate specific kinds of material intensities,

such as the size of a website and user traffic. The a-signifying machine through which the

online commercialization of Wikipedia content can be achieved thus uses signified

content to create new material intensities. Furthermore, the use of Wikipedia content

takes place through a series of possibilities that are not only technical, but also legal, as

Wikipedia content is under a copyleft license and can thus be reduplicated for free as

long as it is kept under the same license. The a-signifying machine for commercializing

Wikipedia content thus encompasses technical, commercial and legal processes in order

to transform the status of Wikipedia content.

This study of the circulation of Wikipedia content on the Web through the

mediation of the Google search engine revealed that the concept of discourse needs to be

critically reexamined to take into account its technical mediation. The conventional

methodology of discourse analysis could not have been usefully applied in this online

context, and the mixed semiotics framework allowed for a more comprehensive

framework to trace the technocultural networks at the basis of such an online commercial

machine. However, this study of Wikipedia content was limited in that it used only two

articles from Wikipedia and focused exclusively on the question of content and not on the

question of the Wikipedia platform - the MediaWiki software package. Indeed, if the

circulation of Wikipedia content on the Web is mostly dominated by processes of

reformatting the same content, the dynamic that needs to be further studied concerns the


circulation of form through the articulations between software and commercial, cultural

and political interests. As such, the present case study expands a previous analysis of the

circulation of Wikipedia content by examining the circulation of the Wikipedia format on

the Web.

Examining the circulation of the Wikipedia format requires a comparison between

the techno-discursive practices present within Wikipedia and those that exist on

MediaWiki websites not officially related to Wikipedia and other Wikimedia projects.

The internal logic of Wikipedia is a starting point for examining how discursive rules and

cultural values are mediated and embodied through technical layers. However, before it is

possible to map the effects of the techno-discursive networks produced on Wikipedia

through Guattari’s mixed semiotics framework, it is necessary to analyze the relationship

between cultural values, discursive rules and technical layers through an actor-network

theory approach. That is, in order to study the mixed semiotics of Wikipedia, it is

necessary to examine how a technical platform - the wiki format - has been designed to

embody a cultural form - Wikipedia. The articulations, delegations and translations

between the cultural and the discursive need to be identified in order to recognize the

range of discursive actions made available by the system.

The genealogy of Wikipedia as a free-content encyclopedia project is complex, as

it involves long-standing cultural concerns about creating, storing and transmitting

knowledge as well as the revival of those concerns within the ideal of freer and better

communication that have been associated with the growing popularity of the Internet and

the World Wide Web in the late 1990s. Central to Wikipedia as a cultural model is the


idea that new communication possibilities such as hypertextual communication and

collaborative authorship can create new and better models of organizing knowledge

production and circulation. Wikipedia was launched in January 2001 as a complement to

an online peer-reviewed encyclopedia project - Nupedia. Wikipedia’s popularity has

grown exponentially since its creation and it now boasts more than 7.5 million articles in

253 languages.35 The first characteristic of Wikipedia is the new mode of knowledge

production it implements. As Yochai Benkler (2006, p. 70) describes it, Wikipedia has

three characteristics:

- Wikipedia is a collaborative authorship project where anyone can edit, where

changes to texts are visible and all versions are accessible. Anybody can thus add content

to Wikipedia in a transparent manner.

- The process of collaborative authorship departs from a traditional model of

producing knowledge by relying on authors with credentials or through a peer-review

process. The goal of Wikipedia is to strive for consensus on a neutral point of view

whereby all significant views must be represented fairly and without bias.36

- Finally, Wikipedia content is freely accessible through its release under the

GNU Free Documentation License (GFDL). According to the GFDL, Wikipedia content

can be used by third parties if they comply with the following requirements: “Any

derivative works from Wikipedia must be released under the same license, must state that

it is released under that license and reproduce a complete copy of the license in all copies

35 http://en.wikipedia.org/wiki/Wikipedia
36 http://en.wikipedia.org/wiki/Wikipedia:Neutral_point_of_view


of the work, and must acknowledge the main author(s) (which some claim can be

accomplished with a link back to that article on Wikipedia)”.37

There is a direct affiliation between Wikipedia as a project for knowledge

production and the production processes adopted by the free software movement. In

particular, collaborative authorship and making products freely available through copyleft

have been the core characteristics of the free software movement. The open source

software movement is based on the idea that progress depends on making resources

available for free and that improving on those resources can best be done through a

commons of volunteers (Benkler, 2006; Jesiek, 2003). This finds a direct echo in

Wikipedia’s reliance on anonymous volunteers to build content and on its non-proprietary

approach to content circulation. Furthermore, the technical platform for

Wikipedia - the MediaWiki software package - has also been released under a GNU

General Public License (GPL). Thus, anybody can modify the original software package

as long as the source code is made available under a GPL. Wikipedia can be seen as

another instance of the “high-tech gift economy” (Barbrook, 1998) where the

commodification and privatization of information and communication technologies are

replaced by the free exchange of information and communication technologies. The

characteristic of Wikipedia as an extension of the free software movement is that it deals

not only with technical layers, but also with the content layer. By making content

available under the GFDL, Wikipedia represents an instance where processes put in place

for the development of open source software are exported onto the level of content in

37 http://en.wikipedia.org/wiki/Wikipedia:Mirrors_and_forks


order to produce new discursive rules and cultural values of knowledge production.

The affiliation between the free software movement and the Wikipedia model

thus shows a first series of translations of ideals of knowledge production onto the

technical field and then onto a discursive one. As suggested by Latour (1999, p. 311), the

translation of ideals of collaborative non-proprietary production from a technical

platform (i.e. Linux) to another one (MediaWiki), and from a specific type of object

(software in the case of MediaWiki) to another one (signified contents on Wikipedia)

represents an instance where cultural interests are displaced and modified. Because there

is a shift from the field of the technical to the field of the techno-discursive in the case of

Wikipedia, the cultural impact of the Wikipedia model also challenges a longstanding

principle of authorship. In the case of software, collaborative work is envisioned as a

process whereby people work in common in a voluntary and free (as in not in exchange

for a salary) manner in order to achieve a better product than what would be produced in a

private and proprietary context. However, when collaborative work becomes

collaborative authorship as in the case of Wikipedia, it puts into question the very model

of encyclopedic knowledge that Wikipedia is attempting to enhance. As it relies on

collaborative authorship rather than on the credentials of experts to produce articles,

Wikipedia puts into question the model of legitimizing truth claims as it has traditionally

been developed in modern Western societies. As Tom Cross (2006) puts it:

Our society has developed a certain expectation of what an encyclopedia should be. We expect it to be an authoritative, reliable reference that provides basic information about a wide variety of subjects. Encyclopedias have traditionally been produced by companies with teams of subject matter experts who compile information and fact check its accuracy. The idea that comparable authority could come


from a resource that can literally be edited by anyone, regardless of their level of expertise, seems to defy logic.

While the encyclopedia model of authorship is different from the discursive function of

the author within a fictional context as described by Foucault (2003), both types of author

function nevertheless share the same characteristic of presenting the figure of the author

as defining the specific status and discursive function of a text. Knowledge production in

the conventional encyclopedic context does not so much require a recognizable figure as

it does involve a set of scholarly credentials that demonstrate expertise on a given topic.

Those credentials validate the truth claims that are made in an encyclopedic article. By

contrast, Wikipedia’s model is such that anybody can participate in content creation.

There is not a single recognizable author with a set of credentials on Wikipedia, but an

anonymous stream of volunteers whose credentials are not listed or recognized in the

genealogy of an article. Furthermore, by offering a model of knowledge production that

operates outside of the traditional model of authorship, Wikipedia also puts into question

the conventional dichotomy between authors and readers. Instead of a strong separation

between knowledge producers and receivers, the broader category of the user emerges,

from the “lurker” who only reads content to contributors modifying content on the

Wikipedia platform and exporting content onto other online and offline formats (i.e.

websites, academic papers, news sources). Knowledge production and circulation are

thus part of the same continuum on Wikipedia. As Ito (in Cross 2006) points out, the

authority of a Wikipedia article does not come from the expertise of content producers

but from the capacity of an article to remain unchanged as it is being viewed by

thousands of users who have the ability to edit the content they are reading. Thus, the


production of Wikipedia content in a collaborative setting distributes the discursive

function of authority across a spectrum of users as opposed to locating it within the

category of the author as distinct from that of the reader.

The cultural and discursive changes brought about by Wikipedia not only concern

the category of the user and the ways in which the authority of a text is established, but

also how knowledge circulates in a hypertextual environment. Wikipedia relies heavily

on embedding hyperlinks within textual entries as a way of navigating its websites.

Figure 30 - The Wikipedia Homepage

This hypertextual organization can be seen as linking the cultural ideal of making the

“sum of all human knowledge”38 accessible and the new cultural possibilities offered by

38 http://www.msnbc.msn.com/id/16926950/site/newsweek/


hypertext technologies. Indeed, the encyclopedic model offered by Wikipedia is

reminiscent of Vannevar Bush’s concept of the Memex (1945) as a way of accessing vast

amounts of information through trails of association crossing through conventional

boundaries and categories. Nelson’s work on hypertext as non-sequential writing as well

as the collaborative aspect of his Project Xanadu (1965) can be seen as a cultural

influence on Wikipedia’s hypertextual organization. Furthermore, the fluidity of the

circulation of information on Wikipedia also changes the discursive status of

encyclopedic texts. Because of the constraints of print in terms of the slowness at which

text can be modified or created, traditional encyclopedias present texts as stable units,

where the information contained in the text is supposed to be valid for a long period of

time. On the contrary, Wikipedia text is subject to change in an instantaneous manner as

Internet technologies make textual changes easier and cheaper to produce than with paper

technology. The process on Wikipedia for creating content proceeds by calling for

participation through the creation of “stubs” that describe a given topic in a general

manner. Users are then invited to contribute and content can always be added or modified

to include the latest events. There is a fluidity of meaning that is built into Wikipedia and

thus a “new modality of social production of knowledge enabled by the contribution of

social software, digital media and peer-to-peer collaboration” (Alevizou, 2006). As

Alevizou (2006) further argues, Pierre Lévy’s notion of collective intelligence as

“universally distributed intelligence, constantly enhanced, coordinated in real time” thus

finds an echo in Wikipedia. Indeed, constant progress rather than stabilization is the norm

on Wikipedia. This fluidity of meaning represents an articulation between the dynamics


of the free software movement and the new cultural status of online encyclopedias.

Constant changes are at the core of the free software process, with a constant stream of

beta versions, updated versions, patches to fix bugs and add-ons to create new features.

Constant upgrading of free software is thus a norm and this process can be characterized

as one of fixing all the bugs that keep appearing as the software has to fit into new

environments (i.e. a new operating system, other software). Displaced onto the

encyclopedic model, the process of constant amelioration goes against the convention of

freezing meaning into a stable text capable of enduring change without losing its

accuracy.

The genealogy of Wikipedia as a cultural model is thus complex, and represents

the articulation of different cultural ideals of knowledge production and circulation as

they have emerged within or been reformulated by the new processes made available by

information and communication technologies and by the free software movement. There

is thus a series of translations that take place in the formation of Wikipedia from

longstanding cultural concerns about creating a repository of the world’s knowledge to

the new ideals of democratic and collaborative knowledge production as they are

envisioned with the rise of new communication technologies. The Wiki format used by

Wikipedia can first be seen as a means to embody those cultural and discursive values.

As explained on the MediaWiki website,39 online publishing makes information easily

accessible because of the low cost of adding new information. Collaborative knowledge

production and collaborative authorship are made easier through not having to login to


edit content, the ease of editing and changing content through the implementation of a

simplified syntax that is more user-friendly than HTML coding, and through the tracking

of all edits and versions as well as the ease of reversal to previous versions. Such a

system makes it possible to have discussions about the content of an article in order to

reach consensus and to reach the discursive ideal on Wikipedia of a “neutral point of

view” that represents “fairly, and as far as possible without bias, all significant views that

have been published by reliable sources40”. Finally, as there are multiple users changing

the content of a wiki, thus making change a common feature, the traditional hierarchical

navigation menu is not able to integrate all those changes. Hyperlinks, search tools and

tags are thus the preferred modes of navigation and organization.
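The wiki editing features described above (a simplified markup that is easier than HTML, the tracking of all edits and versions, and easy reversal to previous versions) can be illustrated in miniature. The following Python sketch is purely hypothetical and greatly simplified, not the actual MediaWiki implementation: it converts a tiny subset of wiki-style syntax ('''bold''' and [[internal links]]) to HTML and keeps every saved revision so that any earlier version can be restored.

```python
import re

class WikiPage:
    """A minimal, hypothetical model of wiki-style editing:
    simplified markup, a full revision history, and reversal."""

    def __init__(self):
        self.revisions = []  # every saved version is kept, oldest first

    def edit(self, text):
        self.revisions.append(text)  # edits never destroy earlier versions

    def revert(self, index):
        # restoring an old version is itself recorded as a new edit,
        # so the vandalized version remains visible in the history
        self.edit(self.revisions[index])

    @property
    def current(self):
        return self.revisions[-1] if self.revisions else ""

    def render(self):
        """Convert a tiny subset of wiki syntax to HTML."""
        html = re.sub(r"'''(.+?)'''", r"<b>\1</b>", self.current)
        html = re.sub(r"\[\[(.+?)\]\]", r'<a href="/wiki/\1">\1</a>', html)
        return html

page = WikiPage()
page.edit("'''Wikipedia''' is a [[free content]] encyclopedia.")
page.edit("Vandalized!")
page.revert(0)  # restore the first version; the history keeps all three
print(page.render())
```

Because reversal simply re-saves an earlier version, fighting vandalism is as cheap as vandalizing, which is one technical condition of the stability discussed below.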

It would be too simple, however, to see a direct equivalency between cultural

ideals and the implementation of the discursive rules stemming from these cultural ideals

through new technologies of communication. The question that is raised by Wikipedia is

about how the discursive and the technical are articulated so that they shape a stable

cultural form. In the case of amazon.com, such articulations were explained through the

mapping of the different kinds of semiotic, a-semiotic and a-signifying machines, and the

question of the stability of the system did not appear. Indeed, as the Amazon.com

architecture is entirely private, the articulations between the level of expression and

content are made by a sole entity - Amazon. In the case of Wikipedia, the very openness

of the system makes stability a recurrent issue. As anybody can edit content, multiple

39 http://en.wikipedia.org/wiki/Wiki
40 http://en.wikipedia.org/wiki/Wikipedia:Neutral_point_of_view


articulations are made possible. Thus, the use of Guattari’s mixed semiotics framework is

different in the case of Wikipedia than it is in the case of amazon.com. In the original

formulation, Guattari presents the mixed semiotics framework as allowing for the

identification of the actors who have the right and legitimacy to articulate the linguistic

machine with broader power formations so as to establish a homogeneous reality.

Guattari identifies the state as a central actor in this articulation and invites us to use the

mixed semiotics framework to identify the spheres of influence of those central actors. In

the case of amazon, the central actor was amazon.com itself, as the agency granted to

users on amazon.com is orchestrated to fit into the broader commercial machine defined

by Amazon. In the case of Wikipedia, it becomes problematic to try to identify a central

actor in charge of articulating the level of expression with that of content, since the

Wikipedia system is collaborative and includes the possibility of change at the content

and software levels. That is, anybody can change content on Wikipedia, and anybody can

use and change the MediaWiki software package for their own particular uses. A

common problem on Wikipedia is vandalism, a famous example being the alteration of a

Wikipedia article on John Seigenthaler - a former aide to U.S. Senator Robert Kennedy -

to suggest that he was a suspect in the murders of John F. Kennedy and Robert F.

Kennedy (Langlois and Elmer, 2007). Vandalism is an instance where the stability of the

Wikipedia model is put into question. That is, the articulation between the level of

expression - the linguistic and technical tools made available by the Wiki format - and the

level of content - the discursive status of text as collaborative knowledge propagating

valid truth-claims - is undermined through vandalism.


While the Amazon.com case study focused on examining how a commercial actor

defined specific semiotic, discursive and cultural rules, the main research question about

Wikipedia concerns how a range of actors can rearticulate the levels of

expression and content, as well as the discursive and technical domains. In that sense, the

mixed semiotics framework can benefit from the methodological insights provided by

Actor-network theory and, in particular, Latour’s exploration of the processes of

mediation whereby human and non-human actors are assembled in order to realize a

specific program of actions (1999, pp. 178-193). The four meanings of mediation as

defined by Latour are particularly relevant to the case of the stabilization of content

production and circulation on Wikipedia. The first meaning of mediation as “goal

translation”, whereby an original goal is modified as more actors are enlisted to realize

that very goal, highlights the need to examine the minute changes that are produced when

a technical device is created to embody a cultural ideal. In the case of Wikipedia, this

takes place especially in specific uses of Wikipedia as a real-time communication

platform, which go beyond the domains of knowledge traditionally covered by

encyclopedias. As Holloway, Bozicevic, and Borner (2005) show in the case of the most

popular categories for new articles on Wikipedia and as Anselm Spoerri (2007a)

demonstrates in the case of the most popular Wikipedia pages in terms of readers, the

category of entertainment (i.e. film, actors, television show, sport, video games) is the

most popular category on Wikipedia. Thus, 43 percent of the most visited pages on

Wikipedia are related to entertainment, followed by 15 percent of politics and history

pages, 12 percent of geography pages and 10 percent of sexuality pages (Spoerri, 2007a).


The kind of uses that are being made of Wikipedia in terms of content creation and

readership thus depart from the traditional goals of an encyclopedia. Furthermore, as

Spoerri (2007b) shows, patterns of information search on Wikipedia closely follow

patterns of information search on major search engines such as Google with regard to the

most popular search terms. Thus, the goal of Wikipedia as an encyclopedia is changed

through a series of mediations that take place both at the level of the cultural uses of

Wikipedia and the level of the cultural practices of the Web.

Latour also explains that the process of mediation can involve a process of

delegation where the introduction of a second (non-human) actor to realize a goal or

meaning changes the very nature of that meaning through a modification of the matter of

expression (1999, p. 187). Latour gives the example of the speed bump as opposed to a

“slow down” sign on the road as an instance where the goal of having cars drive slower is

realized through a series of shifts at the level of matter of expression (from a linguistic

sign to a material bump) and at the level of the meaning expressed (from “slow down so

as to not endanger people” to “slow down if you want to protect your car’s suspension”).

Latour points out how the same program of action can take place in different

technocultural settings depending on the actors being enlisted. Such a process can be

applied to Wikipedia, particularly in the ways Wikipedia not only extends its domain of

knowledge to cover categories usually minimized or ignored by the traditional

encyclopedic format, but is transformed into a new cultural form altogether. As the

Wikipedia platform does not only enable collaborative authorship but also real-time

publishing, it has been used as a real-time medium for current events. As Cross (2006)


argues, Wikipedia “fills in the time gap between real time news media and the slow

publication of authoritative encyclopedia sources by providing a central collection data

point about a recent event that is available immediately”. As such, Wikipedia is not

simply an encyclopedia, but can be considered as possessing some elements of

participatory journalism (Lih, 2004). Furthermore, a common criticism against Wikipedia

has been that the ease with which it can be manipulated by special interest groups and

thus become a site of ideological struggle. Such manipulation is made possible by the

ease of adding content on Wikipedia. This fundamentally questions the encyclopedic

model, as texts published on Wikipedia are neither stabilized nor free of bias. The ultimate

goal as stated by Wikipedia is to represent a neutral point of view, but the process to

achieve such a goal can mean constant editing and long discussions to resolve ideological

struggles. Through these new technocultural possibilities, Wikipedia is thus mediated into

a new mode of representation - one that is dynamic as opposed to the rigidity of

traditional encyclopedia models. This is illustrated by the study and visualization done by

Bruce Herr and Todd Holloway (2007) of the power struggles in Wikipedia articles.41,42

In the visualization, the large circles represent articles with a high revision activity due to

vandalism, controversy or evolution of the topic that requires a change in content. As

Herr and Holloway (2007) show, the top 20 most revised articles included controversial

figures such as Adolf Hitler and Saddam Hussein, as well as controversial topics

41 http://www.newscientist.com/article/mg19426041.600-power-struggle.html, 19 May 2007.
42 For a full picture of the visualization: http://abeautifulwww.com/2007/05/20/visualizing-the-power-struggle-in-wikipedia/


(anarchism) and important events (Hurricane Katrina, 2004 Indian Ocean Earthquake).

Figure 31: Power Struggles on Wikipedia (Herr and Holloway, 2007)

Latour’s two other understandings of mediation, composition, whereby actions

produced by a collective of non-human and human actors forming a network cannot be

attributed to a single actor, and reversible blackboxing, the process through which the

collective of actors is punctualized, or integrated into a single entity, are crucial for


understanding the processes of goal translation and delegation. In terms of composition, it

is the articulation of technical features with discursive rules and cultural values that

makes Wikipedia possible. With regards to Wikipedia itself, it is important to recognize

that the main signifying machine that is implemented articulates not only a level of

techniques, such as automated formatting and open content production, and a domain of

production of signified discourses, but also a metadiscursive level. That is, all the efforts

at making technical possibilities and new ideals of discourse coincide also include an

extensive set of metadiscursive rules that need to be implemented on Wikipedia.

Wikipedia’s extensive guidelines about what an article should look like, processes of

conflict resolution and the hierarchy of roles involved in regulating changes in content

(Viégas, Wattenberg, Dave, 2004) are designed to support the goal of making the

technical and the discursive coincide. This leads to acknowledging the specific process of

reversible blackboxing at stake in Wikipedia, which is characterized by transparency and

openness, as opposed to the kind of blackboxing of the dynamics at stake at both the levels of

content and expression that took place on amazon.com. The openness of Wikipedia

makes it so that it can never fully be blackboxed as a homogenous technocultural entity.

The openness of a fluid level of content that can potentially be changed at any time

through addition or reversion to previous versions is also accompanied by a technical

openness with regards to making the level of expression (the wiki platform) available to

anybody. That is to say, reversible blackboxing is constantly at play on Wikipedia, with

the articulation between technical and discursive actors being always open for

interventions. As such, the constant reversible blackboxing available through the


openness of the Wikipedia platform multiplies the possible mixed semiotics frameworks

that can be applied to it, both at the levels of Wikipedia itself and in terms of the

MediaWiki software. For instance, political actors have been enlisting Wikipedia to

further political goals, as in the case of the editing of the entry on then Montana senator

Conrad Burns by his own staff,43 or in the case of the 2007 French presidential debate

where the Wikipedia entry on the French nuclear industry system was changed during the

debate so as to support the argument of one of the candidates. Other examples of

intervention that rearticulate Wikipedia to a new a-signifying machine to reshape a truth

claim made by a specific actor include Microsoft offering financial incentives to work on

certain Wikipedia articles.44 Finally, Wikipedia’s constant fight against vandalism reveals

the ways in which human and non-human actors on Wikipedia can be rearticulated for

radically different goals. Latour’s four understandings of mediation are thus theoretically

important in order to understand that several a-signifying machines can be grafted onto

Wikipedia, thus producing different mixed semiotics systems. The mapping of these

interventions is crucial for understanding the circulation of the Wikipedia format on the

broader Web. As such, one of the differences between the MediaWiki case study and the

amazon.com case study is that while the production of articulations to produce specific

semiotic and a-signifying machines on amazon.com could not really be analyzed because

of the proprietary secret system developed by amazon.com, analyzing Wikipedia can

reveal the ways in which articulations are produced between technical, discursive and

43 http://en.wikipedia.org/wiki/Wikipedia
44 http://en.wikipedia.org/wiki/Wikipedia


social actors. The goal of the study is to explore the production of specific machinic

constructions, to use Deleuze and Guattari’s vocabulary. The question is not to find a

causal hierarchy among heterogeneous elements such as the technical, the discursive and

the metadiscursive, but to study how they get articulated to produce new contexts.

Identifying the abstract machines that articulate these heterogeneous elements to produce

cultural forms that offer variations on the Wikipedia model is central to the examination

of the Wikipedia format on the Web.

2. The Circulation of the MediaWiki Software and the Rearticulation of Technical,

Discursive and Cultural Domains

The examination of the circulation of the Wikipedia format on the Web can take

place at both the level of content and the level of format. At the level of content, the

practice of reduplicating Wikipedia content is encouraged by Wikipedia through its use

of GFDL licenses and by making content available for download on the Wikipedia site

(download.wikimedia.org). As seen in the study of the circulation of Wikipedia content

through the Google search engine (Langlois and Elmer, 2007), a common use of

Wikipedia content is for the purpose of attracting traffic on specific websites and

redirecting it through sponsored advertising networks such as Google. This is only one

way of measuring the impact of Wikipedia content on the Web, and it is limited by the

use of a search engine using a proprietary algorithm. As a counterpoint, Wikipedia itself

keeps track of its citations in the media:45

45 http://en.wikipedia.org/wiki/Wikipedia:Wikipedia_in_the_media


Wikipedia’s content has also been used in academic studies, books, conferences and court cases. The Canadian Parliament website refers to Wikipedia’s article on same-sex marriage in the “related links” section of its “further reading” list for Civil Marriage Act. The encyclopedia’s assertions are increasingly used as a source by organizations such as the U.S. Federal Courts and the World Intellectual Property Office - though mainly for supporting information rather than information decisive to a case. Wikipedia has also been used as a source in journalism, sometimes without attribution; several reporters have been dismissed for plagiarizing Wikipedia.46

Such tracking of the circulation of Wikipedia content in the media demonstrates its

acceptance as a reliable source of knowledge, despite numerous criticisms about the ease

of vandalizing Wikipedia articles and of propagating false information.47 For instance,

the study done by the scientific journal Nature in December 2005 examined differences

between scientific articles on Wikipedia and in the Encyclopedia Britannica and did

not find significant differences in error rates between the two sources. Debates about Wikipedia’s

reliability demonstrate the problematization of the trustworthiness of Wikipedia and thus

the need for new practices of reading, writing and using open-content texts as

opposed to more traditional encyclopedic texts that tend to be accepted at face value.

Wikipedia text requires a critical and more engaged approach through fact checking with

other sources and invitations to improve on the Wikipedia article itself. Overall, the new

practices of creating and using Wikipedia texts as opposed to traditional print

encyclopedic sources have been the main focus of scholarly debate about Wikipedia.

As opposed to Wikipedia content and the practices involved in producing

46 http://en.wikipedia.org/wiki/Wikipedia#Cultural_significance
47 See for instance La Révolution Wikipedia, by Pierre Gourdain, Florence O’Kelly, Béatrice Roman-Amat, Delphine Soulas, Tassilo von Droste zu Hulshoff.


Wikipedia content, the analysis of Wikipedia as a cultural form through a focus on its

format has not been central to a cultural studies approach to the Web. While the technical

specificities of the Wiki format have been acknowledged, the role played by Wikipedia as

a reference within the wiki community has not been studied. Such an analysis would

make it possible to examine how discursive practices and technocultural

ideals circulate from Wikipedia onto websites that adopt a similar technical

infrastructure: the MediaWiki software package. An analysis of the circulation of the

Wikipedia format makes it possible to see the rearticulations of a technical infrastructure

within cultural processes that might or might not differ from the ones present on

Wikipedia. As such, the examination of the circulation of the cultural values embedded in

Wikipedia - the ways in which the technical is made to coincide with the discursive and

metadiscursive levels to produce a new form of creating, storing and propagating

knowledge - can be done through a study of the adoption of the MediaWiki software

package.

As the MediaWiki website explains, the MediaWiki package is built using PHP

language and a relational database management system. The data, and the relationships

among the data, are stored in the database management system and are retrieved through a

script written in PHP in order to be presented as a Web page. As opposed to static Web

pages, “which always comprise the same information in response to all download

requests from all users”,48 a dynamic Web page created through the PHP/database system

makes it possible to have tailored Web pages automatically produced according to


different contexts or conditions. In the case of Wikipedia, the database system greatly

simplifies the management of all the content created on the Website. Instead of having to

format a Web page any time content is created, the MediaWiki system makes it possible,

once the format of the website is implemented, for users to add content with minimal

formatting requirements such as embedding images, hyperlinks and text. Users do not

have to format a whole new Web page, which simplifies content production. In reference

to Guattari’s mixed semiotics framework, dynamic content production makes it possible

for technical actors to be included at both the levels of content and expression in ways

that were not possible before. At the level of content, it could be argued that the technical

plays an important role in transforming signified content into material intensities (data)

that can then be shaped and recombined to produce new signified content depending on

specific contexts. With regards to the level of expression, technical actors simplify the

process of authorial production by making it possible for users to focus on the linguistic

level only - on the production of coherent sentences. Other formatting issues at the level

of expression - where to locate content, which content to select and how to format content

- are the responsibility of technical tools. This delegation of content production tasks to

technical actors represents an important shift in that it makes it possible to produce

websites with large amounts of information that are relatively easy to maintain. An

analysis of the importance of those new dynamic content creation tools has not yet been

done. While the importance of HTML as a hypertextual language has been

acknowledged, the new changes brought by dynamic content production techniques have

48 http://en.wikipedia.org/wiki/Static_web_page


not yet been analyzed to the same extent. Dynamic content production is one of the

technical processes that enable new discursive practices and cultural values to be realized

on the Wikipedia website. It is important in turn to examine how such technical

possibilities are rearticulated when they are taken out of the Wikipedia context and

distributed onto other wikis that use MediaWiki.
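The static-versus-dynamic distinction described above can be sketched in a few lines. MediaWiki itself is written in PHP; the following Python sketch is a language-neutral illustration only, and its table name, template and render function are invented for the example, not MediaWiki's actual schema or API:

```python
import sqlite3

# In-memory database standing in for the wiki's relational backend.
# Table and column names are illustrative, not MediaWiki's real schema.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE page (title TEXT PRIMARY KEY, wikitext TEXT)")
db.execute("INSERT INTO page VALUES ('Main_Page', 'Welcome to the wiki.')")

# A single site-wide template: formatting is decided once, here,
# rather than once per page as with static HTML.
TEMPLATE = "<html><body><h1>{title}</h1><div>{body}</div></body></html>"

def render(title):
    """Assemble a page dynamically: fetch the stored content for this
    request and merge it into the template. Contributors only edit the
    stored text; page formatting is delegated to the system."""
    row = db.execute(
        "SELECT wikitext FROM page WHERE title = ?", (title,)
    ).fetchone()
    body = row[0] if row else "Page not found."
    return TEMPLATE.format(title=title.replace("_", " "), body=body)

print(render("Main_Page"))
```

The design point is that formatting lives in one template while content lives in the database, which is what allows the same stored text to be reshaped for different contexts.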

The MediaWiki website has a page about websites using MediaWiki49. It also

provides a link to a list of the largest MediaWiki sites in terms of number of pages50. This

list is the primary source of data for this study. It would be difficult to fully analyze all

the MediaWiki websites, as a comprehensive list is not available. For instance, the

MediaWiki software package can also be used to create intranets on private networks that

are not published on the Web. The list of the biggest MediaWiki websites might also be

incomplete in that websites have to send in a request with their traffic statistics in order to

be listed. However, this voluntary participation suggests that the participating websites want to showcase their importance both on the Web and in the wiki community. The list of biggest MediaWiki websites was retrieved on June 6, 2007 and listed a total of 855 websites. 264 of these websites were Wikimedia-related projects, including Wikipedia, Wiktionary, Wikisource, Wikiquote and Wikibooks. Those websites were not included in the data to be analyzed: since they are developed by the Wikimedia Foundation, it is assumed that their discursive rules and cultural values related to knowledge production

would be similar to those implemented on Wikipedia. The list of biggest MediaWiki

49 http://www.MediaWiki.org/wiki/Sites_using_MediaWiki
50 http://s23.org/wikistats/largest_html.php?th=999&lines=999


websites identifies the umbrella organization that produces some of the websites. For

instance, the different language versions of Wikipedia are listed under “Wikipedia”, and

the websites that do not belong to a family of projects are listed under “MediaWiki”. The

other two main families of projects listed in the sample were Richdex, which presents

itself as hosting and developing 61 wikis, and Wikia, which lists 139 sites. There were

inconsistencies with the websites listed as developed by Richdex in that they were

duplicates of websites listed in the sample. Although requests for more information were

sent to the administrator of the list, there were no explanations as to the reason for this

anomaly. It is not possible to know whether this bug in the listing was due to problems on

the side of the administration of the list, or whether Richdex submitted other wikis as

their own. Because of this, the Richdex sample was not included in the study. Wikia

represents a particular instance of rearticulating the Wikipedia format because it has close

ties with Wikipedia. There was also a problem in the Wikia sample in that the page used

by the software compiling the data was outdated, as Wikia changed its URLs. It was not

possible to do a website analysis of those faulty URLs, but Wikia is still studied

separately in the last section of this chapter. The current focus of analysis is on the 232

MediaWiki websites in English that were collected from the original list of biggest

MediaWiki sites.

In terms of methodology, these 232 websites were coded so as to reveal the ways

in which they were related to the original Wikipedia model. First, the websites were

coded in terms of skin resemblance with the original Wikipedia model. The skin of a

website is its appearance - the use of logos, images and specific fonts and the placement


of horizontal and vertical menu bars. The assumption was that the more the skin of a

website resembles the original Wikipedia skin, the more the website directly affiliates

itself with it. While by no means a complete indicator of the extent to which a website is

influenced by Wikipedia in terms of discursive rules and cultural values, skin similarity

has an effect on users’ perception of a website as a new online space or as a recognizable

browsing space. The second coding dimension concerned the focus of the website:

whether it was a general encyclopedia or focused on a specific topic. This revealed how

these websites characterize themselves in terms of knowledge production. Thirdly, the

format of the website was also identified, for instance an encyclopedia, a dictionary, or a

guide. This shows the range of uses of the MediaWiki software outside of the Wikipedia

format. Fourthly, the content of the website was examined in order to determine whether

it was original or poached from Wikipedia. This was done by searching for specific terms and comparing results from the MediaWiki website and the Wikipedia website. Fifth, the licensing of the content of the websites was analyzed, from

copyrighted content to GFDL-licensed content. This indicates the degree to which

websites are upholding Wikipedia’s value of freely accessible content. Sixth, the degree

of openness for modifying content was determined through the absence or presence of a login requirement for changing content. Finally, the websites were analyzed in terms of the

presence of sponsored advertising, such as advertising banners.
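The seven coding dimensions can be collected into one record per website. The following sketch is a hypothetical rendering of the codebook: all field names, value sets and the example entry are invented for illustration, not the study's actual codebook or data:

```python
from dataclasses import dataclass

@dataclass
class SiteCoding:
    """One coded observation per MediaWiki website. Field names and
    value sets are illustrative stand-ins for the study's codebook."""
    url: str
    skin: str                # "clone" | "mixed" | "different"
    focus: str               # e.g. "general", "entertainment", "computers"
    site_format: str         # e.g. "encyclopedia", "guide", "dictionary"
    original_content: bool   # False if content is poached from Wikipedia
    licensing: str           # e.g. "GFDL", "CC", "copyright", "N/A"
    open_editing: bool       # True if no login is required to edit
    advertising: bool        # True if sponsored banners are present

# An invented example entry (not actual study data):
example = SiteCoding(
    url="example-wiki.org",
    skin="clone",
    focus="entertainment",
    site_format="guide",
    original_content=True,
    licensing="GFDL",
    open_editing=False,
    advertising=True,
)
print(example.skin)
```

Coding each site into such a record is what makes the frequency comparisons in the following sections possible.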

2.1 Cultural Formatting as the Rearticulation of Discursive Rules

The first set of findings concerns the production of content and discursive rules on

the MediaWiki websites and their variations from the original Wikipedia model. By


analyzing variations at the level of content and at the level of the discursive rules offered

to users, the focus is on interventions at the level of signifying semiologies rather than a-

semiotic encodings and a-signifying semiologies. The main question for this section is

about how the MediaWiki software enables discursive changes that reflect a series of

cultural rearticulations of the original Wikipedia model. Overall, while the Wikipedia

model of encyclopedic knowledge repository is a central reference for most of those

websites, there is a minority of websites that refashion the discursive possibilities offered

by the MediaWiki software to create cultural forms that are radically different from

Wikipedia.

The first set of variations concerns the format of the MediaWiki websites, and

their departure from the encyclopedic model put forward by Wikipedia. 44 percent of the

websites present themselves as encyclopedias, that is, as focused on producing knowledge

about a specific or general range of topics.


Figure 32: Largest MediaWikis - Format. [Bar chart; vertical axis: number of websites (0-120); categories: encyclopedia, gaming guide, shared resource, IT support, guide, database, directory, dictionary, calendar, map, public debate.]

The predominance of the encyclopedic format shows that Wikipedia’s model is partially

reduplicated through the use of the MediaWiki software. A common format that departs

from the specifically encyclopedic model but is still built on the idea of creating a

repository of knowledge is that of the guide, be it a location guide (7 percent) about a real

physical space (e.g. Saint Louis in the case of wikilou.com or Iowa State University with

rofflehaus.com) or a gaming guide (16 percent), as with, for instance, wowwiki.com, a

World of Warcraft strategy and gaming guide, or IT support guides (9 percent). The

guides are designed to help users navigate real and virtual spaces. They differ from the

encyclopedic model in that they focus on questions of practicality and usage. The IT


support wikis, in particular, are devoted to providing resources for developers and users.

The MediaWiki software is also used to produce spaces of shared resources (13 percent).

Some of the websites that fall into this category define themselves as wikis, and in

general, their focus is on encouraging the creation and circulation of resources on a

particular topic. Their goal is not encyclopedic, as these websites aim at fostering working solutions by propagating strategic knowledge. For instance, icannwiki.org

about the Internet Corporation for Assigned Names and Numbers is described on its

“about” page as a “wiki put together by ICANNWIKI Volunteers with the belief that a

public wiki can be a real benefit to the ICANN community”. Another website of interest

is sourcewatch.org, which is produced by the Center for Media and Democracy to

“document the PR and propaganda activities of public relations firms and public relations

professionals engaged in managing and manipulating public perception, opinion and

policy”. Other variations from the encyclopedic model include websites that present

themselves as dictionaries (1 percent), directories (4 percent), databases (5 percent) or

calendars (1 percent). Overall, almost all the MediaWiki websites have knowledge organization

as one of their core goals. The goals for creating repositories of knowledge vary from one

website to another and define the different cultural formats that they adopt. Only one

website in the sample does not use the MediaWiki software to build a repository of

knowledge, and that is wikimocracy.com, which presents itself as “the open debate you can

edit”.


Figure 33: Wikimocracy.com

While the main discursive characteristic of the other MediaWiki websites is the

possibility of creating navigable spaces containing large amounts of knowledge,

Wikimocracy puts forward another characteristic offered by the MediaWiki software -

collaborative participation - as its central discursive principle. Thus, while the skin of the

website is similar to that of Wikipedia, its function as a space of debate where

disagreements are encouraged differs from the Wikipedia model of a knowledge repository

where the end goal is to resolve disputes and disagreements about the content of articles.

In terms of the aesthetic of the websites, 193 of the websites have a skin that is

similar to that of Wikipedia, sometimes with a change in background colour and logo.

The font, the placement of the menu bar and the presence of different navigation tools

(edit, view source and history buttons, for instance) remain the same. Eight of the

websites have mixed skins in that some elements such as font or design of the menu

varied from the original Wikipedia skin. 29 of the websites have skins that are radically

different from that of Wikipedia.


Figure 34: A Wikipedia Skin Clone

Figure 35: A Mixed Skin Model


Figure 36: A MediaWiki Site with a Different Skin than Wikipedia

Skin difference or similarity with Wikipedia is a partial indicator of the cultural

affiliations between MediaWiki websites and Wikimedia projects such as Wikipedia.

Radically different skins indicate a separation from Wikipedia as a recognizable cultural

model from a user perspective. However, changing or designing a new skin requires

considerably more effort and skill than using the default MediaWiki skin, so there are

multiple reasons to explain skin similarities. In correlation with other factors such as

focus and format, skin variations can help point out cultural differences between

Wikipedia and other MediaWiki websites. All but one of the websites that have a

radically different appearance from Wikipedia have a different focus, format and model

than Wikipedia. Examples include the Mozilla Firefox website in the IT category and the


Marvel Universe website in the entertainment category. In those instances, the technical

infrastructure is used to create a completely different website that does not reduplicate the

Wikipedia format. The only exception is articleworld.com, which is a general

encyclopedia, but has a skin that is different from that of Wikipedia.

The MediaWiki websites have a wide range of focuses. There is only a minority

of websites (8 percent) whose focus is general and that present themselves as

encyclopedias, dictionaries or directories. Therefore, only a minority of websites has the kind of

general scope that Wikipedia offers. In particular, seven websites51 have a general scope

and present themselves as encyclopedias similar or complementary to Wikipedia with

some variations in terms of cultural goals. Indeed, all but one of these seven websites -

bvio.com - have completely original content. Presenting itself as the “freest knowledge

depot on the Net” that can be used for “storing any kind of information”, and surrounded

by sponsored advertising, bvio.com is an instance of the reinscription of open-source

knowledge within techno-commercial networks that will be explored in more detail in the

final section. Bvio.com acknowledges that it originally used Wikipedia content but

declares that it diverged from Wikipedia and claims that “some years ago Wikipedia tried

to force some kind of copyright”.52 All but one of the general encyclopedia wikis

(articleworld.com) have the same skin as Wikipedia, with some variations in colour (e.g.

S23.org). Some of the other general-scope encyclopedic websites acknowledge their link

to either Wikipedia or the Wiki philosophy and discursive rules, thus presenting

51 www.wikinfo.org, www.articleworld.org, http://s23.org, http://www.meols.com, http://infoshop.org, http://bvio.com, http://uncyclopedia.org/wiki/.


themselves as complementing Wikipedia or reformulating some of the discursive

possibilities offered by open-content creation. The wikinfo.org website, for instance,

presents itself as “intended to complement and augment Wikipedia, and ultimately

surpass it as an information resource” and declares that “dictionary definitions, links to

websites, quotations, source texts and other types of information not acceptable on

Wikipedia are welcome”.53 In terms of editorial policy, wikinfo.org differs from

Wikipedia as it asks that a topic should be presented “in a positive light” and that

“alternative or critical perspectives should be placed in linked articles”, thus departing

from the Wikipedia process of attaining a neutral point of view representative of a variety

of positions within each article. Other general encyclopedia websites develop the idea of

open collaboration from an anarchistic perspective. S23.org, for instance, declares on its

homepage that it is a “non-hierarchical geek contents disorganization by uncensored,

decentralized, transglobal multi-user hypertext editing without restrictions” that uses an

“anarchistic publishing tool”. This reinscription of the ideals of open, non-hierarchical

collaboration promoted by Wikipedia within open source cultural ideals (“non-

hierarchical geek disorganization”) reinforces the articulation between the free-software

and free-content movement and anarchism and anti-capitalism. The website infoshop.org

further elaborates on this articulation by presenting itself as a “collaborative project

sponsored by infoshop.org and the Alternative Media Project (...) rest(ing) on the

principles of free software and wiki software” that publishes and builds information from

52 http://bvio.com/index.php/About_Bvio
53 http://www.internet-encyclopedia.org/index.php/Main_Page


an anarchistic perspective. Discursive practices and political ideals are thus articulated

and mediated through the use of the MediaWiki software. Finally, two of the general

encyclopedia wikis also depart from the Wikipedia model through a declared intention of

being spoof or humorous encyclopedias. The featured article on Roger Federer on the

main page of meols.com, for instance, starts by describing Federer as “tennis star par

excellence and fashion icon for blind people”. The homepage news section of

uncyclopedia.org asserts that the “seventh Harry Potter book (is) reportedly based on

(the) Sopranos season finale”. These spoof encyclopedias play on the main concerns

raised about Wikipedia - vandalism and veracity of content. By making false information

and parodies their core discursive principle, these websites take discursive possibilities

that Wikipedia aims to extinguish and give them a prominent role. This reversal of

discursive rules thus refocuses some of the discursive possibilities offered by

collaborative authorship on wiki platforms. The main rearticulation of the Wikipedia

format for those general encyclopedia websites thus consists of shifting discursive

practices through a redefinition of cultural ideals and metadiscursive rules. The technical

opportunities offered by the software remain unchanged - it is their articulation with new

discursive (the ways content should be presented) and metadiscursive rules (i.e. open

collaboration leading to the fulfillment of an anarchistic ideal) that produces discursive

variations on the Wikipedia model. Those new articulations do not take place at the

level of expression - the formatting remains the same - but rather at the level of content,

particularly at the level of substance of content: the social and discursive values and rules

that shape the formulation of signified contents.


Figure 37: Largest MediaWikis - Focus. [Bar chart; horizontal axis: number of websites (0-120); categories: entertainment, computers, general, religion, location guide, other, science, institutions, arts, politics, social sciences, education, history, sexuality, geography.]

Apart from the eight general encyclopedia wikis that share a direct cultural affiliation

with Wikipedia, the rest of the websites differ from Wikipedia through a narrower focus.

The most common focus for MediaWiki websites is entertainment (45 percent), followed

by computers (13 percent) and religion (8 percent). In total, 11 percent of the websites focus on conventional encyclopedic categories such as sciences, social

sciences, history, geography, politics and religion. The “institutions” category, which

represents 3 percent of the total websites, includes projects sponsored by specific

institutions on specific issues. This includes the One Laptop per Child wiki,54 the United

54 http://wiki.laptop.org/index.php/


Nations Development Program knowledge map55, and the Youth Rights Network 56. The

categories of Politics and Religion do not only include political science or encyclopedias

about specific religions, but also religious and political groups and communities. These

engaged communities include communities wanting to establish their presence on the

Web through collaborative knowledge production. Examples include creationwiki.net,

which is devoted to propagating a creationist perspective on science, and the Canadian

Christian Workers Database. In the Politics category, groups include conservapedia.com -

a Republican encyclopedia - and dKosopedia, the Daily Kos community liberal

encyclopedia.

The importance of the category of entertainment (e.g. film, television, gaming,

sports, music, pop culture) echoes the most popular sections of Wikipedia in terms of

most visited pages and popular categories for new articles, as seen in the first section of

this chapter. Gaming dominates the entertainment section, with 40 websites devoted to

computer games out of the 100 websites categorized as entertainment. This predominance

of new information and communication technologies is also apparent with the 13 percent

of all websites focused on computer-related issues, from hardware to software and IT

support. The predominance of information and communication technologies indicates

that Wikipedia is not only an important actor in terms of knowledge production, but is

also a central tool for communities of technology-savvy users as a way to organize

information about IT. Furthermore, out of the 40 gaming websites, 35 are gaming guides

55 http://europeandcis.undp.org/WaterWiki/index.php/
56 http://www.youthrights.net/index.php?title=Main_Page


devoted to both offline and online games such as Oblivion, World of Warcraft, Final

Fantasy and Guild Wars. The main characteristic of these games is the complex

environment they offer - from the multiple storylines of Oblivion to the massive

multiplayer spaces of Warcraft and Guild Wars. This complexity has spawned a series of official and unofficial guides, among which are the gaming guides that appear in the sample of largest MediaWiki websites. There is continuity in using wiki technology to

help users navigate complex virtual spaces such as video games. After all, the process of

creating video games involves the production of a series of small units of challenges to

the user. Creating a wiki rather than a traditional website or a paper gaming guide allows

for a collaborative effort in a form of reverse engineering, where complexities are broken down into more manageable units. Here, there is a translation from the production logic of

the video game to the user logic of the gamer, from the organization of the gaming

system to documentation about this organization and logic. The wiki system allows for

the mediation of gaming content as a specific kind of signifying semiology (image and

sound-based) into another kind of signifying semiology - a wiki-formatted signified

content that is primarily text-based. The wiki format plays a pivotal role in enabling this

mediation through offering the possibility of collaborative authorship, without which it

would be extremely difficult to create a comprehensive guide, and a hypertextual

organization where complex processes can be broken down into smaller units such as

articles and still be linked to each other and organized in multiple manners through

hypertext. The same process takes place with the wikis focused on computer-related

issues. Out of the 29 MediaWiki sites about computers, 19 are IT support websites. This


includes open-source software (e.g. Linux Man, OpenOffice) as well as proprietary software (e.g. C# by Microsoft, Fastmail). This common use of the wiki format to create support

websites demonstrates the close ties between software development and the

documentation of software development in the form of collaboratively produced articles.

Again, those mediations between system and signified content are made possible through

the wiki format, which enables collaborative hyperlinked knowledge production. Also

notable is the fact that 19 of the 29 computer-related websites are more specifically

focused on open-source issues and non-proprietary software. This includes not only

support websites, but also techno-libertarian websites on peer-to-peer software and

issues57 as well as hacker resources 58 and spaces devoted to the gift economy and

cyberliberties59. The affiliation is not only technological, but also cultural in that the wiki

format is used by diverse IT communities and in particular, cyberlibertarian communities

that focus on developing a link between technological possibilities and political ideals.

The final dimension in terms of the discursive and cultural rules that operate at the level of content concerns the degree of openness of participation. Wikipedia and other Wikimedia projects are open-collaboration projects where users can participate in content creation. The particularity of Wikipedia is that no login is required to post content; anonymous participation is therefore possible. By contrast, only 83 of the 232 websites

have the same degree of openness with regards to who can post content. Login is required

for 149 of the websites, and several factors can explain this. Wikipedia is unique

57 http://p2pfoundation.net/
58 http://wiki.whatthehack.org/index.php/


in that it has access to an extensive number of “administrators” - 1,276 in July 200760 - to

monitor changes in content and block vandals. By comparison, the largest MediaWiki

websites generally have a lower number of administrators. While the Youth Rights Network wiki tops the list with 1,143 administrators, the next-ranked websites are OpenNetWare with 106 administrators and Uncyclopedia with 47. On average, the websites in the sample have 13 administrators, with 179

websites having 10 administrators or fewer. Policing a wiki with few administrators can be time-consuming, and requiring a login acts as a deterrent against vandalism. Thus, there are practical reasons for disabling anonymous content production,

and it is not possible within the scope of this study to know the reasons why MediaWiki

websites are set up with or without login requirements.

Overall, the rearticulation that takes place at the level of content and discursive

rules between Wikipedia and other websites through the use of the MediaWiki software

shows that Wikipedia plays an important role as a cultural format for a diverse range of

communities. That is, what is rearticulated is not simply the encyclopedic model put forward by Wikipedia, but also the ways in which Wikipedia is embedded within different communities - the

IT community of technology-savvy Internet users, local communities using the wiki

format to create guides about a specific locale, and politically engaged communities.

Overall, these rearticulations and remediations of the Wikipedia model through the use of

MediaWiki take place at the level of signifying semiologies, as human actors select

59 http://www.infoanarchy.org/en/
60 http://en.wikipedia.org/wiki/Wikipedia:List_of_administrators


specific technical possibilities to reduplicate or create new discursive possibilities, thus

operating a series of transformations that represent a shift from the Wikipedia model to

mixed models that involve cultural goals specific to diverse communities of users. This,

however, is but one level at which the Wikipedia cultural format can be articulated and

mediated.

2.2 A-signifying Processes and the Channeling of Cultural Formats

The formulation of new cultural goals can also take place at the a-signifying level.

This surfaces in the analysis of the political and religious MediaWiki websites, which use the MediaWiki software to propagate specific political - be they anarchistic or conservative - or religious points of view. The difference between, for

instance, the Buddhist Encyclopedia (buddhism.2be.net) and creationwiki.net, which

focuses on propagating a creationist perspective on science, is that one operates through

the collaborative encyclopedic principle of building knowledge about a religion while the

other targets a specific domain (science) to propagate religious beliefs. Websites such as

creationwiki.net are not representative of the cultural communities that are usually

affiliated with Wikipedia and its open source model that locates itself outside of for-profit

cultural production. One of the reasons behind the creation of such websites is that

collaborative authorship allows for comparatively faster content production than

traditional HTML websites by making it possible to have many authors instead of a few.

Furthermore, the bigger a website is in terms of content, hyperlinks and referral links, the

greater its presence on the Web and on search engine listings. While this kind of logic is

present on all the websites of this study to varying degrees, it becomes more apparent


with wikis that are developed to propagate a specific message. This type of logic is

located at the level of a-signifying semiotics rather than signifying semiologies. That is, the

underlying logic is to transform signifying semiologies such as content into a mass of

data, that is, into material intensities that can then be noticed by other websites and search

engines and thus further integrated into the different technodiscursive networks that cross the Web. In so doing, the discursive status of the website evolves as it gains prominence

in these technodiscursive networks. Analyzing the a-signifying processes at stake in the

deployment of the MediaWiki software on the Web thus requires an examination of other

technical, cultural and political layers that shape specific discursive statuses. This goes a

step further than exploring the new discursive rules and cultural content of MediaWiki

sites presented above in that rather than comparing internal discursive rules, the

analytical process deals with broader technocultural dynamics shaping the different facets

of the Web in terms of access to content and commercialization. In that sense, the

websites are integrated in these a-signifying flows and become instances of some of the

new technocultural transformations of content and discourse on the broader Web.

Although it is not noticeable in the sample, content can be rearticulated within a-

signifying machines through processes of reduplication as seen in the study of the

circulation of Wikipedia content (Langlois and Elmer, 2007). Wikipedia content can be

used to make websites more visible and thus attract traffic. This is another instance where

content is used as a way of manipulating material intensities such as data and traffic

within commercial networks of sponsored advertising.


Figure 38: Largest MediaWikis - Intellectual Property Regimes. [Chart: open content 66%, private copyright 10%, mixed open/private 1%, other 0%, N/A 23%.]

The second level at which a-signifying processes come into play in terms of defining the

discursive status of the MediaWiki websites is content licensing. The Wikimedia projects

are licensed under the GFDL, whose purpose is “to make a manual, textbook, or other

functional and useful document “free” in the sense of freedom: to assure everyone the

effective freedom to copy and redistribute it, with or without modifying it, either

commercially or noncommercially”.61 There exist other free content licenses designed to

offer other possibilities of use of content. The Creative Commons licenses, for instance,

are based on combinations of four conditions - Attribution, Noncommercial, No

Derivative and Share Alike. Attribution allows for copying, distribution, performance and

creation of derivative work if attribution is given to the original author. Noncommercial

requires that content can be used for non-commercial purposes only, a condition that is


not possible with the GFDL. No Derivative Works means that only verbatim copies are

allowed and Share Alike requires that derivative works must have an identical license to

the original work.62 Content licensing plays a central role in revealing how content is

allowed to circulate on the Web and through other media. The majority of MediaWiki websites - 66 percent - use open content licenses or release their content into the public domain. 10 percent of the websites use private copyright, and 23 percent give no indication as to the type of content licensing they use. The GFDL is the predominant open content license (37 percent), thus

showing the influence of the Wikipedia model. It is also not uncommon that some of the

MediaWiki sites use the original licensing text from Wikipedia, sometimes without

replacing the name “Wikipedia” with their own. Creative Commons licenses are the

second most popular type of open content licensing. There are 54 websites using five

different kinds of Creative Commons licenses. The most common license is the

Attribution-Noncommercial-Share Alike license, which differs from the GFDL in that it

forbids commercial use of content. There are 33 websites using a Creative Commons

license that includes the Noncommercial clause, thus revealing an intention to further

develop free content spaces on the Web as opposed to privatized content.
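The four conditions and their combinations described above can be sketched in code. The following Python snippet is purely illustrative - the class and validity rule are hypothetical constructs, not part of any Creative Commons tooling. It composes a license's conventional short code from its conditions and encodes the one structural constraint among them: No Derivatives and Share Alike cannot be combined, since Share Alike only constrains derivative works, which No Derivatives forbids outright.

```python
# Illustrative sketch of the four Creative Commons conditions and how
# they combine into a license. Hypothetical code, not official CC tooling.

from dataclasses import dataclass


@dataclass(frozen=True)
class CCLicense:
    attribution: bool = True        # BY: credit the original author
    noncommercial: bool = False     # NC: non-commercial use only
    no_derivatives: bool = False    # ND: verbatim copies only
    share_alike: bool = False       # SA: derivatives keep the same license

    def code(self) -> str:
        """Return the conventional short code, e.g. 'BY-NC-SA'."""
        parts = ["BY"] if self.attribution else []
        if self.noncommercial:
            parts.append("NC")
        if self.no_derivatives:
            parts.append("ND")
        if self.share_alike:
            parts.append("SA")
        return "-".join(parts)

    def is_valid(self) -> bool:
        # ND and SA are mutually exclusive: SA constrains derivative
        # works, which ND forbids in the first place.
        return not (self.no_derivatives and self.share_alike)


# The license most common in the sample: Attribution-Noncommercial-Share Alike.
cc = CCLicense(noncommercial=True, share_alike=True)
print(cc.code())      # BY-NC-SA
print(cc.is_valid())  # True
```

The `is_valid` check mirrors why only a handful of distinct Creative Commons licenses exist despite four combinable conditions.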

Figure 39: Largest MediaWikis - Intellectual Property Regimes Breakdown

61 http://www.gnu.org/licenses/fdl.txt
62 http://creativecommons.org/about/licenses/


[Bar chart, 0 to 90 websites per regime: GFDL; CC; N/A; copyrights to site owners; copyrights to original authors; public domain; GFDL and CC; open content; mixed open/copyright; other; GFDL with another open content license (OCL).]

It is noticeable that 10 percent of the websites use copyrights, with 4 percent of the

websites declaring that copyrights go to the original authors and 6 percent claiming

copyrights or full rights for publishing content to the site owners. The websites that give

copyrights to the original author show that they define themselves in terms of a

publishing role rather than an authorial role. The websites that claim that the site owners

own the copyright to all content published on the website or have full rights operate

under a different logic - one that privatizes collaborative content. These websites are

predominantly encyclopedias (9 out of 14 websites) and have a wide range of focus, from

entertainment to religion, art, computers and taxes. Two of those websites -

archiplanet.org and feastupontheword.org - have no login requirements to create or

modify content. Archiplanet.org is an interesting example of a website that looks like

Wikipedia with a similar skin, structure and discursive rules that encourage users to

“find, post, and edit the facts and photos here on your favorite structures of all kinds,


from your own cottage to the latest skyscraper to your nation’s capitol”.63 On its “General

Disclaimer” page, archiplanet.org states that while users retain copyright to their

contributions, they grant “full rights to Artifice, Inc. to publish those contributions”

through a “perpetual non-exclusive license to Artifice and our assigns for publishing at

Archiplanet and in any other publications in any media worldwide.” The website is

sponsored by the magazine ArchitectureWeek, thus showing how user-produced content

can be reintegrated into a private publishing network. This type of provision circumvents

the restriction imposed by the GFDL that derivative works must be released under the

same GFDL license. This privatization of collaboratively produced content is also present

on the Marvel Universe website, which was built with the MediaWiki software. Marvel

Universe is a subsection of marvel.com and is described as a “dynamic, community-

fueled online encyclopedia of all things Marvel. (...) In order to ensure that Marvel

Universe is the best online resource for Marvel bios, we turned it over to the experts:

you”.64 The skin of the website is completely different from that of Wikipedia and the

Terms and Conditions page stipulates that users submitting materials grant Marvel and its

affiliates:

A royalty-free, unrestricted, worldwide, perpetual, irrevocable, non-exclusive and fully transferable, assignable and sublicensable right and license to use, copy, reproduce, modify, adapt, publish, translate, create derivative works from, distribute, perform and display such material (in whole or in part) and/or to incorporate it in other works in any form, media, or technology now known or later developed, including for promotional and/or commercial purposes.

63 http://www.archiplanet.org/
64 http://www.marvel.com/universe/Main_Page


Marvel thus makes use of the discursive possibilities offered by the Wiki format with

regards to collaborative content production, but radically departs from the open-content

model by privatizing it for its own uses and benefits. The a-signifying process that

operates at the level of the licensing of content thus channels content into specific

discursive networks - from the copyrighted commercial spaces of the Web and other

media to, on the contrary, free-content spaces that operate outside of the commercial

Web, in the case of the non-commercial licenses. User-produced content can be

articulated to larger processes of commercialization, thus contradicting some of the ideals

of the gift economy usually associated with the wiki format. As Terranova (2000) argues,

while the high-tech gift economy described by Barbrook (1998) has been seen as being

autonomous from capitalism and leading the way towards a future of “anarcho-

communism”, the reality is that the digital economy has been able to function through the

very use of user-produced content. This form of free labour, according to Terranova

(2000), is a characteristic of the digital economy. As Terranova (2000) describes it, the

“mechanism of internal capture of larger pools of social and cultural knowledge (...)

involves forms of labor not necessarily recognized as such: chat, real-life stories, mailing

lists, amateur newsletters, etc” (p. 38). As the licensing of MediaWiki websites shows,

collaboratively produced knowledge could be added to that list. As Terranova further

describes, the processes of “incorporation” of user-produced labor as free labor “is not

about capital descending on authentic culture but a more immanent process of channeling

collective labor (even as cultural labor) into monetary flows and its structuration within

capitalist business practices” (p. 39). A-signifying processes that involve commercial


interests and intellectual property schemes allow for the channeling of collaborative

content into private business. These a-signifying processes do not operate at the level of

the signification of content, but transform its status so that it can be oriented and

channeled within specific commercial or non-commercial flows.

The intellectual property regime of MediaWiki websites is but one of the

components that can be used by a-signifying processes. Another characteristic of

commercial channeling is the use of sponsored advertising, which is more visible in the

sample than the privatization of content through copyrights and full licenses. Sponsored

advertising, such as advertising banners, appeared in 37 percent of the websites and is not

about the direct commercialization of content, but about the use of content to attract and

redirect traffic within broader commercial channels.


Figure 40: Largest MediaWikis - Advertising Breakdown

[Pie chart: Google/Google and other, 82%; other, 15%; Amazon, 1%; Bidvertiser, 1%; Yahoo, 1%.]

The most popular sponsored advertising program is Google AdSense, which is used by

82 percent of the websites using advertising, either by itself or in combination with other

forms of advertising. The other recognizable online advertising solutions offered by

Amazon, Yahoo!, and Bidvertiser represent only 1 percent each of the total number of

websites using advertising. Google AdSense departs from traditional advertising banners

by using software that tailors the content of the advertising banner depending on the

content of the website. As is explained on the Google AdSense homepage:

AdSense for content automatically crawls the content of your pages and delivers ads (you can choose both text or image ads) that are relevant to your audience and your site content - ads so well-matched, in fact, that your readers will actually find them useful.


Google AdSense is popular as it is automatically customizable and can thus offer

a higher click-through rate and higher revenues than other non-contextualized advertising

solutions. It seems that there are a range of reasons why some websites choose to use

sponsored advertising. Sponsored advertising might provide a revenue to pay for the

existence of the website, such as server costs. It might also be part of a more elaborate

business plan to make money out of attracting traffic and redirecting it through sponsored

links. The Marvel Universe website, for instance, uses Google advertising, thus creating

further revenues based on redirecting traffic to supplement the commercialization of user-

produced content. The type of a-signifying process that takes place with sponsored

advertising proceeds by identifying correlations between the content of a website and a

list of advertisers. This process is similar to the process of recommendation seen with the

amazon.com case study, where the goal was to create a smooth chain of signification that

links users to commercial networks. There is a further similarity in the attempt to

translate the practice of reading and accessing content into a set of needs and desires that

can be fulfilled by commercial entities. The a-signifying process is not only about

reinscribing content within the seamless channel of commercial sponsoring, but also about

retransforming that content into material intensities capable of attracting users. That is,

the integration of the wiki websites within commercial channels also requires that content

be seen not only in terms of signified content, but also in terms of material intensities

capable of attracting another type of material flow: user traffic. The a-signifying process

thus imposes new signifying practices on content and mediates content as material flows

to be connected with traffic flows in order to create networks of sponsored advertising.
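The correlation between the content of a website and a list of advertisers on which this a-signifying process relies can be illustrated with a toy sketch. The following Python snippet is a minimal, hypothetical keyword-overlap matcher - it is in no way Google's actual algorithm, and the advertiser names and keywords are invented for illustration:

```python
# Toy sketch of contextual ad matching: rank advertisers by how much
# their keyword lists overlap with a page's content. Purely illustrative;
# all advertiser names and keywords are hypothetical.

from collections import Counter

ADVERTISERS = {
    "ComicShop":   {"comics", "superhero", "marvel", "collectibles"},
    "WikiHosting": {"wiki", "hosting", "collaboration", "mediawiki"},
    "BookStore":   {"books", "reading", "novel"},
}


def match_ads(page_text: str, top_n: int = 2) -> list[str]:
    """Return the top advertisers whose keywords appear in the page."""
    words = Counter(page_text.lower().split())
    scores = {
        name: sum(words[kw] for kw in keywords)
        for name, keywords in ADVERTISERS.items()
    }
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [name for name in ranked if scores[name] > 0][:top_n]


page = "a community wiki about marvel superhero comics and collaboration"
print(match_ads(page))  # ['ComicShop', 'WikiHosting']
```

Actual contextual-advertising systems rely on far richer signals (semantic analysis, auction bids, user data), but the basic movement is the same: content is treated not as meaning to be read but as material to be correlated with a pool of advertisers and a flow of traffic.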


This type of a-signifying process reveals the ways in which open-source content

can be rearticulated and channeled within online commercial networks. The Wikia

sample that was not included in this study because of faulty URLs further demonstrates

the multiple commercial channels within which open collaborative content can be

articulated. While independent from the Wikimedia foundation, Wikia was co-founded

by Jimmy Wales, the main founder of Wikipedia and Wikimedia. Wikia offers hosting

services for wikis and describes itself as:

...supporting the creation and development of over 3000 wiki communities in more than 70 languages. Part of the free culture movement, Wikia content is released under a free content license and operates on the Open Source MediaWiki software.65

One of the main differences between Wikia and Wikimedia projects is that Wikia is a

Wiki hosting service - users wanting to create a wiki do not have to download the

MediaWiki software, but can use the Wikia interface. The creation of a wiki is thus

simplified. Furthermore, Wikia is free for users, but uses Google sponsored advertising to

generate revenues. Wikia is also partly financed through investments from

amazon.com.66 While Wikia is built on an open-content model and thus shares a cultural

link with Wikipedia, it is built on the idea of generating revenue from the content and

traffic on the website. That companies specializing in customized marketing for users

such as amazon.com show interest in financing Wikia is revealing of the ways in which

open content can further be reinscribed within commercial models. In some ways, this

process of commercializing open content is similar to the commercialization of open-

65 http://www.wikia.com/wiki/About_Wikia


source software. As Terranova recalls, open-source software plays an important role in

the development of a digital economy: “you do not need to be proprietary about source

codes to make a profit: the code might be free, but tech support, packaging, installation

software, regular upgrades, office applications and hardware are not” (p. 51). The process

of commercialization of open-source content does not take place through the direct

imposition of fees on users to publish or access content. Rather, processes of

commercialization take place at the level of mining content and developing solutions to

translate behaviours and content into desires and needs that can be commercially

fulfilled. Content might still be perceived as free in terms of users not having to pay to

access it, but it is used to channel those users of free content within commercial

networks. Moreover, through the rearticulation of open, collaborative content within

commercial networks, there is a transformation that takes place from creating content as

an activity located outside of the sphere of commodified culture to the redefinition of

such practices as free labor in the digital economy. Communities of interest are created

and provide both the resources (content) and the audiences (the users) to allow for the

sustenance of those new commercial networks. As Terranova puts it: “late capitalism

does not appropriate anything: it nurtures, exploits and exhausts its labor force and its

cultural and affective production” (p. 51). However, examples such as Wikia demonstrate

that the process is not one of exhausting but rather of constant nurturing and exploitation

through data mining. The constant nurturing of users as cultural workers through “free”

perks such as user-friendly platforms and the possibility of harnessing a community of

66 http://www.wikia.com/wiki/Amazon_invests_in_Wikia


users sharing the same interest and cultural ideals of accessible knowledge and

communication gives way to a parallel system of exploitation of users and their cultural

production. The uniqueness of this parallel system is that it does not directly infringe on

users. Rather it is presented as opportunities and possibilities that are not imposed on

users’ practices but coexist with them.

The examination of the a-signifying processes at stake with the propagation of the

MediaWiki software reveals the existence of the different channels that constitute the

Web. In some ways, it becomes necessary to talk about different coexisting webs that

articulate commercial, cultural and technical processes. There are multiple processes that

take place to transform systems of signification into material intensities in order to build

new a-signifying processes. These flows of articulation have an impact on the discursive

and cultural models put forward by the wiki platform and Wikipedia by enabling a series

of technocultural readjustments. Those readjustments do not intervene at the level of

discursive practices - which are primarily shaped through the translation of technical

possibilities into cultural goals - but make use of discursive practices to create new

commercial and non-commercial networks.


Chapter 5

Conclusion: Meaning, Subjectivation and Power in the New Information Age

Examining the technocultural dimensions of meaning involves analyzing how

meaning is constituted through both material (i.e. technical) and cultural constraints and

possibilities. As such, this research aimed to demonstrate that questions regarding the

relationships between meaning, or the content of text, and the social, political, cultural

and economic dimensions of communication technologies have too often been ignored

within the field of communication studies. The traditional separation between the

medium and the message has created an artificial boundary that needs to be overcome if

we are to pay true attention to the ways in which the production and circulation of

meanings within technocultural environments such as the Web serve to organize a social

order and a cultural context characterized by specific relations of power. In that sense,

Guattari’s mixed semiotics framework proved invaluable for locating the ways in which

processes of signification are articulated with a-semiotic and a-signifying processes. It

became possible to study meaning in its articulation with specific technocultural contexts

and to examine the ways in which strategies developed for meaning production and

circulation serve to define specific modes of agency and thus, specific power relations

among the human and non-human actors involved in the case studies. In that sense,

Actor-network theory also proved invaluable in providing a vocabulary to understand the

articulations between a-semiotic, signifying and a-signifying processes and to trace the

ways in which cultural ideals, processes, power relations and subject positions are created

and redefined through networks of human and software agents.


Amazon.com and the adoption of the MediaWiki software were used, within the

scope of this research, as sites of analysis for understanding how software in charge of

supporting content production and circulation can be understood as a pivotal actor that

articulates a-semiotic, signifying and a-signifying regimes, as an actor that links content

with the technocultural production of a social order. Software for content production and

circulation, then, can be analyzed as creating new material possibilities and constraints,

and as translating cultural ideals and becoming a delegate for an a-signifying machine

that imposes regularities in order to existentialize specific power formations. With

regards to synthesizing the case studies, it is useful to further reflect on the articulation

between meaning-making and the formation of a technocultural order through the

deployment of software. First, there needs to be a reflection on the ways in which the

informational dynamics present in the deployment of software force us to reconsider

meaning-making and to identify a new framework for analyzing meaning-making from a

communication and cultural studies perspective.

The second site of synthesis concerns in particular the other category of actor

involved in content production and circulation: the user. The case studies make it

apparent that the user is a problematic site, both standing in for human actors and

shaped by software and technocultural milieus. It is possible to examine a particular

set of the articulations between a technocultural milieu and human agents that produce

users, and these articulations concern the ways in which the technocultural context

imposes specific modes of agency and processes of subjectivation on human actors. For

the purpose of this particular research, it is useful to adopt a narrower definition of the


user that does not encompass the play of subjectivities, identities, agencies, and potential

modes of resistance as they are expressed by actual human actors interacting with the

amazon.com or MediaWiki architecture. Such characteristics of the user cannot be

studied within the scope of this research. Rather, it is possible to examine the user as a

site where technocultural processes define specific modes of being online. This particular

definition of the user stems from the recognition that online mixed semiotics networks

delineate a specific range of communicational practices and as such express a vision of

what users should be. This “ideal” - from the point of view of the software - version of

the user as deployed through technical capacities, discursive positions and

communicational practices constitutes an important first step for understanding processes

of subjectivation on the Web. Subjectivation can be understood as a process of becoming,

as encompassing the modes of existentialization that arise in relation with a

technocultural context. Subjectivation takes place through the articulation of human

actors with diverse technocultural components that, in the case of the Web, express

specific modes of calling human actors, that is, specific possibilities of existentialization

within technocultural power formations. With regards to understanding the role played by

technologies, the concept of subjectivation invites us to consider how technocultural

networks participate in the complex process of translation and negotiation through which

human agents are in turn invited or forced to articulate themselves with specific agencies

and subject positions. In that sense, software creates potential modes of existence that

participate in offering human actors specific user-positions within technocultural power

relations. Reassessing the politics of usership makes it possible to develop another


perspective on the question of the link between meaning and the creation of a social

order. In particular, the definition of the sphere of agency of the user and the

multiplication of the modes of usership offer a way to further examine how the

technocultural production of meaning serves to existentialize specific power relations and

modes of subjectivation.

1. Rethinking the divide between Information and Meaning Production and

Circulation through Mixed Semiotics Networks

The theoretical framework that was used for the case studies focused on locating

processes of signification in their articulation with non-linguistic and technocultural

dynamics. These articulations required different perspectives as signifying processes, a-

semiotic and a-signifying dynamics mutually shape each other. The implications of the

mixed semiotics framework for cultural studies of communication include the need to

further acknowledge the informational aspect of communication.

The main critical reassessment in the study of signs and meanings on the Web

concerns the shift from meaning itself to the conditions of meaning production. That is,

both case studies were not centrally focused on the ideological and representational

dimensions of meaning. For instance, the meaning of Harry Potter as the most popular

contemporary children’s book in the world was not the primary focus of the amazon.com

case study. This does not mean that questions related to meaning are unimportant. On the

contrary, they are central questions but in order to fully explore the constitution of

meaning within online spaces, it is necessary to examine how specific meaning

conditions are shaped within technocultural contexts. The theoretical shift that takes place


consists of examining the power relations at stake in the formation of meaning, and in

particular the power relations that shape a particular language or means of expression,

delineate specific modes of decoding meanings, and define the sphere of agency,

legitimacy and competency of the actors involved in meaning formation and circulation.

The study of the technocultural wrappers that make meaning possible within online

spaces includes a consideration of the relationship between informational processes and

meaning formation. In that regard, “informational dynamics” (Terranova, 2004, p. 51)

play an important role as communication and information technologies are gaining in

popularity, and in particular in creating both the conditions and the context within which

meaning formations can appear. While the question of meaning relies on Hall’s

encoding-decoding model (1980), informational dynamics have emerged from the

Shannon-Weaver model of sender-message-receiver (1949), where the central focus is on

the transmission of messages with the least noise, or confusion, possible. As argued in the

introductory chapter, the study of the role played by software in setting the technocultural

conditions for meaning production and circulation bridges questions regarding the

informational processes of Web technologies and the cultural process of representation.

The Web can be considered from both a cultural and an informational perspective as a

space of representation and an ensemble of techniques to transmit informational data over

a vast computer network. Guattari’s mixed semiotics framework makes it possible to

examine the articulation between informational and cultural processes as they are

expressed through a-semiotic, signifying and a-signifying processes. While informational

dynamics and the question of meaning have traditionally been seen as separate fields of


inquiry with their own theoretical questions and methodologies, there is a need to

consider how meanings are shaped through informational processes within the

technocultural context of the Web. The mixed semiotics framework shows that it is

necessary to examine how informational processes have an impact on the encoding and

decoding of messages by providing the means for meaning formations and enabling

practices of knowledge production and circulation. In that regard, the mixed semiotics

framework proved invaluable, in particular by identifying the deployment of a-semiotic

processes of data gathering and their circulation through a-signifying and signifying

networks. For instance in the case of amazon.com, a-semiotic encodings were translated

into recommendations (signifying semiologies) which were organized through an endless

chain of signification. The endless chain of signification offered specific modes of

cultural interpretation, and those modes of decoding participated in the shaping of a

specific consumer subjectivity that took place at the a-signifying level of

existentialization.

The shift consists in examining the technocultural dimensions of online spaces,

that is, the moments when informational dynamics are translated into specific cultural

processes, and vice versa in order to produce a stable context. Thus, the case studies

focused on exploring the ways in which meaning formations, or signifying semiologies

were implemented through their articulation with a-semiotic encodings and a-signifying

machines. Such a process involves acknowledging that technocultural stability is

achieved when the processes of translation and delegation between informational

processes and cultural dynamics are blackboxed. Such an approach makes it possible to


identify the machinic regularities, as Deleuze and Guattari would describe them, at stake

in the shaping of meaning, and thus the systems put in place that define the agency of

human and non-human actors as well as the processes of subjectivation of users. With

regard to establishing specific formats, amazon.com and MediaWiki are important as

they are emblematic as models of some of the technocultural forms that circulate on the

Web. Amazon.com is a reference as an online commercial space that deals with the

shaping of desires for products through the implementation of specific semiotic systems

designed to interpret and reveal the needs of users. Wikipedia as the most famous

embodiment of MediaWiki is an exemplar of a radically different model of

conceptualizing users within a collaborative format as active knowledge producers. Both

Wikipedia and amazon.com are important models for the broader Web. Amazon has been

exporting its features through its offers for Web services, and the Wikipedia model and

MediaWiki software have been exported onto other wikis and collaborative websites.

Amazon.com and Wikipedia.org cannot be simply considered as online spaces, but also

as online formats designed to be used by third parties. The Amazon.com format, as seen

in Chapter Three, circulates on the Web through the Amazon Web Services. In that way,

Amazon.com grants other developers the right to use aspects of amazon.com, such as the

shopping cart, or the recommendation system. Amazon.com still maintains control of the

database, and by multiplying the contexts of use of Amazon Web services, can further

enrich its database. While the circulation of the Amazon.com format on the Web is

almost meme-like with parts of the amazon.com technocultural logic being delivered to

third parties, the circulation of the MediaWiki software follows a different logic. As seen


in Chapter Four, the circulation of the MediaWiki software package allows for a greater

range of rearticulations, with websites making use of the MediaWiki package for

different purposes. Another common characteristic of both models is the way in which

they are embedded within the broader flows of the Web, for instance, networks of

advertising. Thus, Wikipedia and Amazon are important both as models and as instances

of the articulation of online spaces with other informational, technological and discursive

networks through the deployment and integration of signifying semiologies within a-

signifying and technocultural networks. Signifying semiologies are thus not only

important in and of themselves but also because of the processes by which they are

captured to hierarchize and define users and commercial and cultural entities and thus

create a social and cultural order through the stabilization of a horizon of cultural

expectations and practices that shape a social reality. The analysis of the mixed semiotics

of amazon.com and the MediaWiki software revealed how informational dynamics that

do not operate at the level of cultural representation nevertheless shape the conditions for

meaning production. The articulations produced machinic regularities that are sustained

through the deployment of layers of software.

In order to study the actor-networks that make the articulation between a-

semiotic encodings and signifying and a-signifying semiologies possible, it was

necessary to focus on the layers directly involved in the formation of meaning. A central

actor in this particular network is the software layer, whose complex role of linking

informational dynamics with cultural ones should be acknowledged. The software layer is

not simply an actor within signifying and a-signifying networks, but a mediator that


bridges different spheres and stands in for different types of actor. The software layer acts

as a mediator between the technical and the cultural, enabling the transformation of signal

and code into human-understandable symbols and signs. Furthermore, the software layer

acts as a delegate that stands in for other actors, such as commercial actors and users. The

sphere of agency of software reflects a translation of commercial, political and cultural

ideals and concepts into technical features. As seen with the MediaWiki case study, there

is a process of translation from a cultural ideal of collaborative knowledge creation to the

implementation of a collaborative wiki platform. In the case of amazon.com, the

informational space defined by the software articulates the practices of users within the

commercial imperative behind the very existence of amazon.com. Software is thus a site

where different kinds of meanings are shaped and formed, from the symbolic meanings

created through the interface to the cultural meanings that give form to software itself and

define the practices of users. As a mediator, software stands in and involves other entities

within the assemblage of human and non-human actors. In particular, software stands in

for programmers who have a specific range of commercial and cultural goals in mind

when designing Web interfaces. Software is also what allows for the inclusion of users

within the network. At the same time, users are defined by software through the cultural

and commercial parameters set up by the programmers.

Combining an actor-network approach with Guattari’s mixed semiotics

framework is useful for mapping out the a-signifying network of actors through an

examination of the processes and flows that make meaning possible. The examination of

the symbolic and representational elements and practices available at the level of the


interface through a mixed semiotics approach makes it possible to identify the

articulation between technical, cultural and commercial flows and thus to go beyond the

level of the interface. While the starting point of analysis was the formation of meaning at

the level of the interface, the mixed semiotics framework allowed for a critical

exploration that goes beyond the level of the website. This was especially apparent with

the MediaWiki case study, where the capturing of meanings within commercial networks

revealed a picture of the Web different from the one accessible from a conventional user

perspective. The commercial dynamics that capture meanings to capitalize on them show

the existence of other informational webs that graft themselves onto websites and search

engines to regulate flows of traffic and advertising revenues. While these flows do not

intervene in the ideological shaping of meaning, they nevertheless have an important

impact on the shaping of the commercial and discursive aspects of the Web.

The case studies thus underline the existence of cultural and commercial flows of

information that play a role in both defining and utilizing meaning formations. This has

theoretical and methodological consequences for our conventional understanding of the

Web. The analysis of the case studies used as a starting point the interface as a way to

examine the relationships between software, users, programmers and website owners.

The goal of the analysis, however, was not only to study the a-signifying machines that

operate within the amazon.com and MediaWiki websites, but also to identify the a-signifying

flows and informational dynamics that embed signifying semiologies into the World

Wide Web. The circulation of signifying semiologies within broad a-signifying flows

reveals a need to go further than the conventional user perception of the Web as a

hyperlinked collection of websites. There are broader processes at stake that are not

immediately visible but nevertheless cross through the Web. Networks of targeted

advertising and recommendations reveal the existence of economies of the Web that are

not designed to be entirely visible to users. Or rather, those new flows reappear to users

in a quasi-magical manner as instantaneous advice and recommendations, through

targeted advertising or recommendation systems, for instance. There exists an economy

that uses meaning and user behaviours and whose logic is invisible to the users and yet

has important consequences in the technocultural shaping of meaning formations. Such

processes reveal that it is necessary to critically assess conventional perceptions of the

Web that are limited to the Web interface in order to include a better awareness of the

flows that cross website boundaries and are not directly mediated through other

conventional modes of seeing the Web, such as search engines. In the case of targeted

advertising, for instance, commercial entities exist in the background and use the

signifying logics of the Web to capture flows of traffic. Thus, at the methodological level,

understanding the Web requires new models, such as the mixed semiotics framework, to

uncover the informational flows that are not visible from a user perspective.

Meaning formations are captured within informational dynamics that encompass

a-signifying machines. One shared characteristic coming out of the amazon.com and

Wikipedia case studies is the new treatment of meaning formations through informational

dynamics. Informational dynamics, be they the amazon.com recommendation system or

the advertising flows crossing through the wiki sphere, only partially deal with meaning

at the conventional level of the cultural value of meaning. Informational dynamics are

only partly concerned with formulating a judgment about the validity of the meanings

being produced in online spaces. As seen with the MediaWiki case study, the discursive

status of text changes through its technocultural mediation as collaboratively produced

knowledge. The new practices made available by informational dynamics have an impact

on the discursive status of text. Yet, discursive changes are but one of the levels at which

to study the articulation of meaning formations within informational dynamics. Rather,

the capacity of meaning formations to be recaptured by new commercial and cultural

processes is also important. On amazon.com, the reinscription of user-produced meanings

within a commercial recommendation system illustrates a series of articulations of

meanings onto other informational flows. With Wikipedia, the reinscription of meaning

within networks of targeted advertising shows that meaning formations are captured in

order to produce new commercial and cultural spaces and flows. Thus, those specific

online informational dynamics operate in the same way as the global informational

dynamics described by Terranova (2004):

This informational dimension does not simply address the emergence of new hegemonic formations around signifiers (...). The informational perspective adds to this work of articulation another dimension - that of a daily deployment of informational tactics that address not simply the individual statement and its intercultural connections, but also the overall dynamics of a crowded and uneven communication milieu... (p. 54)

Informational dynamics as a-signifying machines focus primarily on the channeling of

signs across different commercial and cultural systems. The integration of meaning

formations within informational processes thus reveals the existence of new a-signifying

regimes on the Web that both provide the space for the production of signs and create

new a-semiotic and signifying processes to integrate those signs within other cultural and

commercial flows of information.

Rethinking the formation of meaning in online spaces through Guattari’s mixed

semiotics framework thus highlights new dynamics about the place of meaning within

informational spaces. Guattari’s framework is useful for identifying other processes that

make use of meaning and signifying systems as means rather than goals. The importance

of integrating an analysis of the processes taking place at the a-semiotic level in order to

understand a-signifying processes has been a constant in the case studies. The articulation

of meaning with other material and informational processes is central for understanding

how informational dynamics shape meaning formations on the Web. The notion of a-

semiotic encodings helps understand the processes that take place at the level of data, and

their integration within an a-signifying machine makes it possible to see how the question

of meaning formation goes beyond questions of ideology or hegemony. As seen with the

recommendation system on amazon.com, there is a circulation and translation of data into

meaning and meaning into data. Cultural uses are turned into statistics that can then be

compared with other statistics and retranslated into new cultural needs. In so doing, the a-

signifying machine proceeds by translating signifying semiologies into a-semiotic

encodings in order to reorganize a social and cultural order that fits with the perceived

cultural affinities of users. This process echoes Terranova’s statement (2004) that:

Information technologies have helped make the complexity of the socius manageable by compressing variations in tastes, timetables and orientations, bypassing altogether the self-evident humanistic subject, going from masses to populations of subindividualized units of information (p. 65)

Furthermore, the movement at stake with informational dynamics, such as that of the

recommendation system on amazon.com, is not simply about translating the social into

manageable data, but also of translating data back as a new social ordering. There is thus

a new process of representation of users at stake with the dynamic production of signs on

amazon.com. As Terranova argues, there are thus two sides to information (2004):

On the one hand, it involves a physical operation on metastable material processes that are captured as probabilistic and dynamic states; on the other hand, it mobilizes a signifying articulation that inserts such description into the networks of signification that make it meaningful. (p. 70)
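
The circulation described above - cultural uses turned into statistics and retranslated into new cultural needs - can be sketched, in a deliberately simplified form, as item-to-item co-occurrence counting. This is only a toy illustration of the general technique; the function names and data are hypothetical and are not drawn from amazon.com's actual system.

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories: the "a-semiotic encoding" of cultural
# uses as bare data, stripped of any signifying content.
histories = [
    {"book_a", "book_b", "book_c"},
    {"book_a", "book_b"},
    {"book_b", "book_c"},
]

def co_occurrence(histories):
    """Count how often each pair of items appears in the same history."""
    counts = Counter()
    for basket in histories:
        for pair in combinations(sorted(basket), 2):
            counts[pair] += 1
    return counts

def recommend(item, histories, n=2):
    """Retranslate the statistics into a perceived 'cultural need':
    the items most often bought alongside the given one."""
    counts = co_occurrence(histories)
    related = Counter()
    for (a, b), c in counts.items():
        if a == item:
            related[b] += c
        elif b == item:
            related[a] += c
    return [other for other, _ in related.most_common(n)]

print(recommend("book_a", histories))  # ['book_b', 'book_c']
```

The sketch makes the point concrete: at no stage does the software interpret what any book means; it only compares frequencies, and the output is re-presented to the user as meaningful advice.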

With the MediaWiki case study, the importance of a-semiotic encodings surfaced with

the commercial processes of capturing flows of meaning to turn them into traffic

magnets. Another instance of a-semiotic encoding that concerns both amazon.com and

Wikipedia, and by extension any websites using dynamic content production, is the use

of software to automatically publish content. With dynamic content production, cultural

stabilization takes place through the delegation of part of the process of meaning

formation to the software layer rather than the human layer. Meaning is further

incorporated as regularities produced through the interaction between the software layer

and users.
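
The delegation of part of meaning formation to the software layer can be pictured with a toy dynamic-publishing routine, in the spirit of (but not copied from) wiki or store software: the human supplies content fragments, while the template logic fixes the regularities of the final page. All names and structure here are illustrative assumptions.

```python
# Minimal sketch of dynamic content production: the software layer,
# not a human author, assembles the final page from stored fragments.
PAGE_TEMPLATE = "<h1>{title}</h1>\n<div>{body}</div>\n<ul>{related}</ul>"

def render_page(title, body, related_items):
    """Combine user-produced meaning (title, body) with
    software-produced regularities (layout, related-item list)."""
    related = "".join(f"<li>{item}</li>" for item in related_items)
    return PAGE_TEMPLATE.format(title=title, body=body, related=related)

html = render_page("Semiotics", "An article stub.", ["Peirce", "Guattari"])
print(html.splitlines()[0])  # <h1>Semiotics</h1>
```

The regularity of the output - every page has the same structure regardless of its content - is precisely the kind of technocultural stabilization discussed above.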

Informational dynamics thus act at different levels of meaning formations - from

enabling specific kinds of knowledge production practices that give meanings a specific

discursive status to the processes that do not act at the level of the ideological formation

of meaning, but through the reinscription and circulation of meaning within new a-

signifying cultural and commercial flows. In that sense, the analysis of a-semiotic,

signifying and a-signifying processes allows for a mapping of the articulation between

the informational dynamics and cultural processes. Informational processes act as

wrappers that intervene not only in the articulation of the ideological content produced

through the production of signs, but also at the discursive level in terms of defining the

cultural status of meanings and the social order and power relations that make specific

meaning formations possible.

2. Mixed Semiotics and the Politics of Usership

A common theme emerging from both case studies concerns the user as a site that

invites human actors to articulate themselves with technocultural power formations. The

examination of the mixed semiotics present on the amazon.com website and through the

circulation of the MediaWiki software package was primarily focused on analyzing

the role played by software in articulating a-semiotic, signifying and a-signifying

processes. The main idea was to examine software as an actor that can create meanings.

As seen in Chapter One, exploring software as a signifying actor leads to acknowledging

that the development of the Web has made the analysis of the technocultural production

of meaning more complex. With new technologies such as dynamic software, technology

comes to stand in for what used to be human activities. If software can become a

signifying actor, the question that is raised in turn is about whether new conceptions of

usership appear in that process. In the early days of the World Wide Web, the question of

usership was less problematic as users were human agents who produced meanings such

as content, images and hyperlinks, that were then published on the Web interface thanks

to specific languages (e.g. HTML) and programs (e.g. Dreamweaver). The main site of

analysis from a communication and cultural studies perspective focused on developing a

new understanding of the discursive roles of the user as covering both the sphere of

authorship and that of readership. With the deployment of software that can in turn

produce content - software that can engage in a communicational exchange with human

actors - the situation is different. As seen throughout the Amazon and MediaWiki case

studies, software, by articulating signifying and a-signifying processes, works to define

specific modes of subjectivation and spheres of agencies for human actors. The main

conclusion to be taken from the articulation of human actors within the technocultural

contexts of amazon.com and MediaWiki concerns the need to develop a new critical

framework to examine the politics of usership. In particular, the question of usership does

not simply concern changes at the signifying level, where the production and circulation

of meaning has to be done through a specific set of articulations between software and

human actors as two communicative actors. The question of usership also appears at the

a-semiotic level in that there is an encoding of human behaviour and characteristics as

information through profiling. In the case of the circulation of the Wikipedia format, the

process was one of capitalizing on the articulation between meaning and flows of people

in order to produce, for instance, targeted advertising. Furthermore, the question of

usership appeared at the a-signifying level through the definition and delineation of the

sphere of agency, and therefore of the potential processes of subjectivation. That is, the a-

signifying level organizes modes of usership along a-semiotic and signifying processes.

As such, the examination of the circulation of flows of usership within mixed semiotics

processes requires an analysis of technocultural power formations, and their

consequences for a critical analysis of the category of the user.

The goal throughout the case study analysis was to see how Guattari’s mixed

semiotics framework could be used to identify the processes and dynamics that make use

of semiotic systems and meanings to create new realities and practices of consumer

subjectivities, in the case of amazon.com, and new processes of capturing and

capitalizing on the practices associated with the free software movement in the case of

the circulation of the MediaWiki software. The amazon.com case study revealed how the

commercialization of cultural products takes place through the articulation of two

semiotic systems - a human-produced one that defines the cultural meanings of products

and a software-based recommendation system that inscribes meanings within an endless

chain of interpretation. On Amazon.com, the complementariness between closure and

openness of meaning is stabilized through the constant subjectification of users as

consumers within a commercial environment. The MediaWiki case study offered a

different set of inquiries mostly dealing with the appropriation of the semiotic systems

and practices developed within an open source, free-software context. The circulation of

the MediaWiki package showed how the cultural model embodied by Wikipedia can be

changed through a rearticulation of cultural goals and discursive roles.

The main finding of the case studies, however, does not only include the shaping

of the user as a discursive category, but also, and more importantly, the shaping of the

user as a site of power formation and articulation of technocultural processes with human

actors through a-signifying processes. As seen through the case studies, the channeling of

informational flows within commercial and non-commercial flows outlined the

importance of new economic and technical actors in articulating signifying practices

within specific a-signifying power formations. In particular, a common theme related to

the definition of the user within a-signifying processes concerned the shaping of specific

discursive practices so that they are constantly articulated with commercial dynamics. In

this process, the sphere of agency of human actors becomes extremely restrained. That is,

human actors as Web users can mostly intervene at the level of signification, and it is

impossible to refuse the existentializing flows of consumer subjectivation

that are deployed at the a-signifying level. Amazon.com in particular offered a telling

illustration of the paradox of usership. It could be argued that there are uses of

Amazon.com that escape the consumption imperative; for instance, looking up

bibliographical information or searching for a book to buy from another bookstore or to

borrow from the library. However, the very act of surfing produces values in that it is

going to be encoded as more information to produce recommendations. The consumer

imperative is difficult to evade altogether. With the case of the rearticulations of the

MediaWiki packages, the imposition of targeted advertising, for instance, shapes a

system whereby human agents can mainly act at the signifying level of producing content

while there exists a commercial network that they cannot control. The paradox of

usership is that freedom of expression is encouraged, but this very freedom of expression

at the signifying level is channeled into specific modes of existentializing users. At the

level of the interface, such processes are difficult to examine. The process of producing

recommendations on Amazon.com is never visible to human actors - it appears as

instantaneous feedback and as such presents itself as unproblematic. In the case of

targeted advertising in the MediaWiki case study, the articulation between signifying and

a-signifying flows is hidden as targeted advertising and relegated to specific boxes on the

Web page, and as such appears as a parallel process imposed on the user. Yet, the a-

signifying level makes use of signifying semiologies, both as a source of data and as a

site of existentialization. As such, a central finding that emerges from the case study

concerns the shaping of the category of the user through the articulation between

signifying and a-signifying processes.
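
The claim that even non-purchasing uses feed the a-signifying machine can be illustrated with a minimal, hypothetical clickstream logger: whatever the human actor's intention (a bibliographic lookup, a book to borrow from the library), every page view is encoded as profile data indistinguishable from shopping interest. All class and method names here are illustrative, not drawn from any actual site.

```python
from collections import defaultdict
from datetime import datetime, timezone

class ClickstreamProfile:
    """Hypothetical sketch: every page view, regardless of the user's
    intent, is encoded as data for later recommendation."""
    def __init__(self):
        self.views = defaultdict(int)
        self.log = []

    def record_view(self, user_id, item_id):
        # The user may only be checking bibliographic details, but the
        # view is recorded as undifferentiated interest in the item.
        self.views[(user_id, item_id)] += 1
        self.log.append((datetime.now(timezone.utc), user_id, item_id))

    def interest_score(self, user_id, item_id):
        return self.views[(user_id, item_id)]

profile = ClickstreamProfile()
profile.record_view("u1", "book_a")   # looking up a citation
profile.record_view("u1", "book_a")   # checking the table of contents
print(profile.interest_score("u1", "book_a"))  # 2
```

The asymmetry the sketch exposes is the paradox of usership: the human actor can choose what to do at the signifying level, but cannot choose whether the doing is captured as data.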

With regards to the shaping of users through a-signifying power dynamics,

Maurizio Lazzarato’s elaboration on the concept of the machine and the production of

subjectivities (2006) is useful for examining the rise of the user as the articulation of

human actors within a technocultural milieu. In “The Machine” (2006), Lazzarato

identifies two processes of subjectivation, one of enslavement and the other of

subjection. Enslavement is about the process through which users become cogs in the

machine. As Lazzarato describes it, this process takes place at the molecular, pre-

individual and infra-social level and concerns affects, feelings, desires and non-

individuated relationships that cannot be assigned to a subject. This process of

subjugation, of transforming users into elements and parts of the machine, is present in the

online environment through the treatment of the information provided by users. User-

produced content and behaviour feed the technocultural machine in charge of producing

customized representations. The process of subjection, on the other hand, deals with the

molar dimension of the individual according to Lazzarato: the social dimension and

social roles of the individual. In that particular process, the user is not somebody who is

used by the machine, but an acting subject who uses the machine, according to a pre-

defined technocultural context, in the case of online environments. Such a perspective on

the production of users as communicative agents in an online environment can serve to

add a more critical dimension to the celebration of freer expression through the

deployment of software to support content production. In that regard, Guattari’s mixed

semiotics framework and analysis of processes of subjectivation are important tools for

understanding the formation of actions and agents within a technocultural context that

relies on the production of representational systems. Examining how users and user-

produced content are embedded through a-signifying and informational dynamics

requires a better awareness of the politics of code, and of the need to develop a vertical

approach to the Web (Elmer, 2003) so as to examine the contextualization of content and

users not only from a socio-cultural perspective, but also from a techno-cultural one.

The concept of the abstract machine plays an important role in analyzing the

articulation of signifying semiologies and discursive rules within a-signifying

existentializing networks. As Guattari explains it, the abstract machine articulates

discursive and non-discursive fields. Guattari insists that the analysis of an abstract

machine includes both what he calls a discursive field, which is the field of meaning

formation, and the machinic level that provides a process of existentialization. With

abstract machines, then, the question switches from being one of representation to what

Guattari calls “existential intelligibility” (1985). The abstract machine makes meaning

formations possible through a process of existentialization; that is, by giving existence to

and actualizing the practices through which meanings can be produced. This

existentializing function is what produces users as producers and receivers of meanings.

As Guattari (1987) argues, the analysis of the constitution of subjectivities leads to the

acknowledgement that elements at the level of expression or content do not simply act at

a conventional discursive level. Discursive elements become “existential materials”

through which subjectivities can be defined. As such, the meanings themselves are not as

important as the specific articulations of discourses with other cultural, economic,

political, institutional, biological and technical processes to delineate the agency of

subjects. In Guattari’s words, the discursive materials serve to enable processes of “auto-

referential subjectivity.” That is, discursive materials are used within an assemblage to

produce effects of stability and regularity, thus allowing for the shaping of recognizable

and identifiable collective and individual subjectivities. This exploration of the process of

auto-referential subjectivation, as Guattari further argues, functions alongside the power

formations and knowledge processes as originally described by Foucault. While “power

formations act from the outside through either direct coercion or the shaping of a

horizon of imagination, and knowledge formations articulate subjectivities with techno-

scientific and economic pragmatics”, “auto-referential subjectivation” produces a

processual subjectivity which reproduces itself through the mobilization of

existentializing materials, among which, discourses and meaning formations (1987, p. 3).

While the scope of this study is quite modest compared to Guattari’s discussion of

processes of subjectivation, it is nevertheless possible to use the analysis of online a-

signifying systems to identify the processes through which the user is defined as both a

discursive category and a cultural entity. Subjectivation in the context of this study can be

defined as the process of shaping a horizon of possible actions that serve as the basis for

the expression of subjectivities. As an illustration of the three kinds of processes of

subjectivation present online, it could be said that power formations and knowledge

formations are present when a system of surveillance is put in place. Hidden pedagogies

(Longford, 2005) about how to behave on a website represent an instance where users are

coerced into adopting specific practices. As Elmer argues in the case of

privacy: “users who opt to maintain privacy are punished by being denied access to

various sites, or they face increased inconvenience for having to continuously turn off

cookie alerts” (2004, p. 77). This kind of coercion was present in the case of amazon.com

with the obligation to accept cookies in order to use the website. On Wikipedia, power

formations take place mainly through the establishment of rules of collaboration. Here a

meta-discourse about the goal of the Wikipedia project suggests a new horizon of reality

to users. The kind of coercive processes of surveillance present on amazon.com are

tightly linked with knowledge formations. Forcing users to give up their privacy is a

process of subjectivation that also takes place through the analysis of users’ behaviours.

Power formations give way to knowledge formations that further integrate users within a

system that can predict customized desires. Furthermore, as pointed out by Guattari’s

discussion of the three modes of subjectivation, there exists a process of auto-referential

subjectivation that actualizes specific subjectivities through the reduplication of specific

practices. In the case of online spaces such as amazon.com and Wikipedia, a central

existentializing material is produced through the interface as a space of meaningful

representations. The software layer defines the agency of users - what they can do, how

they can express themselves and use the websites - and defines the range of practices that

are possible to manipulate signifying materials. The regularity and stability of websites as

constructed through the software layer constitute the basis for auto-referential

subjectivation. The specific range of practices available to users ensures the stability of

the use of existentializing materials, and thus the range of practices available for users to

express their subjectivities.

Software, in that sense, is the mediator that articulates practices and meaning

formations. In particular, the notion that software builds the technocultural stability

needed for the production and channeling of meanings highlights its role as a producer of

technocultural regularities. The processes of auto-referential subjectivation in the online

context have thus to include the role of software in producing the technical and cultural

continuity within which subjectivities can be defined through stabilization of the

technocultural context, as well as the repetition of a specific range of practices that give

existence to the subjectivities of users. Furthermore, it can be argued that software acts on

the definition of a collective of actions that assign a broad collective identity. On

amazon.com, for instance, the principal mode of subjectivation is that of creating a

consumer identity shared by all users. The recommendation software’s main function is

to articulate individual meanings within a shared cultural horizon. The social

personalization that ensues makes it possible to have individual subjectivities defined

within a collective of other human actors, that is, within a reconstituted social order

defined through an informational logic of statistical correlations. In the Wiki sphere, the

practices available to users are such that users are made constantly aware that they

function through a collective of other users that monitors online behaviours and

discursive participation. Individual subjectivities are socially shaped through the goal of

reaching a common agreement and a neutral point of view. The practices made available

to users through the software thus define the category of the user as always included

within a collective of human actors.

In terms of defining a critical politics of usership, it is important to acknowledge

the multiple modalities of users, and consequently their different modes of articulation

with human actors. The direct equation between human actors and users is problematic in

that it fails to acknowledge the technocultural mediations that assign a narrow range of

agencies to human actors at the signifying levels, while other processes of

existentialization of users along commercial dynamics are imposed on human actors. As

such, there is a need to argue for a multiplicity of sites of usership. The mixed semiotics

framework forces us to reconsider the question of the user beyond its discursive

manifestation at the interface level, and to consider how it functions as a site of articulation of a-

signifying, a-semiotic and signifying processes.

The shaping of users as shifting cultural agents that can be reintegrated within

different a-semiotic, signifying and a-signifying dynamics is part of the process of social

ordering of online spaces. This social ordering, in the case studies, took place through the

disciplining of users into customers on amazon.com, and through allowing users to

collaborate in producing knowledge within commercial and non-commercial spaces, as in

the case with Wikipedia. The analysis of the shaping of users’ practices in online spaces

is important, especially as users have become a new kind of techno-discursive agent that

cannot fully be studied through reference to traditional discursive roles, such as that of

the author and the reader. As seen throughout the case studies, users are extremely

important for both amazon.com and in the circulation of MediaWiki because they provide

the knowledge through which those online spaces can exist. At the same time, they are

also essential components of a-signifying systems as both providers of information and as

embodying processes of commercial subjectivation. In terms of using a mixed semiotics

analysis to study users, the most apparent site of usership is at the level of the interface,

and as such it is an important site of analysis. Yet, an examination of the politics of

usership invites us to analyze users on the interface not only as discursive agents, but also

as products of unseen a-signifying dynamics. The critical mixed semiotics framework

that can be developed, in that sense, concerns in particular the ways in which software

acts as a mediator that links human actors with communicative possibilities. Embedded in

those communicative possibilities are processes of existentialization: a-signifying

processes make use of specific signifying possibilities in order to define specific modes

of usership. As seen in the two case studies, the agency of users at the interface level

is important as it is part of the social reordering that is put in place in order to create

specific signifying semiologies within a-signifying machines. As a starting point, there is

a need to identify discursive rules and their role in articulating the semiotic domain with

social processes and power relations. However, it is central to then examine the different

networks within which the shifting category of the user is embedded. Indeed, the

category of the user shifts in relation to the different kinds of mixed semiotics

articulations that are being considered. In particular, the case studies showed that at the

signifying level, the user is a producer and receiver of meaning, at the a-semiotic level,

the user is a source of data, and at the a-signifying level, the user is existentialized

through the articulation between technological and commercial dynamics.

3. Mixed Semiotics and Software Studies

The main research question guiding the case studies analysis was: what are the

social, cultural and communicational realities constructed through the semiotics of online

spaces? To answer this, this project used a multiplicity of theories, from actor-network

theory to medium theory, and from software studies to Guattari’s mixed semiotics

framework. In so doing, the aim was to demonstrate how the field of software studies -

the study of the impact of software on culture - could benefit from a renewed

attention to the context of semiotic practices. If software is to be studied at the level of its

intervention in the process of mediation and of meaning production and circulation, then

there needs to be a framework taking into account the ways in which software articulates

itself with other technical and cultural processes, and with other human and non-human

actors. The characteristic of software as being that which allows for the bridging of the

technological and the cultural with regards to online modes of expression forces us to

abandon frameworks prioritizing one field over another - the medium versus the message,

discourse versus technology, language against the social. In that regard, the mixed

semiotics framework proved invaluable in showing how the study of semiotic processes

can become the study of the articulations between processes of meaning-making and

other technological and cultural processes and practices. These articulations, and the

study of the assemblages and networks crossing through linguistic, economic, political,

technical and cultural fields are what shape social realities and constitute the context of

communication. Locating software as it participates in processes of articulations and

negotiations of these networks offers great potential with regards to defining the


technocultural power relations that frame practices, agencies and a horizon of

subjectivities.

The overall conclusion emerging from the two case studies is that questions

regarding the formation of meanings, and consequently the processes of subjectivation and power relations that define the category of the user, need to be critically reconsidered

within the dynamics of online informational spaces and flows. In particular, the question

of meanings and semiotics has to be critically assessed not only with regards to the

semiotic systems available on the Web, but also in terms of the articulation of those

semiotic systems within new power relations, or a-signifying machines, that define new

processes of subjectivation and capitalization of users within informational flows.

Combined with actor-network theory and cultural studies’ attention to the mutual

shaping of technology and culture, Guattari’s mixed semiotics framework offers a robust

set of methods to analyze the constitution of communicational spaces, and the multi-

dimensional articulations of processes of communication with power formations. In that

sense, Guattari’s framework offers ways to reassess the question of representation by

examining the technocultural realities that are constructed through specific modes of

production and circulation of meanings. The mixed semiotics framework can be used not

only to examine the articulation between discursive practices and modes of

existentialization, but also to further understand the politics of usership through the

deployment of software that dynamically adapts content to the behaviour of users. The

complexity of the category of the user as a hybrid between a technocultural system and

human actors is an important site of analysis for further understanding the cultural impact


of contemporary Web spaces, particularly social networks. In that regard, Amazon and MediaWiki were chosen as case studies not simply because one is a popular website and the other a popular wiki format. Amazon and wikis such as Wikipedia have

been described and heralded as at the forefront of the Web 2.0 movement. Web 2.0 is

defined as facilitating user-produced content so as to build large spaces of knowledge that

can be dynamically mined to create further information. Web 2.0 functions exclusively through the mining of user-produced content, and its platforms have been presented in the mainstream as spaces of democratic communication. Time Magazine’s choice of “You” as 2006 person

of the year epitomizes a utopian vision of Web 2.0 and social networking sites as

harnessing the wisdom of the crowd and allowing for all voices to be heard. The mixed

semiotics analysis of the politics of usership on Amazon and through the circulation of

MediaWiki offers a way to critically question such rhetoric of democratic

communication. In particular, mixed semiotics analysis shows that the user cannot be

fully equated with a human actor. Rather, the user, as a hybrid between a technocultural mode of existentialization and the particular intention of a human user, has to be analyzed

as a site where different processes of subjectivation are articulated. The power dynamics

that are established through these articulations need to be further studied, especially in the

case of Web 2.0 spaces used for political communication, such as Facebook or YouTube.

A critical assessment of the politics of usership forces us to reconsider the broader

concept of communication in online spaces. Generally, the notion of “better” or “more

democratic” communication is equated with a greater freedom of expression for human

actors, be it in expressing ideas or having access to more tools that facilitate the


communication process, such as tools that offer instantaneous communication or simplify

content production processes. A mixed semiotics analysis of commercial Web 2.0

platforms, however, would point out that current understandings of “better”

communication are limited to a specific set of signifying practices tightly regulated by

signifying agents such as software. Furthermore, a-semiotic processes cannot usually be

changed by users, who, on commercial Web 2.0 platforms, have to accept “Terms of

Use” depriving them of any agency with regards to controlling processes of surveillance.

In a similar manner, a-signifying processes, especially those related to commercialization

and marketing are imposed on users as a part of the meaningful feedback given to them

by software layers. The mixed semiotics framework thus offers ways to critically assess

the power dynamics that regulate communicational practices and distribute delineated

spheres of agency. In order to fully understand the politics of social networks and

commercial Web 2.0 platforms, it is necessary to develop a multi-dimensional, mixed

semiotics approach to examine what is made apparent, what is hidden, and what regulates

technocultural practices and uses on technocultural networks.

In closing, this research project has attempted to provide a grounded methodology

to unravel the dynamics linking code, software and culture. The politics through which

these dynamics are established and accepted as normative are central to understanding

the play of power relations on the Web. The adaptation of the mixed semiotics

framework to the study of the Web makes it possible to trace the unfolding of cultural

practices of meaning-making and ways of being on the Web. In so doing, the mixed

semiotics framework redefines software studies as an essential approach to understanding


the evolution of the social and political implications of the mainstream Web.


Bibliography

Alevizou, Panagiota. (2006). Encyclopedia or Cosmopedia? Collective Intelligence and Knowledge Technospaces. Proceedings of the Second Wikimania Conference, 4-6 August 2006, Harvard Law School, Cambridge, Massachusetts. http://wikimania2006.wikimedia.org/wiki/Proceedings:PA1.

Angus, Ian. (1998). The Materiality of Expression: Harold Innis' Communication Theory

and the Discursive Turn in the Human Sciences. Canadian Journal of

Communication 23(1). http://www.cjc-

online.ca/viewarticle.php?id=443&layout=html

Barbrook, Richard. (1998). The Hi-Tech Gift Economy. First Monday 3(12).

http://www.firstmonday.org/issues/issue3_12/barbrook/.

Baudrillard, Jean. (1981). For a Critique of the Political Economy of the Sign. Toronto:

Telos Press.

Benkler, Yochai. (2006). The Wealth of Networks: How Social Production Transforms

Markets and Freedom. New Haven: Yale University Press.

Benson, Eric. (2004). Use of Browser Cookies to Store Structured Data. Washington,

D.C.: U.S. Patent and Trademark Office.

Berners-Lee, Tim. (1999). Weaving the Web: The Original Design and Ultimate Destiny

of the World Wide Web by Its Inventor. San Francisco: HarperSanFrancisco.

Bezos, Jeffrey; Spiegel, Joel; McAuliffe, Jon. (2005). Computer Services for Assisting

Users in Locating and Evaluating Items in an Electronic Catalog Based on


Actions Performed by Members of Specific Communities. Washington, D.C.:

U.S. Patent and Trademark Office.

Bishop, Ryan and Phillips, John. (2006). Language. Theory, Culture and Society, 23(2-3),

51-69.

Bolter, J. & Grusin, R. (1999). Remediation: Understanding New Media. Cambridge: MIT

Press.

Bolter, J. (1984). Turing's Man: Western Culture in the Computer Age. Chapel Hill:

University of North Carolina Press.

Bowman, Dwayne; Linden, Greg; Ortega, Ruben; Spiegel, Joel. (2006). Identifying Items

Relevant to a Current Query Based on Items Accessed in Connection with

Similar Queries. Washington, D.C.: U.S. Patent and Trademark Office.

Bowman, Dwayne; Ortega, Ruben; Hamrick, Michael; Spiegel, Joel; Kohn, Timothy.

(1999). Refining Search Queries by the Suggestion of Correlated Terms from

Prior Searches. Washington, D.C.: U.S. Patent and Trademark Office.

Bowman, Dwayne; Ortega, Ruben; Hamrick, Michael; Spiegel, Joel; Kohn, Timothy.

(2001). System and Method for Refining Search Queries. Washington, D.C.:

U.S. Patent and Trademark Office.

Bowman, Dwayne; Ortega, Ruben; Linden, Greg; Spiegel, Joel. (2001). Identifying the

Items Most Relevant to a Current Query Based on Items Selected in Connection

with Similar Queries. Washington, D.C.: U.S. Patent and Trademark Office.

Burns, Tony. (2000). The Purloined Hegel: Semiology in the Thought of Saussure and

Derrida. History of the Human Sciences, 13(4), 1-24.


Bush, Vannevar. (1999). As We May Think. In P. Mayer (Ed.), Computer Media and

Communication: A Reader. Oxford: Oxford University Press.

Bush, Vannevar. (1945). As We May Think. The Atlantic Monthly, July 1945.

http://www.theatlantic.com/doc/194507/bush.

Buxton, William. (1998). Harold Innis' Excavation of Modernity: The Newspaper

Industry, Communications, and the Decline of Public Life. Canadian Journal of

Communication, 23(3), 321-339.

Carey, James. (1968). Harold Innis and Marshall McLuhan. In R. Rosenthal (Ed.), McLuhan: Pro & Con (pp. 270-308). New York: Funk and Wagnalls.

Carey, James. (1975). A Cultural Approach to Communication. Communication, 2(2), 1-

22.

Castells, Manuel. (2000). The Rise of the Network Society. Oxford: Blackwell.

Chesher, C. (1997). The Ontology of Digital Domains. In D. Holmes (Ed.), Virtual

Politics. London: Sage.

Chesher, Chris. (2003). Layers of Code, Layers of Subjectivity. Culture Machine, 5.

http://culturemachine.tees.ac.uk/Articles/CChesher.htm.

Chun, Wendy. (2005). On Software, or the Persistence of Visual Knowledge. Grey Room, 18, 26-51.

Cross, Tom. (2006). Puppy Smoothies: Improving the Reliability of Open, Collaborative Wikis. First Monday, 11(9).

http://www.firstmonday.org/issues/issue11_9/cross/index.html.

Cubitt, Sean. (2000). The Distinctiveness of Digital Criticism. Screen, 41(1), 86-92.


Dave, Kushal. (2004). Studying Cooperation and Conflict between Authors with history

flow Visualizations. http://www.research.ibm.com/visual/projects/history_flow/.

Dawkins, Roger. (2003). The Problem of the Material Element in the Cinematographic

Sign: Deleuze, Metz and Peirce. Angelaki, 8(3), 155-166.

Deleuze, Gilles and Guattari, Felix. (1983). Anti-Oedipus: Capitalism and Schizophrenia

(Robert Hurley, Mark Seem and Helen R. Lane, Trans.). Minneapolis: University

of Minnesota Press.

Deleuze, Gilles and Guattari, Felix. (1987). A Thousand Plateaus: Capitalism and Schizophrenia (Brian Massumi, Trans.). Minneapolis: University of Minnesota

Press.

Deleuze, Gilles. (1988). Foucault (Sean Hand, Trans.). Minneapolis: University of

Minnesota Press.

Deleuze, Gilles. (1992). Postscript on the Societies of Control. October, 59, 3-7.

Derrida, Jacques. (2002). Différance. In J. Culler (Ed.), Deconstruction: Critical Concepts in Literary and Cultural Studies (pp. 141-166). New York: Routledge.

Dyer-Witheford, Nick. (1999). Cyber-Marx: Cycles and Circuits of Struggle in High

Technology Capitalism. Urbana-Champaign: University of Illinois Press.

Elmer, Greg. (2003). A Diagram of Panoptic Surveillance. New Media & Society, 5(2),

231-247.

Elmer, Greg. (2004). Profiling Machines: Mapping the Personal Information Economy.

Cambridge: MIT Press.


Elmer, Greg. (2006). The Vertical (Layered) Net - Interrogating the Conditions of

Network Connectivity. In D. Silver and A. Massanari (Eds.), Critical

Cyberculture Studies (pp. 159-167). New York: NYU Press.

Elmer, Greg; Devereaux, Zachary; Skinner, David. (2006). Disaggregating Online News:

The Canadian Federal Election, 2005-2006. Scan, 3(1).

http://scan.net.au/scan/journal/display.php?journal_id=72

Eisenstein, Elizabeth. (1979). The Printing Press as an Agent of Change. Cambridge:

Cambridge University Press.

Engelbart, Douglas. (1999). A Conceptual Framework for the Augmentation of Man's

Intellect. In Paul Mayer (Ed.), Computer Media and Communication: A Reader.

Oxford: Oxford University Press.

Esposito, Joseph. (2003). The Processed Book. First Monday, 8(3),

http://www.firstmonday.org/issues/issue8_3/esposito/ (Accessed April 4, 2007).

Schneider, Steven and Foot, Kirsten. (2004). The Web as an Object of Study. New Media &

Society, 6(1), 114-122.

Foucault, Michel. (1990). The History of Sexuality: An Introduction. New York: Vintage.

Foucault, Michel. (2003). What Is an Author? In The Essential Foucault. New York: The

New Press. 377-391.

Fuller, M. (2003) Behind the Blip: Essays on the Culture of Software. New York:

Autonomedia.

Fuller, M. (2005). Media Ecologies: Materialist Energies in Art and Technoculture.

Cambridge (MA): MIT Press.


Galloway, Alexander. (2004). Protocol: How Control Exists after Decentralization. Cambridge: MIT Press.

Gane, Nicholas. (2005). Radical Post-Humanism: Friedrich Kittler and the Primacy of

Technology. Theory, Culture and Society, 22(3), 25-41.

Genosko, Gary. (1996). The Guattari Reader. Oxford: Blackwell.

Genosko, Gary. (1998). Guattari's Schizoanalytic Semiotics: Mixing Hjelmslev and

Peirce. In Eleanor Kaufman and Kevin Jon Heller (Eds.), Deleuze and Guattari:

New Mappings in Politics, Philosophy and Culture (pp. 175-190). Minneapolis:

University of Minnesota Press.

Genosko, Gary. (2002). Felix Guattari: An Aberrant Introduction. London: Continuum.

Grossberg, Lawrence. (1987). We Gotta Get Out of This Place: Popular Conservatism

and Postmodern Culture. New York: Routledge.

Grossberg, Lawrence. (1996). On Postmodernism and Articulation: An Interview with

Stuart Hall. In Kuan-Hsing Chen (Ed.), Stuart Hall: Critical Dialogues in

Cultural Studies (pp. 131-150). London: Routledge.


Guattari, Felix. (1977). La Révolution Moléculaire. Paris: Recherches.

Guattari, Felix. (1985). Machine abstraite et champ non-discursif. Retrieved 28 January,

2007, from http://www.revue-chimeres.org/pdf/850312.pdf

Guattari, Felix. (1987). De la production de la subjectivité. Retrieved 15 May, 2007,

from http://www.revue-chimeres.org/pdf/04chi03.pdf


Guattari, Felix. (1995). Chaosmosis: an Ethico-Aesthetic Paradigm (Paul Bains and

Julian Pefanis, Trans.). Sydney: Power Publications.

Guattari, Felix. (1996a). Microphysics of Power/Micropolitics of Desire. In Gary

Genosko (Ed.), The Guattari Reader (pp. 172-183). London: Blackwell.

Guattari, Felix. (1996b). Semiological Subjection, Semiotic Enslavement. In Gary

Genosko (Ed.), The Guattari Reader (pp. 141-147). London: Blackwell.

Guattari, Felix. (1996c). The Place of the Signifier in the Institution. In Gary Genosko

(Ed.), The Guattari Reader (pp. 148-157). London: Blackwell.

Gumbrecht, Hans Ulrich. (2004). The Production of Presence: What Meaning Cannot

Convey. Stanford: Stanford University Press.

Garrido, Maria and Halavais, Alexander. (2003). Mapping Networks of Support for the Zapatista

Movement: Applying Social Network Analysis to Study Contemporary Social

Movements. In M. McCaughey & M. Ayers (Eds.), Cyberactivism: Online

Activism in Theory and Practice (pp. 165-184). New York: Routledge.

Hall, S. (1980). Encoding/Decoding. In S. Hall, D. Hobson, A. Lowe and P. Willis (Eds.),

Culture, Media, Language: Working Papers in Cultural Studies (pp. 128-138).

London: Hutchinson.

Hanks, Steve and Spils, Daniel. (2006). Increases in Sales as a Measure of Interest.

Washington, D.C.: U.S. Patent and Trademark Office.

Hansen, Mark. (2000). Embodying Technesis: Technology Beyond Writing. Ann Arbor:

University of Michigan Press.

Hayles, N. Katherine. (1993). The Materiality of Informatics. Configurations, 1, 147-170.


Hayles, N. Katherine. (2003). Translating Media: Why We Should Rethink Textuality.

The Yale Journal of Criticism, 16(2), 263-290.

Heidegger, Martin. (1977). The Question Concerning Technology. In The Question

Concerning Technology and Other Essays (pp. 3-35). New York: Harper.

Herr, Bruce and Holloway, Todd. (2007). Visualizing the 'Power Struggle' in Wikipedia. New Scientist, 2605, 55. http://www.newscientist.com/article/mg19426041.600-power-struggle.html.

Hine, Christine. (2000). Virtual Ethnography. London: Sage.

Hjelmslev, Louis. (1971). Prolégomènes à une théorie du langage. Paris: Éditions de

minuit.

Holloway, Todd; Božičević, Miran and Börner, Katy. (2005). Analyzing and Visualizing the

Semantic Coverage of Wikipedia and its Authors.

http://arxiv.org/ftp/cs/papers/0512/0512085.pdf.

Humphreys, Ashlee. (2006). The Consumer as Foucauldian “Object of Knowledge”.

Social Science Computer Review, 24(3), 296-309.

Innis, Harold. (1951). The Bias of Communication. In The Bias of Communication (pp.

117-130). Toronto: University of Toronto Press.

Jacobi, Jennifer; Benson, Eric. (2000). System and Methods for Collaborative

Recommendations. Washington, D.C.: U.S. Patent and Trademark Office.

Jacobi, Jennifer; Benson, Eric; Linden, Gregory. (2001). Personalized Recommendations

of Items Represented within a Database. Washington, D.C.: U.S. Patent and

Trademark Office.


Jacobi, Jennifer; Benson, Eric; Linden, Gregory. (2001). Use of Electronic Shopping

Carts to Generate Personal Recommendations. Washington, D.C.: U.S. Patent

and Trademark Office.

Jenkins, Henry. (2006). Convergence Culture: Where Old and New Media Collide. New

York: New York University Press.

Jensen, Klaus Bruhn. (1995). The Social Semiotics of Mass Communication. London:

Sage.

Jesiek, Brent. (2003). Democratizing Software: Open Source, the Hacker Ethic, and Beyond. First Monday, 8(10).

http://www.firstmonday.org/issues/issue8_10/jesiek/.

Johnson, S. (1997). Interface Culture: How Technology Transforms the Way We Create

and Communicate. New York: Basic Books.

Kahn, R. and Kellner, D. (2004). New Media and Internet Activism: From the "Battle of

Seattle" to Blogging. New Media & Society, 6(1), 87-95.

Kirschenbaum, Matthew. (2003). Virtuality and VRML: Software Studies After

Manovich. Electronic Book Review.

http://www.electronicbookreview.com/thread/technocapitalism/morememory

Kittler, F. (1997). Gramophone, Film, Typewriter. Stanford: Stanford University Press.

Kittler, F. (1995). There Is No Software. Ctheory.

http://www.ctheory.net/text_file.asp?pick=74

Kittler, F. (1990). Discourse Networks. Stanford: Stanford University Press.


Kitzmann, Andreas. (2005). The Material Turn: Making Digital Media Real (Again).

Canadian Journal of Communication, 30(4), 681-686.

Langlois, Ganaele, and Greg Elmer. (2008). The Economies of Wikipedia: Open-Source

as Promotional Traffic. New Media and Society.

Latour, B. (1987). Science in Action: How to Follow Scientists and Engineers through

Society. Cambridge: Harvard University Press.

Latour, B. (1993). On Recalling ANT. Retrieved 14 January, 2005, from

www.comp.lancs.ac.uk/sociology/papers/Latour-Recalling-ANT.pdf

Latour, Bruno. (1993). We Have Never Been Modern. Cambridge: Harvard University

Press.

Latour, Bruno. (2005). Reassembling the Social: An Introduction to Actor-Network

Theory. Oxford: Oxford University Press.

Latour, Bruno. (1999). Pandora's Hope: Essays on the Reality of Science Studies. Cambridge:

Harvard University Press.

Callon, M. and Latour, B. (1981). Unscrewing the Big Leviathan: How Actors Macro-

Structure Reality and How Sociologists Help Them Do So. In K. Knorr-Cetina

& A. Cicourel (Eds.), Advances in Social Theory and Methodology: Toward an

Integration of Micro- and Macro-Sociologies (pp. 277-303). Boston: Routledge

& Kegan Paul.

Lazzarato, Maurizio. (2006). The Machine. Retrieved 15 June, 2006, from

http://transform.eipcp.net/transversal/1106/lazzarato/en


Lessig, L. (2001). The Future of Ideas: The Fate of the Commons in a Connected World.

New York: Vintage Books.

Lessig, L. (2005). Code 2.0. http://codev2.cc/.

Lessig, Lawrence. (1999). Code and Other Laws of Cyberspace. New York: Basic

Books.

Lih, Andrew. (2004). Wikipedia as Participatory Journalism: Reliable Sources? Metrics for Evaluating Collaborative Media as a News Resource.

http://journalism.utexas.edu/onlinejournalism/2004/papers/wikipedia.pdf.

Linden, Greg; Smith, Brent and York, Jeremy. (2003). Amazon.com Recommendations: Item-to-Item Collaborative Filtering. IEEE Internet Computing, 7(1), 76-80.

Linden, Gregory; Jacobi, Jennifer; Benson, Eric. (2001). Collaborative Recommendations

Using Item-to-Item Similarity Mappings. Washington, D.C.: U.S. Patent and

Trademark Office.

Linden, Gregory; Smith, Brent; Zada, Nida. (2005). Use of Product Viewing Histories of

Users to Identify Related Products. Washington, D.C.: U.S. Patent and

Trademark Office.

Lipovetsky, Gilles. (2000). The Contribution of Mass Media. Ethical Perspectives, 7(2-3),

133-138.

Lipovetsky, Gilles. (2002). The Empire of Fashion: Dressing Modern Democracy.

Princeton: Princeton University Press.

Longford, Graham. (2005). Pedagogies of Digital Citizenship and the Politics of Code.

Technê, 9(1). http://scholar.lib.vt.edu/ejournals/SPT/v9n1/longford.html


Lister, M.; Dovey, J.; Giddings, S.; Grant, I. and Kelly, K. (2003). New Media: A Critical

Introduction. London: Routledge.

Mackenzie, Adrian. (2006). Cutting Code: Software and Sociality. New York: Peter Lang

Publishing.

Manovich, L. (2000). The Language of New Media. Cambridge: MIT Press.

Marinaro, Tony. (2004). Searching for Profits with Amazon - Inside the Book and in the

Margins. Publishing Research Quarterly, 20(3), pp. 3-8.

McLuhan, Marshall. (1995). The Playboy Interview. In Eric McLuhan & Frank Zingrone (Eds.), Essential McLuhan (pp. 233-269). Toronto: House of Anansi.

Meyrowitz, Joshua. (1986). No Sense of Place: The Impact of Electronic Media on

Social Behavior. Oxford: Oxford University Press.

Meyrowitz, Joshua. (1993). Images of Media: Hidden Ferment and Harmony in the Field.

Journal of Communication, 43(3), 55-66.

Mosco, Vincent. (2004). The Digital Sublime: Myth, Power, and Cyberspace.

Cambridge: MIT Press.

Nelson, T. (1993). Literary Machines 93.1. Sausalito: Mindful Press.

Ortega, Ruben; Avery, John and Frederick, Robert. (2003). Search Query

Autocompletion. Washington, D.C.: U.S. Patent and Trademark Office.

Porter, Robert and Kerry-Ann Porter. (2003). Habermas and the Pragmatics of

Communication: A Deleuze-Guattarian Critique. Social Semiotics, 13(2), 129-

145.


Robins, K. and F. Webster. (1988). Cybernetic Capitalism: Information, Technology and

Everyday Life, from http://media.ankara.edu.tr/~erdogan/RobinsCybernetic.html

Rogers, Richard. (2004). Information Politics on the Web. Cambridge: MIT Press.

Seem, Mark and Guattari, Felix. (1974). Interview: Felix Guattari. Diacritics, 4(3), 38-

41.

Seigworth, Gregory J. and Wise, J. Macgregor. (2000). Introduction: Deleuze and

Guattari in Cultural Studies. Cultural Studies, 14(2), 193-146.

Weaver, Warren and Shannon, Claude. (1980). An Introduction to Information Theory: Symbols, Signals and Noise. New York: Dover Publications.

Slack, J. (1989). Contextualizing Technology. In B. Dervin, L. Grossberg, B. O'Keefe & E. Wartella (Eds.), Rethinking Communication, Vol. 2: Paradigm Exemplars (pp.

329-345). Thousand Oaks: Sage.

Slack, J. (1996). The Theory and Method of Articulation in Cultural Studies. In Kuan-

Hsing Chen (Ed.), Stuart Hall: Critical Dialogues in Cultural Studies (pp. 112-

130). London: Routledge.

Smith, Brent; Linden, Gregory and Zada, Nida. (2005). Content Personalization Based

on Actions Performed During a Current Browsing Session. Washington, D.C.:

U.S. Patent and Trademark Office.

Song, Zhengrong. (2005). Methods and Systems for Assisting Users in Purchasing Items.

Washington, D.C.: U.S. Patent and Trademark Office.


Spoerri, Anselm. (2007a). Visualizing the Overlap between the 100 Most Visited Pages on

Wikipedia for September 2006 to January 2007. First Monday 12(4).

http://www.firstmonday.org/issues/issue12_4/spoerri/index.html.

Spoerri, Anselm. (2007b). What is Popular on Wikipedia and Why. First Monday, 12(4).

http://www.firstmonday.org/issues/issue12_4/spoerri2/index.html.

Stake, R. E. (2005). Case Studies. In N. K. Denzin and Y. S. Lincoln (eds.), Handbook of

Qualitative Research. Thousand Oaks, CA: Sage.

Sterne, Jonathan. (1999). Thinking the Internet: Cultural Studies Versus the Millennium.

In S. Jones (Ed.), Doing Internet Research: Critical Issues and Methods for

Examining the Net, (pp. 257-288). London: Sage.

Terranova, Tiziana. (2004). Communication Beyond Meaning. Social Text, 80(22), 51-

73.

Terranova, Tiziana. (2000). Free Labor: Producing Culture for the Digital Economy. Social

Text 18(2): 33-58.

Weber, Samuel. (1976). Saussure and the Apparition of Language: The Critical

Perspective. MLN, 91(5), 913-938.

Weerawarana, Sanjiva et al. (2005). Web Services Platform Architecture: SOAP, WSDL,

WS-Policy, WS-Addressing, WS-BPEL, WS-Reliable Messaging, and More. New

York: Prentice Hall.

Wernick, A. (1999). No Future: Innis, Time, Sense, and Postmodernity. In C. Acland and

W. Buxton (Ed.), Harold Innis in the New Century: Reflections and Refractions

(pp. 261-280). Montreal and Kingston: McGill-Queen's University Press.


Whitman, Ronald; Scofield, Christopher. (2004). Search Query Refinement Using

Related Search Phrases. Washington, D.C.: U.S. Patent and Trademark Office.

Wikipedia. (2005a). ICANN. URL: http://en.wikipedia.org/wiki/ICANN (April 10, 2005)

Wikipedia. (2005b). Software. URL: http://en.wikipedia.org/wiki/Software (April 10,

2005)

Wikipedia. (2005c). Working Group on Internet Governance. URL:

http://en.wikipedia.org/wiki/Working_Group_on_Internet_Governance (April 10,

2005).

Williams, Raymond. (1975). The Technology and the Society. In Television: Technology and Cultural Form (pp. 9-31). New York: Schocken Books.

Slack, J. and Wise, J. (2002). Cultural Studies and Technology. In L. Lievrouw & S. Livingstone (Eds.), Handbook of New Media: Social Shaping and Consequences of ICTs (pp. 485-501). Thousand Oaks: Sage.

Wise, J. Macgregor. (1997). Exploring Technology and Social Space. London: Sage.