Technologies of abundance
There is no doubt that technology has played a significant role in the emergence of the contemporary information society. The chapter discusses how some of the major digital technologies and technological developments have broken down some boundaries of knowing and erected new ones. Specific attention is given to the development and impact of networks and networking, the personalisation of information technology from the 1980s onwards, and the idea of the usability of technologies and their ever deeper convergence. Finally, the chapter considers the impact and consequences of technologies and argues that the most fundamental change is not in the development of technologies as artefacts, but in their appropriation as tools and in the power of these appropriations to change our thinking.
The predominant boundary and force of change in the information society and digitality discourses is undeniably digital technology. Even if the technology is seen in the discourses from a variety of different perspectives, it is an issue that is difficult to bypass. One reason to start our exploration of the boundaries of our knowing from technology is the controversy that stems from its privileged position. Conservative information science literature tends to downplay the transformative role of technology by underlining the persistence of traditional values of the information profession in new technology environments. At the same time, many are inclined to see technology as the dominant propeller of change, which gives us no choice other than to adapt. Even if Rob Kling (1994) targeted his critique primarily at the popular and professional literature of the early 1990s, the tendency to portray digital technologies in utopian (and dystopian) rather than empirically grounded terms is still fairly typical. The Library 2.0 phenomenon of the mid and late 2000s illustrates the clash of these two attitudes. Some discussants see technology as the single most important aspect of future information services while others appropriate it as a marketing tool (Holmberg et al., 2009). In practice, the dystopian and utopian attitudes can become even more emphasised, for instance, in the way digitisation projects or changes to intellectual property legislation are motivated (Wormbs, 2010). Even if Rafael Capurro (1990) has a point in emphasising that technologies are complementary and overlapping, the polarising tendencies of the debate substantiate the claim that technologies make a difference and that consequently, as John Buschman (2009) writes, we must question and critique them.
In a historical perspective, past and contemporary technologies – photography, film and more recently the different forms of digital media – have affected the ways that texts are read, information is used and art is viewed. Lev Manovich (2001) writes about the language of new media, how film has its own language and how the emergence of digital media opens the language of media once again for redefinition. While giving us the possibility to redefine them, the technologies have simultaneously redefined us and our relation to media and information. Giovan Lanzara (2010) has observed how the introduction of computers and video recording has affected work in music and judicial expertise. He writes about the need for reflective translations from the old to the new and back in order to remediate objects, actions and representations properly to a new medium.
Paraphrasing theoretical propositions in anthropology and media theory, technology – in Friedrich Kittler’s (1999) sense of media technology – can be the determinant of our situation, or – as Jack Goody (1977) proposes – not precisely a determinant, but something that has an undeniable impact. Thinking back to the technological development of the last 100 years helps to understand its impact, but at the same time it is necessary to remember that contemporary society is not a result of a mere technological development. Society has shaped technology as well, and not all technology has shaped society to an equal extent. From the perspective of how people economise in their quest for knowing in the age of the social web, there are some technological trends that have had a more profound impact and a more comprehensive cognitive and cultural penetration than others. (Whitworth (2009: 162) uses the concepts of cognitive penetration and cognitive separation to make a distinction between exclusionary and inclusionary systems.) Some technologies have erected higher fences than others. I will now discuss in more detail some of the major technological influences of the contemporary era, and their impact on the emergence of boundaries and the evolution of how we come to know what we know. These are networking, personalisation, usability, ubiquity and convergence.
The Internet is based on a large number of different ideas, techniques and technologies. Its most central aspect is its capability to support non-centralised global communication between computers as an almost universal network. The ideas of a galactic network sketched by J.C.R. Licklider at the beginning of the 1960s led to the establishment of ARPANET, the predecessor of the Internet, in 1969 (O’Regan, 2008). The major novelty of ARPANET and the Internet was that, in contrast to their antecedents, they were packet-switched networks. Before, all communication between computers was based on direct contact between two nodes connected by a cable. Like the conventional postal service, a packet-switched network is based on the idea of sending and receiving information in packets together with information about their recipients. Computer files, music streams and email are all divided into small segments, transferred separately and merged again by the recipient. The new technique allowed the construction of larger and considerably simpler networks than before. The original ARPANET was based on the idea of separate networks built on different technologies, but it did not take long to conceive the idea of an open network technology. The result was the Transmission Control Protocol/Internet Protocol (TCP/IP), still used in the Internet of today. The new protocol made it possible to create a true inter-network network or, in short, Internet (O’Regan, 2008).
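The principle of dividing a message into addressed segments, transferring them separately and merging them again can be illustrated with a toy sketch. This is not a real protocol implementation, and the names (packetise, reassemble, the packet fields) are illustrative rather than drawn from any actual network stack; the point is only the mechanism of splitting, independent delivery and reassembly.

```python
# Toy illustration of packet switching: a message is split into addressed,
# numbered packets that may arrive out of order and are reassembled by the
# recipient. All names here are illustrative, not part of a real stack.
import random

PACKET_SIZE = 8  # bytes of payload per packet (an arbitrary toy value)

def packetise(message: bytes, recipient: str):
    """Split a message into packets, each carrying recipient and sequence number."""
    return [
        {"to": recipient, "seq": i, "payload": message[i:i + PACKET_SIZE]}
        for i in range(0, len(message), PACKET_SIZE)
    ]

def reassemble(packets):
    """Merge packets back into the original message, whatever their arrival order."""
    return b"".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

message = b"Computer files, music streams and email alike travel as packets."
packets = packetise(message, "host-b")
random.shuffle(packets)  # packets may take different routes and arrive out of order
assert reassemble(packets) == message
```

The sketch also hints at why such networks could be "larger and considerably simpler" than circuit-based ones: no node needs a dedicated end-to-end connection, only the ability to pass on individually addressed packets.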
During the 1970s and 1980s the Internet became a daily tool for a large group of researchers around the world and it slowly began to spread outside research institutions. The early success and popularity of email was a clear indication of the aspirations and needs of the Internet users of the day. The essentially social function of the Internet has not changed since. From the point of view of individual users, the major revolution started at the beginning of the 1990s when Tim Berners-Lee, then working as a researcher at CERN, the European Organisation for Nuclear Research, developed a hypertext-based internet service called the World Wide Web (WWW). Especially after the launch of the first graphical browser application, Mosaic, the new easy-to-use service rapidly increased in popularity. The implementation of support for images, audio and video provided opportunities for developing a plethora of new utilities and easier-to-use graphical user interfaces for existing services. The Internet, an inter-network network, had become the Internet that connected people around the globe.
The brief history of the Internet illustrates the sequence of changes in the contexts of computing and digital communication. Packet switching lifted the boundaries caused by direct connections between individual computers. Similarly, the unification of networking protocols helped to cross boundaries between individual networks. Services such as email and the WWW lowered the barriers of human communication. The Internet was not the first international network that united societies and individuals. Paved roads, caravan traffic, scheduled coach and railway connections, telegraph, radio, air traffic and television had similar effects before the Internet. They all have a common denominator, which relates to the crossing of the boundaries of communication and the premises of knowing. In contrast to the earlier networks and the early Internet, the major difference of the Internet age is that connectivity has proliferated. In the 1990s, Internet use spread from universities and research institutions to business and government. During the first decade of the twenty-first century the expansion continued to homes and the net became an integrated part of society, industry, commerce and everyday life. According to the Pew Internet & American Life Project, in 2000, 46 per cent of adult Americans used the Internet, 5 per cent had broadband access at home and 50 per cent had a mobile phone. In 2009, 79 per cent of adult Americans used the Internet, 63 per cent had broadband at home and 56 per cent used the Internet wirelessly (Rainie, 2010; Rainie and Horrigan, 2002).
The increased connectivity has had many consequences. Pier Cesare Rivoltella (2008) has suggested that networking has become the main means by which we interpret our culture. Even if it has not necessarily become the main way of interpreting everything, connectivity has undoubtedly had an impact on many subjects. Geographical place matters less in communication. It is possible to make phone calls and be in contact with essentially all information systems from anywhere, using secured connections over a standard internet connection. Employees can sit at their desks, at an airport, in a coffee shop or on a beach and use the same information systems all the time. Travellers can update their blogs and microblogs while sitting in a sailing boat and students can attend classes from another continent. The possibility of being online and connected from almost anywhere to anywhere has dramatically reshaped the traditional geographies of participation. The landscape of digital interaction looks very different from the landscape of physical participation. American students can participate in lectures organised by European universities; the digital resources of a museum are available not only in one physical building but all over the world. Politicians can be in contact with their voters and companies with their clients, regardless of their physical locations. Digital connectivity is reproducing and enhancing the effects of being together apart, as Susan Keller (1977) observed in her studies on the use of the telephone. The concept ‘near’ has been ascribed another meaning when distance is measured by an electronic or a digital rather than a physical yardstick.
Besides the apparent effect of lowering some boundaries, networking has raised and emphasised other borderlines. A boundary that has become increasingly apparent with the introduction of networking technologies is the network itself as a social place. Things happen increasingly ‘on the net’, which demarcates other locations outside the nexus of activity. As David Weinberger (2011) argues, the Internet is not merely a technology: we are the medium, and it is a location. Digital connectivity enables us to cross boundaries, but the boundary between the Internet and the outside world has become at least as impermeable as the earlier barriers of physical distance.
The concept of connectivity and William Dutton’s (2005) related notion of ‘reconfiguring access’ challenge the assumption that the Internet is essentially about information. According to Dutton, the key is the ability to control access to information. Besides factors such as geographical proximity, policies, rules and social practices, access depends on technologies and technology-related decisions, which Dutton describes as ‘digital choices’. The choices affect costs, proximity and authority structures between actors, the architecture of access and the shaping of gatekeeper positions. Digital choices can make access cheaper, but regardless of the price, the digital distribution of information changes the cost structure. Technology and connection cost more than before, but accessing an individual item is likely to cost considerably less. Physical proximity loses some of its earlier meaning, and the distance to a functioning Internet connection becomes the most significant physical distance. In the networks, information is distributed not only one-to-one or one-to-many, but also many-to-many and many-to-one (Huvila, 2010b). The abundance of connectivity has limited the perceived significance of earlier physical boundaries and dominating boundary objects such as libraries and local knowledge (Rosa et al., 2006; Rowlands et al., 2008). Different organisations and individuals become gatekeepers and obtain new power positions.
The consequences of these choices may be difficult to predict as their influence extends beyond the technology itself. Networking technologies are ‘stupid’ in all three senses in which Andersson (2010) considers the Internet to be stupid: it just relays information; people use it in ‘stupid’ ways (as perceived by other people); and it was designed in a very straightforward manner without detailed consideration of the wider impact of its adoption. Following Dutton’s (2007a) argument that the Internet is intrinsically a social phenomenon, the abundance of networking technology is an abundance of access and social networks rather than of the speed of physical or wireless connectivity. Our knowing is always related to our physical and imagined surroundings, to the geography of a particular landscape within which we operate, and we attempt to act as economically as we can. Networking has stretched earlier physical boundaries and radically changed the geography of this landscape of knowing for individuals and communities. At the same time, however, the radical stretching of the earlier, often physical, boundaries of knowing has made it more difficult to locate the new limits of what can be reached, what remains outside and where there is a boundary to be crossed.
Despite its social effects, the principal premise of networking technology is to bring computers closer to each other. As we have noted, digital connectivity has had wide-ranging social implications, but the underpinnings of these implications lie in another type of technology, which has developed in parallel with the expansion of computer networks. For a long time, there was very little that was personal about computer technology. Until the early 1980s, a ‘computer’ was a huge machine, the size of a room or a big closet, placed in a special computing centre. The devices had become smaller since the days of the Second World War, but the estimates that a few computers would be enough to satisfy global data-processing needs were closer to reality than our contemporary landscape of ubiquitous ICT. Today everyone in the developed world uses a multitude of computers and other devices with embedded computers daily.
A major step towards personal information technology was the introduction of the microprocessor at the beginning of the 1970s. Suddenly, the central functionality of a computer could be integrated onto a single microchip, making radically smaller and more affordable computers possible. Enthusiasts could soon build their own small computers and a few years later a large number of small personal computers were introduced to the market. In 1981, the personal computer entered the workplace, when IBM introduced the first IBM PC.
The capacity of personal computers improved rapidly during the 1980s, and together with the advance of graphical user interfaces and the introduction of the first truly portable computers, this made the devices fit for serious work. A continuation of the personalisation of computers was the rapid development of mobile telephony from the early 1990s onwards. Even those inhabitants of developing countries who possessed mobile phones now had the same amount of computing power as a computing centre had had in past decades. Higher-end smartphones and the introduction of small ultra-portable and tablet-sized computers have filled the gap between full-sized personal computers and mobile telephones.
In contrast to the networks, personal computing may seem to be an antithesis of connectivity, which it was during the 1980s before the expansion of the Internet and the widespread adoption of local and wide-area networking of personal computers. However, personal computing did break a number of boundaries, albeit very different ones from those broken by networking technologies and applications. In the early days, computing was a highly organised and hierarchical activity. State and research institutions and, to a lesser extent, large corporations owned, operated and controlled all the available processing power. Academic computing centres nurtured tribal collegiality and innovative use of technology for purposes that paved the way to the mass services of the turn of the millennium. Many of the standard applications and services of today – from the graphical user interface to virtual worlds, email, discussion forums and technologies such as web cameras and the computer mouse – have their roots in the computing laboratories of the 1960s and 1970s. The limited access to computing equipment and the need for specialist training limited the popularity and impact of these inventions for a long time, until the personalisation of computer technology allowed users to participate in technology consumption and, even more, to take charge of the technology.
From the point of view of the information landscape of today and tomorrow, the single most significant impact of personal computing technology may be argued to be the diminishing of the boundaries of common participation and the connectivity of the masses. At the same time, personal computing may be seen as a step towards individualisation comparable to the introduction of the affordable automobile some decades earlier. Individuals could themselves decide when to cross some of the most substantial boundaries of their lives and determine the perimeters of their social participation. The calculus of the economics of knowing many ordinary things changed radically on the eve of personal computing. Together with the reconfiguration of access a decade later, personalisation has made it possible to obtain and use information in wildly different ways from before. It would be an overstatement to claim that boundaries and boundary crossings have become less social than before, but in some cases ubiquitous personal access has limited our direct reliance on other people. Technology allows us to check train times without asking a train company’s member of staff, and to borrow a book from a library by using a loan point without interacting with a librarian. When we need answers to queries, we are more and more often expected to check a certain web page first before we are directed to ask a staff member. Instead of delimiting our possibilities on the basis of physical and formal restrictions of access, the personalisation of technologies has emphasised the constraining role of capabilities and, especially, the lack of capacity to help oneself.
Personalisation of information technology changed the idea of computers and their use. Applications of information technology diversified from their early uses in military and science to commerce and industry, and finally – from the 1970s onwards – to individual work and entertainment. At the same time, the perception of computers changed. Computers are no longer separate instruments. They are an integral part of all technology. Use of computers no longer requires specific expertise. On the contrary, computers are expected to be as user-friendly as any other piece of technology. The obverse of the changing role of computers is that we have been forced to accept new types of complexities and boundaries of work. We have also learned to tolerate shortcomings in software and hardware that would be unacceptable in any other type of technology.
Despite what has in many ways been a successful struggle to enhance the usability of technology, computers and computer programs are notoriously difficult to use. This is common knowledge. Or is it? Are websites and services that we use daily really too hard to use? Is it difficult to make a search using Google (http://www.google.com) or to perform a status update on Facebook (http://www.facebook.com)? Christine L. Borgman (2000) reminded us of the difficulty of creating easy-to-use computer applications. Computers are designed to handle a myriad of imaginable and unimaginable activities. The same interface, keyboard and mouse are used to control activity in multiple contexts, and the same physical movements are used for writing text and controlling an avatar in a three-dimensional virtual world. It is unfair to compare a computer or web service to a cashpoint that has been designed to serve a single function: dispensing notes.
Despite this seeming impossibility, it would be unfair to characterise many of the new web services of the post-dot-com-bubble era as difficult to use – at least at some level. If a person has ever used a computer, it would be an insult to assert that this person would find a typical web search engine hard to use. Anyone can write something in the inviting empty box and push the adjacent button. In any event, everyone can be taught to do that. There are three conceivable reasons why some applications have in fact become relatively easy to use. The first, perhaps a marginal one, is that user interface designers have mastered their skills and research in human-computer interaction has revealed important facts about the relationship between people and computers. Designers and developers know more about how to make complex user interfaces easier to use. The second is that despite the possibility of striving for complexity, the new web services have been inclined to do the opposite. The most successful services tend to have only one function, or one primary function with optional complexity. Google is about searching, Flickr (http://www.flickr.com) about uploading pictures, YouTube (http://www.youtube.com) about watching videos, and even the rather complex social networking sites tend to have a relatively easy basic set of functions. Borgman (2000) makes a similar comparison, noting that almost anyone is capable of answering a telephone or playing a film on a video recorder, but both devices have advanced functionality, such as call transfers or timed recording, which can be more difficult to master. The third and most important aspect is that we have been emancipated by the technology.
A significant part of the population has mastered or grown up with the highly unintuitive method of directing computer applications using a device called a mouse: using a hand to move a device on a flat surface to direct a cursor on a semi-vertical surface, where the device and the cursor are far from being aligned with each other. Similarly, it is only a slight exaggeration to claim that everybody knows what to do with a box and a button on a web page. In reality, almost everyone knows what to do with them.
A side-effect of the integration of computers in everyday life is that computers have become more transparent. There are no obvious boundaries between ordinary people and a computer, or between computers and the things they are supposed to do. We use services instead of computers. Computers follow us everywhere in the form of a smartphone, tablet or small laptop. Computer applications know where we are because of their embedded satellite navigators and information from the wireless networks. We get real-time information that is supposedly relevant at the precise location where we are. Simultaneously with locating us accurately in timespace (time and space), networked services have seemingly liberated us from their constraints. Many commercial and public services are available 24 hours a day, seven days a week, independent of our physical location.
The expectation of ease of use has developed during the short history of digital information management, but it became a more central issue when computers began to appear on desktops from the 1970s and 1980s onwards. Until then the problem was to teach computer operators how computers worked and how they should be programmed. The personalisation of computer technology meant that, at least in theory, computers should be educated to understand their users and their ways of thinking. The idea of human cognition as a premise of computing was already expressed by many of the pioneers, including Vannevar Bush, J.C.R. Licklider and Douglas Engelbart. Research in human-computer interaction grew rapidly during the 1980s (Nickerson and Landauer, 1997) to become one of the most studied topics in computer science during the following decade.
The usability of (old and) existing services is important, and it is an implicit assumption of the designers of new services and technologies that they should be usable. The WWW was an attempt to empower researchers to manage information. The rapid expansion of mobile computing carries traces of the same ideal of making information technology more present and approachable. The infrastructuralisation of computing is also present in memes like Web 2.0 and in the introduction of web-based counterparts of common productivity programs.
By making technology usable it is made as ‘convenient’ as possible. People are encouraged to forget their role as users and merely use things without actively reflecting on the presence of technologies or the physical location of the programs or data. A large number of empirical studies have underlined the significance of convenience as a central criterion for choosing specific sources of information (Connaway et al., 2011). A logical conclusion of this observation is to provide people with information in a convenient way. Michel Gensollen (2006) suggests further that our tendency to economise in information seeking is turning our experience of quality into an intermediary form between the traditional definite and pragmatic senses of quality. Despite their insightfulness, both observations warrant some remarks in the light of the economics of ordinary knowing. According to the theory, a preference for convenience and pragmatic solutions is a premise of seeking information. The challenge is that people can feel content for a variety of reasons and the experienced convenience may have as many rationales as the sense of adequate knowledge. The complexity (Byström and Järvelin, 1995) and perceived importance of tasks (e.g., Agarwal et al., 2011; O’Reilly, 1982) affect information seeking and use, as do the other aspects of their context. It may be too hasty to assume that if people seek information in particular locations on the Internet and find them convenient, a similar service in another context would be similarly convenient. As the theory of the economics of ordinary knowing suggests, the economies of convenience and goodness are highly conditional.
The paradox of usability is the continuing paradox of the complexity of easiness. Even if the usability of technologies has improved vastly during the past decades, the simultaneous increase in their complexity has been compounded by the growing number of assumptions made on behalf of users. The boundaries created by the difficulty of using technologies have been replaced by new boundaries that limit us to using technologies in specific but often implicit, rather than explicit and predetermined, ways.
One of the most distinctive characteristics of the WWW is the seamless integration of different forms of media. Integration is facilitated by the fundamental similarity of all digital data: on the web, everything is composed of series of 1s and 0s. The second major difference from earlier media environments is that a single instrument, the computer, is capable of processing and representing a large variety of different types of media. Computerisation has catalysed the integration of media forms and the evolution of the genres of information, and simultaneously facilitated the engagement of large groups of people from around the world. The involvement of ‘everyone’ helps to enrich web content and the convergence of media. Bloggers and tweeters can assume the role of an amateur journalist; readers can provide traditional news media with fresh video footage and photographs from the sites of news events. At the same time, services like Ushahidi (http://www.ushahidi.com) can be used to aggregate different types of material from the crowd into a single living report of an ongoing event.
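The fundamental similarity of all digital data can be made concrete with a small sketch. The fragments below (a piece of text, one RGB pixel and one audio sample) are invented for illustration; the point is simply that all three media types reduce to the same representation, which is what allows a single device to process and combine them.

```python
# Illustration of the chapter's point: on the web, text, images and sound
# are ultimately the same kind of thing -- sequences of 1s and 0s.

def to_bits(data: bytes) -> str:
    """Render any byte sequence as a string of 1s and 0s."""
    return "".join(f"{byte:08b}" for byte in data)

text = "media".encode("utf-8")          # a fragment of text
pixel = bytes([255, 0, 128])            # one RGB pixel of an imagined image
sample = (12000).to_bytes(2, "little")  # one 16-bit audio sample

# All three media types reduce to the same binary representation:
for data in (text, pixel, sample):
    assert set(to_bits(data)) <= {"0", "1"}

print(to_bits(pixel))  # -> 111111110000000010000000
```

Because the representations are interchangeable at this level, merging a photograph into a news report or a video stream into a web page is, technically, only a matter of arranging bytes, which is the material basis of the media convergence discussed here.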
AJAX and RSS were significant technological factors that influenced the rapid emergence of a large number of interactive web applications. Web 2.0 emerged as a popular umbrella term for these new services and a designation for an assumed profound change of the web as it was known. Even if the term was probably used for the first time by Darcy DiNucci in 1999 (see Ruiz, 2009), Web 2.0 is closely associated with Tim O’Reilly, who popularised it together with his colleagues in the title of a conference organised by O’Reilly Media in 2004. Web 2.0 has frequently been described as a buzzword without a clear shared understanding of its meaning. Tim Berners-Lee has criticised the notion as a restatement of what the WWW was supposed to be from the beginning (Laningham, 2006). Andrew Keen (2008) has been an ardent critic of the social impact of Web 2.0. He argues that the notion has created a cult of digital narcissism and a glorification of amateurism that undermines the value of expertise.
Even if the notion of Web 2.0 is easy to criticise and even ridicule as an empty buzzword, it embodies many of the central significations embedded in the techno-culture of the first decade of the twenty-first century. Michael Zimmer (2008) sees Web 2.0 as a part of the rhetoric of techno-cultural optimism. At the same time, it can be seen as a collective name for a heterogeneous group of incarnations of the same rhetoric. The services and the non-existing Web 2.0 as a new version of the old WWW are a result of optimistic expectations of the convergence of the social sphere of humanity and the diverse forms and genres of media and information on the web.
The convergence is not limited to the web. The ultimate convergence of media is with reality. The Memex of Vannevar Bush from 1945 was a form of convergence of information and reality. Ivan Sutherland (1965) envisioned a life-like information environment already in the 1960s. In his 1984 novel Neuromancer, William Gibson (1984) described a global graphical information network that converged directly with the minds of its users. It was not possible to come close to these visions until the 1990s, however. Only then did the technology begin to allow the development of reasonably life-like virtual and augmented reality systems that made it possible to combine actual reality with a reasonably similar virtual counterpart. Since then, virtual and augmented reality systems have developed by leaps and bounds. The difference between the two approaches is that a virtual reality (or world) takes us and information from physical reality to the realm of the virtual. Augmented reality does the opposite and takes the virtual to the actual, physical environment. Both represent convergence, but from two rather distinct points of view.
Expensive data suits and glasses have allowed the convergence of digital and real data for some time. More and more augmented reality features have been implemented in more casual technology. Smartphones with cameras can be used as windows to an augmented reality. The novelty of using a camera phone to watch three-dimensional models of teddy bears or geometrical objects placed on top of a real-time video image is not the teddy bear or the virtual rectangle, but the way in which information can be placed and accessed where it is needed. The heads-up displays (‘virtual’ screens with information projected directly into a person’s field of vision) used by military fighter pilots demonstrate the value of being able to access information directly in its primary context.
Visual integration of the digital and the physical has been one of the central aims of augmented reality research. Besides visuality, the physical world can be augmented by a multitude of data flows from different sources. Visions of intelligent machines and the semantic web are based on what Nolin (2010) calls ‘markism’. Convergence is extended to cover information about all conceivable living and non-living entities. Tim O’Reilly and John Battelle (2009) argue that, increasingly, ‘everything and everyone in the world casts an aura of data’ on the web. A techno-optimist ideal is a total convergence of these auras to create a comprehensive augmented reality that moves an ‘unnecessary’ cognitive burden from human beings to intelligent systems and networks.
Much like the great expectations related to the notion of usability, convergence is very much a combination of technological change and assumptions made about its social implications. Convergence is a very real phenomenon, as Margaret Mackey (2002: 193) demonstrates by showing how a group of youngsters navigated back and forth between different forms of media while working with a single ‘text’. The notion of convergence is used to refer to a pervasive fusion and interaction of different modes of communication and informing. There is a tendency to see convergence as an ultimate boundary ‘object’ that unifies the digital and physical landscapes of information and existence. Convergence has been suggested to change the relations between technologies, organisations, branches, markets, society and communities, content and users (Jenkins, 2004). At the same time, however, convergence itself creates new convergent entities that require a new kind of intensive visual and aural attention to the surrounding time-space. Besides lowering boundaries between converging forms of technologies and media, convergence may be argued to produce a new closed space that rules out other forms of media and technology.
According to theorists such as Kittler and Johnston (1997), a central aspect of technology is its role as an active determinant in the interplay of medium and message. Jay Bolter and Richard Grusin (1999) make similar observations in Remediation, their widely cited work on the effects of the representation of works and genres in new media. An underlying technology or medium is never a neutral carrier of information, even if it is represented as a mere facilitator or a platform, the latter a term heavily criticised by Tarleton Gillespie (2010). Lev Manovich (2001: 45) proposes that technology transcodes information into a particular language of its own. All efforts to deny the active role of technologies by casting them as platforms or utilities are themselves claims about that role. As Buschman (2009) reminds us, if technologies were truly neutral, they would be an unquestionable premise, but as long as we make assumptions about the consequences of technologies, they need to be critiqued and questioned.
It is fair to argue that much of the history of technology has been about balancing the positive and negative outcomes of increasing speed and empowering different groups of people to control their lives (Pursell, 2007: xiv, 30, 126, 211, 279, 289, 311, 350). The emphasis on different aspects may have changed over time, as has the understanding of what is considered to be speed and empowerment, and of who is empowered. In some sense, it may be correct that humankind is ‘slowing down’ (as Michaels, 2011, suggests) when the NASA space shuttle is retired from service and there is no successor to the Concorde, but this does not mean that a certain type of speedism has diminished in other domains of human activity such as the Internet. Jan Nolin (2010) has proposed that the Internet is driven by three distinct ideologies of speedism, boxism and markism, aiming respectively at speed, the segregation of information into discrete compartments, and the labelling of different types of information. Despite the fundamental differences between the approaches, it is fair to argue that they are all based on a preference for control and temporal effectiveness. Networking technologies, personal information technology, the emphasis on usability and convergence have positively enabled people to do new things and to do old things differently and more effectively. At the same time, the technologies have erected boundaries that can make it difficult to remember how people lived without mobile telephones, cars or television. The change is not related to the emergence of a particular technology or a family of technologies, however. The ideologies that drive technological change have a central role in shaping the agency of technology.
Luke Tredinnick (2008) writes about the difficulty of making a distinction between the agency and impact of technology and the cultural representation of technology. An overemphasis on the active role of technology trivialises the significance of how we as users of technology perceive and appropriate it as a part of our everyday lives. Technology produces as much culture as culture produces new technologies (ibid.).
It is both object and subject (Bentivegna, 2009: 18). Jörgen Skågeby (2011) discusses an example of the cultural pre-production, or pre-‘produsage’ (to use the concept of Axel Bruns, 2008), of new technologies in the context of the discussions on the Apple iPad before it went on sale in 2010. Skågeby shows how the expectations of the new technology ‘remediate’ – in Bolter and Grusin’s (1999) sense – experiences of earlier technologies and have an effect on the anticipated and actual user experience of the new product. Similarly, the politics, history and culture of particular technologies influence how they can be used outside the scope of their original and intended use. Andrew Whitworth (2009: 92–3) compares the difficulty of overcoming the predominantly passive mode of viewing television in educational uses of the same medium to a potentially similar effect of the computer. Even if educational television programming encouraged active engagement, the leisurely mode of hypnotic viewing is difficult to break. Whitworth suggests that the prevailing uses of digital technology can have a similar effect on its use in other contexts.
The mutual influence of cultures and technologies is not, however, as direct as the examples drawn from science fiction literature and the popular sociology of technology would seem to suggest. That satellite communication and modern digital information systems (quasi-Memexes) have become an integral part of the current infrastructural landscape is not the only outcome of Arthur C. Clarke’s anticipation of a communications satellite (Clarke, 1945) or Vannevar Bush’s vision of the Memex (Bush, 1945). In contrast, the role of technologies tends to be very different from the expectations advocated by enthusiasts and pessimists, as Marcus Leaning (2009) carefully demonstrates. The dichotomy of the negative and positive experiences of technology is a sign of pre-existing anxieties about anticipated change (Tredinnick, 2008). Techno-critics from the Luddites to Walter Benjamin (2008) and Paul Virilio (2004) have voiced their uneasiness about technologies, but in most cases the principal source of discomfort tends to spring from social and cultural rather than technological change. New technology is not dangerous per se. It is precarious as a device for causing anxiety, unemployment and cultural artefacts of inferior quality. The various schools of thought from cybernetics to transhumanism demonstrate that positive expectations extend far beyond the technology itself. The common denominator of both positive and negative views is that the extent of change is difficult to anticipate. Wolfgang Schivelbusch (1986) has described the most far-reaching side-effect of the introduction of the railway system as probably being a radical change in the way we experience time. With the onset of the railway, the speed of trains required for the first time that different communities along a railway line synchronise their clocks so that trains could run on schedule.
The scheduling of horse-drawn coaches before the railway had been at best rather approximate, and coaches ran slowly enough to compensate for even considerable local differences in time. The relation between the railway and the better synchronisation of timekeeping was not, however, a matter of direct causation. The railway system was as much a product of a demand for efficiency and speed as it was a force that changed how these two qualities were perceived after its introduction.
The consideration of time is equally appropriate to more recent technological transformations, because the Internet has been another major step on the same continuum of synchronisation – or, as Mika Pantzar (2010) has remarked, of the reconfiguration of the use of time – which, interestingly enough, has closely followed the hypotheses of Alvin Toffler (1970) from the early 1970s. Beyond the significance of time per se, the impact of the introduction of railways tells us even more about the difficulty of predicting the effects and extents of technological and cultural transformations. Like the railway, networking is not only a technology trend. It is closely related to the perception of a preferable configuration of society. Networks have many desirable qualities in the light of both the collectivistic ideology that prevailed in the early days of the Internet and the vastly different individualistic discourses of empowerment. As a technology and a social configuration, networks help individuals to cross differences in time and space. It is easier than ever to move across boundaries to engage in helping people who suffer from natural and human catastrophes, to communicate and to work together for causes ranging from open source software to societal activism. The abundance of connectivity has limited the significance of earlier physical boundaries and dominating boundary objects such as libraries and local experts (Rosa et al., 2006; Rowlands et al., 2008), but even more, this connectivity has evoked an impression of boundless possibilities. The abundance seems limitless even though we are curtailed by physicality, economic realities, our tendency to seek information in as economical a way as possible and a plethora of other factors stemming from our social and cultural context.
Like the network, the ideal of usability has permeated culture beyond the instrumental qualities of ICT. The expectation of usability extends to applications and structures that have been as expert-centric as the early computers. Library catalogues illustrate such a system. They were introduced as a tool for librarians to keep track of collections and to locate specific pieces of literature for library users. Until recently, libraries have placed considerable focus on educating users to ‘use’ libraries – to become novices in the trade of using a library and a library catalogue (Spiranec and Zorica, 2010). Despite the ambition to make online public access catalogues more accessible (Sauperl and Saye, 2009), they are still very much the domain of proficient catalogue users rather than of ‘everyone’ (Saarti and Raivio, 2011). The idea of opening catalogues to direct user input has been proposed with enthusiasm, but at the same time with an emphasis on the associated risks (Arch, 2007). The paradox of the library catalogue lies in its similarity to search engines. People expect the same features and the same ease of use from library catalogues that are prevalent in other types of web services (Connaway and Dickey, 2010). In Search Engine Society, Alexander Halavais (2008) described how search engines have become a default in society and how they determine the ways in which other technologies are supposed to function. The comparison between easy-to-use search engines and obscure library catalogues is, however, partly spurious and based on the assumed similarity of the two systems. The paradox of library catalogues is that the front end, the online public access catalogue, is very similar to a search engine, but the underlying catalogue is fundamentally different from the mass of web pages indexed by search engines. In contrast to the Internet, library catalogues are carefully structured and constructed for a particular purpose (Denton, 2007).
While the Internet contains an endless variety of texts written by and for ‘everyone’, a library catalogue can be characterised as a technology of regulation rather than of emancipation, as Gloria Leckie et al. (2009) have argued. Because of the regulatory attempts to retain the fine line between library professionals and the general public, attempts to improve usability have not necessarily been very profound.
The notion of usability embodies a genuine ideal of making information technology and its applications more approachable and fit for use. At the same time, the aspiration becomes a label that is enough to make services and technologies epitomise expectations beyond the real consequences of their ‘usability’. If a particular popular service such as a search engine is supposed to be easy to use, its usability is taken for granted and the qualities of all other similar and quasi-similar services are measured against that yardstick. In a sense, the very abstract notion of usability has become a measure of itself. Usability has become an example of a bypassed threshold in collective behaviour (Granovetter, 1978). A sense of ‘easiness’ is preferable because it has become a truism. Usability reached this tipping point a long time ago and has become a major instrument of the discourse of empowerment in the culture of individualistic participation.
The political nature of technologies makes it easy to see them in a negative light. Technologies such as the Internet, which are considered to be essential, continue to cause anxiety and doubts (Montgomery, 2007: 210) and, despite the promises of variety and empowerment, modern technology has led to systems that seem to constrict rather than liberate (Pursell, 2007: 316). At the same time, as Tredinnick (2008) remarks, the maliciousness of technology is counterintuitive to our everyday experience. Countless technologies have made and continue to make our lives easier than before, and do indeed provide the sensations of empowerment and speed embedded in the three ideological underpinnings of the Internet discussed by Nolin (2010). Paraphrasing the argument of Martin Heidegger (2001), the problem is the tendency to frame technologies technically according to their exploitability (Harman, 2010). Instead of the tools per se, the issue is their appropriation within a certain system of instrumentality. The object of critique should not be the technology itself, but our assumptions about it and its consequences. Technologies and systems are part of the same structuration process, in which technologies and human actors constitute each other, as Orlikowski (1992) has noted, but only within the framework of a particular discourse of appropriation. The problem with the proposed outcomes of technologies is that they are very difficult to measure and therefore there is a risk of returning to normative assumptions about them. If technologies are created for empowerment and speed, that is precisely the outcome that will be measured and achieved. As Neil Postman (1992) warned, we risk becoming blind to the ideological meanings of the technologies we use. The technology itself is not the source of positive or negative change.
Kentaro Toyama (2011) has suggested that the effect of information technology may be best explained in terms of the amplification of the intent and capacity of human and institutional stakeholders. Technology does not function as a substitute for existing deficiencies. Together with its representations, technology is at its best in amplifying the normative assumptions made about its role. There is no technology that does the contrary.
There is no doubt that technology is a fundamental premise of knowing in the age of the social web. Both old and new ICTs give us opportunities to be informed, to know and to act, and at the same time constrain those possibilities. The complexity of the relation between knowing and technology is that it is far from given. Computer networks and the Internet are not only technical communication infrastructures. They have had a central role in reconfiguring the experience of access and distance around the world. Personal computing is similarly a question of a profound personalisation of the digital environment. Usability and convergence have become cultural expectations beyond their original scope of technological devices and digital media.
Even if a central message of the prevailing techno-discourse is that of a libertarian empowerment and glorification of speed, technologies have simultaneously raised new boundaries that confine our opportunities for seeking information and knowing. A technology per se may be neutral, but the instrumental acts of bringing forth and using technologies make them active agents within the context of their emergence and exploitation. Expectations of the emerging capabilities of technology frame our outlook and the horizon within which we act beyond the context of the technology itself. Even if Douglas Rushkoff (2010) makes a relevant suggestion by urging us to take responsibility, to program instead of letting ourselves be programmed, we do not have boundless capabilities to do so. As Sherry Turkle (2005: 159) argued, technologies such as computers function in a dual role as machines and tools in the Marxist sense. As machines they constrain us and as tools they provide us with capabilities to act. Tendencies such as networking, personalisation, usability and convergence begin to live a life of their own and erect boundaries when they become measures in their own right. A network has become a boundary that is difficult to cross. Everything outside the network ceases to exist. Similarly, anything that is not personal or linkable to a personal domain loses significance. Usability becomes another boundary that prevents people from understanding the complexity of technologies and the information they are supposed to convey. Finally, the notion of convergence implies a promise of a total reconfiguration of boundaries that closes us within the confines of a perfect experience.
In contrast to the promises of unlimited access and freedom, technologies and technological transformation are only capable of amplifying and remediating our inherent capacities in the digital environment. The digital environment provides us with a new ground for cultivating our capabilities, but it is as bounded as earlier ones, even if the boundaries are very different. It is capable of amplifying some of our capabilities, but provides very little room for others. At the same time, the simultaneous amplification of similar voices may end up in a cacophony that is capable of little more than reasserting normative assumptions about the perceived quality of the technologies themselves.