From space to function
Having established the concept of space in the previous chapter, we now need to understand how to move from the conceptualisation of a space to implementing its infrastructure so that it is fit for purpose. The first three chapters of this book discussed what we need to have considered before we move from space to function. We need to understand:
So we now need to move from the idea of the space to a functional system. The following sections should not be seen as a methodology (Chapter 7, on putting it together, does this). Rather, they should be thought of as establishing a mindset for approaching implementation, one consistent with the ‘softly softly’ approach to Web 2.0 recommended by many consultants and advisory firms.1 It provides a framework to help understand where to let go, when to intervene and how to let this happen: how to move from the type of game we are playing and its purpose to the rules of engagement and the type of equipment and infrastructure we will need. We need to:
1. Understand the players and their positions in the overall process of moving from space to function. Who is responsible for design, how should it be managed, how much control is necessary, how much responsible autonomy is possible?
2. Determine the nature of the information to be stored, as information is the vehicle for achieving the purpose of the game (scoring). Is it prescriptive, descriptive, personal or emerging knowledge? This will help us decide if Web 2.0 tools are the appropriate tools to manage it. For example, prescriptive knowledge about emergency resuscitation, on which life depends, probably doesn’t belong in an open and democratic wiki.
3. Design the specific types of information objects and metadata which will constitute the content of the space: the first component of the game’s infrastructure. These will be the objects which constitute wiki pages, blog pages, social networking personal pages, semantic web and social tags, ratings and recommendations and so on.
4. Consider the flows which are required to move the information objects through the space towards achieving the purpose. We then identify the software functionality which will let us process the information in the required way. This is the second component of the game’s infrastructure.
Firstly let us look at the question of agency: who should be involved in the movement from space to function, who takes responsibility, who does the work of designing and implementing a Web 2.0 system in the enterprise? Much of the design and take-up of any tool lies in how its usefulness is conceptualised and translated into functional capabilities. The German philosopher Heidegger provides a deep insight into tools and their role in working life, which we shall pursue here.2
The value of a tool like a blog or wiki is derived from its role as a piece of functional infrastructure or equipment within a set of activities. The value of such equipment is constituted by its fitness for purpose: it exists in order to perform some function in terms of other related pieces of equipment within that space. Our perception of these tools is at its most immediate in their use, which is paradoxically when we are least aware of them: we are just using them. We simply do things with objects that are ready-at-hand and which, as expected, fulfil their function. We understand the things we do quite naturally and do not carry around explicit instructions to ourselves in our heads. Heidegger employs the example of using a hammer within the set of activities known as carpentry: when we need it, we simply pick the hammer up, place some nails on the wood and drive them in. Everything in this process belongs together; it is natural and non-conscious. We are aware of the instruments and the materials, but not in any intellectual or analytical way.3
This is how human activity mostly is: experience is immediate and we are one with reality most of the time; we are ‘in the world’, not just interested bystanders. In an attempt to capture our feeling of being alive more genuinely than preceding centuries of academic analysis had done, Heidegger’s philosophy focuses upon the immediacy of experience and our ‘oneness with reality’. Indeed, it is when we analyse and seek to articulate being in the world that we induce a kind of blindness: we become oblivious to the obvious. It is not until a work process breaks down in some way, because the tools do not suit what we are trying to achieve, that we become aware of what the tool is meant to achieve or indeed what sort of tool is needed. We are ‘thrown’ by an anomaly, a discomfort or something that doesn’t work.
In order to be adopted by knowledge workers, tool artefacts like those of Web 2.0 should be functionally fit for the purposes of the activities that occur within a space. They need to be ready-at-hand, supporting the required information flows and game moves in a way that is natural and almost unnoticeable: this requires a deep integration of the tools into what we actually do and how we perceive our work. These tools need to be designed and implemented to support activities which are often deeply tacit and themselves performed unconsciously. Remember, knowledge work is non-routine: it does not follow procedures and it is almost impossible to draw a clear connection between effort and value.
Classic systems analysis and design are disciplines generally exercised by others on our behalf. They seek to extract, analyse, decompose and model our activities so that computer functions can be specified, developed and integrated into our working lives, making us more efficient. But this often results in poor outcomes, for all sorts of reasons. In my view, one of the critical differences between standard application software technology and Web 2.0 is the mode in which the transition from space to function can occur. Because Web 2.0 tools are malleable and forgiving, the integration of the tools into working life can emerge. Therefore the first key mode of transition from the open field within a space to a set of functional tools which are ready-at-hand is emergence.
This emergence comes through the iterative application and adaptation of the tools in working life until they cease to break down, at which point they have reached a state of being ready-at-hand. Because the tools of Web 2.0 are simple and very often familiar, much of this adaptation and emergence can be decided by the artisan, the knowledge craftsman, the ultimate user of the tool. To be sure, advice is needed and some education will be necessary, in particular in information design. But these tools are used in the home, on the Internet, on personal mobile phones: they are consumer products, not intended for specialists. This is the second key mode of moving from space to function with Web 2.0: autonomy.
Finally the knowledge craftsman is not working in a Cartesian vacuum: he or she is not in a simple, disinterested cognitive relationship to a piece of knowledge. Craftsmen work with others in a consensually co-created set of meaningful activities: no single craftsman has the privileged blueprint of the right way to do things. The right way to do things is continually reinforced and reinvented by the group and its leaders. So any creation of a set of tools to support this ‘right way of doing things’ will be done by the group. This is the third key mode of moving from space to function with Web 2.0: it is collective and self-organising.
The point at which these characteristics come into effect in Web 2.0 projects will vary widely: some implementations will have their hands held until the last drops of self-determination have been squeezed out by the care and love of a paternal systems analyst. Others might wander aimlessly on the open field, breaking every law of good information management in an orgy of user-driven creativity. And yet others sit still and quiet, fixed to the spot by negligence-induced agoraphobia. In my view, the best default position for taking responsibility for design and implementation, the move from space to function, is one of emergence, autonomy and collective action on the part of the users of the space.4
This is not to say that advice, training, support and governance are not critical or cannot be called upon when required. Quite the contrary: adequate assistance in how to use the tools and how to apply them to business needs is critical. But the mode of delivering these should be open-ended rather than closed, peer-to-peer rather than instructional, exploratory rather than fully formed, on demand rather than according to a schedule. Building blocks, not completed structures, should be supplied to potential users. Web 2.0 is a platform, not a completed structure.
Many potential users of Web 2.0 spaces will need information on how they can use the software, what functions exist, what the pitfalls are and how to ensure the information in the space remains current and useful. It may take a passionate ‘champion’ within the organisation to promulgate and raise awareness of the tools. And it may well be that a consultant or business analyst observes that a particular business process or area of activity is a perfect candidate for Web 2.0 application and offers help and design guidance. But nonetheless, the factors that drive good design and successful adoption are often not on the functionality radar. Design, particularly of solutions to fuzzy and non-routine knowledge-based interactions, must often emerge in the interaction with the tool, not as the result of an analytical exercise. And adoption of the outcomes of this design may depend upon the very act of being responsible for it rather than upon the design being particularly good. Apart from ‘ownership’ of the outcome, there are other attributes of the social environment which co-evolve with the design activity: leadership and the participation of leaders, direct measurable contribution to business outcomes, trust and the evolution of other social institutions which drive collaborative behaviour.
There is another, more mundane argument for these principles. Even with the best of intentions, IT departments and specialists are simply another complicating factor: wherever it becomes necessary to involve them, there will be delays, miscommunication and a de facto sacrifice of some autonomy and self-determination. In my view, entry by potential users into the Web 2.0 space should be immediate: everything necessary to commence active use should, in principle, be present and available.
Finally, the degree of responsible autonomy depends upon the technical capability of the ‘designer-user’. Given the current penetration of technology in business, the home and in education, this capability is heightened from two ends: from one end, the tools of the Web 2.0 suite are a consumer item – robust, simple and used at home – so the degree of particular capability required is fairly undemanding. From the other side, the knowledge workforce is filled with people who are IT savvy, who already write complicated spreadsheets, who might have set up an MS- Access database on their laptop, who use Facebook, Wikipedia and comment on online newspapers. Wikis, blogs and social networking present little intellectual or technical difficulty for these people. So the initial challenge comes mainly in the area of information design and information management: being able to conceptualise the information to be produced in a way which will lead to effective, appropriate use of the tools and information which is useful, maintainable, accurate and up to date. How do we achieve this with Web 2.0 tools?
The knowledge in organisations consists of the mental schemas, mental models, explanatory frameworks and facts which enable us to order the world, predict consequences, take action and learn from the past. This can be classified in a number of ways. The most commonly used classification of knowledge is Michael Polanyi’s distinction between explicit and tacit knowledge. Polanyi (1891–1976) was a chemist and philosopher who, although a passionate believer in the superior value of objective positivist science, observed that scientific insight actually emerged in non-logical ways. Background ‘tacit’ knowledge, belief, gut feeling and commitment were clearly critical in the creation of ideas but had to be distinguished from the validation or test of any resulting scientific theories, which had to be clear, articulated and ‘stand on their own feet’ as it were. Explicit knowledge is written down or captured in some form, so that it can be easily understood, transferred and shared. Tacit knowledge, however, resides in the mind and describes knowledge which is hard or impossible to express and which reflects expertise, experience and know-how.
All organisations already use technologies to manage information and knowledge exchange and the communication flows which create and share it when it is of commercial or productive utility. In framing Web 2.0 to an organisation, it is critical to be able to explain clearly where it fits within an existing suite of tools and the circumstances in which Web 2.0 may provide a more appropriate instrument. For these purposes we take a fourfold knowledge taxonomy which is differentiated on the basis of what makes sense to information management within organisations. It is not intended to be philosophically watertight but is a heuristic and explanatory taxonomy.5 The four types of knowledge cover what are called:
Knowledge can of course change its type as circumstances change. Distinctive knowledge can be passed verbally from a mentor to members of a group, who discuss it in an online forum as emergent knowledge, apply it and make it part of their approach to work, and upgrade it to proprietary knowledge by placing it in a site wiki. Subsequently the knowledge becomes recognised as ‘best practice’ and is integrated into the procedures, thereby becoming prescriptive knowledge. Figure 5.1 summarises the knowledge types in this taxonomy and gives examples of types of tools which are appropriate for managing each type.
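The lifecycle just described – distinctive knowledge passed on by a mentor, discussed as emerging knowledge, captured as proprietary knowledge in a wiki and finally institutionalised as prescriptive knowledge – can be sketched as a simple promotion path. The following fragment is purely illustrative: the type names come from the taxonomy above, while the structure and function names are mine.

```python
from enum import Enum

class Knowledge(Enum):
    DISTINCTIVE = 1    # deep individual expertise
    EMERGING = 2       # socially constructed in discussion
    PROPRIETARY = 3    # the group's documented way of doing things
    PRESCRIPTIVE = 4   # authorised, normative procedure

# The promotion path sketched in the text: mentor -> forum -> wiki -> procedure.
PROMOTION_PATH = [Knowledge.DISTINCTIVE, Knowledge.EMERGING,
                  Knowledge.PROPRIETARY, Knowledge.PRESCRIPTIVE]

def promote(current):
    """Return the next stage in the lifecycle, or None once prescriptive."""
    i = PROMOTION_PATH.index(current)
    return PROMOTION_PATH[i + 1] if i + 1 < len(PROMOTION_PATH) else None
```

The one-directional path is of course a simplification: as the text notes, knowledge can change type as circumstances change, and in practice it may move in either direction.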
Just because knowledge might be universally accessible on a wiki or a blog (instead of in a paper report or in a corridor) does not automatically make it prescriptive, any more than shouting at a meeting means your point of view becomes law or will be taken seriously. Rather, it means that the processes of developing and sharing proprietary or prescriptive knowledge are taking place in Web 2.0 tools rather than in e-mails or conversations, with the added bonus of being widely available and open to scrutiny. Precisely this openness and scrutiny will of course dissuade some people from using the systems.
Proprietary prescriptive knowledge is definitive and normative. It lays down how staff must perform their tasks, how they must act to achieve their assigned goals, perhaps even how they must treat each other. This knowledge is highly explicit, controlled and generally contained in procedures, manuals, strategies, mission and value statements, organisational role descriptions and so forth. It can also be tacit in the form of unambiguous behavioural norms and values: laughing might be unforgivably frivolous here, a tie unacceptably uptight there. But it is usually explicit and must be externalised and available to all relevant staff at all times, and be unequivocal and definitive in its form of expression. Knowledge is already strongly shared and understood when it reaches this strength of institutionalisation. This knowledge is usually authorised and legitimated through a formal process of consultation or management fiat, approved, signed off and published so that universal, concurrent access to an authorised version is guaranteed. This knowledge is close to being reified, a concept used often by Marx to describe ideas which, although of human creation, appear to have an existence that is unassailable and non-discussable, independent of criticism: his favourite example was God.
The requirement to develop and manage versions of procedures, gain expert input, receive management authorisation and train users means this is a slower-moving and conservative type of knowledge. There is a need for all employees to understand and follow universal, normative, controlled procedures, not the least argument for which is to mandate safe behaviour and the avoidance of hazards under a duty of care. Therefore document management systems, workflow management systems and formal publication tools such as the intranet are the ideal tools to manage this kind of corporate memory in most large organisations. If wikis and blogs were to be used as publishing vehicles for this information, one would have to restrict the ability to change such prescriptive information to authorised people in order to retain control over content.
The managerial imperative for control and optimisation makes many leaders suspicious of wikis and blogs. Immediate publication by an individual bypasses the normal controls, and erroneous information on a wiki can appear authoritative and prescriptive. This may have serious consequences, not only for productivity but for human life, for example if it is incorrect information about how to dismantle large machinery or construct scaffolding for multi-storey building construction. The question is not only one of control of versions of the ‘truth’; in some industries it is one of unambiguously distinguishing normative instruction from other sorts of information. While scepticism and judgment are applied regularly by users in the Internet sphere, this is not necessarily replicated in organisations.
Proprietary descriptive knowledge is about how tasks are usually done, can be done or might be done, providing a description of operations and heuristics (or rules of thumb) that have been accumulated within groups over time: ‘it’s the way we do things here’. It is ‘proprietary’ in that it represents the organisation’s own way of doing things (although similar problems, regulations and environments often lead to similar solutions in other firms, of course). It is created as individuals or groups work on projects, operations or allocated tasks and socialise each other into these ways of working. This knowledge is generally stored in people’s heads as tacit expertise and know-how, but it can be shared by being written into reports, meeting minutes, e-mails, additions to procedures, notebooks and so on. Generally it is externalised in conversation, through the socialisation of new hires, through meetings or planning events, and so on. This is knowledge of business value, but it is often restricted to specific groups which have a particular responsibility or which are physically separated and develop individual solutions to a common problem over time.
Proprietary descriptive knowledge often remains tacit and manifest in group behaviour but represents the most common type of actionable knowledge found in firms. It can be shared via e-mails, intranet websites and document management systems or contained in archives, but wikis are an ideal tool for managing and sharing this kind of knowledge. Wikis have low barriers to entry, they are easy to use and multiple users have the ability to change content from multiple sites at any time. In some senses it is precisely because they are usually not authorised or signed off that wiki entries are most appropriate for this knowledge. The social and organisational effort in mandating and certifying information is very high and the rate slow; wikis support a pulse and tempo which match everyday work and the contents often describe ‘the best way we have of doing things at the moment’.
Wikis, as tools which allow collaboration and synthesis of knowledge under page names and conceptual categories, are ideal tools for the development, location and use of proprietary knowledge. However, there are important issues to consider with the legitimacy of knowledge in organisational wikis and the impact of errors and misinformation.
Distinctive knowledge is deep knowledge coming from many years of experience and is usually stored only in a person’s mind. This knowledge is rare and valuable, but its value is difficult to ascertain and there is no relation between the volume of the knowledge and its value at a given time. Its holders are often known as ‘gurus’ and are valued for their insight. Distinctive knowledge is generally accumulated over a long period, either through internalisation and socialisation in a sphere of activity (or within all parts of an organisation over a long period) or through careful in-depth study and research. It is overwhelmingly kept in the heads of individuals; the knowledge is often voluminous and highly tacit, often not obvious even to its holder; it is difficult to capture, perhaps even impossible, but it will often be accessed when these individuals are questioned for solutions or insights, triggering a specific response which can be extremely valuable. This knowledge has not been externalised or objectified to the extent that it is common to, or assumed by, a significant number of others.
Distinctive knowledge is usually shared by the guru in conversation and stories, or simply by being observed on the job in the way he or she attacks problems and tasks. It is seldom stored, but when it is, it takes the form of organisational case histories and stories. Some of it might be captured as video-recorded oral debriefs, narrated as lectures or by using the experts as trainers. Web 2.0 enters the scene by providing, first, a forum for externalisation: a blog is the ideal vehicle for expressing distinctive knowledge, but of course it is limited by many personal and social institutions such as modesty, shyness or lack of time to become familiar with the technology. The oral debriefs, stories or lectures can be captured, stored, classified and published as learning objects embedded within wiki pages, constituting a kind of organisational ‘podcast’ or even a corporate ‘YouTube’. If transcribed, the recording can be inserted into wiki articles as text which can be upgraded and changed by others in the future. Blogs provide a vehicle for an expert to express and develop an idea which they personally consider important.
Emerging knowledge becomes manifest when articulated and socially constructed by groups. This knowledge is not necessarily in one head but emerges through the combination of knowledge held by different people. At any point in time in an organisation, knowledge is in flux towards becoming proprietary and at the same time is being acted upon. It is an outcome of a social process: several members contribute and through interaction create a way of understanding or doing things, find a solution to a business problem or make things clearer to themselves. Emerging knowledge represents potential rather than discrete information or facts. When three people engage in a discussion of facts A, B and C to create new ideas D and E, the knowledge is a complex, responsive, interactive process rather than an entity.6 The knowledge is objectivised and legitimated locally when participants agree, but has no authority wider than their own consensus. It is developed further into organisational knowledge by being promulgated and accessed through the managerial ability to create forums and opportunities for bringing the right protagonists with diverse knowledge into a single discussion space.
This emerging, often interim and ‘becoming’ knowledge is created and stored in the minds and conversations of formal and professional networks, tea rooms, pubs and communities of practice. With the crowdsourcing possibilities of the Internet, it can even take place on specialised websites. Conversations at event reviews and meetings, in corridors and breakouts, continually create new knowledge while reinforcing the old as they combine in new ways and for new purposes. Electronic forums, e-mail, video and teleconferencing have been key technologies for supporting these interactions between people who are displaced in time and space.
Technologies such as instant chat, video-conferencing or the phone are generally transient and unstructured: as the Roman poet Catullus put it, it is as if written on the wind or swift water. E-mail is fragmented and personal. Electronic forums can capture the exchanges but are typically issue-based, prompted by a specific question within an overall community title. Forum technology is generally issue-focused and lacks ways of organising and synthesising the subsequent conversation under a discipline or knowledge category. This too leads to fragmentation, so these tools do not represent a good option for turning emergent knowledge into coherent, proprietary knowledge.
Web 2.0 technologies provide strong solutions to these drawbacks. Blogs and wikis, and at an even faster rate Twitter, are very interactive and conversational. Wikis can link threaded discussions directly to a knowledge page or classify knowledge in conversations according to known categories. Each wiki page has a talk page, which can be in threaded discussion format, which links the conversational contributions to a specific knowledge object (i.e. proprietary knowledge item). The emergence of knowledge is coherent and stored, and the reasoning behind certain solutions can be seen by future users.
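The coupling of a conversation to a knowledge object that wikis provide can be modelled in a few lines. This is a toy sketch, not any wiki product’s actual API: each page carries its content (the proprietary knowledge item) and a talk thread (the emerging knowledge), and a contributor can synthesise the thread back into the page.

```python
class WikiPage:
    """Toy model: a knowledge object with an attached talk thread."""

    def __init__(self, title):
        self.title = title
        self.content = ""   # the proprietary knowledge item
        self.talk = []      # threaded discussion: emerging knowledge

    def discuss(self, author, comment):
        # Conversational contributions stay linked to this specific page.
        self.talk.append((author, comment))

    def synthesise(self, new_content):
        # A contributor folds the conversation back into the page itself;
        # the talk thread remains as the reasoning behind the solution.
        self.content = new_content
```

The point of the model is simply that the talk thread persists alongside the page, so future users can see why the content reads as it does.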
Within the boundaries of a certain space we can enumerate the flows of information that are required to achieve the purpose of the overall activity. In a process of progressive elaboration, we need to specify the activities performed in a particular space. For example, an encyclopaedia space will probably have a set of flows that is fairly constrained and standard: encyclopaedia articles in any organisation or context will have similar flows and layouts, although of course decisions need to be made as to whether the flows are more like Wikipedia’s than Britannica’s. Some spaces may have very few activities: for example, an advisory space via a blog might just be considered a vehicle for publication by a group (similar to an intranet home page) or opining (as performed by an expert blogger), in which case very little analysis is required. But usually even the simplest activity will require some process analysis and the identification of key events. So some kind of group planning session will usually be needed to develop a picture of what activities the wiki, blog or social networking site will need to support. Let’s look at a case study of a change implementation team in a large engineering organisation.
A project team is responsible for implementing document and records management standards in a large organisation through the use of a sophisticated document management system (DMS). They are moving through each department, developing DMS folder structures, training people in how to use its functions for storing and versioning controlled documents, answering questions and providing support. Most communication is via e-mail, telephone, training sessions and face-to-face meetings. Progress is slow and cumbersome and they have to repeat themselves for each group. So they decide to move to the wiki for information distribution and interaction. Within a two-hour group meeting, they model these flows on a whiteboard and come up with the following list:
The project identifies a number of different spaces – advisory, project and collaboration, for example – which will engage different groups of people with different rules and standards. The information contained in the spaces is a mix of prescriptive, descriptive, distinctive and emerging knowledge, and each is handled differently. Prescriptive information is kept on the intranet content management system, the personnel address list of the organisation or within the new document management system, but is linked to directly from the wiki. Descriptive knowledge, such as project-specific dates or hints and tips, is entered directly into wiki pages by the support team. Emerging knowledge is developed and stored via the comments and user forum pages associated with each wiki page. Distinctive knowledge is not stored initially in the wiki – it is kept in the heads of people such as the software experts – but these ‘gurus’ can be easily found and activated through their subscriptions: users place questions on the ‘questions’ page, which triggers the experts to respond. The responses are posted on the FAQ page, and users are directed to the link which directly answers their question.
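The team’s routing decisions can be summarised as a lookup from knowledge type to storage location. The table below simply restates the paragraph above in code; the structure and function name are mine, added for illustration.

```python
# Routing table restating the case-study decisions: where each
# type of knowledge lives in this particular wiki design.
ROUTING = {
    "prescriptive": "content management system / DMS, linked from the wiki",
    "descriptive":  "wiki pages, entered directly by the support team",
    "emerging":     "comments and user forum pages attached to each wiki page",
    "distinctive":  "experts' heads, reached via the questions page and FAQ",
}

def store_for(knowledge_type):
    """Look up where a given type of knowledge is kept in this design."""
    return ROUTING[knowledge_type.lower()]
```

A table like this is also a useful planning artefact in its own right: it forces the group to make an explicit decision for each knowledge type rather than letting everything default to the wiki.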
This is a sophisticated communication and collaboration environment, but in reality it took less time to implement than the planning meeting. The wiki pages were set up with immediate effect, links to the information (much of which already existed but was dispersed throughout the organisation) were entered, and other information, previously sent as e-mail attachments, was uploaded. The team filled in the gaps. From then on, any phone or e-mail queries were referred first to these wiki pages. The amount of time needed to explain things verbally dropped dramatically, the number of reported errors fell as people learned to use the systems more quickly, and the reworking of folder structures reduced substantially as more people could comment on them and identify problems prior to implementation.
It was particularly helpful that the wiki was an enterprise system – that is, all staff automatically had full read/write access. All members of the implementation team could update the information for users, and any user could read or comment on the information. All administration was independent of the technical department. All support team members were notified of changes to a wiki page, including new comments, which took place in a threaded forum format associated with each information page.
After moving from space to a more detailed consideration of the flows within the space, we need to look at which software technology will actually support the flows: the knowledge transformation processes which are the basis for all knowledge work. How will information be created, how will it be shared, how will it be classified, how will the right people be involved in a conversation or discussion, how will people be told something has changed, how will it be legitimated, how will we know it is true? Unfortunately, it is not possible to enumerate a list of blog, wiki or social networking functions here. First, these are extremely numerous and products vary widely in their implementation. Secondly, products are constantly evolving and improving. Finally, products and tools are converging and integrating functions from other ‘tool types’. Therefore you will need to refer to the relevant documentation for the product you are considering using.
There is of course a limited set of functions within the technology: how can this set meet every possible demand? It seems to me important at this stage to be able to articulate the requirements for knowledge exchange in simple language associated with the specific information. Table 5.1 gives an example of the correspondence between flows and wiki functions, and between knowledge types and controls, for some of the flows in the above case study.
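The shape of such a flow-to-function mapping can be sketched as a small data structure. The rows below are hypothetical examples in the spirit of the case study, not a reproduction of Table 5.1.

```python
from collections import namedtuple

# One row per flow: which wiki function carries it, what kind of
# knowledge it moves, and what control applies. All rows are
# illustrative inventions, not the book's actual table.
Row = namedtuple("Row", ["flow", "wiki_function", "knowledge_type", "control"])

TABLE = [
    Row("publish DMS procedures", "external link from wiki page",
        "prescriptive", "DMS version control and sign-off"),
    Row("share hints and tips", "wiki page edit",
        "descriptive", "open to support team"),
    Row("discuss folder structures", "talk/comment page",
        "emerging", "open to all staff"),
    Row("answer expert questions", "questions page linked to FAQ",
        "distinctive", "expert responds on subscription"),
]

def functions_for(knowledge_type):
    """List the wiki functions used for a given knowledge type."""
    return [r.wiki_function for r in TABLE if r.knowledge_type == knowledge_type]
```

Writing the requirements down in this flat, plain-language form keeps the discussion anchored to the information itself rather than to product feature lists.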
But not all functions are immediately or intuitively available to users. Here is an example. A department within a large construction company decided to use the corporate wiki, running the MediaWiki software, to distribute information to its members instead of e-mail. This had many advantages: the information was available immediately and could easily be updated, and even discussed or amended by group members. The information concerned new procedures, meetings and conferences. A notification of new information could easily be placed by the group administrator on the group’s RSS feed, which was a special type of page within MediaWiki. In order to be notified, every member of the group had to subscribe to the group’s RSS feed, but this subscription had to be initiated by the receiver. How could this be guaranteed? A simple fix was needed: as it happened, a MediaWiki extension called ‘Who Is Watching’ was found which allowed one person to add other people to the watchlist for a page, thereby ensuring they were notified when something changed on that page (see Figure 5.2). This was found in MediaWiki’s extension list by a member of the IT team and illustrates that, although autonomy and self-determination might be crucial for adoption, technical expertise and advice should never be too far away.
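The behaviour the extension adds can be modelled abstractly: instead of each reader having to opt in, one administrator places colleagues on a page’s watch list so that change notifications reach everyone. This sketch models only the idea, not MediaWiki’s or the extension’s actual interfaces; all names are mine.

```python
class WatchedPage:
    """Toy model of a page whose watchers are notified on change."""

    def __init__(self, name):
        self.name = name
        self.watchers = set()

    def watch(self, user):
        # The normal case: a reader opts in for themselves.
        self.watchers.add(user)

    def add_watchers(self, admin, users):
        # What the extension enables: one person subscribes others,
        # so notification no longer depends on each receiver opting in.
        self.watchers.add(admin)
        self.watchers.update(users)

    def notify_on_change(self):
        """Return who would receive a notification for a change."""
        return sorted(self.watchers)
```

The design point is the asymmetry: guaranteed delivery requires that subscription be something one responsible person can do on behalf of the group.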
Figure 5.2 The Mediawiki extensions list showing ‘Who Is Watching’ extension Source: http://www.mediawiki.org/wiki/Extension_Matrix
Having understood the flows and the knowledge type which will be stored or linked to from within the Web 2.0 tool, we need to define more precisely the information objects and their layout. Information design within Web 2.0 is about establishing a framework so that information will grow in an orderly way, making it simpler for workers to create, update, locate and use. Generally, a group of knowledge workers who need to collaborate or create information will have a language or vocabulary which defines their sphere of activity and provides the conceptual backbone for knowledge to grow within and upon. At certain times (which may or may not be routine or regular) or at points within a process or activity, information is needed or should be added to the space. It is therefore generally necessary at the beginning to decide upon the key information objects around which information will cluster. Some objects might be information products, the outputs of collaboration, such as a monthly progress report, a proposal in response to a request for tenders or a document which analyses a reduction in sales in a certain area. Some objects are persistent information objects which will grow over time: information about complex work machines, business process descriptions, particular customers and so on. Some of these objects will be updated as part of a business process, for example a customer contacts page. Others will not be part of a business process and are updated when new knowledge or experience is accumulated, for example an article about a particular enzyme or geological formation. This is the great advantage of wikis and blogs in particular: they can be about anything.
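The idea of an information object carrying agreed metadata can be sketched briefly. The field names below are hypothetical, chosen to mirror the distinctions just made (object type, tags, and whether updates happen as part of a business process or as knowledge accumulates):

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional


@dataclass
class InfoObject:
    """A persistent information object in the space: content plus
    the metadata that lets it be created, located and maintained."""
    title: str
    object_type: str                # e.g. "progress report", "customer page"
    tags: List[str] = field(default_factory=list)
    owner: str = ""                 # person or group responsible
    part_of_process: bool = False   # updated routinely vs. as knowledge accrues
    last_updated: Optional[date] = None


# a monthly progress report, updated as part of a business process
report = InfoObject("June progress report", "progress report",
                    tags=["DMS Project"], part_of_process=True)
```

Deciding these fields up front is exactly the framework that lets information grow in an orderly way rather than accrete haphazardly.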
Another important part of information design is the creation of a set of normative tags, a semantic web which forms the conceptual skeleton of the system of thought within the space. These need to be agreed within the group running the space. They may or may not be linked to each other, but it must at least be possible to tag any of the basic information objects with the appropriate standard term. So, for example, the previous case might have defined the standard tags shown in Table 5.2: DMS Project, DMS Training, DMS Help, DMS User Group Page.
| Activity | Tag |
|---|---|
| Publish standards | DMS Help |
| Publish schedule | DMS Project |
| List contact details | DMS Project |
| User group draft folder structure | DMS User Group Page |
| Frequently asked questions | DMS Help |
| Locate all project pages | DMS Project |
These might be linked together into the category tree shown in Figure 5.3. Someone examining a DMS Help page can navigate via the conceptual category tree of tags contained in the page to the DMS Project Page and of course then, via the tree, to any other page in the project.
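This kind of navigation can be modelled in a few lines. The parent relations and page tags below are taken from the DMS example (the data structures themselves are an illustrative sketch, not any particular wiki’s implementation):

```python
# category tree from the DMS example: child tag -> parent tag
PARENTS = {
    "DMS Training": "DMS Project",
    "DMS Help": "DMS Project",
    "DMS User Group Page": "DMS Project",
}

# each page carries a standard tag from the agreed set
PAGES = {
    "Publish standards": "DMS Help",
    "Publish schedule": "DMS Project",
    "List contact details": "DMS Project",
    "User group draft folder structure": "DMS User Group Page",
    "Frequently asked questions": "DMS Help",
    "Locate all project pages": "DMS Project",
}


def ancestors(tag):
    """Walk up the category tree from a tag to the root."""
    chain = [tag]
    while chain[-1] in PARENTS:
        chain.append(PARENTS[chain[-1]])
    return chain


def reachable_pages(tag):
    """All pages tagged with `tag` or any of its ancestors --
    what a reader can reach by climbing the tree from a page."""
    tags = set(ancestors(tag))
    return sorted(p for p, t in PAGES.items() if t in tags)
```

So a reader on a DMS Help page can climb via `ancestors` to DMS Project and from there reach every page carrying a project tag, which is the navigation Figure 5.3 depicts.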
So information design is a very important part of the implementation of a Web 2.0 space. It is critical to consider the types of information object, the level of detail required, the conceptual classification (or tag) for each information object and the relationships between these classifications.
The point of this chapter has been to give an idea of the sorts of decisions that need to be made in moving from the conceptualisation of purpose and space to a set of rules and functions which will allow information to be processed and flow in a way that achieves the purposes of the space but within boundaries and limits. Wittgenstein’s ‘game’ metaphor is particularly useful in this endeavour: what constitutes the boundary of the space, the goalposts, the ball and the passing sequences? What is a good passage of play and what is not? What is a move at all in the game and what is not? And the games in Web 2.0 spaces can be adapted by the players as circumstances change.
Of course there are many methods for systems analysis and information design which may serve the same purpose as the idea of space and the game played within it. But it seems to me that with social software the objective must be not only to provide software function which is familiar and easy to use, but also methods and frameworks for design and implementation which are already partially embedded in the understanding of the participants. I hope space, flow and the paraphernalia of the game might provide some of that liberation.
1. For example, Gartner, McKinsey, Forrester.
3. This is well put by Michael Polanyi: ‘When we use a hammer to drive a nail, we attend to both nail and hammer, but in a different way. […] The difference may be stated by saying that the latter (i.e. hammers) are not like the nail, objects of our attention, but instruments of it. They are not watched in themselves; we watch something else while keeping intensely aware of them. I have a subsidiary awareness of the feeling in the palm of my hand which is merged into my focal awareness of my driving the nail’ (Polanyi, 1973: 55).
4. This is not as scary as it sounds. Many studies have shown the measurable productivity benefits to be gained from full, democratic participation in work process design, e.g. Coch and French’s (1948) classic study of clothing manufacture.