Appendix: A case study
This story is set in a fertiliser and sands processing company operating in the central-north USA, which exports most of its products to overseas markets, including China. It is a complex, enthralling and sometimes brutal environment, dominated by harsh geography and the unforgiving edges of the industry. The story straddles two epochs in a single year, separated by the schism of the global financial crisis of 2008. The first epoch was one of rapid, almost breathless, expansion in productive capacity to meet the world's hunger for natural resources: achieving 12 per cent annual growth in GDP, building the equivalent of a new city of 2 million people each month, manufacturing half the world's televisions and most of its pillowslips, China was the locomotive of global growth. The worst imaginable sin for any processing company was to have any resource left in the ground or in the warehouse at the end of this orgy of expansion. Then, within weeks, in September 2008, the financial crisis broke into the open: global lines of credit disappeared, stock values collapsed, and the US real estate and housing construction markets went into freefall, taking consumer sentiment with them. Suddenly, expansion projects in almost every such processing company around the world were being cancelled or frozen and contractors terminated. Talk of commodities' 'super cycles' and of China's 'decoupling' from dependence upon the United States withered.
The two epochs reflected very different concerns and motivations. The core challenges of the expansionist epoch were project speed, production ramp-up, a skills shortage and high staff turnover, especially among tradespeople and engineers. After the crisis, the priority shifted to cost control, controlled project quiescence, the maintenance of jobs, positioning for future growth and the management of uncertainty.
In retrospect, the Web 2.0 story is of course a very small part of this drama. But it had its moments of grand aspiration, when the participants could see what would happen if the tools were used to capacity and were even prepared to suspend their natural pessimism. The project was observed closely by technical managers for its capability to 'add value' to the business, and business managers could generally see the potential for shortening and simplifying communications processes. But the rapid adoption implied by the cheapness, simplicity, accessibility and patent usefulness of the technology did not eventuate, at least not in the short term. Indeed, in the post-crisis cost-cutting, this lag in quick payback may lead to the closure of what is perceived as a non-core social experiment. But the small story here has some big lessons and provides some insights which may serve to qualify expectations of the technology and impart ideas on how to make it work.
The company is diversified into several types of processed product and each line of business is a profit centre, but head office provides investment funding, financial and managerial oversight, and policy in key areas such as safety, public relations, technology standards and industrial relations. Each line of business runs its own technology but connects through to the corporate networks to access centralised services. Strategy making and the setting of organisational objectives take place in regular cycles, with each level of planning breaking down into sub-plans for each organisational unit. Every goal is measurable, whether it be production figures, profit and loss or, in the case of the IT department, user satisfaction, help-desk response rates or project budgets.
The work model is largely based upon outsourcing and the use of specialists. This applies to the construction of production and processing facilities, the operation of resource collection, and much of the back office work such as surveying, geology and earth analysis. IT work, such as applications development, help desk, software maintenance and facilities management, is outsourced to a computer services company. Consulting services are used to develop information management strategies and specifications and to implement change management practices and so on. Most permanent personnel are at a head office site and most contractors work on sites in remote areas.
Information and know-how within the company are stored in many places and in many forms. There is a very large volume of information used and produced to support the key business processes from exploration to assessment, planning, processing site construction, implementation and processing operations, logistics and marketing.
Prescriptive, normative information is found largely in procedures which direct staff in areas such as safety, equipment maintenance, equipment use, reporting, record keeping, administration, budgeting and so on. These are found in diverse areas, depending upon the specific group: networked drives, the business unit intranet, the corporate intranet and a variety of document management systems. Although the repositories differ, the process of developing, reviewing by experts and approving such documents by managers is largely similar. Other authoritative information to which this applies includes reports, published information and information for regulatory agencies. Finally there are the transaction processing systems which represent the state of the business and operations at any time. These are largely enterprise resource planning (ERP) modules which contain customer sales, production plans and actuals, maintenance plans, equipment profiles and histories, materials movements and locations. There are real-time data collection systems which record production, stockpiles, quality, locations and shipments and feed these into the ERP systems.
The non-prescriptive, proprietary knowledge – the ‘way we do things around here’ – is generally retained in the heads of staff: how to attack a problem, how to prevent problems recurring, what a particular job involves and so on. If a proprietary or common practice is deemed important by management, it is translated directly into procedures. There are procedures which prescribe post-project learning in the form of written reports, but these outcomes are not easy to find and so are not used.
There are several deep experts who have been with the organisation for decades. Their names recur when one is seeking advice about particular topics. Their knowledge is tacit although their expertise is sought after in the formulation of explicit reports and strategies. There has been no consistent attempt to capture their knowledge or pass it on through techniques such as mentoring or job-sharing. A common strategy to retain the knowledge of noted experts is simply to pay very well and convince them to delay retirement or departure. The large amount of contracting means that much on-the-job expertise departs when a contract is completed.
There is no explicit technological support for emerging knowledge apart from e-mail and desultory communities of practice using e-forum technology. As in any organisation, people meet, discuss and develop ideas and approaches to complete their projects and achieve organisational objectives. But solutions tend to be local and isolated as there is no way of overcoming problems of separation and anonymity. History and reasoning are not stored or shared and neither are the outcomes of conversations. E-mail management is of considerable concern to management, not because of the knowledge that is lost to the organisation in personal exchanges, but because of the potential for discovery and prosecution via the USA’s Sarbanes-Oxley legislation.
Knowing how to find information or advice is problematic and solutions are largely based upon an individual’s ability to develop a transactive directory in personal memory and a network of contacts. There is no usable enterprise search function and information is scattered across disparate repositories in different parts of the organisation, so information is found by keeping local copies of documents or utilising personal networks to track stuff down. This reinforces the identification and application of local solutions.
Transactive directories, therefore, are kept largely in people’s heads. The organisation chart is a highly used signpost and is available online, searchable and generated automatically from the SAP Human Resource system. The titles in the chart reflect an area of knowledge and contact details are precise and current in Microsoft Exchange. However, these titles are high level (a job title, a couple of words). As is to be expected, managers form transactive directory hubs: people will usually ask a project or group manager for advice on where to find something. Managers with longer tenure seem to have good memories of where something is or who did what and higher-level managers in the organisation seem to have directories with wider scope. Experts are also transactive hubs: they like to talk and will often remember that a certain report was done once and kept in someone’s drawer or computer or that a particular system maintains assay information from a certain area.
The efficiency of transactive retrieval depends upon the completeness and accuracy of the transactive directory and the ease of then accessing the information or person. Using the organisation chart usually leads to a chain of interactions, when the first person recommends someone else and so on. Using a manager’s or an expert’s transactive directory allows a search intention to be articulated clearly and contextualised, and they can give a precise pointer to the information store. E-mail is often critical in the retrieval process: the first step is to a known transactive directory ‘hub’ and then tracking the information source down happens via e-mail and the Microsoft Exchange directory. The final step is often then a pleasant phone call, sometimes some travel, to actually obtain the desired information.
Transactive allocation, the routing of information to a person based upon their 'need to know', is usually based upon a business process rather than a transactive directory. That is, information is generated in one place and sent to (or fetched by) a subsequent person because the business process workflow defines this. Information sent to a person on the basis of a transactive directory alone does not seem to be significant, although some routing of publications, reports, events and so on does occur. One symptom of poor transactive allocation seems to be scattergun information: telling a large number of people something for fear of missing someone out. In fact, the converse of transactive allocation seems to be quite prevalent: people consider themselves responsible for finding information and keeping current.
Transactive directory maintenance is performed in many ways. The main explicit and published directory systems, the organisation chart and Microsoft Exchange, are automatically updated through human resource processes. Documents generally contain the author’s name which can be traced via these directories, for example. But below this there is a technology chasm where human transactive directories take over. Managers’ and supervisors’ personal, mental directories are kept up to date by the multifarious and wide-ranging interpersonal interactions they have. Expert directories are updated more slowly over time, as a long-termer will often have seen people come and go (often within the organisation). The personal directory of other staff is maintained through work interactions, being on various projects, social occasions, meetings and so on, where colleagues disclose information about themselves and what they know, which serves to build directories for future reference.
This is a resources and basic processing company, front and centre: each manager wears virtual steel-capped boots; it is male dominated in most roles, although this is not the case in some areas such as human resource management, information and change management, health and safety management, or some scientific areas. The company sees itself very much as a resources processing company, running a simple business that relies on the asset in the ground. The business is routine and repetitive, and staff are driven by procedures and unambiguous work instructions, not innovation. The most important thing in this job is to work safely, and the company does whatever it takes to be safe – and to be seen to take safety seriously. As it is a hard business, with pronounced ups and downs, it is generally accepted that management have the right to make hard decisions, and that jobs are not secure, irrespective of one's previous contributions. Because non-core, non-value tasks are outsourced, the knowledge that contractors gain is dispensable. In terms of performance, success is primarily about production and financial budgets, is measurable and clear, and the value of an activity is assessed by its direct contribution to production.
Power in the organisation is largely explicit and coercive. It is exercised directly through hierarchy and rank. Job descriptions, personal goals and annual targets are set by managers largely to be measurable, and cover production, project completion, safety incidents, systems use, equipment downtime and so on. There are sanctions associated with underperformance, most commonly being fired or contracts not being renewed.
There doesn’t seem to be a strong overall ‘esprit de corps’: the organisational mission and vision are clearly articulated, but it is a company one works, not lives, for. There is a social club, a health club, company breakfasts and share schemes, but the personnel generally conform to institutions of performance and direct control, not inner values of commitment to the company or the team.
Among the professional groups of scientists, geologists, managers and planners, there is commitment to the ethos of the profession, which extends the performance of these staff beyond just what is required. People wish to do well: to be good at their job, to establish their experience for their résumé and for future employment. These people subject themselves to the norms of their professions, which are largely about individual performance, the respect of colleagues and self-esteem.
In-group prototypes in the workplace generally revolve around job role and age. In such a complex organisation, there are many, widely divergent groups. There are scientists, marketing staff, maintenance engineers, geologists, accountants, surveyors, safety coordinators and so on. There are young people and mature staff in their sixties. There are those in groups in remote, tough areas and those in head offices in large modern cities. There are those who are company employees and others who are contractors.
In the time of expansion, when the organisation was working flat out to bring new gathering and processing capacity online, the restless and innovative information manager, Katie, had initiated a raft of projects aimed at improving the management of information. This included prescriptive documents which needed to be controlled – safety procedures, maintenance processes, reports – and which were scattered across different repositories and network drives. The main impetus for improved control came from senior management concerned about legal liability if incorrect use of procedures led to injuries. If an electrical circuit was not isolated properly or a scaffold lacked safety mesh, and a resulting injury investigation showed that a procedure was not used because it could not be 'found', managers became liable.
The company was not a 'knowledge management' company: it had little or no exposure to the discipline and little interest in it. Katie knew it was important and that there were great efficiencies to be gained from improving the exploitation of the organisation's knowledge, but the exigencies of rapid expansion and the mindset of 'we collect it, process it and send it away – what's the big deal?' meant there were too many competing priorities and too little headspace for something as nebulous as knowledge. Yet the publicised war for talent and skills and the impending retirement of baby boomers added weight to the fear that the organisation would be left with a shallow, superficial and immature knowledge base.
So Katie was able to create a budget item to address this: she only needed a creative solution. That solution became a kind of Trojan horse for Web 2.0. While a simple interview-and-publish process for this knowledge may have satisfied the auditor, it was neither sustainable nor terribly useful. So a suite of Web 2.0 tools – a wiki, a semantic web, social tagging, podcasting and blogs – was introduced to carry the content of the knowledge capture.
The solution worked as follows. Key experts were identified throughout the organisation. They were interviewed twice, once to establish the format and knowledge area of the interview and prepare the questions and a second time in which the questions were simply asked and the responses recorded. The questions were structured to provide self-contained ‘learning objects’ which were edited into individual video files and transcribed to text. Text and video were loaded into a wiki page with an appropriate title. The text, unlike the video, could be updated by other experts or business people going forward, either if it was missing something or as knowledge changed. The analyst conducting the interviews created a mini-ontology from the information, which was linked into an overall business process model. The ontology was used to tag the wiki article and one could find an article via search or navigation through the ontology. After interviewing about 50 experts, the wiki was evaluated by business managers, experts and users and, with some exceptions, found to be of sufficient potential to proceed with publication. The methodology for preparing and conducting interviews using video equipment was also tested by non-experts and found to be simple and the results useful.
Subsequently the content was published on the intranet using the TWiki software as a base. The initial installation of the freeware TWiki software on company servers provided diverting moments: according to procedures, it could only be installed on a server by the contracted outsourcing company (although it had been installed by the knowledge management consultant and had been running for months on his standard personal laptop for demonstration and testing purposes). This company was disconcerted by the notion of freeware with no visible means of support, and prevaricated heroically before providing a system programmer to download, install and configure all the software: a simple process lasting not more than two hours, with the programmer's manager nearby the whole time muttering darkly that 'it wasn't going well'. The TWiki freeware supported 350 users in its first six months of operation without a glitch, a prospect difficult for executives of large IT services companies to appreciate and enjoy.
Going live was low stress and simple: apart from not being immediately mission critical and not requiring simultaneous use by many workers, the system was simply very easy to use, robust and accessible. With no advertising at all, the system grew slowly but steadily: within four months there were hundreds of logged-in users. Of these, 50 per cent returned – a rate considered disappointing until Peter, the intranet manager, said he would give his right arm for a return rate like that on the intranet. The rate of editorial contribution was probably on the low side, reaching about 12 users actively contributing content.
There was standard wiki functionality – the creating, editing, reverting and linking of information pages. Any user could upload videos, audio files, images, presentations or Adobe PDFs. To avoid conflict with existing document management facilities and version management, uploading 'editable' files such as MS-Word, MS-Excel or MS-PowerPoint formats was not permitted. All training was in video-audio and screen capture format, available from within the wiki.
An ontology, or semantic web, was made of the core business processes and concepts. These were linked together in a hierarchy as a ‘category tree’ and could be navigated to find useful articles which had been tagged with the standard concepts.
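The mechanics of this 'category tree' can be illustrated with a minimal sketch. The category and article names below are hypothetical, invented purely for illustration; the point is simply that tagging articles with concepts arranged in a hierarchy lets a reader navigate down a branch and collect every article beneath it:

```python
# Hypothetical category tree: each concept maps to its sub-concepts.
CATEGORY_TREE = {
    "Processing": ["Crushing", "Drying"],
    "Crushing": [],
    "Drying": [],
}

# Hypothetical wiki articles, each tagged with one or more concepts.
ARTICLE_TAGS = {
    "Commissioning the new dryer": ["Drying"],
    "Crusher maintenance lessons": ["Crushing"],
}

def articles_under(category):
    """Collect every article tagged with this category or any sub-category."""
    found = set()
    to_visit = [category]
    while to_visit:
        current = to_visit.pop()
        to_visit.extend(CATEGORY_TREE.get(current, []))
        for title, tags in ARTICLE_TAGS.items():
            if current in tags:
                found.add(title)
    return sorted(found)
```

Navigating from the top-level concept then surfaces articles tagged anywhere below it, which is what made the ontology useful as a finding aid alongside plain search.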
Blogging was made available, although not using any specific blog software. Users were encouraged to use their own personal page as a blog, or simply to start a wiki page as a blog about a particular theme.
Social personal pages were available where people could place personal interests, contact details, photos and so on. Initial interest in this was very high, particularly by young people who saw a reflection of Facebook in it, but actual contribution remained modest.
Social tagging was made possible by allowing every user to tag any wiki page as they wished, but not allowing them to create formal category pages which linked into the organisational ‘semantic web’. Tags without matching pages were reviewed regularly by the wiki administrator and, if found to be of general use, were given a category page and added into the organisational conceptual map.
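The administrator's review step described above amounts to a simple reconciliation between free-form tags and the formal category pages. A minimal sketch, using hypothetical tag and page names, might look like this:

```python
from collections import Counter

def tags_pending_review(page_tags, category_pages):
    """Count how often each free-form tag is used across wiki pages and
    return those with no formal category page yet, for administrator review."""
    counts = Counter(tag for tags in page_tags.values() for tag in tags)
    return {tag: n for tag, n in counts.items() if tag not in category_pages}

# Hypothetical data: pages and the tags users attached to them.
page_tags = {
    "DryerLesson": ["Drying", "lubrication"],
    "InductionPage": ["induction"],
    "CrusherNotes": ["lubrication"],
}
# Formal category pages already in the organisational 'semantic web'.
category_pages = {"Drying", "Crushing"}

pending = tags_pending_review(page_tags, category_pages)
# pending == {"lubrication": 2, "induction": 1}
```

A frequently used free tag (here 'lubrication') is a candidate for promotion to a category page in the organisational conceptual map.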
Every wiki page could create an RSS feed and personnel were free to use any RSS reader to subscribe. At this stage, there was no standard RSS reader available, but it was anticipated that the RSS reader within Internet Explorer 7 or Microsoft Outlook 2007 would become the standard reader, when the standard operating environment products were upgraded to those releases.
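Each wiki page's RSS feed is just an XML document that any reader can poll, which is why no particular client had to be standardised up front. As a minimal sketch (the feed content here is invented for illustration), a subscriber could extract recent changes with nothing more than the Python standard library:

```python
import xml.etree.ElementTree as ET

# Hypothetical RSS 2.0 feed, as a wiki page might publish it.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Maintenance wiki changes</title>
    <item><title>Dryer lubrication lesson updated</title>
          <link>http://wiki.example/Drying/Lesson1</link></item>
    <item><title>New induction page</title>
          <link>http://wiki.example/Induction</link></item>
  </channel>
</rss>"""

def latest_items(feed_xml):
    """Parse an RSS 2.0 feed and return (title, link) pairs for its items."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]
```

Any reader capable of this parsing – a desktop client, Outlook, a browser – can subscribe, which is what made the later choice of a standard reader a deferrable decision.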
Podcasting was documented and made available using the RSS feed capability. The intention was to use it for uploading training sequences and ad hoc training using the MS-PowerPoint narration capability or Windows Media Encoder, a free product to capture voice and screen dynamics.
To make the wiki and blog more accessible to lay users, many examples of good usage were given (such as a replacement for e-mail, collecting ideas, project discussion spaces, new personnel induction and so on) and the idea of wiki 'spaces' was developed. It was clear that the initial project (of interviewing experts with deep tacit knowledge) had been encyclopaedic in nature, but that the wiki could be used for a lot more. So four core wiki space types were explained to users.
Several of these capabilities and spaces overlapped, and therefore competed, with existing software, thus contravening the management mantra of tight cost control (minerals and natural resource companies are the lowest investors in information technology). A wiki, for example, can perform many of the publication functions of an intranet for group publishing, but far more easily and immediately. The ability to quickly create a narrated PowerPoint presentation and upload it to a wiki for podcasting competed with SAP's Knowledge Warehouse, a highly structured collection of training courses directly related to the competencies defined in job roles within the SAP Human Resources module. The threaded discussion pages of a wiki were a dead ringer for the threaded forums within the communities of practice – but far more open and accessible. The various portal offerings, such as those of Hummingbird or SAP, which create a web portal containing relevant applications for specific roles, could easily be usurped by personal or general-purpose wiki pages which bundle useful links and SOA functions. Selling the Web 2.0 story to management often met with the comment: 'We've already got one of those … how are we going to know which system to go to?'
An explicit decision was taken to make all capabilities universally accessible through the single-sign-on mechanism. This meant anyone across the whole organisation could upload or change text, knowing of course that what they wrote would be clearly attributed to them. This was a risk. But the whole point (and all were agreed on this) was to make knowledge available wherever it might be needed and not to lock it up in private areas which precluded unanticipated use and promoted information hiding. Although the notion of private wikis and restricted blogs is attractive in many ways, in the view of all project participants and the steering committee it would be senseless to introduce another private content management system.
But it was clear from the outset that the systems could be used for a lot more than just posting videos and text contributions. Tom, a senior operations manager, was very preoccupied with ensuring that the prescriptive knowledge in safety and maintenance procedures was correctly stored and managed and insisted that the wiki not divert attention from this. Peter, a technology manager, was concerned with the arrival of ‘yet another content management system’, which overlapped with the document management system, the intranet and the communities of practice (all of which were performing poorly). Corporate policymakers were introducing a mantra of technology simplification and reduction, in which a wiki fitted neither philosophically nor politically (there were too many ‘similar’ products). This led to a need to promote the wiki very quietly so as not to be cancelled immediately by one or other of these groups. The other side of the coin was that there was no one in a senior position who actively promoted the technology and no mandated, measurable targets for use or contribution.
However, the next part of the project – to acquire a more 'corporately appropriate' package than the TWiki freeware product and to interview many more business experts to load a substantial amount of content – was put on hold as the expansion epoch came to a shuddering halt. The managerial imperative moved from speed to cost reduction and project streamlining. The project was left without a formal change manager to promote it, and the wiki moderator was left with the task of proselytising the solution without a budget or the capability to publicise widely. The strategy of 'viral contagion' had been adopted anyway, being the approach recommended by global consulting groups such as Gartner, Forrester and McKinsey. This involved simply demonstrating the software to interested parties, talking through use cases where productive wins could be gained and supporting people who decided to give it a go. No formal training was provided – education was all within the wiki as videos, narrated PowerPoint slides and live screen captures. Furthermore, the software was considered very simple. Significant amounts of information were posted about the potential savings, applications and advantages of the system.
There seemed to be all sorts of reasons to interview specialists, people considered to have deep knowledge of how to do things and why they had to be done. The results of the 50 interviews created a basic stock of very interesting, very useful, very extendible knowledge – all agreed on this. However, it was a bit like an encyclopaedia that stopped at 'D': a plan had been worked out to complete interviews in all significant operational areas of the organisation, but the axe fell on the funding first. The knowledge was secured before experienced staff left, it was shareable across the entire organisation, it was cheap, easy and effective, all participants recognised the value, and only two people declined to participate. There appeared to be different types of interview to conduct: interviews about an operational area; analyses of lessons learned from projects and events; and descriptions of the services and skills offered by a particular group or division within the organisation. The experts themselves decided on what they wanted to talk about, based upon what they felt was important, what they kept getting asked and wanted to record once and for all, or what knowledge they felt was vulnerable should they leave the organisation – even what they thought was just nice to pass on, such as the tradition of an annual golf day between competing sites.
For example, there had been a problem with a certain piece of important machinery at one site that had held up production for some time. It had been incorrectly commissioned and then inadequately lubricated, leading to several days' breakdown during which trucks could not be loaded. There were some simple but effective lessons that needed to be passed on to new maintenance engineers (an area of high turnover), so the manager of the maintenance section volunteered to be interviewed to see whether the Web 2.0 solutions could be used to record and post the lessons effectively. It was immediately clear that this was a good, cheap, easy solution, with a company vice president showing interest in promoting it. Further, the general maintenance area was struggling to achieve learning and consistency. However, the vice president left the organisation and momentum was lost. Some managers said that the best way to learn was to put the information into procedures. Others in the group said they enjoyed the video and had a bit of a friendly go at the manager for grandstanding and putting a video of himself up for public viewing.
The company runs a number of general programmes, such as the Health and Well-Being Group, Workplace Safety and TQM Process Improvement. The Health Programme's job is to gather and disseminate hints on lifestyle and healthy living, to answer queries on common health issues such as obesity, alcohol consumption and smoking, and to develop and publicise health events and programmes. This group quickly adopted the wiki, as it gave them a vehicle to publish information and links very easily, to upload photos of and commentary on events such as bike rides and walks, and also to receive comments back from staff and converse with them in an interactive forum. This reduced e-mail volume and paper wastage, and shared information very quickly across a broad front. The group manager was very supportive and progressive, and encouraged the openness of the wiki. The nominated wiki support person, although not technical, was intelligent and receptive and picked up the necessary skills within hours.
In another example, a wiki page was developed for process improvement: a template was created to publish and list lessons learned from projects to the wider organisational community. Previously the learning procedure had created paper documents which languished in drawers or on departmental network drives. This mini-system took 30 minutes to set up and allowed a group (which was usually physically dispersed after project close) to discuss a project wherever its members were and immediately publish their findings in the standard format on a wiki page. The results were automatically tagged and findable via search. Although the responsible manager greeted it enthusiastically, the group never used it.
There was a group of scientists responsible for modelling, quality control, resource assessment, geochemistry and so on. They were distributed across the many locations where they conducted their work, and were supported by specialists in head office with scientific information, software and training about the specifics of the region. Ian, the manager of the group, had been interviewed as an expert to create a wiki article on a specific area of geological interest. He had immediately seen the potential of the tool for his group: he was a devoted scientist, a lover of knowledge and very direct. He assigned his group's information and research manager, Simon, to set up an appropriate wiki area and support adoption. Simon, in a two-hour design session with the wiki administrator, set up the group's main site page, with useful, consistently sought information on the first page, and sub-pages for new hires, research documentation, who's who in the group and links to much needed (but carefully concealed) corporate information. Specialist areas were set up for the different areas of earth science, and the specialists slowly built up pages to avoid having to repeat themselves. Instead of travelling to sites to educate new staff, they were able to narrate PowerPoint presentations and upload those, although this did not happen often at first. New, young hires, particularly at sites, immediately understood how to use the system and the idea behind it.
One new hire uploaded photos from social events, some less than dignified, and this prompted some quiet feedback from the wiki administrator on the need to consider the feelings of others … But although initial use was not as regular or dynamic as Ian or Simon had expected, it increased over time.
Within the Information Services group itself there were varied responses to the Web 2.0 capabilities. Business analysts, for example, communicated with processing sites to resolve problems, establish areas of information need and explore solutions with users. These analysts had to travel regularly to the sites for meetings, discussion and information exchange. Wikis are ideal for communicating and recording ideas, solutions and agreements across distance and around the difficulties of shift work. The corporate wiki was available across the business, equally to all, but the analysts did not take it up, preferring to travel to the sites and break up their routine.
On the other hand, a systems rollout staffed by contractors, which affected personnel across the organisation, prompted high usage. Divisions were being converted to, trained in and supported in the use of the new system one at a time: again a perfect opportunity to use the wiki to develop and refine a consistent message, take questions from users, develop FAQs, hold discussions and resolve questions, even upload training and how-to information. A project wiki area slowly developed, with help documents and links being loaded, largely through the persistence of one project participant. Information continued to be distributed by e-mail but, over time, the project wiki area nevertheless became highly visited and now receives thousands of hits.
Finally, the programming department, again staffed mainly by contractors, was responsible for managing and fixing software bugs, discussing and planning extensions with business representatives, and generally keeping the production and transport systems operational. A large maintenance team, with wide areas of knowledge and relatively high turnover, was under constant pressure: their team leader identified architecture, service provision and requirements gathering as ideal (indeed urgent) areas of need for wiki use. The rapid, interactive nature of the work made wikis ideal, while also maintaining a complete document record. However, Web 2.0 and wikis were not adopted in the short term, for the most part because of the pressure of work.
One department, mainly based at sites, had the job of training staff across the organisation. They developed curricula for all types of situation and business role, which encompassed online training, advice, induction and follow-up. Their manager simply instructed them that they would use the enterprise wiki for their interactions wherever possible. After some initial business analysis, it was found that most of their tasks would be simplified by using the wiki. They found it cut down time and reduced travel to create curricula, identify problem areas and deliver follow-up advice to new starters. They created FAQ sections and uploaded media files for training, including photos of the natural environments.
The operation also included fertiliser and sands processing plants. These were areas in the direct line of operation, where breakdowns affected production. Turnover among managers was fairly high, with plant leadership roles serving, for skilled staff such as geologists and chemists, as stepping stones to corporate careers. The return on learning from a wiki was perceived to take some time and, indeed, not to be measurable with the current simple measures. A former manager of one of these plants was interviewed as an expert, and some of his hitherto tacit knowledge on how to optimise the capacity of these plants was captured in audio-visual and text form. The new manager also expressed interest in using the system to discuss and capture how to fine-tune the processing machines and improve their reliability, but did not take it further.
The system, along with a few interview videos with other old-timers, was demonstrated to one ‘ageing’ specialist, an expert in a large and critical piece of processing equipment, who had been brought out of retirement at great cost. After watching the videos of others, he said it would be perfect for his area: staff spent a lot of time training newcomers in how to optimise truck positioning (factors like metal expanding and contracting in heat and cold, land incline and weight differentials all made positioning trucks precisely difficult). After a long training period, especially when machines had been upgraded, attrition was fairly high, and soon afterwards there was often complete team turnover. A video of an expert interview would set the scene for learning specific instrument use, give the bigger picture of the major elements to be considered, and be continually available for new staff. But the old-timer never returned subsequent calls.
1. This story is a composite of several learning cases and I have chosen to embed it in a fictitious resource processing company. The intention is to allow the reader to practise their analytical skills and arrive at methods for analysing the receptivity of an organisation to a Web 2.0 solution and work out ways to secure adoption and productive use. The case has been created for metaphoric and educational purposes, is fictitious and does not represent or have any involvement with registered companies or people outside of this narrative.