The human factors that relate to technological developments in aviation
The discipline of human factors has its roots in the aerospace industry. This chapter provides a brief overview of the development of human factors, from its birth during World War II to the present day. Flight deck design, pilot selection and training, and crew resource management (CRM) are all considered. While human factors has been primarily associated with aviation safety during the first 50 years of its life, it is argued that it is now time for the discipline to take a more pro-active role in improving the performance of airlines and reducing operating costs. This can only be achieved by taking a more integrated approach to the application of the various aspects of the discipline.
Human factors, as a whole, is a relatively new discipline, arguably with its nascency in the 1940s in the aviation domain. However, it is also a somewhat fragmented discipline, drawing upon basic science from psychology, sociology, physiology/medicine, engineering and management science, to name but a few.
From an overall system perspective, three generic, antagonistic parameters can be applied to evaluate system functioning: safety, performance and cost. Airworthiness authorities are concerned solely with safety aspects of aircraft design, pilot training and airline operations. However, airlines are required to balance the requirement for safety against both cost and performance considerations, but, as will be argued, the human factors discipline has until recently concentrated almost exclusively on the safety aspects of the system function troika.
The organisation and operation of aircraft in an airline is a socio-technical system. This can be described using the five ‘M’s model (Harris and Harris, 2004; see Fig. 7.1). The operation of a commercial airliner is not just about the union of pilot (huMan) and aircraft (Machine) to perform a flight (or Mission) within the constraints imposed by the context of the physical environment (Medium). The Mission of a commercial aircraft is simply to deliver passengers and cargo at the greatest possible speed (and in comfort, as required) while maintaining the highest possible standards of safety and achieving all this in a cost-effective manner. However, the societal aspects of the environment (another component of the Medium) and the role of Management (both internal and external to the aircraft) are also central to the control of safety, performance and cost.
7.1 Five ‘M’s conceptual model of socio-technical systems. (Source: Harris and Harris, 2004.)
The (hu)Man aspect of the five ‘M’s model encompasses issues such as the basic skills and abilities of the pilot in addition to their size, strength and fuel requirements (all elements falling within the ‘traditional’ realms of psychology and ergonomics). From a user-centred design perspective, the (hu)Man is the ultimate design forcing function. It cannot be changed. When (hu)Man and Machine elements come together they perform a Mission. It is usually the Machine and Mission components on which developers and designers fixate.
However, aircraft designers and engineers must not only work within the bounds of the technology available, the abilities of the pilots and the physical aspects of the Medium; they must also abide by the rules and norms of society (part of the societal Medium). For example, the minimum safety performance standards for human–machine systems such as an aircraft are actually determined by societal norms (regulations), e.g. the level of redundancy required (aircraft certification) or minimum standards of user competence (flight crew licensing). Aviation is an international business and as such culture (another part of the societal Medium) also has a profound, yet largely unseen, effect on operations. Management must also work within these rules. The airline Management is the link between the (hu)Man, Machine, Mission and Medium. It performs the integrating role to ensure compliance with operating, crew licensing, maintenance and aircraft certification requirements to promote safe and efficient operations.
Regulatory objectives are specifically aimed at enhancing safety. Organisational aims, however, need to balance safety against performance, comfort and economy. Until relatively recently most human factors research has been biased toward satisfying the regulatory/safety component of the performance troika. However, with new technology and operating concepts, human factors may also be able to make a positive contribution to the other two aspects.
The roots of human factors in the aviation domain lie within the work undertaken in the UK and North America during and shortly after World War II. From the mid-1940s to the mid-1960s the discipline was essentially building its applied science base, drawing heavily from experimental and social psychology, and aerospace medicine. Early work completed after World War II in the USA by pioneers such as Alphonse Chapanis, Paul Fitts, Paul Jones and Walter Grether identified deficiencies in the design and layout of cockpit instrumentation that led to accidents, and produced recommendations for improvements.
In a book looking back at the nascency of aviation human factors, Chapanis (1999) reviewed his work at the Aero Medical Laboratory in the early 1940s where, amongst other things, he was asked to investigate why pilots sometimes retracted the gear instead of the flaps after landing in certain types of aircraft (specifically, the Boeing B-17 Flying Fortress, the North American B-25 Mitchell and the Republic P-47 Thunderbolt). He observed that the toggle switches and the actuation levers for the gear and the flaps in these aircraft were both identical in shape and were located immediately next to each other. He also noted that the corresponding controls on the Douglas C-47 Skytrain (or Dakota) were not adjacent and their methods of actuation were quite different, and that pilots of this aircraft never retracted the gear inadvertently after landing. Chapanis’s insight into human performance, especially in stressed or fatigued pilots, enabled him to understand how they might have confused the two controls. He proposed coding solutions to the problem: separate the controls (spatial coding) and/or shape the controls to represent their corresponding part (shape coding). Hence, the flap lever was shaped to resemble a flap and the landing gear lever resembled a tyre. The pilot could therefore ascertain, either by looking at it or by touching it, what function the lever controlled.
Losses involving US Army Air Corps pilots during World War II were approximately equally distributed: about one-third were lost in training crashes, a further one-third in operational accidents, and the remaining third in combat (Office of Statistical Control, 1945). This suggested various deficiencies inherent in the system. In some further work investigating human-centric design deficiencies in early military cockpits, Grether (1949) described the difficulties of reading the early three-pointer altimeter. Previous work and numerous fatal and non-fatal accidents had shown that pilots frequently misread this instrument. Grether investigated the effects of different designs of altimeter on interpretation time and error rate. He asked pilots to use six variations of the altimeter containing combinations of three, two and one pointers (with and without an inset counter displaying the altitude of the aircraft) as well as three types of vertically moving scale (similar to a digital display). The results showed marked differences in error rates between the designs: the traditional three-pointer altimeter took slightly over seven seconds to interpret and produced over 11% errors of 1000 ft or more, whereas the vertical moving-scale altimeters took less than two seconds to interpret and produced fewer than 1% errors of similar magnitude. Eventually, the electro-mechanical counter-pointer altimeter replaced the three-pointer altimeter. This is generally regarded as being an excellent design: it is fast to read and produces very low error rates. The sweep of the 100s-of-feet hand provides the pilot with good rate-of-change information; the counter with good state information.
Both of these early examples, one of control design and one of display design, suggest that it is not ‘pilot error’ that causes accidents; rather it is ‘designer error’ (i.e. confusing system controls or the poor presentation of information). Nowadays, the notion of blaming the last person in the accident chain (the pilot) has lost credibility. The modern perspective is to take a systemic view of error (and also of cost and performance) by understanding the relationships between all the components in the system, both human and technical. However, the work by Chapanis and Grether (amongst many) was quite radical in this respect in the 1940s.
At the same time in the UK, related research was also being undertaken at the MRC Applied Psychology Unit in Cambridge, on such issues as the direction-of-motion relationships between controls and displays (Craik and Vince, 1945). Mackworth was developing his famous (if you are a psychology student) Mackworth clock to investigate the effects of prolonged vigilance in radar operators (Mackworth, 1944, 1948). During the latter half of the war, with the introduction of more complex, longer-range, higher-performance aircraft that placed greater demands on their pilots, fatigue became a major issue among RAF aircrew. Losses as a result of fatigue rather than combat were mounting. Perhaps the most famous piece of apparatus developed by Kenneth Craik was the Fatigue Apparatus, which universally became known as the Cambridge Cockpit (Craik, 1940). It was based upon the cockpit of a disused Supermarine Spitfire. As early as 1940, experiments using the Cambridge Cockpit established that the control inputs of a fatigued pilot were considerably different in nature from those of a fresh pilot.
The tired operator tends to make more movements than the fresh one, and he does not grade them appropriately … he does not make the movements at the right time. He is unable to react any longer in a smooth and economical way to a complex pattern of signals, such as those provided by the blind-flying instruments. Instead his attention becomes fixed on one of the separate instruments…. He becomes increasingly distractible.
An excellent summary of this early work in aviation human factors can be found in Roscoe (1997).
With increasing knowledge and specialisation, the discipline of human factors began to fragment, with sub-disciplines in human-centred design, training and simulation, selection, management aspects (organisational behaviour), health and safety, and so on. Nevertheless, from the mid-1960s human factors began to make increasingly large contributions, particularly in the three areas of selection, training, and the design of flight decks.
The processes and methods for the selection of flight crew in particular began to develop in sophistication throughout the 1970s. This coincided with a change in the nature of the work of the airline pilot. The job has changed over the past half century from that of being a ‘hands on throttle and stick’ flyer to one of being a flight deck manager, a change that has been particularly pronounced in the last half of this period.
In the 1950s and 1960s airlines tended to rely quite heavily on the military for producing trained pilots. Even until relatively recently it was reported that, in the USA, 75% of new-hire airline pilots were recruited after commencing their flying career in military aviation (Hansen and Oster, 1997). Selection therefore tended to rely on techniques that assume candidates are already trained and competent (e.g. reference checks, background checks and interviews, often combined with a flight check – see Suirez et al., 1994). Military selection procedures tend to place a great deal of emphasis on spatial and psychomotor skills in the evaluation of aircrew candidates (e.g. Hunter and Burke, 1994); however, military aviation is very different from commercial aviation, requiring a great deal more ‘hands on throttle and stick’ flying.
In Europe, where there has traditionally been less emphasis on recruiting pilots from a military background (a trend that has been developing as the demand for commercial pilots has increased), there has been greater emphasis on selection processes for ab initio trainees, which assess the candidate’s potential to become a successful pilot. After pre-screening, a typical assessment centre for ab initio pilot candidates will involve personality assessments, tests of verbal and numerical reasoning, tests of psychomotor skills, group discussions and structured interviews. For example, Cathay Pacific used selection criteria in six main areas: technical skill and aptitude; judgement and problem solving; communications; social relationships, personality and compatibility with Cathay Pacific; leadership/subordinate style; and motivation and ambition (Bartram and Baxter, 1996). The main point to note here is that flying skills per se formed only a relatively small component. Hörmann and Maschke (1996) analysed personality data from circa 300 pilots, together with simulator checkride results and biographical data (e.g. age, flight experience and command experience). Three years later, pilots graded as below standard were found to have significantly lower scores on the interpersonal scales and higher scores on the emotional scales of the Temperament Structure Scales (Maschke, 1987), a psychometric instrument developed specifically for assessing aircrew personality. An earlier review by Chidester et al. (1991) also found that better-performing airline pilots scored higher on traits such as ‘mastery’ and ‘expressivity’, and lower on ‘hostility and aggression’. Non-personality-related selection tests (e.g. 
verbal and numerical reasoning tests) have also shown a strong relationship with performance, which probably reflects the job of the airline pilot, where flight administration and planning are as important as the psychomotor skills required for flying.
Appropriate and effective selection procedures (especially for ab initio trainees), although expensive, can ultimately save airlines a great deal of money. They help to ensure that the pilots selected are likely to complete their training (dropouts from training are very expensive) as well as making sure that capable, safe pilots are recruited to the airline. Since training costs are high, even relatively modest increases in the success of the selection process can pay back the initial investment ten-fold.
Until relatively recently, pilot training and licensing concentrated on flight and technical skills (manoeuvring the aircraft, navigation, system management and fault diagnosis, etc.). Training was largely undertaken in the aircraft or in relatively low-fidelity (by today’s standards) flight simulators. A great deal of the emphasis was placed upon technical training (e.g. how to handle the aircraft’s systems or how to fly the aircraft manually) and training for emergencies resulting from a technical failure (e.g. engine failure at V1 or performing flapless approaches). However, with increasing technical reliability, it became evident that the major cause of air accidents was human error, and on many occasions the failure of the flight deck crew to act in a well-coordinated manner contributed to these accidents. This resulted in a series of intra-cockpit flight crew management programmes being instigated.
The human factors discipline really began to come to prominence with the cockpit – later crew – resource management (CRM) revolution, which introduced applied social psychology and management science onto the flight deck. This placed emphasis on training to facilitate the flight deck (later whole aircraft) crew acting as a coordinated team. The Joint Airworthiness Authorities (JAA) (1998) defined CRM as ‘the effective utilisation of all resources (e.g. crewmembers, aeroplane systems and supporting facilities) to achieve safe and efficient operation’. CRM evolved as a result of a series of accidents involving perfectly serviceable aircraft. The main cause of these accidents was a failure to utilise the human resources available on the flight deck in the best way possible. Many would argue that the initial stimulus for the CRM revolution was the accident involving a Lockheed L1011 (TriStar) in the Florida Everglades in 1972 (National Transportation Safety Board, 1973). At the time of the accident the aircraft had a minor technical failure (a blown light bulb on the gear status lights) but actually crashed because nobody was flying it! The crew were ‘head down’ on the flight deck trying to fix the problem. Other accidents also highlighted instances of the captain trying to do all the work while the other flight crew were almost completely unoccupied, or happened as a result of a lack of crew cooperation and coordination.
Pariés and Amalberti (1995) suggested that CRM has progressed through four eras. First-generation CRM focussed on improving management style and interpersonal skills. Emphasis was placed upon improving communication, attitudes and leadership to enhance teamwork, although, in many airlines, only captains underwent CRM training! Second-generation CRM also included topics such as stress management, human error, decision making and group dynamics. However, CRM also began to extend beyond the flight deck door: cabin crew became part of the team, and training was extended to include whole crews together, rather than training flight deck and aircraft cabin separately. The CRM concept also began to extend into the airline organisation as a whole. Indeed, the evolution of the abbreviation of CRM itself has exemplified the change in culture in commercial aviation in the last quarter of a century. Further evolution of this approach has seen CRM extend beyond the aircraft to ramp operations and maintenance, and even beyond the airline (e.g. to air traffic control (ATC)). By the fourth generation, CRM training per se was beginning to disappear as the concepts were being absorbed into all aspects of flight training and the development of procedures. Helmreich et al. (1999) suggested that fifth-generation CRM will extend throughout the organisation and will basically involve a culture change. Early CRM approaches were predicated upon avoiding error on the flight deck. Fifth-generation approaches will assume that, whenever human beings are involved, error will be pervasive and the emphasis will change toward the error management troika: avoid error, trap errors and/or mitigate the consequences of errors.
The advent of CRM training was partly contingent upon a change in training philosophy towards line-oriented flight training (LOFT). Indeed, there is now a mandated requirement for crew training as part of the Airline Transport Pilot Licence (ATPL) syllabus. LOFT places emphasis on training as a crew and acting as a crew member. During a LOFT session (which will take place in a full-flight simulator) crews fly a complete trip (or part thereof) just as they would in normal operations. However, they will be presented with a series of in-flight problems and emergencies that require them to act as a team. During these simulated flights the instructors do not intervene, but crews’ actions are recorded for later analysis. After the LOFT session, the crews’ performance is reviewed with respect to how they handled flying the aircraft, the technical aspects of the problem and, most importantly, how the crew was employed to address the issue (see Foushee and Helmreich, 1988).
The effectiveness of the LOFT approach depends upon two key factors: the development and implementation of the flight scenarios within which the training takes place, and the adequacy with which crews are de-briefed by the instructors after the exercise. The role of the facilitator during the de-briefing after the LOFT exercise is central to its effectiveness. However, Dismukes et al. (2000) observed great variations in facilitator performance. It was noted that the mean debriefing duration of a post-LOFT simulator ride (which typically lasted approximately two hours) was only 31 minutes. One-third of this time was often spent reviewing incidents on the video and, in the remaining time, the facilitator overseeing the session often spent more time talking than did the crew being trained.
Despite the change in the training philosophy in later years, LOFT scenarios were based largely around handling technical failures. As will be seen in the section looking at the present day, this situation is beginning to change.
Reising et al. (1998) have suggested that flight deck displays have progressed through three eras: the mechanical era, the electro-mechanical era and, most recently, the electro-optical (E-O) era. With the advent of the ‘glass cockpit’ revolution (when E-O display technology began to replace the electro-mechanical flight instrumentation) opportunities were presented for new formats for the display of information, and human factors started to play an increasingly important part in their design. However, while the new display technology represented a visible indication of progress in the third generation of airliners (e.g. Airbus A300/310; Boeing 757/767 and McDonnell-Douglas MD-80 series), the true revolution in the way in which these aircraft were being operated lay in the less visible aspects of the flight deck, specifically the increased level of automation available as a result of the advent of the flight management system (FMS) or flight management computer (FMC); see Billings (1997). The glass cockpit display systems were merely the phenotype: the digital computing systems that were being introduced represented a true change in the genotype of the commercial aircraft. The higher levels of automation on the flight deck not only allowed opportunities for reducing the number of crew on the flight deck (in many instances) from three to two (eradicating the function of the flight engineer) but also required a change in the skill set required by flight crew. Aircraft were now ‘managed’, not ‘flown’. This trend continued further in later designs with even higher levels of automation and integration, such as the Airbus A320, Boeing 747-400 and McDonnell Douglas MD-11. For a historical perspective on flight deck design, Coombs (1990) provides an excellent overview, providing interesting insights behind some of the design solutions found in modern aircraft. 
Harris (2004) provides a more comprehensive, technical examination of all aspects of the human factors of commercial flight deck design.
With these increasing levels of automation on the flight deck in the 1980s and early 1990s much research was undertaken in the areas of workload measurement and prediction, and enhancing the pilot’s situational awareness. Autoflight systems certainly reduced the physical workload associated with flying an aircraft. Computerised display systems also reduced the mental workload associated with routine mental computations associated with in-flight navigation (and considerably increased navigational accuracy and reduced the number of errors). However, it is wrong to say that automation reduced workload. It simply changed its nature. Wiener (1989) called the automation in these more modern aircraft ‘clumsy’ automation. It reduced crew workload where it was already low (e.g. in the cruise) but increased it dramatically where it was already high (e.g. in terminal manoeuvring areas). Pilots’ workloads have changed to become almost exclusively mental workloads associated with the management of the aircraft’s automation.
The new (at the time) breed of multifunctional displays coupled with high levels of aircraft automation also had the ability both to promote situation awareness through new, intuitive display formats and simultaneously to degrade it by hiding more information than ever before. In the scientific literature there are various definitions of situation awareness. For example, Smith and Hancock (1995) defined it as ‘the up-to-the-minute comprehension of task relevant information that enables appropriate decision making under stress’. Boy and Ferro (2004) suggested that the concept was a function of several quasi-independent situation types: the available situation on the flight deck, derived from data originating from the aircraft, the environment (including ATC) or other crew members; the perceived situation of the crew, which may be affected by various parameters such as workload, performance, noise and interruptions; the expected situation, derived by the crew from their planning and decision-making processes; and the inferred situation, compiled by the crew from incomplete and/or uncertain data.
However, automation can make a system ‘opaque’ and hence degrade situation awareness. Dekker and Hollnagel (1999) described automated flight decks of the time as being rich in data but deficient in information; hence they did little to enhance situation awareness. Christoffersen and Woods (2000) developed guidelines suggesting ways to optimise human–automation interaction and turn the automation into a ‘good team player’. Good team players, they opined, make their activities observable to their fellows and are easy to direct. As a result they suggested that system information should be event-based rather than state-based, conveying changes in the status of the machine and its mission goals rather than merely its current state. The displays must be future-oriented, allowing operators to enhance their situation awareness by being able to project ahead, and the display formats should also be form-based, enhancing the pilots’ ability to detect patterns rather than having to engage in arduous calculations, integrations and extrapolations of disparate pieces of data.
While the advanced-technology aircraft being introduced were undoubtedly safer and had a much lower accident rate (see Boeing Commercial Airplanes, 2009), they introduced a new type of accident (or perhaps merely exaggerated the incidence of an already existing underlying problem). Dekker (2001) describes these as ‘going sour’ accidents, in which these newer-generation, highly automated airliners were ‘managed’ into an accident. The majority of these accidents exhibited a common underlying set of circumstances: a series of human errors, miscommunications and mis-assessments of the situation. Dekker argued that the accidents occurred as a result of a number of factors pertaining to the flight deck design, the situation and the crews’ knowledge/training, which conspired against their ability to coordinate their activities with the aircraft’s automation. The characteristics of the automation involved factors such as autonomous actions on the part of the aircraft and limited feedback about its behaviour, making it what, in CRM parlance, could be deemed a bad team player (Sarter and Woods, 1997).
It can be seen that it is almost impossible to separate design from procedures from training if it is desired to optimise the human element in the system. One of the great problems aviation has (which can also be a great strength) is the longevity of designs and working practices. Change is never for the sake of change (it is a very conservative industry, which aids in maintaining safety as much as intensive research and development does). However, this means that, as new problems come to the surface, solutions must be found that can be applied to the many legacy designs operating in the world-wide fleet. For example, Wood and Huddlestone (2007) have proposed a new method of training pilots to use the automation in their airliners; however, this only partially overcomes the fundamental design issues concerning the way that the automation on the flight deck has been implemented. What is actually required is an integrated, systemic approach to human factors.
The practice of human factors in the aerospace industry today is largely an incremental development of the position described in the previous section. However, there is now universal acceptance of its importance to safe operations. Nevertheless, emphasis is still firmly on the safety aspect of the system-function troika. Perhaps the two biggest changes in the operation and design of modern aircraft involving human factors are again in the areas of training and design.
The raison d’être of LOFT was to encourage effective flight deck management practices through team-based training and de-briefing performed within abnormal and emergency flight scenarios. However, the problems faced in a LOFT session are not the major threats to operational safety faced by crews on a daily basis. Line Operations Safety Audits (LOSA) were introduced into airlines during the late 1990s. LOSA data are collected by trained observers during line operations. These data form the basis of an audit process to check the everyday safety health of the operation. Data are collected in three categories: external threats to safety (e.g. ATC problems, adverse weather or system malfunctions); errors and the crew’s responses to the errors committed; and non-technical skills evaluation (essentially CRM processes and techniques). The concept of the LOSA methodology can be found in Helmreich et al. (1999). Thomas (2003) compared threats gathered from LOSA data with the problems faced by aircrew in LOFT training scenarios. He observed that almost 70% of LOFT scenarios involved an aircraft malfunction, yet malfunctions accounted for only 14% of the operational threats in the LOSA audit data. The most frequent external safety threat in line operations was weather (almost 21%), but this was incorporated in only 4% of LOFT sessions. Other external threats to safety, such as operational pressures on crews, air traffic and ground handling events, did not occur in any LOFT scenarios at all. Thomas (2003) also observed that crew performance during LOFT sessions was considerably superior to that observed when flying on the line: a classic instance of training appearing effective in the simulator but failing to transfer effectively to the workplace itself.
The approach of developing training needs directly from line operational requirements reflects the latest training philosophy outlined by the Federal Aviation Administration (FAA) in the Advanced Qualification Program (AQP), which is also being adopted elsewhere (e.g. by the European Aviation Safety Agency (EASA), 2008). The emphasis in the AQP has moved away from time-based training requirements to fleet-specific, proficiency-based requirements (see AC 120-54; Federal Aviation Administration, 1991). In the AQP process, the airline develops a set of proficiency objectives based upon its requirements for a specific type of operation (e.g. based upon a threat and error management process developed from a LOSA audit). The AQP is based upon a rigorous task analysis of operations, but with emphasis placed firmly upon the behavioural aspects of the flight task, such as decision making or the management of the aircraft’s automation. The revolutionary aspect of this process for the aviation industry is that many of the regulatory shackles are released when approving the content of the training programme. However, the complexity of the AQP process means that considerable skills and resources have to be applied to gain approval, and Maurino (1999) has suggested that only major airlines with such resources will benefit from its adoption.
It has already been noted that the pilot’s task has changed considerably as a result of increasing levels of automation on the flight deck, but regulations always lag behind technological advances. While regulations now require professional pilots to undertake multi-crew cooperation courses, there has been no corresponding advance in the training requirements for the understanding and management of advanced automation. There is a considerable discrepancy between what pilots are required to know to gain a professional licence and what they need to be able to do to act as a First Officer. One such gap is in the management of flight deck automation (Dekker, 2000). Wood and Huddlestone (2007) observed that the problem lay not in operating the automation interface, but in understanding what the automation was doing and how it was trying to control the aircraft. Without this knowledge it is difficult to ‘manage’ the automation effectively. Automation should not be looked upon as a separate ‘add-on’ as it is central to the design and operation of modern airliners (Rignér and Dekker, 1999). Any inspection of the avionics architecture of a modern fourth-generation airliner (e.g. Airbus A330/340 or Boeing 777) reveals this to be the case, but this is not reflected in the training of modern commercial pilots.
In September 2007, EASA implemented a new airworthiness rule (Certification Specification 25.1302) that mandates the error-tolerant design of flight deck equipment on all new large commercial aircraft. The stimulus for the rule was the FAA Human Factors Team Report on the Interfaces between Flightcrews and Modern Flight Deck Systems (1996), which was commissioned as a result of several accidents involving new (at the time) technology airliners. This was the first time a rule explicitly addressing human factors issues had been implemented in the airworthiness regulations.
The roots of human error are manifold and have complex interrelationships with all aspects of the operation of a modern airliner, especially training. During the last decade ‘design-induced’ error has become of particular concern to the airworthiness authorities, particularly in the highly automated third (and now fourth) generations of airliners. While the high level of automation in modern airliners has doubtless contributed to considerable advances in safety, accidents have begun to occur as a direct result of the manner in which it has been instantiated (Woods and Sarter, 1998): for example, the Nagoya Airbus A300-600 (where the pilots could not disengage the go-around mode after inadvertent activation as a result of a combination of lack of understanding of the automation and poor design of the operating logic in the autoland system); the Cali Boeing 757 accident (where the poor interface on the flight management computer and a lack of logic checking resulted in a Controlled Flight Into Terrain (CFIT) accident); and the Air Inter Airbus A320 accident on Mount St Odile, near Strasbourg (where the flight crew inadvertently set an excessive rate of descent on the mode control panel instead of manipulating the flight path angle, as a result of both functions utilising a common control interface and a poor associated display). However, as noted at the beginning of this chapter, many aspects of ‘pilot error’ are actually ‘designer error’.
As a result of such accidents the FAA commissioned an exhaustive study of the pilot–aircraft interfaces in highly automated aircraft (FAA, 1996). This report identified several major shortcomings and deficiencies in flight deck design. There were criticisms of the interfaces, such as pilots’ autoflight mode awareness/indication, energy awareness, confusing and unclear display symbology and nomenclature, and a lack of consistency in flight management systems’ interfaces and conventions. The report also heavily criticised the flight deck design process itself, identifying in particular a lack of human factors expertise in design teams and placing too much emphasis on physical ergonomics and insufficient on cognitive ergonomics. Fifty-one recommendations came out of the report, including, ‘The FAA should require the evaluation of flight deck designs for susceptibility to design-induced flightcrew errors and the consequences of those errors as part of the type certification process.’
Subsequently, in July 1999, the US Department of Transportation tasked the Aviation Rulemaking Advisory Committee to develop new regulatory standards and/or advisory material to address design-related flight crew performance vulnerabilities and the prevention (including the detection, tolerance and recovery) of error (US Department of Transportation, 1999). Since September 2007 the rules and advisory material developed from this process have been adopted by EASA as CS 25.1302 and AMC (Acceptable Means of Compliance) 25.1302. At the time of writing, the FAA is expected to adopt the same rule in 2012. The rule requires that it must be demonstrated that flight deck equipment can be used by qualified flight-crew members to perform their tasks safely by providing them with the necessary information in a timely and appropriate format and that, if a multi-function interface is used, the information should be accessible in a manner consistent with the urgency, frequency and duration of the flight task in question. Furthermore, the flight deck interfaces must be predictable, unambiguous, and designed to enable the crew to intervene in a manner appropriate to the task if necessary. Finally, the rule requires that as far as is possible the flight deck equipment must enable the flight crew to manage errors resulting from the kinds of interaction that can be reasonably expected (see CS 25.1302 for the full wording of the airworthiness requirement).
However, one of the great challenges of devising a certification rule to address the adequacy of the human factors aspect of the flight deck is that it is essentially attempting to evaluate a hypothetical construct. The pilot/aircraft interface doesn’t really exist!
Figure 7.2 (from Harris, 2011) shows why this is the case. On the output/human input side of the interface, ‘images’ on the aircraft displays should convey appropriate information to the pilot. This information should be interpreted correctly (within the context of the situation and the mission goals) and be transformed into knowledge and understanding that allows management of the aircraft. This is a function of part 61 of the Federal Aviation Regulations, which addresses pilot training. On the human output/machine input side of the control and monitoring loop, control intent from the pilot needs to be translated into the desired aircraft output. A good control system (be it a flight control system or system management interface) will translate pilot intent into system output in the manner desired and with minimal physical or mental effort. This is a function of part 25 of the Regulations, which addresses aircraft design. A ‘high-quality’ pilot/aircraft interface consists of a good ‘fit’ between the skills, knowledge and ability of the user and the controls and displays of the machine. However, neither certification of the aircraft alone nor satisfactorily attaining the requirements for an Air Transport Pilot’s Licence (ATPL) can ensure the pilot/aircraft fit. At the moment, the only way that this fit is managed is via the requirement for an aircraft type rating, which attempts to make sure that the generic training obtained as part of the ATPL can be translated into the specific requirements to manage the flight deck interfaces of one particular type of aeroplane.
To a certain degree the fragmented nature of the discipline reflects the fragmented nature of the regulations within which airlines operate. This in turn militates against the opportunities for a truly systemic approach to the application of the human sciences in the aviation industry.
Taking a socio-technical systems approach based upon the 5Ms model, some very basic, high-level representations of the manner in which human factors operates in the airline industry can be proposed that incorporate many of the issues addressed so far (see Fig. 7.3) (Harris, 2007b). These representations are by no means a comprehensive model. They do not address such things as the negative effects of psychological performance shaping factors (e.g. stress, fatigue and workload) or physiological performance shaping factors (e.g. noise, vibration and temperature). However, they do attempt to make explicit the relationships between the various human factors disciplines and how they contribute to the overall function of an airline.
To start off with, the requirements of the flight task (Mission) drive the design of the Machine (and, hence, to a certain degree the design of the flight deck). The role of the flight deck is to support the four basic tasks of the flight crew (aviate, navigate, communicate and manage) and to protect the crew from the physiological stressors imposed by the Medium. These basic functions of the flight deck are pivotal to the design of the training (essentially a process of modifying the huMan). Training is all about teaching someone how to do something, with something. To train someone you need training devices (a full flight simulator approved by the regulatory authorities for training and licensing purposes, equipped with a daylight visual system and six-axis motion platform; cockpit procedures trainers; other part-task trainers; computer-based training facilities for aircraft systems and procedures; and also ‘regular’ classroom facilities). The training overlay is more than just a curriculum: it also dictates which training devices are best suited to delivering which aspects of the training course. The nature of the task will also dictate what sort of person is required: what basic aptitudes do they need to make them likely to complete their training successfully and become a safe and efficient pilot? It has already been seen that basic ‘stick and rudder’ skills are only a very small part of the makeup of the modern pilot. Cognitive and team-working (flight deck Management) skills are now also essential. There is an intimate relationship between all these components, each of which helps to inform the design of the others. Hopefully, as a result of this process there will be a positive, beneficial effect on the huMan under training who, by thrashing around on the stick and throttle and jabbing at various buttons, will have the desired effect on the aircraft (i.e. use it effectively to complete the Mission tasked by the Management).
The role of airline Management, in addition to making money, is to ensure that all operations fall within the legislative requirements (e.g. for flight crew licensing and aircraft maintenance) required by society. This is essentially a safety management role that itself contains lots of human factors issues associated with confidential reporting schemes, safety culture and the analysis of accident and incident data. These issues, though, are well beyond the scope of this short chapter.
The major emphasis of the work in these formative years of the discipline (particularly in commercial aviation) has been safety related. Human factors has been seen as a discipline necessary to help in avoiding error (and hence accidents): poor human factors increases the likelihood of error. Human factors is almost regarded as a ‘hygiene factor’. From a design perspective, a failure to consider the requirements of the aircraft/pilot interface will result in a product that is difficult to use and promotes error. However, from a manufacturer’s perspective, providing a ‘good’ aircraft/pilot interface does not ‘add value’ to a product: a failure to provide a user-friendly interface merely detracts from its value. As a result, it is difficult to make a convincing argument to invest heavily in human factors research, and for airlines to pay indirectly for such research via increased unit cost. Modern flight deck interfaces are generally very good; hence minor deficiencies become a training or selection issue to be dealt with within the airline rather than being the manufacturer’s problem. But training is also a major cost. There is the cost to the airline in terms of the time and expense devoted to training and the provision of training equipment (simulators, computer-based training equipment, etc.). There is also the cost of time lost to revenue-earning operations.
Human factors alone cannot improve the operational efficiency of an aeroplane (Harris, 2006). A wider, ‘system perspective’ is required. Human factors integration (HFI), which is a sub-discipline of systems engineering, began to emerge as a concept during the 1990s. HFI provides an integrative framework for the application of human factors. HFI originally encompassed six domains that were regarded as essential for the optimum integration of the human element into a system (UK Ministry of Defence, 2001). These were:
Simultaneous with the emergence of the HFI approach has been the development of powerful and robust computing technology, the development of large, cheap, flexible displays and the advent of robust, high-speed data links. These technologies provide the support for truly human-centred design and a revolution on the flight deck (and consequently across airline operations as a whole). Previously, to a large degree human-centred design had been a technology-driven process. However, with these technologies being developed, it is possible to start designing the tools and automation that are really required to support users in their tasks, rather than simply making the interfaces more ‘user friendly’ and less likely to induce error. Put together, the emergence of these new technologies coupled with a system-wide HFI approach means that human factors need no longer merely be a hygiene factor. It can have positive benefits by enhancing performance and reducing both operational and through-life costs. In short, it can ‘add value’.
The trend in flight deck design has been one of progressive ‘de-crewing’. The common flight deck complement is now just two. Fifty years ago, it was not uncommon for there to be five crew in the cockpit of a civil airliner (two pilots, flight engineer, navigator and radio operator). Now just two crew, with much increased levels of assistance from the aircraft, can accomplish the same tasks once undertaken by five personnel. Many of the functions once performed by flight crew are now wholly (or partially) performed by the aircraft itself. As referred to several times earlier, the emphasis in the role of the pilot has changed from one of being a ‘flyer’ to one of being a flight deck manager, where the aircraft and its systems are usually under supervisory, rather than manual, control. The pilots are now effectively outer-loop controllers (setters of high-level goals) and monitors of systems, rather than inner-loop (‘hands-on’, minute-to-minute) controllers. Emphasis is now on crew and automation management rather than flight path control per se.
Harris (2007a) has suggested that with the judicious use of a range of technologies developed during the past decade there are no major reasons why a single-pilot-operated aircraft is not feasible. The individual technologies required have now all reached a suitable level of maturity. The military have been operating complex, high-performance single-crew aircraft for many years, and uninhabited air vehicles (UAVs) are now a regular part of operations. It is time for these technologies to be spun out into the commercial domain. The greatest obstacle to the operation of a civil single-pilot aircraft is not technological: it lies in combining the extant technologies, designing the user interfaces and developing the new concept of operations needed to make such an aeroplane work. The human factors requirements are the prime driver, not the hardware and software technologies. Such an aircraft will offer considerable cost savings. For regional aircraft of up to 50 seats it has been estimated that, over a 200 nautical mile leg, between 15 and 35% of the direct operating costs can be accounted for by crew costs (Alcock, 2004). As a result, considerable savings are possible with a reduction in the number of flight deck crew. Furthermore, with such a completely new design there are relatively few legacy issues to overcome, giving the designers a much freer hand to explore new concepts of operation.
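The scale of the potential saving can be sketched with some back-of-envelope arithmetic. In the sketch below, only the 15–35% crew-cost fractions come from Alcock (2004); the direct operating cost of $5,000 per leg and the assumption that removing one of two pilots roughly halves crew costs are purely illustrative.

```python
def single_pilot_saving(doc_per_leg, crew_cost_fraction, crew_cost_reduction=0.5):
    """Estimated direct-operating-cost (DOC) saving per leg.

    crew_cost_fraction: share of DOC attributable to crew costs (Alcock,
    2004, quotes 15-35% for a regional aircraft of up to 50 seats over a
    200 nm leg). crew_cost_reduction is a hypothetical assumption that a
    single pilot costs roughly half as much as a two-pilot crew.
    """
    return doc_per_leg * crew_cost_fraction * crew_cost_reduction

# Illustrative (hypothetical) DOC of $5,000 per 200 nm leg:
print(single_pilot_saving(5000, 0.15))  # 375.0, i.e. ~7.5% of DOC
print(single_pilot_saving(5000, 0.35))  # 875.0, i.e. ~17.5% of DOC
```

On these assumptions, a single-pilot regional aircraft would shave roughly 7.5–17.5% off direct operating costs per leg, which suggests why the economic case attracts attention even before any operational benefits are counted.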
Many assumptions about the role of the second pilot are also incorrect (or at least the received wisdom is questionable): for example, that the second pilot serves to reduce workload and acts as an error-checking mechanism. This is not necessarily true. Firstly, there is a workload cost associated with crew management. It costs effort to work as a team. The requirement to coordinate, cooperate and communicate on the flight deck itself carries workload with it. Doubling the number of crew does not halve the workload on each member. Far from it. Furthermore, poor CRM has been implicated as a causal factor in over 17% of all fatal commercial jet aircraft accidents (Civil Aviation Authority, 1998). From the same data set the effectiveness of the second pilot as an ‘error checker’ can also be questioned. Omission of action or inappropriate action was implicated in nearly 37% of accidents, and deliberate non-adherence to procedures was implicated in 12.2%. Becoming ‘low and slow’ (a failure to cross-monitor the flying pilot) was a factor in 19% of accidents. As a cross-check on the position of the aircraft, the effectiveness of the pilot not flying (PNF) would seem questionable, as a lack of positional awareness was identified as a causal factor in over 41% of cases. It is accepted that these data cannot show the number of cases where the actions of the second pilot averted an accident. However, LOSA data obtained during routine operations again question the effectiveness of the second pilot. Thomas (2003) reported that 47.2% of errors committed by captains during normal line operations involved intentional non-compliance with standard operating procedures or regulations, and 38.5% were unintentional procedural non-compliance. He also reported that, in observations of line operations, crews did not demonstrate effective error detection, with more than half of all errors remaining undetected by one or both of the flight crew.
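Thomas’s (2003) two figures can be combined into a single rough estimate of how much of the captains’ observed error behaviour involved procedural non-compliance of some kind; a minimal sketch, assuming (as the reporting implies) that the intentional and unintentional categories are disjoint:

```python
# Proportions of captains' errors in normal line operations (Thomas, 2003),
# as quoted in the text.
intentional_noncompliance = 0.472    # deliberate non-compliance with SOPs/regulations
unintentional_noncompliance = 0.385  # unintentional procedural non-compliance

# Combined share of errors involving some form of procedural non-compliance
# (valid only if the two categories are mutually exclusive).
noncompliance_total = intentional_noncompliance + unintentional_noncompliance
print(f"{noncompliance_total:.1%}")  # prints 85.7%
```

On this reading, over four-fifths of the captains’ errors observed involved non-compliance of one kind or another, underlining the point that the second pilot’s cross-checking function cannot be taken for granted.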
It can be argued that removing the second pilot actually reduces the scope for accidents occurring as a result of miscommunication or misunderstanding between the pilots and, furthermore, it would not double the workload.
Modern commercial aircraft are already equipped with intelligent electronic checklist systems that effectively perform the cross-checking role of the second pilot. It would therefore be tempting to adopt an approach whereby the automation in the aircraft simply undertakes the role of the second pilot. This, however, would be falling into the ‘electric horse’ trap. The single-crew commercial aircraft provides an opportunity to fundamentally re-think the role of the pilot and to operate aircraft in a new and better way. Simply automating the functions of the second pilot would not provide a step change in operational effectiveness. New designs and new operating concepts are required, and a system-wide (HFI) approach is needed to develop them. For example, there is no reason why many functions, such as navigation, surveillance and routine system monitoring, need to be physically undertaken on the flight deck. Harris (2007a) suggests not replacing many of these functions, but displacing them to dedicated, specialist teams on the ground, which need not necessarily be composed of pilots. These teams can simultaneously look after many aircraft, giving economies of scale. Even in the event of pilot incapacitation, UAV technology is now mature enough to allow the aircraft to be recovered.
Such a change in operational concept will require concomitant changes in training and organisational structures, not to mention a change in the airworthiness requirements. However, all these things are possible.
Human factors as a discipline has come of age. It must, however, avoid its natural inclination to rush to claim the moral high ground by marking its territory solely within the realm of aviation safety. The discipline must also coalesce once again so that the maximum benefit from an integrated, through-life approach can be realised. While increasing levels of specialisation have served to develop the science, they have simultaneously militated against its coherent application in commercial aviation. Nevertheless, the opportunity now exists to capitalise on the developments made by this relatively new discipline, which was originally born in the aviation domain over half a century ago.
Alcock, C. New turboprop push tied to rising fuel costs. Aviation International News, 29 September 2004. Available at: http://www.ainonline.com/Publications/era/era_04/era_newturbop18.html (accessed 30 January 2007).
Chidester, T.L., Helmreich, R.L., Gregorich, S.E., Geis, C.E. Pilot personality and crew coordination: Implications for training and selection. International Journal of Aviation Psychology. 1991; 1:25–44.
European Aviation Safety Agency. Certification Specifications for Large Aeroplanes (CS-25): Amendment 5. Cologne: EASA, 2008. Available at: http://www.easa.europa.eu/ws_prod/g/rg_certspecs.php#CS-25
Harris, D. Human Factors Integration (HFI) in civil aviation – Taking a systems perspective on Human Factors. Invited address to the Aviation Safety Council, Taipei, Taiwan (Republic of China), 1 November 2007.
Harris, D., Harris, F.J. Predicting the successful transfer of technology between application areas; a critical evaluation of the human component in the system. Technology in Society. 2004; 26(4):551–565.
Mackworth, N.H. Notes on the clock test – A new approach to the study of prolonged visual perception to find the optimum length of watch for radar operators. Cambridge: MRC Applied Psychology Unit, Cambridge University Report, 1944. [Subsequently published as].
Office of Statistical Control. Army Air Forces Statistical Digest – World War II. Air Force Historical Research Agency, 1945. Available at: http://www.afhra.af.mil//
Pariés, J., Amalberti, R. Recent Trends in Aviation Safety: From Individuals to Organisational Resources Management Training. Roskilde, Denmark: Risøe National Laboratory Systems Analysis Department Technical Report (Series 1), 1995; 216–228.
Reising, J.M., Ligget, K.K., Munns, R.C. Controls, displays and workplace design. In: Garland D.J., Wise J.A., Hopkin V.D., eds. Handbook of Aviation Human Factors. Mahwah, NJ: Lawrence Erlbaum Associates; 1998:327–354.
Rignér, J., Dekker, S.W.A. Modern flight training: Managing automation or learning to fly? In: Dekker S.W.A., Hollnagel E., eds. Coping With Computers in the Cockpit. Aldershot: Ashgate; 1999:145–152.
Thomas, M.J.W. Improving organisational safety through the integrated evaluation of operational and training performance: An adaptation on the line operations safety audit (LOSA) methodology. Human Factors and Aerospace Safety. 2003; 3:25–45.
US Department of Transportation. Aviation Rulemaking Advisory Committee; Transport Airplane and Engine: Notice of new task assignment for the Aviation Rulemaking Advisory Committee (ARAC). Federal Register. 1999; 64(140), 22 July 1999.