
14

Cost, time and technical performance risk mitigation in large, complex and innovative aeronautics development projects

T. Browning, Texas Christian University, USA

Abstract:

Large, complex, innovative development projects, such as many in the aeronautical industry, occur under conditions of consequential uncertainty, i.e. risk. Risk is manifest with respect to project cost, schedule, and technical performance. These risks need to be managed in relation to each other as well as to the project schedule, budget, and technical requirements. This chapter presents an integrated model to plan and monitor project value in terms of all of these areas. The model complements conventional approaches such as earned value management, which does not directly account for technical performance or any kind of risk. The model allows a user to quantify a project’s overall value, the portion of that value at risk, and the respective contributions of cost, schedule, and technical performance to that value and risk.

Key words: project management, risk management, project value, uncertainty.

14.1 Introduction

A project is ‘a temporary endeavor undertaken to create a unique product, service, or result’ (PMI 2008). Unique circumstances, novel requirements, and limited resources all contribute to make projects uncertain and sometimes even ambiguous regarding exactly what work should be done and when (Loch et al. 2006). Uncertainty is at its greatest in large, complex, innovative development projects, which are common in the aerospace industry. When uncertainty allows potential outcomes that carry a negative consequence or impact to a project, uncertainty causes risk – risks of cost and schedule overruns, and risks of failing to provide desired levels of quality, functionality, or technical performance.

This chapter presents an integrated, quantitative framework for monitoring and managing project cost, schedule, technical performance, and overall uncertainties, impacts, and risks. The framework (Browning 2011) combines two existing frameworks: one for cost and schedule risks (Browning and Eppinger 2002; Browning 1998) and one for technical performance risks, the risk value method (RVM) (Browning 2006; Browning et al. 2002; Browning 1998). This combination into an integrated framework – effectively an extension of the RVM – provides a basis for exploring risk tradeoffs in projects. Risk tradeoffs are critical because reducing risk in one part of a project often just pushes risks into other parts, frequently inadvertently and unexpectedly. This supposed risk mitigation by risk transfer contributes to negative syndromes such as ‘whack-a-mole’ management where, as in the popular arcade game, managers try to ‘whack’ each new problem as it arises, but in doing so merely cause new problems elsewhere. Hence, an integrated framework is needed to support better decisions at the overall project level.

After introducing concepts and definitions pertaining to project cost, schedule, and technical performance uncertainty and risk, this chapter briefly presents the integrated decision-support framework and illustrates it with an example from the development of an unmanned combat aerial vehicle (UCAV). The framework and example provide the basis for a subsequent discussion of issues pertaining to risk tradeoffs. The chapter concludes with suggestions for further reading and references on this topic.

14.2 Interdependence of development cost, schedule, and technical performance

As a temporary endeavor, a project must produce its result with limited resources, especially time (deadlines or due dates) and money (budgets). While specific types of resources, such as skilled individuals or specialized facilities, may also be constrained, we will simplify the discussion in this chapter to the constraints posed by time and money. (The proposed framework could be extended to account for other specific resources of interest.) Collectively, we can think of all the resources expended for a project as the sacrifice for or investment in it.

The benefits of a project – its desired results – should meet expectations for quality. While time and money are relatively easy to measure, the quality of a result is often less so. Depending on the type of result expected, quality may be considered in terms of numbers of features or functions, numbers of goals met, degrees of capability, absence of ‘bugs’ or non-conformances to requirements, and/or aesthetic virtues. In this chapter, we will use the term ‘technical performance’ (or just ‘performance’) to refer to the degree of capability or ‘goodness’ of a project’s result. Technical performance should be understood in a general and generic sense as the attributes of the benefits or returns provided by a project, against which the costs or sacrifices are weighed.

Joint consideration of benefits for costs or sacrifices leads to the concept of value. A project’s value depends on the technical performance provided for the time and money spent. Exactly how these three variables are connected will be discussed below, but their general relationship implies that overall project value will tend to increase as time and money are spent, to a point, past which that value will begin to decrease (Browning 2003). In other words, everyone realizes that any worthwhile result will require some investment of resources, but at some point the marginal benefits cease to justify the marginal costs. Exactly where this point occurs depends on the specifics of the project and the desires of its stakeholders.

Another way to think of overall project value is illustrated in Fig. 14.1 as the volume of a rectangular solid with the three dimensions of project time (i.e. from the point of view of the customer, lead time), cost, and the technical performance of its result. As the time and cost of a project increase, the volume of the solid decreases, while an increase in technical performance increases the volume. However, an increase in technical performance usually requires an expenditure of time and money, so the three dimensions are not independent. It is precisely these interdependencies that cause time–cost–performance tradeoff problems in engineering design and management.

14.1 A stylized view of project value as the volume of a rectangular solid.

14.3 The aspect of risk

14.3.1 Uncertainty and the probability model

While definitions and usages vary by context and author, we will use the following definitions, adapted from Schrader et al. (1993):

• Uncertainty: the state of having some knowledge of the set of potential outcomes for a variable but not knowing what the outcome will actually be; ‘knowing what you don’t know.’

• Ambiguity: the state of not even knowing about the existence of a variable or its set of potential outcomes; ‘not knowing what you don’t know’; unknown unknowns or ‘unk-unks.’

In this chapter, we will focus on uncertainty, which addresses most of the issues on all but the most unprecedented of projects. We will presume that the ultimate outcomes of variables such as project cost, duration, and technical performance cannot be known with certainty, especially early in a project, but that, firstly, these variables are known to exist and, secondly, estimates of them can be made. Most project risk-management methodologies and literature deal with aspects of uncertainty. By its very nature, ambiguity is more challenging to address in a formal way, although its presence and potential should never be ignored. (Ambiguity is most likely to affect a project with dynamic goals. That is, if the whole point of the project is changing, then estimates of the time and cost required to achieve a superseded goal become irrelevant.)

A conventional way to model a variable about which we are uncertain is to use a random variable represented by a probability distribution, f(x). Often, such distributions are inferred from only a few data points. For example, suppose we have three estimates of a project’s duration: an optimistic or best-case scenario for fast completion, a; a pessimistic or worst-case scenario for slow completion, b; and a ‘most likely’ completion time, m, somewhere between a and b. Traditional project management methods (e.g. Meredith and Mantel 2009) use these three points to fit a beta or a triangle distribution (see Fig. 14.2), from which measures such as the mean, variance, and skewness can be inferred. Since the height of a probability distribution is normalized so that the total area under the distribution equals one, f(x) can be used to address questions concerning the probability of an outcome. For example, in Fig. 14.2, the area under the triangle and to the left of the vertical bar represents the probability of completing the project by the deadline of 160 days (about a 42% chance). Meanwhile, about 58% of the potential outcomes imply a schedule overrun.

14.2 Probability distribution functions.

Much of the information about our level of certainty in an outcome is represented by the variance (or standard deviation) of a distribution – or, quite simply, its width. A distribution with small variance represents a narrow set of potential outcomes and, therefore, relative precision and confidence in the eventual outcome. A wide distribution conveys a broad range of potential outcomes and a lack of confidence in knowledge of the eventual outcome.
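To make the three-point mechanics concrete, the short Python sketch below fits a triangle distribution to illustrative optimistic, most likely, and pessimistic duration estimates (a = 120, m = 162 and b = 210 days – values invented here only so that the result roughly matches the ~42% figure quoted above, not the actual estimates behind Fig. 14.2) and evaluates the probability of finishing by a 160-day deadline.

```python
def triangular_cdf(x, a, m, b):
    """Cumulative probability of a triangle distribution defined by
    optimistic (a), most likely (m) and pessimistic (b) estimates."""
    if x <= a:
        return 0.0
    if x <= m:
        return (x - a) ** 2 / ((b - a) * (m - a))
    if x <= b:
        return 1.0 - (b - x) ** 2 / ((b - a) * (b - m))
    return 1.0

# Illustrative three-point estimates of project duration (days).
a, m, b = 120.0, 162.0, 210.0
deadline = 160.0

p_on_time = triangular_cdf(deadline, a, m, b)
print(f"P(finish by {deadline:.0f} days) = {p_on_time:.2f}")    # about 0.42
print(f"P(schedule overrun)             = {1 - p_on_time:.2f}") # about 0.58
```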

14.3.2 Impact and expected loss (risk)

While probability distributions provide a very useful model of uncertainty, a project manager faces a myriad of uncertainties, and some are more important than others. While more uncertainty in any dimension is usually bad, which uncertainties deserve the most attention? Moreover, in using only cumulative probabilities, we lose information about the outcomes themselves: are all outcomes that miss the deadline, for instance, equally bad? Is it worse to miss the deadline by 50 days than to miss it by one? In a simple probability model, both of these outcomes are equally undesirable. Hence, we supplement our basic model of uncertainty in outcomes with a model of the consequences or impacts of those outcomes.

Figure 14.3 shows an impact function, I(x), a mathematical expression of the consequences of each potential outcome in f(x). In this example of a project’s duration relative to a deadline, early completion provides no reward, but being late has a penalty – $100 000 for the first day plus $10 000 per additional day. A project manager would prefer to make decisions that would increase confidence in the completion date (narrow the distribution) or move it up (shift the distribution to the left).

14.3 Probability distribution with impact functions.

Risk is not the same thing as uncertainty; it is uncertainty weighted by its impact. Standard approaches to risk management (e.g. PMI 2008) multiply probability and impact to arrive at a risk index. We can derive a richer risk index by looking at the impact of each potential outcome, weighted by its probability, to find the overall expected penalty or loss inherent in a set of potential outcomes:

R = \int_{G}^{+\infty} I(x) \, f(x) \, dx        [14.1]

where R is the risk index and G is the goal (e.g. a deadline). Equation 14.1 pertains to a set of outcomes where ‘smaller is better’ (SIB), such as project duration, so the integration is performed over the region of undesirable outcomes that exceed the goal. For a set of outcomes where ‘larger is better’ (LIB), we would instead integrate from −∞ to G. For a set of outcomes where some nominal value is best (NIB), two integrations must be performed, one over each region of adversity. For the example in Fig. 14.3, the impact function is

I(x) = 0 for x ≤ G;   I(x) = 100 000 + 10 000 (x − G − 1) dollars for x > G, with G = 160 days        [14.2]

R = \int_{160}^{+\infty} I(x) \, f(x) \, dx        [14.3]

That is, the risk index conveys the expected (in the sense of statistical expectation) penalty the project will face, given the prevailing uncertainties (i.e. the possibilities of a particular range of undesired outcomes). If the impact function is specified in monetary units, as in this example, then R is the expected loss or ‘value at risk’ due to project duration. Impacts may also be expressed in other terms, such as customer utility, sales revenue, or sales volume. In any case, R conveys the expected (average) penalty in those terms.

Note that R is a function of G. All else being equal, for a SIB (LIB) attribute, R decreases (increases) with G. In other words, using the analogy of a high jumper, as the ‘bar’ is raised, the risk of not clearing it increases. Equation 14.1 accounts for risk due to the height of the bar (G), the capability of the jumper (f(x)), and any consequences of failing the jump (I(x)).
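As an illustration of Eq. 14.1 for a smaller-is-better attribute, the sketch below numerically integrates the impact-weighted density over the overrun region, using the lateness penalty described above and the same invented triangle parameters as before (not the chapter’s actual distribution).

```python
def triangular_pdf(x, a, m, b):
    """Probability density of a triangle distribution (a <= m <= b)."""
    if a <= x <= m:
        return 2.0 * (x - a) / ((b - a) * (m - a))
    if m < x <= b:
        return 2.0 * (b - x) / ((b - a) * (b - m))
    return 0.0

def lateness_impact(x, goal):
    """Impact function from the text: $100,000 for the first late day
    plus $10,000 for each additional day beyond the deadline."""
    if x <= goal:
        return 0.0
    return 100_000.0 + 10_000.0 * max(x - goal - 1.0, 0.0)

def schedule_risk(a, m, b, goal, step=0.1):
    """Numerical form of Eq. 14.1 for a smaller-is-better attribute:
    R = integral over x > goal of I(x) * f(x) dx (midpoint rule)."""
    risk, x = 0.0, goal
    while x < b:
        mid = x + step / 2.0
        risk += lateness_impact(mid, goal) * triangular_pdf(mid, a, m, b) * step
        x += step
    return risk

# Illustrative duration estimates (days); not the chapter's actual data.
a, m, b, deadline = 120.0, 162.0, 210.0, 160.0
print(f"Expected schedule loss R = ${schedule_risk(a, m, b, deadline):,.0f}")
```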

14.3.3 Cost, schedule, and performance risks

While some project risk management methodologies treat risk as a fourth dimension along with cost, schedule, and performance, it seems that, since the risks threaten each of these three dimensions, they should be accounted for within them. Therefore, each risk is defined as follows (Browning 2011):

• Cost risk (R_c): the expected additional cost (beyond the planned budget) to be incurred given (a) the uncertainties in the project’s ability to develop an acceptable result (b) within a given budget, and (c) the impacts of any cost overruns.

• Schedule risk (R_s): the expected additional project duration (beyond the planned deadline) to be required given (a) the uncertainties in the project’s ability to develop an acceptable result (b) by a given deadline, and (c) the impacts of any schedule overruns.

• Technical performance risk (R_p): the expected loss due to (a) the uncertainties in the project’s ability to develop a result (b) that will satisfy technical performance goals, and (c) the impacts of any performance shortfalls.

14.3.4 Risk tradeoffs

The previous section mentioned the potential for cost, schedule, and performance tradeoffs in projects. Indeed, time–cost tradeoffs in particular have received much attention in the literature on project scheduling (e.g. Roemer et al. 2000). However, most if not all of this literature discusses the tradeoffs in terms of single-point estimates for cost, duration, and quality, neglecting the effects on the distributions of potential outcomes. Indeed, some tradeoffs may affect characteristics of the distribution besides the mean or most likely outcome and therefore would be inadequately represented by single-point models. For example, an investment in information (at a cost), such as performing a test activity, might improve confidence in a dimension of technical performance, thereby reducing the variance in its distribution, but without shifting its mean. This trades an expense (which may increase R_c) for a reduction in R_p. To determine whether this tradeoff is rational, we need an integrated model to support such analyses and decisions.
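A minimal numerical sketch of such a tradeoff follows. All the numbers (the stealth-ratio estimates, the goal, the impact rate, and the test cost) are invented for illustration; the point is only that narrowing the distribution around an unchanged mean reduces the expected loss, which can then be compared with the cost of the test.

```python
def triangular_pdf(x, a, m, b):
    """Density of a triangle distribution built from (a, m, b) estimates."""
    if a <= x <= m:
        return 2.0 * (x - a) / ((b - a) * (m - a))
    if m < x <= b:
        return 2.0 * (b - x) / ((b - a) * (b - m))
    return 0.0

def lib_risk(a, m, b, goal, loss_per_unit_shortfall, steps=10_000):
    """Eq. 14.1 for a larger-is-better attribute: expected loss over the
    region of outcomes that fall short of the goal."""
    lo, hi = a, min(goal, b)
    if hi <= lo:
        return 0.0
    dx = (hi - lo) / steps
    risk = 0.0
    for i in range(steps):
        x = lo + (i + 0.5) * dx
        risk += loss_per_unit_shortfall * (goal - x) * triangular_pdf(x, a, m, b) * dx
    return risk

# Hypothetical stealth-ratio estimates: the test narrows the spread while
# leaving the mean (and most likely value) unchanged.
goal, impact_rate = 1.0, 50e6     # $50M of lost revenue per unit of shortfall
r_before = lib_risk(0.900, 1.05, 1.200, goal, impact_rate)
r_after  = lib_risk(0.975, 1.05, 1.125, goal, impact_rate)
test_cost = 200_000.0             # cost of the verification activity

print(f"R_p before test: ${r_before:,.0f}")
print(f"R_p after test:  ${r_after:,.0f}")
print(f"Risk reduction ${r_before - r_after:,.0f} vs. test cost ${test_cost:,.0f}")
```

With these invented numbers the reduction in expected loss exceeds the cost of the test, so the trade would look rational in expected-value terms; with different numbers it might not.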

14.4 An integrated decision-support model – the risk value method (RVM)

The components presented in the previous section combine to form an integrated model with the following inputs:

• Dimensions of technical performance: the J critical-to-quality (CTQ) characteristics or attributes of the project’s result,

• Project capabilities (f): estimates of the potential outcomes in each dimension of cost, schedule, and the J dimensions of technical performance, consolidated into a vector of length J + 2 of probability distributions,

• Project goals (G): a vector of length J + 2 of goals, targets, or requirements for each dimension, including the budget, the deadline, and a level of technical performance in each dimension,

• Impact functions (I): a vector of length J + 2 of impact functions, representing the penalties (in similar terms) of outcomes that fail to achieve their goal.

Using these inputs and Eq. 14.1, we can find the vector of length J + 2 of risk indices (R). To get a sense of the project’s overall risk, we can combine the dimensions of risk in one of the following ways:

R_overall = \sum_{i=1}^{J+2} w_i R_i        [14.4]

where the weights w_i sum to one (a weighted arithmetic mean), or

R_overall = \max_i \, (w_i R_i)        [14.5]

where overall project risk depends on the weakest attribute, or a weighted geometric average (formula not shown), or others. Multi-attribute decision analysis is notoriously difficult due to the interdependencies among the dimensions, and each of the potential methods has its benefits and drawbacks. Equation 14.4 has the advantage of simplicity and the disadvantage of potentially obscuring some high risks. Equation 14.5 is also simple and provides a constraint-based approach. Whichever method a project manager selects, the intent is to keep tabs on the evolving levels of risk in the critical dimensions of a project and to support decisions about risk management and tradeoffs.
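A small sketch of both aggregation rules, using invented risk indices and weights (expressed here in millions of dollars of expected lost revenue, assuming the impacts have already been put in commensurate terms), might look like the following. The max-based form shown for Eq. 14.5 is one plausible reading of the ‘weakest attribute’ rule.

```python
# Hypothetical risk indices (in $M of expected lost revenue) and weights
# for the J + 2 dimensions; none of these values come from Table 14.1.
risks   = {"cost": 1.2, "schedule": 2.5, "payload": 0.8,
           "range": 3.1, "reliability": 0.6, "stealth": 1.4}
weights = {"cost": 0.15, "schedule": 0.15, "payload": 0.20,
           "range": 0.20, "reliability": 0.10, "stealth": 0.20}
assert abs(sum(weights.values()) - 1.0) < 1e-9   # weights must sum to one

# Eq. 14.4: weighted arithmetic mean of the risk indices
r_overall_mean = sum(weights[k] * risks[k] for k in risks)

# Eq. 14.5 (as read here): overall risk driven by the worst weighted attribute
r_overall_max = max(weights[k] * risks[k] for k in risks)

print(f"Overall risk (Eq. 14.4): {r_overall_mean:.2f} $M")
print(f"Overall risk (Eq. 14.5): {r_overall_max:.2f} $M")
```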

In a simplistic way, R_overall may be thought of as the portion of the project’s overall value that is threatened by the potential occurrence of adverse outcomes. Augmenting the stylized representation in Fig. 14.1, one can think of R_overall as the potential reduction in volume down to the smaller, shaded solid in Fig. 14.4. As R_overall decreases, the shaded solid grows until R_overall = 0, at which point the shaded solid is the same size as the unshaded one.

14.4 A stylized view of the portion of a project’s value at risk.

This framework, an extension of the risk value method to include project time and cost, maintains the RVM’s connection of project progress and added value with the reduction of the portion of a project’s value at risk.

At the same time, it is important to note that the framework can also account for the benefits of the uncertainties facing projects – namely, the possibility of opportunities residing in the upside potential of each probability distribution (i.e. the portion of the outcomes that are better than the goal). If exceeding a goal has a positive reward, then an impact function can similarly be used to account for such benefits. For further information on opportunities and their relationship to risks, see Browning (2011) and Hillson (2003).

14.5 Example: an unmanned combat aerial vehicle (UCAV) development project

This section uses the example of the preliminary design phase of a UCAV development project to illustrate the application of the risk framework. The example is hypothetical but based on an actual UCAV development project at The Boeing Company (Browning 1998).

14.5.1 Model inputs and initial conditions

We will focus on four attributes of technical performance for the UCAV: maximum payload weight (in pounds), maximum range (in miles), reliability (in terms of mean time between failures (MTBF)), and stealth (in terms of a ratio to the performance level of the previous-generation product, such that a measure above 1.0 represents an improvement). Adding project cost and schedule (duration) to these yields six dimensions of project attributes. Consultation with experts in each of these areas led to the initial estimates of project capabilities shown as the probability distributions in Table 14.1. A variety of inputs were distilled into three point values (a, m, and b) from which the triangle distributions were derived:

f(x) = 2(x − a) / [(b − a)(m − a)] for a ≤ x ≤ m;   f(x) = 2(b − x) / [(b − a)(b − m)] for m < x ≤ b;   f(x) = 0 otherwise        [14.6]

Table 14.1 Initial (t0) project data and risk calculations

Other types of distribution could be used instead of triangles. In this example, the distributions for project cost and duration were derived from project simulations (Browning and Eppinger 2002). Table 14.1 also shows the goal for each dimension, the impact function (in terms of the potential for lost revenue), and the implied level of risk, as calculated with a discrete form of Eq. 14.1. The weights shown in the second-to-last column are used in Eq. 14.4 to calculate the overall project risk (shown at the bottom of Table 14.1). The following simplifying assumptions are also used to relate project cost and duration to product unit cost and delivery date. (These assumptions are necessary since the latter attributes are actual customer CTQs, whereas preliminary design cost and duration are not.) Firstly, each dollar of cost overrun in the preliminary design phase project would likely cause a two-dollar increase in the product’s unit cost. Secondly, each day of overrun in the project’s duration would likely increase the product’s final delivery date by two days.
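Because the entries of Table 14.1 are not reproduced here, the sketch below uses made-up numbers for a single attribute (maximum range) simply to show the mechanics described above: the three-point estimates are turned into a binned triangle distribution (Eq. 14.6), and the risk index is the probability-weighted sum of impacts over the shortfall region – a discrete form of Eq. 14.1.

```python
def discrete_triangle(a, m, b, n_bins=50):
    """Approximate the triangle distribution of Eq. 14.6 with n_bins
    equal-width bins; returns bin midpoints and bin probabilities."""
    def pdf(x):
        if a <= x <= m:
            return 2.0 * (x - a) / ((b - a) * (m - a))
        if m < x <= b:
            return 2.0 * (b - x) / ((b - a) * (b - m))
        return 0.0
    width = (b - a) / n_bins
    mids = [a + (i + 0.5) * width for i in range(n_bins)]
    probs = [pdf(x) * width for x in mids]
    total = sum(probs)                       # renormalize the approximation
    return mids, [p / total for p in probs]

# Made-up three-point estimates for maximum range (miles), goal, and impact
# function (lost revenue per mile of shortfall); purely illustrative values.
a, m, b = 480.0, 560.0, 620.0
goal = 550.0                                 # larger-is-better requirement

def impact(x):
    return 40_000.0 * (goal - x) if x < goal else 0.0

mids, probs = discrete_triangle(a, m, b)
risk_range = sum(p * impact(x) for x, p in zip(mids, probs))
print(f"Discrete risk index for maximum range: ${risk_range:,.0f}")
```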

14.5.2 Project progress

As the UCAV preliminary design project proceeded, workers completed activities, which produced new information and knowledge, which enabled updates to the probability distributions. Doing the work consumed time and money, which replaced some cost and schedule uncertainties with actual results, thus narrowing the cost and schedule distributions. (Regardless of which way the mean shifts, the typical trend is for cost and schedule variance to decrease over the course of a project.) Note that we use the term ‘variance’ in the statistical sense (referring to the width of the distribution), not in the sense of a difference from a planned value, which is how the term is used in earned value management (e.g. PMI 2008). The work resulted in new information about the dimensions of technical performance that tended to narrow their distributions, albeit without any guarantee of the mean shifting in a desirable direction. Figure 14.5 plots the weighted risk indices derived from Eq. 14.4 over project time.

14.5 Evolution of project risk dimensions over time.

We can observe the following trends as the project proceeded:

• The width of the distributions tends to decrease as ignorance is replaced with knowledge and uncertainty with certainty. This does not imply that a project will always progress in a desirable direction (i.e. the mean of each distribution could shift in an undesirable way), but at least one gains more precise knowledge of what the project’s outcomes will actually be.

• While uncertainty decreases over the course of the project, risk may grow. Since the risk values depend on the overall location of the probability distribution, not just its variance, we may find a project moving in the direction of more certainty about a bad outcome, which of course implies increasing risk. For example, over the course of the UCAV project, the weight of the aircraft trended upwards, which implied a reduction in aircraft range (weeks 4–10 in Fig. 14.5). When such a trend is observed, a project manager may choose to reallocate resources to address the issue.

• Attributes can be traded off, as guided by their prevailing level of risk and their relative weighting (in Eq. 14.4). For example, at week 17 the project reduced the estimated range of the aircraft by decreasing the size of its fuel tanks, which allowed for carrying additional payload. While this slightly increased the risk in the range attribute, it provided a major reduction of the risk in the payload attribute.

14.6 Discussion

14.6.1 The notion of progress in projects

‘To solve our basic problem [of improving the product development process], any methodology that is to be developed must be useful in evaluating the partially-developed product at any time during its development life.’

Sidney Sobelman (1958).

The problem of evaluating a partially designed product and thus measuring progress (or the value earned) in a product development project is a long-standing one. Approaches such as earned value management (EVM) (e.g. Fleming and Koppelman 2000, PMI 2008) attempt to measure progress as a function of activities completed and time and money spent in relation to a plan. However, EVM does not account for cost and schedule uncertainty or risk or for any aspects of technical performance. Moreover, it is not clear that true progress results from doing a preconceived slate of activities. As distilled by Steward (2001), progress is what you accomplish, not what you do. The framework presented in this chapter enables progress to be measured in terms of the attributes that determine the overall value of a project to its stakeholders.

Moreover, by accounting for the prevailing uncertainties and their implications (risks), the framework allows value to be added to a project simply by gaining confidence in the outcomes. For example, suppose a person is about to embark on a coast-to-coast road trip across the US. This person plans to use his own personal automobile, which is seven years old, but he is not fully confident about its suitability for such an adventure. So, he takes the car to a mechanic and asks him to check it over. After an hour of work, the mechanic says that the car is indeed good to go – and charges $100. Did this guy just waste $100? Did he receive any value from the transaction? Actually, he was quite happy to verify that his car was in good condition, and he left with a confidence that he did not have before. Similarly, some activities in a product development process (e.g. tests and verifications) add value and contribute to progress merely by producing information that increases confidence in the outcome (by ruling out the possibility of some outcomes). While approaches such as Lean may classify such activities as non-value-adding, this designation seems suspect (Browning and Heath 2009).

14.6.2 Relationship to technical performance measure (TPM) tracking

The probability distributions used to represent a project’s capabilities and estimates regarding each dimension of technical performance are related to TPM tracking, which has long been used in the development of complex systems (e.g. Coleman et al. 1996; DSMC 1990; Kulick 1997; NASA 1995; Pisano 1996) to monitor the status of important attributes of an evolving system design. Figure 14.6 shows an example of a basic TPM tracking chart, where the estimated maximum range of an aircraft is updated over the course of a project, albeit in hindsight, as new data become available. Some TPM tracking charts have been known to exhibit definite tendencies and trends that can be anticipated as typical, such as the notorious increase in weight over the course of an aircraft development project. TPM charts can be augmented to display planned trajectories, requirements, and margins. While it is also possible to augment the display at each point with an estimate of the prevailing uncertainty (an a and b to go with the m), this seems to be seldom done in practice. Research and observations of TPM tracking that did include uncertainty estimates (Browning et al. 2002; Browning 1998) led to the finding that, while the direction of change in the estimate of the most likely level of the TPM (m) was relatively difficult to predict from one period to the next, the uncertainty bounds around that most likely estimate tended to shrink more reliably. Thus, estimates of the uncertainties around any TPM estimate are quite important, and they can be used as a basis for determining f, as in Eq. 14.6.

14.6 Example of a TPM tracking chart for aircraft maximum range.

While the set of TPMs worth tracking on a complex development project is larger than the set of high-level dimensions included explicitly in the RVM, the larger set of TPMs plays a supporting role and might even be formally related via a TPM breakdown structure or tree diagram. For example, the TPM ‘aircraft weight’ is not included in our UCAV example. While it could have been included, one could also argue that many customers do not care directly about a UCAV’s weight; rather, they care about the implications of weight for attributes such as maximum payload, maximum range, endurance, and cost. In such a case, while designers might track the weight TPM explicitly, it would be an input to revisions of f, even if it were not an actual member of that set of distributions.
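As a sketch of how TPM tracking data augmented with uncertainty bounds could feed the RVM, each review period could record an (a, m, b) triple for a TPM; the latest triple then defines the current triangle distribution of Eq. 14.6 and, with a goal and impact function, the current risk index. The history below is invented, but it reflects the typical pattern noted above: the most likely value drifts while the bounds tighten.

```python
from dataclasses import dataclass

@dataclass
class TpmEstimate:
    week: int      # review period
    a: float       # optimistic bound
    m: float       # most likely value
    b: float       # pessimistic bound

# Invented tracking history for maximum range (miles).
range_history = [
    TpmEstimate(week=0,  a=480.0, m=560.0, b=620.0),
    TpmEstimate(week=4,  a=500.0, m=550.0, b=610.0),
    TpmEstimate(week=10, a=515.0, m=545.0, b=590.0),
    TpmEstimate(week=17, a=525.0, m=548.0, b=575.0),
]

def latest_triple(history):
    """Return the most recent (a, m, b) estimate, ready to be turned into
    the triangle distribution of Eq. 14.6 for use in Eq. 14.1."""
    latest = max(history, key=lambda e: e.week)
    return latest.a, latest.m, latest.b

print("Current triangle parameters for range:", latest_triple(range_history))
```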

14.6.3 The role of project activities in the RVM

Activities – packages of work that consume resources – have long been the basic unit of analysis for project time and cost models. However, the results of activities – namely, the information they produce that adds value by revising f – have received much less attention. Hence, activities serve as the basic building blocks of an RVM model. For further details on how to associate the results of activities with changes in f, see Browning et al. (2002). For a simulation that treats activities as the agents in an agent-based model of a development project, see Lévárdy and Browning (2009).
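One simple way to operationalize this is to treat each activity as carrying, alongside its cost and duration, the revisions it makes to the (a, m, b) estimates of the attributes it informs; completing the activity then updates f and, via Eq. 14.1, the risk indices. The sketch below is a schematic data structure with invented values, not the chapter’s implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

Triple = Tuple[float, float, float]    # (a, m, b) three-point estimate

@dataclass
class Activity:
    """A package of work that consumes resources and, when finished,
    revises the three-point estimates of one or more attributes."""
    name: str
    cost: float                        # resources consumed ($)
    duration: float                    # time consumed (days)
    revisions: Dict[str, Triple] = field(default_factory=dict)

def complete(activity: Activity, capabilities: Dict[str, Triple]) -> None:
    """Apply an activity's information output: overwrite the affected
    (a, m, b) estimates in the project's capability vector f."""
    capabilities.update(activity.revisions)

# Invented example: a wind-tunnel test tightens the range estimate.
capabilities = {"range": (480.0, 560.0, 620.0)}
test = Activity("wind-tunnel test", cost=250_000, duration=14,
                revisions={"range": (515.0, 550.0, 590.0)})
complete(test, capabilities)
print(capabilities["range"])           # -> (515.0, 550.0, 590.0)
```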

14.6.4 Benefits of a holistic framework

Many projects incorporate the conventional techniques of risk management, including TPM tracking, project scheduling, budgeting, and earned value management. However, especially on large projects, such activities are often undertaken by different individuals or teams, and they may not always communicate frequently or completely. The result is that the various plans and schedules used to direct the project can become disintegrated and unsynchronized. For example, the scheduling group may have one list of project activities in their plan, while the risk management group may have another (one that includes additional activities they have nominated to mitigate risks). Moreover, as mentioned previously, some groups (e.g. the ‘managers’) tend to plan and monitor the use of resources while others (e.g. the ‘systems engineers’) focus on the implications of the work done on the system development. What is needed is a framework that integrates these two perspectives and information sets, as well as a third perspective, that of the customer (or, as their proxy, the marketing department), which is represented in the RVM by the goals and impact functions. The RVM provides a common language for discussions among these groups about the sources and implications of project risks. Without a holistic framework, a project’s decision makers will operate with either a subset of the various available plans, schedules, and technical data or else conflicting assumptions and recommendations from the various groups attending to each subset.

14.7 Conclusion and future trends

Implementation of the RVM on a large scale would be greatly facilitated by a software tool that automated the input, analysis, and representation of the relevant data. As more projects adopt this way of thinking about and modeling risk, the demand for such a tool will increase.

Practical experience has shown that the first RVM models built by an organization do not come easily, because their construction prompts questions that have not been asked before. While much of the needed information can be pulled together from existing risk management plans, project schedules, etc., estimates of impact functions and project capabilities in each critical dimension of performance tend to require more work. RVM implementation provides a ‘forcing function’ to gather information that a project should have anyway but usually does not. Once an initial model is built, it is much more easily revised and improved. Once project participants have gotten used to thinking about risk and its implications for project value in the new way, they ‘get down the learning curve’ and find it possible to provide much better estimates.

14.8 Sources of further information and advice

For further background on technical performance risk management methods, see the primary works upon which this chapter is based (Browning 1998, 2006, 2011; Browning et al. 2002). To see relationships of this type of work to decision analysis, see Ullman (2006). For primers on general methods for project risk and opportunity management, see Smith and Merritt (2002) and Hillson (2003). For focus on project cost and schedule risks and time–cost tradeoffs, see Browning and Eppinger (2002) and Roemer et al. (2000). Finally, for a simulation that incorporates many of the methods discussed in this chapter to represent decision-making in an adaptive process, see Lévárdy and Browning (2009).

14.9 References

Browning, T.R. Modeling and analyzing cost, schedule, and performance in complex system product development. Cambridge, MA: Massachusetts Institute of Technology, 1998. [PhD Thesis (TMP)].

Browning, T.R. On customer value and improvement in product development processes. Systems Engineering. 2003; 6(1):49–61.

Browning, T.R. Technical risk management. In: Hillson, D., ed. The Risk Management Universe: A Guided Tour, 292–320. London: BSI, 2006.

Browning, T.R., A quantitative framework for managing progress, uncertainty, risk, and value in projects. TCU Neeley School of Business, Working Paper. 2011.

Browning, T.R., Eppinger, S.D. Modeling impacts of process architecture on cost and schedule risk in product development. IEEE Transactions on Engineering Management. 2002; 49(4):428–442.

Browning, T.R., Heath, R.D. Reconceptualizing the effects of lean on production costs with evidence from the F-22 program. Journal of Operations Management. 2009; 27(1):23–44.

Browning, T.R., Deyst, J.J., Eppinger, S.D., Whitney, D.E. Adding value in product development by creating information and reducing risk. IEEE Transactions on Engineering Management. 2002; 49(4):443–458.

Coleman, C., Kulick, K., Pisano, N., Technical performance measurement (TPM) retrospective implementation and concept validation on the T45TS Cockpit-21 program. Program Executive Office for Air Anti-Submarine Warfare, Assault, and Special Mission Programs, White Paper. 1996.

DSMC. Systems Engineering Management Guide. Fort Belvoir, VA: Defense Systems Management College, 1990.

Fleming, Q.W., Koppelman, J.M. Earned Value Project Management, 2nd ed. Upper Darby, PA: Project Management Institute, 2000.

Hillson, D.A. Effective Opportunity Management for Projects: Exploiting Positive Risk. New York: Marcel Dekker, 2003.

Kulick, K.A., Technical performance measurement: A systematic approach to planning, integration, and assessment (3 Parts). The Measurable News. 1997.

Lévárdy, V., Browning, T.R. An adaptive process model to support product development project management. IEEE Transactions on Engineering Management. 2009; 56(4):600–620.

Loch, C.H., DeMeyer, A., Pich, M.T. Managing the Unknown. New York: Wiley, 2006.

Meredith, J.R., Mantel, S.J. Project Management, 7th ed. New York: Wiley, 2009.

NASA. NASA Systems Engineering Handbook, 1995. [NASA Headquarters, Code FT, SP-6105].

Pisano, N.D., Technical performance measurement, earned value, and risk management: An integrated diagnostic tool for program management. Program Executive Office for Air ASW, Assault, and Special Mission Programs (PEO(A)), US Air Force, Unpublished white paper. 1996.

PMI. A Guide to the Project Management Body of Knowledge, 4th ed. Newtown Square, PA: Project Management Institute, 2008.

Roemer, T.A., Ahmadi, R., Wang, R.H. Time-cost trade-offs in overlapped product development. Operations Research. 2000; 48(6):858–865.

Schrader, S., Riggs, W.M., Smith, R.P. Choice over uncertainty and ambiguity in technical problem solving. Journal of Engineering and Technology Management. 1993; 10(1):73–99.

Smith, P.G., Merritt, G.M. Proactive Risk Management: Controlling Uncertainty in Product Development. New York: Productivity Press, 2002.

Sobelman, S. A Modern Dynamic Approach to Product Development. Dover, NJ: Office of Technical Services (OTS), 1958.

Steward, D.V., Perceiving DSM as a problem solving method. Proceedings of the 3rd MIT Design Structure Matrix Workshop, 2001. [Cambridge, MA, 29–30 October].

Ullman, D.G. Making Robust Decisions. Victoria, BC: Trafford, 2006.