Modeling Uncertainty of Induced Technological Change
This chapter presents a new method for modeling induced technological learning and uncertainty in energy systems. Three related features are introduced simultaneously: (1) increasing returns to scale for the costs of new technologies; (2) clusters of linked technologies that induce learning in one another depending on their technological “proximity” and on how they are related through the structure of the energy system; and (3) uncertain costs of all technologies and energy sources.
The energy systems-engineering model MESSAGE developed at the International Institute for Applied Systems Analysis (IIASA) was modified to include these three new features. MESSAGE is a linear programming optimization model. The starting point for this new approach was a global (single-region) energy systems version of the MESSAGE model that includes more than 100 different energy extraction, conversion, transport, distribution, and end-use technologies. A new feature is that the future costs of all technologies are uncertain and assumed to be distributed according to the log-normal distribution. These are stylized distribution functions that indirectly reflect the cost distributions of energy technologies in the future based on the analysis of the IIASA energy technology inventory. In addition, the expected value of these cost distributions is assumed to decrease and variance to narrow with the increasing application of new technologies. This means that the process of technological learning is uncertain even as cumulative experience increases. New technologies include, for example, fuel cells, photovoltaics (PVs), and wind energy conversion technologies.
The technologies are related through the structure of the energy system in MESSAGE. For example, cheaper wind energy has direct and indirect effects on other technologies that produce electricity upstream and on electric end-use technologies downstream. In addition, technologies are grouped into clusters that depend on technological “proximity.” For example, the costs of all fuel cells for mobile applications are a function of their combined installed capacity weighted according to their expected unit sizes. This relationship depends on how closely the technologies are related. This varying degree of “collective” technological learning for technologies belonging to the same cluster is also uncertain.
Each scenario of alternative future developments for a deterministic version of the global energy systems model MESSAGE requires approximately 10 minutes of run-time on a PC. Therefore, it is simply infeasible to generate alternative future developments under uncertainty based on a simple Monte Carlo type of analysis where one sequentially draws observations from the more than 200,000 cost distributions (100 technologies, 11 time steps, 10 technological clusters with 22 technologies included) assumed here for modeling technological learning and uncertainty. Instead, the new approach proposed here starts with a large but finite number of alternative energy system “technology dynamics” and generates “in parallel” another large but finite number of deterministic scenarios by sampling from the distributions simultaneously for each of these technology dynamics. In this application, about 130,000 scenarios were generated: 520 alternative technology dynamics, each with about 250 alternative deterministic scenarios resulting from the simultaneous stochastic samplings. Both numbers were varied initially; about 500 technology dynamics proved sufficient to span a wide spectrum of alternative technological learning possibilities, and about 250 deterministic scenarios per dynamics proved sufficient to generate most of the interesting future energy system structures, based on tuning runs that in total produced roughly one million different scenarios. These large numbers of scenarios represent a very small subset of the basically infinite number of all possible scenarios. They were not chosen randomly, but are a result of applying adaptive global search techniques to the formulated nonconvex, nonsmooth stochastic problem.
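The two-level design described above—many alternative technology dynamics, each paired with many simultaneous stochastic cost draws—can be sketched in a few lines. This is an illustrative Python skeleton, not the MESSAGE implementation; `sample_costs` and `evaluate` are hypothetical placeholders for the real cost distributions and the full optimization run.

```python
import random

N_DYNAMICS = 520   # alternative technology dynamics (from the text)
N_DRAWS = 250      # deterministic scenarios sampled per dynamics

def sample_costs(rng, n_technologies=10):
    """Placeholder: one simultaneous draw from all cost distributions
    (the real model samples far more, across technologies and periods)."""
    return [rng.lognormvariate(0.0, 0.3) for _ in range(n_technologies)]

def evaluate(dynamics_id, costs):
    """Placeholder for one deterministic model run (about 10 minutes of
    PC run-time in the real MESSAGE model)."""
    return sum(costs)  # stand-in for total discounted system cost

rng = random.Random(42)
scenarios = [(d, evaluate(d, sample_costs(rng)))
             for d in range(N_DYNAMICS) for _ in range(N_DRAWS)]
print(len(scenarios))  # 130000 scenarios in total
```

In the actual analysis the dynamics themselves are not enumerated exhaustively but are proposed by the adaptive global search procedure mentioned in the text.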
From the 520 alternative technology dynamics, about 53 resulted in scenarios with very similar overall energy systems costs. They have fundamentally different technological dynamics and produce a wide range of different emergent energy systems, but can be considered to be approximately equivalent with respect to “optimality” criteria (in this case, simultaneous cost and risk minimization). Thus, one result of the analysis is that different energy system structures emerge with similar overall costs; in other words, there is a large diversity across alternative energy technology strategies. The strategies are path dependent and it is not possible to choose a priori an “optimal” direction of energy systems development.
Another result of the analysis is that the endogenous technology learning with uncertainty and spillover effects will have the greatest impact on the emerging energy system structures during the first few decades of the twenty-first century. Over these “intermediate” periods of time, these two processes create endogenous lock-in effects and increasing returns to adoption. In the very long run, however, none of these effects is of great importance. The reason is that over such long periods many doublings of capacity of all technologies with inherent learning occur, so that little relative cost advantage results from large investments in only a few technologies and clusters. Therefore, the main finding is that, under uncertainty, the near-term investment decisions in new technologies are more important in determining the direction of long-term development of the energy system than are decisions that are made later, toward the end of the time horizon. Thus, the most dynamic phase in the development of future energy systems will occur during the next few decades. It is during this period that there is a high degree of freedom of choice across future technologies, and many of these choices lead to high spillover learning effects for related technologies.
One policy implication that can be made based on the emerging dynamics and different directions of energy systems development in this analysis is that future research, development, and demonstration (RD&D) efforts and investments in new technologies should be distributed across “related” technologies rather than directed at only one technology from the cluster, even if that technology appears to be a “winner.” Another implication is that it is better not to spread RD&D efforts and technology investments across a large portfolio of future technologies. Rather, it is better to focus on (related) technologies that might form technology clusters. Finally, the results imply that fundamentally different future energy system structures might be reachable with similar overall costs. Thus, future energy systems with low carbon dioxide (CO2) emissions need not be associated with costs higher than those of systems with high emissions.
Fundamental changes in global energy systems tend to occur slowly. The replacement of traditional energy sources—such as the substitution of coal for fuelwood with the advent of steam, steel, and railways—took most of the nineteenth century. The subsequent replacement of coal with oil and gas and associated technologies lasted the better part of the twentieth century. In contrast to these very slow global processes, change in some parts of the energy system can be more dynamic—especially in the evolution of end-use technologies. However, the fact that fundamental changes occur over many decades rather than a few years means that technological changes with inherently shorter time constants need to be consistent with the overall slower processes of change in the energy system. Thus, the many generations of individual technologies that are replaced through the normal rate of capital turnover are part of the overall slow change from older to newer sources of energy and of other related structural changes in energy systems. This means that many generations of new technologies are likely to come and go before the possible transition to the post-fossil era or to new fossil systems is achieved. The directions of these future transitions are uncertain. Future energy systems could rely on renewable energy sources, on clean coal, on less carbon-intensive fossils such as natural gas, or on nuclear power. There are thus infinitely many alternative scenarios leading to all possible future energy systems.
As mentioned, the replacements of primary energy sources have in each case required the better part of a century, and similar changes are conceivable during this century. Climate change is characterized by long time constants, just as energy systems are. It might take a few decades to resolve the uncertainty surrounding the influence of human intervention in the climate system resulting from emissions of greenhouse gases and aerosols. The main sources of emissions for most of these gases are associated with energy activities. This and other environmental concerns are another reason why the direction of technological changes in the energy system is important. Changes in the energy system that lead to radically lower future emissions would need to be implemented before this uncertainty about possible climate change is resolved because of the long time constants of change in both the energy and climate systems. This is especially true for the introduction of new energy technologies if sufficient cumulative experience with these technologies is to be achieved in time to facilitate rapid technological learning and their widespread diffusion.
An important motivation for developing this new approach for endogenizing technological learning and uncertainty in energy system scenarios was the desire to capture the different directions of possible future technological change resulting from the many technology replacements and incremental improvements that may occur during this century. Our basic assumption is that endogenous learning is a function of cumulative experience, measured by cumulative installed capacity, and that this process is uncertain. Clearly, this is a strong oversimplification. Although there are many other indicators of technological learning, we chose this one because it is relatively easy to measure. Nevertheless, we feel that the oversimplification is justifiable in a tool for analyzing the cumulative effect that incremental investments in new technologies have on the direction of alternative energy systems development.
Energy services are expected to increase dramatically during the twenty-first century, especially in today's developing countries. This means that the installed capacities of energy extraction, conversion, transport, distribution, and end-use technologies will increase accordingly, perhaps at a somewhat lower rate owing to the overall improvements of efficiencies throughout the energy system as older technologies are replaced by newer ones. Here again, the alternative directions of energy systems development are important. To a large extent, they will determine the eventual energy requirements needed to satisfy the increasing demand for energy services. The actual energy requirements for a given provision of energy services can range from very high to extremely low compared with current standards. The future environmental impacts of energy systems will vary accordingly. For example, CO2 emissions range from 10 times the current levels to virtually no net emissions by 2100 for scenarios in the literature. Figure 10.1 shows the range of future CO2 emissions derived by the new modeling approach for the set of 520 technology dynamics (some 130,000 scenarios) versus the set of 53 “optimal” dynamics (more than 13,000 scenarios). In comparison, Figure 10.2 shows the range of emissions for some 400 scenarios from the published literature collected for the new Intergovernmental Panel on Climate Change (IPCC) Special Report on Emissions Scenarios (Morita and Lee 1998; Nakicenovic et al. 1998b). The emissions range from 7 to 41 gigatons of carbon (GtC) by 2100, compared with about 6 GtC in 1990. As these figures illustrate, the set of scenarios developed for capturing endogenous technological learning and uncertainty covers most of this range. The scenarios from the literature span this range owing to the variation of the driving forces of future emissions, such as energy demand.
In contrast, the set of scenarios with endogenous learning spans the range owing to different technological dynamics alone. It is interesting to note that the “optimal” scenarios match the distribution of the scenarios from the literature quite closely, but with a somewhat narrower range (they leave the extreme tails of the distribution uncovered). In contrast, the frequency distribution of the full set of 520 technology dynamics is different from the other two, with many more scenarios in the mid-range of the distribution. This means that the optimal or most “cost-effective” development paths correspond quite closely to the scenario distribution from the literature. The “median” or “central” futures are underrepresented both in the literature and among the “optimal” scenarios, indicating a kind of “crowding-out” effect surrounding balanced and median scenarios. In any case, technological learning as specified in our approach leads to future energy systems marked by either high or low emissions even for a single useful energy demand trajectory, demonstrating a kind of implicit bifurcation across the range of possible emissions.
Figure 10.1. Range of future CO2 emissions for the 520 technology dynamics versus the 53 “optimal” dynamics. All scenarios share a given useful energy trajectory; emissions ranges in gigatons of carbon (GtC).
Figure 10.2. Range of CO2 emissions for some 400 scenarios from the published literature, in gigatons of carbon (GtC) (Morita and Lee 1998; Nakicenovic et al. 1998b). Some of the IPCC IS92 and SRES scenarios are indicated within the appropriate emissions intervals shown in the histogram.
To simplify matters, we have assumed a single trajectory of global useful (end-use) energy requirements as an input assumption for all 130,000 scenarios considered in this analysis. What is varied endogenously are the technologies that make up the energy system and their costs. Figure 10.3 shows the single useful energy trajectory that is common to all scenarios. It represents relatively high useful energy demand compared with the scenarios in the literature. However, it is associated with considerable variations of final and primary energy demand trajectories across the scenarios. The figure shows that a very wide portfolio of future energy systems characteristics is consistent with a single end-use demand trajectory. The scenarios map the higher part of the range of future primary energy requirements found in the scenario literature, but leave uncovered the lower part of the range, which is associated with very low demand scenarios in the literature. As mentioned, the scenarios cover most of the emissions range.
Time horizons of a century or longer are frequently adopted in energy studies. Modeling energy systems developments over such long time horizons imposes a number of methodological challenges. Over longer horizons, technological change becomes fluid and fundamental changes in the energy system are possible. It has been especially difficult to devise an appropriate representation of endogenous technological change and its associated uncertainties. In general, induced technological change and uncertainties are interconnected. It is widely recognized that together they play a decisive role in shaping future energy systems. Many approaches to modeling these processes have included elements of increasing returns to scale and of uncertainty that decreases with scale. This basically means that technologies improve with cumulative experience, as expressed by the scale of their application. Costs and uncertainty are assumed to decline with increasing scale of application. Such processes are frequently represented by learning or experience curves.
In contrast, the “standard” modeling approaches with diminishing returns do not allow for such consequences of technological learning processes. Despite this deficiency, diminishing returns dominate standard economic theory, perhaps because of the very elegant and simple concept of equilibrium that can be achieved under those conditions. Diminishing returns to scale generate negative feedbacks, which tend to stabilize the system by offsetting major changes and inevitably produce a unique equilibrium independent of the initial state of the economy. In mathematical terms, the models are convex and generally lead to unique solutions.
Increasing returns, on the other hand, lead to disequilibrium tendencies by providing positive feedbacks. After (generally large) initial investments in RD&D and early market introduction, the incremental cost per unit capacity (or, as assumed here, per unit output) of further applications falls lower and lower. Thus, the more widely adopted a technology becomes, the cheaper it becomes (with lower uncertainties, leading to lower risks to adoption). There are many incarnations of this basic principle. One of the better known is the concept of “lock-in”: as a technology becomes more widely adopted, it tends increasingly to eliminate other possibilities. Another concept frequently used in empirical analysis is the so-called learning or experience curve. At the core of all of these processes is technological learning: the more experience that is gained with a particular technology, the greater the improvements in performance, costs, and other important technology characteristics.
Despite the fundamental importance of technological learning, modeling of these processes has received inadequate attention in the literature. Several reasons may explain the apparent lack of systematic approaches. Among them, the complexity of appropriate modeling approaches is perhaps the most critical. Increasing returns to scale lead to nonconvexities. Thus, the standard optimization techniques cannot be applied. In conjunction with the treatment of uncertainties, modeling of technological learning becomes methodologically and computationally very demanding. It requires the development of so-called global nonsmooth stochastic optimization techniques, which are only now under development (Ermoliev and Norkin 1995, 1998; Horst and Pardalos 1995).
Figure 10.4. Cost improvements per unit installed capacity, in US(1990)$ per kilowatt electric (kWe), versus cumulative installed capacity, in megawatts electric (MWe), on a logarithmic scale. Sources: MacGregor et al. (1991); Christiansson (1995); Nakicenovic et al. (1998a).
Figure 10.4 gives learning or experience curves for three electricity-generating technologies. Costs per unit installed capacity are shown versus cumulative installed capacity. The lowest curve shows cost improvements of gas turbines. Today, gas turbines are the most cost-effective technology for electricity generation. This was certainly not the case three decades ago, when the costs were high and it was by no means certain that the great technology improvements suggested by the curve would be achieved. The technology can be characterized as “precommercial” until the early 1960s: the costs were very high and the improvement rates were particularly rapid, about a 20 percent reduction in unit costs per doubling of cumulative capacity. After about 1963 the improvement rate declined, and it has since averaged less than 10 percent per doubling. This development phase was no doubt also associated with a significant reduction in uncertainties. In the early development phases, the investments in this technology were indeed risky, as many accounts indicate.
Figure 10.4 also shows two relatively new electricity-generating technologies. Wind power is becoming a “commercial” technology in many parts of the world, especially where wind is abundant. A typical example is wind electricity generation in Denmark. The cost reductions for this technology are impressive at about 20 percent per doubling of cumulative capacity. However, as a source of electricity, wind is on average significantly costlier than gas turbines and the risk is higher. PVs show equally impressive performance improvements of about 20 percent unit cost reductions per doubling, but from a very high level of costs. They are about an order of magnitude more expensive than gas turbines per unit capacity. The prospects for this technology are thus very promising, but they are also associated with great risks for potential investors.
The learning curves have been used in a stylized form in a number of energy modeling approaches to capture elements of endogenous technological change. At IIASA, Messner (1995) and Messner et al. (1996) incorporated learning curves for six electricity-generation technologies in the simplified version of the (deterministic) energy systems engineering model MESSAGE. As this is a linear programming framework, integer programming was needed to deal with emerging nonconvexities in the problem formulation. It was assumed that “new” energy technologies have a certain cost reduction per doubling of cumulative installed capacity. The approach was very innovative and led to a number of important insights for further modeling of endogenous technological change (Grübler and Messner 1996; Nakicenovic 1996, 1997). However, the principal drawbacks were the significantly greater complexity and the very high computational demands. Another important deficiency of the approach was that the learning rates were deterministic. MESSAGE is a model with perfect foresight, so that early investments in new, costly technologies were always rewarded with increasing returns. While it is clear that such reductions are possible on average, they are associated with considerable uncertainty.
The next step at IIASA was to introduce uncertainties into the distributions of future costs. The basis for this approach was the IIASA technology inventory, which now contains information on the costs and technical and environmental characteristics of some 1,600 energy technologies (Messner and Strubegger 1991). Figure 10.5 shows future cost distributions for three energy technologies from the inventory (Nakicenovic et al. 1998a). The figure illustrates that the distributions are not symmetric and that they have very pronounced tails with both very “pessimistic” and very “optimistic” views on future costs per unit capacity. Such cost distributions were introduced explicitly into a simple, stochastic version of MESSAGE and have led to spontaneous “hedging” against this uncertainty as an emerging property of the model (Golodnikov et al. 1995; Messner et al. 1996).
Finally, both approaches to modeling endogenous learning and uncertainty were combined for a very highly stylized stochastic version of MESSAGE with increasing returns for “just” three “technologies” (see Chapter 11 in this volume). One technology is characterized by no learning whatsoever; another displays moderate learning of about 10 percent per doubling; and the third shows a much more rapid 20 percent per doubling.1 The last two learning rates are associated with uncertainties based on the future cost distribution functions discussed above. In this much more complicated approach, the diffusion of new technologies occurs spontaneously, displaying the S-shaped patterns so characteristic of technological diffusion. This occurs without any explicit technology inducement mechanisms other than uncertain learning and hedging. The disadvantage of the approach was that it is very computationally demanding and basically infeasible for application with a large number of technologies, as is required for development of long-term energy scenarios.
Figure 10.5. Future cost distributions for three energy technologies from the IIASA technology inventory. Sources: Messner and Strubegger (1991); Nakicenovic et al. (1998a).
Here, we retain this basic approach and combine technological learning with uncertain outcomes while significantly extending the application to about 100 technologies. This is possible with the use of new global nonsmooth stochastic optimization techniques in conjunction with “parallel” problem structure and computing techniques. Cost reductions are assumed to be uncertain and thus are not specified by a given deterministic learning rate value. The learning rates are uncertain and are captured by assumed distribution functions. We assume that the generic cost reduction function has the following form:
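A standard learning-curve form consistent with the definitions below would be:

```latex
CI_t = (1 - \beta)^{ND_t}
```

so that, for example, a realized β of 0.2 reduces unit costs by 20 percent with each doubling of cumulative output.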
where CI_t is the cost reduction index, or the ratio between the technology unit costs (or, more precisely, the annual levelized costs) at time t and the initial cost in the base year; ND_t is the number of doublings of cumulative output achieved by time t compared with the initial output; and β is the progress ratio indicating the cost reduction rate per doubling of output. β is a random variable with a known distribution function. We have assumed that β is normally distributed with known mean and variance. It is important to note that the suggested algorithmic approach is not limited to the type of distribution assumed here and that it does not require any prior knowledge about the type of distribution function.2
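To make the stochastic learning concrete, the uncertain cost index can be sampled directly, assuming the standard learning-curve form CI_t = (1 − β)^{ND_t} with β normally distributed. The 0.08 standard deviation below is an illustrative choice, not the chapter's calibration:

```python
import random
import statistics

def cost_index(beta, n_doublings):
    """Cost reduction index: unit cost after n_doublings relative to the
    base year, assuming CI = (1 - beta) ** n_doublings."""
    return (1.0 - beta) ** n_doublings

rng = random.Random(7)
# beta ~ Normal(0.20, 0.08): 20% expected cost reduction per doubling.
samples = [cost_index(rng.normalvariate(0.20, 0.08), 4)
           for _ in range(50_000)]

print(round(cost_index(0.20, 4), 4))       # deterministic path: 0.8**4 = 0.4096
print(round(statistics.mean(samples), 2))  # a bit above 0.41: convexity in beta
```

Note that the expected cost index under uncertain β lies above the cost index at the expected β (Jensen's inequality), which is one reason a deterministic learning rate understates the risk carried by early investors.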
Figure 10.6. The uncertain learning index as a function of each doubling of cumulative output. The expected value of β, the mean learning index (rate), corresponds to a 20 percent cost reduction per doubling of cumulative output. The numbers between the isolines of different learning indices indicate probability ranges. There is a small probability of no learning at all between any given doubling.
Figure 10.6 illustrates the uncertain learning index as a function of each doubling of cumulative output. In the example shown, the expected value for the cost reduction rate is 20 percent per doubling. The numbers between the isolines indicate the probability ranges of the occurrence of different learning rates. For example, there is a 50 percent chance that the cost reduction rate will fall between 14 percent and 25 percent per doubling. Note that there is a small chance (∼ 5 percent) that the cost reductions will range from very small to an actual cost increase, and that there is a very small probability (0.1 percent) of a significant cost increase per doubling. This indicates a real possibility of negative learning or “induced forgetting” rather than learning. Such representation of uncertain learning illustrates the true risk of investing in new technologies. There is a high chance that technology will improve with accumulated experience, but there is also a small chance that it will be a failure and an even smaller chance that it will be a genuine disaster.
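Probability statements of this kind follow from the normal CDF alone. The figure's exact calibration is not given, so the standard deviation below (0.08) is an assumption tuned to roughly reproduce the quoted 50 percent band; the implied tail probabilities are therefore only indicative:

```python
import math

def normal_cdf(x, mu, sigma):
    """Normal CDF via the error function (no SciPy needed)."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# mu = 0.20 matches the 20% expected learning rate; sigma = 0.08 is an
# illustrative assumption, not the chapter's calibration.
mu, sigma = 0.20, 0.08

p_band = normal_cdf(0.25, mu, sigma) - normal_cdf(0.14, mu, sigma)
p_weak = normal_cdf(0.05, mu, sigma)     # very small learning or worse
p_negative = normal_cdf(0.0, mu, sigma)  # outright "forgetting"

print(f"P(0.14 < beta < 0.25) = {p_band:.2f}")  # about one-half
print(f"P(beta < 0.05) = {p_weak:.3f}")
print(f"P(beta < 0) = {p_negative:.3f}")
```

The small but nonzero mass below zero is precisely the possibility of negative learning described in the text.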
Here, we extend the application of uncertain learning to many new technologies, ranging from wind and PVs to fuel cells and nuclear energy. In keeping with IIASA's earlier approaches to capturing learning, we assume that traditional, “mature” technologies do not benefit from learning (another interpretation is that cost reductions as a result of learning are insignificant compared with other uncertainties that affect costs). Altogether there are 10 clusters of new technologies that benefit from induced learning.
As mentioned, we also assume that all technologies—both traditional and new ones—have stochastic costs with known distributions in any given period (similar to the distributions of electricity-generation technologies used by Golodnikov et al. 1995). The difference in the treatment of new and traditional technologies is that we assume that the cost distributions of traditional technologies are static over time and the costs in different time periods are independent random values. Owing to possible cost reductions resulting from technological learning (as described above), the costs for new technologies are dynamic. They are specified by conditional probabilities that result from the realization of a particular value for the uncertain learning rate. Again for simplicity we assume that all initial cost distributions are log-normal with different means and variances based on empirical analysis of technological characteristics using the IIASA technology inventory (see Strubegger and Reitgruber 1995).
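The distinction between static and dynamic cost distributions described above might be sketched as follows; all parameter values are illustrative, not the chapter's data:

```python
import random
import statistics

rng = random.Random(1)

def draw_mature_cost(m, s):
    """Mature technology: static log-normal cost distribution, drawn
    independently in every period (no learning)."""
    return rng.lognormvariate(m, s)

def draw_new_cost(m0, s0, beta, n_doublings):
    """New technology: a draw from the initial log-normal distribution,
    scaled down by the realized learning (1 - beta) ** n_doublings.
    A multiplicative shift leaves the ratio of standard deviation to
    mean unchanged, matching the constant-K assumption."""
    return rng.lognormvariate(m0, s0) * (1.0 - beta) ** n_doublings

mature = [draw_mature_cost(0.0, 0.25) for _ in range(20_000)]
new_after_3 = [draw_new_cost(0.0, 0.25, 0.20, 3) for _ in range(20_000)]

print(round(statistics.mean(mature), 2))       # stays near exp(0.25**2/2), ~1.03
print(round(statistics.mean(new_after_3), 2))  # ~1.03 * 0.8**3, i.e. ~0.53
```

In the model the realized β is itself one draw from its own distribution, so the conditional cost distribution for a new technology is nested inside the learning-rate uncertainty.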
We assume that the cost distribution function for each new technology at any given moment in time t, under the condition that N doublings of cumulative output have been achieved and the realized value for a random learning rate β is equal to b, is defined by the following expression:
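Consistent with the definitions here, a conditional distribution in which the realized learning scales the initial cost distribution multiplicatively would take the form:

```latex
F_t\left(x \,\middle|\, ND_t = N,\ \beta = b\right)
  = F_0\!\left(\frac{x}{(1-b)^{N}};\ m_0,\ s_0\right)
```

that is, the probability that costs fall below x after N doublings equals the initial probability of costs below x/(1 − b)^N. A multiplicative shift of this kind leaves the ratio of standard deviation to mean (K, defined below) unchanged over time, in line with the constant-K assumption.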
where F_0(·, ·) is the initial log-normal distribution function with parameters m_0 and s_0, and K is the ratio between the standard deviation and the expected mean value and defines the compactness of the distribution. K is assumed to be a function of typical unit size (K increases with the unit size). We decided to keep K constant over time because of the lack of empirical data; therefore, it can be obtained simply by solving the following equation:
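For a log-normal distribution the ratio of standard deviation to mean depends only on s_0, so a natural form of this equation is:

```latex
K = \frac{\sqrt{\operatorname{Var}(X)}}{\mathbb{E}(X)} = \sqrt{e^{s_0^{2}} - 1}
```

equivalently, s_0 = \sqrt{\ln(1 + K^2)}. This is a reconstruction from the stated definitions; the parameter m_0 cancels out of the ratio, which is why K can be fixed by the empirical spread alone.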
where m_0 and s_0 are derived empirically from statistical analysis.3
In addition to the uncertain learning rates, another new feature of our approach is that the future costs of all technologies are uncertain and assumed to be distributed according to the log-normal distribution. These are stylized distribution functions that indirectly reflect the cost distributions of energy technologies in the future based on analysis of the IIASA energy technology inventory. In addition, the mean value of these cost distributions is assumed to decrease and variance is assumed to narrow with increasing application of new technologies according to the generic cost reduction function (specified above) with the normally distributed progress ratio. This means that the process of technological learning is uncertain even as cumulative experience increases. The uncertainty of new technologies is characterized by the joint distribution of cost uncertainty and learning uncertainty. In summary, we assume uncertain future costs for all technologies and uncertain learning for new technologies.
Another uncertainty considered here is associated with the magnitudes of energy reserves, resources, and renewable potentials, and with their extraction and production costs. Based on estimates by Rogner (1997), Nakicenovic et al. (1996), and others, we assume a very large global fossil resource base corresponding to some 5,000 Gtoe and correspondingly large renewable potentials. We also assume that the energy extraction and production costs are uncertain, varying by a factor of more than five. Following the approach proposed by Rogner (1997), we formulated aggregate, global, upward-sloping supply curves with uncertain costs. Thus, the supply of fossil and non-fossil energy sources is characterized by expected increasing marginal costs and is one of the few areas where we have not assumed increasing returns, although we do assume uncertain costs.
Technologies are related to one another. For example, jet engines and gas turbines for electricity generation are related technologies—in fact, the latter were derived from the former. These kinds of relationships among technologies are common. They imply that improvement in some technologies may be transferable to other, related technologies. For example, improvements in automotive diesel engines might lead to better diesel-electric generators, because the technologies are closely related to each other. Improvements in one area that lead to benefits in other areas are often referred to as spillover effects. In the case of related technologies, this is a real possibility. For example, we consider the different applications of fuel cells, such as for stationary electricity generation and for vehicle propulsion. We also consider fuel cells that have the same end-use application but different fuels, for example, hydrogen and methanol mobile fuel cells. These fuel cells are different but they are related in the technological sense, so that improvements in one technology may lead to improvements in the other. In this new approach to modeling technological learning and uncertainty, we explicitly consider the possibility of such spillover effects among energy technologies.
However, operational implementation of spillovers is not trivial. One important barrier is the lack of a technology “taxonomy.” Presumably, the possibility of positive spillovers from technological learning is greater for technologies that are “more” similar than for those that are “less” similar. Thus, some kind of measure or metric of technological “proximity” or “distance” is required, even though a genuine taxonomy does not exist. A number of proposals have been made that could conceivably lead to the development of a taxonomy in the future (Foray and Grübler 1990). Instead of venturing into more complex representations of technology relationships, we simply assume two explicit types of spillover effects. One is indirect, operating through the connections among energy technologies within the energy system. For example, cheaper gas turbines mean cheaper electricity, so that, ceteris paribus, this could favor electricity end-use technologies for providing a particular energy service compared with other alternatives. The other effect is more direct. Some technologies are related through their “proximity” from the technological point of view, as was suggested by the example of hydrogen and methanol mobile fuel cells. We explicitly define “clusters” of technologies, where learning in one technology may spill over into another technology. The spillover effects are assumed to be strong within clusters and weaker across clusters.
Technology clusters were explicitly prespecified. Table 10.1 shows the groupings of technologies into 10 clusters. Each cluster consists of technologies that are related either because they are technologically “close” (i.e., are similar) or because they enable and support one another through the connections among them within the energy system.
The nature of the spillover effects is assumed to be different within and across clusters. Technologies from the same cluster share total cumulative output and are assumed to have the same learning rate, but their actual costs are drawn independently from their respective distributions.
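A minimal sketch of this mechanism, under stated assumptions (log-normal costs, a 20 percent learning rate, and a coefficient of variation that shrinks in proportion to the expected cost reduction), might look like the following; the function names and all numbers are illustrative, not the model's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_cost(c0, cumulative, initial_output, learning_rate=0.2):
    """Expected cost after learning: each doubling of the cluster's shared
    cumulative output cuts the expected cost by the learning rate."""
    doublings = np.log2(max(cumulative / initial_output, 1.0))
    return c0 * (1.0 - learning_rate) ** doublings

def draw_costs(c0, cumulative, initial_output, n_technologies=3,
               initial_cv=0.5, learning_rate=0.2):
    """Independent log-normal cost draws for each technology in a cluster.
    The mean follows the learning curve; the coefficient of variation shrinks
    in proportion to the expected cost reduction (an assumption)."""
    mean = expected_cost(c0, cumulative, initial_output, learning_rate)
    cv = initial_cv * mean / c0
    sigma = np.sqrt(np.log(1.0 + cv**2))
    mu = np.log(mean) - 0.5 * sigma**2  # so E[lognormal(mu, sigma)] = mean
    return rng.lognormal(mu, sigma, size=n_technologies)
```

Technologies in the same cluster share the `cumulative` argument (and hence the same expected cost), but each call to `draw_costs` samples their realized costs independently.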
Figure 10.7 illustrates the spillover effects within one cluster of technologies. The example shown gives two density functions of technology costs in 2030 for decentralized fuel cells. The density function with the lower overall costs is for the case of spillover effects within the technology cluster; that with the higher overall costs is for the case without spillover effects. The costs are given in US(1990) cents per kilowatt hour (kWh) of electricity generation without the fuel costs. Both the expected costs and their variance are substantially higher without the spillover effects. Thus, the costs, as well as the uncertainty, are expected to be lower with spillover effects. Therefore, the probability of lower costs is overall much higher with spillovers. However, the high tail of the density distribution is proportionally more pronounced in the case of spillover effects. This is an interesting feature of these density functions: the expected costs are generally lower with spillovers, but at the same time the possibility of realizations of very high costs compared with the mean is higher. Thus, spillovers also amplify somewhat the small chance of induced “forgetting.”
Spillover rates between clusters are weighted by technological “proximity,” that is, by how closely the technologies are related to one another. One example is additive learning from all kinds of fuel cells. For instance, stationary fuel cells can contribute significantly to learning for mobile ones because of their large capacity (size); conversely, experimenting with small-scale mobile units could be an important factor in the early development of stationary units.
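Using the proximity weights given in the notes to Table 10.1, the “effective” cumulative output driving each fuel cell cluster's learning could be computed roughly as follows; the function name and the sample output figures are invented for illustration.

```python
# Spillover weights among the three fuel cell clusters, read off the notes
# to Table 10.1: WEIGHT[receiver][contributor] scales how much of the
# contributor's cumulative output counts toward the receiver's learning.
WEIGHT = {
    "decentralized":  {"transportation": 0.5, "centralized": 0.1},
    "centralized":    {"transportation": 0.5, "decentralized": 0.5},
    "transportation": {"decentralized": 0.1, "centralized": 0.01},
}

def effective_output(cluster, output):
    """Own cumulative output plus proximity-weighted output of related clusters."""
    return output[cluster] + sum(w * output[other]
                                 for other, w in WEIGHT[cluster].items())

# Hypothetical cumulative outputs (arbitrary units):
output = {"decentralized": 10.0, "centralized": 40.0, "transportation": 20.0}
print(effective_output("decentralized", output))  # 10 + 0.5*20 + 0.1*40 = 24.0
```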
Figure 10.8 is a schematic diagram of the 10 technology clusters indicating how they are related to one another with respect to the assumed learning spillover effects within the energy system structure. Two of the technology clusters, the nuclear high-temperature reactors (HTRs) and hydrogen infrastructure clusters (also shown in Table 10.1), are characterized by generally large “unit size” compared with other technologies. Consequently, very large cumulative output is required for achieving a doubling compared with the other clusters. This leads to correspondingly high risks associated with induced learning. The expected learning rates are indicated for each cluster. The modular (smaller “unit size”) technologies generally have higher mean learning compared with the other technologies. The highest mean learning rate is indicated for the solar photovoltaic cluster; the lowest rates are for the solar thermal to hydrogen, the nuclear HTRs, and the hydrogen infrastructure clusters.
a Part of model assumptions; in many cases, there are no reliable statistics for global cumulative output.
b Contribute to other fuel cell clusters with weight 0.5 to decentralized and centralized units; accelerated by input from stationary units with weight 0.1 for decentralized and 0.01 for centralized installation.
c Contribute to other fuel cell clusters with weight 0.5 to centralized units and 0.1 to transportation; accelerated by input from centralized units with weight 0.1 and from transportation with weight 0.5.
d Contribute to other fuel cell clusters with weight 0.1 to decentralized units and 0.01 to transportation; accelerated by input from decentralized and transportation units with weight 0.5.
In the presence of uncertainties any realistic policy bears risks, particularly the risk of under- or overestimating future technology costs. Explicit introduction of these risks into the model structure creates a driving force for the development of new technologies needed to make the energy system flexible enough to withstand possible instabilities and surprises. Thus uncertainty concerning future technology costs and characteristics in itself induces technological change. When this uncertainty is broadened to include technological learning and spillovers, the complex interplay between all three mechanisms leads to the same patterns of technological change that are encountered in deterministic modeling approaches. But whereas in deterministic modeling these patterns emerge under conditions of exogenous constraints, here this behavior is the result of induced technological change that occurs “spontaneously” owing to the stochastic nature of technological learning within the energy system.
The conventional approaches of control theory are applicable only in cases with a small number of variables (e.g., for simple energy systems), since such approaches deal with unrealistically detailed long-term strategies attempting to provide the best choice for every combination of uncertainties and designs that may occur before the given moment in time. This “chess game” solution concept is essential for the application of standard dynamic programming equations.
Technologies in each cluster are listed along with their assumed expected mean learning rates.
The same type of solution concept is used in multistage stochastic optimization models. Although in such cases large-scale optimization techniques are used instead of recurrent equations, the actual size of solvable problems is again small. The actual size of the problem is essentially connected with the solution concept, which requires the expansion of the original finite-dimensional model to a model with an infinite number of variables. Both approaches seem to be meaningful only for “online” or short-term energy planning problems. They are unrealistic for the analysis of long-term energy policies.
As it is impossible to explore all the details of long-term energy developments, our approach is based on the so-called two-stage dynamic stochastic optimization model with a rolling horizon. The solution concept in this case depicts the ex ante path of developments, which is flexible enough to be adjusted to possible ex post revealed uncertainties (“surprises”). The concept of a rolling horizon requires adjustments of ex ante strategies each time essential new information is revealed. A particular type of this model was proposed by Ermoliev and Norkin (1995) for the analysis of global change issues and is ideally suited for energy system engineering analyses as represented in some applications of the MESSAGE model. The stochastic version of MESSAGE (see Golodnikov et al. 1995) is also a two-stage dynamic stochastic optimization model. This model explicitly incorporates risks of underestimating costs, which leads to a convex, generally nonsmooth, stochastic optimization problem.
The overall approach is based on the idea of representing energy systems development as a dynamic network where flows from one energy form to another and transformations of one energy form into another correspond to energy technologies such as electricity generation from coal or gas power plants. Figure 10.9 illustrates the assumed reference energy system as one composed of about 100 different technologies. Four types of energy flows are shown: (1) energy extraction from energy resources; (2) conversion of primary energy into secondary energy forms; (3) transport and distribution of energy to the point of end use, resulting in the delivery of final energy; and (4) the conversion at the point of end use into useful energy forms that fulfill the specified demands (as discussed above). All possible connections between the individual energy technologies are also specified in Figure 10.9. Various demands for useful energy are shown for different sectors of the economy. Each technology in the system is characterized by levelized costs, unit size, efficiency, lifetime, emissions, etc. In addition to various balance constraints, there are limitations imposed by the resource availability as a function of (uncertain) costs. The overall objective is to fulfill various demands using technologies and resources with minimal total discounted system costs.4
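The network representation can be illustrated with a toy data structure. The technology names, efficiencies, and the four-step chain below are placeholders standing in for the roughly 100 technologies of the actual reference energy system.

```python
# A miniature "reference energy system" as a network: nodes are energy forms,
# edges are technologies converting one form into another. All entries are
# illustrative placeholders.
REFERENCE_SYSTEM = {
    # technology: (input form, output form, conversion efficiency)
    "gas_extraction":  ("gas_resource", "gas_primary", 1.00),
    "gas_power_plant": ("gas_primary", "electricity_secondary", 0.50),
    "transmission":    ("electricity_secondary", "electricity_final", 0.90),
    "electric_motor":  ("electricity_final", "motive_power_useful", 0.85),
}

def chain_efficiency(technologies):
    """Overall efficiency of a chain from resource to useful energy,
    checking that each technology's output feeds the next one's input."""
    eff = 1.0
    for a, b in zip(technologies, technologies[1:]):
        assert REFERENCE_SYSTEM[a][1] == REFERENCE_SYSTEM[b][0], "broken chain"
        eff *= REFERENCE_SYSTEM[a][2]
    return eff * REFERENCE_SYSTEM[technologies[-1]][2]
```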
When future costs, demands, and other parameter values are known, it is possible to find a unique “optimal” solution for the evolution of the reference system shown in Figure 10.9. It is obtained by solving the following deterministic, linear optimization problem:
• x_t = (x_{t1}, ..., x_{tn}) are the activity levels of technologies and resources at time t;
• B_t is the matrix of input and output relations among the technologies, and d_t is the demand vector;
• R_t is the matrix approximating the quadratic costs of resources and the balances for resource use, and r are the corresponding quantities;
• P_k is the matrix of systems constraints, such as market penetration constraints and maximum shares of specific resource and technology activities, and e_t are the corresponding limits; and
• upper limits are imposed on technological activities at each time t.
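A deliberately tiny instance of such a deterministic linear program, with two technologies, one demand, and one activity limit standing in for the full system, can be written with an off-the-shelf LP solver; all numbers are illustrative.

```python
# Toy deterministic problem: meet a single demand with two technologies at
# minimum cost, subject to an upper limit on one technology's activity.
from scipy.optimize import linprog

costs = [3.0, 5.0]   # unit costs of technologies 1 and 2
demand = 10.0        # requirement: x1 + x2 >= demand
# linprog expects A_ub @ x <= b_ub, so the demand constraint is negated.
res = linprog(c=costs,
              A_ub=[[-1.0, -1.0]],
              b_ub=[-demand],
              bounds=[(0.0, 6.0),    # upper limit on technology 1's activity
                      (0.0, None)])
# Cheapest mix: technology 1 at its limit (6.0), technology 2 covers the rest (4.0).
```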
Such deterministic formulations severely restrict the range of possible future energy systems developments. In addition, the dynamics of future developments are prescribed by the system of assumed constraints. In reality, there is a wide range of possible alternative future developments of the energy system, especially in the long run (at the scale of a century). This is amply demonstrated by the enormous range of future energy requirements and CO2 emissions in energy scenarios in the literature (see Figure 10.1).
In contrast, the alternative formulation of the problem proposed here is highly unrestrained and “open.” We assume that there is a priori “freedom of choice” among fundamentally different future structures of the energy system and possible future dynamics. The uncertainty is resolved through a simultaneous drawing from all distributions for each particular technology dynamics (see the box on Terminology). To make a rational choice among alternative energy system structures, technology dynamics are compared on the basis of expected systems costs and the risks associated with each particular technology dynamics. Risks or benefits are defined here as functions of the difference between the expected and realized costs of each technology dynamics. There are a number of ways to quantify risk (see, e.g., Markowitz 1959). We adopt a technique whereby the risk is represented by piecewise linear functions of the following form:
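The expression itself is not reproduced in this text. A piecewise-linear form consistent with the description, with hypothetical weights α_i and β_i, would be:

```latex
% Hypothetical reconstruction; \alpha_i and \beta_i are assumed weights.
% Risk/benefit as a function of the deviation of the realized cost
% C_i(\omega) from its expected value \bar{C}_i:
r_i(\omega) \;=\; \alpha_i \max\bigl\{0,\; C_i(\omega) - \bar{C}_i\bigr\}
          \;-\; \beta_i \max\bigl\{0,\; \bar{C}_i - C_i(\omega)\bigr\}
```

with α_i > β_i ≥ 0, so that the risk of cost underestimation is penalized more heavily than the benefit of overestimation is credited.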
This clearly asymmetric form of the risk function has an obvious advantage over a more standard approach based on variance minimization. Splitting the risk function into two parts representing risk associated with cost underestimation5 and the benefit associated with cost overestimation is a natural reflection of the highly asymmetric risk perception of “losses” and “gains.” Moreover, different “actors” (energy agents) may have quite different levels of risk aversion. In principle, this approach allows for the representation of different risks perceived by different decision actors or agents.
This asymmetric treatment of “risk” and “benefit” significantly increases the complexity of the problem: risk cannot be expressed simply in terms of a functional relation of expected values and corresponding variances, as is done in Markowitz's formulation (Markowitz 1959). Formally speaking, the objective function specified above is a nonsmooth and, in general, nonconvex function defined on a probabilistic space. In its general form, the problem is analytically intractable, even for a relatively small system with just a few technologies. The problem can be solved using a stochastic approximation technique (see Ermoliev and Norkin 1995). This approach is based on the idea of estimating the solution of the original problem by solving another stochastic problem in which the original probabilistic space is replaced with a finite (sufficiently large) number of simultaneously generated “samples” drawn according to the distribution functions for the uncertain parameters (see Grübler and Gritsevskyi 1998). This approach differs significantly from the conventional Monte Carlo approach: all drawings are performed simultaneously, and the resulting policy conclusion is formulated against the background of all considered outcomes. There are strong methodological similarities between so-called exploratory modeling (see Bankes 1993; Lempert et al. 1996; Robalino and Lempert 2000) and the approach used here, although we use a very different implementation and analysis technique.
A systematic approach to aggregating scenario-specific solutions into a robust solution is examined in Ermoliev and Wets (1988). These techniques require explicit characterization of scenario-specific solutions, which may lead to extremely large optimization problems. Different stochastic optimization techniques deal with the design of robust solutions from a set of previously or sequentially simulated scenarios. In the latter case, the stochastic optimization procedure can be viewed as a sequential adaptation of a given initial energy policy by learning from the simulated history of its implementation.
In our test runs we initially used between 100 and 500 simultaneously drawn scenarios from each technology dynamics specification as an approximation of a theoretically infinite number of possible realizations. These ranges for the appropriate number of scenarios were obtained as a result of practical experiments and represent optimal trade-offs between exponentially growing computational complexity and reasonable accuracy of obtained solutions. Eventually, we decided that 250 simultaneously drawn scenarios are sufficient for a given technology dynamics. It is important to emphasize that it is not necessary to maintain high accuracy by using many drawings during the initial calculation steps, when the value of the objective function is far from “optimal,” as the difference between a solution value and the value of the new draw is much larger than the errors resulting from “rough” stochastic approximation. However, at the final stage the number of drawings needs to be increased. At that stage, we use an alternative drawing technique to obtain better estimates for error bounds.
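The sampling logic described above can be sketched as follows. Only the 100–500 range and the final choice of 250 scenarios come from the text; the linear ramp between coarse and fine sample sizes, and both function names, are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_expected_cost(simulate_cost, n_draws):
    """Approximate expected systems cost by averaging n simultaneously drawn
    scenarios; also return the standard error of that estimate. This is the
    stochastic-approximation idea of replacing the probability space with a
    finite sample."""
    draws = np.array([simulate_cost(rng) for _ in range(n_draws)])
    return draws.mean(), draws.std(ddof=1) / np.sqrt(n_draws)

def sample_schedule(step, total_steps, n_min=100, n_max=500):
    """Coarse sampling early in the search, finer sampling near the end,
    since early on the sampling error is dwarfed by the optimality gap."""
    return int(n_min + (n_max - n_min) * step / max(total_steps - 1, 1))
```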
A scenario is a particular deterministic realization of a future energy system. Here it specifies unique values for all activity levels, such as energy flows, increases in capacities, total systems costs, energy extraction, etc. Technological dynamics denotes a more generic characterization of future developments with inherent uncertainties surrounding, for instance, future costs. Each resolution of these uncertainties inherent in technological dynamics results in a given scenario. There is an infinite number of possible scenarios that share exactly the same technological dynamics. Thus, technology dynamics specifies a set of uncertain, generic relations. In particular, it specifies the set of uncertain cost reductions as a function of doublings of output, the cost distributions in any given period, and possible spillover effects within and across the 10 technology clusters within the reference energy system. Our approach to analyzing and comparing alternative technological dynamics is to assume specific distribution functions for uncertain parameters and relations. The uncertainty is resolved through a simultaneous drawing from all distribution functions for a given technology dynamics that then results in a deterministic scenario. After many such drawings, expected costs and other characteristics of the scenario sample for a particular technology dynamics can be estimated. The expected costs and other sample statistics can then be used to obtain risk estimates associated with each technology dynamics. Each scenario within the set belonging to one specific technology dynamics can be characterized by a conditional probability relative to the other scenarios in that set.6 Feasible technology dynamics are those that satisfy given energy demands and other systems constraints. A run of scenarios refers to all scenarios generated from a given set of technology dynamics through simultaneous drawings from all uncertain distributions. 
In this application we have analyzed 520 alternative technology dynamics and have drawn some 250 scenarios for each one, resulting in a run of about 130,000 scenarios.
We call a given technology dynamics optimal (suboptimal) for a given run if it is optimal (suboptimal) compared with all other technology dynamics in the run of scenarios with respect to the weighted sum of its expected systems costs, and risk functions based on these costs, for all drawn scenarios.
More formally, the problem is given by the following:
• x|_0^t denotes the trajectory (x_0, x_1, ..., x_t);
• C_t(x|_0^t, ω) are the stochastic costs under the condition that technology dynamics x|_0^t is chosen, and are such that the cost reduction index has the distribution function described earlier (with the number of doublings ND_t calculated from x|_0^t) and the initial distribution function for C_0(ω) is equal to F_0(·);
• Δĉ_i are given “threshold” values for total cost deviations; and
• R_k(ω) and r(ω) reflect uncertain quantity-to-cost relations.
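The formal statement does not survive in this text. Under the definitions above, a plausible sketch of the objective, combining expected discounted costs with the piecewise-linear risk terms, is:

```latex
% Hypothetical sketch; \beta is the discount factor, \rho_i are the
% piecewise-linear risk functions, and \Delta C_i(\omega) are the realized
% total cost deviations measured against the thresholds \Delta\hat{c}_i.
\min_{x|_0^T} \;\;
\mathbf{E}\Bigl[\, \sum_{t=0}^{T} \beta^{t}\, C_t\bigl(x|_0^t,\omega\bigr) \Bigr]
\;+\; \sum_{i} \mathbf{E}\,\rho_i\bigl(\Delta C_i(\omega) - \Delta\hat{c}_i\bigr)
```

subject to the demand, resource, and systems constraints listed above, evaluated over the simultaneously drawn scenarios.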
As mentioned above, the original global nonsmooth stochastic optimization problem is approximated by solving a sequence of large-scale linear optimization problems. This is done by applying a two-level nested structure. The global optimization part, which defines technological dynamics with respect to new unit installations for technologies with increasing returns to scale, is an implementation of an adaptive global optimization random search algorithm specifically tailored to network flow optimization problems. [For a description of such algorithms, see Horst and Pardalos (1995) and Pinter (1996).] The inner algorithm is the interior-point method for linear optimization. The PCx and pPCx solvers used in this study were provided by the Argonne National Laboratory (Wright 1996a, 1996b; Czyzyk et al. 1997). These solvers are written in C; we modified them to increase computational efficiency for our specific problem formulation and to link them directly to the global optimization part of the structure.
A big advantage of the adaptive random search algorithm is that it does not require strict sequential updating of the approximated solution. Rather, it refines the approximated solution when new information becomes available. This allowed us to devise a “parallel” adaptation of this technique. The inner linear optimization problem is relatively large and difficult to solve. Finding a solution for a given technology dynamics (with fixed uncertainty distribution parameters) using the global optimization algorithm requires approximately 10–40 minutes of run time, depending on the number of simultaneous drawings from uncertain distributions, the number of parameters to be considered, and whether or not approximation of the starting point is available (the partial “hot” restart technique).
The original problem implementation was done on a CRAY T3E-900 supercomputer at the National Energy Research Scientific Computing Center (NERSC) in the United States. NERSC is funded by the US Department of Energy, Office of Science, and is part of the Computing Sciences Directorate at the Lawrence Berkeley National Laboratory. All initial feasibility runs and a number of experiments were performed using 32–64 processing units on the CRAY T3E-900. The problem was then ported to the IIASA computer network environment and re-implemented using the Message Passing Interface (MPI) standard. We used MPICH, a public, portable implementation of MPI developed and supported by the Argonne National Laboratory, together with a special implementation for Windows NT network clusters (WMPI) provided by the University of Coimbra, Portugal. Currently, the system is operational on a network cluster of 6–16 Intel Pentium II 233 MHz PCs (the number of PCs can be changed dynamically). Typical runs take 22–46 wall clock hours. Owing to the extended logging procedure, calculations can easily be controlled remotely, for example, stopped and re-activated at any time. This technique allows the computers to be used during “off-peak” and weekend hours.
Of the 520 alternative technology dynamics, 53 resulted in scenarios with very low overall energy system costs; they all fall within 1 percent of the best values achieved. We designate this set of 53 technology dynamics as “optimal” because they fulfill the “optimality” criteria. Most of the statistical and other analyses here focus on these 53 optimal technology dynamics.
These 53 optimal, but fundamentally different, technology dynamics produce a wide range of alternative emergent energy systems. They all share the same useful energy demand trajectory but cover most of the range of CO2 emissions found in the literature and unfold into all possible future energy system structures. The underlying scenarios include futures that range from an increasing dependence on fossil energy sources to a complete transition to alternative energy sources and nuclear energy. Thus, one result of the analysis is that different energy system structures emerge with similar overall costs; in other words, there is great diversity across alternative energy technology strategies. The strategies are path dependent, and it is not possible to choose a priori “optimal” directions of energy systems development.
The scenarios from the literature span a wide range of future energy requirements and emissions owing to the variation of the driving forces of future emissions, such as energy demand. In contrast, the set of scenarios with endogenous learning spans the range as a result of different technological dynamics alone. It is interesting to note that the “optimal” scenarios quite closely match the distribution of the scenarios from the literature, but with a somewhat narrower range (they leave the extreme tails of the distribution uncovered). In contrast, the frequency distribution of the full set of 520 technology dynamics is different from the other two, with many more scenarios in the mid-range of the distribution. This means that the optimal or most “cost-effective” development paths correspond quite closely to the scenario distribution from the literature. The “median” or “central” futures are underrepresented in the literature and among the scenarios, indicating a kind of “crowding-out” effect surrounding balanced and median scenarios. In any case, technological learning as specified in our approach leads to future energy systems that are marked by either high or low emission ranges (with a single useful demand trajectory), demonstrating a kind of implicit bifurcation across the range of possible emissions.
Another finding from the analysis is that endogenous technological learning with uncertainty and spillover effects will have the greatest impact on the emerging energy system structures during the first few decades of the twenty-first century. Over these “intermediate” periods of time, these two processes create effective lock-in effects and increasing returns to adoption. In the very long run, however, none of these effects is of great importance. The reason is that over such long periods many doublings of capacity of all technologies with inherent learning occur, so little relative cost advantage results from large investments in only a few technologies and clusters. Therefore, the main finding is that, under uncertainty, the near-term investment decisions in new technologies are more important in deciding the direction of long-term development of the energy system than are decisions made toward the end of the time horizon. Thus, the most dynamic phase in the development of future energy systems will occur during the next few decades. It is during this period that there will be a high degree of freedom of choice among the future technologies, and many of these choices will lead to high spillover learning effects for related technologies.
Our analysis of the emerging dynamics and different directions of energy systems development suggests some policy implications. First, future RD&D efforts and investments in new technologies should be distributed across “related” technologies rather than directed at only one technology from a cluster, even if that technology appears to be a “winner.” Second, RD&D efforts and technology investments should not be spread thinly across a large portfolio of future technologies, but should instead focus on (related) technologies that might form technology clusters.
We would like to thank Sabine Messner, Gordon J. MacDonald, Yuri Ermoliev, and Manfred Strubegger, all from IIASA, for their help and advice. Sabine Messner and Gordon J. MacDonald worked with us on the grant from the National Energy Research Scientific Computing Center at Lawrence Berkeley National Laboratory that is funded by the US Department of Energy. This grant allowed the original problem implementation on a CRAY T3E-900 supercomputer, and we are grateful for the financial support provided by the US Department of Energy. The original MESSAGE model implementation and technology assumptions are based on the work of Sabine Messner. We are also grateful to Yuri Ermoliev, who helped with the development of the solution methods and continuously provided help and advice. Last, but not least, we thank Manfred Strubegger, who provided the fossil energy supply functions, implemented important changes in the problem solution processing module, and developed the new script that was used for storing the results of this analysis.
We would also like to thank colleagues from other institutions who have provided software and support for our research, including Michael Wagner from the Argonne National Laboratory, Steve Wright of Cornell University, and Francesca Verdier from the National Energy Research Scientific Computing Center at Lawrence Berkeley National Laboratory for their assistance.
1. Cost reduction and the learning rate may be quite different, depending on how “learning” is measured. The learning rate for photovoltaics in Figure 10.4 is about 20 percent per doubling of cumulative capacity. For example, Watanabe (1995) analyzed direct investment in photovoltaics in Japan and found that the unit costs decreased by about 50 percent per doubling of cumulative investment. Grübler (1998) estimated the learning rate at 30 percent per doubling of cumulative installed capacity based on the data set from Watanabe (1995).
2. To use our approach, we must be able to compute the mean value for the corresponding distribution and to generate random samples based on that distribution. Implementation in the form of a “black box” is perfectly suitable.
3. The suggested technique does not require or utilize the specific relation between Ft and the initial distribution F0. It also does not require that K be kept constant over time. In the absence of a better understanding of the quite complex and nonlinear relationship, and owing to the lack of empirical data, we decided to use the simplest assumptions possible: the type of distribution stays the same (the distribution does not change its shape), the mean value follows the realized cost reduction curve, and the variance decreases proportionally with the expected cost reduction.
4. As in many other models, a 5 percent discount rate was adopted.
5. More than one factor could lead to underestimation of realized costs. For all new technologies (especially in the early stages of development), even in cases where the cost reduction rate is as good as or better than expected, there is significant cost fluctuation resulting from the uncertainty associated with such cost distributions (high variance, heavy tails, and so on). Factors such as a high dependence on a particular resource form, a low level of technological diversification, and a strong linkage between system parts largely contribute to an increasing probability of substantial cost underestimation. Such analysis would be nearly impossible to perform on the basis of simple cost-to-cost comparisons for alternative energy supply chains.
6. Each scenario has exactly zero probability of realization, so it makes sense to talk about scenario probability only under certain conditions. For example, under the condition that exactly one scenario from a set of N scenarios must happen, it is possible to introduce and compare relative probabilities defined on this set of N scenarios.
Bankes, S.C., 1993, Exploratory Modeling and Policy Analysis, RAND/RP-211, Santa Monica, CA, USA.
Christiansson, L., 1995, Diffusion and Learning Curves of Renewable Energy Technologies, WP-95-126, International Institute for Applied Systems Analysis, Laxenburg, Austria.
Czyzyk, J., Mehrotra, S., Wagner, M., and Wright, S.J., 1997, PCx User Guide (Version 1.1), Technical Report, OTC 96/01, http://www-unix.mcs.anl.gov/otc/Tools/PCx/doc/PCx-user.ps.
Ermoliev, Y.M., and Norkin, V., 1995, On Nonsmooth Problems of Stochastic Systems Optimization, WP-95-096, International Institute for Applied Systems Analysis, Laxenburg, Austria.
Ermoliev, Y.M., and Norkin, V., 1998, Monte Carlo Optimization and Path Dependent Non-stationary Laws of Large Numbers, IR-98-009, International Institute for Applied Systems Analysis, Laxenburg, Austria.
Ermoliev, Y.M., and Wets, R.J.-B., 1988, Numerical Techniques for Stochastic Optimization, Springer-Verlag, Berlin, Germany.
Foray, D., and Grübler, A., 1990, Morphological analysis, diffusion and lock-out of technologies: Ferrous casting in France and Germany, Research Policy, 19(6):535–550.
Golodnikov, A., Gritsevskyi, A., and Messner, S., 1995, A Stochastic Version of the Dynamic Linear Programming Model MESSAGE III, WP-95-094, International Institute for Applied Systems Analysis, Laxenburg, Austria.
Grübler, A., 1998, Technology and Global Change, Cambridge University Press, Cambridge, UK.
Grübler, A., and Gritsevskyi, A., 1998, A model of endogenized technological change through uncertain returns on learning, http://www.iiasa.ac.at/Research/TNT/WEB/Publications/
Grübler, A., and Messner, S., 1996, Technological uncertainty, in N. Nakicenovic, W.D. Nordhaus, R. Richels, and F.L. Toth, eds, Climate Change: Integrating Science, Economics, and Policy, CP-96-001, International Institute for Applied Systems Analysis, Laxenburg, Austria.
Horst, R., and Pardalos, P.M., eds, 1995, Handbook of Global Optimization, Kluwer, Dordrecht, Netherlands.
Lempert, R.J., Schlesinger, M.E., and Bankes, S.C., 1996, When we don't know the costs or the benefits: Adaptive strategies for abating climate change, Climatic Change, 33(2):235–274.
MacGregor, P.R., Maslak, C.E., and Stoll, H.G., 1991, The Market Outlook for Integrated Gasification Combined Cycle Technology, General Electric Company, New York, NY, USA.
Markowitz, H., 1959, Portfolio Selection, Wiley, New York, NY, USA.
Messner, S., and Strubegger, M., 1991, Part A: User's Guide to CO2DB: The IIASA CO2 Technology Data Bank–Version 1.0, WP-91-031, International Institute for Applied Systems Analysis, Laxenburg, Austria.
Messner, S., Golodnikov, A., and Gritsevskyi, A., 1996, A stochastic version of the dynamic linear programming model MESSAGE III, Energy, 21(9):775–784.
Morita, T., and Lee, H.-C., 1998, Appendix to emissions scenarios database and review of scenarios, Mitigation and Adaptation Strategies for Global Change, 3(2–4):121–131.
Nakicenovic, N., 1996, Technological change and learning, in N. Nakicenovic, W.D. Nordhaus, R. Richels, and F.L. Toth, eds, Climate Change: Integrating Science, Economics, and Policy, CP-96-001, International Institute for Applied Systems Analysis, Laxenburg, Austria.
Nakicenovic, N., 1997, Technological Change as a Learning Process, paper presented at the Technological Meeting ’97, International Institute for Applied Systems Analysis, Laxenburg, Austria.
Nakicenovic, N., Grübler, A., Ishitani, H., Johansson, T., Marland, G., Moreira, J.R., and Rogner, H.-H., 1996, Energy primer, in Climate Change 1995: Impacts, Adaptations and Mitigation of Climate Change: Scientific-Technical Analysis, Contribution of Working Group II to the Second Assessment Report of the IPCC, Cambridge University Press, Cambridge, UK, pp. 75–92.
Nakicenovic, N., Grübler, A., and McDonald, A., eds, 1998a, Global Energy Perspectives, Cambridge University Press, Cambridge, UK.
Nakicenovic, N., Victor, N., and Morita, T., 1998b, Emissions scenarios database and review of scenarios, Mitigation and Adaptation Strategies for Global Change, 3(2–4):95–120.
Pinter, J., 1996, Global Optimization in Action, Kluwer, Dordrecht, Netherlands.
Robalino, D., and Lempert, R.J., 2000, Carrots and sticks for new technology: Crafting greenhouse gas reduction policies for a heterogeneous and uncertain world, Integrated Assessment, 1(1):1–19.
Rogner, H.-H., 1997, An assessment of world hydrocarbon resources, Annual Review of Energy and the Environment, 22:217–262.
Strubegger, M., and Reitgruber, I., 1995, Statistical Analysis of Investment Costs for Power Generation Technologies, WP-95-109, International Institute for Applied Systems Analysis, Laxenburg, Austria.
Watanabe, C., 1995, Identification of the role of renewable energy, Renewable Energy, 6(3):237–274.
Wright, S.J., 1996a, Modified Cholesky Factorizations in Interior-Point Algorithms for Linear Programming, Preprint ANL/MCS-P600-0596, Argonne National Laboratory, Argonne, IL, USA.
Wright, S.J., 1996b, Primal-Dual Interior-Point Methods, SIAM, London, UK.
Reprinted from Energy Policy, Volume 28, Gritsevskyi, A., and Nakicenovic, N., Modeling uncertainty of induced technological change, pp. 907–921, © 2000, with permission from Elsevier Science.