Chapter 6 Time Dependence in the 1-D Models

In Chapter 5, we found that time-independent, zonally symmetric models can be cast into the form of a linear differential equation in the latitude variable with zero-flux boundary conditions at the poles. In the case of a latitude-independent thermal diffusion coefficient, the system can be solved with Legendre polynomials. In this chapter, we extend this procedure to the case of time dependence. Such an extension allows for idealized seasonal-cycle simulations for the zonally symmetric planet. Introducing time dependence necessarily requires some kind of effective heat capacity for the column of air–land or air–ocean medium. When we take the planet to have a homogeneous surface and constant diffusion coefficient, we also specify that the heat capacity is not spatially dependent, that is, it is a constant. An all-land planet is one for which the heat capacity is characteristic of a land surface. This means it takes a value calculated from a fraction (nominally half) of the mass of a column of air. This leads to a time constant of about 1 month. This is our new phenomenological coefficient for Chapter 6.

The time-dependent models are amenable to analytical solution for uniform planets (constant D and C). Model solutions are again in the form of Legendre polynomial modes, each having a characteristic decay time, with larger scales (low Legendre index) having longer relaxation times and smaller scales shorter times. The seasonal cycle of latitude-dependent insolation has a very simple form when truncated at the few-mode level. Legendre polynomials of index 0, 1, and 2 describe the latitude dependence of the seasonal cycle of insolation, except, of course, for the discontinuous derivative near the poles, which is necessary to describe the onset of perpetual night or day at the appropriate latitudes and times of year. This insolation formula allows us to obtain very simple expressions for the dependence of insolation on obliquity, precession of the equinoxes, and eccentricity. The seasonal cycle simulations also give us an idea about the lag of the seasonal response behind the driving seasonal heating cycle.

Later in the chapter, we return to the stochastic nature of climate forcing (presumably due to weather instabilities) by showing how the random walk of heated parcels is approximately imitated by diffusion in an ensemble average. This analysis relates the mean square of the parcel's velocity and relaxation time to the diffusion coefficient of the ensemble.

A short section is added to show how one might go about solving one-dimensional problems via finite difference methods. The appendices to the chapter give derivations of the insolation functions.

6.1 Differential Equation for Time Dependence

Consider the problem of a time-dependent one-dimensional climate system for a planet whose surface is uniformly land or ocean. The energy balance model for constant coefficients is as follows:

C ∂T/∂t − D ∂/∂x [(1 − x²) ∂T/∂x] + A + B T = Q S(x, t) a(x, t),    (6.1)

where x is the sine of the latitude, t is time, T(x, t) is the latitude- and time-dependent temperature field, S(x, t) is the insolation function, possibly allowing for a seasonal cycle, a(x, t) is the coalbedo, which is a function of latitude and possibly time dependent; Q is the total solar irradiance divided by 4; A, B, and D are phenomenological coefficients as defined in Chapter 5; and C is an effective heat capacity. As discussed in Chapter 2, this means a timescale τ = C/B of about 30 days for an all-land planet and a few years for a planet covered with a mixed-layer ocean. This amounts to using the heat capacity of half a column of air (at constant pressure) for the all-land case and that of 50–100 m of water for the mixed-layer ocean case. In the latter, we assume the ocean below is uncoupled from the mixed layer.

We expand the time-dependent temperature field into Legendre polynomials:

T(x, t) = Σ_n T_n(t) P_n(x).    (6.2)

Note that each coefficient T_n is a function of t. After inserting the series, multiplying through by P_n(x) and integrating as before, we obtain an ordinary differential equation for the T_n(t):

C dT_n/dt + [B + n(n + 1) D] T_n = Q H_n(t),    n = 0, 1, 2, …,    (6.3)

where

H_n(t) = (2n + 1)/2 ∫_{−1}^{1} S(x, t) a(x, t) P_n(x) dx.    (6.4)

It is important to note that none of the terms in the array indicated by (6.3) is coupled to any of the others. In other words, in the equation for mode number n, no other mode index, say m, appears.

6.2 Decay of Anomalies

Consider first the case where S and a are set at their mean annual values as in Chapter 5. We can examine the behavior of the climate as it is perturbed away from steady state. We have the solution to the initial-value problem:

T_n(t) = T_n(∞) + [T_n(0) − T_n(∞)] e^{−t/τ_n},    (6.5)

where T_n(∞) is the solution to the time-independent (steady-state) problem given above and

τ_n = C / [B + n(n + 1) D].    (6.6)

Each mode has its own characteristic time τ_n. Note that the characteristic times fall off rapidly as a function of n, as shown in Figure 6.1. This agrees with our intuition that larger spatial scales have longer characteristic (adjustment) times. We will establish a similar result for the autocorrelation times later in noise-driven models.
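
The mode relaxation times can be computed directly from (6.6). A minimal sketch in Python; the parameter values below are illustrative round numbers, not the text's exact values:

```python
SECONDS_PER_DAY = 86400.0

def tau_n(n, C=4.0e7, B=2.0, D=0.6):
    """Relaxation time tau_n = C / (B + n(n+1) D), in seconds.

    C is the column heat capacity (J m^-2 K^-1); B and D are the
    phenomenological coefficients (W m^-2 K^-1). Values are illustrative.
    """
    return C / (B + n * (n + 1) * D)

# Larger scales (small n) relax more slowly, as in Figure 6.1.
taus_days = [tau_n(n) / SECONDS_PER_DAY for n in range(5)]
```

The monotone decrease of taus_days with n is the point: mode 0 relaxes on the radiative timescale C/B, while high modes are damped much faster by diffusion.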

6.2.1 Decay of an Arbitrary Anomaly

An arbitrary distribution of thermal anomaly will decay in time according to a superposition of the modes just discussed. For example, an initial distribution of anomaly (departure from equilibrium) whose shape is g(x) = Σ_n g_n P_n(x) will decay according to

T(x, t) − T_eq(x) = Σ_n g_n P_n(x) e^{−t/τ_n}.    (6.7)

For example, consider an initial shape rendered by a steady ring of heat source (Chapter 5) at a specific latitude as given by (5.11) and shown in Figure 5.11. The solid line in Figure 6.2 shows such a steady-state anomaly located at 37°N. If the heat source is suddenly switched off, the distribution will decay according to (6.7). The figure shows stages of the decay at times 0.25, 0.5, 0.75, and 1.00 in units of the relaxation time. We see that the higher modes, those rendering the cusp-like peak, are lost quickly, leaving behind only the smoother, long-lived modes.
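
The decay law (6.7) is easy to evaluate numerically. A sketch, assuming nondimensional units in which C = 1 and a hypothetical set of mode amplitudes g_n (not the values behind Figure 6.2):

```python
import math

def legendre(n, x):
    """P_n(x) by the Bonnet recurrence."""
    p_prev, p = 1.0, x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

def anomaly(x, t, g, B=2.0, D=0.6, C=1.0):
    """Sum_n g_n P_n(x) exp(-t/tau_n) with tau_n = C / (B + n(n+1) D)."""
    return sum(g_n * legendre(n, x) * math.exp(-t * (B + n * (n + 1) * D) / C)
               for n, g_n in enumerate(g))

g = [0.2, 0.3, 0.25, 0.15, 0.1]   # hypothetical mode amplitudes
```

Because the decay rate grows like n(n + 1), a cusp built from many modes smooths quickly, leaving the low-order, long-lived modes behind.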

6.3 Seasonal Cycle on a Homogeneous Planet

We can begin the study of the seasonal cycle by placing the solar heating distribution as a forcing on the right-hand side of the energy-balance equation. The heating distribution can be written for a circular orbit as follows:

S(x, t) = 1 + S_{11} cos(2πt) P_1(x) + [S_{20} + S_{22} cos(4πt)] P_2(x),    (6.8)

where t is the time in years and S_{11}, S_{20}, and S_{22} are numerical coefficients (see Figure 6.3). The subscripts denote the Legendre index first, then the seasonal time harmonic second. A derivation of this heating distribution function is sketched in the appendix to this chapter. But for now we can observe some important properties. The mean annual version can be quickly recovered by averaging from 0 to 1 year in t,

S(x) = 1 + S_{20} P_2(x),    (6.9)

which is the formula used earlier in Chapter 5. The main part of the seasonal cycle is carried by the coefficient of the P_1(x) term. Note that it is antisymmetric between the hemispheres (recall the identity P_1(−x) = −P_1(x)). The seasonal forcing is largest near the poles (x = ±1). Another interesting feature captured by this representation is that along the equator (x = 0) there are two maxima (see Figure 6.3). This is the effect of the Sun crossing the equator twice each year at the equinoxes. Table A.6.1 shows the coefficients for some higher terms in the expansion for the present elliptical orbit of the Earth. Figure 6.4 shows the seasonal insolation when only modes 0, 1, and 2 are included. This figure indicates that retention of only these three modes captures the most important features of the insolation. Figure 6.5 shows how in the tropics there are two peaks as one passes through the year. The figure shows that at the equator the semiannual harmonic is strong. These three modes are able even to capture the passage of the Sun over the equator twice per year. Away from the equator, the seasonal harmonic dominates. Figure 6.6 shows latitudinal time sections of the forcing. The dashed lines show the forcing for zero eccentricity.
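
The truncated distribution (6.8) is simple to code. A sketch; the default coefficient values (S11 = −0.796, S20 = −0.482, S22 = 0.147) are assumptions drawn from the EBM literature, not necessarily the values in Table A.6.1:

```python
import math

def insolation(x, t, S11=-0.796, S20=-0.482, S22=0.147):
    """Three-mode seasonal insolation; x = sin(latitude), t in years.

    Coefficient values are illustrative assumptions, not the book's table.
    """
    P1 = x
    P2 = 0.5 * (3.0 * x * x - 1.0)
    return (1.0 + S11 * math.cos(2.0 * math.pi * t) * P1
            + (S20 + S22 * math.cos(4.0 * math.pi * t)) * P2)

def annual_mean(x, samples=400):
    """Average over one year; the harmonics integrate to zero."""
    return sum(insolation(x, k / samples) for k in range(samples)) / samples
```

Averaging over the year kills both harmonics and leaves 1 + S20 P2(x), the mean annual formula (6.9); at the equator the P_1 term vanishes and only the semiannual term varies, giving the two equinoctial maxima.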

We can express the seasonal dependence of insolation as

S(x, t) = Σ_n S_n(t) P_n(x).    (6.10)

If we take the coalbedo, diffusion coefficient, and heat capacity to be constants independent of season and latitude, we can write equations for the mode responses:

C dT_n/dt + [B + n(n + 1) D] T_n = Q a_0 S_n(t),    (6.11)

where a_0 is the constant coalbedo. To simplify the algebra, we employ complex notation (a superscripted symbol indicates a complex variable) and the complex Fourier series:

T_n(t) = Σ_m T_{nm} e^{2πimt}.    (6.12)

Inserting this into the governing mode equations:

[2πim C + B + n(n + 1) D] T_{nm} = Q a_0 S_{nm}.    (6.13)

To recover the physical mode amplitudes,

|T_{nm}| = Q a_0 |S_{nm}| / {[B + n(n + 1) D]² + (2πm C)²}^{1/2},    (6.14)

whereas the phase lag behind the forcing is

φ_{nm} = arctan{2πm C / [B + n(n + 1) D]}.    (6.15)

The complex T_{nm} are related to the real Fourier coefficients as follows:

6.16

After a time long compared to the relaxation times (t ≫ τ_n), the transients die out and we are left with the repeating steady-state solutions. First, consider the time-independent parts:

6.17
6.18

Next, consider the seasonal harmonic terms:

6.19
6.20
6.21

and finally, the semiannual terms:

6.22
6.23
6.24

In each case, we can compute the amplitude

6.25

with phase lag

6.26

It is interesting to evaluate these quantities for the all-land planet. Using typical values of the parameters Q, B, D, and C, we find the steady-state amplitudes together with the annual and semiannual harmonic amplitudes and their phase lags (in days). While the phase lag is approximately correct for an all-land planet, the amplitude of both harmonics is larger than the observed Northern Hemisphere zonal averages by a factor of about 4. These large amplitudes are due to the absence of ocean surface in the model. In the Northern Hemisphere, the ocean fraction is about 60%, and in the Southern Hemisphere it is about 80%. We will attempt to remedy this situation in Chapter 8 by introducing land–sea geography. Figure 6.7 shows the forcing for a circular orbit and the response superimposed in solid line contours. Note the lag of about a tenth of a year in the response. There is also a poleward displacement of the maximum response from the heating. Figure 6.8 indicates the response through time at some selected latitudes, this time including the effects of the eccentric orbit.
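
The amplitude and phase-lag formulas (6.14) and (6.15) are easy to evaluate. A sketch with illustrative parameter values (an all-land heat capacity giving τ = C/B of about one month; none of these numbers are taken from the text):

```python
import math

def mode_response(n, m, S_nm, Q=340.0, a0=0.7, B=2.0, D=0.6, C=2.0 / 12.0):
    """Amplitude (K) and phase lag (years) of mode (n, m).

    B and D in W m^-2 K^-1; C in W m^-2 K^-1 yr, so tau = C/B = 1 month here.
    All parameter values are illustrative assumptions.
    """
    stiffness = B + n * (n + 1) * D          # restoring coefficient
    omega = 2.0 * math.pi * m                # angular frequency, per year
    amplitude = Q * a0 * abs(S_nm) / math.hypot(stiffness, omega * C)
    lag = math.atan2(omega * C, stiffness) / omega if m else 0.0
    return amplitude, lag

amp11, lag11 = mode_response(1, 1, 0.796)   # annual harmonic of the P_1 mode
```

Increasing the heat capacity (toward an ocean-covered planet) both shrinks the amplitude and lengthens the lag, which is the qualitative point of this section.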

6.4 Spread of a Point Disturbance

Since EBMs make use of diffusion as a mechanism to transport heat poleward in the atmosphere/ocean system, it is useful to see how diffusion is related to random walk processes. Random walk means that the progress (root mean square average distance from the point of origination) of a passive scalar is the sum of a large number of steps. Let X_N be the random variable denoting the displacement after N steps. We can write

X_N = x_1 + x_2 + ⋯ + x_N.    (6.27)

The mean over an ensemble of such random walks is 0, because we assume the individual steps have mean 0, that is,

⟨X_N⟩ = Σ_i ⟨x_i⟩ = 0.    (6.28)

The variance of X_N can be calculated if all the x_i are uncorrelated and have equal variance σ²:

⟨X_N²⟩ = Σ_i ⟨x_i²⟩ = N σ².    (6.29)

The standard deviation of X_N is therefore proportional to the square root of the number of steps.
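
The √N law in (6.29) can be checked with a short Monte Carlo experiment (unit steps of random sign; the trial count is arbitrary):

```python
import random

def rms_displacement(N, trials=4000, seed=1):
    """RMS of X_N over an ensemble of random walks with unit steps."""
    rng = random.Random(seed)
    mean_square = 0.0
    for _ in range(trials):
        X = sum(rng.choice((-1.0, 1.0)) for _ in range(N))
        mean_square += X * X
    return (mean_square / trials) ** 0.5
```

With unit-variance steps, ⟨X_N²⟩ = N, so the RMS displacement grows like √N: quadrupling the number of steps only doubles the spread.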

Now consider the one-dimensional damped diffusion equation

C ∂T/∂t − D ∂²T/∂x² + B T = 0.    (6.30)

It is convenient to solve this equation with Fourier transforms. T(x, t) can be represented as follows:

T(x, t) = ∫_{−∞}^{∞} T̃(k, t) e^{ikx} dk.    (6.31)

We can write the second spatial partial derivative as follows:

∂²T/∂x² = ∫_{−∞}^{∞} [−k² T̃(k, t)] e^{ikx} dk.    (6.32)

The inverse Fourier transform is

T̃(k, t) = (1/2π) ∫_{−∞}^{∞} T(x, t) e^{−ikx} dx.    (6.33)

We can insert the other terms.

∫_{−∞}^{∞} [C ∂T̃/∂t + (B + D k²) T̃] e^{ikx} dk = 0.    (6.34)

The integrand must vanish for every wave number k:

C ∂T̃/∂t + (B + D k²) T̃ = 0.    (6.35)

The solution to this first-order homogeneous linear equation for wave number k is

T̃(k, t) = T̃(k, 0) e^{−(B + D k²) t/C}.    (6.36)

If the initial distribution is peaked at x = 0, that is, T(x, 0) = δ(x), then T̃(k, 0) = 1/2π, a constant. The inverse Fourier transformation

T(x, t) = (1/2π) ∫_{−∞}^{∞} e^{−(B + D k²) t/C} e^{ikx} dk    (6.37)

gives (from tables or MATHEMATICA)

T(x, t) = e^{−t/τ} exp[−x² / (4λ² t/τ)] / (4π λ² t/τ)^{1/2},    (6.38)

where we have used the now familiar length scale λ = √(D/B) and timescale τ = C/B to make the notation more compact and to display the variables x and t proportional to their natural scales, λ and τ. We see that except for the overall damping factor e^{−t/τ}, the solution spreads like a bell-shaped curve with standard deviation

σ(t) = λ √(2t/τ).    (6.39)

The interpretation is that heat spreads symmetrically away from a point-concentrated initial anomaly a distance comparable to the damped diffusive length scale λ in about one characteristic time τ. This is shown in Figure 6.9, wherein the spatial units are proportional to λ and temporal units are proportional to τ.
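
The solution (6.38) is easy to tabulate, and its integral over x recovers the pure damping factor e^{−t/τ}, since diffusion only redistributes heat. A sketch in nondimensional units (λ = τ = 1):

```python
import math

def T_point(x, t, lam=1.0, tau=1.0):
    """Green's function of C T_t = D T_xx - B T with T(x, 0) = delta(x).

    lam = sqrt(D/B) and tau = C/B are the natural length and time scales.
    """
    spread = 4.0 * lam * lam * t / tau          # equals 2 * sigma(t)**2
    return (math.exp(-t / tau) * math.exp(-x * x / spread)
            / math.sqrt(math.pi * spread))

def total_heat(t, half_width=20.0, dx=0.01):
    """Midpoint-rule integral of T_point over x; should equal exp(-t/tau)."""
    n = int(2 * half_width / dx)
    return sum(T_point(-half_width + (j + 0.5) * dx, t) for j in range(n)) * dx
```

The peak height falls both by spreading (the 1/√t prefactor) and by radiative damping (the e^{−t/τ} factor), while the area under the curve decays by damping alone.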

6.4.1 Evolution on a Plane

This section generalizes the treatment to two horizontal dimensions on the infinite plane. We begin by including diffusion in two Cartesian dimensions:

C ∂T/∂t − D (∂²T/∂x² + ∂²T/∂y²) + B T = 0.    (6.40)

This time we use the two-dimensional Fourier transform pair (in the two spatial dimensions):

T(r, t) = ∫ T̃(k, t) e^{i k·r} d²k,   T̃(k, t) = (1/2π)² ∫ T(r, t) e^{−i k·r} d²r,    (6.41)

where the two-dimensional vectors r = (x, y) and k = (k_x, k_y) have been introduced. Following the steps from Section 6.4, we find

T(r, t) = ∫ T̃(k, 0) e^{−(B + D k²) t/C} e^{i k·r} d²k,    (6.42)

where k = |k|, and the integration limits have been suppressed. We proceed by using polar coordinates in the k plane. We write k·r = k r cos θ, where θ is the angle between the vectors k and r, and d²k = k dk dθ.

6.43

We proceed by considering first the angular portion of the double integral (enclosed in parentheses). This angular integral is well known; it is a special case, n = 0, of Bessel's integral (see Whittaker and Watson, 1962, p. 362, or other books on mathematical physics):

J_0(z) = (1/2π) ∫_0^{2π} e^{i z cos θ} dθ,    (6.44)

where J_0 is the Bessel function of order zero. The resulting radial integral can also be solved (e.g., MATHEMATICA), yielding the rather unsurprising answer:

T(r, t) = e^{−t/τ} exp[−r² / (4λ² t/τ)] / (4π λ² t/τ).    (6.45)

Note that in this compact form r is expressed proportional to the damped diffusion length scale λ and t is always found proportional to the global timescale τ. We have found exactly what we found in one dimension: a delta-function point expanding into a 2-D Gaussian shape, spreading into a disk of radius about λ in a time of about τ. The volume under the Gaussian surface also diminishes by the factor e^{−t/τ} owing to the damping from infrared radiation to space.

6.5 Random Winds and Diffusion

Next consider the one-dimensional energy-balance equation in which heat is advected by a wind field v(t) that for simplicity has no x-dependence:

C ∂T/∂t + C v(t) ∂T/∂x + B T = 0.    (6.46)

Once again, look at the component with wave number k:

dT̃_k/dt + [i k v(t) + 1/τ] T̃_k = 0,   τ = C/B.    (6.47)

This is a first-order linear equation and can be solved using the usual integrating factor:

T̃_k(t) = T̃_k(0) e^{−t/τ} exp[−i k ∫_0^t v(t′) dt′].    (6.48)

Let v(t) be a stationary random function of time. The ensemble average of a random quantity is denoted by angle brackets, ⟨·⟩. Let the ensemble average vanish, ⟨v(t)⟩ = 0; stationarity demands that ⟨v(t)v(t′)⟩ depend only on the lag, ⟨v(t)v(t′)⟩ = ⟨v²⟩ρ(t − t′). This means that ∫ v(t′) dt′ will have mean zero and be stationary as well. Finally, we assume that v(t) is a Gaussian random field. In other words, our wind field is similar to a random eddy field that carries heat one way or another depending on the vagaries of such a wind field. Note, however, that we have not permitted the wind to be a function of position. This makes our wind field a rather peculiar one: at a point in time, it is everywhere pointed in the same direction, until the next instant, when it suddenly switches direction and magnitude, the same everywhere. The case of a wind field that is variable in space as well as time is more difficult to solve. Rather than going into the complexities involved in that, we proceed with what we have.

Consider the series expansion of the second exponential factor, exp[−i k ∫_0^t v(t′) dt′].

The first three terms are

⟨exp(−i k ∫ v dt′)⟩ = 1 − i k ∫ ⟨v(t′)⟩ dt′ − (k²/2) ∫∫ ⟨v(t′) v(t″)⟩ dt′ dt″ + ⋯.    (6.49)

The second term vanishes because ⟨v(t′)⟩ = 0. In fact, all the odd-powered terms vanish. We turn to the third term after running the integrals from time −t/2 to t/2 instead of 0 to t. This symmetry helps in analyzing the result.

(k²/2) ∫_{−t/2}^{t/2} ∫_{−t/2}^{t/2} ⟨v(t′) v(t″)⟩ dt′ dt″.    (6.50)

The double integral can be computed by a transformation of variables from (t′, t″) to a one-dimensional integral. Referring to Figure 6.10, as the integrand depends only on the difference t′ − t″, the integration is best done by summing slabs that lie at a 45° angle (along which t′ − t″ is constant) to the horizontal. The length and width of each slab follow from the geometry of the right isosceles triangles shown in the figure, and the square roots arising from the slab length and width cancel, leaving us with

(k²/2) ⟨v²⟩ t ∫_{−t}^{t} ρ(ξ) (1 − |ξ|/t) dξ,    (6.51)

where ρ(ξ) is the autocorrelation function for lag of magnitude |ξ|. Note that ρ(0) = 1 and ⟨v²⟩ is the variance of the velocity field. We specify that the integration time t is very long compared to the autocorrelation time τ_a indicated in Figure 6.11. This means that the kernel (nearly an isosceles triangle of base width 2t and height unity) of the integral is concentrated near ξ = 0, and the expression reduces to approximately

k² ⟨v²⟩ τ_a t.    (6.52)

After shifting the integration interval back to (0, t) to regain the original notation, we can now identify

D = C ⟨v²⟩ τ_a,    (6.53)

which gives us a handy formula for the diffusion coefficient in terms of the variance of the wind speed and its autocorrelation time. The last expression is reminiscent of the form of (6.36). Note that the units of D/C are (length)² (time)⁻¹, as appropriate when the diffusion equation is written with the coefficient of ∂T/∂t set to unity. When the approximations above are valid, we can think of diffusion equations as the equations describing the ensemble averages of (damped) random walk equations. Basically, the approximation is that the winds are white noise (τ_a of a few days) compared to our averaging times. All timescales, including the relaxation time of a column of air (about 30 days) and forcing functions such as the seasonal cycle (a few months), must be long compared to the weather–noise timescale (a few days). This suggests the following:
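
The identification of ⟨v²⟩τ_a with a diffusivity can be checked by simulating the random advection directly: the ensemble variance of the displacement X(t) = ∫ v dt′ grows like 2⟨v²⟩τ_a t once t ≫ τ_a. A sketch using an Ornstein–Uhlenbeck wind (all numbers illustrative):

```python
import math
import random

def empirical_diffusivity(sigma_v=1.0, tau_a=0.1, T_end=50.0, dt=0.01,
                          trials=300, seed=2):
    """Estimate D' = Var(X(T)) / (2 T) for X(t) = integral of v dt,
    where v is an Ornstein-Uhlenbeck process with variance sigma_v**2
    and correlation time tau_a.  Theory predicts D' ~ sigma_v**2 * tau_a."""
    rng = random.Random(seed)
    a = math.exp(-dt / tau_a)                 # one-step OU autoregression
    b = sigma_v * math.sqrt(1.0 - a * a)      # keeps the variance stationary
    steps = int(T_end / dt)
    mean_square = 0.0
    for _ in range(trials):
        v, X = rng.gauss(0.0, sigma_v), 0.0
        for _ in range(steps):
            X += v * dt
            v = a * v + b * rng.gauss(0.0, 1.0)
        mean_square += X * X
    return mean_square / trials / (2.0 * T_end)
```

Here D′ plays the role of D/C in (6.53); with σ_v = 1 and τ_a = 0.1 the estimate should cluster around 0.1, up to sampling error.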

6.54

When this approximation is valid, we can safely think of the EBCM equation as an equation for the ensemble average of T. Note that there is a residual whose ensemble average is zero. Later, we will use this residual as a noise driver of climate fluctuations.

6.6 Numerical Methods

6.6.1 Explicit Finite Difference Method

6.55

where

6.57

The coefficient in (6.57) was constructed by multiplying numerator and denominator by Δx, then using x = sin θ and dx = cos θ dθ. We start with a grid from −1 to +1, divided into J equal segments. More details on the grid are presented in the following. The centered finite difference form for the first derivative is

dT/dx ≈ (T_{j+1} − T_{j−1}) / (2Δx).    (6.58)

The finite difference form of the derivative of (1 − x²) ∂T/∂x is then

6.59

where

6.60

To step forward in time, we have the following algorithm (in the explicit finite difference case):

where the important parameter η is given by

η = D Δt / [C (Δx)²].    (6.62)

Next, we must make sure our solution enforces the Neumann boundary conditions, implying that no net heat flux enters the infinitesimal latitude circles surrounding the poles:

(1 − x²) ∂T/∂x → 0  as  x → ±1.    (6.63)

The fact that the time-dependent version of the equation has a regular singular point at the poles poses a potential problem because, as noted earlier, small numerical errors can lead to erroneous encroachment by the irregular solutions (the Legendre functions of the second kind). This does not turn out to be a serious problem, as we will see. The most straightforward way of representing the equation in finite difference form is to break the interval −1 ≤ x ≤ 1 into J intervals of width Δx = 2/J. Similarly, the time is discretized in steps of Δt.

Deferring discussion of the boundary points, we see that the term on the left is the value of T_j at time t + Δt, while all the terms on the right are to be evaluated at the previous time t. This is very convenient because we presumably have knowledge of the field at time t. For example, if we start at time t = 0, we know the initial profile of the field. This process allows us to evaluate it at Δt, then 2Δt, and so on. Such an algorithm is very appealing intuitively, as it feels like we are solving the equation just as nature does it. Note that there is a problem at the end points, for the reason that when j = 0, the RHS contains the value T_{−1}, and when j = J, it contains the value T_{J+1}. Hence, we must use the aforementioned algorithm only for j = 1, …, J − 1. Had the boundary conditions been of the Dirichlet type, where the values of the field are specified at the boundaries, we could merely specify the values of T_0 and T_J.

Enforcing the Neumann boundary conditions is a little tricky, as a straightforward implementation of the thinking above fails at the poles. We can get around this by using a centered difference of width 2Δx for these points. We introduce fictitious points just outside the range, T_{−1} and T_{J+1}. Then the boundary conditions read as follows:

T_{−1} = T_1,   T_{J+1} = T_{J−1}.    (6.64)

We compute the outside points using the explicit algorithm, but then force T_{−1} to equal T_1 and likewise T_{J+1} to equal T_{J−1}.

The curves in Figure 6.12 show an example of an implementation of this kind of algorithm, where we have modified the definition of the parameter to include the factor (1 − x²), that is, η_j = D (1 − x_j²) Δt / [C (Δx)²], thereby allowing the stability parameter to be a function of latitude. The forcing is the mean annual insolation. The initial condition for the integration is a uniform temperature (in °C). There are 20 intervals in x from pole to pole, and the time step is 0.01 of the relaxation time τ = C/B. The figure shows a sequence of profiles after 10, 20, 40, 80, 160, 320, and 640 time steps. The last is nearly indistinguishable from the 320 time-step case. The solutions in this case are smooth and stable after integration to 3.20 relaxation times. Advancement to 3000 steps (30 relaxation times) indicates no change. We infer stability of the solution.

If we increase the time step by a factor of 10, to 0.1 of the relaxation time, we run into trouble. This case is shown in Figure 6.13, starting from the same initial condition. The stability parameter increases to 3.20. In the integration, everything goes well until time step 400, at which a small ripple occurs at the equator. By time step 425, the instability is full blown and propagating from the equator toward the poles. The reason for this peculiar behavior is that the explicit method is unstable when the time step is too large compared to the spatial increment. For the diffusion equation, the condition can be made quantitative: the solution will be unstable if η > 1/2. Since η_j is largest at the equator, the instability breaks out there first, as indicated in the figure. The physical interpretation of the instability is that for diffusion or random walk processes, information propagates an rms distance proportional to the square root of DΔt/C. Turning this around, if the time step Δt in the numerical integration is larger than the time required for information to propagate across a grid box in the continuous exact solution, we can expect instability of the numerical procedure. A similar limitation occurs in the integration of wavelike (hyperbolic) equations, with wave motion entering as the propagation mechanism (Δt < Δx/c, c = wave speed).
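
The η > 1/2 instability is easy to reproduce with a toy version of the scheme: plain damped diffusion on a uniform grid, with Neumann ends enforced by reflected ghost points (grid size and damping here are arbitrary, not the figure's settings):

```python
def step_explicit(T, eta, b):
    """One explicit step of u_t = D u_xx - B u in nondimensional form:
    eta = D dt / (C dx**2), b = B dt / C."""
    Tg = [T[1]] + T + [T[-2]]      # reflected ghost points: no-flux ends
    return [T[j] + eta * (Tg[j + 2] - 2.0 * Tg[j + 1] + Tg[j]) - b * T[j]
            for j in range(len(T))]

def max_after(eta, steps=200, J=20, b=0.001):
    """Max |T| after stepping a point anomaly; diverges when eta > 1/2."""
    T = [1.0 if j == J // 2 else 0.0 for j in range(J + 1)]
    for _ in range(steps):
        T = step_explicit(T, eta, b)
    return max(abs(v) for v in T)
```

For η ≤ 1/2 every updated value is a convex-like combination of its neighbors and the field stays bounded; for η > 1/2 the grid-scale sawtooth mode is amplified each step and the solution blows up.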

The finite difference algorithm noted above in (6.56) is not the only one that is accurate with errors of order Δt and (Δx)². The next subsection considers some alternatives that have different numerical stability properties.

6.6.2 Semi-Implicit Method

An intermediate method is often used in practice. It is simply the weighted average of the explicit and implicit algorithms:

6.65

with 0 ≤ α ≤ 1.

One has to gather the coefficients of the unknowns at the new time level and place them onto the left-hand side. The coefficient matrix has to be inverted to obtain the new values. The result is

where a large matrix carrying the spatial indices is involved. The matrix has to be inverted to find the next time-step values as a function of latitude as indicated by the index j. Fortunately, the matrix is usually sparse (most entries are zero), and very fast algorithms are available. A special case that is commonly used is α = 1/2, which is called the Crank–Nicolson scheme. Stability properties can be found in books on numerical analysis, but experience shows that use of this method allows larger time steps before instability sets in. Semi-implicit algorithms are commonly used in the numerical solution of general circulation models.
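
A sketch of the semi-implicit update for the same toy damped-diffusion problem, using the classic Thomas algorithm for the tridiagonal solve (α = 1/2 gives Crank–Nicolson; all sizes are illustrative):

```python
def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal system by forward elimination / back substitution."""
    n = len(rhs)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = sup[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def step_semi_implicit(T, eta, b, alpha=0.5):
    """One step of u_t = D u_xx - B u with weight alpha on the new time level."""
    J = len(T) - 1
    Tg = [T[1]] + T + [T[-2]]                  # reflected ghosts: no-flux ends
    rhs = [T[j] + (1 - alpha) * (eta * (Tg[j + 2] - 2 * Tg[j + 1] + Tg[j])
                                 - b * T[j]) for j in range(J + 1)]
    ae, ab = alpha * eta, alpha * b
    sub = [-ae] * (J + 1)
    sup = [-ae] * (J + 1)
    diag = [1.0 + 2.0 * ae + ab] * (J + 1)
    sup[0] = -2.0 * ae                         # ghost reflection at the poles
    sub[J] = -2.0 * ae
    return thomas(sub, diag, sup, rhs)
```

Unlike the explicit scheme, this remains bounded even for η well above 1/2, at the cost of one cheap tridiagonal solve per step.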

In the finite difference solution to more complicated models treating dynamics of the flows as well as radiation, and so on, it is found to be advantageous to treat some terms on the RHS of the equation with one semi-implicit weighting and other terms with different weightings. These are called splitting methods. Again, they are commonly used in numerical solutions of climate models.

6.7 Spectral Methods

6.7.1 Galerkin or Spectral Method

An alternative to the finite difference methods is to write an approximate solution to the field as a series of (usually) orthogonal functions

T(x, t) ≈ Σ_{n=0}^{N} T_n(t) φ_n(x),    (6.66)

where N is some finite cutoff to the series. A rather obvious choice for the basis functions φ_n for this class of problems is the Legendre polynomials, P_n(x). In the present case, this renders the problem trivial, as the P_n are the eigenfunctions of the diffusion operator. The problem becomes nontrivial if the diffusion coefficient depends on x. Such an EBM may be written as

C ∂T/∂t − ∂/∂x [D(x) (1 − x²) ∂T/∂x] + B T = Q S(x) a(x),    (6.67)

with the usual Neumann boundary conditions at the poles. If the expansion (6.66) is inserted and each side is multiplied by P_m(x) and integrated from pole to pole with respect to x, we obtain

6.68

where

6.69

Now the problem has been reduced to a set of first-order coupled ordinary differential equations for the time-dependent coefficients T_n(t). The coupling matrix is symmetric; hence, in this case, we can even find its eigenvectors and conduct a stability analysis. From discussions earlier in this chapter, it is easy to relate the eigenvalues to relaxation times for the eigenmodes of the problem. The numerical integration will be unstable if the time constant for the highest eigenmode retained (the Nth) is shorter than the time step employed in the time-stepping algorithm. In other words, the criterion encountered in the explicit scheme comes up again unless special precautions are taken.
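
For constant D the coupling matrix is diagonal and (6.68) reduces to one scalar ODE per mode, C dT_n/dt = f_n − [B + n(n + 1)D] T_n. A forward-Euler sketch (parameter and forcing values are hypothetical) that also exhibits the stability constraint set by the fastest retained mode:

```python
def mode_rates(Tn, forcing, B=2.0, D=0.6, C=1.0):
    """dT_n/dt for decoupled Legendre modes with constant coefficients."""
    return [(f - (B + n * (n + 1) * D) * T) / C
            for n, (T, f) in enumerate(zip(Tn, forcing))]

def integrate(Tn, forcing, dt, steps, B=2.0, D=0.6, C=1.0):
    """Forward-Euler stepping; stable only if dt < 2 tau_N for the highest
    retained mode, where tau_N = C / (B + N(N+1) D)."""
    for _ in range(steps):
        rates = mode_rates(Tn, forcing, B, D, C)
        Tn = [T + dt * r for T, r in zip(Tn, rates)]
    return Tn

forcing = [100.0, -40.0, -60.0, 0.0, 0.0]       # hypothetical Q*a*S_n values
equilibrium = [f / (2.0 + n * (n + 1) * 0.6) for n, f in enumerate(forcing)]
```

A small time step converges to the mode-by-mode equilibrium f_n/[B + n(n + 1)D]; a time step exceeding twice the shortest retained relaxation time makes the highest mode blow up, just as in the explicit grid scheme.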

Other basis sets are possible besides the Legendre polynomials. For example, one might try a Fourier series

6.70

This choice is equivalent to the use of Chebyshev polynomials. This method is successful in the present one-dimensional case but requires considerable work and extra discussion when applied to the case of two horizontal dimensions. One advantage is that transformations from grid points to spectral components and vice versa can be accomplished via the fast Fourier transform.

6.7.2 Pseudospectral Method

A complication arises in the spectral method if one of the coefficients is space dependent, for example, D = D(x). Then,

6.71

with

6.72

and

6.73

where

6.74

and the coefficients are known as interaction coefficients. They can be tabulated or generated from recurrence relations. It is important to notice how many are necessary in a high-resolution simulation. For example, in a GCM simulation, the truncation level might typically be 15 or 42. This means there are 3375 or 74 088 coefficients that have to be stored in a lookup table. Some storage savings can be gained by noting that most of the interaction coefficients are 0. However, when the problem is elevated to two dimensions on the sphere, it becomes truly formidable. One way to get around it is to use a so-called pseudospectral method.

In the pseudospectral method, we transform back and forth between a grid-point representation and the spectral representation. The grid-point representation for the sphere involves use of the Gaussian quadrature method of numerically estimating integrals. An integral may be estimated by the sum

∫_{−1}^{1} f(x) dx ≈ Σ_{j=1}^{G} w_j f(x_j),    (6.75)

where the w_j are weights associated with each term and the x_j are unequally spaced but specified points along the x axis. The w_j and x_j are optimal for a given level G. Gaussian integration of order G has the property that it integrates a polynomial of degree 2G − 1 exactly. The abscissas are the zeros of P_G(x), and the weights are w_j = 2 / [(1 − x_j²) (P_G′(x_j))²]. As an example, Figure 6.14 shows a plot of P_G(x) (solid gray line) along with the weights (heavy points on the dashed line) at the roots of P_G(x). Next, the required grid-point quantities are calculated by
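
Gauss–Legendre abscissas and weights are available in standard libraries; a sketch using NumPy (the choice of G and the test integrand are arbitrary) that verifies the degree-(2G − 1) exactness property:

```python
import numpy as np

G = 8
x, w = np.polynomial.legendre.leggauss(G)   # zeros of P_G and their weights

# Polynomials up to degree 2G - 1 = 15 are integrated exactly on [-1, 1]:
approx = float(np.sum(w * x**14))
exact = 2.0 / 15.0                           # integral of x^14 over [-1, 1]
```

The weights sum to 2, the length of the interval, as they must for the constant polynomial.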

6.77

The next time step may now be evaluated:

6.78

where the ratio on the LHS stands for time advancement by some ODE algorithm such as Runge–Kutta. When the new set of spectral coefficients is computed, we can reevaluate the grid-point values by

The procedure delineated in (6.76)–(6.79) can be repeated as many times as necessary, as long as the employed ODE algorithm is stable.

Studies have shown that the pseudospectral method pays off over the interaction-coefficient method when the number of stored coefficients exceeds a few tens of thousands. Since this is rather quickly exceeded in GCM calculations, the pseudospectral method is commonly used in numerical simulations.

6.8 Summary

This chapter has introduced time into the energy balance models. The first result is that for the linear zonally symmetric models, we find that if a climate is perturbed from equilibrium and then released, it relaxes back to its steady-state solution. The relaxation can be characterized as a sum of contributions from relaxation modes, each having an exponential decay, with large scales having longer relaxation times than smaller ones. The zonally symmetric planet can also be solved for its seasonal cycle. It is remarkable that the solar insolation can be represented for many purposes as a sum of terms involving only the Legendre modes 0, 1, and 2. The largest part of the circular orbital insolation is controlled by the Legendre mode of index 1, which is antisymmetric on the globe and driven by a single annual-cycle sinusoid in time. The Legendre mode of index 2 also has a non-negligible contribution in the semiannual cycle, representing the passage of the Sun over the equator twice per year. These simple facts allow the seasonal cycle to be solved for this configuration. We can readily study the major effects on insolation of changing orbital parameters. The amplitude of the annual cycle is much too large if the zonally symmetric planet is considered to be all land and too small if all ocean. We postpone the study of land–sea distribution until Chapter 8.

Next in the chapter was a study of how a patch of heat deposited in a small area spreads laterally to larger circular areas in a time roughly that of the characteristic time τ of the global model, to a radius of the characteristic length λ. The spread has an exponentially damped (time constant τ) Gaussian shape with standard deviation proportional to λ. It was also shown that if the diffusion term is replaced by a random-wind advection term, the ensemble result will be the same as diffusion. If the autocorrelation time of the velocity field is short compared to other times, such as τ, in the problem, then the ensemble average of this term is essentially the diffusion term. This is very close to the Brownian motion problem solved by Einstein in 1905.

The chapter concludes with a study of how 1-D EBMs can be solved by finite difference methods. In particular, an explicit problem is worked through in some detail with an illustration of how the procedure becomes unstable if the time step exceeds a certain threshold depending on the characteristic time and length scales of the EBM. Basically, the time step must be shorter than the spreading time across a grid box in the horizontal direction.

The appendix to the chapter provides some derivations of the orbital dependence of the insolation function.

6.8.1 Parameter Count

We have introduced the time dimension, which means we must now have an effective heat capacity that cannot be calculated from first principles. For the all-land planet and at frequencies around 1/year, our guess is to take it to be 1/2 the heat capacity of the air column's mass at constant pressure. This neglects the heat capacity of soil. We are also neglecting the topography, where atop mountains, the effective heat capacity might be less. For an all-ocean planet we take the effective heat capacity to be that of the ocean's mixed layer. It depends on position, especially latitude, but we take it to be constant. So for the latitude-only model we have introduced one new parameter, but we have learned a lot about how the seasonal cycle works for a uniform planet.

Papoulis (1984) is an excellent book on random processes and spectral analysis written mainly for electrical engineers. It is rather compact, but comprehensive. Numerical methods for partial differential equations are covered in Ames (1992) and Fletcher (1991). Multigrid methods for solving partial differential equations are covered in Hackbusch (1980) and Briggs et al. (2000) and applied to EBMs by Bowman and Huang (1991), Huang and Bowman (1992), and Stevens and North (1996). Mars has no ocean, so a constant heat capacity can be used on the (approximately) homogeneous planet. A one-dimensional model with a zonally symmetric seasonal cycle was used successfully by James and North (1982). Much more can be found about the Martian climate as well as the other planets in Ingersoll's recent book on planetary climates (Ingersoll, 2015). Planetary climates are covered at a higher level of physics by Pierrehumbert (2011).

Exercises

1. 6.1 Given the time-dependent, one-dimensional energy balance model

where C, B, D, and the coalbedo are constants, derive the basic timescale and spatial scale of the model by nondimensionalizing the model.

2. 6.2 Consider the time-dependent, one-dimensional model of the previous exercise, where the parameters C, B, and D are constants. Let the solution of the energy balance model be denoted by T(x, t). An external forcing is applied so that the temperature field is perturbed away from equilibrium into the form T(x, t) = T_eq(x) + δT(x, t). The external forcing disappears instantaneously (a delta function in time). (a) Derive the governing equation for δT(x, t). (b) Solve the governing equation to obtain the solution for δT(x, t).
3. 6.3 Consider next the same one-dimensional model as before with the same constant parameters. Let the solar distribution function be decomposed as follows:

$$S(x,t) = S_0(x) + \mathrm{Re}\!\left[S_1(x)\,e^{2\pi i t}\right],$$

where $\mathrm{Re}(\,\cdot\,)$ denotes the real part of its argument. Solve the energy balance model for $T(x,t)$.

4. 6.4 Show that the solution of the one-dimensional damped diffusion equation

$$C\,\frac{\partial T}{\partial t} = -B\,T + D\,\frac{\partial^2 T}{\partial x^2}, \qquad -\infty < x < \infty,$$

with the initial condition $T(x,0) = f(x)$ is given by

$$T(x,t) = e^{-t/\tau_0} \int_{-\infty}^{\infty} \frac{e^{-(x-x')^2/(4\kappa t)}}{\sqrt{4\pi\kappa t}}\, f(x')\, dx',$$

where $\tau_0 = C/B$ and $\kappa = D/C$.
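A result of this type is easy to check numerically. The sketch below assumes the damped diffusion equation in the form $C\,T_t = -B\,T + D\,T_{xx}$ with illustrative parameters, integrates a Gaussian initial condition with an explicit finite-difference scheme, and compares against the damped heat-kernel solution (for a Gaussian initial profile the convolution can be evaluated in closed form):

```python
import numpy as np

# Numerical check (a sketch): integrate C T_t = -B T + D T_xx with an
# explicit finite-difference scheme from a Gaussian initial condition and
# compare with the damped heat-kernel solution, which for a Gaussian
# initial profile is again a Gaussian. All parameter values illustrative.
C, B, D = 1.0, 1.0, 1.0
kappa, tau0 = D / C, C / B

x = np.linspace(-10.0, 10.0, 401)
dx = x[1] - x[0]
dt = 0.2 * dx**2 / kappa            # satisfies the explicit stability limit
sigma2 = 1.0
T = np.exp(-x**2 / (2 * sigma2))    # T(x, 0) = f(x)

t = 0.0
while t < 1.0:
    lap = (np.roll(T, 1) - 2 * T + np.roll(T, -1)) / dx**2
    T = T + dt * (-T / tau0 + kappa * lap)
    t += dt

# Gaussian f => convolution with the heat kernel is again Gaussian:
s2 = sigma2 + 2 * kappa * t
T_exact = np.exp(-t / tau0) * np.sqrt(sigma2 / s2) * np.exp(-x**2 / (2 * s2))
print(np.max(np.abs(T - T_exact)))  # small discretization error
```

The periodic wrap of `np.roll` is harmless here because the Gaussian tails are negligible at the domain edges.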

5. 6.5 A semi-implicit algorithm for a time-dependent, one-dimensional energy balance model is given by

$$C\,\frac{T_i^{n+1} - T_i^n}{\Delta t} = D\,\frac{T_{i+1}^{n+1} - 2\,T_i^{n+1} + T_{i-1}^{n+1}}{(\Delta x)^2} - \left(A + B\,T_i^n\right) + Q\,S_i\,a_i,$$

where

$$T_i^n = T(x_i, t_n), \qquad x_i = i\,\Delta x, \qquad t_n = n\,\Delta t,$$

and $i$ and $n$ denote a point in space and time, respectively. This equation can be rewritten as

$$\mathsf{A}\,\mathbf{T}^{n+1} = \mathsf{B}\,\mathbf{T}^{n} + \mathbf{f}^{n}.$$

Determine the matrices $\mathsf{A}$ and $\mathsf{B}$ explicitly.
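As a concrete illustration of how such matrices arise, the sketch below treats diffusion implicitly and the damping explicitly on a uniform grid. The grid, boundary treatment, and parameter values are illustrative assumptions (the spherical metric factor is omitted), so this is not necessarily the exact scheme intended in the exercise:

```python
import numpy as np

# Sketch of one semi-implicit step for C T_t = D T_xx - (A + B T) + Q S a:
# diffusion is treated implicitly, damping and forcing explicitly, so each
# step solves M_L T^{n+1} = M_R T^n + dt * F / C.
N = 50
dx = 2.0 / N                 # uniform grid (spherical metric factor omitted)
dt = 1.0 / 365.0             # one day, in years
C, D, B = 0.16, 0.3, 2.0     # heat capacity in W yr m^-2 K^-1, nominal

lam = D * dt / (C * dx**2)
M_L = (np.diag((1 + 2 * lam) * np.ones(N))
       + np.diag(-lam * np.ones(N - 1), 1)
       + np.diag(-lam * np.ones(N - 1), -1))
M_L[0, 1] -= lam             # crude zero-flux ends: reflect missing neighbor
M_L[-1, -2] -= lam
M_R = (1 - B * dt / C) * np.eye(N)

T = np.full(N, 10.0)         # initial temperature, deg C
F = np.zeros(N)              # net heating (Q S a - A), set to zero here
T = np.linalg.solve(M_L, M_R @ T + dt * F / C)
```

With zero net heating, a uniform initial state simply decays by the factor $1 - B\,\Delta t/C$ each step, which is a handy sanity check on the matrices.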

6. 6.6 Consider a time-dependent, one-dimensional energy balance model in the form

$$C\,\frac{\partial T}{\partial t} - \frac{\partial}{\partial x}\left[D\,(1 - x^2)\,\frac{\partial T}{\partial x}\right] + A + B\,T = Q\,S(x,t)\,a(x),$$

where $C$ represents the constant heat capacity and $x$ is the sine of latitude. Set up the solution procedure using the spectral method. Use the Legendre polynomials $P_n(x)$ as basis functions.
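The attraction of the Legendre basis is that, for constant coefficients, each mode evolves independently: $P_n$ is an eigenfunction of the diffusion operator with eigenvalue $-n(n+1)$, so each mode amplitude relaxes with its own time constant, larger scales more slowly, exactly as described at the start of this chapter. A sketch, with illustrative parameter values and a hypothetical heating field:

```python
import numpy as np

# Why Legendre polynomials: P_n is an eigenfunction of d/dx[(1-x^2) d/dx]
# with eigenvalue -n(n+1), so with constant coefficients each mode T_n(t)
# in T(x,t) = sum_n T_n(t) P_n(x) obeys the uncoupled ODE
#   C dT_n/dt + (B + n(n+1) D) T_n = H_n(t),
# with relaxation time tau_n = C / (B + n(n+1) D). Values illustrative;
# H(x) below is a hypothetical heating field.
C, B, D = 0.16, 2.0, 0.4

# Project H(x) onto modes by Gauss-Legendre quadrature:
# H_n = (2n+1)/2 * integral_{-1}^{1} H(x) P_n(x) dx.
xg, wg = np.polynomial.legendre.leggauss(32)
H = 1.0 - 0.48 * 0.5 * (3 * xg**2 - 1)        # H(x) = 1 - 0.48 P_2(x)
for n in (0, 2, 4):
    Pn = np.polynomial.legendre.Legendre.basis(n)(xg)
    H_n = (2 * n + 1) / 2 * np.sum(wg * H * Pn)
    tau_n = C / (B + n * (n + 1) * D)
    print(f"n={n}: H_n={H_n:+.3f}, tau_n={tau_n * 12:.2f} months")
```

Note that the $n = 0$ mode relaxes in about a month with these nominal values, while higher modes are progressively faster.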

7. 6.7 In Chapter 5, the insolation distribution function was determined by using spherical geometry. Consider the situation depicted in Figure 6.15. (a) Determine the normal vectors at the two points shown in terms of the longitude, latitude, and declination angle, and the unit vectors i, j, and k. Then, calculate the cosine of the angle between the two vectors. (b) Determine the range of longitude for which the Sun is visible. (c) Based on your answers for (a) and (b), determine the total amount of insolation received at the surface for a given value of the solar constant.

6.9 Appendix to Chapter 6: Solar Heating Distribution

This chapter made use of the solar heating (insolation) function $S(x,t)$, which is the amount of radiant energy per unit time per unit surface area averaged through the day reaching the top of the atmosphere. In this appendix, we present a short derivation of this function, as its development into Legendre functions in latitude and sinusoids in time helps to understand the excitation of these modes in the response field under different orbital conditions. The appendix is organized as follows. First the derivation of $S(x,t)$ is given, followed by a discussion of how the orbital elements enter the mode amplitudes.

Figure 6.16 shows an octant of the Earth's surface with a given point $P$ singled out for our attention. The North Pole lies in the $\hat{\mathbf{k}}$ direction, and hence the point $P$ will rotate uniformly around the $\hat{\mathbf{k}}$-axis in the course of a day. As the day passes, the longitude angle $\phi$ will vary linearly from 0 to $2\pi$. At a given time of day, $\phi$, the amount of solar power per unit area deposited at $P$ is given by the solar constant times the cosine of the zenith angle $Z$ between the local vertical $\hat{\mathbf{n}}$ and the line joining the Sun and the Earth, $\hat{\mathbf{s}}$. In other words, $\cos Z = \hat{\mathbf{n}} \cdot \hat{\mathbf{s}}$. We define the solar vector $\hat{\mathbf{s}}$ to lie in the $\hat{\mathbf{i}}$–$\hat{\mathbf{k}}$ plane. Using the Cartesian unit vectors $\hat{\mathbf{i}}$, $\hat{\mathbf{j}}$, and $\hat{\mathbf{k}}$, we may write

$$\hat{\mathbf{n}} = \hat{\mathbf{i}}\,\cos\theta\cos\phi + \hat{\mathbf{j}}\,\cos\theta\sin\phi + \hat{\mathbf{k}}\,\sin\theta, \qquad (6.80)$$
$$\hat{\mathbf{s}} = \hat{\mathbf{i}}\,\cos\delta + \hat{\mathbf{k}}\,\sin\delta, \qquad (6.81)$$

where $\theta$ is the latitude of $P$ and $\delta$ is the angle the solar vector makes with the equatorial plane (the $\hat{\mathbf{i}}$–$\hat{\mathbf{j}}$ plane). Clearly, $\delta$ depends on the time of the year. We readily compute

$$\cos Z = \hat{\mathbf{n}} \cdot \hat{\mathbf{s}} = \cos\theta\cos\delta\cos\phi + \sin\theta\sin\delta. \qquad (6.82)$$

To obtain the diurnal average of the solar power at $P$ per unit area, we must integrate $\cos Z$ through the daylight hours (the range of $\phi$ for which $\cos Z > 0$) and divide by the length of the whole day ($2\pi$ in the angle variable). Dawn and dusk can be defined as $\pm\phi_d$, the roots of $\cos Z = 0$.

The half day $\phi_d$ is shown in Figure 6.17. The formula can now be written for the solar power per unit area averaged through the day,

$$\bar q(\theta, t) = \frac{q}{\pi}\left(\phi_d\,\sin\theta\sin\delta + \cos\theta\cos\delta\,\sin\phi_d\right), \qquad (6.83)$$
$$\cos\phi_d = -\tan\theta\tan\delta, \qquad (6.84)$$

where $q$ differs slightly from the total solar irradiance (because it is not annualized). Because of the elliptical orbit,

$$q = q_0\left(\frac{\bar r}{r}\right)^2, \qquad (6.85)$$

where $q_0$ is the total solar irradiance, $r$ is the Earth–Sun distance at the given time of year, and $\bar r$ is the annual average of $r$. Figure 6.18 shows the seasonal cycle of the insolation function for a circular orbit.
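The daily-mean insolation is straightforward to evaluate. The sketch below assumes the standard daily-average expressions with half-day angle $\cos\phi_d = -\tan\theta\tan\delta$, a circular orbit, and the convention that the orbital angle is zero at Northern Hemisphere winter solstice; clipping $\cos\phi_d$ to $[-1, 1]$ handles perpetual day and night near the poles:

```python
import numpy as np

# Daily-mean insolation for a circular orbit: a sketch built from
#   qbar = (q/pi) (phi_d sin(lat) sin(dec) + cos(lat) cos(dec) sin(phi_d)),
#   cos(phi_d) = -tan(lat) tan(dec),  sin(dec) = -sin(obliquity) cos(lam),
# with lam = 0 at NH winter solstice. Clipping cos(phi_d) to [-1, 1]
# reproduces perpetual day (phi_d = pi) and perpetual night (phi_d = 0).
q = 1361.0                        # instantaneous solar flux, W m^-2 (nominal)
obliquity = np.radians(23.45)

def daily_mean_insolation(lat, lam):
    dec = np.arcsin(-np.sin(obliquity) * np.cos(lam))
    cos_phid = np.clip(-np.tan(lat) * np.tan(dec), -1.0, 1.0)
    phid = np.arccos(cos_phid)
    return (q / np.pi) * (phid * np.sin(lat) * np.sin(dec)
                          + np.cos(lat) * np.cos(dec) * np.sin(phid))

print(daily_mean_insolation(np.radians(0.0), np.pi / 2))  # equator, equinox: q/pi ~ 433
print(daily_mean_insolation(np.radians(80.0), 0.0))       # polar night: 0.0
```

The same function, swept over latitude and time, reproduces the qualitative structure of Figure 6.18, including the discontinuous derivative at the onset of polar day and night.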

6.9.1 The Elliptical Orbit of the Earth

The Sun lies at the focus of an ellipse that constitutes the Earth's trajectory over the year (Figure 6.19). Using the Sun as the origin of a polar coordinate system with $\lambda$ as polar angle in the ecliptic plane (celestial longitude), we can write

$$r = \frac{r_0}{1 + e\cos(\lambda - \Lambda)}, \qquad (6.86)$$

where $e$ is the eccentricity (presently $e \approx 0.017$),

$$r_0 = a\,(1 - e^2), \qquad (6.87)$$

with $a$ the semimajor axis, and $r_0$ is very nearly the average of $r$ (to fourth order in powers of $e$). The value of $\Lambda$ determines the time of year of the closest approach of the Earth to the Sun (perihelion). If we arbitrarily choose $\lambda = 0$ at winter solstice (December 22 in the present epoch), then at present $\Lambda \approx 0$, as perihelion occurs at present within a few weeks of (Northern Hemisphere) winter solstice. However, $\Lambda$ increases linearly through $2\pi$ in a time of 22 000 years, leading to a passing of perihelion through the seasons with a period of 22 000 years.

In what follows, we can take $\Lambda$ to be fixed and consider the time dependence of $\lambda$. If the Earth's orbit were circular, $\lambda$ would be linear with time:

$$\lambda = 2\pi t \qquad (t \text{ in years}). \qquad (6.88)$$

Since the Earth's orbit is really elliptical, we must use the conservation of angular momentum to compute $\lambda(t)$. This is expressed as $r^2\,d\lambda/dt = \text{const}$. After some manipulation, it can be shown that, to the first order in powers of $e$,

$$\lambda(t) \approx 2\pi t + 2e\,\sin(2\pi t - \Lambda). \qquad (6.89)$$
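The first-order result can be checked by integrating the angular-momentum constraint directly. A sketch, assuming the orbit $r = r_0/(1 + e\cos(\lambda - \Lambda))$ and present-day nominal values of $e$ and $\Lambda$:

```python
import numpy as np

# Sketch: conservation of angular momentum, r^2 dlam/dt = const, for the
# ellipse r = r0 / (1 + e cos(lam - Lam)), integrated by forward Euler and
# compared with the first-order formula lam ~ 2 pi t + 2 e sin(2 pi t - Lam).
e, Lam = 0.0167, 0.0
n_steps = 100_000
dt = 1.0 / n_steps                      # time step, in years

# dlam/dt = (h / r0^2) (1 + e cos(lam - Lam))^2; requiring a period of
# exactly one year fixes the constant h / r0^2 = 2 pi / (1 - e^2)^(3/2).
h_over_r02 = 2 * np.pi / (1 - e**2) ** 1.5

lam = 0.0
lam_num = [lam]
for k in range(n_steps):
    lam += dt * h_over_r02 * (1 + e * np.cos(lam - Lam)) ** 2
    lam_num.append(lam)
lam_num = np.array(lam_num)

t_grid = np.linspace(0.0, 1.0, n_steps + 1)
lam_approx = 2 * np.pi * t_grid + 2 * e * np.sin(2 * np.pi * t_grid - Lam)
print(lam_num[-1] / (2 * np.pi))             # one revolution per year
print(np.max(np.abs(lam_num - lam_approx)))  # small O(e^2) discrepancy
```

The residual between the integrated $\lambda(t)$ and the first-order formula is of order $e^2$, negligible for the Earth's nearly circular orbit.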

6.9.2 Relation Between Declination and Obliquity

The obliquity or tilt of the Earth's orbit, $\delta_0$, is the angle between the spin axis $\hat{\mathbf{k}}$ and the normal to the ecliptic plane. At present, this angle is about $23.5^\circ$. The declination on a given day is the angle between $\hat{\mathbf{s}}$ and the equatorial plane, that is, $\delta$. With some straightforward geometry, we can show that

$$\sin\delta = -\sin\delta_0\,\cos\lambda. \qquad (6.90)$$

6.9.3 Expansion of $S(x,t)$

Consider now the expansion of $S(x,t)$ into a series of Legendre polynomials $P_n(x)$ and harmonics $\cos(2\pi m t)$, the latter being a Fourier series as the function is strictly periodic in $t$ (units of years here).

$$S(x,t) = \sum_{n,m} S_{nm}\,P_n(x)\,\cos(2\pi m t). \qquad (6.91)$$

In principle, the coefficients $S_{nm}$ can be computed numerically from (6.84) and (6.83); however, the main seasonal driving term can be computed analytically:

$$S_{11} = -2\sin\delta_0 \qquad (6.92)$$
$$\approx -0.796. \qquad (6.93)$$

Furthermore, it can be shown that

$$S_{20} = \frac{5}{16}\left(3\sin^2\delta_0 - 2\right) \approx -0.477. \qquad (6.94)$$

a The coefficients $S_{nm}$ are zero for odd values of $n$ greater than one. $t = 0$ corresponds to the Northern Hemisphere winter solstice.

Table 6.1 presents the first few coefficients for the present orbital parameters. Truncating the series at $n = 2$ and $m = 1$ is an excellent approximation for many purposes. Note that the $S_{01}$ term is due to the eccentric orbit. In fact, $S_{01} \approx 2e$, where the present eccentricity is about 0.016. More numerical values of orbital changes can be found in North and Coakley (1979).
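The leading coefficients can be checked numerically by projecting the daily-mean insolation onto Legendre polynomials and Fourier harmonics. A sketch for a circular orbit, with an assumed normalization (global annual mean set to 1), nominal obliquity, and $t = 0$ at Northern Hemisphere winter solstice:

```python
import numpy as np

# Numerical check (a sketch) of the leading coefficients in
#   S(x, t) = sum_{n,m} S_nm P_n(x) cos(2 pi m t),
# for a circular orbit with t = 0 at NH winter solstice. The insolation is
# normalized so that its global annual mean (S_00) is 1.
delta0 = np.radians(23.45)
q = 4.0                                   # makes the global annual mean 1

def S_daily(x, t):                        # x = sine of latitude
    lam = 2 * np.pi * t
    dec = np.arcsin(-np.sin(delta0) * np.cos(lam))
    lat = np.arcsin(x)
    cos_phid = np.clip(-np.tan(lat) * np.tan(dec), -1.0, 1.0)
    phid = np.arccos(cos_phid)
    return (q / np.pi) * (phid * np.sin(lat) * np.sin(dec)
                          + np.cos(lat) * np.cos(dec) * np.sin(phid))

xg, wg = np.polynomial.legendre.leggauss(64)    # Gauss-Legendre nodes in x
tg = (np.arange(512) + 0.5) / 512               # uniform grid over the year

def S_nm(n, m):
    Pn = np.polynomial.legendre.Legendre.basis(n)(xg)
    Sxt = S_daily(xg[:, None], tg[None, :])
    proj = (2 * n + 1) / 2 * np.sum(wg[:, None] * Pn[:, None] * Sxt, axis=0)
    return (2.0 if m > 0 else 1.0) * np.mean(proj * np.cos(2 * np.pi * m * tg))

print(S_nm(0, 0))   # global annual mean (normalized to 1)
print(S_nm(1, 1))   # main seasonal driving term, ~ -2 sin(delta0)
print(S_nm(2, 0))   # annual-mean equator-to-pole term
```

The quadrature reproduces the low-order coefficients quoted for the present obliquity, confirming that the few-mode truncation captures most of the seasonal insolation field.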