6. Integration in Series: Legendre, Bessel and Chebyshev Functions – Differential Equations

Integration in Series: Legendre, Bessel and Chebyshev Functions

6.1.1 Introduction

If a homogeneous linear differential equation of the form L(y) = (ΣarDr)y = 0 has constant coefficients ar, it can be solved by elementary methods, and the functions involved in the solutions are elementary functions such as xn, ex, log x, sin x, etc. However, if such an equation has variable coefficients, i.e., coefficients that are functions of the independent variable x, it has to be solved by other methods. Legendre's equation and Bessel's equation are two very important equations of this type. We take up below the solution of these equations by application of two standard methods of solution:

1. The power series method and
2. The Frobenius method, which is an extension of the power series method.

6.1.2 Power Series Method of Solution of Linear Differential Equations

The power series method, which gives solutions in the form of power series, is the standard method for solving ordinary linear differential equations with variable coefficients.

Power series

An infinite series of the form a0 + a1(x − x0) + a2(x − x0)2 + … = Σ∞n=0 an(x − x0)n, where a0, a1, a2,… are real constants, is called a power series. These constants are called the coefficients of the series. x0 is a constant and is called the centre of the series, and x is a real variable.

For the choice x0 = 0, we obtain the power series a0 + a1x + a2x2 + … = Σ∞n=0 anxn (6.1)

with the origin as its centre.

Convergence interval

If there exists a positive real number R such that for all x in I = (x0 − R, x0 + R) the series Eq. (6.1) is convergent, then I is called the interval of convergence for the series Eq. (6.1) and R is called the radius of convergence. R is given by R = lim n→∞ |an/an+1|, whenever this limit exists.

Real analytic functions

A real function f(x) is called analytic at x = x0 if it can be represented by a power series in powers of (x − x0) with radius of convergence R > 0.

6.1.3 Existence of Series Solutions: Method of Frobenius

The condition for the existence of power series solutions of the type

for a second order differential equation with variable coefficients, written in the standard form

is that the coefficient functions p(x) and q(x) must be analytic at x = x0 (equivalently, expansible in Taylor's series, or differentiable any number of times). In the case of Bessel's equation

x2y″ + xy′ + (x2 − p2)y = 0 (6.4)

the above condition is not satisfied, since the coefficients 1/x and 1 − p2/x2 in its standard form are not analytic at x = 0, and the power series method fails. The method of Frobenius, which is an extension of the power series method, applies to an important class of equations of the form

x2y″ + xp(x)y′ + q(x)y = 0 (6.5)

where the functions p(x) and q(x) are analytic at x = 0, or equivalently, p(x) and q(x) are expansible as power series. It gives at least one solution that can be represented as

y = xm(a0 + a1x + a2x2 + …) = Σ∞r=0 arxm+r (6.6)

where the exponent m may be any real or complex number and m is chosen so that a0 ≠ 0.

Eq. (6.5) also has a second solution, which is linearly independent of the above solution; it is of the form Eq. (6.6) with a different m and different coefficients, or it contains a logarithmic term.

Bessel's equation (6.4) is of the form Eq. (6.5) with p(x) = 1 and q(x) = x2 − p2, which are analytic at x = 0, so that the method of Frobenius applies.

Regular and singular points

The point x0 at which the coefficients p and q are analytic is called a regular point of Eq. (6.3). Then the power series method can be applied for its solution.

In the case of Legendre's equation

(1 − x2)y″ − 2xy′ + n(n + 1)y = 0

which can be put in the standard form

the functions −2x/(1 − x2) and n(n + 1)/(1 − x2) are expressible as power series about x = 0 and so they are analytic at x = 0. Hence the equation can be solved by the power series method.

If x0 is not regular, it is called a singular point. In respect of Bessel's Eq. (6.4) the point x = 0 is not regular and is therefore a singular point of Eq. (6.4). But xp(x) = 1 and x2q(x) = x2 − p2 are analytic at x = 0 and hence we can apply the Frobenius method for the solution of Eq. (6.4).

The series solution of p(x)y″ + q(x)y′+ r(x)y = 0 by Frobenius method consists of the following steps:

1. Assume that y given by Eq. (6.6) is a solution of the equation.
2. Compute y′ and y″ and substitute for y, y′ and y″ in the equation.
3. Equate to zero the coefficient of the lowest power of x; it gives a quadratic in m, which is known as the indicial equation, giving two values for m.
4. Equate to zero the coefficients of the other powers of x and find the values of a1, a2, a3,… in terms of a0.

6.1.4 Legendre Functions

The power series method of solution can be applied to find the solution of Legendre's differential equation

(1 − x2)y″ − 2xy′ + n(n + 1)y = 0

which arises in boundary value problems having spherical symmetry. Here n is a real constant. We can put Eq. (6.7) in the standard form

by dividing it by the coefficient (1 − x2) of y″ in Eq. (6.7). Now, the coefficients of y′ and y in Eq. (6.8) are

respectively, and these functions are analytic at x = 0 (i.e., derivable any number of times). Hence we may apply the power series method for its solution.

Let us assume a power series solution of Eq. (6.7) in the form

Differentiating Eq. (6.9) w.r.t. x twice, we have

Substituting the expressions for y, y′ and y″ from Eqs. (6.9) and (6.10) in Eq. (6.7)

By writing each series in the expanded form and arranging each power of x in a column, we have

Since Eq. (6.9) is a solution of Eq. (6.7), this must be an identity in x. So, the sum of the coefficients of each power of x must be zero. This implies that

In general, when r = 2, 3,…

Simplifying the expression in square brackets, we obtain (n − r)(n + r + 1), so that

ar+2 = −[(n − r)(n + r + 1)]/[(r + 1)(r + 2)] · ar,  r = 0, 1, 2, … (6.15)

This is called a recurrence relation or recursion formula. It gives all the coefficients starting from a2 onwards in terms of a0 and a1, which are considered as arbitrary constants. Thus we have

Substituting these coefficients in Eq. (6.9), we obtain

where

The two series converge for |x| < 1, if they are non-terminating. Since y1 contains only the even powers of x and y2 contains only the odd powers of x, the ratio y1/y2 is not a constant so that y1 and y2 are linearly independent. Hence Eq. (6.17) is a general solution of Eq. (6.7) in the interval −1 < x < 1.
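The recurrence relation (6.15) is easy to iterate numerically. The following Python sketch (the helper name and the sample value n = 2 are ours, purely for illustration) shows how, for integral n, the series of one parity terminates while the other continues indefinitely.

```python
def legendre_series_coeffs(n, a0=1.0, a1=1.0, terms=10):
    """Iterate a_{r+2} = -(n - r)(n + r + 1) a_r / ((r + 1)(r + 2))."""
    a = [0.0] * terms
    a[0], a[1] = a0, a1
    for r in range(terms - 2):
        a[r + 2] = -(n - r) * (n + r + 1) * a[r] / ((r + 1) * (r + 2))
    return a

# For n = 2 the even-power series terminates: y1 = a0(1 - 3x^2),
# proportional to P2(x), while the odd-power series y2 never terminates.
coeffs = legendre_series_coeffs(2)
```

With n = 2 one finds a2 = −3a0 and a4 = a6 = … = 0, while the odd coefficients a3, a5, … stay non-zero.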

6.1.5 Legendre Polynomials Pn(x)

If the parameter n in Legendre's Eq. (6.7) is a non-negative integer, then the right-hand side of Eq. (6.15) is zero when r = n. This implies that an + 2 = an + 4 = … = 0. Hence, if n is even, y1(x) reduces to a polynomial of degree n, while y2(x) remains an infinite series.

Similarly, if n is odd, y2(x) reduces to a polynomial of degree n, while y1(x) remains an infinite series.

In either case, the terminating series solution (i.e., the polynomial solution) of Legendre's equation, multiplied by suitable constants, is called a Legendre polynomial or zonal harmonic of order n and is denoted by Pn(x). The series which is non-terminating is known as Legendre's function of the second kind and is denoted by Qn(x). Thus, for a non-negative integer n the general solution of Legendre's Eq. (6.7) is

where Pn(x) is a polynomial of degree n and Qn(x) is a non-terminating series, which is unbounded at x = ±1.

From Eq. (6.15), we have

Now, all the non-vanishing coefficients may be expressed in terms of the coefficient an of the highest power of x of the polynomial. The coefficient an, which is still arbitrary, may be chosen as an = 1 when n = 0 and

an = (2n)!/(2n(n!)2) = 1·3·5 ⋯ (2n − 1)/n!  when n = 1, 2, 3, …

For this choice of an, all these polynomials will have the value 1 when x = 1, i.e., Pn(1) = 1. We obtain from Eq. (6.21) and Eq. (6.22),

The resulting solution of Legendre's differential equation Eq. (6.7) is called the Legendre polynomial of degree n, denoted by Pn(x) and is given by

where M = n/2 or (n − 1)/2 according as n is even or odd so that M is an integer.
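The explicit sum just described can be evaluated directly; the small Python sketch below (the helper name is ours) implements Pn(x) term by term and can be used, for instance, to confirm Pn(1) = 1.

```python
import math

def legendre_P(n, x):
    """P_n(x) = sum_{r=0}^{M} (-1)^r (2n - 2r)! x^{n-2r}
       / (2^n r! (n - r)! (n - 2r)!),  with M = floor(n/2)."""
    M = n // 2
    return sum((-1) ** r * math.factorial(2 * n - 2 * r)
               / (2 ** n * math.factorial(r) * math.factorial(n - r)
                  * math.factorial(n - 2 * r)) * x ** (n - 2 * r)
               for r in range(M + 1))
```

For example, legendre_P(2, x) reproduces (3x2 − 1)/2.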

Legendre's equation Eq. (6.7) can also be solved as a series of descending powers of x, assuming a solution of Eq. (6.7) in the form

Differentiating w.r.t. x twice, we obtain

Substituting these expressions in Eq. (6.7)

Equating to zero the coefficient of the highest power of x, i.e., xm, we get a0(n − m)(n + m + 1) = 0, which is obtained by putting r = 0 in the coefficient of xm−r in Eq. (6.25). This implies that m = n or m = −n − 1 (∵ a0 ≠ 0)

Equating now to zero the coefficient of the next power of x, i.e., xm − 1, we get a1(n − m + 1)(n + m) = 0, which is obtained by putting r = 1 in the coefficient of xm−r in Eq. (6.25). This implies that a1 = 0 (∵ m ≠ −n and m ≠ n + 1)

To obtain the recurrence relation, we equate the coefficient of xmr to zero:

From this relation, we see that a1 = a3 = a5 = … = a2r − 1 = 0.

Case (i) m = n, We have

Putting r = 2, 4, 6,…, we have

We obtain a solution of Eq. (6.7) as

If we choose a0 = 1·3·5 ⋯ (2n − 1)/n!, the first solution y1 is obtained; it is called the Legendre polynomial, denoted by Pn(x), and is known as Legendre's function of the first kind of degree n.

Case (ii) m = −n − 1, In this case,

Putting r = 2, 4, 6…

Choosing a0 = n!/(1·3·5 ⋯ (2n + 1)), the second solution y2 of Legendre's Eq. (6.7) is obtained; it is called Legendre's function of the second kind of degree n and is denoted by Qn(x). The general solution of Legendre's Eq. (6.7) is y = APn(x) + BQn(x), where A and B are arbitrary constants.

The first few Legendre polynomials are shown in Fig. 6.1.

Figure 6.1 Legendre Polynomials

Rodrigues' Formula for Legendre Polynomials Pn(x)

Example 6.1.1   Establish Rodrigues' formula

Pn(x) = (1/(2nn!)) Dn(x2 − 1)n, where D ≡ d/dx.

Solution   Let y = (x2 − 1)n. Then

Differentiating Eq. (6.27) (n + 1) times using Leibnitz's theorem, we have

which is Legendre's differential equation having Pn(x) and Qn(x) as solutions.

Since v = Dn(x2 − 1)n contains positive powers of x only, v must be a constant multiple of Pn(x).

Note

1. The first few Legendre polynomials obtained earlier can also be obtained using Rodrigues’ formula.
2. Any polynomial f(x) of degree n can be expressed as a linear combination of Legendre polynomials Pn(x) as
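Rodrigues' formula can also be checked by formal differentiation of coefficient lists: the sketch below (names ours) expands (x2 − 1)n by the binomial theorem, differentiates the coefficient list n times, and divides by 2nn!.

```python
import math

def rodrigues_coeffs(n):
    """Coefficients of P_n(x), lowest power first, from Rodrigues'
    formula P_n = (1 / (2^n n!)) D^n (x^2 - 1)^n."""
    # binomial expansion of (x^2 - 1)^n
    c = [0.0] * (2 * n + 1)
    for k in range(n + 1):
        c[2 * k] = math.comb(n, k) * (-1) ** (n - k)
    for _ in range(n):                       # differentiate n times
        c = [i * c[i] for i in range(1, len(c))]
    return [ci / (2 ** n * math.factorial(n)) for ci in c]
```

The output for n = 2 is [−0.5, 0.0, 1.5], i.e. P2(x) = (3x2 − 1)/2, in agreement with the list obtained earlier.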

6.1.6 Generating Function for Legendre Polynomials Pn(x)

Generating function: Let <fn(x)> be a sequence of functions. A function w(t, x) is called a generating function for the functions fn(x) if w(t, x) = Σ∞n=0 fn(x)tn.

Example 6.1.2   Show that (1 − 2xt + t2)−1/2 = Σ∞n=0 Pn(x)tn, i.e., that (1 − 2xt + t2)−1/2 is the generating function for the Legendre polynomials Pn(x).

Solution   We have, by binomial theorem,

where u = 2xt − t2.

Substituting in Eq. (6.28), we get

Thus, the coefficient of tn in the expansion is

6.1.7 Recurrence Relations of Legendre Functions

Example 6.1.3   Show that the following recurrence relations are satisfied by the Legendre polynomials Pn(x):

Solution

To prove RR1

We know that

Differentiating Eq. (6.31) partially w.r.t. t, we get

Multiplying this by (1 − 2xt + t2) and using Eq. (6.31), we get

Equating the coefficients of tn

xPn(x) − Pn−1(x) = (n + 1)Pn+1(x) − 2nxPn(x) + (n − 1)Pn−1(x)

⇒ (2n + 1)xPn(x) = (n + 1)Pn+1(x) + nPn−1(x)
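Recurrence relation RR1 can be spot-checked numerically. The sketch below (helper names ours) evaluates Pn from the explicit sum formula of Section 6.1.5, independently of any recurrence, and verifies that the RR1 residual vanishes.

```python
import math

def P(n, x):
    """P_n(x) from the explicit sum formula, M = floor(n/2)."""
    return sum((-1) ** r * math.factorial(2 * n - 2 * r)
               / (2 ** n * math.factorial(r) * math.factorial(n - r)
                  * math.factorial(n - 2 * r)) * x ** (n - 2 * r)
               for r in range(n // 2 + 1))

def rr1_residual(n, x):
    """(2n + 1)x P_n - (n + 1) P_{n+1} - n P_{n-1}; zero when RR1 holds."""
    return (2 * n + 1) * x * P(n, x) - (n + 1) * P(n + 1, x) - n * P(n - 1, x)
```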

To prove RR2

nPn(x) = xP′n(x) − P′n−1(x)

Differentiating Eq. (6.31) partially w.r.t. t and x, we get respectively

From these we have, on multiplying the first by t and the second by (xt),

Equating the coefficients of tn

To prove RR3

Differentiating RR1: (2n + 1)xPn(x) = (n + 1)Pn+1(x) + nPn−1(x) w.r.t. x we have

To prove RR4

To prove RR5

Multiplying the last equation by x and subtracting it from the previous one

To prove RR6

Note

The above recurrence relations can also be proved by using Rodrigues’ formula.

Example 6.1.4

1. Pn(1) = 1;
2. Pn(−x) = (−1)nPn(x);
3. Pn(−1) = (−1)n.

Solution   We know that

1. Put x = 1 in Eq. (6.32)

Equating the coefficients of tn on both sides Pn(1) = 1.

2. Replacing t by –t and x by –x in Eq. (6.32), we get

Equating the coefficients of tn on both sides

3. Put x = 1 in the above result. Then,

6.1.8 Orthogonality of Functions

A set of functions f1, f2, f3, … defined on some interval I = {x ∈ R | a ≤ x ≤ b} is said to be orthogonal on I with respect to a weight function p(x) > 0 if

∫ab p(x)fm(x)fn(x) dx = 0  for m ≠ n

The norm ‖fn‖ of fn is defined by ‖fn‖ = (∫ab p(x)fn2(x) dx)1/2

The functions are called orthonormal on I if they are orthogonal on I and all have a norm equal to 1.

In respect of functions with p(x) = 1 we simply say ‘orthogonal’. Thus, functions f1, f2, f3, … are orthogonal on some interval I if

∫ab fm(x)fn(x) dx = 0  for m ≠ n

The norm ‖fn‖ of fn is then simply given by ‖fn‖ = (∫ab fn2(x) dx)1/2

The functions are called orthonormal on I if they are orthogonal on I and all of them are of norm equal to 1.

6.1.9 Orthogonality of Legendre Polynomials Pn(x)

Example 6.1.5   Show that the Legendre Polynomials P0(x), P1(x), P2(x),… are orthogonal on I = [−1,1]. That is,

Solution

Case (i) mn

Let u = Pm(x) and v = Pn(x) be the solutions of the Legendre's equation

so that we have

Multiplying Eq. (6.38) by v and Eq. (6.39) by u and subtracting, we have

On transposing, we have

Integrating both sides w.r.t. x between −1 and 1, we get

Case (ii) m = n. We know that

Squaring both sides we have

Integrating both sides w.r.t. x between −1 and 1, we get

by Eq. (6.40).

Equating the coefficients of t2n on both sides, we get

Example 6.1.6   Prove that

Solution

Replacing n by 1,2,…(n − 1),n

6.1.10 Beltrami's Result

Example 6.1.7   Prove that

Solution

Multiplying Eq. (6.48) by (n + 1), Eq. (6.49) by n and adding

6.1.11 Christoffel's Expansion

Example 6.1.8   Prove that P′n(x) = (2n − 1)Pn−1 + (2n − 5)Pn−3 + (2n − 9)Pn−5 + …, the last term being 3P1 or P0 according as n is even or odd.

Solution

Replacing n by (n − 1), we have

Replacing n by (n − 2), (n − 4), (n − 6),… and finally by 2 (if n is even) and 3 (if n is odd).

6.1.12 Christoffel's Summation Formula

Example 6.1.9   Prove that

Solution

(1)·Pm(y) − (2)·Pm(x) yields

Substituting m = 0,1,2,…n in Eq. (6.57)

… …

6.1.13 Laplace's First Integral for Pn(x)

Example 6.1.10   Show that Pn(x) = (1/π) ∫0π [x ± √(x2 − 1) cos φ]n dφ, when n is a positive integer.

Solution   We know that

Let a = 1 − tx and b = t√(x2 − 1). We have a2 − b2 = 1 − 2tx + t2 so that Eq. (6.63) becomes

is the generating function for Pn(x)].

Equating the coefficients of tn,

Example 6.1.11   Show that

Solution   Laplace's first integral for Pn(x) is

6.1.14 Laplace's Second Integral for Pn(x)

Example 6.1.12   Prove that

Solution   We know that

Let a = tx − 1 and b = t√(x2 − 1). Then

where

Equating the coefficients of on both sides

6.1.15 Expansion of f(x) in a Series of Legendre Polynomials

Example 6.1.13   Let f(x) be expressible in a series of Legendre polynomials. Thus

where Cn are constants to be determined.

Then multiplying both sides by Pn(x) and integrating w.r.t. x from −1 to 1, we get

Example 6.1.14   Show that and hence express 2x2 − 4x + 2 in terms of Legendre polynomials.

[JNTU 2003S]

Solution   From Rodrigues’ formula

Taking n = 0, 1, 2 we have

Example 6.1.15   Express x3 + 2x2 − x − 3 in terms of Legendre polynomials.

Solution   We have P0(x) = 1, P1(x) = x,

Example 6.1.16   Using Rodrigues’ formula prove that

Solution   Rodrigues’ formula for Pn(x) is

(∵ Dn−1 (x2 − 1)n contains (x2 − 1) as a factor)

Proceeding similarly, integrating by parts (n − 1) times, we get

Example 6.1.17   Prove that (1 − 2xt + t2)−½ is a solution of the equation

Solution   We know that

Denoting each side by u, we have

From Eqs. (6.65) and (6.66) we have

since Pn satisfies Legendre's equation:

(1 − x2)y″ − 2xy′ + n(n + 1)y = 0

Example 6.1.18   Prove that

[JNTU 2005S (Set 4)]

Solution   Since

Example 6.1.19   Expand f(x) in a series of Legendre polynomials if

Solution   Let f(x) = C0P0(x) + C1P1(x) + … + CnPn(x) + …. Then

EXERCISE 6.1
1. Show that (a)

[Hint: Use generating functions or

2. Show that

[Hint: Use orthogonal property.]

3. Show that

4. Show that (a)

[JNTU 2006 (Set 2)]

5. Show that

6. Show that

7. Show that

8. Show that

[Hint: Use recurrence relation RR1 and orthogonal property.]

9. Show that

[Hint: Use RR1]

[JNTU 2003S (Set 4)]

10. Show that 2P3(x) + 3P1(x) = 5x3.

[JNTU 2006 (Set 3)]

6.2.1 Introduction

Bessel functions are solutions of Bessel's differential equation, which arises in the solution of Laplace's equation in cylindrical coordinates. The equation occurs in many boundary value problems arising in electrical fields, mechanical vibrations and heat conduction.

6.2.2 Bessel Functions

The second order linear differential equation

is called Bessel's equation of order p. Its particular solutions are called Bessel functions of order p. Here p is a real (or complex) constant; we assume p to be a real non-negative number.

As mentioned earlier, Eq. (6.68) can be solved by the method of Frobenius. Writing Eq. (6.68) in the standard form we have

We assume a solution of Eq. (6.69) in the form

Differentiating Eq. (6.70) w.r.t. x twice, we get

Substituting Eqs. (6.70)−(6.72) into Eq.(6.68) we obtain

Equation (6.73) must be identically satisfied. This implies that the coefficient of each power of x must be zero.

The lowest power of x is xm, obtained for r = 0. Equating to zero the coefficient of xm, we get the indicial equation

(m2 − p2)a0 = 0 ⇒ m = ±p  (∵ a0 ≠ 0)

Equating to zero the coefficient of xm+1, we obtain [(m + 1)2 − p2]a1 = 0 ⇒ a1 = 0 ∵ (m + 1)2 − p2 ≠ 0

Generally equating to zero the coefficient of xm+r+2, we get the coefficient recurrence relation

[(m + r + 2)2 − p2]ar+2 + ar = 0

Putting r = 1, 3, 5, … we see that the odd coefficients vanish, i.e., a1 = a3 = a5 = … = 0.

Putting r = 0, 2, 4, … we get

A solution of Eq. (6.68) is

We get different types of solutions depending on the values of p.

6.2.3 Bessel Functions of Non-integral Order p: Jp(x) and J−p(x)

Case (i) p is not an integer.

In this case, we get two linearly independent solutions for m = p and m = −p. For m = p, we have

Since a0 is arbitrary we choose it as a0 = 1/(2p Γ(p + 1)) and denote the resulting solution by Jp(x). This is called the Bessel function of the first kind of order p. Thus

A second linearly independent solution corresponding to m = −p is

which is also called Bessel function of the first kind of order −p. Both the series converge for all x as can be seen by D’ Alembert's ratio test.
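The series for Jp(x) is straightforward to sum in floating point, with math.gamma supplying Γ(p + r + 1). The sketch below is a truncated-series approximation, not a library routine; the helper name and the cutoff of 30 terms are ours.

```python
import math

def J(p, x, terms=30):
    """Partial sum of J_p(x) = sum_r (-1)^r (x/2)^{2r+p} / (r! Gamma(p+r+1))."""
    return sum((-1) ** r * (x / 2.0) ** (2 * r + p)
               / (math.factorial(r) * math.gamma(p + r + 1))
               for r in range(terms))
```

For p = 1/2 this collapses to an elementary function (see Section 6.2.10), which gives a convenient cross-check.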

When p is not an integer, the complete solution of Eq. (6.68) is

where A and B are arbitrary constants.

6.2.4 Bessel Functions of Order Zero and One: J0(x), J1(x)

Case (ii) p = 0

Putting p = 0 in Eq. (6.68) we obtain Bessel's equation of order zero as

Its solution obtained from Eq. (6.76) by putting p = 0 and taking a0 = 1 is

which is the solution of Eq. (6.80) if m = 0.

Thus, the first solution of Eq. (6.80) is

which is called Bessel's function of the first kind of order zero, and appears to be similar to the cosine function.

For p = 1, we obtain the Bessel function of order 1

This looks similar to the sine function. The height of the waves decreases with increasing x.

Dividing Bessel's Eq. (6.68) by x2 we can put it in the standard form y″ + (1/x)y′ + (1 − n2/x2)y = 0, where p has been replaced by n. The term n2/x2 is 0 for n = 0. Further, this term as well as the term (1/x)y′ are small in absolute value for large x, so that Bessel's equation comes closer to y″ + y = 0, whose solutions are the sine and cosine functions. Also, (1/x)y′ acts as a damping term, which is partly responsible for the decrease in height (see Fig. 6.2).

For large x, we can derive the result that

Jn(x) ≈ √(2/(πx)) cos(x − nπ/2 − π/4)

Figure 6.2 Bessel functions of the first kind

6.2.5 Bessel Function of Second Kind of Order Zero Y0(x)

Differentiating Eq. (6.81) partially w.r.t. m, we get

A second independent solution of Eq. (6.80) denoted by Y0(x) is given by

Y0(x) is called the Bessel function of the second kind of order zero or Neumann function. So, the complete solution in Bessel Eq. (6.80) of order zero is

where A and B are arbitrary constants.

6.2.6 Bessel Functions of Integral Order: Linear Dependence of Jn(x) and J−n(x)

Case (iii) p = n (an integer):

To prove that J−n(x) = (−1)nJn(x).

Taking p = −n, where n is a positive integer, so that −n is a negative integer, we obtain from Eq. (6.78)

This proves that when p is an integer Bessel functions Jp(x) and J−p(x) are linearly dependent.

6.2.7 Bessel Functions of the Second Kind of Order n: Yn(x): Determination of Second Solution Yn(x) by the Method of Variation of Parameters

Since Jn(x) and J−n(x) are linearly dependent when n is an integer, the second solution of Eq. (6.68) is obtained by the following method.

Let y = uv, where v = Jn(x), be a solution of Eq. (6.68). Then

Substituting in Eq. (6.68), we get

The coefficient of u is zero since v = Jn(x) is a solution of Eq. (6.68).

and integrating.

Hence the complete solution of Eq. (6.68) when p = n, an integer, is

y(x) = AJn(x) + BYn(x)

where Yn(x) = Jn(x) ∫ dx/(x[Jn(x)]2), and A and B are arbitrary constants.

6.2.8 Generating Functions for Bessel Functions

Let <fn(x)> be a sequence of functions fn. A function w(t, x) such that w(t, x) = Σn fn(x)tn is called a generating function of the functions fn, as already given at Section 6.1.6 (p. 6–12).

Generating function for Bessel functions Jn(x) of integral order n

Example 6.2.1   Prove that

Solution

In the product on the RHS

Coefficient of

Coefficient of

Coefficient of

The result follows from Eqs. (6.88), (6.89) and (6.90).

6.2.9 Recurrence Relations of Bessel Functions

Example 6.2.2   The following recurrence relations connect Bessel functions of different orders and are very useful in solving problems involving Bessel functions. These relations are true for general p but we prove them taking p = n.

Solution

To prove RR1

To prove RR2

To prove RR3

on dividing by xn.

To prove RR4

on dividing by xn.

To prove RR5

To prove RR6

Subtracting, we get
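One standard consequence of the above recurrence relations is Jn−1(x) + Jn+1(x) = (2n/x)Jn(x), which can be spot-checked against the defining series; the sketch below (helper names ours) does exactly that.

```python
import math

def J(n, x, terms=30):
    """Truncated series for the Bessel function J_n(x), n a non-negative integer."""
    return sum((-1) ** r * (x / 2.0) ** (2 * r + n)
               / (math.factorial(r) * math.gamma(n + r + 1))
               for r in range(terms))

def recurrence_residual(n, x):
    """J_{n-1}(x) + J_{n+1}(x) - (2n/x) J_n(x); zero when the relation holds."""
    return J(n - 1, x) + J(n + 1, x) - (2.0 * n / x) * J(n, x)
```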

6.2.10 Bessel's Functions of Half-integral Order

Bessel functions Jp of orders p = ±1/2, ±3/2, ±5/2, … are elementary functions. They can be expressed in terms of sines and cosines and powers of x.

Example 6.2.3   Prove that

Solution   (i) We know that

Putting

on multiplying and dividing by x, and noting

(ii) We know that

Putting

(iii) Squaring and adding the above results we have

Example 6.2.4   Prove that

Solution

6.2.11 Differential Equation Reducible to Bessel's Equation

The differential equation

where λ is a parameter, can be written as

which is Bessel's equation. When p is a non-integer the general solution is

and when p = n is an integer the general solution is

Definition of orthogonality of functions

A set of functions f1, f2, … defined on some interval I = [a, b] is said to be orthogonal on I with respect to a weight function w(x) > 0 if

The norm of fm is defined by

The functions are called orthonormal on I if they are orthogonal on I and all have norm equal to unity.

For ‘orthogonal w.r.t. w(x) = 1’ we simply say orthogonal on I. Thus, functions f1,f2,… are orthogonal on some interval I if

The norm of fm is then simply defined by

and the functions are called orthonormal on axb if they are orthogonal there and all have norm equal to unity.

Orthogonal set, orthonormal set

Example 6.2.5   Prove that the functions fm(x) = sin mx (m = 1, 2, 3, …) form an orthogonal set on −π ≤ x ≤ π.

Solution   Here a = −π, b = π, fm(x) = sin mx, and we have

since the functions vanish at both the limits.

∴ The norm is ‖fm‖ = √π; hence the orthonormal set is sin x/√π, sin 2x/√π, sin 3x/√π, …

Orthogonality of Bessel functions

Example 6.2.6   Prove that, for each fixed non-negative integer n,

where α and β are roots of Jn(λa) = 0.

Solution   Let u(x)=Jn(ax) and v(x)=Jn(βx) be the solutions of the equations

respectively. Hence

Now multiplying Eq. (6.91) by and Eq. (6.92) by and subtracting.

Integrating both sides of Eq. (6.93) w.r.t. x from x=0 to a

This implies that

Case (i) α≠β

That is, α and β are two distinct roots of Jn(λa) = 0, so that Jn(αa) = Jn(βa) = 0.

Therefore, we obtain from Eq. (6.95)

which is the orthogonality relation for Bessel functions.

Case (ii) α=β

In this case, the RHS of Eq. (6.95) assumes the indeterminate form 0/0. Hence we apply L'Hospital's rule by differentiating w.r.t. β and evaluating the limit as β → α.

By recurrence relation RR4:

If we put x = αa in the recurrence relation RR6, we have

Thus for α=β

6.2.13 Integrals of Bessel Functions

Example 6.2.7   Prove that

Solution   Recurrence relation RR1 is

Integrating both sides we have

For p=1, we get

Recurrence relation RR2 is

Integrating both sides, we have

For p=0 we get

In general, the integral ∫ xmJn(x) dx, for integers m and n with m + n ≥ 0, can be integrated by parts completely if m + n is odd.

But when m + n is even, the integral depends on the residual integral ∫ J0(x) dx, which has been tabulated.

Integrating,

6.2.14 Expansion of Sine and Cosine in Terms of Bessel Functions

Example 6.2.8   Show that

1. cos x=J0 −2J2 +2J4 −…
2. sin x=2(J1J3+J5 −…).

Solution   We know that

Put t = eiθ

Separating the real and imaginary parts we have

Putting θ=π/2 we get

Example 6.2.9   Find series expansion for J0(x) and J1(x).

Solution   We know that

Putting n=0 we get

Putting n=1 we get

Example 6.2.10   Express J5(x) in terms of J0(x) and J1(x).

Solution

Putting n=1,2,3,4 we get

where the argument x has been omitted for convenience. Substituting from Eq. (6.97), Eq. (6.98) becomes

Substituting Eqs. (6.101) and (6.97) into Eq. (6.99)

Substituting Eqs. (6.102) and (6.101) into Eq. (6.100),

Example 6.2.11   Express J5/2(x) in terms of sine and cosine functions.

Solution

Example 6.2.12   Prove that

1. J−n(x) = (−1)nJn(x)
2. Jn(−x)=(−1)nJn(x)
3. Jn(x) is an even or odd function of x according as n is even or odd.

Solution

(i) We know that, for any non-negative real p,

Jp(x) = Σ∞r=0 (−1)r/(r! Γ(p + r + 1)) · (x/2)2r+p

Let p=−n where n is a positive integer. Then

Since 1/Γ(r − n + 1) = 0 when r − n + 1 is zero or a negative integer, each term in the above summation is zero as long as r − n + 1 ≤ 0, i.e., as long as r ≤ (n − 1). So, the summation starts with r = n.

(ii) Let n be a negative integer, say n = −m, where m is a positive integer. Then

Replacing x by −x, we have

(iii) We have from (ii) Jn(−x)=(−1)nJn(x)

Thus Jn is even or odd according as n is even or odd.

Example 6.2.13   Prove that

Solution

replacing n by (n +1).

Substituting for and into Eq. (6.109)

Substituting in Eq. (6.112), we get

Example 6.2.14   Prove that Hence show that .

Solution   We have

Putting n = 0, 1, 2, 3,...

Integrating we obtain

where c = 1 since J0(0) = 1 and Jn(0) = 0 for n ≥ 1

Since each term is non-negative

EXERCISE 6.2

Prove the following:

1. [JNTU 2003 (Set 2)]

6.3 CHEBYSHEV POLYNOMIALS

One of the important differential equations that gives rise to special functions is the Chebyshev differential equation

where n is a positive integer. The singularities of Eq. (6.115) are x = ±1. If we seek a power series solution of Eq. (6.115) about x = 0 of the form

then this series solution is convergent in |x| < 1 since the distance between x = 0 and the nearest singularity is 1. Differentiating (6.116) w.r.t. x twice we obtain

Substituting (6.116)–(6.118) in Eq. (6.115) we obtain

Substituting m − 2 = t in the first sum, we get

Since t is a dummy variable, we can combine the third and fourth terms of this equation. Equating the coefficients of various powers of x to zero, we obtain

We have

Substituting in the power series solution, we obtain

where

The series y0(x), y1(x) converge for |x| < 1. Now, y0(x) contains even powers of x only and y1(x) contains odd powers of x only. Hence, the two solutions y0(x), y1(x) are linearly independent solutions of the Chebyshev differential equation (6.115).

As n takes the value zero and even positive integral values we obtain for

Hence, y0(x) reduces to an even degree polynomial as n takes even positive integral values, whereas y1(x) remains an infinite series.

As n takes odd positive integral values, we get for

Hence, y1(x) reduces to an odd degree polynomial as n takes odd positive integral values, whereas y0(x) remains an infinite series. These polynomials give rise to an important class of polynomials called Chebyshev polynomials.

Changing the independent variable x by the substitution x = cosθ we have

Substituting in the differential Eq. (6.115) we obtain

The general solution of the differential Eq. (6.121) is

y(θ) = A cos nθ + B sin nθ

∴ The solution of Eq. (6.115) can be written as

Thus, cos(n cos−1 x) and sin(n cos−1 x) are two linearly independent solutions of Eq. (6.115). Denoting the first solution by Tn(x) we have

which is called the Chebyshev polynomial of first kind. Denoting the second linearly independent solution by

which is called the Chebyshev polynomial of second kind.

Chebyshev polynomials of first kind

The Chebyshev polynomials of first kind are given by

Putting x = cos θ, we note that Tn(cos θ) = cos nθ.

Recurrence relation for Chebyshev polynomials Tn(x)

The Chebyshev polynomials Tn(x) satisfy the recurrence relation

Tn+1(x) = 2xTn(x) − Tn−1(x),  n = 1, 2, 3, … (6.126)

To prove this, we have

where cos−1 x = θ. Hence

and

From Eq. (6.125) we have

T0(x) = cos 0 = 1, T1(x) = cos(cos−1 x) = x

Using the recurrence relation (Eq. (6.126)) we have

Tn(x) is a polynomial of degree n. If n is even Tn(x) is an even degree polynomial and if n is odd Tn(x) is an odd degree polynomial.
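The recurrence (6.126), with T0 = 1 and T1 = x, makes Tn trivial to evaluate, and the result can be compared against the defining form cos(n cos−1 x); the names in the sketch below are ours.

```python
import math

def T(n, x):
    """T_n(x) via T_{k+1} = 2x T_k - T_{k-1}, starting from T0 = 1, T1 = x."""
    t_prev, t_curr = 1.0, x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t_curr = t_curr, 2.0 * x * t_curr - t_prev
    return t_curr

# Agreement with cos(n * arccos x) on [-1, 1]:
check = abs(T(4, 0.25) - math.cos(4 * math.acos(0.25)))
```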

To express xn in terms of Chebyshev polynomials

From Eqs. (6.127) we obtain

Figure 6.3 Chebyshev polynomials

The relationship between and Tn(x) is given by the recurrence relation

Putting x = cosθ we get

Since LHS = RHS, the relation is proved.

Example 6.3.1   Write the following polynomials in terms of Chebyshev polynomials of first kind:

1. 2x2 − 5x + 7
2. 8x3 + 11x2 − 3x + 4
3. 16x4 − 8x3 − 2x2 + 4x − 3

Solution

Example 6.3.2   Find the expression for

1. in terms of T3(x) and T2(x); and
2. in terms of T5(x) and T4(x).

Solution   Consider

1. Taking n = 3 we have

2. Taking n = 5 we have

Zeros of Tn(x)

Equating Tn(x) to zero we obtain cos(n cos−1 x) = 0, so that

xk = cos((2k − 1)π/2n),  k = 1, 2, …, n (6.130)

The n simple zeros of Tn(x) are given by (6.130).
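The zeros xk = cos((2k − 1)π/2n), k = 1, …, n, can be generated and checked directly; the function name below is ours.

```python
import math

def chebyshev_zeros(n):
    """x_k = cos((2k - 1) pi / (2n)), k = 1..n: the n zeros of T_n."""
    return [math.cos((2 * k - 1) * math.pi / (2 * n)) for k in range(1, n + 1)]
```

Each value returned satisfies cos(n cos−1 xk) = 0, and the zeros come out in decreasing order on (−1, 1).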

Turning Points (Extreme values) of Tn(x)

Tn(x) attains its relative maxima and minima at the (n − 1) interior points given by Eq. (6.131). At these points we have

Also, at the end points of the interval [–1, 1] we have

Tn(x) attains its maximum and minimum values ±1 at the (n + 1) points

x′k = cos(kπ/n),  k = 0, 1, …, n

expanding by Binomial Theorem

This shows that Tn(x) is a polynomial of degree n and its leading coefficient is 2n−1.

Generating Function of Chebyshev Polynomials Tn(x)

Prove that

Proof   The function on the LHS of (6.134), which is a function of the two variables x and t, is called the generating function of the Chebyshev polynomials Tn(x). We have

Hence the result (6.134) is proved.

Alternate expansion for Tn(x)

Writing and expanding it by using Binomial Theorem we get

Now, the term containing tn in the expansion of the product tn−r(2x − t)n−r is obtained from

∴ The coefficient of tn from all the terms is bn where

Also, the term containing tn−1 in the expansion of the product tn−r(2x − t)n−r is obtained from

∴ The coefficient of tn−1 from all the terms is bn–1 where

Now, the LHS of Eq. (6.134) yields

Thus, we obtain the coefficient of tn as

In the second sum we put r − 1 = s so that we have, replacing the dummy variable s by r:

consequently

We can also obtain the expression for xn in terms of the Chebyshev polynomials as follows:

If n is even, the last term is .

Integration of Tn(x)

Consider

The above result does not hold good for n = 0 and 1. For these values of n we have

Orthogonality of Chebyshev Polynomials Tn(x)

Prove that

Solution Case (i) m = n = 0. In this case, we get (since T0 = 1)

Case (ii) m = n ≠ 0. In this case, we have to evaluate

Put cos−1 x = θ or x = cos θ, so that dx = −sin θ dθ, and the limits x = −1 and x = 1 become θ = π and θ = 0, respectively.

Case (iii) mn Since Tm(x) = cos (m cos−1 x) and Tn(x) = cos(n cos−1 x) we have to evaluate

Put cos−1 x = θ ⇒ x = cos θ, and the limits x = −1 and x = 1 become θ = π and θ = 0, respectively.

consequently the Chebyshev polynomials Tn(x) are orthogonal w.r.t. the weight function w(x) = 1/√(1 − x2) on [−1, 1].
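Under the substitution x = cos θ used in all three cases, the orthogonality integrals reduce to ∫0π cos mθ cos nθ dθ; the sketch below (names ours) evaluates these numerically and reproduces the three case values 0, π/2 and π.

```python
import math

def weighted_inner(m, n, steps=20000):
    """Midpoint estimate of the integral of cos(m θ) cos(n θ) over [0, π],
    i.e. the Chebyshev inner product of T_m and T_n with weight
    1/sqrt(1 - x^2) after the substitution x = cos θ."""
    h = math.pi / steps
    return h * sum(math.cos(m * (i + 0.5) * h) * math.cos(n * (i + 0.5) * h)
                   for i in range(steps))
```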

Chebyshev Series

Let f(x) be a continuous function, having continuous derivatives on the interval [–1, 1]. Then f(x) can be uniquely written as an infinite series

f(x) = Σ∞n=0 cnTn(x) (6.135)

which converges uniformly in [–1, 1]. Multiplying both sides of the equation by Tm(x)/√(1 − x2), integrating with respect to x over [–1, 1] and using the orthogonal property of Chebyshev polynomials, we get

Hence

If we write the series given in Eq. (6.135) as

then using Eq. (6.137) we can write

This series may also be written as

where Σ′ means that the coefficient of the first term T0(x) is multiplied by 1/2.

Example 6.3.3   Expand f(x) = x3 + x, − 1 ≤ x ≤ 1 in a Chebyshev series.

Solution   We have

f(x) = x3 + x

write f(x) = c0T0(x) + c1T1(x) + c2T2(x) + ...

we obtain

(odd function)

Hence f(x) = (7/4)T1(x) + (1/4)T3(x).
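The expansion x3 + x = (7/4)T1(x) + (1/4)T3(x), with all even coefficients vanishing because f is odd, can be verified pointwise; the sketch below (names ours) rebuilds f from the two Chebyshev terms.

```python
def T(n, x):
    """T_n(x) via the recurrence T_{k+1} = 2x T_k - T_{k-1}."""
    t_prev, t_curr = 1.0, x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t_curr = t_curr, 2.0 * x * t_curr - t_prev
    return t_curr

def reconstruct(x):
    """f(x) = x^3 + x rebuilt from its Chebyshev series (7/4)T1 + (1/4)T3."""
    return 1.75 * T(1, x) + 0.25 * T(3, x)
```

Indeed (7/4)x + (1/4)(4x3 − 3x) = x3 + x.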

EXERCISE 6.3

Prove the following (1–7):

1. Tn(1) = 1.
2. Tn(−1) = (−1)n.
3. Tn(−x) = (−1)n Tn(x).
4. T2n(0) = (−1)n.
5. T2n+1 (0) = 0.
6. .