6
Integration in Series: Legendre, Bessel and Chebyshev Functions
6.1 LEGENDRE FUNCTIONS
6.1.1 Introduction
If a homogeneous linear differential equation of the form L(y) = (Σa_{r}D^{r})y = 0 has constant coefficients a_{r}, it can be solved by elementary methods, and the functions involved in the solutions are elementary functions such as x^{n}, e^{x}, log x, sin x, etc. However, if such an equation has variable coefficients, i.e., coefficients that are functions of the independent variable x, it has to be solved by other methods. Legendre's equation and Bessel's equation are two very important equations of this type. We take up below the solution of these equations by application of two standard methods of solution:
 The power series method and
 The Frobenius^{1} method, which is an extension of the power series method.
6.1.2 Power Series Method of Solution of Linear Differential Equations
The power series method, which gives solutions in the form of power series, is the standard method for solving ordinary linear differential equations with variable coefficients.
Power series
An infinite series of the form a_{0} + a_{1}(x − x_{0}) + a_{2}(x − x_{0})^{2} + … (6.1), where a_{0}, a_{1}, a_{2},… are real constants, is called a power series. These constants are called the coefficients of the series. x_{0} is a constant called the centre of the series, and x is a real variable.
For the choice x_{0} = 0, we obtain the power series a_{0} + a_{1}x + a_{2}x^{2} + …
with the origin as its centre.
Convergence interval
If there exists a positive real number R such that the series Eq. (6.1) is convergent for all x in I = (x_{0} − R, x_{0} + R), then I is called the interval of convergence for the series Eq. (6.1) and R is called the radius of convergence. R is given by R = lim_{m→∞} |a_{m}/a_{m+1}|, when this limit exists.
Real analytic functions
A real function f(x) is called analytic at x = x_{0} if it can be represented by a power series in powers of (x − x_{0}) with radius of convergence R > 0.
6.1.3 Existence of Series Solutions: Method of Frobenius
The condition for the existence of power series solutions of the type
y(x) = Σ_{m=0}^{∞} a_{m}(x − x_{0})^{m}
for a second order differential equation with variable coefficients, written in the standard form
y″ + p(x)y′ + q(x)y = 0 (6.3)
is that the coefficient functions p(x) and q(x) must be analytic at x = x_{0} (equivalently expansible in Taylor's series or differentiable any number of times). In the case of Bessel's equation
x^{2}y″ + xy′ + (x^{2} − n^{2})y = 0 (6.4)
the above condition is not satisfied, since in the standard form p(x) = 1/x and q(x) = 1 − n^{2}/x^{2} are not analytic at x = 0, and the power series method fails. The method of Frobenius, which is an extension of the power series method, applies to an important class of equations of the form
x^{2}y″ + xp(x)y′ + q(x)y = 0 (6.5)
where the functions p(x) and q(x) are analytic at x = 0, or equivalently, p(x) and q(x) are expansible as power series. It gives at least one solution that can be represented as
y(x) = x^{m}(a_{0} + a_{1}x + a_{2}x^{2} + …) (6.6)
where the exponent m may be any real or complex number and m is chosen so that a_{0} ≠ 0.
Eq. (6.5) also has a second solution, which is linearly independent of the above solution in the form Eq. (6.6) with a different m and different coefficients or a logarithmic term.
Bessel's equation (6.4) is of the form Eq. (6.5) with p(x) = 1 and q(x) = x^{2} − n^{2} analytic at x = 0 so that the method of Frobenius applies.
Regular and singular points
The point x_{0} at which the coefficients p and q are analytic is called a regular point of Eq. (6.3). Then the power series method can be applied for its solution.
In the case of Legendre's equation
(1 − x^{2})y″ − 2xy′ + n(n + 1)y = 0
which can be put in the standard form
y″ − (2x/(1 − x^{2}))y′ + (n(n + 1)/(1 − x^{2}))y = 0,
the functions p(x) = −2x/(1 − x^{2}) and q(x) = n(n + 1)/(1 − x^{2}) are expressible as power series about x = 0 and so they are analytic at x = 0. Hence the equation can be solved by the power series method.
If x_{0} is not regular, it is called a singular point. In respect of Bessel's Eq. (6.4) the point x = 0 is not regular and is therefore a singular point of Eq. (6.4). But xp(x) = 1 and x^{2}q(x) = x^{2} − n^{2} are analytic at x = 0 and hence we can apply the Frobenius method for the solution of Eq. (6.4).
The series solution of p(x)y″ + q(x)y′ + r(x)y = 0 by the Frobenius method consists of the following steps:
 Assume that y given by Eq. (6.6) is a solution of the equation.
 Compute y′ and y″ and substitute for y, y′ and y″ in the equation.
 Equate to zero the coefficient of the lowest power of x; it gives a quadratic in m, which is known as the indicial equation, giving two values for m.
 Equate to zero the coefficients of the other powers of x and find the values of a_{1}, a_{2}, a_{3},… in terms of a_{0}.
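As an illustration (added here, not part of the original text), the steps above can be carried out numerically for Bessel's equation of order zero, x^{2}y″ + xy′ + x^{2}y = 0: the indicial equation gives m = 0, and equating the coefficient of x^{r+2} to zero gives the recurrence (r + 2)^{2}a_{r+2} + a_{r} = 0 with a_{1} = 0. A minimal Python sketch (function names are our own):

```python
def frobenius_coeffs(num_terms):
    """Series coefficients a_0 .. a_{num_terms-1} for the m = 0 solution."""
    a = [0.0] * num_terms
    a[0] = 1.0                            # a_0 is arbitrary; choose 1
    for r in range(num_terms - 2):
        a[r + 2] = -a[r] / (r + 2) ** 2   # recurrence from the x^{r+2} coefficient
    return a

def series_val(a, x, deriv=0):
    """Evaluate the deriv-th derivative of sum_k a_k x^k term by term."""
    total = 0.0
    for k, ak in enumerate(a):
        if k < deriv:
            continue
        c = ak
        for j in range(deriv):            # multiply by k (k-1) ... (k - deriv + 1)
            c *= k - j
        total += c * x ** (k - deriv)
    return total

a = frobenius_coeffs(30)
x = 0.7
residual = (x ** 2 * series_val(a, x, 2)
            + x * series_val(a, x, 1)
            + x ** 2 * series_val(a, x, 0))
```

The residual of the differential equation at a sample point is zero up to rounding, confirming that the coefficients generated by the recurrence do produce a solution.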
6.1.4 Legendre Functions
The power series method of solution can be applied to find the solution of Legendre's^{2} differential equation
(1 − x^{2})y″ − 2xy′ + n(n + 1)y = 0 (6.7)
which arises in boundary value problems having spherical symmetry. Here n is a real constant. We can put Eq. (6.7) in the standard form
y″ − (2x/(1 − x^{2}))y′ + (n(n + 1)/(1 − x^{2}))y = 0 (6.8)
by dividing it by the coefficient (1 − x^{2}) of y″ in Eq. (6.7). Now, the coefficients of y′ and y in Eq. (6.8) are −2x/(1 − x^{2}) and n(n + 1)/(1 − x^{2}), respectively, and these functions are analytic at x = 0 (i.e., derivable any number of times). Hence we may apply the power series method for its solution.
Let us assume a power series solution of Eq. (6.7) in the form
y = a_{0} + a_{1}x + a_{2}x^{2} + … = Σ_{r=0}^{∞} a_{r}x^{r} (6.9)
Differentiating Eq. (6.9) w.r.t. ‘x’ twice, we have
Substituting the expressions for y, y′ and y″ from Eqs. (6.9) and (6.10) in Eq. (6.7)
By writing each series in the expanded form and arranging each power of x in a column, we have
Since Eq. (6.9) is a solution of Eq. (6.7), this must be an identity in x. So, the sum of the coefficients of each power of x must be zero. This implies that
In general, when r = 2, 3,…
Simplifying the expression in square brackets, we obtain (n − r)(n + r + 1), so that
This is called a recurrence relation or recursion formula. It gives all the coefficients starting from a_{2} onwards in terms of a_{0} and a_{1}, which are considered as arbitrary constants. Thus we have
Substituting these coefficients in Eq. (6.9), we obtain
where
The two series converge for |x| < 1, if they are nonterminating. Since y_{1} contains only the even powers of x and y_{2} contains only the odd powers of x, the ratio y_{1}/y_{2} is not a constant, so that y_{1} and y_{2} are linearly independent. Hence Eq. (6.17) is a general solution of Eq. (6.7) in the interval −1 < x < 1.
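As a concrete check (our own sketch, not part of the original text), the recursion a_{r+2} = −(n − r)(n + r + 1)/((r + 1)(r + 2)) a_{r} can be used to build the even series (a_{0} = 1, a_{1} = 0) and verify that it satisfies (1 − x^{2})y″ − 2xy′ + n(n + 1)y = 0:

```python
def legendre_coeffs(n, num_terms, a0=1.0, a1=0.0):
    """Series coefficients from the recurrence relation of the text."""
    a = [0.0] * num_terms
    a[0], a[1] = a0, a1
    for r in range(num_terms - 2):
        a[r + 2] = -(n - r) * (n + r + 1) / ((r + 1) * (r + 2)) * a[r]
    return a

def series_val(a, x, deriv=0):
    """deriv-th derivative of sum_k a_k x^k, term by term."""
    total = 0.0
    for k, ak in enumerate(a):
        if k < deriv:
            continue
        c = ak
        for j in range(deriv):
            c *= k - j
        total += c * x ** (k - deriv)
    return total

n, x = 2, 0.3
a = legendre_coeffs(n, 12)
residual = ((1 - x ** 2) * series_val(a, x, 2)
            - 2 * x * series_val(a, x, 1)
            + n * (n + 1) * series_val(a, x, 0))
```

For n = 2 the even series terminates at 1 − 3x^{2}, a constant multiple of P_{2}(x), and the residual of Legendre's equation vanishes to rounding.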
6.1.5 Legendre Polynomials P_{n}(x)
If the parameter n in Legendre's Eq. (6.7) is a nonnegative integer, then the right-hand side of Eq. (6.15) is zero when r = n. This implies that a_{n + 2} = a_{n + 4} = … = 0. Hence, if n is even, y_{1}(x) reduces to a polynomial of degree n, while y_{2}(x) remains an infinite series.
Similarly, if n is odd, y_{2}(x) reduces to a polynomial of degree n, while y_{1}(x) remains an infinite series.
In either case, the terminating series solutions (i.e., the polynomial solutions) of Legendre's equation, multiplied by suitable constants, are called Legendre polynomials or zonal harmonics of order n and are denoted by P_{n}(x). The series which is nonterminating is known as Legendre's function of the second kind and is denoted by Q_{n}(x). Thus, for a nonnegative integer n the general solution of Legendre's Eq. (6.7) is
where P_{n}(x) is a polynomial of degree n and Q_{n}(x) is a nonterminating series, which is unbounded at x = ±1.
From Eq. (6.15), we have
Now, all the nonvanishing coefficients may be expressed in terms of the coefficient a_{n} of the highest power of x of the polynomial. The coefficient a_{n}, which is still arbitrary, may be chosen as a_{n} = 1 when n = 0 and a_{n} = (2n)!/{2^{n}(n!)^{2}} when n = 1, 2, 3, …
For this choice of a_{n}, all these polynomials will have the value 1 when x = 1, i.e., P_{n}(1) = 1. We obtain from Eq. (6.21) and Eq. (6.22),
The resulting solution of Legendre's differential equation Eq. (6.7) is called the Legendre polynomial of degree n, denoted by P_{n}(x) and is given by
where M = n/2 or (n − 1)/2 according as n is even or odd so that M is an integer.
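The closed-form sum can be evaluated directly in code (an illustrative sketch; names are our own): P_{n}(x) = Σ_{r=0}^{M} (−1)^{r}(2n − 2r)!/{2^{n} r!(n − r)!(n − 2r)!} x^{n−2r}.

```python
from math import factorial

def legendre_P(n, x):
    """P_n(x) from the explicit sum; M = n/2 (n even) or (n - 1)/2 (n odd)."""
    M = n // 2
    total = 0.0
    for r in range(M + 1):
        num = (-1) ** r * factorial(2 * n - 2 * r)
        den = 2 ** n * factorial(r) * factorial(n - r) * factorial(n - 2 * r)
        total += num / den * x ** (n - 2 * r)
    return total
```

A quick check reproduces P_{2}(x) = (3x^{2} − 1)/2, P_{3}(x) = (5x^{3} − 3x)/2 and the normalization P_{n}(1) = 1.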
Legendre's equation Eq. (6.7) can also be solved as a series of descending powers of x, assuming a solution of Eq. (6.7) in the form
Differentiating w.r.t. x twice, we obtain
Substituting these expressions in Eq. (6.7)
Equating to zero the coefficient of the highest power of x, i.e., x^{m}, we get a_{0}(n − m)(n − m + 1) = 0, which is obtained by putting r = 0 in the coefficient of x^{m − r} in Eq. (6.25). This implies that m = n or m = −n − 1 (∵ a_{0} ≠ 0).
Equating now to zero the coefficient of the next power of x, i.e., x^{m − 1}, we get a_{1}(n − m + 1)(n + m) = 0, which is obtained by putting r = 1 in the coefficient of x^{m − r} in Eq. (6.25). This implies that a_{1} = 0 (∵ m ≠ −n and m ≠ n + 1).
To obtain the recurrence relation, we equate the coefficient of x^{m − r} to zero:
From this relation, we see that a_{1} = a_{3} = a_{5} = … = a_{2r − 1} = 0.
Case (i) m = n. We have
Putting r = 2, 4, 6,…, we have
We obtain a solution of Eq. (6.7) as
If we choose a_{0} = (2n)!/{2^{n}(n!)^{2}}, the first solution y_{1} is obtained; it is called Legendre's polynomial, denoted by P_{n}(x), and is called Legendre's function of the first kind of degree n.
Case (ii) m = −n − 1. In this case,
Putting r = 2, 4, 6…
Choosing a_{0} = n!/{1 · 3 · 5 ⋯ (2n + 1)}, the second solution y_{2} of Legendre's Eq. (6.7) is obtained; it is called Legendre's function of the second kind of degree n and is denoted by Q_{n}(x). The general solution of Legendre's Eq. (6.7) is y = AP_{n}(x) + BQ_{n}(x), where A and B are arbitrary constants.
The first few Legendre polynomials are shown in Fig. 6.1.
Figure 6.1 Legendre Polynomials
Rodrigues'^{3} Formula for Legendre Polynomials P_{n}(x)
Example 6.1.1 Prove Rodrigues' formula: P_{n}(x) = (1/(2^{n}n!)) (d^{n}/dx^{n})(x^{2} − 1)^{n}.
Solution Let y = (x^{2} − 1)^{n}. Then
Differentiating Eq. (6.27) (n + 1) times using Leibnitz's theorem, we have
which is Legendre's differential equation having P_{n}(x) and Q_{n}(x) as solutions.
Since (d^{n}/dx^{n})(x^{2} − 1)^{n} contains positive powers of x only, v must be a constant multiple of P_{n}(x).
Note
 The first few Legendre polynomials obtained earlier can also be obtained using Rodrigues’ formula.
 Any polynomial f(x) of degree n can be expressed as a linear combination of Legendre polynomials P_{n}(x) as
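Rodrigues' formula can be checked with simple coefficient arithmetic (an illustrative sketch, not part of the text; names are our own): expand (x^{2} − 1)^{n}, differentiate n times term by term, and scale by 1/(2^{n}n!).

```python
from math import comb, factorial

def legendre_via_rodrigues(n):
    """Coefficients (constant term first) of P_n via Rodrigues' formula."""
    # (x^2 - 1)^n = sum_k C(n, k) (-1)^(n - k) x^(2k)
    c = [0] * (2 * n + 1)
    for k in range(n + 1):
        c[2 * k] = comb(n, k) * (-1) ** (n - k)
    for _ in range(n):                        # differentiate n times
        c = [k * c[k] for k in range(1, len(c))]
    scale = 2 ** n * factorial(n)
    return [ck / scale for ck in c]
```

For n = 2 this yields [-0.5, 0.0, 1.5], i.e., P_{2}(x) = (3x^{2} − 1)/2, and for n = 3 it yields [0.0, -1.5, 0.0, 2.5], i.e., P_{3}(x) = (5x^{3} − 3x)/2, agreeing with the polynomials obtained earlier.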
6.1.6 Generating Function for Legendre Polynomials P_{n}(x)
Generating function: Let <f_{n}(x)> be a sequence of functions. A function w(t, x) such that w(t, x) = Σ_{n} f_{n}(x)t^{n} is called a generating function for the functions f_{n}(x).
Example 6.1.2 Show that (1 − 2xt + t^{2})^{−1/2} = Σ_{n=0}^{∞} P_{n}(x)t^{n}, i.e., (1 − 2xt + t^{2})^{−1/2} is the generating function for the Legendre polynomials P_{n}(x).
Solution We have, by binomial theorem,
where u = 2xt − t^{2}.
Substituting in Eq. (6.28), we get
Thus, the coefficient of t^{n} in the expansion is
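The generating function identity can also be confirmed numerically (our own sketch, assuming |t| < 1), computing P_{n}(x) from the recurrence (n + 1)P_{n+1} = (2n + 1)xP_{n} − nP_{n−1} proved below:

```python
def P(n, x):
    """Legendre P_n(x) via the three-term (Bonnet) recurrence."""
    p_prev, p = 1.0, x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

x, t = 0.4, 0.3
lhs = (1 - 2 * x * t + t * t) ** -0.5
rhs = sum(P(n, x) * t ** n for n in range(60))
```

With |t| < 1 the partial sum of 60 terms already matches the closed form to machine precision.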
6.1.7 Recurrence Relations of Legendre Functions
Example 6.1.3 Show that the following recurrence relations are satisfied by the Legendre polynomials P_{n}(x):
Solution
To prove RR1
We know that
Differentiating Eq. (6.31) partially w.r.t. t, we get
Multiplying this by (1 − 2xt + t^{2}) and using Eq. (6.31), we get
Equating the coefficients of t^{n}
xP_{n}(x) − P_{n−1}(x) = (n + 1)P_{n+1}(x) − 2nxP_{n}(x) + (n − 1)P_{n−1}(x)
⇒ (2n + 1)xP_{n}(x) = (n + 1)P_{n+1}(x) + nP_{n−1}(x)
To prove RR2
Differentiating Eq. (6.31) partially w.r.t. t and x, we get respectively
From these we have, on multiplying the first by t and the second by (x − t),
Equating the coefficients of t^{n}
To prove RR3
Differentiating RR1: (2n + 1)xP_{n}(x) = (n + 1)P_{n+1}(x) + nP_{n−1}(x) w.r.t. x we have
To prove RR4
To prove RR5
Multiplying the last equation by x and subtracting it from the previous one
To prove RR6
Note
The above recurrence relations can also be proved by using Rodrigues’ formula.
Example 6.1.4
 P_{n}(1) = 1;
 P_{n}(−x) = (−1)^{n}P_{n}(x);
 P_{n}(−1) = (−1)^{n}.
Solution We know that
 Put x = 1 in Eq. (6.32)
Equating the coefficients of t^{n} on both sides P_{n}(1) = 1.
 Replacing t by –t and x by –x in Eq. (6.32), we get
Equating the coefficients of t^{n} on both sides
 Put x = 1 in the above result. Then,
6.1.8 Orthogonality of Functions
A set of functions f_{1}, f_{2}, f_{3},… defined on some interval I = {x ∈ R | a ≤ x ≤ b} is said to be orthogonal on I with respect to a weight function p(x) > 0 if
∫_{a}^{b} p(x)f_{m}(x)f_{n}(x) dx = 0 for m ≠ n.
The norm ‖f_{n}‖ of f_{n} is defined by
‖f_{n}‖ = (∫_{a}^{b} p(x)f_{n}^{2}(x) dx)^{1/2}.
The functions are called orthonormal on I if they are orthogonal on I and all have a norm equal to 1.
In the case of functions with p(x) = 1 we simply say ‘orthogonal’. Thus, functions f_{1}, f_{2}, f_{3},… are orthogonal on some interval I if
∫_{a}^{b} f_{m}(x)f_{n}(x) dx = 0 for m ≠ n.
The norm ‖f_{n}‖ of f_{n} is then simply given by
‖f_{n}‖ = (∫_{a}^{b} f_{n}^{2}(x) dx)^{1/2},
and the functions are called orthonormal on I if they are orthogonal on I and all of them have norm equal to 1.
6.1.9 Orthogonality of Legendre Polynomials P_{n}(x)
Example 6.1.5 Show that the Legendre polynomials P_{0}(x), P_{1}(x), P_{2}(x),… are orthogonal on I = [−1, 1]. That is,
∫_{−1}^{1} P_{m}(x)P_{n}(x) dx = 0 if m ≠ n, and ∫_{−1}^{1} P_{n}^{2}(x) dx = 2/(2n + 1).
Solution
Case (i) m ≠ n
Let u = P_{m}(x) and v = P_{n}(x) be the solutions of the Legendre's equation
so that we have
Multiplying Eq. (6.38) by v and Eq. (6.39) by u and subtracting, we have
On transposing, we have
Integrating both sides w.r.t. x between −1 and 1, we get
Case (ii) m = n. We know that
(1 − 2xt + t^{2})^{−1/2} = Σ_{n=0}^{∞} P_{n}(x)t^{n}.
Squaring both sides we have
Integrating both sides w.r.t. x between −1 and 1, we get
by Eq. (6.40).
Equating the coefficients of t^{2n} on both sides, we get
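Both orthogonality relations can be verified numerically (an illustrative sketch, not from the text), using Simpson's rule and the Bonnet recurrence for P_{n}:

```python
def P(n, x):
    """Legendre P_n(x) via (n + 1)P_{n+1} = (2n + 1)x P_n - n P_{n-1}."""
    p_prev, p = 1.0, x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

def simpson(f, a, b, steps=2000):         # composite Simpson; steps even
    h = (b - a) / steps
    s = f(a) + f(b)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

i23 = simpson(lambda x: P(2, x) * P(3, x), -1.0, 1.0)   # expect 0
i33 = simpson(lambda x: P(3, x) * P(3, x), -1.0, 1.0)   # expect 2/7
```

The cross integral vanishes and ∫P_{3}^{2} dx agrees with 2/(2n + 1) = 2/7.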
Example 6.1.6 Prove that
Solution
Replacing n by 1,2,…(n − 1),n
Adding we get
6.1.10 Beltrami's Result
Example 6.1.7 Prove that
Solution
Multiplying Eq. (6.48) by (n + 1), Eq. (6.49) by n and adding
6.1.11 Christoffel's Expansion
Example 6.1.8 Prove that P′_{n} = (2n − 1)P_{n−1} + (2n − 5)P_{n−3} + (2n − 9)P_{n−5} + … + 3P_{1} or P_{0} according as n is even or odd.
Solution
Replacing n by (n − 1), we have
Replacing n by (n − 2), (n − 4), (n − 6),… and finally by 2 (if n is even) and 3 (if n is odd).
Adding these
6.1.12 Christoffel's Summation Formula
Example 6.1.9 Prove that
Solution
Substituting m = 0,1,2,…n in Eq. (6.57)
Adding Eqs. (6.58) − (6.62)
6.1.13 Laplace's First Integral for P_{n}(x)
Example 6.1.10 Show that P_{n}(x) = (1/π)∫_{0}^{π}[x ± √(x^{2} − 1) cos φ]^{n} dφ when |x| ≥ 1.
Solution We know that
Let a = 1 − tx and b = t√(x^{2} − 1). We have a^{2} − b^{2} = 1 − 2tx + t^{2}, so that Eq. (6.63) becomes
is the generating function for P_{n}(x)].
Equating the coefficients of t^{n},
Example 6.1.11 Show that
Solution Laplace's first integral for P_{n}(x) is
6.1.14 Laplace's Second Integral for P_{n}(x)
Example 6.1.12 Prove that P_{n}(x) = (1/π)∫_{0}^{π} dφ/[x ± √(x^{2} − 1) cos φ]^{n+1}.
Solution We know that
Let a = tx − 1 and b = t√(x^{2} − 1). Then
where
Equating the coefficients of on both sides
6.1.15 Expansion of f(x) in a Series of Legendre Polynomials
Example 6.1.13 Let f(x) be expressible in a series of Legendre polynomials. Thus
where C_{n} are constants to be determined.
Then multiplying both sides by P_{n}(x) and integrating w.r.t. x from −1 to 1, we get
Example 6.1.14 Show that P_{0}(x) = 1, P_{1}(x) = x and P_{2}(x) = (3x^{2} − 1)/2, and hence express 2x^{2} − 4x + 2 in terms of Legendre polynomials.
[JNTU 2003S]
Solution From Rodrigues’ formula
Taking n = 0, 1, 2 we have
Example 6.1.15 Express x^{3} + 2x^{2} − x − 3 in terms of Legendre polynomials.
Solution We have P_{0}(x) = 1, P_{1}(x) = x,
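Using x^{2} = (2P_{2} + P_{0})/3 and x^{3} = (2P_{3} + 3P_{1})/5, the polynomial of Example 6.1.15 works out to (2/5)P_{3} + (4/3)P_{2} − (2/5)P_{1} − (7/3)P_{0}. A quick numerical confirmation (our own sketch):

```python
def P(n, x):
    """Legendre P_n(x) via the Bonnet recurrence."""
    p_prev, p = 1.0, x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

def rhs(x):
    return (2 / 5) * P(3, x) + (4 / 3) * P(2, x) - (2 / 5) * P(1, x) - (7 / 3) * P(0, x)

checks = [abs((x ** 3 + 2 * x ** 2 - x - 3) - rhs(x)) for x in (-0.9, -0.2, 0.5, 1.0)]
```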
Example 6.1.16 Using Rodrigues’ formula prove that
Solution Rodrigues’ formula for P_{n}(x) is
(∵ D^{n−1} (x^{2} − 1)^{n} contains (x^{2} − 1) as a factor)
Proceeding similarly, integrating by parts (n − 1) times, we get
Example 6.1.17 Prove that (1 − 2xt + t^{2})^{−½} is a solution of the equation
Solution We know that
Denoting each side by u, we have
From Eqs. (6.65) and (6.66) we have
since P_{n} satisfies Legendre's equation:
Example 6.1.18 Prove that
[JNTU 2005S (Set 4)]
Solution Since
Example 6.1.19 Expand f(x) in a series of Legendre polynomials if
Solution Let f(x) = C_{0}P_{0}(x) + C_{1}P_{1}(x) + … + C_{n}P_{n}(x) + …. Then
EXERCISE 6.1
 Show that (a)
[Hint: Use generating functions or
 Show that
[Hint: Use orthogonal property.]
 Show that
 Show that (a)
[JNTU 2006 (Set 2)]
 Show that
 Show that
 Show that
[Hint: Use recurrence relation RR1 and orthogonal property.]
 Show that
[Hint: Use RR1]
[JNTU 2003S (Set 4)]
 Show that 2P_{3}(x) + 3P_{1}(x) = 5x^{3}.
[JNTU 2006 (Set 3)]
6.2 BESSEL FUNCTIONS
6.2.1 Introduction
Bessel functions are solutions of Bessel's differential equation which arises in the solution of Laplace's equation in cylindrical coordinates. It occurs in many boundary value problems arising in electrical fields, mechanical vibrations and heat conduction.
6.2.2 Bessel Functions
The second order linear differential equation
is called Bessel's^{4} equation of order p. Its particular solutions are called Bessel functions of order p. Here p is a real (or complex) constant; we assume p to be a real nonnegative number.
As mentioned earlier, Eq. (6.68) can be solved by the method of Frobenius. Writing Eq. (6.68) in the standard form we have
We assume a solution of Eq. (6.69) in the form
Differentiating Eq. (6.70) w.r.t. x twice, we get
Substituting Eqs. (6.70)−(6.72) into Eq.(6.68) we obtain
Equation (6.73) must be identically satisfied. This implies that the coefficient of each power of x must be zero.
The lowest power of x is x^{m} obtained for r = 0. Equating to zero the coefficient of x^{m}, we get the indicial equation
Equating to zero the coefficient of x^{m+1}, we obtain [(m + 1)^{2} − p^{2}]a_{1} = 0 ⇒ a_{1} = 0 ∵(m + 1)^{2} ≠ p^{2}
Generally equating to zero the coefficient of x^{m+r+2}, we get the coefficient recurrence relation
Putting r = 1, 3, 5, …, we see that the odd coefficients vanish, i.e., a_{2r−1} = 0 for all r = 1, 2, 3,….
Putting r = 0, 2, 4, … we get
A solution of Eq. (6.68) is
We get different types of solutions depending on the values of p.
6.2.3 Bessel Functions of Nonintegral Order p: J_{p}(x) and J_{−p}(x)
Case (i) p is not an integer.
In this case, we get two linearly independent solutions for m = p and m = −p. For m = p, we have
Since a_{0} is arbitrary we choose it as a_{0} = 1/{2^{p}Γ(p + 1)} and denote the resulting solution by J_{p}(x). This is called the Bessel function of the first kind of order p. Thus
A second linearly independent solution corresponding to m = −p is
which is also called the Bessel function of the first kind of order −p. Both the series converge for all x, as can be seen by D'Alembert's ratio test.
When p is not an integer, the complete solution of Eq. (6.68) is
where A and B are arbitrary constants.
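The series for J_{p}(x) is easy to evaluate in code; as a sanity check (our own sketch, not from the text) we verify the standard identity J_{p−1}(x) + J_{p+1}(x) = (2p/x)J_{p}(x), which is proved as a recurrence relation in Section 6.2.9:

```python
from math import gamma

def J(p, x, terms=40):
    """J_p(x) = sum_r (-1)^r (x/2)^(2r+p) / (r! Gamma(p + r + 1))."""
    total = 0.0
    for r in range(terms):
        total += (-1) ** r / (gamma(r + 1) * gamma(p + r + 1)) * (x / 2) ** (2 * r + p)
    return total

p, x = 1.5, 2.0
lhs = J(p - 1, x) + J(p + 1, x)
rhs = (2 * p / x) * J(p, x)
```

Forty terms are far more than enough here; the factorially decaying terms make the series converge rapidly for moderate x.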
6.2.4 Bessel Functions of Order Zero and One: J_{0}(x), J_{1}(x)
Case (ii) p = 0
Putting p = 0 in Eq. (6.68) we obtain Bessel's equation of order zero as
Its solution obtained from Eq. (6.76) by putting p = 0 and taking a_{0} = 1 is
which is the solution of Eq. (6.80) if m = 0.
Thus, the first solution of Eq. (6.80) is
which is called Bessel's function of the first kind of order zero, and appears to be similar to the cosine function.
For p = 1, we obtain the Bessel function of order 1
This looks similar to the sine function. The height of the waves decreases with increasing x.
Dividing Bessel's Eq. (6.68) by x^{2} we can put it in the standard form y″ + (1/x)y′ + (1 − n^{2}/x^{2})y = 0, where p has been replaced by n. The term n^{2}/x^{2} is 0 for n = 0. Further, this term as well as the term (1/x)y′ are small in absolute value for large x, so that Bessel's equation comes close to y″ + y = 0, whose solutions are the sine and cosine functions. Also, (1/x)y′ acts as a damping term which is partly responsible for the decrease in height (see Fig. 6.2).
For large x, we can derive the result that J_{n}(x) ≈ √(2/(πx)) cos(x − nπ/2 − π/4).
Figure 6.2 Bessel functions of the first kind
6.2.5 Bessel Function of Second Kind of Order Zero Y_{0}(x)
Differentiating Eq. (6.81) partially w.r.t. m, we get
A second independent solution of Eq. (6.80) denoted by Y_{0}(x) is given by
Y_{0}(x) is called the Bessel function of the second kind of order zero or Neumann function. So, the complete solution in Bessel Eq. (6.80) of order zero is
where A and B are arbitrary constants.
6.2.6 Bessel Functions of Integral Order: Linear Dependence of Jn(x) and J_{−n}(x)
Case (iii) p = n (an integer):
To prove that J_{−n}(x) = (−1)^{n}J_{n}(x).
Taking p = −n, where n is a positive integer, so that −n is a negative integer, we obtain from Eq. (6.78)
This proves that when p is an integer Bessel functions J_{p}(x) and J_{−p}(x) are linearly dependent.
6.2.7 Bessel Functions of the Second Kind of Order n: Y_{n}(x): Determination of Second Solution Y_{n}(x) by the Method of Variation of Parameters
Since J_{n}(x) and J_{−n}(x) are linearly dependent when n is an integer, the second solution of Eq. (6.68) is obtained by the following method.
Let y = uv, where v = J_{n}(x), be a solution of Eq. (6.68). Then
Substituting in Eq. (6.68), we get
The coefficient of u is zero since v = J_{n}(x) is a solution of Eq. (6.68).
and integrating.
Hence the complete solution of Eq. (6.68) when p = n, an integer, is
where and A and B are arbitrary constants.
6.2.8 Generating Functions for Bessel Functions
Let f_{n} be a sequence of functions. A function w(t, x) such that w(t, x) = Σ_{n} f_{n}(x)t^{n} is called a generating function of the functions f_{n}, as already given at Section 6.1.6 (p. 6–12).
Generating function for Bessel functions J_{n}(x) of integral order n
Example 6.2.1 Prove that e^{(x/2)(t − 1/t)} = Σ_{n=−∞}^{∞} J_{n}(x)t^{n}.
Solution
In the product on the RHS
Coefficient of
Coefficient of
Coefficient of
The result follows from Eqs. (6.88), (6.89) and (6.90).
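The generating function can be checked numerically (an illustrative sketch, our own code), using the integer-order series and the relation J_{−n} = (−1)^{n}J_{n}:

```python
from math import exp, factorial

def J(n, x, terms=40):
    """Integer-order J_n(x); negative orders via J_{-n} = (-1)^n J_n."""
    if n < 0:
        return (-1) ** n * J(-n, x, terms)
    return sum((-1) ** r / (factorial(r) * factorial(n + r)) * (x / 2) ** (2 * r + n)
               for r in range(terms))

x, t = 1.3, 0.7
lhs = exp((x / 2) * (t - 1 / t))
rhs = sum(J(n, x) * t ** n for n in range(-20, 21))
```

Because J_{n}(x) decays factorially in n, truncating the bilateral sum at |n| = 20 already matches the exponential to machine precision.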
6.2.9 Recurrence Relations of Bessel Functions
Example 6.2.2 The following recurrence relations connect Bessel functions of different orders and are very useful in solving problems involving Bessel functions. These relations are true for general p but we prove them taking p = n.
Solution
To prove RR1
To prove RR2
To prove RR3
on dividing by x^{n}.
To prove RR4
on dividing by x^{−n}.
To prove RR5
To prove RR6
Subtracting, we get
6.2.10 Bessel's Functions of Half-integral Order
Bessel functions J_{p} of orders p = ±1/2, ±3/2, ±5/2, … are elementary functions. They can be expressed in terms of sines and cosines and powers of x.
Example 6.2.3 Prove that (i) J_{1/2}(x) = √(2/(πx)) sin x; (ii) J_{−1/2}(x) = √(2/(πx)) cos x; (iii) J_{1/2}^{2}(x) + J_{−1/2}^{2}(x) = 2/(πx).
Solution (i) We know that
Putting
on multiplying and dividing by x, and noting
(ii) We know that
Putting
(iii) Squaring and adding the above results we have
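All three identities can be confirmed numerically from the defining series (a sketch added here, not part of the text):

```python
from math import gamma, pi, sqrt, sin, cos

def J(p, x, terms=40):
    """J_p(x) from the series with Gamma-function coefficients."""
    return sum((-1) ** r / (gamma(r + 1) * gamma(p + r + 1)) * (x / 2) ** (2 * r + p)
               for r in range(terms))

x = 1.7
err_half = abs(J(0.5, x) - sqrt(2 / (pi * x)) * sin(x))
err_minus_half = abs(J(-0.5, x) - sqrt(2 / (pi * x)) * cos(x))
err_sum = abs(J(0.5, x) ** 2 + J(-0.5, x) ** 2 - 2 / (pi * x))
```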
Example 6.2.4 Prove that
Solution
6.2.11 Differential Equation Reducible to Bessel's Equation
The differential equation
where λ is a parameter, can be written as
which is Bessel's equation. When p is a noninteger the general solution is
and when p = n is an integer the general solution is
6.2.12 Orthogonality
Definition of orthogonality of functions
A set of functions f_{1}, f_{2},… defined on some interval I = [a, b] is said to be orthogonal on I with respect to a weight function w(x) > 0 if
∫_{a}^{b} w(x)f_{m}(x)f_{n}(x) dx = 0 for m ≠ n.
The norm of f_{m} is defined by
‖f_{m}‖ = (∫_{a}^{b} w(x)f_{m}^{2}(x) dx)^{1/2}.
The functions are called orthonormal on I if they are orthogonal on I and all have norm equal to unity.
For ‘orthogonal w.r.t. w(x) = 1’ we simply say orthogonal on I. Thus, functions f_{1}, f_{2},… are orthogonal on some interval I if
∫_{a}^{b} f_{m}(x)f_{n}(x) dx = 0 for m ≠ n.
The norm of f_{m} is then simply defined by
‖f_{m}‖ = (∫_{a}^{b} f_{m}^{2}(x) dx)^{1/2},
and the functions are called orthonormal on a ≤ x ≤ b if they are orthogonal there and all have norm equal to unity.
Orthogonal set, orthonormal set
Example 6.2.5 Prove that the functions f_{m}(x) = sin mx (m = 1, 2, 3, …) form an orthogonal set on −π ≤ x ≤ π.
Solution Here a = −π, b = π, f_{m}(x) = sin mx and we have
since the functions vanish at both the limits.
∴ The norm is ‖f_{m}‖ = (∫_{−π}^{π} sin^{2} mx dx)^{1/2} = √π; hence the orthonormal set is sin x/√π, sin 2x/√π, sin 3x/√π, ….
Orthogonality of Bessel functions
Example 6.2.6 Prove that, for each fixed nonnegative integer n,
where α and β are roots of J_{n}(ax) = 0.
Solution Let u(x) = J_{n}(αx) and v(x) = J_{n}(βx) be the solutions of the equations
respectively. Hence
Now multiplying Eq. (6.91) by v, Eq. (6.92) by u, and subtracting,
Integrating both sides of Eq. (6.93) w.r.t. x from x=0 to a
This implies that
Case (i) α≠β
That is, α, β are two distinct roots of J_{n}(ax) = 0, so that J_{n}(aα) = J_{n}(aβ) = 0.
Therefore, we obtain from Eq. (6.95)
which is the orthogonality relation for Bessel functions.
Case (ii) α=β
In this case, the RHS of Eq. (6.95) assumes the indeterminate form 0/0. Hence we apply L'Hospital's rule, differentiating w.r.t. β and evaluating the limit.
By recurrence relation RR4:
If we put x=aα in the recurrence relation RR6, we have
Thus for α=β
6.2.13 Integrals of Bessel Functions
Example 6.2.7 Prove that
Solution Recurrence relation RR1 is
Integrating both sides we have
For p=1, we get
Recurrence relation RR2 is
Integrating both sides, we have
For p=0 we get
In general, the integral ∫x^{m}J_{n}(x) dx, for integers m and n with m + n ≥ 0, can be integrated by parts completely if m + n is odd.
But when m + n is even the integral depends on the residual integral ∫J_{0}(x) dx, which has been tabulated.
Integrating,
6.2.14 Expansion of Sine and Cosine in Terms of Bessel Functions
Example 6.2.8 Show that
 cos x=J_{0} −2J_{2} +2J_{4} −…
 sin x=2(J_{1}−J_{3}+J_{5} −…).
Solution We know that
Put t=e^{iθ}
Separating the real and imaginary parts we have
Putting θ=π/2 we get
Example 6.2.9 Find series expansion for J_{0}(x) and J_{1}(x).
Solution We know that
Putting n=0 we get
Putting n=1 we get
Example 6.2.10 Express J_{5}(x) in terms of J_{0}(x) and J_{1}(x).
Solution
Putting n=1,2,3,4 we get
where the argument x has been omitted for convenience. Substituting from Eq. (6.97), Eq. (6.98) becomes
Substituting Eqs. (6.101) and (6.97) into Eq. (6.99)
Substituting Eqs. (6.102) and (6.101) into Eq. (6.100),
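The same elimination can be carried out numerically (our own sketch): climbing the recurrence J_{k+1}(x) = (2k/x)J_{k}(x) − J_{k−1}(x) from J_{0} and J_{1} must reproduce the directly summed series for J_{5}.

```python
from math import factorial

def J(n, x, terms=40):
    """Integer-order J_n(x) from the defining series."""
    return sum((-1) ** r / (factorial(r) * factorial(n + r)) * (x / 2) ** (2 * r + n)
               for r in range(terms))

def J_from_J0_J1(n, x):
    """J_n(x) built from J_0, J_1 via J_{k+1} = (2k/x) J_k - J_{k-1}."""
    j_prev, j = J(0, x), J(1, x)
    for k in range(1, n):
        j_prev, j = j, (2 * k / x) * j - j_prev
    return j

x = 2.0
diff = abs(J_from_J0_J1(5, x) - J(5, x))
```

(Forward recurrence is numerically unstable when n greatly exceeds x, but for n = 5, x = 2 the agreement is essentially exact.)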
Example 6.2.11 Express J_{5/2}(x) in terms of sine and cosine functions.
Solution
Example 6.2.12 Prove that
 J_{−n}(x)=(−1)^{n}J_{n}(x)
 J_{n}(−x)=(−1)^{n}J_{n}(x)
 J_{n}(x) is an even or odd function of x according as n is even or odd.
Solution
(i) We know that for any nonnegative real p.
Let p=−n where n is a positive integer. Then
Since 1/Γ(−n + r + 1) = 0 when −n + r + 1 is zero or a negative integer, each term in the above summation is zero as long as r ≤ n − 1. So, the summation starts with r = n,
(ii) Let n be a negative integer and n = −m, where m is a positive integer. Then
Replacing x by −x, we have
(iii) We have from (ii) J_{n}(−x)=(−1)^{n}J_{n}(x)
Thus J_{n} is even or odd according as n is even or odd.
Example 6.2.13 Prove that
Solution
replacing n by (n +1).
Substituting for and into Eq. (6.109)
Substituting in Eq. (6.112), we get
Example 6.2.14 Prove that J_{0}^{2} + 2(J_{1}^{2} + J_{2}^{2} + J_{3}^{2} + …) = 1. Hence show that |J_{0}(x)| ≤ 1 and |J_{n}(x)| ≤ 1/√2 for n ≥ 1.
Solution We have
Putting n = 0, 1, 2, 3,...
Adding these we get
Integrating we obtain
where c = 1 since J_{0}(0) = 1 and J_{n}(0) = 0 for n ≥ 1
Since each term is nonnegative
EXERCISE 6.2
Prove the following:
6.3 CHEBYSHEV POLYNOMIALS
One of the important differential equations that gives rise to special functions is the Chebyshev differential equation
where n is a positive integer. The singularities of Eq. (6.115) are x = ±1. If we seek a power series solution of Eq. (6.115) about x = 0 of the form
then this series solution is convergent in |x| < 1 since the distance between x = 0 and the nearest singularity is 1. Differentiating (6.116) w.r.t. x twice we obtain
Substituting (6.116) – (6.118) in Eq. (6.115) we obtain
Substituting m − 2 = t in the first sum, we get
Since t is a dummy variable, we can combine the third and fourth terms of this equation. Equating the coefficients of various powers of x to zero, we obtain
We have
Substituting in the power series solution, we obtain
where
The series y_{0}(x), y_{1}(x) converge for |x| < 1. Now, y_{0}(x) contains even powers of x only and y_{1}(x) contains odd powers of x only. Hence, the two solutions y_{0}(x), y_{1}(x) are linearly independent solutions of the Chebyshev differential equation (6.115).
As n takes the value zero and even positive integral values we obtain for
Hence, y_{0}(x) reduces to an even degree polynomial as n takes even positive integral values, whereas y_{1}(x) remains an infinite series.
As n takes odd positive integral values, we get for
Hence, y_{1}(x) reduces to an odd degree polynomial as n takes odd positive integral values, whereas y_{0}(x) remains an infinite series. These polynomials give rise to an important class of polynomials called Chebyshev polynomials.
Changing the independent variable x by the substitution x = cosθ we have
Substituting in the differential Eq. (6.115) we obtain
The general solution of the differential Eq. (6.121) is
∴ The solution of Eq. (6.115) can be written as
Thus, cos(n cos^{−1} x) and sin(n cos^{−1} x) are two linearly independent solutions of Eq. (6.115). Denoting the first solution by T_{n}(x) we have
which is called the Chebyshev polynomial of the first kind. Denoting the second linearly independent solution by U_{n}(x), we have U_{n}(x) = sin(n cos^{−1} x),
which is called the Chebyshev polynomial of the second kind.
Chebyshev polynomials of first kind
The Chebyshev polynomials of first kind are given by
we note that T_{0}(x) = 1 and T_{1}(x) = x.
Recurrence relation for Chebyshev polynomials T_{n}(x)
The Chebyshev polynomials T_{n}(x) satisfy the following recurrence relation: T_{n+1}(x) = 2xT_{n}(x) − T_{n−1}(x). To derive it,
we have
where cos^{−1} x = θ. Hence
and
Adding we obtain
From Eq. (6.125) we have
Using the recurrence relation (Eq. (6.126)) we have
∴ T_{n}(x) is a polynomial of degree n. If n is even T_{n}(x) is an even degree polynomial and if n is odd T_{n}(x) is an odd degree polynomial.
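The two characterizations agree numerically; a short sketch (our own code) compares the recurrence T_{n+1} = 2xT_{n} − T_{n−1} with the trigonometric form cos(n cos^{−1} x):

```python
from math import acos, cos

def T(n, x):
    """Chebyshev T_n(x) via T_{n+1} = 2x T_n - T_{n-1}, T_0 = 1, T_1 = x."""
    t_prev, t = 1.0, x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t = t, 2 * x * t - t_prev
    return t

x = 0.3
errs = [abs(T(n, x) - cos(n * acos(x))) for n in range(8)]
```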
To express x^{n} in terms of Chebyshev polynomials
From Eqs. (6.127) we obtain
The relationship between U_{n}(x) and T_{n}(x) is given by the recurrence relation
Putting x = cosθ we get
Since LHS = RHS, the relation is proved.
Example 6.3.1 Write the following polynomials in terms of Chebyshev polynomials of first kind:
 2x^{2} − 5x + 7
 8x^{3} + 11x^{2} − 3x + 4
 16x^{4} − 8x^{3} − 2x^{2} + 4x − 3
Solution
Example 6.3.2 Find the expression for
 in terms of T_{3}(x) and T_{2} (x); and
 in terms of T_{3}(x) and T_{2} (x); and
Solution Consider
 Taking n = 3 we have
 Taking n = 5 we have
Zeros of T_{n}(x)
Equating T_{n}(x) to zero we obtain cos(n cos^{−1} x) = 0, i.e., n cos^{−1} x = (2k − 1)π/2, so that
x_{k} = cos((2k − 1)π/(2n)), k = 1, 2, …, n. (6.130)
The n simple zeros of T_{n}(x) are given by Eq. (6.130).
Turning Points (Extreme values) of T_{n}(x)
T_{n}(x) attains its relative maximum or minimum at the (n − 1) points given by Eq. (6.131). At these points we have
Also, at the end points of the interval [–1, 1] we have
∴ T_{n}(x) attains its maximum and minimum values at the (n + 1) points
Leading Coefficient of T_{n}(x)
expanding by Binomial Theorem
This shows that T_{n}(x) is a polynomial of degree n and its leading coefficient is 2^{n−1}.
Generating Function of Chebyshev Polynomials T_{n}(x)
Prove that
Proof The function on the LHS of (6.134), which is a function of the two variables x and t, is called the generating function of the Chebyshev polynomials T_{n}(x). We have
Hence the result (6.134) is proved.
Alternate expansion for T_{n}(x)
Writing (1 − 2xt + t^{2})^{−1} = [1 − t(2x − t)]^{−1} and expanding it by using the Binomial Theorem we get
Now, the term containing t^{n} in the expansion of the product t^{n−r}(2x − t)^{n−r} is obtained from
∴ The coefficient of t^{n} from all the terms is b_{n} where
Also, the term containing t^{n−1} in the expansion of the product t^{n−r}(2x − t)^{n−r} is obtained from
∴ The coefficient of t^{n−1} from all the terms is b_{n–1} where
Now, the LHS of Eq. (6.134) yields
Thus, we obtain the coefficient of t^{n} as
In the second sum we put r − 1 = s so that we have, replacing the dummy variable s by r:
consequently
We can also obtain the expression for x^{n} in terms of the Chebyshev polynomials as follows:
If n is even, the last term is .
Integration of T_{n}(x)
Consider
The above result does not hold good for n = 0 and 1. For these values of n we have
Orthogonality of Chebyshev Polynomials T_{n}(x)
Prove that
Solution Case (i) m = n = 0. In this case, we get (since T_{0} = 1)
Case (ii) m = n ≠ 0. In this case, we have to evaluate
Put cos^{−1} x = θ or x = cosθ ⇒ dx = − sinθdθ and the limits x = −1 and x = 1 become θ = π and θ = 0, respectively.
Case (iii) m ≠ n. Since T_{m}(x) = cos(m cos^{−1} x) and T_{n}(x) = cos(n cos^{−1} x) we have to evaluate
Put cos^{−1} x = θ ⇒ x = cos θ, and the limits x = −1 and x = 1 become θ = π and θ = 0, respectively.
Consequently, the Chebyshev polynomials T_{n}(x) are orthogonal w.r.t. the weight function w(x) = 1/√(1 − x^{2}).
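All three cases can be verified numerically (an illustrative sketch, our own code): substituting x = cos θ turns ∫_{−1}^{1} T_{m}T_{n}/√(1 − x^{2}) dx into ∫_{0}^{π} cos(mθ) cos(nθ) dθ, which avoids the endpoint singularity entirely.

```python
from math import cos, pi

def inner(m, n, steps=2000):
    """Weighted inner product of T_m, T_n via composite Simpson in θ (steps even)."""
    h = pi / steps
    s = 0.0
    for i in range(steps + 1):
        w = 1 if i in (0, steps) else (4 if i % 2 else 2)
        s += w * cos(m * i * h) * cos(n * i * h)
    return s * h / 3

vals = (inner(2, 3), inner(2, 2), inner(0, 0))   # expect 0, pi/2, pi
```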
Chebyshev Series
Let f(x) be a continuous function, having continuous derivatives on the interval [–1, 1]. Now, f(x) can be uniquely written as an infinite series.
which converges uniformly in [–1, 1]. Multiplying both sides of the equation by and integrating with respect to x over [–1, 1] and using the orthogonal property of Chebyshev polynomials we get
Hence
If we write the series given in Eq. (6.135) as
then using Eq. (6.137) we can write
This series may also be written as
where Σ′ means that the coefficient of the first term T_{0}(x) is multiplied by 1/2.
Example 6.3.3 Expand f(x) = x^{3} + x, − 1 ≤ x ≤ 1 in a Chebyshev series.
Solution We have
write f(x) = c_{0}T_{0}(x) + c_{1}T_{1}(x) + c_{2}T_{2}(x) + ...
we obtain
(odd function)
Hence f(x) = (7/4)T_{1}(x) + (1/4)T_{3}(x).
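Since x^{3} = (3T_{1} + T_{3})/4 and x = T_{1}, the expansion of f(x) = x^{3} + x works out to (7/4)T_{1} + (1/4)T_{3}; a quick numerical confirmation (our own sketch):

```python
def T(n, x):
    """Chebyshev T_n(x) via the three-term recurrence."""
    t_prev, t = 1.0, x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t = t, 2 * x * t - t_prev
    return t

errs = [abs((x ** 3 + x) - (1.75 * T(1, x) + 0.25 * T(3, x)))
        for x in (-0.8, -0.1, 0.5, 0.9)]
```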
EXERCISE 6.3
Prove the following (1–7):
 T_{n}(1) = 1.
 T_{n}(−1) = (−1)^{n}.
 T_{n}(−x) = (−1)^{n} T_{n}(x).
 T_{2n}(0) = (−1)^{n}.
 T_{2n+1} (0) = 0.
 .