5 The GSL scientific library – Computational Technologies

Nadezhda M. Afanasyeva, Victor S. Borisov, Maria V. Vasilieva, Aleksandr V. Grigoriev, Petr E. Zakharov, Petr A. Popov, and Ivan K. Sirditov


Abstract: The GNU Scientific Library (GSL) is a collection of numerical tools for solving basic problems of computational mathematics. The library is written in the C programming language and distributed under the GNU General Public License.

5.1 Preliminaries

The GSL library provides a large number of functions (more than 1000) for the numerical solution of systems of linear algebraic equations and ordinary differential equations (ODEs), as well as functions for working in such areas of computational mathematics as combinatorics, statistics, optimization, etc.

To install GSL, we need a C compiler conforming to the ANSI C standard. The library can be installed from the source code (using the make tool) or from the Ubuntu repository using the command:
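On Ubuntu, for example, the development package can be installed as follows (the package name is an assumption and may vary between releases):

```shell
sudo apt-get install libgsl-dev
```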

To run a program that employs the GSL facilities, we need to compile the source code

and then link the program

The result is the executable file.
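Assuming a source file named example.c (the name is illustrative), the two steps might look like this:

```shell
# compile the source file to an object file
gcc -Wall -c example.c
# link against GSL, its CBLAS implementation, and the math library
gcc example.o -lgsl -lgslcblas -lm -o example
```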

5.2 Functions and constants

The GSL library provides the standard and special mathematical functions, statistical functions, physical constants, and generators of random numbers.

5.2.1 Mathematical and special functions

The GSL library provides standard mathematical functions as an alternative to system libraries; they can be used when the system functions are not available.

A short list of mathematical constants is given in Table 5.1. Table 5.2 presents the standard functions implemented in the library. To work with these functions and constants, it is necessary to include the header file gsl_math.h.

Table 5.1. Mathematical constants.

Name Description
M_E The base of exponentials, e
M_LOG2E The base-2 logarithm of e
M_LOG10E The base-10 logarithm of e
M_SQRT2 The square root of two
M_SQRT3 The square root of three
M_PI The constant π
M_LN10 The natural logarithm of 10
M_LN2 The natural logarithm of 2
M_LNPI The natural logarithm of π
M_EULER Euler’s constant

Table 5.2. Standard functions.

Name Description
double gsl_log1p (const double x) The function log (1 + x)
double gsl_expm1 (const double x) The function exp(x) - 1
double gsl_acosh (const double x) The function arccosh (x)
double gsl_asinh (const double x) The function arcsinh (x)
double gsl_atanh (const double x) The function arctanh (x)

In the GSL library, special functions are presented in two forms: a natural form and an error-handling form. The natural form returns only the value of a function, and therefore such a form can be used in mathematical expressions. A function in the error-handling form returns an error code in addition to the value. The structure of the returned value is declared in the header file gsl_sf_result.h and defined as follows:
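For reference, the structure has the following shape, as declared in gsl_sf_result.h:

```c
typedef struct
{
  double val;   /* value of the function */
  double err;   /* estimate of the absolute error */
} gsl_sf_result;
```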

Here val is the value of the function and err is an estimate of the absolute error. Functions in the error-handling form have the additional suffix _e.

Now we present an example of how to use a function in the error-handling form to calculate the Bessel function J0(x).

Listing 5.1.

The result of the program is as follows:

In this program, we employ the function gsl_sf_bessel_J0_e, which calculates the Bessel function with a given x. Other special functions are also available in the library.

5.2.2 Physical constants

The library contains a collection of macros to operate with physical constants, such as the speed of light and the gravitational constant. The values are presented in different unit systems, including the MKSA (meters, kilograms, seconds, amperes) and CGSM (centimeters, grams, seconds, gauss) systems.

The header file gsl_const_mksa.h contains constants in the MKSA system, constants in the CGSM system are defined in the gsl_const_cgsm.h, and dimensionless constants are governed by gsl_const_num.h.

The use of physical constants is shown in the following example: we want to calculate the light-travel time from Earth to Mars.

Listing 5.2.

The output is

Here the speed of light in a vacuum is set using GSL_CONST_MKSA_SPEED_OF_LIGHT. The constant GSL_CONST_MKSA_ASTRONOMICAL_UNIT is applied to indicate the length of one astronomical unit, which is approximately equal to the average distance between the Earth and the Sun. The constant GSL_CONST_MKSA_MINUTE is employed to calculate time in minutes.

In addition to the above constants, GSL has other constants, which are presented in Table 5.3.

5.2.3 Random number generators

The GSL library is supplied by a collection of random number generators. These generators are accessible through a uniform interface. Environment variables permit quickly switching between different generators without the need to recompile a program. Functions for working with random numbers are declared in the header file gsl_rng.h.

Table 5.3. Physical constants.

Name Description
GSL_CONST_NUM_AVOGADRO Avogadro’s number
GSL_CONST_MKSA_FARADAY The molar charge of 1 Faraday
GSL_CONST_MKSA_MICRON The length of 1 micron
GSL_CONST_MKSA_TON The mass of 1 ton
GSL_CONST_MKSA_CALORIE The energy of 1 calorie
GSL_CONST_MKSA_STD_ATMOSPHERE The pressure of 1 standard atmosphere
GSL_CONST_MKSA_NEWTON The force of 1 Newton
GSL_CONST_MKSA_GAUSS The magnetic field of 1 Gauss

The following example demonstrates some features of the library in using generators.

Listing 5.3.

The function gsl_rng_env_setup reads the environment variables GSL_RNG_TYPE and GSL_RNG_SEED and writes the obtained values into the global variables gsl_rng_default and gsl_rng_default_seed, where gsl_rng_default is a pointer to a random number generator type. The function gsl_rng_alloc initializes the random number generator. Further, in the loop we obtain a sequence of random numbers using the function gsl_rng_uniform, which returns a number uniformly distributed in the range from 0 to 1. Finally, the function gsl_rng_free frees the memory allocated by the random number generator.

The output of the program is as follows:

The numbers depend on a key used by the generator. The key can be changed using the environment variable GSL_RNG_SEED. The generator can also be changed using the environment variable GSL_RNG_TYPE.

Other generators are also available in GSL (see Table 5.4).

Table 5.4. Types of random number generator.

Name Description
gsl_rng_mt19937 Mersenne Twister (MT19937)
gsl_rng_ranlxs0 The generator ranlxs0
gsl_rng_cmrg Multiple recursive generator by L’Ecuyer
gsl_rng_taus Tausworthe generator

These generators pass standard statistical tests and are therefore suitable for modeling random processes.

5.2.4 Statistical functions

The library provides basic statistical functions to calculate the mean, variance and standard deviation. More advanced functions allow evaluation of absolute deviations, skewness, kurtosis and the median along with arbitrary percentiles.

The functions are available for both integer and floating-point types. For the floating-point data type, we need to use functions with the prefix gsl_stats declared in the header file gsl_statistics_double.h. For the integer data type, we employ functions with the prefix gsl_stats_int declared in the header file gsl_statistics_int.h.

Let us present an example demonstrating the use of the statistical functions from GSL.

Listing 5.4.

The results of the program are as follows:

In this example, we apply the function gsl_stats_mean to calculate the arithmetic mean; the function gsl_stats_variance is used to evaluate the variance of random data, whereas the functions gsl_stats_max and gsl_stats_min are employed to obtain maximum and minimum values. Table 5.5 shows the list of statistical functions.

Table 5.5. Statistical functions.

Name Description
gsl_stats_sd Standard deviation
gsl_stats_absdev Absolute deviation from the mean value
gsl_stats_skew The skewness
gsl_stats_kurtosis The kurtosis
gsl_stats_covariance The covariance
gsl_stats_correlation The Pearson correlation coefficient between elements of an array

5.3 Combinatorics

The GSL library has functions to solve problems in combinatorial analysis. Here we consider typical problems involving permutations, combinations, and multisets.

5.3.1 Permutation

In combinatorics, a permutation p is an ordered set of n integers in the range 0 to n - 1, where each value p_i occurs exactly once. For example, the integers (0, 1, 3, 2) represent a permutation in which the last two numbers have exchanged positions.

Let us solve the following problem: we want to display all the permutations of the string “republic”.

Listing 5.5.

The program creates a permutation with the size equal to the number of letters in the string. For each permutation, we display the letters of the string in the order given by the current permutation. The library function calculates the next permutation, and the iteration continues until all possible permutations have been generated.

Functions for working with permutations use the structure gsl_permutation. This structure is declared as follows:
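The structure has the following shape (as declared in gsl_permutation.h):

```c
#include <stddef.h>

typedef struct
{
  size_t size;   /* number of elements in the permutation */
  size_t *data;  /* array of size integers */
} gsl_permutation;
```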

The pointer data refers to an array with size integers. Before working with a permutation and, in general, with all structures in GSL, we need to correctly allocate memory calling one of the following functions:

Here the parameter n is the number of integers in a permutation. The difference between these two functions is that the pointer returned by gsl_permutation_alloc must be initialized using the function gsl_permutation_init, while gsl_permutation_calloc returns the same kind of pointer with the permutation already initialized. When we finish working with a permutation, we must free the allocated memory using the function

To access the i-th element of the current permutation, we employ the function

The value returned by this function can be used as the index in the char array s. The next permutation is generated using the function

The function rearranges elements of the structure p in lexicographic order and returns GSL_SUCCESS, if the permutation has been created, or GSL_FAILURE, if no further permutations are available.

Thus, the program displays the following result:

5.3.2 Combinations

A combination C(n, k) is a subset of k elements chosen from a given set containing n different elements. The order of elements is not considered. The GSL library contains functions for enumerating all possible combinations.

Let us consider the classic knapsack problem. We have a set of items, each with two parameters: a weight and a value. Also, we have a knapsack with a certain capacity. The aim is to fill the knapsack with the maximum total value of items under the restriction that the total weight is less than or equal to the given limit of the knapsack.

There are different solutions of this problem. The most obvious method is the brute-force approach (searching through all possible combinations). Thus, we must create a list of n items and construct the combinations C(n, k) for k ∈ [1, n]. For each combination of items, we calculate the total weight and value and choose a collection with the maximum value.

The program begins with the inclusion of the GSL library, definitions of constants and structures:

Listing 5.6.

The header file gsl/gsl_combination.h contains functions to operate with combinations. The constants NITEM and SACK_WEIGHT denote the number of items and the weight limit of the knapsack, respectively. The structure ITEM represents an item with a weight and a value.

The initialization of the array with items is performed using the pseudo-random number generator from the standard library:

Listing 5.7.

The expression rand() % 10 + 1 generates a number from the range [1, 10].

The GSL library uses the structure gsl_combination for working with combinations:

The initialization of a combination is performed by means of the function

It returns a pointer to the structure of the combination C(n, k), where the lexicographically first combination is initialized. In the example with the knapsack, to examine all the possible combinations of NITEM items taken k at a time, we need to create a pointer

To get the i-th element of the subset, the following function is used:

To evaluate the weight and val of items of the current combination c, we employ the following loop:

Listing 5.8.

If the items of the current subset have a larger total value than any subset examined so far and their total weight does not exceed the capacity of the knapsack, then the set c is stored as max_c. At the same time, we cannot just copy the pointer, since the subsets in the combination c keep changing. For this, we can use the function

Combinations must have the same size; therefore, we must allocate memory before copying the combination:

Listing 5.9.

Here gsl_combination_free frees the memory allocated by the functions gsl_combination_calloc or gsl_combination_alloc. The latter two functions work similarly to the permutation functions. The memory allocation occurs each time, since the number k of combined items changes, whereas the sizes of the source and destination combinations must be equal for copying.

The step to the next combination is performed using the function

Here the returned value is equal to GSL_SUCCESS or GSL_FAILURE, similarly to the permutation. Therefore, the main loop appears like this:

Listing 5.10.

Printing the combination with the maximum value is conducted with the function

The parameter format must contain an integer modifier, e.g. %d, which is replaced by the i-th element of the combination. All elements are written to the stream. In our example, it is the standard output stream stdout:

Listing 5.11.

Here is the complete code:

Listing 5.12.

The output is as follows:

5.3.3 Multisets

A multiset is a subset of k elements chosen from a given set containing n different elements, where each element may appear more than once.

Let us consider the following problem. We need to find all the possible passwords of 3 digits from 0 to 8 and print the obtained passwords in lexicographical order.

Listing 5.13.

The program output is as follows:

The program begins with the inclusion of the header file gsl_multiset.h and the initialization of the multiset using the function gsl_multiset_alloc, where the lexicographically first multiset is initialized. Similarly to permutations and combinations, the step to the next multiset is performed by the function gsl_multiset_next. When we finish working with a multiset, we must free the allocated memory using the function gsl_multiset_free.

5.4 Linear algebra

In this section, we consider examples of manipulations with vectors and matrices, as well as solving systems of linear algebraic equations and finding eigenvalues with eigenvectors using GSL.

5.4.1 Basic structures

Blocks, vectors, and matrices are the basic structures used to solve systems of linear algebraic equations as well as to find eigenvalues and eigenvectors.

One of the basic structures is a block, which is declared in the header file gsl_block.h. To create a block, we employ the function

This function allocates memory for the created block b, which will consist of 100 elements of the double type. When we finish working with the block, it must be removed (to free memory):

We illustrate the work with vectors using an example of filling a vector with some values.

Listing 5.14.

The program output is

In this example, we apply the function gsl_vector_set (v, i, a ) to set the value a to the i-th element of the vector and the function gsl_vector_get (v, i) to get the value of the i-th element.

Table 5.6 presents the fundamental vector operations available in GSL.

Table 5.6. Vector operations.

Name Operation
gsl_vector_add (gsl_vector * a, const gsl_vector * b) ai = ai + bi
gsl_vector_sub (gsl_vector * a, const gsl_vector * b) ai = ai - bi
gsl_vector_mul (gsl_vector * a, const gsl_vector * b) ai = ai bi
gsl_vector_div (gsl_vector * a, const gsl_vector * b) ai = ai/bi
gsl_vector_scale (gsl_vector * a, const double c) ai = c ai
gsl_vector_add_constant (gsl_vector * a, const double c) ai = ai + c
double gsl_vector_max (gsl_vector * a) maxi(ai)
double gsl_vector_min (gsl_vector * a) mini(ai)
size_t gsl_vector_max_index (gsl_vector * a) i -> maxi(ai)
size_t gsl_vector_min_index (gsl_vector * a) i -> mini(ai)

In GSL, operations with matrices are similar to working with vectors. At the beginning, we must allocate memory, and in the end, we must free it. We can access the matrix elements using the functions gsl_matrix_get (m, i, j ) and gsl_matrix_set (m, i, j, x). The following program demonstrates briefly how to work with matrices.

Listing 5.15.

The output is as follows:

The fundamental matrix operations are shown in Table 5.7.

5.4.2 BLAS support

The Basic Linear Algebra Subprograms (BLAS) provide fundamental operations on vectors and matrices, which can be used to optimize algorithms of linear algebra at a higher level. The functionality of BLAS is divided into three levels:

Table 5.7. Matrix operations.

Name Operation
gsl_matrix_add (gsl_matrix * a, const gsl_matrix * b) ai,j = ai,j + bi,j
gsl_matrix_sub (gsl_matrix * a, const gsl_matrix * b) ai,j = ai,j - bi,j
gsl_matrix_mul_elements (gsl_matrix * a, const gsl_matrix * b) ai,j = ai,j bi,j
gsl_matrix_div_elements (gsl_matrix * a, const gsl_matrix * b) ai,j = ai,j/bi,j
gsl_matrix_scale (gsl_matrix * a, const double c) ai,j = c ai,j
gsl_matrix_add_constant (gsl_matrix * a, const double c) ai,j = ai,j + c
double gsl_matrix_max (gsl_matrix * a) maxi,j(ai,j)
double gsl_matrix_min (gsl_matrix * a) mini,j(ai,j)
gsl_matrix_max_index(gsl_matrix * a, size_t *imax, size_t *jmax) (i,j)—> maxi,j(ai,j)
gsl_matrix_min_index(gsl_matrix * a, size_t *imin, size_t *jmin) (i,j)—> mini,j(ai,j)
  • Level 1. Vector operations, e.g.

    y = αx + y;

  • Level 2. Matrix-vector operations like

    y = αAx + βy;

  • Level 3. Matrix-matrix operations, such as

    C = αAB + βC.
We demonstrate the use of Level 1 of BLAS in the following example.

Listing 5.16.

The program output is as follows:

In this example, we use the function gsl_blas_ddot (x, y, &result ) , which allows calculation of the scalar product for two vectors.

Level 1 of BLAS also has functions for calculating norms and the absolute sum of the elements of the vector ∑i |xi|:

Listing 5.17.

To update vectors according to the formulas x = αx and y = αx + y, we employ the following functions:

Listing 5.18.

As a result, for the vectors x and y from our example, we get the following values:

Note that the considered functions also have implementations for both complex numbers and single-precision numbers (float).

Let us consider the function gsl_blas_dgemv(CblasNoTrans, a, A, x, b, y), which implements the matrix-vector operation y = αAx + βy.

Listing 5.19.

The output is as follows:

Level 2 of BLAS has other functions for matrix-vector operations, such as x = Ax, A = αxy^T + A, as well as functions for some particular matrices (symmetric, triangular, Hermitian, etc.).

Now we apply the last level (Level 3) of BLAS associated with matrix-matrix operations. The following example illustrates an operation of the type C = αAB + βC.

Listing 5.20.

The result is as follows:

There are also other matrix-matrix operations, which can be found in the GSL Reference Manual.

5.4.3 Solving linear systems

The GSL library provides a series of functions for numerical solving of linear systems

Ax = b

via direct methods (LU, QR-decomposition, and others).

As an example, let us consider the process of solving a linear system by means of the LU decomposition

PA = LU,

where P is a permutation matrix, L stands for a lower triangular matrix, and U denotes an upper triangular matrix. Such a decomposition lets us solve the linear system in two steps:

Ly = Pb,

Ux = y.

Since L is a lower triangular matrix, at the first step the system is solved by forward substitution. At the second step, since U is an upper triangular matrix, the system is solved by backward substitution.

The application of the LU decomposition is as follows.

Listing 5.21.

The obtained result is as follows:

In this example, we use the function

which executes LU decomposition of the matrix A, and the function

which solves the linear system on the basis of obtained LU decomposition.

It should be noted that on the basis of the LU decomposition we can also determine the determinant of a matrix using the function

as well as invert a matrix using the function

where Ainv is the derived inverse matrix.

To solve linear systems, we may use the QR decomposition, which is the decomposition of the matrix A into an orthogonal matrix Q (QQ^T = I) and an upper triangular matrix R, using the functions

The library has some modifications of these methods as well as implementations for symmetric matrices A.

5.4.4 Eigenvalues and eigenvectors

The GSL library contains functions to evaluate eigenvalues and eigenvectors of matrices. These functions are declared in the header file gsl_eigen.h. The following program calculates eigenvalues and eigenvectors of the symmetric matrix H with elements Hi,j = 2i + 3j.

Listing 5.22.

The output is as follows:

To calculate eigenvalues and eigenvectors of the matrix, we must allocate memory:

Listing 5.23.

Then we call the function for calculating eigenvalues and eigenvectors:

Listing 5.24.

and free memory at the end:

Listing 5.25.

If necessary we can sort the obtained values:

Listing 5.26.

The library has functions for obtaining eigenvalues and eigenvectors of nonsymmetric matrices:

Note that in the case of nonsymmetric matrices, eigenvalues and eigenvectors are complex numbers.

5.5 Numerical differentiation and integration

We consider examples of numerical differentiation of functions and approximate calculation of a definite integral using GSL.

5.5.1 Numerical differentiation

In the GSL library, the numerical calculation of derivatives of a function is performed using the finite difference method. There are three types of finite differences: central, forward, and backward.

Assume that we need to evaluate the numerical derivative of the function

at the points x = 5 and x = 0 and compare it with the exact analytical derivative

The function f(x) does not have real values for x < 0, and the derivative at the point x = 0 must therefore be evaluated using the forward difference:

Listing 5.27.

We start with including the header file to use functions for numerical differentiation:

Listing 5.28.

The function f(x) must be created as gsl_function:

Listing 5.29.

where &f is the pointer to the C function:

Listing 5.30.

Further, we define the variables for the calculated derivative and its absolute error. Then we call the function for evaluating the approximate derivative using the central difference:

Listing 5.31.

To call the function, it is necessary to specify as arguments the differentiable function, the point, and the step-size. The function returns the calculated derivative and its absolute error. We compare the obtained result with the analytical value:

Listing 5.32.

Similarly, we calculate the derivative at the point x = 0 using the forward difference:

Listing 5.33.

The output of the program is as follows:

The library also has a function to calculate the approximate derivative using the backward difference, gsl_deriv_backward.

5.5.2 Numerical integration

Here we shall discuss the numerical methods for integration available in the library, such as calculating definite integrals over finite or infinite intervals, integrals of functions with singularities or periodic functions, and so on.

Algorithms of numerical integration calculate an approximate value of a definite integral of the form

where w(x) is a weight function (in particular, w(x) = 1).

The names of the methods of numerical integration are based on the first letters of the methods (see Table 5.8).

In the following is a short list containing some methods of numerical integration:

  • QNG – nonadaptive Gauss-Kronrod integration, w(x) = 1;
  • QAG – simple adaptive integration;
  • QAGP – adaptive integration with known singular points;
  • QAWO – adaptive integration for oscillatory functions, w(x) = cos(ωx) or w(x) = sin(ωx).

Table 5.8. The letters in the name of methods for numerical integration.

Consider the numerical integration of the function with an algebraic-logarithmic singularity at the origin:

The program is as follows:

Listing 5.34.

It is necessary to include the header file in order to use the functions of numerical integration:

Listing 5.35.

Next, we define the integrand as the structure gsl_function:

Listing 5.36.

where &f is a pointer to the C function that describes the integrand. Further, we define the interval along with the absolute and relative error limits:

Listing 5.37.

If the absolute error limit abserr equals zero, only the relative error relerr will be considered.

Since the integrand has a singularity at the point x = 0, it is necessary to apply the QAGS algorithm. To employ adaptive integration, we must define the maximum number of subintervals and allocate memory for them:

Listing 5.38.

Further, we call the numerical integration function:

Listing 5.39.

At the end, we free memory and print the results:

Listing 5.40.

The output of the above program is as follows:

The error returned by the QAGS function is larger than the actual error. The defined relative error will be achieved with six additional subintervals of the integration.

GSL involves other methods of numerical integration; see the documentation for more details.

5.5.3 Numerical integration of multidimensional functions

For the numerical integration of multidimensional functions, GSL employs Monte Carlo methods. Using these methods, we can calculate integrals of the form

Three Monte Carlo integration methods are implemented in the library, i.e.

  • PLAIN is a simple Monte Carlo method, where integration points are chosen at random;
  • MISER is a method with integration points concentrated in regions of the highest variance;
  • VEGAS is a method similar to MISER, but it has a different logic of concentrating integration points.

Let us integrate numerically the following three-dimensional function:

where Γ(x) is the gamma function, available in GSL as the special function gsl_sf_gamma declared in the header file gsl_sf_gamma.h. The corresponding program is as follows:

Listing 5.41.

First, we include the header file of the plain Monte Carlo integration method:

Listing 5.42.

The integrand is defined as the structure gsl_monte_function:

Listing 5.43.

where &f is a reference to the integrand

Listing 5.44.

Here x is an array with the coordinates of a point, dim is the dimension, and params is a pointer to the parameters.

Next, we define the integration region and number of integration points:

Listing 5.45.

Since the Monte Carlo method uses random points, we must create a random number generator. We apply the default generator:

Listing 5.46.

All input parameters for integration of the function are now ready. Now we declare the variables for the results, allocate memory, and call the function of the standard Monte Carlo method:

Listing 5.47.

Further, we must free memory:

Listing 5.48.

Finally, we calculate the exact solution using the special function Γ (x) and compare it with the obtained result:

Listing 5.49.

After compilation and execution of the program, we get the following results:

To use the MISER and VEGAS methods, we must include gsl_monte_miser.h and gsl_monte_vegas.h, respectively. In this case, the general principle of the implementation of Monte Carlo methods is the same.

5.6 Cauchy problem for systems of ODEs

Now using the GSL library, we solve the Cauchy problem for systems of ordinary differential equations (ODEs)

To solve ODE systems, it is possible to apply the following functions of the GSL library:

  • the stepping functions, which compute the next time level using a fixed time step;
  • the evolution functions, which compute the next time level using the optimal time step;
  • the driver object, which computes the next time level while hiding the details of the solution process.

5.6.1 Stepping functions

These functions, which compute the solution at the next time level, are the fundamental components used in the other methods for solving ODE systems. The library provides explicit, implicit, and multistep methods, presented in Table 5.9. Implicit and multistep methods are used only with control functions, except for the implicit Bulirsch-Stoer method of Bader and Deuflhard.

Table 5.9. Stepping functions.

Name Description
gsl_odeiv2_step_rk2 Explicit embedded Runge-Kutta (2, 3) method
gsl_odeiv2_step_rk4 Explicit 4th order (classical) Runge-Kutta
gsl_odeiv2_step_rkf45 Explicit embedded Runge-Kutta-Fehlberg (4, 5) method
gsl_odeiv2_step_rkck Explicit embedded Runge-Kutta Cash-Karp (4, 5) method
gsl_odeiv2_step_rk8pd Explicit embedded Runge-Kutta Prince-Dormand (8, 9) method
gsl_odeiv2_step_rk1imp Implicit Gaussian first order Runge-Kutta
gsl_odeiv2_step_rk2imp Implicit Gaussian second order Runge-Kutta
gsl_odeiv2_step_rk4imp Implicit Gaussian 4th order Runge-Kutta
gsl_odeiv2_step_bsimp Implicit Bulirsch-Stoer method of Bader and Deuflhard
gsl_odeiv2_step_msadams Linear multistep Adams method in Nordsieck form
gsl_odeiv2_step_msbdf Linear multistep backward differentiation formula method in Nordsieck form

Consider the Cauchy problem for the ordinary differential equation:

with the exact solution u(t) = sin(t). The program for the numerical solution of this problem is as follows:

Listing 5.50.

As before, we include first the header file:

Listing 5.51.

This is the new implementation of the code for solving ODEs (variant 2). The old implementation with the header file gsl_odeiv.h is retained for backwards compatibility.

To define an ODE system, GSL uses the special structure gsl_odeiv2_system:

Listing 5.52.

The elements of the structure are a pointer to the function fi(t, u1, u2, ..., un), a pointer to the Jacobian matrix, the dimension of the system, and a pointer to the parameters of the system. The function F has the following form:

Listing 5.53.

Further, we set the time step and range:

Listing 5.54.

Since we consider the simplest approach, the explicit 4th-order Runge-Kutta method is chosen to calculate the solution at the next time level:

Listing 5.55.

Next, we define the variables for the solution and error and set the time loop to evaluate the solution:

Listing 5.56.

The function gsl_odeiv2_step_apply takes 6 mandatory and 2 optional parameters. The optional parameters are an input array containing the derivatives dui/dt at the current time level t and an output array with the computed derivatives for the next time level t + τ. These parameters are provided to avoid recomputing the derivatives dui/dt.

Finally, we free memory and print the results:

Listing 5.57.

The output of the program is as follows:

5.6.2 Evolution function

If we apply explicit methods, we should choose a time step small enough that the solution does not diverge. Therefore, the number of time levels may increase essentially. The evolution function is used to automatically select the optimal time step for advancing the solution forward one step.

Let us consider the Cauchy problem for the following system of ODEs:

with the exact solution u1(t) = sin(t), u2(t) = cos(t). Prepare the following program:

Listing 5.58.

Unlike the previous example, we consider the system of two equations, but it does not change the algorithm much. In addition, we employ the control and evolution functions:

Listing 5.59.

The selection of the time step depends on the absolute abserr and relative relerr errors. When we call the evolution function, the selection of the time step and the advance of the solution to the next time level is performed within the function:

Listing 5.60.

The output of the program is as follows:

After calculating 16 time levels, we obtain an error which is very close to the prescribed absolute error abserr = 1e-6.

5.6.3 Driver

GSL provides a special driver object for the easy use of implicit and multistep methods. We illustrate the application of driver objects by solving the following system of ODEs:

with the exact solution u1(t) = sin(t), u2(t) = cos(t). To solve this problem, we apply the following code:

Listing 5.61.

In contrast to the previous example with the evolution function, in the system of equations

Listing 5.62.

we specify the function that stores the Jacobian matrix ∂fi/∂uj and the vector of the time derivatives ∂fi/∂t:

Listing 5.63.

Here we introduce the special driver object:

Listing 5.64.

where we set the type of stepping function gsl_odeiv2_step_type*, the initial time step tau, as well as the absolute abserr and relative relerr errors. The driver object automatically initializes the evolution function, the control object, and the stepping function.

To evaluate the solution at each time level, we apply the driver object:

Listing 5.65.

The output of the program is as follows:

Of the three approaches to solving systems of ODEs, the last one is the easiest to implement and provides the most accurate results.

5.7 Solution of equations

The GSL library allows us to solve equations and systems based on polynomials or other functions, as well as to find roots and minimal values of functions.

5.7.1 Polynomials

The library functions for operating on polynomials are declared in the header file gsl_poly.h. To find the roots of polynomials, two methods are available: the analytical method for quadratic and cubic equations, and the iterative method for polynomials of general type.

Consider the quadratic equation

x² − 2x + 1 = 0,

with one real root x = 1. The program to find the root is as follows:

Listing 5.66.

The function gsl_poly_solve_quadratic(a, b, c, &x0, &x1) takes as parameters the coefficients of the quadratic equation and variables to store the results; it returns the number of real roots. In this case, the zero discriminant leads to two equal roots. If no real root is found, then the values of x0, x1 are not modified. If only one real root exists, then its value is stored in x0, whereas x1 is not modified.

The output of the program is

The library also has functions for the analytical solution of equations with complex roots and cubic equations.

To solve equations defined by polynomials of general type, functions based on the iterative method are employed. As an example, we consider the equation P(x) = x⁴ − 1 = 0 with the following program:

Listing 5.67.

Here the coefficient of the highest order must be nonzero. To store the results, the array z is introduced. Its length is twice the number of roots, since the roots are complex numbers. The function gsl_poly_complex_workspace_alloc allocates memory for a gsl_poly_complex_workspace structure; the size of this structure must be equal to the number of coefficients. The function gsl_poly_complex_workspace_free frees this memory after solving the equation.

Listing 5.68.

Listing 5.69.

The function gsl_poly_complex_solve finds the roots of the equation with coefficients defined by the array a and returns the roots in the array z.

Listing 5.70.

The output:

We can observe that four complex roots of the equation are found; the first number corresponds to the real part, the second one is the imaginary part.

5.7.2 One-dimensional root-finding

Here we discuss the functions for finding roots of arbitrary one-dimensional functions. These functions are declared in the header file gsl/gsl_roots.h.

The library contains iterative methods for finding roots of one-dimensional functions. Iterative methods are convenient for achieving the desired tolerance of the solution and for inspecting the intermediate iterations. There are two classes of root-finding algorithms: the root bracketing and the root polishing methods.

Bracketing algorithms search a bounded region where a root is located. To find the root with the desired tolerance, bracketing algorithms reduce the region iteratively. This permits accurate estimation of the error of the solution. These algorithms can be applied if we have a single root or an odd number of roots in the region; however, only one root will be found.

Polishing algorithms attempt to improve an initial guess for the root at each iteration. Their convergence rate depends essentially on the initial guess for the root; they may diverge if this guess is not close enough. These algorithms have no restriction on the number of roots, but they also find only one root. Before using a polishing algorithm, we must visually analyze the function and indicate the correct initial guess.

To solve an equation using bracketing and polishing algorithms, we use the gsl_root_fsolver and gsl_root_fdfsolver structures, respectively. Both classes include three main steps during the iteration:

  • initialization of solver state s for the algorithm T;
  • updating the state s using the algorithm T;
  • checking the state s for convergence and repeating the iteration if necessary.

Consider, for example, the quadratic equation

x² − 4 = 0,

and solve it using the Brent bracketing algorithm.

Listing 5.71.

We start by preparing the function for gsl_function. For polishing algorithms, we must also specify the derivative of the function. Here we declare the type of the solver as the Brent bracketing algorithm:

Listing 5.72.

Table 5.10 presents the full list of available solvers.

Table 5.10. Root-finding algorithms.

Solver Description
Bracketing algorithms
gsl_root_fsolver_bisection The bisection method
gsl_root_fsolver_falsepos The false position method
gsl_root_fsolver_brent The Brent method
Polishing algorithms
gsl_root_fdfsolver_newton The standard Newton method
gsl_root_fdfsolver_secant The secant method (a simplified Newton method)
gsl_root_fdfsolver_steffenson The Steffenson method

Using the function gsl_root_fsolver_alloc, we allocate memory for the state of the solver:

Listing 5.73.

The function gsl_root_fsolver_set specifies the parameters of the solver:

Listing 5.74.

The function gsl_root_fsolver_iterate(s) performs the iteration and updates the state s. The function gsl_root_test_interval checks convergence and returns the status of the found root.

The output of the program:

It is easy to see that the root of the equation is found with the given accuracy in 7 iterations.

5.7.3 Multidimensional root-finding

The multidimensional root-finding can be treated as solving a nonlinear system with n unknowns and n equations:

In this case, bracketing algorithms are not applicable. All available algorithms are versions of the Newton method and require an initial guess. The structure of the solvers permits combining them. The algorithms under consideration are divided into two classes: algorithms that use derivatives and methods that do not.

For example, let us study the Rosenbrock system of equations:

f1(x1, x2) = a(1 − x1),   f2(x1, x2) = b(x2 − x1²),   a = 1,   b = 10,

which has one root at the point (x1, x2) = (1, 1).

Listing 5.75.

First, we define the general system of equations with parameters:

Listing 5.76.

For solvers which use the derivatives, we employ the structure gsl_multiroot_function_fdf. Next, we initialize the solver:

Listing 5.77.

Table 5.11 demonstrates the list of available solvers.

Table 5.11. Multidimensional root-finding algorithms.

Solver Description
Without derivatives
gsl_multiroot_fsolver_hybrids The Hybrid algorithm, which replaces calls to the Jacobian function by its finite difference approximation
gsl_multiroot_fsolver_hybrid The Hybrid algorithm without internal scaling
gsl_multiroot_fsolver_dnewton The discrete Newton algorithm
gsl_multiroot_fsolver_broyden The Broyden algorithm
Using derivatives
gsl_multiroot_fdfsolver_hybridsj The Powell hybrid method
gsl_multiroot_fdfsolver_hybridj The unscaled version of hybridsj
gsl_multiroot_fdfsolver_newton The standard Newton method
gsl_multiroot_fdfsolver_gnewton The modified Newton method

Here is the output of the program:

The error decreases with each iteration and the root is found in 11 iterations.

5.7.4 One-dimensional minimization

The minimization functions available in GSL, like the root-finding functions, are iterative methods. Each algorithm stores its state, and therefore we can switch between algorithms if necessary. The algorithms work in a bounded region and return only one minimum value at a time. The minimization functions are declared in the header file gsl/gsl_min.h.

The following example finds the minimal value of the function f(x) = cos(x) − 1:

Listing 5.78.

We begin by selecting the minimization algorithm:

Listing 5.79.

Table 5.12 presents the list of available algorithms. Next, we allocate memory for the selected minimizer and specify its parameters:

Listing 5.80.

Further, we perform the iteration using the function gsl_min_fminimizer_iterate and get the current value of the minimum and the current lower and upper bounds of the interval.

Listing 5.81.

Table 5.12. One-dimensional minimization algorithms.

Algorithm Description
gsl_min_fminimizer_goldensection The golden section algorithm
gsl_min_fminimizer_brent The Brent minimization algorithm, which combines parabolic interpolation with the golden section algorithm
gsl_min_fminimizer_quad_golden The Brent algorithm, which uses the safeguarded step-length algorithm of Gill and Murray

The output of the program is

The minimal value with the given accuracy 0.001 is obtained in 7 iterations.

5.7.5 Multidimensional minimization

Now we consider functions for finding the minimal values of multidimensional functions. The GSL library provides iterative methods of minimization. Minimizers can be combined, since the intermediate states between iterations are fully stored. Minimization algorithms can find only one value of the minimum; if there is more than one, a minimizer returns the first value found.

In this case, only polishing algorithms are used to find the minimal value. The polishing algorithms are divided into two classes: with and without using derivatives.

As an example, we consider the function

with the minimal value 0 at the point (0, 0).

Listing 5.82.

First, we describe the structure my_fdf for the function and its derivative. Further, we define the function and specify the parameters.

After setting the initial guess, we select the minimization algorithm (the available algorithms are listed in Table 5.13):

Listing 5.83.

Next, we allocate memory for the selected minimizer and set its parameters:

Listing 5.84.

Then we run the loop until the minimal value is found. Inside the loop, we perform the iteration gsl_multimin_fdfminimizer_iterate and check the state of the minimizer.

Table 5.13. Multidimensional minimization algorithms.

Algorithm Description
With derivatives
gsl_multimin_fdfminimizer_conjugate_fr The Fletcher-Reeves conjugate gradient algorithm
gsl_multimin_fdfminimizer_conjugate_pr The Polak-Ribiere conjugate gradient algorithm
gsl_multimin_fdfminimizer_vector_bfgs The vector Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm
gsl_multimin_fdfminimizer_vector_bfgs2 The modified vector Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm
gsl_multimin_fdfminimizer_steepest_descent The steepest descent algorithm
Without derivatives
gsl_multimin_fminimizer_nmsimplex The simplex algorithm of Nelder and Mead
gsl_multimin_fminimizer_nmsimplex2 The modified simplex algorithm of Nelder and Mead
gsl_multimin_fminimizer_nmsimplex2rand The variation of the nmsimplex2

The following is the output of the program:

The minimal value of the function is achieved in 11 iterations.

5.8 Interpolation and approximation of functions

This section describes the functions and methods for the approximate reconstruction of functions. Namely, we consider interpolation, least-squares fitting, and basis splines.

5.8.1 Interpolation

Interpolation is a method of constructing a function f(x) using its known values at several points of an interval. The interpolation functions are declared in the header files gsl_interp.h and gsl_spline.h.

We now consider an example with interpolation of data using the cubic spline, where 5 points are given: xi = i, yi = i + ei, i = 0,1, 2, 3, 4:

Listing 5.85.

For each value xi+1/2 = xi + 0.5, we get the following interpolated values yi+1/2:

The interpolation function for a given data set is stored in the object gsl_interp. To create a new interpolation function, we apply the function gsl_interp_alloc, which allocates memory for an interpolation object of type t with N given points. Next, to initialize the object interp, we call

Listing 5.86.

Listing 5.87.

At the end, the object interp must be removed:

Listing 5.88.

In this example, we use the cubic spline interpolation method:

Listing 5.89.

GSL provides the standard interpolation types, such as linear interpolation, polynomial interpolation, cubic spline interpolation, Akima spline interpolation and so on. These interpolation types are interchangeable. Interpolation can be defined for both natural and periodic boundary conditions. Table 5.14 presents the list of available interpolation types.

Table 5.14. Interpolation types.

Type Description
gsl_interp_linear Linear interpolation
gsl_interp_polynomial Polynomial interpolation
gsl_interp_cspline Cubic spline with natural boundary conditions
gsl_interp_cspline_periodic Cubic spline with periodic boundary conditions
gsl_interp_akima Nonrounded Akima spline with natural boundary conditions
gsl_interp_akima_periodic Nonrounded Akima spline with periodic boundary conditions

The state of an interpolation process is stored in an object gsl_interp_accel, which is a sort of iterator for interpolation. It stores the previous state of the process. Thus, if the subsequent interpolation point lies in the same range, then its index value can be returned instantly.

To create an accelerator object, it is necessary to apply the function

Listing 5.90.

This object must also be removed:

Listing 5.91.

To obtain the interpolated value yi, we call the function

Listing 5.92.

Here interp is the interpolation object, x, y denote the given data set, xi is a new value of x, and acc is the accelerator.

There are also additional functions for calculating derivatives and integrals of interpolated functions (see Table 5.15).

Table 5.15. Functions for calculating derivatives and integrals.

Function Description
gsl_interp_eval_deriv The derivative of an interpolated function for a given point x
gsl_interp_eval_deriv2 The second derivative of an interpolated function for a given point x
gsl_interp_eval_integ The numerical integral of an interpolated function over the range (a,b)

These functions require pointers to the arrays x and y at each call. As a simplification, GSL provides functions that are equivalent to the corresponding functions of gsl_interp and store a copy of the data in the object gsl_spline. This eliminates the need to pass the arrays x and y as arguments for each calculation. These functions are defined in the header file gsl_spline.h.

5.8.2 Least-squares method

Interpolating a large set of experimental data obtained with measurement errors is impractical. The least-squares method is an approach that determines an approximate function by minimizing the sum of squared deviations between the values of this function and the experimental data.

Functions of the least-squares method are declared in the header file gsl_fit.h.

The approximate function Y(c, x) is found by minimizing χ², i.e. the weighted sum of squared residuals over the n given points (xi, yi) for the function Y(c, x):

χ² = Σi wi (yi − Y(c, xi))².

Here wi = 1/σi² is a weight factor, and σi stands for the experimental error of yi. For unweighted data, the sum χ² is calculated without any weight factors.

The fitting functions return the best parameters c = c0, c1, ... and corresponding covariance matrix, which measures the statistical errors.

The following program calculates the linear fitting function Y = Y(c, x) = c0 + c1x for the given data set using the least-squares method:

Listing 5.93.

The output is as follows:

In this example, we use the function gsl_fit_wlinear, which evaluates the coefficients of the linear regression Y(c, x) = c0 + c1x for weighted data:

Listing 5.94.

Here x and y are the data arrays of length n, 1 is the stride, and w is the vector of weights of length n with stride 1. The covariance matrix of (c0, c1) is estimated from the scatter of the points around the fitted function and is returned via the parameters (cov00, cov01, cov11), whereas chisq is the weighted sum of squared residuals χ².

If we want to calculate the coefficients c0, c1 of the best approximation Y(c, x) = c0 + c1x for data without weights, we use the function gsl_fit_linear.

Using the coefficients (c0, c1) and the covariance parameters (cov00, cov01, cov11), we can calculate the value y of the linear regression Y(c, x) = c0 + c1x at a point x together with its standard deviation using the function gsl_fit_linear_est. Similarly, there are functions to determine the coefficient of the function Y = c1x.

The library provides functions for least-squares fitting with multiple parameters using models of the form y = Xc, where y is a vector of n observations, X is an n × p matrix of predictor variables, and c is the vector of p best-fit parameters. The value χ² is calculated as

χ² = Σi wi (yi − Σj Xij cj)².
To apply this fitting, we need to formulate the n × p matrix X defined by any number of functions and variables. For example, to fit to a polynomial in x of order p − 1, the matrix is defined as

Xij = xiʲ,   j = 0, 1, …, p − 1.

To fit to a set of sinusoidal functions with fixed frequencies ω1, ω2, …, ωp, we apply

Xij = sin(ωj xi).

To fit to p independent variables x1, x2, …, xp, we use

Xij = xj(i),

where xj(i) is the i-th value of the variable xj.

To apply these functions, we must include the header file gsl_multifit.h.

5.8.3 B-splines

B-splines are used as basis functions to build smoothing curves for a large set of data. B-splines differ from interpolation splines in that the resulting curve is not required to pass through given points.

To build a B-spline, the abscissa axis is divided into some number of intervals, where the endpoints of each interval are called breakpoints. These points are then converted to knots by setting various continuity and smoothness conditions at each interface.

Basis splines of order k are defined by the recurrence relation

Bi,1(x) = 1 for ti ≤ x < ti+1 and 0 otherwise,

Bi,k(x) = ((x − ti)/(ti+k−1 − ti)) Bi,k−1(x) + ((ti+k − x)/(ti+k − ti+1)) Bi+1,k−1(x)

for i = 0, 1, …, n − 1, where ti are the knots. For k = 4, we get a cubic B-spline. This recurrence relation can be numerically evaluated using the de Boor algorithm.

If appropriate knots are defined on an interval [a, b], then the basis functions of the B-spline generate a complete set on this interval. Therefore, we can expand a smoothing function as

f(x) = Σi ci Bi,k(x)

given the data pairs (xj, f(xj)). The coefficients ci can be easily obtained using the least-squares method.

Let us employ the linear least-square approximation based on the basis functions of cubic B-splines with uniform breakpoints. The data is generated by the function with Gaussian noise:

Listing 5.95.

The solution of this problem is shown in Figure 5.1.

Fig. 5.1. The result of constructing cubic B-spline.

To evaluate the basis functions of the B-spline, we begin by allocating memory:

Listing 5.96.

At the end, we must free memory by calling the function

Listing 5.97.

If we need to use derivatives of the B-spline, we can also employ the function gsl_bspline_deriv_workspace to allocate memory and the function gsl_bspline_deriv_free to free memory.

Listing 5.98.

Using the function gsl_bspline_knots_uniform, we set uniform breakpoints on the given interval [0, 10] and construct the corresponding knot vector using the parameter nbreak. The knots are stored in w->knots.

The function gsl_bspline_eval evaluates all the basis functions of the B-spline at the position xi and stores them in the vector B. The vector B must be of length n = nbreak + k − 2. There are also functions for evaluating the derivatives of the basis functions.

Listing 5.99.

5.9 Fast Fourier transforms and Chebyshev approximations

The GSL library provides functions for performing fast Fourier transforms and Chebyshev approximations.

5.9.1 Fast Fourier transform

Fast Fourier transforms are algorithms for the efficient calculation of the discrete Fourier transform of a data sequence.

Discrete Fourier transforms usually arise when approximating continuous Fourier transforms, where a function is given at discrete intervals. The discrete Fourier transform can be presented as a matrix-vector multiplication Wz. For a vector of length n, this multiplication takes O(n²) operations. A fast Fourier transform factorizes the matrix W into smaller submatrices. If n can be presented as a product n1 n2 … nm, then the discrete Fourier transform can be calculated in O(n Σi ni) operations. If all ni = 2, then the fast Fourier transform is performed in O(n log2 n) operations.

All fast Fourier transform functions are divided into three types: forward, inverse, and backward. The definition of the forward Fourier transform is

x̃k = Σj zj exp(−2πijk/n),   k = 0, 1, …, n − 1,

and the definition of the inverse Fourier transform is

zj = (1/n) Σk x̃k exp(2πijk/n),   j = 0, 1, …, n − 1.
Thus, a call to the function gsl_fft_complex_forward, followed by a call to the function gsl_fft_complex_inverse, must return the input data with numerical errors.

The backward transform corresponds to the inverse transform without the factor 1/n; it is used to avoid additional division if there are no restrictions on the result of the function.

The library supports both radix-2 and mixed-radix fast Fourier transform functions.

Here we consider the fast Fourier transform of a pulse. In this example, we apply the fast Fourier transform function for complex data, which uses the radix-2 Cooley-Tukey algorithm.

Listing 5.100.

To employ functions of fast Fourier transforms, we include the header file:

Listing 5.101.

Next, to perform a fast Fourier transform, we use the following function:

Listing 5.102.

Here data is the array of complex data, 1 is the stride, and n is the length of the transform. The stride determines the spacing of the data being processed; for example, if the stride equals 2, then every second value will be processed.

For the inverse fast Fourier transform, we apply a similar function:

Listing 5.103.

The result is depicted in Figure 5.2.

The library also provides functions for performing fast Fourier transforms of real data.

5.9.2 Chebyshev approximations

A Chebyshev approximation is a truncation of the series

f(x) = c0/2 + Σn≥1 cn Tn(x),

where the Chebyshev polynomials

Tn(x) = cos(n arccos(x))

form an orthogonal basis of polynomials on the interval [−1, 1].

Fig. 5.2. The pulse and its fast Fourier transform.

Let us now consider how to apply Chebyshev approximations.

Listing 5.104.

First, we must include the header file for working with the functions of Chebyshev approximations:

Listing 5.105.

Now we set the function similar to the pulse function from the example of the fast Fourier transform:

Listing 5.106.

Next, we need to allocate memory for the Chebyshev series, for example of the order 40:

Listing 5.107.

The Chebyshev routines operate on the gsl_function type, so we wrap the given function in this structure:

Listing 5.108.

We calculate the Chebyshev approximation for the function over the range [0,1]. To do this, we use the function gsl_cheb_init. The function takes a pointer to the Chebyshev series, the function f, and the range:

Listing 5.109.

Further, we evaluate the Chebyshev series at each point:

Listing 5.110.

At the end, we free allocated memory:

Listing 5.111.

The result is presented in Figure 5.3.

The main functions for working with Chebyshev polynomials are presented in Table 5.16.

Fig. 5.3. Input function and its Chebyshev approximation.

Table 5.16. Functions for Chebyshev approximations.

Function Description
gsl_cheb_order The function returns the order of Chebyshev series
gsl_cheb_size The function returns the size of the Chebyshev coefficient array
gsl_cheb_eval The function evaluates the Chebyshev series at a given point
gsl_cheb_eval_err The function evaluates the Chebyshev series at a given point and returns the estimates for the result and its absolute error
gsl_cheb_eval_n The function evaluates the Chebyshev series at a given point, using at most the given order
gsl_cheb_eval_n_err Similar to gsl_cheb_eval_n, but also returns the estimates for the result and its absolute error
gsl_cheb_calc_deriv The function calculates the derivative of the Chebyshev series
gsl_cheb_calc_integ The function calculates the integral of the Chebyshev series