5 Electrooptics – Military Avionics Systems

5.1 Introduction

The use of electrooptic sensors in military avionics systems has steadily evolved over the past three decades. Infrared (IR) missiles were originally produced during the 1950s with missiles such as Sidewinder in the United States and Firestreak and Red Top in the United Kingdom. Television (TV) guidance was used on guided missiles such as TV-Martel developed jointly by the United Kingdom and France during the 1960s and AGM-62 Walleye in the United States. Lasers were used for target illumination during the latter stages of the Vietnam War. Forward looking IR (FLIR) imaging systems were developed and deployed during the 1970s, and third-generation systems are now taking the field. Infrared track and scan (IRTS) systems followed. Now, integrated systems are in operation that combine a number of sensor types to offer a complete suite of capabilities.

This chapter describes the following electrooptic technologies that are to be found on a range of modern military, law and order and drug enforcement platforms:

  • Television (TV) – day, low-light and colour (section 5.2);
  • Night-vision goggles (NVG) (section 5.3);
  • IR imaging including forward looking infrared (FLIR) (section 5.4);
  • IR tracking systems including IR-guided missiles and infrared track and scan (IRTS) (section 5.5);
  • Lasers – target illumination, range-finding and smart bomb guidance (section 5.6);
  • Integrated systems (section 5.7) (usually carried in external pods or multiaxis swivelling turrets; on stealth aircraft, carried internally to preserve aircraft low-observability characteristics).

The characteristics and range of all the electromagnetic sensors used in modern military avionics systems were described in Chapter 2 (Figure 2.2), spanning 10 decades of the electromagnetic spectrum. Electrooptic sensors and systems cover the top two decades, from 10⁴ GHz to 10⁶ GHz in frequency (Figure 5.1). In fact, categorising devices in Hz at this point in the spectrum becomes unwieldy because of the huge numbers involved, so wavelength is more usually used. Wavelength is referred to in microns, where 1 μm = 1 × 10⁻⁶ m, or one-millionth of a metre. In the visible light portion of the spectrum that the human eye uses, angstrom units are sometimes used, especially in the scientific community, where 1 Å = 10⁻¹⁰ m or 10⁻⁴ μm.
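The frequency–wavelength correspondence quoted above can be checked with λ = c/f. A minimal sketch in Python (band edges as given in the text):

```python
# Relate frequency (GHz) to wavelength (microns) via lambda = c / f.
C = 2.998e8  # speed of light, m/s

def freq_ghz_to_wavelength_um(freq_ghz):
    """Wavelength in microns for a frequency given in GHz."""
    return C / (freq_ghz * 1e9) * 1e6  # metres -> microns

# The electrooptic region spans roughly the top two decades, 10^4 to 10^6 GHz:
print(freq_ghz_to_wavelength_um(1e4))  # ~30 um, the far-IR end
print(freq_ghz_to_wavelength_um(1e6))  # ~0.3 um (300 nm), the near-UV end
```

This confirms that the EO region corresponds to wavelengths from roughly 0.3 to 30 μm, bracketing the visible and IR bands discussed below.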

Figure 5.1 Electromagnetic Spectrum for electrooptic systems.

The three specific bands of interest in Figure 5.1 are:

  1. The visible light spectrum from 750 to 400 nanometres covering red/orange/yellow/green/blue/indigo/violet.
  2. The IR spectrum from 1 to 14 μm which itself is subdivided into three regions:
    • Shortwave IR: 1–2.5 μm;
    • Medium-wave IR: 3–5 μm;
    • Long-wave IR: 8–14 μm.
  3. The spectrum within which airborne lasers operate, from 193 nanometres near the ultraviolet part of the spectrum to 10 600 nanometres approaching the far IR spectrum. Of particular interest is the wavelength at which the neodymium-doped yttrium aluminium garnet (Nd:YAG) laser operates, namely 1064 nanometres, this being one of the most common lasers used.

As the region of the spectrum in which electrooptic sensors operate is close to, and includes, visible light, these sensors experience many of the same shortcomings to a greater or lesser extent: obscuration due to water vapour or other gases, scattering due to haze and smoke, etc. The near-IR part of the spectrum, such as the shortwave IR region, suffers most, while sensors operating in the long-wave IR band are less affected.

Therefore, a major problem confronting the use of sensors in the IR region in particular is the severe attenuation that occurs in certain parts of the spectrum, allowing only certain windows to be used. This is similar to the radar attenuation described in Figure 3.4 of Chapter 3. The IR transmission characteristics in the atmosphere are presented in Figure 5.2. This shows the percentage transmission over the IR band from 1 to 14 μm at sea level. As shown in the figure, at sea level there are a number of areas where the attenuation is significant, particularly in a region between 6 and about 7.6 μm where there is no transmission at all – mainly owing to water vapour and CO2. At altitude the situation is much improved, although there are one or two unfavourable attenuation ‘notches’, again owing to H2O or CO2. Therefore, atmospheric attenuation is most likely to affect systems being used at low level, while air-to-air missiles being used at medium or high altitude will be less affected.

In fact, IR systems tend to use the IR windows or bands already described to avoid the worst of the problem using the SWIR (1–2.5 μm), MWIR (3–5 μm) and LWIR (8–14 μm) regions, and all the sensors utilised in avionics systems operate in these bands.

Figure 5.2 Transmission characteristics of infrared (sea level).

5.2 Television

The visible light sensors used in electrooptics are available to assist and augment the platform operator’s ‘Mk 1 eyeball’. The main categories used are:

  • Direct vision optics, effectively a direct optics system offering image magnification in the same way as a conventional telescope or binoculars;
  • Television (TV) imaging in monochrome and, more recently, colour;
  • Low-light TV (LLTV) for viewing in low-light conditions.

TV camera sensors used in military applications existed for many years prior to the 1980s when the sensing technology was imaging tubes. Charge coupled devices (CCDs) have now largely replaced the imaging tube, and these are described below.

The CCD imaging device comprises a number of resolution cells or pixels that are sensitive to incident light. Each pixel comprises elements of polysilicon placed upon a layer of silicon dioxide. Below the silicon dioxide is a silicon substrate layer. As the incident light arrives at the polysilicon, a ‘well’ of charge is built up according to the level of the incident light. The pixels are arranged in X rows and Y columns to provide the two-dimensional CCD array as shown in Figure 5.3. In each pixel the charge according to the amount of incident light is captured by a potential barrier that surrounds the well and maintains the charge in place. The pixels are separated by an interelement gap and respond to light below a wavelength of 1100 nm (visible light lies between 400 and 750 nm). Therefore, visible spectrum energy incident upon the array will provide a pattern of charge within each pixel of the CCD array that corresponds to the image in view. The CCD array is an optical plane that preserves the image in the same way as a frame of film, and developments of this technology are now to be found in commercial digital cameras – replacing wet-film technology.

Figure 5.3 Principle of operation of the charge coupled device.

Figure 5.4 Full frame CCD.

Once the array of retained charge has been established, the image needs to be scanned. There are a number of ways of accomplishing this, two of which are described below:

  • Full frame devices;
  • Frame transfer devices.

A full frame CCD device is shown in Figure 5.4. The charge associated with a row of pixels is shifted sequentially to the serial register, whereupon the row is read out as a stream of data. This is repeated on a row-by-row basis until the complete array of pixels has been read off the imaging device or chip. As the parallel register is used for both imaging and read-out, a mechanical shutter or synchronised strobe illumination must be used to preserve the image. This method is simple to operate and provides images of the highest resolution and density.
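The row-by-row read-out can be sketched in Python as follows. This is a toy model (real devices clock analogue charge packets, not lists), but the sequencing is the same:

```python
def read_full_frame(parallel_register):
    """Read out a full frame CCD: shift each row into the serial
    register in turn, then clock it out pixel by pixel."""
    stream = []
    for row in parallel_register:       # shift one row to the serial register
        serial_register = list(row)
        stream.extend(serial_register)  # clock the row out as a data stream
    return stream

# A 2 x 3 toy 'image' of charge levels
frame = [[10, 20, 30],
         [40, 50, 60]]
print(read_full_frame(frame))  # [10, 20, 30, 40, 50, 60]
```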

The frame transfer architecture is shown in Figure 5.5 and is similar to the full frame architecture except that a separate, identical parallel register, called a storage array, is provided that is not light sensitive. The captured image is quickly read into the storage array, from whence the image may be read out as before while the next image is being formed. The advantage of this approach is that a shutterless or strobeless imaging process is possible, resulting in higher frame rates. Some performance is sacrificed, as imaging is still occurring while the image is being transferred to the storage array, resulting in smearing of the image. Twice the area of silicon is required to fabricate an imaging device of comparable imaging coverage, so the frame transfer array provides lower performance at a higher cost.
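The smearing penalty can be estimated roughly as the ratio of transfer time to integration time; the figures below are illustrative assumptions, not data for any particular device:

```python
def smear_fraction(transfer_time_s, integration_time_s):
    """Approximate fraction of the exposure accumulated while the image
    is being shifted into the storage array (illustrative model only)."""
    return transfer_time_s / integration_time_s

# e.g. an assumed 0.5 ms transfer against a 20 ms (50 Hz) integration time
print(round(smear_fraction(0.5e-3, 20e-3), 3))  # 0.025, i.e. ~2.5% smear
```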

The precise optical configuration of a CCD depends entirely upon the intended operational use. CCD devices may be used in a tactical platform such as a battlefield helicopter or as a high-resolution imaging system for a theatre reconnaissance aircraft or UAV. The imaging requirements and field of view (FOV) will be different in each case. Looking at the capabilities of two CCD imaging systems illustrates this fact. The examples chosen are the systems on board the AH-64 C/D Longbow Apache and RQ-4 Global Hawk:

Figure 5.5 Frame transfer CCD.

  1. AH-64 C/D Apache. The target acquisition designation sight (TADS) or AN/ASQ-170 is part of the overall turreted EO sensor system on the Apache battlefield helicopter. The TV element is part of the lower turret assembly which is described more fully in Chapter 9. The display is presented to the pilot/gunner by means of a TV raster display, originally on a CRT, but more recently this has been upgraded to a colour active matrix liquid crystal display (AMLCD). Military display technology is described in Chapter 11. The FOV options available for the TV are:
    Underscan 0.45°
    Narrow 0.9°
    Wide 4.0°

    The TADS turret is capable of traversing ± 120° in azimuth and + 30 to –60° in elevation with respect to the aircraft axes. Aircraft manoeuvring may reduce the turret FOV in certain circumstances.

  2. RQ-4 Global Hawk. The CCD imaging system is a theatre reconnaissance high-resolution imaging system designed to provide high-quality images for intelligence purposes. The field of view is specified in milliradians (mrad), where a radian is equivalent to 57.3°, therefore 1 mrad is equivalent to 0.0573° (1 mrad is the angle subtended by an object of 1 m length at a range of 1000 m). The specified FOV for the CCD imaging device, part of the integrated sensor system (ISS) is 5.1 × 5.2 mrad (0.3°×0.3°). The platform is capable of providing the following coverage in a 24 h period:

    The national imagery interpretability rating scale (NIIRS) is an imaging classification for radar, IR and visual imaging systems. The scale is from 0 to 9, where 0 represents unusable and 9 represents the highest quality. The scale equates to qualitative criteria to which the image interpreter can easily relate. For example: NIIRS 6 quality allows the spare tyre on a medium-sized truck to be identified; NIIRS 7 quality allows individual railway sleepers (rail ties) to be identified.
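The milliradian definition above lends itself to quick footprint arithmetic: for small angles, the ground footprint is approximately the angle in radians multiplied by the range. A sketch using the quoted 5.1 × 5.2 mrad FOV (the slant range is an illustrative assumption):

```python
import math

def footprint_m(fov_mrad, range_m):
    """Small-angle approximation of the ground footprint of a field of view."""
    return fov_mrad * 1e-3 * range_m

print(footprint_m(1.0, 1000))        # 1 mrad at 1000 m subtends 1 m, as stated
print(footprint_m(5.1, 20_000))      # ~102 m across the FOV at an assumed 20 km
print(round(math.degrees(1e-3), 4))  # 1 mrad = 0.0573 degrees
```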

Low-light TV is accomplished by the use of image intensifier tubes or devices that amplify the image incident from the scene in view. Today, the use of night-vision goggles (NVGs) provides operators with a look-up, look-out capability that is far more flexible than the use of a dedicated low-light TV (LLTV) sensor, and this technology is described in the next section.

5.3 Night-vision Goggles

Night-vision devices gather ambient light and intensify the image by using a photocathode device to produce electrons from the photons in the incident light. The electrons are multiplied and then used to bombard a phosphor screen that changes the electrons back to visible light. The sensors are normally configured as a monocular or binocular ‘goggle’ which is attached to the operator’s helmet, and through which he views the external low-light scene. However, as will be described in Chapter 11, care has to be exercised to ensure that the NVGs are compatible with the cockpit displays and night lighting.

Figure 5.6 shows the principle of operation of the image intensifier. The unenhanced image being viewed passes through a conventional objective lens to focus the incoming photons that represent the image upon the photocathode. The gallium arsenide (GaAs) photocathode produces a stream of electrons that are accelerated with the assistance of an applied voltage and fired towards the microchannel plate. Electrons pass through the microchannel plate – a small glass disc that has many microscopic holes (or channels) – and the effect is to multiply the electrons by a process called cascaded secondary emission. This multiplies the electrons by a factor of thousands while registering each of the channels accurately with the original image.

The electrons then impact upon a phosphor screen that reconstitutes the electrons into photons, thereby reproducing the original image but much brighter than before. The image created on the phosphor screen passes through an ocular lens that allows the operator to focus and view the image. As the image is recreated using a green phosphor, the intensified image has the characteristic green hue of the object being viewed.

Figure 5.6 Principle of image intensification.

Night-vision devices (NVDs) were originally used over 40 years ago and employed an IR source to illuminate the target, which was then viewed using an image intensifier. Apart from the low reliability of the early imaging systems, the use of an IR illumination source clearly provided the enemy with information about the user’s whereabouts. This active technology is sometimes termed generation 0, and later systems obviated the need for IR illumination. This led to the development of passive imaging devices which today are classified by four generations, each generation providing significantly improved performance over the preceding one. A summary of these generations and the technology advancements associated with each is given below:

  1. Generation 1. This generation used the ambient illumination provided by the moon and stars. The disadvantage was that the devices did not work well on cloudy or moonless nights. Essentially the same technology was used as for generation 0 so that poor reliability was also an issue.
  2. Generation 2. Major improvements in image intensifier technology enhanced the performance of the image intensifier and improved the reliability. The biggest improvement was the ability to see images in very low ambient light conditions such as may be found on a moonless night. This technology introduced the microchannel plate which not only magnified the electrons but created an image with less distortion because of the ‘channelling’ effect.
  3. Generation 3. This generation added further technology improvements, notably manufacturing the photocathode from gallium arsenide (GaAs), a material that is very efficient at converting photons into electrons and makes the image sharper and brighter. Another improvement was coating the microchannel plate with an ion barrier to improve reliability. Generation 3 technology is typical of that in use with the US military today.
  4. Generation 4. Generation 4 is referred to as ‘filmless and gated’ technology. The ion barrier film was removed to allow more electrons to pass through the microchannel plate, thereby improving the tube response. Gating of the power supply allows the tube to respond more quickly to rapid changes in lighting conditions.

The use of NVGs is usually achieved by clamping them on to the operator’s helmet as shown in Figure 5.7.

Figure 5.7 Helmet with mounted night-vision goggles. (Infrared 1)

Figure 5.8 shows a typical NVG image of an AV-10 in flight. Therefore, as well as providing images to survey and attack an enemy, NVGs can be of considerable use in allowing friendly forces to operate covertly, such as air-to-air refuelling at night without lights.

More recently, the use of helmet-mounted displays (HMDs) incorporating night-vision devices has been adopted in the military combat aircraft community, and these are discussed in detail in Chapter 11.

5.4 IR Imaging

IR imaging by day or by night has become one of the most important sensing technologies over the past 30 years. In that time, technology has advanced appreciably up to the point where the quality of IR imaging and visible light imaging is virtually indistinguishable.

As has already been described, the visible light spectrum from 400 to 750 nanometres, SWIR from 1000 to 2500 nanometres (1–2.5 μm), MWIR from 3000 to 5000 nanometres (3–5 μm) and LWIR from 8000 to 14 000 nanometres (8–14 μm) are very close to one another in the electromagnetic spectrum. It is therefore hardly surprising that many of their characteristics are very similar.

Figure 5.8 Typical NVG image (AV-10 in flight). (Infrared 1)

Figure 5.9 Black body radiation.

Sensing in IR wavelengths is basically about sensing the radiation of heat. All objects radiate heat primarily depending upon their temperature but also to some extent upon the material and the nature of the surface. In classical physics the emission of thermal energy is referenced to that from a black body which is the ideal thermal emitter. Figure 5.9 shows typical plots of radiated energy versus wavelength for a black body whose temperature is 900 K (627°C) and 1000 K (727°C) respectively. The higher the temperature, the higher are the levels of radiated energy. It can also be seen that the peak value of the radiated energy moves to the left (decreases in wavelength) the hotter the object becomes. This characteristic is called Wien’s law and will be examined in more detail shortly.

The problem with this model is that not all objects are black and are perfect radiators. This can be accounted for by applying an emissivity coefficient that corrects for an imperfect radiator. Table 5.1 tabulates the emissivity coefficient for some common materials.

It can be seen that most building materials have fairly high emissivity coefficients. On the other hand, metals have a relatively low value when polished, but this increases appreciably when the surface oxidises. Aluminium is slightly different: its emissivity is much higher when anodised (0.77) than when the surface is merely oxidised (0.11).

The other effect that Figure 5.9 illustrates is the fact that, if an object gets sufficiently hot, it emits visible light. It can be seen that an object at 1000 K is beginning to radiate energy in the red portion of the visible light spectrum. If the object were to be heated further, this area would encroach to the left. Eventually, if the object were heated to a sufficiently high temperature, then it would emit energy right across the visible light spectrum in which case it would appear white. This tallies with what everyone knows: if you heat an object it will first begin to appear red (‘red hot’), and if the object is heated to a high enough temperature it will eventually appear white (‘white hot’).

Table 5.1 Emissivity coefficients for some common materials

Surface material Emissivity coefficient
Black body (matt) 1.00
Brick 0.90
Concrete 0.85
Glass 0.92
Plaster 0.98
Paint 0.96
Water 0.95
Wood (oak) 0.90
Plastics (average) 0.91
Aluminium (oxidised) 0.11
Aluminium (anodised) 0.77
Copper (polished) 0.04
Copper (oxidised) 0.87
Stainless steel (polished) 0.15
Stainless steel (weathered) 0.85
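
Emissivity scales the ideal black-body output: total radiated power follows the Stefan–Boltzmann law, P = εσAT⁴. A brief sketch using coefficients from Table 5.1 (the 300 K temperature and 1 m² area are illustrative):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiated_power_w(emissivity, area_m2, temp_k):
    """Total power radiated by a grey body of the given emissivity."""
    return emissivity * SIGMA * area_m2 * temp_k ** 4

# 1 m^2 of oxidised aluminium versus brick, both at 300 K
print(round(radiated_power_w(0.11, 1.0, 300.0), 1))  # ~50.5 W
print(round(radiated_power_w(0.90, 1.0, 300.0), 1))  # ~413.3 W
```

The eightfold difference is why bare metal surfaces appear ‘cold’ to an IR imager even at the same physical temperature as their surroundings.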

The effect of Wien’s law is presented in Figure 5.10 which shows power density versus wavelength for different body temperatures. As can be seen, the wavelength of the power density peak decreases as the temperature of an object increases. Summarising the data:

Temperature (K) Wavelength of power density peak (nanometres)
6000 483
5000 580
4000 724
3000 966

The reverse side of this law is that the peak value falls off very rapidly as the temperature of the object decreases. The peak power density wavelength for an object at 373 K (or 100°C) is 7774 nanometres (7.77 μm); for an object at 290 K (~17°C or room temperature) it is 10 000 nanometres (10 μm).

Wien’s law provides a formula for the peak wavelength as follows:

λpeak = 2898/T

where λpeak is the peak wavelength (μm) and T is the temperature of the object (K).
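Taking the Wien displacement constant as 2898 μm·K, a few lines of Python reproduce the tabulated peak wavelengths:

```python
WIEN_CONSTANT_UM_K = 2898.0  # Wien displacement constant, um.K

def peak_wavelength_um(temp_k):
    """Wavelength of peak black-body emission (Wien's displacement law)."""
    return WIEN_CONSTANT_UM_K / temp_k

for t in (6000, 5000, 4000, 3000):
    print(t, round(peak_wavelength_um(t) * 1000))  # peaks in nanometres
print(round(peak_wavelength_um(373), 2))  # ~7.77 um for an object at 100 degC
print(round(peak_wavelength_um(290), 2))  # ~9.99 um at room temperature
```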

This suggests that, theoretically, to obtain a maximum response from the IR sensor in the region where people and vehicles radiate, the wavelengths calculated above should be chosen, i.e. ~8000–10 000 nanometres (8–10 μm) at the lower end of the LWIR band. All things being equal, this should be the case. However, other factors such as the availability and maturity of sensors operating in the band, and the effect of scattering due to haze or smoke, may also have an impact. Also, as the wavelength increases, the size of the optics must also increase (similarly to radar, since both IR and radar emissions are electromagnetic waves), and therefore angular resolution will reduce with increasing wavelength unless the optics are scaled proportionately. As ever, space and volume are at a premium in a military avionics installation, and in some cases the increase in volume to accommodate larger optics is unlikely to be acceptable. Medium-wave operation is generally preferred both in high-temperature humid (tropical) and in arid (desert) conditions owing to the 3–5 μm window. The US Army generally has a preference for LWIR operation, which copes better with haze and smoke (being at the radar end of the IR spectrum, this band has characteristics closer to those of radar). Some sophisticated systems provide dual-band operation – MWIR and LWIR – to enjoy the best of both worlds.
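The optics-size argument can be quantified with the diffraction limit, θ ≈ 1.22λ/D: for a given angular resolution, LWIR needs roughly twice the aperture diameter of MWIR. A sketch (the 0.1 mrad resolution requirement is an illustrative assumption):

```python
def aperture_for_resolution_m(wavelength_um, resolution_mrad):
    """Diffraction-limited aperture diameter for a required angular
    resolution, using theta ~ 1.22 * lambda / D."""
    wavelength_m = wavelength_um * 1e-6
    resolution_rad = resolution_mrad * 1e-3
    return 1.22 * wavelength_m / resolution_rad

# Aperture needed for 0.1 mrad angular resolution in each band
print(round(aperture_for_resolution_m(4.0, 0.1) * 1000, 1))   # MWIR: ~48.8 mm
print(round(aperture_for_resolution_m(10.0, 0.1) * 1000, 1))  # LWIR: ~122.0 mm
```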

Figure 5.10 Wien’s law.

5.4.1 IR Imaging Device

A generic IR imaging device is shown in Figure 5.11. The target is shown emitting radiation on the left side of the diagram, with the radiation spectral energy determined by a combination of absolute temperature and emissivity. The radiated power has to compete with a number of extraneous sources: background radiation, sky radiance, the sun, reflections from clouds and other unwanted signal sources that generate clutter against which the target has to be detected. As well as the clutter, the energy radiated by the target is subject to atmospheric attenuation which can be particularly acute at low level and at certain frequency bands.

The incoming energy is focused by an appropriate set of optics, and in most cases some scanning arrangement is necessary to scan the target on to the detector array. Some arrays called ‘staring arrays’ do not need the optical scanning, and these will be described later in the chapter. Once the detector has formulated the IR image, the result is read out in a similar way to the CCD sensor and the resulting data are amplified, processed and displayed. Most sensors need cooling in order to operate – usually to around 77 K – and special cooling systems are needed to perform the cooling task. A range of sensor materials can be used, all of which have their own particular advantages and bands of operation.

Figure 5.11 Generic IR imaging system.

Three typical detector configurations are shown in Figure 5.12, and each type is used for different types of IR operation. These are:

  1. Linear array. The linear array is used to form an image strip, and a scene image may be generated by successively adding the strips together. The 1024 × 8 array illustrated is one used by BAE SYSTEMS.
  2. Two-dimensional array. The two-dimensional array forms an XY matrix that lends itself readily to generating a rectangular image in a similar way to the CCD array described earlier. The 640 × 480 and 320 × 240 two-dimensional arrays portrayed are typical of state-of-the-art third-generation systems in service today.
  3. Cruciform array. This array is used to accomplish an IR tracking function and will be described later in the chapter.

Figure 5.12 Typical IR detector configurations.

The scanning configuration adopted augments the detector configuration used to provide an IR image that may be examined for strategic reconnaissance, intelligence gathering, battle damage assessment or for a platform operator to prosecute an engagement. Three basic IR scanning techniques for imaging will be described. These are the rotating scanner, planar array and focal plane array (FPA).

5.4.2 Rotating Scanner

One of the first techniques to be employed was the rotating optics method which was also known as linescan. In Figure 5.13 the platform is flying from left to right as the scanner rotates about an axis parallel with the aircraft heading. In this case the scanner is rotating in a clockwise direction looking forwards, and successive strips of ground are imaged as the imaging mirror sweeps from right to left. Therefore, as the platform flies forwards, the series of image strips may be recorded and an area image may be constructed of the ground that has been scanned.

This technique was one of the first to be used for area IR imaging and evolved using very small arrays, as the technology was not then available to produce large linear arrays. The image suffers from distortion towards the horizontal limits of the scan as the sightline moves appreciably away from the vertical. Furthermore, for clear images, the relationship between aircraft velocity, V, and height above the terrain, h, or V/h, has to be closely controlled or successive imaging strips will not be contiguous or correctly focused. Another disadvantage of this method when it was first introduced into service was that there were no high-density digital storage devices available. The images were therefore stored on film which had to be developed after the sortie before analysis could begin. Early IR linescan systems such as those carried by the UK F-4K Phantom carried the system in a large pod beneath the aircraft centre-line. This technique has been likened to a whisk broom, where the brush strokes are sequential right-to-left movements.

Figure 5.13 Rotating scanner (IR linescan).

The Royal Air Force Tornado GR4a and Jaguar GR3 reconnaissance variants use an embedded IR linescan VIGIL system produced by Thales/Vinten. This system has the following attributes:

Detector Single cadmium mercury telluride (CMT) (CdHgTe) detector operating from 8 to 14 μm
Scan rate 600 lines/s
Angular resolution <0.67 mrad
Pixels 8192/line or ∼4.9 Mpixels/s
Weight 23 lbs
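The V/h constraint can be made concrete: each scan line covers an along-track strip of roughly h × IFOV on the ground, so contiguous strips require V/h ≤ line rate × IFOV. A sketch using the VIGIL figures above (the altitude is an illustrative assumption):

```python
def max_v_over_h(line_rate_hz, ifov_mrad):
    """Maximum V/h (rad/s) for contiguous linescan imaging strips."""
    return line_rate_hz * ifov_mrad * 1e-3

vh_limit = max_v_over_h(600, 0.67)  # 600 lines/s at 0.67 mrad resolution
print(round(vh_limit, 3))           # ~0.402 rad/s
print(round(vh_limit * 500))        # at an assumed 500 m altitude: ~201 m/s
```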

5.4.3 Planar Image

The planar image technique is shown in Figure 5.14. By comparison with the rotating scanner system just described, this is called a ‘push broom’ system since it is analogous to a broom being pushed forwards. This system uses line detector arrays as outlined above. As the aircraft moves forwards, the optics allow the strip detector to image the area of interest as a series of strips that can then be formed into a continuous area image. This type of scanning arrangement lends itself to high-altitude imaging systems on platforms such as the Global Hawk. The main operational capabilities of the Global Hawk EO system are outlined below (NIIRS is the national image interpretability rating scale):

Detector Indium antimonide (InSb) detector operating from 3.5 to 7 μm (MWIR)
Field of view Wide area scan: 5.5 × 7.3 mrad
Performance Wide area scan: NIIRS 5
Spotlight: NIIRS 5.5

Figure 5.14 Planar image.

Figure 5.15 Focal plane array.

The performance of the IR imaging is almost as good as the CCD visual imaging system for the Global Hawk described earlier in the chapter, where the corresponding figures were NIIRS 6 and 6.5.

5.4.4 Focal Plane Array or ‘Staring Array’

The focal plane array (FPA), often referred to as a ‘staring array’, is portrayed in Figure 5.15. The FPA provides an image on to a focal plane that coincides with the sensing array, most usually a two-dimensional array whose dimensions scale easily to a standard rectangular display format: NTSC, PAL and, more recently, VGA and XVGA and above, greatly simplifying the optics. Although the figure depicts the focal plane array with a vertical axis, in tactical systems the array face is usually facing directly towards the target. In most cases the forward looking IR (FLIR) sensor is looking forwards, the term being relative as it is usually mounted upon a gimballed assembly that has extreme angular agility and slew rates in order to be able to track targets while the platform is manoeuvring. As will be seen later in the chapter, several EO sensor systems are commonly physically integrated into the co-boresighted sensor set to aid sensor fusion and allow target data to be readily handed off from one sensor type to another.

In a practical array, not all of the surface is given over to IR energy sensing; there is a certain overhead involved in interconnecting the array that prevents this from being the case. This overhead is represented by a term called the fill factor, which describes the useful portion of the array as a percentage. On modern state-of-the-art arrays, the fill factor is usually around 90%.

The array is effectively read in a sequence of frames in the same way as any other real-time imaging device. Therefore, the time between successive read-outs of the array image is the time available for the array to ‘capture’ the image, and this is referred to as the integration time and permits successive images of the target to be generated.

The key element in the performance of any IR imaging device lies in the performance of detectors and the read-out of the imaged data in a timely fashion. There are many sensor types and technology issues to be considered, and some of the detector technology issues are outlined briefly below.

Table 5.2 Overview of IR FPA detector technologies

Technology Wavelength (μm) Typical array (FPA) Cooling (K) Application
Lead silicide (PbSi) 1–5 – – Not generally used for military applications
Indium antimonide (InSb) 3–5 640 × 512; 640 × 480; 512 × 512; 320 × 240 78 Tactical UAV; ATFLIR AN/ASQ-228
Cadmium mercury telluride (CMT) (CdHgTe) 8–12 640 × 480 77 Apache M-TADS; Litening II pod
Lead tin telluride (LTT) (PbSnTe) 8–12 – 77 –
Quantum well infrared photodetector (QWIP) (GaAs; AlGaAs) 9–10 320 × 240 70–73 Experimental for aerospace

Al, aluminium; As, arsenide; Ga, gallium; Hg, mercury; In, indium; Pb, lead; Sb, antimony; Si, silicon; Sn, tin; Te, tellurium.

5.4.5 IR Detector Technology

The technology of the IR imaging detectors is rapidly moving in terms of materials and array size. Table 5.2 gives a brief overview of some of the key technologies for the FPA implementation in aerospace applications. Many of the materials developed for medical and industrial use may not be suitable for aerospace applications. This is a rapidly evolving area of technology and the details of new technologies are not always available in the public domain.

For reasons indicated earlier, most applications today are based in the MWIR and LWIR bands, although the band chosen will be dependent upon detailed specification requirements. There is a desire to move towards dual-band operation, where the optimum wavelength may be chosen for the imaging task in hand. There is also an aspiration to introduce multispectral imaging technology to aerospace applications because of the increase in operational capability that this would bring. At the moment, contemporary technology may find it difficult to discriminate targets hidden beneath camouflage nets or foliage; multispectral sensing would provide battlefield commanders with sensors able to overcome this deficiency. The typical desired capabilities of a modern sensor are summarised below:

Pixel pitch ~20–40 μm
Frame rate 50 Hz (PAL); 60 Hz (NTSC), with a desire to go to 100 Hz and above
Maximum integration time 99% of frame time
Data rate 10 MHz upwards
Array size 640 × 480 (VGA resolution), heading towards 1000 × 1000 (1 Mpixel) or above in next generations: F-35 and space applications
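The figures above fix the raw read-out rate and the available integration time. A quick check for the quoted VGA array at 50 Hz:

```python
def pixel_rate_hz(width, height, frame_rate_hz):
    """Raw pixel read-out rate for a staring array."""
    return width * height * frame_rate_hz

def max_integration_s(frame_rate_hz, fraction=0.99):
    """Maximum integration time as a fraction of the frame period."""
    return fraction / frame_rate_hz

print(pixel_rate_hz(640, 480, 50))             # 15360000 pixels/s
print(round(max_integration_s(50) * 1000, 1))  # 19.8 ms of each 20 ms frame
```

The ~15 Mpixels/s result is consistent with the quoted data rates of 10 MHz upwards.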

Figure 5.16 Stirling cycle cooler.

It can be seen that all the sensor detector types require cooling, and there are two ways of doing this. Originally, cooling was achieved using a Dewar flask together with a liquid nitrogen cryogenic coolant. More recently, miniature refrigerator devices have been developed that work on the Stirling cycle principle. The Stirling machine and the associated cycle are shown in Figure 5.16. The Stirling machine comprises a compressor cylinder with two moving pistons; these pistons are driven by linear electric motors, and the cylinder has finned heat exchangers to assist in dumping heat overboard. In the second cylinder, the heat load is mounted on a ‘cold finger’ at the top of the cylinder. This cooling cylinder contains a regenerator device which is free to move up and down but which is normally biased to the top of the cylinder by means of a spring. The regenerator has the ability to store heat temporarily, accepting heat from the cycle and donating it back to the cycle during different phases. The Stirling machine passes through four discrete changes in pressure/volume (P/V) during one cycle, and the cycle has the overall effect of extracting heat from the cold finger abutting the sensor and rejecting it from the machine by means of the heat exchangers on the compressor and regenerative cylinders. The linear motors are powered by aircraft or pod power supplies and draw relatively small amounts of power.

The principle of operation of the Stirling cycle is described below. At the start of the cycle the pistons P1 and P2 are at the top and bottom of the compression cylinder respectively and the regenerator is at the top of the cooling cylinder:

  1. Phase 1. The linear motors compress the gas, and the heat so generated is dissipated in the heat exchangers. The black arrow on the subdiagram at top right shows that heat is rejected from the cooler.
  2. Phase 2. The pistons remain in their compressed position so that the volume of the shared gas is constant. The gas above the regenerator expands while moving the regenerator down, compressing the spring and releasing heat into the regenerator (white arrow).
  3. Phase 3. The pistons are returned to their original positions at the top and bottom of the compression cylinder by the linear motors, increasing the volume of the shared gas. Heat is rejected from the heat load/seeker assembly into the cycle (black arrow).
  4. Phase 4. The regenerator releases heat into the shared gas (white arrow) at constant volume and therefore pressure increases. The spring biases the regenerator to the top of the cooling cylinder.

The cycle is repeated continuously and a heat load is withdrawn from the seeker assembly, causing it to cool down rapidly. The characteristics of a typical Stirling cooler are:

  • Input power ∽30–50 W;
  • Heat load (seeker) ∽0.5–1.5 W;
  • Seeker operating temperature ∽77 K;
  • Cool down time 5–10 min;
  • No seals, no lubricants, no maintenance, sealed for life;
  • MTBF ∽5000–10 000 h.
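To put these cooler figures in context, the sketch below (with illustrative values taken from the ranges quoted above and an assumed 300 K heat-rejection temperature) compares the quoted electrical input power with the ideal Carnot limit for the same heat lift:

```python
# Hedged sketch: quoted Stirling-cooler input power vs the ideal (Carnot)
# limit. Values are illustrative figures from the text, not data for any
# specific cooler; the 300 K rejection temperature is an assumption.

T_cold = 77.0     # K, seeker operating temperature
T_hot = 300.0     # K, assumed heat-rejection (ambient) temperature
Q_cold = 1.0      # W, heat lifted from the seeker (text: 0.5-1.5 W)
P_actual = 40.0   # W, electrical input (text: 30-50 W)

cop_carnot = T_cold / (T_hot - T_cold)   # ideal coefficient of performance
P_ideal = Q_cold / cop_carnot            # minimum input power for this lift
efficiency = P_ideal / P_actual          # fraction of Carnot achieved

print(f"Carnot COP at 77 K: {cop_carnot:.3f}")
print(f"Ideal input power:  {P_ideal:.2f} W")
print(f"Fraction of Carnot: {efficiency:.1%}")
```

The result, a few per cent of Carnot, is typical of miniature cryocoolers and indicates why tens of watts of input power are needed to lift only ∼1 W at 77 K.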

IR detector packaging for second-generation arrays is now possible in a number of forms that are illustrated in Figure 5.17. These are direct hybrid, indirect hybrid, monolithic and Z technology (in all cases except the monolithic method, the electrical connection for the detector chip is made by means of indium ‘bumps’ which provide a soft metal interconnect for each pixel):

  1. Direct hybrid. In this configuration the chip is connected to an array of preamplifiers and row and column multiplexers to facilitate the read-out.
  2. Indirect hybrid. This is similar to the direct method except that the detector and read-out electronics are interconnected by fan-out which connects the two chips electrically by means of a metal bus on a fan-out substrate. This has advantages in testing the detector array and allows the size of the preamplifiers to be increased to improve dynamic performance.
  3. Monolithic. In this method both detector and signal processing are combined in the same chip. The detector chip and any further signal processing do not have to be packaged on to the same substrate but can be segregated in terms of substrate and operating temperature, thereby possibly reducing the cooling load by cooling the detector alone.
  4. Z technology. This provides additional signal processing space on a pixel-by-pixel basis in the Z direction (as opposed to the xy array direction). This is used when the detected output of every pixel is to be individually processed, as is the case in multispectral and hyperspectral applications (Chan et al., 2002; Bannen and Milner, 2004; Carrano et al., 2004).

Figure 5.17 IR detector packaging schemes.

5.5 IR Tracking

The use of IR seeker heads to track and engage targets has been a feature of military systems for many years. The Raytheon AIM-9 Sidewinder missile was one of the first of many such systems to be deployed. It is still in service today with many air forces around the world, and the latest version, the AIM-9X, is about to enter service with the US Armed Forces. Petrie and Buruksolik (2001) give an interesting perspective on the history of the Sidewinder. The introduction of simple man-launched surface-to-air missiles (SAMs) such as Stinger has been another application of IR technology. The threat of such weapons is still very much with the aviation community today when they are used by renegade or terrorist organisations to attack unarmed military or civil transport aircraft. IR search and track systems (IRSTS) are used as a primary sensor system on many fourth-generation fighter aircraft.

5.5.1 IR Seeker Heads

To illustrate some of the capabilities and limitations of IR tracking devices, the use of IR seekers in an air-to-air missile context will first be examined.

Reticle tracking is achieved by rotating a small disc or reticle with clear and opaque segments in front of the seeker detector cell. In early IR tracking heads the detector was a simple single-element arrangement; later versions used more complex detector arrays, and the very latest missiles use an FPA with a much improved FOV.

A simple example of a tracking reticle is shown in Figure 5.18. Either the disc rotates or the IR image is rotated by means of a rotating mirror. Whichever method is used, the objective is to scan the IR image with relative rotary movement of the reticle and modulate the IR return. Figure 5.18 shows a simple reticle that is translucent and allows 50 % transmission on one half while alternately chopping the image on the other between clear and opaque sectors. This modulation technique yields a series of pulses of IR energy that is detected by the detector cell. By carefully choosing the characteristics of the reticle and therefore the resulting modulation, an error signal may be derived which allows the seeker head to track the target by suitable servo drive systems. The reticle scan rate of early seeker heads was ∽50–70 Hz, not dissimilar from the radar conscan tracking described in Chapter 3.

Figure 5.18 Simple reticle tracker.

The choice of the reticle type determines the kind of modulation that is employed to track the target – most seeker heads use either amplitude modulation (AM) or frequency modulation (FM). A commonly used technique employs a wagon wheel stationary reticle with nutating optics scanning in a rosette pattern, rather like the petals of a flower. The type of reticle, type of modulation and frequency of rotation/nutation for a given application are usually not advertised, as to do so would reveal key characteristics of the head and its performance. Unlike radar tracking techniques such as conscan, where the characteristics of the radiated power reveal the angular scan rate, IR scanning is passive and therefore more difficult to counter by deception means.
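The bearing-recovery idea can be illustrated with a toy AM reticle model. The sketch below is purely illustrative (the reticle geometry, spin rate and demodulation scheme are simplified assumptions, not those of any real seeker): a half-disc of chopping sectors modulates the target return, and the phase of the chopping envelope within each revolution yields the target bearing.

```python
import numpy as np

f_spin = 60.0                     # Hz, reticle spin rate (text: ~50-70 Hz)
n_sectors = 10                    # assumed chopping sectors on the half-disc
theta = np.deg2rad(60.0)          # true target bearing to be recovered

t = np.linspace(0, 1/f_spin, 2000, endpoint=False)
phi = 2*np.pi*f_spin*t            # reticle rotation angle over one revolution
in_chopped_half = np.cos(phi - theta) > 0
chop = (np.sin(n_sectors*phi) > 0).astype(float)    # clear/opaque sectors
detector = np.where(in_chopped_half, chop, 0.5)     # 50% transmission half

# Demodulate: the chopping-depth envelope is a square wave at the spin
# frequency whose phase is the target bearing
rect = np.abs(detector - 0.5)
i = np.mean(rect*np.cos(phi))
q = np.mean(rect*np.sin(phi))
bearing = np.arctan2(q, i)
print(f"recovered bearing: {np.rad2deg(bearing):.1f} deg")
```

In a real head the depth of modulation also encodes the radial error; here only the bearing is recovered, which is the essence of the servo error signal.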

Earlier, a cruciform detector was shown in Figure 5.12. The operation of a cruciform or cross-configured seeker is shown in Figure 5.19. This system uses a stationary element with nutating optics which scans the image in a circular fashion over the arms of the cruciform. If the target is located on the boresight of the seeker, as shown on the left, the time between pulses received from elements 4 and 1 will equal that between pulses received from elements 3 and 4. If the target drifts off boresight in a 2 o’clock direction as shown on the right, the pulses will be an unequal distance apart. Successful tracking is achieved by using pulse period measuring techniques with the appropriate servomechanisms to maintain the target on-boresight.
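A toy numerical model of this pulse-interval principle (the geometry and scan radius are assumed purely for illustration): the nutating scan traces a circle over arms lying on the x and y axes, and the intervals between arm crossings are equal only when the target is on boresight.

```python
import numpy as np

def pulse_intervals(dx, dy, R=1.0, n=100000):
    """Intervals (in scan angle) between arm crossings for a circular scan
    of radius R whose centre is offset (dx, dy) from boresight."""
    # sample points chosen so the on-boresight case never lands exactly on an arm
    wt = (np.arange(n) + 0.5) * 2*np.pi / n
    x, y = dx + R*np.cos(wt), dy + R*np.sin(wt)

    def crossing_angles(v):
        s = np.sign(v)
        return wt[s != np.roll(s, -1)]   # sample just before each arm crossing

    cr = np.sort(np.concatenate([crossing_angles(x), crossing_angles(y)]))
    return np.diff(cr)

print("on boresight :", np.round(pulse_intervals(0.0, 0.0), 2))   # equal gaps
print("off boresight:", np.round(pulse_intervals(0.3, 0.2), 2))   # unequal gaps
```

The tracking servo would drive the seeker so as to equalise the intervals, pulling the target back on to boresight.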

Early seeker heads possessed a limited capability, only able to engage the target from the rear aspect where the IR tracker had clear sight of the engine jet-pipe and exhaust plume. Part of the reason for the limited performance was that early missile detectors were uncooled lead sulphide (PbS) elements, so the sensitivity was very low. The instantaneous field of view was ∼4° with a seeker head FOV of ∽25°. Sightline tracking rates were also low (∽11 deg/s), so the engagement of manoeuvring targets was not possible.

Figure 5.19 Tracking using a cruciform detector array.

With developments introduced in the 1960s, cooled arrays were introduced using a ‘one-shot’ liquid nitrogen bottle located in the missile launcher. The coolant bottle was renewed before each sortie and contained enough coolant to allow the missile head to operate for ∽2.5 h. Modern systems are capable of full-aspect engagements; that is, they are sufficiently sensitive to acquire and track the target aircraft from any position.

In close encounter engagements the sightline spin rate of the target may be high as it crosses rapidly in front of the aircraft/seeker head. This can occur so quickly that the seeker head is unable to acquire the target. The solution to this problem is to use one of the aircraft sensors to track the prospective target and slave the missile to that sightline. The Royal Air Force used this technique to slave the Sidewinder missile sightline to the AWG 11/12 radar sightline in the F-4M Phantom in a mode called Sidewinder expanded acquisition mode (SEAM). The technique is still used today, except that the missile seeker sightline is slaved in a more sophisticated manner to the system or pilot cues. Most fighter aircraft entering service in the last 10 years have a means of slaving the seeker boresight to steering cues given by a helmet-mounted sight (HMS) projecting directly on to the pilot’s visor/sight. See the discussion on this topic in Chapter 11.

5.5.2 Image Tracking

The high fidelity of IR imaging systems as already described opens up the possibility of image tracking and also image recognition, although the algorithms involved with the latter function can be quite complex. The resolution available with imaging systems now approaches that available with visible-range sensors, and therefore specific objects may be easily tracked once identified and designated by the operator. As will be seen, this is an important feature in engaging a target, as sensor fusion using a combination of sensors, trading off relative strengths and weaknesses, is in many cases an important feature in the successful prosecution of a target.

TV and IR imaging provides good resolution in an angular sense but not in range. Radar and lasers offer good range resolution but poor angular resolution. Using the right combination of sensors provides the best of both.

Typical tracking algorithms include the following:

  1. Centroid tracking, where the sensor tracks the centre of the target as it perceives it. This is particularly useful for small targets.
  2. Correlation techniques that use pattern matching techniques. This is useful to engage medium to large targets but can be difficult if the target profile alters drastically with aspect, for example an aircraft.
  3. Boundary location or edge tracking can be used where the target can be split into segments, the arrangement of the segments providing recognition features.
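As an indication of how simple the first technique can be, here is a minimal centroid-tracker sketch (the frame is synthetic and the threshold is an assumed value; a real tracker would also gate the search window from frame to frame):

```python
import numpy as np

def centroid(frame, threshold):
    """Intensity-weighted centre (x, y) of pixels above threshold,
    or None if no target is present."""
    weighted = np.where(frame > threshold, frame, 0.0)
    total = weighted.sum()
    if total == 0:
        return None                       # no target above threshold
    ys, xs = np.indices(frame.shape)      # row (y) and column (x) indices
    return (xs*weighted).sum()/total, (ys*weighted).sum()/total

frame = np.zeros((32, 32))
frame[10:13, 20:24] = 5.0                 # hot 3x4 blob, centre (21.5, 11.0)
print(centroid(frame, 1.0))
```

Because the centroid is intensity weighted, it degrades gracefully as the target shrinks towards a single pixel, which is why the method suits small targets.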

The use of a human operator is most useful in ensuring that the correct target is identified and tracked. Target recognition is also vital: under most rules of engagement it is essential to obtain positive identification before firing in order that no ‘blue-on-blue’ or friendly fire incidents occur. Again, correlation of imagery with other sources/sensors can be of great assistance. However, in high-density dynamic target situations the human operator will soon reach saturation and automatic target tracking will be essential.

5.5.3 IR Search and Track Systems

IR search and track systems (IRSTS) have been used for air-to-air engagements for some time. The US Navy F-14 Tomcat has such a system, and the Soviet-designed MiG-29, Su-27 and Su-35 all used first-generation systems. IRSTS performs a function analogous to the airborne radar TWS mode, where a large volume of sky is searched and targets encountered within the search volume are characterised and tracked. The major difference is that, whereas the radar TWS mode is active, IRSTS is purely passive.

The key requirements of an IRSTS are:

  • Large search volume;
  • Autonomous and designated tracking of distant targets;
  • Highly accurate multiple-target tracking;
  • Passive range estimation or kinematic ranging where sightline spin rates are high;
  • Full integration with other on-board systems;
  • FLIR imaging;
  • High-definition TV imaging.

A state-of-the-art implementation of IRSTS is the passive infrared airborne tracking equipment (PIRATE) developed by the EUROFIRST consortium which will be fitted to the Eurofighter Typhoon. Figure 5.20 shows the PIRATE unit and the installation on Typhoon on the left side of the fuselage. The equipment uses dual-band sensing operating in the 3–5 and 8–11 μm bands. The MWIR sensor offers greater sensitivity against hot targets such as jet engine efflux, while the LWIR sensor is suited to the lower temperatures associated with frontal engagements. The unit uses linear 760 × 10 arrays with scan motors driving optics such that large volumes of sky may be rapidly scanned. The field of regard (FOR) is stated to be almost hemispherical in coverage. The detection range is believed to be ∽40 nm.

Figure 5.20 PIRATE seeker Courtesy Thales Optronics and installation on Eurofighter Typhoon (Eurofighter GmbH).

The operational modes of PIRATE are:

  1. Air-to-air:
    • Multiple-target tracking (MTT) over a hemispherical FOR – the ability to track in excess of 200 individual targets, with a tracking accuracy better than 0.25 mrad;
    • Single-target track (STT) mode for individual targets for missile cueing and launch;
    • Single-target track and identification (STTI) for target identification prior to launch, providing a high-resolution image and a back-up to identification friend or foe (IFF).
  2. Air-to-ground:
    • Ability to cue to ground targets from C3 data;
    • Landing aid in poor weather;
    • Navigation aid in FLIR mode, allowing low-level penetration.

The sensor data may be displayed at 50 Hz rates on the head-down display (HDD), head-up display (HUD) or helmet-mounted display (HMD), as appropriate.
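As a rough feel for what the quoted figures mean, the ∽0.25 mrad tracking accuracy can be converted to a cross-range uncertainty at the believed ∽40 nm detection range:

```python
# Quick small-angle check using the figures quoted in the text
# (illustrative only; actual PIRATE performance figures are not published).

NM_TO_M = 1852.0
accuracy_rad = 0.25e-3            # quoted tracking accuracy, rad
rng_m = 40 * NM_TO_M              # believed detection range, m

cross_range_m = accuracy_rad * rng_m   # small-angle approximation
print(f"cross-range uncertainty: {cross_range_m:.1f} m")   # ~18.5 m
```

A cross-range uncertainty of under 20 m at maximum detection range illustrates why such accuracies are sufficient for missile cueing.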

5.6 Lasers

Lasers – the term is an acronym for Light Amplification by Stimulated Emission of Radiation – have been used in military systems for almost four decades. The US Air Force used laser-guided bombs (LGBs) during the later stages of the Vietnam War, and European aircraft such as Jaguar and Tornado adopted laser systems during the late 1960s and early 1970s. These systems are now commonly used for range finding, target designation and missile/bomb guidance. Laser systems may be fitted internally within the aircraft, as in the Tornado GR4 and F-18; they may be housed in pods for external carriage on weapons/stores stations on fighter aircraft; or they may be housed in highly mobile swivelling turrets for helicopter and fixed-wing use.

Figure 5.21 Principles of operation of a laser.

The major advantage of using lasers is the fact that they can provide range information that passive systems such as visible light and IR radiation cannot. Lasers are therefore particularly useful when used in conjunction with these other technologies to provide sensor fusion, blending and merging the advantages and disadvantages of the different capabilities. Some of these integrated systems are described later in the chapter.

5.6.1 Principles of Operation

The principles of operation of a laser depend upon exciting the energy levels of electrons within specifically ‘doped’ materials. Electrons within the material are stimulated to higher energy levels by an external source; when the electrons revert to a lower energy level, energy of specific wavelength is emitted, depending upon the material and the energy supplied.

A laser works on this principle but has other unique properties. Specifically, the energy that is emitted is coherent; i.e. the radiated energy is all in phase, rather than randomly phased as in ordinary light emission. Figure 5.21 shows a diagrammatic representation of a laser device.

The laser medium may be liquid or solid; most of the lasers used in military systems are based upon glass-like compounds. At one end of the medium is a reflecting mirror, at the other a partly reflecting mirror. The laser medium is stimulated by an input of energy from a flashlamp or other source of energy, which raises the energy levels of the electrons.

Figure 5.22 shows the various stages that occur for a laser to ‘strike’:

  1. Stage 1. This is the initial quiescent condition with the electrons all at a natural low-energy state.
  2. Stage 2. The flash tube is illuminated, stimulating the electrons and exciting them to a higher energy state – this phenomenon is known as population inversion and is an unstable state for the electrons.
  3. Stage 3. The electrons remain in this state for a short time before decaying down to their original, lower and stable energy state. This decay occurs in two ways:
    • Spontaneous decay in which electrons fall down to the lower state while randomly emitting photons;
    • Stimulated decay in which photons released from spontaneously decaying electrons strike other electrons, causing them to revert to the decayed state – in these cases photons are emitted in the direction of the incident photon and with the same phase and wavelength.
  4. Stage 4. Where the direction of these photons is parallel to the optical axis of the laser, the photons will bounce back to and fro between the totally and partially reflecting mirrors. This causes an avalanche effect in which the photons are amplified.
  5. Stage 5. Eventually, sufficient energy is built up within the tube for the laser to strike, causing a high-energy burst of coherent light to exit the tube.

Figure 5.22 Stages leading to a laser strike.

The wavelength of the emitted light is dependent upon the nature of the material being used in the laser medium since the energy released is specific to the energy levels within the atoms of that material. Also, since the amount of energy released is in the same discrete bundles, the emitted light is very stable in wavelength and frequency as well as being coherent (Figure 5.23).

Typical pulsed solid-state lasers used in aerospace use the following compounds:

  1. Ruby is chromium-doped sapphire, while sapphire itself is a compound of aluminium and oxygen atoms. The formula for ruby is Al2O3Cr+++, where Cr+++ indicates the triply-ionised state of the chromium atom. The ruby laser radiates primarily at 694.3 nm. The characteristics of this material make it suitable only for pulsed operation.
  2. Neodymium:YAG lasers, where YAG stands for yttrium aluminium garnet (Y3Al5O12). This is a popular choice for airborne systems as it may operate in both pulsed and CW modes. The YAG laser radiates at 1064 nm.
  3. Neodymium:glass lasers may sometimes be used, but, glass being a poor conductor of heat, they are not suitable for continuous operation. Nd:glass operates on a wavelength of ∼1060 nm, very close to that of YAG.

Figure 5.23 Energy levels of a YAG laser.

Figure 5.23 shows the energy transition levels for a YAG laser. During the excitation state, electrons are raised to two energy bands. These are the weak pump band (730–760 nm) and the strong pump band (790–820 nm). Electrons in both bands spontaneously decay with a radiation-less transition to the upper lasing level. A combination of both spontaneous and stimulated emissions occurs as the electrons decay to the lower lasing level. During this phase the device radiates energy at a wavelength of 1064 nm or 1.064 μm. Thereafter all electrons spontaneously decay to the ground state.
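The photon energies involved in these transitions follow from E = hc/λ. The sketch below uses assumed mid-band pump wavelengths (745 and 805 nm) purely for illustration; the gap between pump and lasing photon energies is deposited in the crystal as heat.

```python
# Photon energies for the Nd:YAG pump bands and lasing line quoted above.
# The pump wavelengths are assumed mid-band values, not exact lines.

h = 6.626e-34    # J s, Planck constant
c = 2.998e8      # m/s, speed of light
eV = 1.602e-19   # J per electronvolt

bands = {"weak pump (mid-band, 745 nm)": 745e-9,
         "strong pump (mid-band, 805 nm)": 805e-9,
         "lasing line (1064 nm)": 1064e-9}

energies = {name: h*c/lam/eV for name, lam in bands.items()}
for name, E in energies.items():
    print(f"{name:30s} {E:.2f} eV")
```

Roughly a quarter to a third of each pump photon's energy never appears in the output beam, before any of the lamp-matching losses discussed below are considered.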

The stimulation source for lasers is usually a xenon or krypton flash tube. Xenon lamps are the best option for ruby lasers; krypton lamps are a better spectral match for Nd:YAG and Nd:glass lasers but are more expensive and so are less often used. The problem with using a flash lamp as the excitation source is that it is very inefficient: lamp-pumped lasers are typically only ∼2–3% efficient. The rest of the energy can only be dissipated as heat, which causes real problems for the aerospace designer. The reason for this can be seen from Figure 5.24.

The reason for these very low efficiencies is that the flash lamp spectrum is wide compared with the narrow band in which the desired spectrum lies. Therefore, the lamp energy is poorly matched to the band of interest. Modifications can be carried out to shift the lamp spectrum more into the red region, but this still presents problems.

Figure 5.24 Flash lamp spectral characteristics.

The solution to this problem is to use laser diodes rather than flash lamps to excite the laser medium. Laser diodes lend themselves to be more easily tuned to the frequency of interest. Figure 5.25 depicts a configuration in which laser diodes are used instead of a flash lamp, and this results in higher efficiencies. The higher efficiencies result in a lower unwanted heat load that a design has to dissipate, and therefore reliability may improve at the same time as performance. Diode-pumped lasers are now used, and this has allowed greater usable power output, permitting a designating aircraft to fly much higher while illuminating the target, thereby allowing greater stand-off ranges.

Figure 5.25 Diode-pumped laser.

Figure 5.26 Properties of laser emissions.

As well as having a very stable, discrete wavelength and coherent transmission, laser emissions possess another important property: low dispersion. Figure 5.26 shows a comparison between a light-emitting source and a laser emission. The conventional lamp emits light in all directions, rather like ripples in a pool. Even after passing through an aperture, the light diverges into a relatively wide wavefront. The laser source has a much narrower beam after passing through the aperture and therefore has low divergence. As a result the laser beam still has a relatively high beam intensity far from the emitter. Therefore, the laser is able to transmit coherent energy, at a fixed stable frequency and with much lower beam divergence than conventional high-intensity light sources.

5.6.2 Laser Sensor Applications

The beamwidth of a typical laser is ∼0.25 mrad, and this means it is very useful for illuminating targets of interest that laser-tuned seeker heads can follow. For example, a laser designator illuminating a target at 10 000 ft will produce a spot of 2.5 ft in diameter. The first deployment of laser-guided bombs (LGBs) used this technique. As the laser designator illuminates the target, energy is scattered in all directions in a process called diffuse reflection. A proportion of this energy will be reflected in the direction of an observer who may wish to identify or engage the target.
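The spot-size figure quoted above follows directly from the small-angle approximation (spot diameter ≈ divergence × range):

```python
# Spot size for the figures quoted in the text: a 0.25 mrad beam
# illuminating a target at 10,000 ft.

divergence_rad = 0.25e-3
range_ft = 10_000

spot_ft = divergence_rad * range_ft      # small-angle approximation
print(f"spot diameter: {spot_ft:.1f} ft")   # 2.5 ft, matching the text
```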

The laser can operate as a range finder in pulsed mode. Pulsed operation can deliver high power densities at the target; for example, a laser delivering 100 mJ in a 20 ns pulse has a peak instantaneous power of 5 MW. In this sense laser operation is analogous to radar pulsed modes of operation, and mean power, duty cycle and peak power are just as important as they are in radar design. Even allowing for atmospheric attenuation, a laser can deliver reasonable power densities at a target, albeit for very short periods. The narrow pulse width allows accurate range measurements to be made; the 20 ns pulse mentioned above allows range resolution to within ∼10 ft.
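The worked numbers above can be reproduced directly:

```python
# Peak power of a 100 mJ / 20 ns pulse, and the range resolution the
# pulse width supports (two-way travel at the speed of light).

energy_j = 100e-3
pulse_s = 20e-9
c = 3.0e8                                  # m/s, speed of light

peak_w = energy_j / pulse_s                # 5e6 W = 5 MW
res_m = c * pulse_s / 2                    # 3 m, ~10 ft
print(f"peak power: {peak_w/1e6:.0f} MW, range resolution: {res_m:.0f} m")
```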

Figure 5.27 Laser guidance of an LGB.

Therefore, the laser offers a number of options to enhance the aircraft avionics system weapon-aiming performance in the air-to-ground mode:

  • Laser designation in CW mode to guide a missile;
  • Laser reception of a target position marked by a third party;
  • Laser ranging to within a few feet.

Examples of these engagements are shown in Figures 5.27 and 5.28. Figure 5.27 illustrates a system where the aircraft has a self-designation capability. The aircraft can designate the target and launch the LGB to engage the target. The designation/launch aircraft illuminates the target until the LGB destroys it. This type of engagement is used when the attacking force has air supremacy, the launch aircraft is free to fly without fear from counterattack by surface-to-air missiles (SAMs) and the target is one that is easy to identify – such as a bridge. Third-generation laser systems are capable of engaging from a height of 40 000–50 000 ft and perhaps 30 nm from the target.

Figure 5.28 Third-party laser designation by air or ground means.

In other situations, third-party designation may be easier and more effective. If the target is one that is difficult to detect and identify from the air, it may be preferable to use some ground forces such as the Special Forces to illuminate the target of interest. The laser designator signal structure allows for codes to be set by both ground or air designator and launch aircraft so the LGB is used against the correct target. In other cases the designator aircraft may possess a higher-quality avionics system than the launch aircraft and may act as a force multiplier serving a number of launch aircraft. This technique was used during Desert Storm when F-111 aircraft designated targets for F-16s and the RAF Buccaneers designated targets for the Tornado GR1s.

The LGBs are not always special bombs but in some cases free-fall bombs fitted with a laser guidance kit. This adds a laser seeker head and some guidance equipment to the bomb, allowing it to be guided to the target within specified limits, known as a ‘footprint’. Therefore, provided the LGB is launched within a specified speed and altitude launch envelope with respect to the target, it should be able to hit the target if all the systems work correctly. The operation of an LGB guidance system is illustrated in Figure 5.29.

The reflected laser energy passes through optics that are arranged to produce a defocused spot on a detector plane. Detector elements sense the energy from the spot and feed four channels of amplifiers, one associated with each detector. In an arrangement very similar to the radar monopulse tracking described in Chapter 3, various sum and difference channels are formed using the outputs of amplifiers A to D. These are multiplexed such that elevation and azimuth error signals are produced and then fed to the guidance system which nulls the error.
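A minimal sketch of the sum and difference arithmetic described above, assuming a quadrant layout with A and B as the upper detectors and A and C on the left (the actual layout in Figure 5.29 may differ):

```python
# Monopulse-style error signals from a four-quadrant LGB detector.
# Assumed quadrant layout (an illustration, not the figure's layout):
#   A | B
#   --+--
#   C | D

def guidance_errors(a, b, c, d):
    """Normalised azimuth/elevation errors from quadrant amplitudes A-D."""
    s = a + b + c + d                     # sum channel (normalisation)
    az = ((b + d) - (a + c)) / s          # +ve = spot displaced right
    el = ((a + b) - (c + d)) / s          # +ve = spot displaced up
    return az, el

# Defocused spot mostly on the upper-right quadrant (B):
print(guidance_errors(a=1.0, b=3.0, c=1.0, d=1.0))   # az > 0, el > 0
```

The guidance loop steers the bomb so as to null both errors, centring the defocused spot on the detector plane.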

Figure 5.29 LGB Guidance.

Figure 5.30 F-18 internally fitted laser units.

The deployment of LGBs was graphically illustrated by media coverage of Desert Storm, although there was by no means a 100% success rate, and a relatively small proportion of the bombs dropped during that campaign were laser guided. By the time of the second Gulf War, a much higher proportion of laser-guided weaponry was used. However, as in all systems, there are drawbacks. Airborne tactical laser systems, operating in a band very close to visible light, suffer degradation from the same sources: haze, smoke, precipitation and, in the desert, sandstorms. In other operating theatres, laser-guided systems may suffer more from weather limitations than they did in the relatively clear conditions of the desert.

As was stated earlier, lasers may be fitted internally in some aircraft and in pods on others. Figure 5.30 shows the three units that comprise the laser system for the F-18.

5.6.3 US Air Force Airborne Laser (ABL)

The use of the lasers described so far relate to the tactical use of lasers to mark targets, determine target range and designate the target so that it may be engaged by a variety of air-launched weaponry. These lasers are not ‘death rays’ and, although they exhibit reasonably high energy levels, they are not sufficiently powerful to destroy the target by energy alone. They can, however, cause serious damage to the human eye, and laser safety issues are discussed later in the chapter.

The US Air Force airborne laser (ABL), designated as YAL-1, is a high-energy laser weapon system designed to destroy tactical theatre ballistic missiles. It is being developed by the Air Force in conjunction with a team comprising Boeing, Northrop Grumman and Lockheed Martin. The laser system is carried on board a converted Boeing 747F freighter and is currently undergoing test and evaluation. If successful, it is intended to procure several platforms with an initial operational capability of three aircraft by 2006 with a total capability of seven aircraft by 2010.

The ABL system actually carries a total of three laser systems:

  1. A low-power, multiple set of laser target-illuminating beams comprising the target illuminating laser (TILL) to determine the range of the target and provide information on the atmosphere through which the primary beam will travel. The TILL provides the aiming data for the primary beam.
  2. A beacon illuminating laser (BILL), producing power in kilowatts, reflects energy from the target to provide data about the rapidly changing nature of the atmosphere along the sightline to the target. This information is used to bias a set of deformation control mirrors in the primary laser beam control system such that corrections are applied to the COIL laser beam as it engages the target.
  3. The chemical oxygen iodine laser (COIL) is the primary laser generating the killer beam to destroy the target. This beam power is in the megawatt region and operates on a wavelength of 1.315 μm. When a missile launch is detected, either by satellite or by AWACS, the target information is passed via data links to the ABL aircraft. The COIL beam is directed at the target by means of a large 1.5 m telescope mirror system at the nose of the aircraft which focuses the primary beam on the missile, destroying it shortly after launch.

5.6.4 Laser Safety

Pulsed solid-state lasers of the types commonly used in avionics applications have eye safety implications, as do many laser types. The peak powers involved are so high and the beams so narrow that direct viewing of the beam or its reflections is an eye hazard even at great distances. Nd:YAG and Nd:glass lasers are particularly dangerous because their output wavelength (1064 nm) is transmitted through the eye and focused on the retina, yet it is not visible to the human eye. The wavelengths to which the human eye is most susceptible lie between 400 and 1400 nm, where the eye passes the energy and, for the greater part, the retina absorbs it. Above and below this band, the eye tissue absorbs rather than passes the energy. Lasers can be made to operate in a safe mode using a technique called Raman shifting; in this way, Nd:YAG lasers can operate on a ‘shifted’ wavelength of 1540 nm, outside the hazardous band. In fact, lasers operating on this wavelength may be tolerated at powers ∽10⁵ times that of the 1064 nm wavelength.

Military forces using lasers are bound by the same safety code as everyone else and therefore have to take precautions, especially when realistic training is required. The solution is that many military lasers are designated as being ‘eye safe’ or utilise dual-band operation. This allows personnel to train realistically in peacetime while using the main system in times of conflict.

To ensure the safe operation of lasers, four primary classes have been specified, together with a number of subclasses:

  1. Class I. These lasers cannot emit laser radiation at known hazard levels.
  2. Class IA. This is a special designation that applies to lasers ‘not intended for viewing’, such as supermarket scanners. The upper power limit is 4.0 mW.
  3. Class II. These are low-power visible lasers that emit light above class I levels but at a radiant power level not greater than 1 mW. The idea is that human aversion to bright light will provide an instinctive reaction.
  4. Class IIIA. These are intermediate-power lasers (CW – 1–5 mW) that are hazardous only for direct beam viewing. Laser pointers are in this category.
  5. Class IIIB. These are moderate-power lasers.
  6. Class IV. These are high-power lasers (CW – 500 mW; pulsed – 10 J/cm2) that are hazardous under any viewing conditions (either direct or diffusely scattered) and are a potential fire and skin hazard. Significant controls are required of class IV facilities.

Many military lasers operating in their primary (rather than eye-safe) mode are class IV devices and must be handled accordingly.
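The CW power limits quoted in the class list above can be sketched as a simple classifier. This is an illustration only, restricted to visible-band CW lasers; real classification also depends on wavelength and exposure duration, and the Class I/IA limits are omitted here because they are wavelength dependent:

```python
def cw_laser_class(power_mw: float) -> str:
    """Classify a visible-band CW laser by output power alone.

    Thresholds taken from the class descriptions in the text:
    Class II <= 1 mW, Class IIIA 1-5 mW, Class IIIB up to 500 mW,
    Class IV above 500 mW. Illustrative only.
    """
    if power_mw <= 1.0:
        return "II"     # aversion response assumed adequate
    if power_mw <= 5.0:
        return "IIIA"   # hazardous for direct beam viewing
    if power_mw <= 500.0:
        return "IIIB"   # moderate power
    return "IV"         # hazardous under any viewing conditions

print(cw_laser_class(3.0))      # IIIA (typical laser pointer)
print(cw_laser_class(1000.0))   # IV
```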

5.7 Integrated Systems

As the electrooptic technologies used in avionics systems have matured, the benefits of integrating or fusing the different sensors have become apparent. Mounting the sensor on a rotating gimbal assembly allows the sensor(s) to track targets with high slew rates and with several degrees of freedom. Co-boresighting the sensors on this common gimbal assembly provides the ability to recognise targets and hand off target data from one sensor to another, thereby improving the overall capability of the integrated sensor set. Typical sensors that might be arranged in this fashion include:

  • FLIR imaging using FPAs with several FOV options and, in some cases, dual-band sensors;
  • CCD-TV with two or more FOV options using monochrome or colour displays and often a camera to record results;
  • Laser target markers to illuminate targets, laser range finders to determine range and laser spot markers for target hand-off.

Such sensor clusters have to be carefully aligned or ‘harmonised’ to ensure maximum weapon-aiming accuracy. Also, given the high levels of resolution, the sensor cluster has to be stabilised to avoid ‘jitter’ and provide stable imagery. In some cases this stabilisation will be within ∼15–30 μrad.
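As a rough feel for what such stabilisation figures mean, the linear ‘smear’ at the target produced by a given angular jitter can be estimated with the small-angle approximation (the 40 nm range used here is purely illustrative):

```python
NMI_TO_M = 1852.0  # metres per nautical mile

def jitter_smear_m(jitter_urad: float, range_nmi: float) -> float:
    """Linear smear at the target caused by angular sightline jitter,
    using the small-angle approximation (arc length = angle * range)."""
    return jitter_urad * 1e-6 * range_nmi * NMI_TO_M

for jitter in (15.0, 30.0):
    print(f"{jitter:.0f} urad at 40 nm -> "
          f"{jitter_smear_m(jitter, 40.0):.2f} m smear")
```

So even 15–30 μrad of jitter moves the sightline by one to two metres at a 40 nm target, which is why such tight stabilisation is required.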

These EO integrated systems may take any of the following forms:

  • Installation in a pod to be mounted on a fighter aircraft weapon station;
  • Installation in a turret for use on a helicopter or fixed-wing airborne surveillance vehicle;
  • In stealthy aircraft: internal carriage to maintain low observability.

5.7.1 Electrooptic Sensor Fusion

The electromagnetic radiation used by EO systems is contained within a relatively narrow portion of the spectrum:

  • Visible light 0.400 μm (V) to 0.750 μm (R);
  • IR bands 1.0 μm (lower SWIR) to 14.0 μm (upper LWIR);
  • Airborne laser 1.064 μm and 1.54 μm (eye safe).

Therefore the total sensor set covers the relatively narrow band from 0.4 to 14.0 μm or a dynamic range of around 35:1.

In spite of this relatively low dynamic range or coverage compared with radar and CNI, the transmission properties of the different bands/wavelengths being used are quite different. For example, a laser that would be extremely hazardous to the human eye when operated at 1.064 μm is eye safe when operated at 1.54 μm. Perhaps even more important, from the point of view of acquiring and engaging the target, the fields of view (FOV) of the laser and the visible/IR sensors differ greatly. The laser beam has very low divergence, typically of the order of 0.25 mrad (2.5 × 10⁻⁴ rad or 0.014°), whereas a navigation FLIR may have an FOV of around 20° × 15° (∼1400 times wider in azimuth and ∼1050 times wider in elevation). For target engagement activities, narrower FOV modes such as 4° × 4° (MFOV) or 1° × 1° (NFOV) may be used. The alignment or co-boresighting of the EO sensors must be carefully controlled (see Figure 5.31, which illustrates the principle of harmonisation but obviously does not show the respective fields of view to scale).
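The ratios quoted in parentheses above can be verified with a short calculation, using the 0.25 mrad divergence and 20° × 15° navigation FLIR FOV from the text:

```python
import math

LASER_DIVERGENCE_RAD = 0.25e-3            # ~0.25 mrad beam divergence
laser_deg = math.degrees(LASER_DIVERGENCE_RAD)

print(f"laser beam width: {laser_deg:.3f} deg")        # ~0.014 deg
print(f"azimuth ratio:    {20.0 / laser_deg:.0f}")     # ~1400
print(f"elevation ratio:  {15.0 / laser_deg:.0f}")     # ~1050
```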

Figure 5.31 Typical EO sensor fields of view – Harmonisation.

There are a number of systems issues that are very important and that must be taken into account if successful EO sensor fusion and weapons launch are to be accomplished. These factors include:

  1. A relatively wide FOV for navigation and target acquisition (∼20° × 20°) – in the case of the Apache TADS/PNVS, 40° × 30°.
  2. A relatively narrow FOV for target identification and lock on (∼4° × 4°) MFOV or (1° × 1°) NFOV.
  3. High target sightline slew rates, especially at short range, possibly >90 deg/s.
  4. Relatively small angles subtended by the target and the need to stabilise the sensor package boresight within very tight limits (especially for long-range targets). A small bridge may subtend an angle of ∼0.024° at 40 nm, and a tank about the same angle at 10 nm. The problem may be likened to using binoculars held in an unsteady manner: the enlarged image may be visible, but jitter renders the magnified image virtually useless unless the glasses can be steadied. Typical head stabilisation accuracies on third-generation sensor packages are of the order of 15–30 μrad.
  5. For a variety of reasons it is necessary to provide accurate inertial as well as sightline stabilisation for GPS/inertially guided weapons such as JDAM – often referred to as the J-series weapons. Accordingly, many advanced EO packages have a dedicated strapdown inertial navigation unit (INU) fitted directly on to the head assembly to improve pointing and positional accuracy and reduce data latency.
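A quick sketch of the subtended-angle figures in point 4, using assumed target sizes (a ∼31 m bridge span and a ∼7.8 m tank length, both purely illustrative) and the small-angle approximation:

```python
import math

NMI_TO_M = 1852.0  # metres per nautical mile

def subtended_angle_deg(size_m: float, range_nmi: float) -> float:
    """Angle subtended by a target, small-angle approximation."""
    return math.degrees(size_m / (range_nmi * NMI_TO_M))

# Assumed, illustrative target dimensions:
print(f"bridge at 40 nm: {subtended_angle_deg(31.0, 40.0):.3f} deg")
print(f"tank at 10 nm:   {subtended_angle_deg(7.8, 10.0):.3f} deg")
```

Both work out at roughly 0.024°, matching the figures quoted in the text.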

These technical issues are illustrated in Figure 5.32. These demanding requirements have to be met within a pod mounted under the wing or fuselage in a hostile environment, while the launch platform could be flying at supersonic speed. As will be seen, the space and weight available to satisfy these requirements are not unlimited, and modern EO sensor attack or surveillance packages represent very sophisticated solutions to very difficult engineering problems.

Figure 5.32 Boresighting, stabilisation and EO package sightline axes.

Figure 5.33 shows a typical integrated EO sensor for carriage within a 41 cm/16 in diameter pod, in this case the Northrop Grumman Litening II AT pod. This sensor pod contains the following:

  • Strapdown INS;
  • Wide FOV CCD camera/laser spot detector;
  • Narrow FOV CCD camera;
  • Laser designator/range finder;
  • FLIR.

The entire sensor package is mounted on a gimbal assembly that is free to move in roll, elevation and azimuth.

5.7.2 Pod Installations

Podded installations are usually carried on fighter aircraft, although in certain circumstances they may be fitted to other aircraft. For example, targeting pods were trialled on the B-52 during the recent Iraq War, and plans are being reviewed to fit them to B-52 and B-1 bombers. As opposed to the turreted implementations described later in the chapter, EO pods lend themselves to carriage on certain weapon stations of fighter aircraft, such that the aircraft can be re-roled to perform a specific mission merely by fitting the pod. Apart from tailoring the mission-specific software to the requirements of the mission, no aircraft modifications are required. All the hardware and software modifications needed to adapt the pod to the aircraft baseline avionics system are in place, such that all the subsystems and display symbology will be compatible and the mission can be performed. EO pods therefore give the battlefield commander additional flexibility in executing the overall battle plan.

Figure 5.33 Typical pod-mounted sensor package (Litening II AT).

The first pods to be developed formed the low-altitude navigation and targeting infrared for night (LANTIRN) system, introduced into the US Air Force in the late 1980s. In truth, this system comprised two pods:

  • The AN/AAQ-13 navigation pod containing a terrain-following radar (TFR) and an FLIR system to aid low-level navigation at night;
  • The AN/AAQ-14 targeting pod comprising a targeting FLIR and laser designator/range finder.

The introduction of these pods in the later stages of the Cold War boosted the capability of the US Air Force to attack ground targets accurately in all weather conditions. These first-generation pods were fitted to a variety of aircraft, including the F-14, F-16C/D and F-15E in the United States, and subsequently to the aircraft of a number of allied air forces. The almost simultaneous demise of the Soviet Union and the successful deployment and execution of Desert Storm in Iraq made military planners realise that most attacks using laser-guided munitions would need to be performed from 30 000 ft or above if reasonable aircraft survivability rates were to be achieved. The focus for the use of EO pods therefore shifted from low-level attack to attack from medium level.

To accomplish this new mission, performance improvements were needed in the targeting pod. Low-level ingress to and egress from the target were no longer required, as it was assumed that successful weapons launch could be made from above 30 000 ft, beyond the reach of short-range and most medium-range surface-to-air missile (SAM) threats. The emphasis shifted to long-range target detection and identification and the increased deployment of weapons from long range with INS/GPS guidance. These drivers led to some of the performance improvements mentioned earlier, and with the ‘third-generation’ pods now entering service these requirements are largely satisfied.

Table 5.3 is a top-level comparison and summary of the most common targeting pods developed by contractors in the United States, United Kingdom, France and Israel and deployed from the late 1980s onwards. The latest pods have been designed with COTS in mind and are modular in construction so that technology and performance improvements may be readily inserted. The modular construction also leads to easier maintenance, with faulty modules replaced on aircraft in a matter of minutes; high levels of built-in test (BIT) are provided to check out the system readily and with high confidence. Most of these pods have mean time between failure (MTBF) figures of a few hundred hours, roughly equivalent to one failure per year at peacetime flying rates.
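The MTBF observation above can be sanity-checked with a one-line calculation, assuming a peacetime utilisation of roughly 300 flight hours per airframe per year (an assumed figure for illustration):

```python
# Assumed peacetime utilisation; actual figures vary by fleet and role.
PEACETIME_FLIGHT_HOURS_PER_YEAR = 300.0

def failures_per_year(mtbf_hours: float) -> float:
    """Expected failures per year at the assumed annual flying rate."""
    return PEACETIME_FLIGHT_HOURS_PER_YEAR / mtbf_hours

# An MTBF of ~300 h then corresponds to about one failure per year.
print(failures_per_year(300.0))   # 1.0
```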

Examples of EO targeting pods are shown in Figure 5.34.

5.7.3 Turret Installations

Whereas podded installations are useful for loading on to the weapon stations of a fast jet, turreted installations are more suited to permanent fitment. For a clear line of sight to their targets they need to be located at the front of the aircraft, and for these reasons they are particularly well suited to installation on helicopters. Some turreted installations have been made on fixed-wing aircraft such as the B-52H, the US Navy P-3C Orion and the Nimrod MR2, and they have also been considered for the S-3 Viking. The last three aircraft have anti-submarine warfare (ASW) roles.

The first application of EO turrets on helicopters was with systems like that of the AH-64 Apache. Many different helicopters are now fitted with EO turrets, for roles including special forces operations, coastguard duties and law and drug enforcement. In general, these systems are used in low-level and short-range engagements rather than the high-level, long-range operations of podded fighter aircraft. The atmospheric conditions at low level, combined with obscurants such as smoke and haze, mean that LWIR systems often fare better than shorter-wavelength systems. However, MWIR, with its shorter wavelength, offers greater resolution. In some recent applications, dual-band MWIR and LWIR sensors are accommodated.

Table 5.3 Summary and comparison of EO targeting pods

Lockheed Martin – AN/AAQ-13 navigation LANTIRN pod; AN/AAQ-14 targeting LANTIRN pod
  Dimensions: navigation pod – L: 199 cm/78 in; D: 31 cm/12 in; W: 211 kg/470 lb. Targeting pod – L: 251 cm/98 in; D: 38 cm/15 in; W: 236 kg/524 lb
  Capabilities: terrain-following radar (TFR); fixed navigation FLIR: FOV 21° × 28° (640 × 512); targeting FLIR: MFOV 4° × 4° (640 × 512), NFOV 1° × 1°; laser designator/range finder. Many upgrades during service, including third-generation FLIR, 40 000 ft laser, laser spot tracker, CCD-TV sensor, digital data recorder and geocoordinate generation for J-series weapons
  Carriage aircraft: F-16C/D; F-15E; F-14; +12 international air forces. Over 700 pods in service

BAE SYSTEMS – Thermal imaging and laser designator (TIALD)
  Dimensions: L: 290 cm/114 in; D: 35 cm/14 in; W: 210 kg/462 lb
  Capabilities: WFOV 10° × 10°; MFOV 3.6° × 3.6°; electronic zoom ×2, ×4; modification added a CCD-TV
  Carriage aircraft: Harrier GR7; Jaguar GR1/GR3; Tornado GR1/GR4

Northrop Grumman/Rafael – AN/AAQ-28(V) Litening II (variants: Litening II, 1999; Litening II ER, 2001; Litening II AT, 2003)
  Dimensions: L: 230 cm/87 in; D: 40.6 cm/16 in; W: 200 kg/440 lb
  Capabilities: CCD-TV – MFOV and NFOV; 640 × 480 FLIR operating in MWIR: WFOV 18.4° × 24.1° (Nav), MFOV 3.5° × 3.5°, NFOV 1° × 1°; laser spot tracker/range finder; laser designator >50 000 ft/40 miles; Litening III has dual-mode laser (including eye safe)
  Carriage aircraft: US ANG F-16 block 25/30/32; USMC; Spanish Navy; Italian Navy AV-8B; Spanish F/A-18; Israeli F-15I; Israeli F-16C/D/I; German Navy and Air Force Tornado. Total of 14 air forces

Lockheed Martin – Sniper XR (extended range) targeting pod; export version known as PANTERA
  Dimensions: L: 239 cm/87 in; D: 30 cm/12 in; W: 200 kg/440 lb
  Capabilities: CCD-TV: WFOV 4° × 4°, NFOV 1° × 1°; 640 × 480 FPA FLIR operating in MWIR; diode-pumped laser >40 000 ft; laser range finder/spot tracker; dual-mode laser (including eye safe); geocoordinate generation for J-series weapons
  Carriage aircraft: USAF F-16 block 50; ANG F-16 block 30; F-15E; A-10

Raytheon – AN/ASQ-228 ATFLIR, recently named Terminator
  Dimensions: L: 183 cm/72 in; D: 33 cm/13 in; W: 191 kg/420 lb
  Capabilities: 640 × 480 FPA FLIR operating in MWIR: WFOV 6° × 6°, MFOV 2.8° × 2.8°, NFOV 0.7° × 0.7°
  Carriage aircraft: F/A-18A+, C/D, E/F. Replacement for the AN/AAS-38 Nite-Hawk

Thales – Damocles
  Dimensions: L: 250 cm/98 in; D: not quoted; W: 265 kg/580 lb
  Capabilities: third-generation MWIR FLIR sensor: WFOV 24° × 18° (Nav), MFOV 4° × 3°, NFOV 1° × 0.75°; laser range finder: 1.5 μm (eye safe); laser designator/range finder/spot tracker: 1.06 μm
  Carriage aircraft: Super Étendard; Mirage 2000 (replaces ATLIS). Expected to be fitted to Rafale

The increasing use of unmanned air vehicles (UAVs) in reconnaissance and combat roles has given significant impetus to the production of smaller, lighter systems suitable to be used as a UAV payload.

Figure 5.35 shows typical turrets with imagery examples.

Figure 5.34 Examples of EO targeting pods.

Figure 5.35 Typical EO turrets and imagery examples.

5.7.4 Internal Installations

Stealthy aircraft such as the F-117 and F-35 incorporate EO sensor suites to assist in engaging ground targets. The F-35 in particular will carry an interesting internal system called the electrooptic sensing system (EOSS). This comprises two major functional elements:

Table 5.4 Summary of typical EO turreted systems.

Lockheed Martin – AN/ASQ-170 target acquisition and designation sight (TADS); AN/AAQ-11 pilot’s night-vision sight (PNVS)
  Role: navigation/attack
  Capabilities: direct-vision optics: WFOV 18° × 18°, NFOV 3.5° × 3.5°; TV camera: WFOV 4° × 4°, NFOV 0.9° × 0.9°, underscan 0.45° × 0.45°; FLIR: WFOV 50° × 50°, MFOV 10.2° × 10.2°, NFOV 3.1° × 3.1°, underscan 1.6° × 1.6°; laser range finder/designator. Arrowhead modification programme commenced in 2000: M-TADS and M-PNVS utilising RAH-66 Comanche technology. Features: LWIR FLIR using 640 × 480 FPA; stabilisation improvements; eye-safe laser; FLIR and IR sensor fusion; colour CCD-TV camera
  Carriage aircraft: AH-64C/D Longbow Apache. Over 1000 systems delivered

Raytheon – AN/AAQ-27, follow-on to the AN/AAQ-16B/C/D
  Role: surveillance and navigation/attack
  Capabilities:
  • -16B variant: LWIR FLIR, dual FOV;
  • -16C variant: LWIR FLIR, dual FOV;
  • -16D variant: LWIR FLIR, three FOV with laser range finder/designator;
  • -27: MWIR FLIR using 640 × 480 FPA; dual- and three-FOV [AN/AAQ-27 (3 FOV)] versions available
  Carriage aircraft: CH-53E; CH-47; MH-60L/M; SH-60B; MV-22 Osprey; RAN Super Seasprite; podded version on F-18 as AN/AAR-50

Wescam – AN/ASQ-4 using Wescam MX-20 turret
  Role: surveillance
  Capabilities: high-quality gyrostabilisation and picture quality; high-magnification step-zoom capability on FLIR and TV sensors; geolocation for pinpointing ground target location; MWIR FLIR with 640 × 480 FPA: WFOV 12.0° × 9.3°, NFOV 2.9° × 2.3°
  Carriage aircraft: P-3C Orion; S-3 Viking; Nimrod MR-2

Figure 5.36 F-35 EO sensor vertical coverage and EOTS installation.

Figure 5.37 F-35 horizontal coverage using DAS sensors. (Northrop Grumman)

  1. The electro-optic targeting system (EOTS) being developed by Lockheed Martin and BAE SYSTEMS. This is an internally carried EO targeting system that shares many common modules with the SNIPER XR pod already mentioned. The EOTS looks downwards and forwards with respect to the aircraft centre-line, as shown in Figure 5.36. The EOTS installation and EO sensor window are shown.
  2. The distributed aperture system (DAS), being developed by Northrop Grumman together with BAE SYSTEMS, comprises six EO sensors located around the aircraft to provide the pilot with 360° situational awareness information detected by passive means. The concept of horizontal coverage of the DAS is depicted in Figure 5.37. The six DAS sensors provide complete lateral coverage and are based upon technology developed for the BAE SYSTEMS Sigma package (shown in the inset). Key attributes are dual-band MWIR (3–5 μm) and LWIR (8–10 μm) operation using a 640 × 512 FPA. Each sensor measures ∼7 × 5 × 4 in, weighs ∼9 lb and consumes less than 20 W. Sensor devices with megapixel capability (1000 × 1000) are under development and will be incorporated.

References

Atkin, K. (ed.) (2002–2003) Jane’s Electro-Optic Systems, 8th edn.

Bannen, D. and Milner, D. (2004) Information across the spectrum. SPIE Optical Engineering Magazine, March.

Capper, P. and Elliott, C.T. (eds) (2001) Infrared Detectors and Emitters: Materials and Devices, Kluwer Academic Publishers.

Carrano, J., Perconti, P. and Bannard, K. (2004) Tuning in to detection. SPIE Optical Engineering Magazine, April.

Chan, Goldberg, Der and Nasrabadi (2002) Dual band imagery improves detection of military target. SPIE Optical Engineering Magazine, April.

Kopp, C. (1994) The Sidewinder story, the evolution of the AIM-9 Sidewinder. Australian Aviation, April.

Petrie, G. and Buruksolik, G. (2001) Recent developments in airborne infra-red sensors. Geo Informatics, February.