8: A Particle Imaging and Analysis System for Underwater Holograms – Optical Methods for Data Processing in Heat and Fluid Flow


A Particle Imaging and Analysis System for Underwater Holograms

J J Nebrensky, G Craig, G L Foresti, S Gentili, P R Hobson, H Nareid, G G Pieroni, and J Watson


Pulsed holography is an important technique for the study of particle fields: it allows instantaneous, non-invasive, high-resolution recording, and the later replay of real images from which one can obtain the size, shape, position, and – if multiple exposures are made – velocity of every object in the sample volume. This Chapter will discuss various issues encountered during the design of a hologram replay machine, with particular emphasis on the physical design of the instrument, and on the volume-scanning and image-processing techniques needed to pick out the useful information from the terabytes of raw data produced by each hologram.

8.1 Introduction

Pulsed laser holography is an extremely powerful technique for the study of particle fields as it provides instantaneous, non-invasive, high-resolution records while avoiding the distortion of the flow field associated with inserted probes. From the replayed real images one can obtain the size, shape, position and – if multiple exposures are made – velocity of every object in the sample volume. The value of such experiments depends crucially on the quality of the reconstructed image: not only will poor resolution degrade size and shape measurements, but aberrations such as coma and astigmatism can shift the perceived centroid of a particle, affecting position and velocity measurements.

The Holomar collaboration (1) is currently working on both an underwater holocamera – uniquely incorporating simultaneous in-line and off-axis holography – for the in-situ recording of plankton species and distributions, and ‘HoloScan’, an associated hologram reconstruction and analysis instrument primarily designed for the study of underwater holograms such as those produced by the Holomar project or in HPIV studies. This Chapter discusses various issues associated with the high-fidelity replay of underwater holograms appropriate to the design of the HoloScan replay machine. It commences with a brief description of the sources of some of the aberrations and noise that degrade the reconstructed images, and then considers the design of the scanning machine and the implementation of neural net software for the identification and classification of marine particles. It will be assumed that the recording conditions have already been optimized; this aspect is covered in depth in Hobson and Watson (2).

8.2 Sources of aberrations and noise

8.2.1 Plate position and illumination

A hologram records the shape of an object wavefront relative to a reference beam. For perfect reconstruction the reference beam at replay must therefore be an exact duplicate of that used in recording, i.e. the wavefront phase distribution at reconstruction must be the same as at exposure. This means that the replay beam must have the same degree of collimation (usually exactly parallel) and, if of the same wavelength as at recording, encounter the holographic plate from exactly the same direction.

The replay machine must allow the position of the hologram to be precisely adjusted and hold it securely during the scanning process. In practice it is also possible to cancel out some of the aberrations due to other causes by deliberately introducing an equal but opposite aberration, for example by slightly tilting the hologram plate. Since many aberrations depend on the illuminated aperture of the hologram, they can also be reduced by illuminating only a smaller area of the plate, but this lowers both the maximum resolution of the image and its brightness.

8.2.2 Transfer optics

In some applications it is necessary to use transfer optics between the holographic plate and the object to be recorded. These will often introduce aberrations into the object beam which are then recorded by the hologram, so that even perfect replay would suffer from them. One possibility is to replay the real image back through the transfer optics, which then undo the aberrations to give a corrected real image. Examples of such optics are the fish-eye lenses used for holographic recording in the large bubble chambers used in high-energy physics.

8.2.3 Refractive index changes

A change in the medium surrounding the hologram or object implies refraction effects. For example, if the hologram and objects are submerged in water during recording, then when replaying in air the rays forming the image are refracted through a greater angle on leaving the hologram plate than they would have been in water, and so the image points are located closer to the plate. Thus, for a hologram immersed directly in a medium, a change in the ambient refractive index distorts the image field. This can be corrected by changing the illumination wavelength: as the hologram acts as a diffraction grating, shorter-wavelength illumination is diffracted less, so by careful choice of wavelength the distortion can be corrected. In practice the situation is complicated by a glass window and air space that act as uncorrected transfer optics and thus introduce extra aberrations. Since it is not practical to replay the holograms into large tanks of water, these aberrations must be reduced by other means: in particular by appropriate choice of the holocamera's window and air-space thicknesses.
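As a rough first-order illustration of these two scalings – a paraxial, textbook approximation only, ignoring the window and air space discussed above and taking the refractive index of water as 1.33 – the depth shift and the compensating wavelength can be sketched as:

```python
def apparent_depth_in_air(depth_in_water_mm, n_water=1.33):
    """Paraxial estimate: an image point recorded at depth d in water
    appears at roughly d / n_water when the hologram is replayed in air."""
    return depth_in_water_mm / n_water

def matched_replay_wavelength_nm(record_nm=532.0, n_water=1.33):
    """The fringes are recorded by light whose in-water wavelength is
    lambda / n; replaying in air at that shorter wavelength restores the
    original diffraction angles, to first order."""
    return record_nm / n_water
```

For 532 nm recording this simple estimate gives a matched replay wavelength near 400 nm, which makes a blue replay line (such as the 442 nm HeCd line mentioned later) an attractive practical compromise; the full design analysis is in reference (2).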

8.2.4 Wavelength changes

There is only a limited number of possible (and affordable) laser wavelengths, not all of which are available in both pulsed and continuous operation with suitable powers. This means that in some applications it may not be possible to play back the hologram at the ideal wavelength, and a nearby wavelength must be used instead. It is possible to limit the effects of a wavelength mismatch by changing the reference beam angle to reproduce the required phase distribution across the hologram, but this can drastically reduce the replay efficiency in some situations where the Bragg condition is no longer satisfied within the emulsion.

8.2.5 Viewing angle

For in-line holography, information about the object is recorded on the holographic plate in the vicinity of the projected centre of the object, so during replay the light forming the image is all more-or-less parallel to the optical axis and different regions of the image can be investigated simply by moving the camera along three axes. In off-axis holography, however, the image is recorded over the whole plate and the field of view can be much wider. It then becomes necessary not only to translate the camera but also to rotate it as it moves away from the optical axis; otherwise a much wider-aperture camera lens is needed, and the focal plane of the object may also lie at an angle to that of the detector (Fig. 8.1).

Fig. 8.1 Comparison of viewing geometries and camera movement for in-line (above) and off-axis (below) replay

One interesting development is the In-line Recording and Off-axis Viewing (IROV) technique, in which an in-line hologram is viewed through magnifying optics at a steep angle, so that the light forming the image is only that diffracted from the high spatial frequencies in the hologram (3). This scheme provides a reduced depth of focus for the particle images and reduced speckle noise compared with conventional in-line reconstruction, at least for small volumes.

8.2.6 Background noise

‘Background noise’ refers to intensity variations in the image, often on relatively large length scales, that may obscure details in the object. There are many possible sources of this noise, including, for example, uneven illumination (say from dirt in the reference beam) and the formation of interference patterns due to stray light or light multiply reflected within the hologram plate or optical components. There is also the possibility of substantial scattering of light within the holographic emulsion, while in particle field holography there may also be out-of-focus particle images within the field of view.

8.2.7 Speckle noise

The coherent nature of laser light gives rise to the phenomenon of ‘speckle’, which gives images a grainy appearance. This leads to difficulties when viewing holograms of particle fields: large bright particles such as bubbles may be seen as ‘large bubble moons against a background of speckle noise stars’ (4), but smaller, fainter particles will be lost among the speckles. The phenomenon is aperture dependent: speckle size is reduced (and image quality improved) as the aperture is opened, which means that the optimum aperture is a compromise between increased aberrations and increased speckle. Another way to reduce the speckle visibility is to insert a rotating ground-glass diffuser into the illumination beam.
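The aperture dependence can be made concrete with the standard textbook estimate for the mean objective speckle diameter, roughly 1.22 λz/D for wavelength λ, distance z, and illuminated aperture D (this scaling is a general result, not a figure from this chapter; the example numbers below are purely illustrative):

```python
def speckle_size_um(wavelength_nm, distance_mm, aperture_mm):
    """Textbook estimate of mean objective speckle diameter,
    ~ 1.22 * lambda * z / D: opening the aperture D makes the
    speckle finer, at the cost of increased aberrations."""
    wavelength_um = wavelength_nm * 1e-3   # nm -> um
    return 1.22 * wavelength_um * distance_mm / aperture_mm
```

For instance, 442 nm light viewed 100 mm from a 10 mm illuminated aperture gives speckle of order 5 μm, comparable to the smallest organisms of interest.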

8.3 The ‘HoloCam’ underwater holocamera

The Holomar collaboration has developed an underwater holocamera, contained within a pressure housing designed for operation down to 100 m (1). Uniquely, the camera incorporates both the ‘in-line’ and ‘off-axis’ holographic geometries: in-line holography can record organisms in the 5 to 250 μm range at concentrations up to several thousand cm−3 while off-axis holography is better for organisms bigger than 100 μm and at much higher concentrations. The use of both geometries with overlapping sample volumes (of several litres) thus allows recording of a wider range of organisms under a greater variety of conditions than current alternatives. During a dive up to 25 holograms of each geometry may be recorded simultaneously on glass plates by a Q-switched, frequency-doubled Nd:YAG laser, operating at a wavelength of 532 nm and a pulse duration of less than 10 ns to freeze any motion of the organisms. The layout of a lab mock-up of the recording optics is shown in Fig. 8.2. The combination of off-axis reference beam angle (60°), window thickness and window-to-hologram plate spacing during recording has been optimized for replay using a 442 nm HeCd laser (2).

Fig. 8.2 Optical layout of a mock-up of ‘HoloCam’, showing the relationship of the in-line and off-axis sample volumes

8.4 The ‘HoloScan’ replay machine

8.4.1 Background

Automated data extraction from holograms requires that the recorded information is digitized and made available for computer analysis. The most obvious approach is to illuminate the hologram with the conjugate of the original reference beam used in recording, so as to produce a real image of the sample volume through which a suitable imaging system (such as a bare CCD or a videocamera fitted with a microscope objective) may be scanned. As discussed in section 8.2.5, for the replay of in-line holograms one would prefer to traverse the camera along linear, Cartesian axes of movement (here denoted x and y in the plane of the holographic plate and z in depth), whereas for off-axis systems one would prefer to work in a spherical polar frame of reference.

Several approaches have been used for in-line replay. The illumination system is generally kept fixed; either the holographic plate is moved along all three axes to project the requisite region of the real image into a fixed camera [e.g. Bexon et al. (5); Green and Zhao (4)] or the plate may be kept fixed and the camera moved (6). For large sample volumes there may be difficulties ensuring the mechanical stability of a three-axis positioning system, so it is common to separate the transverse and in-depth motions, usually moving the holographic plate in-plane (x, y) and focusing by moving the camera assembly (z) [e.g. Borrmann and Jaenicke (7)]. This arrangement allows an aperture to be introduced into the replay beam so that only a limited region of the holographic plate is illuminated, which helps reduce background noise due to scattering in the emulsion.

Similar considerations apply to the automated scanning of large-volume off-axis holograms, except that the movements should correspond to a polar frame of reference. Kitagaki (8) built a scanning machine in which the hologram was fixed while a pair of cameras moved through the real image on the end of an arm. Since the distance between the holographic plate and the region of interest was 2 to 3 m, a pair of mirrors was used to fold the image beam along the arm. The sampled view was thus defined by the arm position (φ, θ) and the mirror separation (r). Deason (9) and Naon et al. (10) instead rotated the holographic plate (along with the reference beam) so that the camera system only needed to select the appropriate depth plane, either by a mechanical movement (9) or by a change in the focal length of the optics (10).

As HoloScan must be able to replay both the in-line and off-axis holograms recorded by HoloCam, it has been decided to keep the holographic plate still and scan the videocamera through the projected real image on a set of three linear stages (up to 1000 mm in length); although this will restrict the performance of the replay system near the edges of the off-axis field of view, the increased aberrations of the reconstructed holographic image there will limit the impact on the data gathered. Furthermore, the nature of the work, coupled with the overlapping volumes on the camera, means that novel organisms or objects requiring manual study are likely to be recorded in two holograms at once, and so it should be possible to switch the machine between in-line and off-axis modes as quickly and easily as possible.

Several replay machine geometries have been considered (11). The approach of keeping the holographic plate and stages and the laser fixed and moving the collimator assembly to change beam angle between in-line and off-axis replay was discarded because of the difficulties expected in maintaining alignment of the collimator with the laser beam during the geometry change. Keeping the laser and collimator fixed and rotating the holographic plate about a vertical axis also requires that the large and very heavy stage/video camera assembly is moved precisely around the plate; the opposite approach of keeping the stages still and instead pivoting an integrated laser/collimator assembly is more viable but still requires rapid and precise movement of a large but delicate assembly.

8.4.2 Physical layout

The final system, shown in Fig. 8.3, has two fixed arms, one carrying the stages and videocamera assembly and the other the laser/collimator assembly. The replay beam is reflected on to the holographic plate by a large λ/10 mirror which can easily be moved between two fixed positions for off-axis (46° beam angle) and in-line (0°, not shown) replay. There is thus only one relatively small moving part, which means that the support frame (MiniTec aluminium extrusion) can be both simple and rigid. A Kimmon 180 mW HeCd laser is mounted vertically above the collimator assembly (a Keplerian telescope composed of a CVI 1000 mm focal length achromatic objective and a Melles-Griot 10 mm doublet, producing a 100 mm diameter beam flat to λ/5 over the likely operating temperature range); the mirror pair at the outer end of the laser/collimator arm provides a convenient location for any necessary beam attenuators and shutters.

Fig. 8.3 HoloScan layout (off-axis configuration)

The long (z) stage has a travel of 1000 mm and a step size of 10 μm while the shorter stages have a travel of 200 mm and a step size of 5 μm. The Ealing DPS controller and stages are driven from a PC via an RS-232 connection. The standard monochrome signal from the CCD videocamera (JAI CV-M300) is captured by an ITI PC-Vision framegrabber housed in the same PC; this allows both image acquisition and stage movement to be controlled and automated by a single piece of software. For in-line holograms the camera is fitted with a ×10 microscope objective, so that each video frame represents a field of view of 1 mm by 0.8 mm. The larger particles recorded with off-axis holography allow imaging at 1:1, by projecting the real image directly on to the videocamera CCD.

One difficulty associated with the wavelength change between recording (532 nm) and replay (442 nm) is that after the emulsion shrinkage associated with photographic processing, the Bragg condition may no longer be fulfilled by the off-axis holograms during replay, resulting in the reconstructed image being too faint to be usefully detected by the current camera. As it is desirable to keep to the processing technique that gives the lowest background noise and best image resolution/geometrical fidelity in the final hologram, alternative solutions to this problem are being explored; notably either running the camera in an integrating mode or controlling the ambient humidity in the plateholder to adjust the emulsion thickness.

8.4.3 Scanning strategy

A few typical holographic images of plankton and marine particles are shown in Fig. 8.4. These have been obtained by manually scanning the camera through the whole reconstructed sample volume, locating objects, and focusing by eye. This process is extremely time consuming, especially as the oceanological interest lies primarily not in the appearance of any given individual but in the relative distributions of large numbers of organisms.

Automation of the analysis step in particle holography is critical to large-scale application; otherwise the data-taking can outstrip the data extraction process. Green and Zhao (4) estimated their automated HPIV analysis system would take twice as long as a human, while that of Barnhart et al. (12) could map out velocity vectors through a 1000 cm3 volume sampled in 1 mm steps in 5–6 hours; but such systems perform minimal analysis of individual objects. Using semi-manual scanning systems, Brown (13) took ‘a few hours’ for each 150 cm3 sample volume looking at ice crystals in clouds, while in their studies of meteorological fogs Borrmann and Jaenicke (7) required 32 hours for each hologram covering 8 cm3 and 1000 droplets. Improvements to the holocamera allowed Vössing et al. (14) to record ice crystals and snowflakes over a 500 l volume, pushing analysis times up to 70 hours per hologram. Recently, Katz et al. (6) tested a submersible with an in-line-only holographic system for plankton studies. The holocamera uses film and can record up to 300 holograms in each dive, but manual analysis then takes two man-weeks for each hologram of the 300 to 2000 cm3 sample volume. The problem with such protracted data extraction is not merely the time scale per se, but that any manual involvement invites operator fatigue, which can introduce hidden systematic errors into the results.

Fig. 8.4 Holographic reconstructions of (above) a 300 μm long Thalassiosira and a couple of Asterionella, and a piece of floc (below), all at approximately the same scale

The extraction of species distribution data can conveniently be separated into two stages. First, the HoloScan machine scans the videocamera through the depth of the sample volume (z) in a series of 0.1 mm steps; when the end of the sample volume is reached the camera is panned sideways before returning, so that eventually the entire volume has been covered. For each step, dedicated image pre-processing and tracking software [described in Nebrensky et al. (11)] cleans up the captured image and locates any possible objects in view. These are tracked between steps so that their focal plane can be identified, and the in-focus images are extracted and binarized. In the second stage the binarized images are presented to a neural net (section 8.5) for detailed classification.
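The serpentine depth-first raster described above can be sketched as a generator of camera positions. This is a simplified, hypothetical outline only: the actual stage control (the Ealing DPS over RS-232), the image capture, and the tracking software are omitted, and the field-of-view and step values are those quoted for in-line replay.

```python
def scan_positions(width_mm, height_mm, depth_mm,
                   fov_x=1.0, fov_y=0.8, dz=0.1):
    """Yield (x, y, z) camera positions covering the sample volume:
    step through the depth (z) in dz increments, pan sideways at the
    end, then come back through the next column, until the whole
    volume has been covered (a serpentine scan)."""
    nx = int(round(width_mm / fov_x))
    ny = int(round(height_mm / fov_y))
    nz = int(round(depth_mm / dz))
    forward = True
    for ix in range(nx):
        for iy in range(ny):
            zs = range(nz) if forward else range(nz - 1, -1, -1)
            for iz in zs:
                yield (ix * fov_x, iy * fov_y, iz * dz)
            forward = not forward   # reverse depth direction per column
```

Reversing the depth direction on alternate columns halves the dead time spent driving the long (1000 mm) z stage back to its start.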

8.5 Particle identification and organism classification

Because of the enormous amount of available data and the time a human operator needs to perform the analysis, automatic plankton classification is a very challenging and important task. Holographic plankton images present two main problems for an automatic classifier. One is the intrinsic noise of the hologram, which must be dealt with during the pre-processing and binarization of the original images. The second problem (more important from the classification point of view) is that we have two-dimensional images of three-dimensional objects that may be seen from any point of view and in any orientation. Moreover, plankton is composed of living organisms that can cluster together in different ways, or may be deformed by their motion. For these reasons the apparent shape can vary remarkably even among organisms of the same class (see Fig. 8.5).

Fig. 8.5 Binarized images of objects belonging to the same class (Asterionella)

In order to overcome the problems related to rotation in the plane of the image, we decided to use rotation- and translation-invariant features extracted from the images to train a neural network to classify the different organisms. The training algorithm is ‘Back Propagation’, adopted for its flexibility and noise resistance. Moreover, as a large database of images will be analysed, the feature-extraction method must be fast, so computational time has also been investigated.

The following features have been considered: Hu moments (15), pattern spectrum (16), statistical pattern spectrum (17), and high order pattern spectrum (18).

We carried out a performance comparison of the feature-extraction methods by evaluating their invariance under rotation and their computational speed (see Table 8.1). At the end of the tests we decided to use the first five Hu moments (the last two are too sensitive to noise) because, although they may not be the most accurate, their computational time is about 1/30 to 1/300 of that of the other methods.
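To illustrate the invariance property, the sketch below builds the first two Hu invariants from scratch (central moments normalized for translation and scale); the actual system uses the first five, and this minimal version makes no claim to match its implementation.

```python
import numpy as np

def hu_first_two(img):
    """First two Hu invariants of a binary image: central moments ->
    scale-normalized moments -> phi1, phi2 (both rotation invariant)."""
    ys, xs = np.nonzero(img)
    m00 = float(len(xs))                  # zeroth moment = pixel count
    xbar, ybar = xs.mean(), ys.mean()
    def mu(p, q):                         # central moment (translation inv.)
        return (((xs - xbar) ** p) * ((ys - ybar) ** q)).sum()
    def eta(p, q):                        # scale-normalized central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2.0)
    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4.0 * eta(1, 1) ** 2
    return phi1, phi2
```

Rotating or translating an asymmetric binary shape leaves phi1 and phi2 unchanged, which is exactly why such moments suit organisms recorded in arbitrary orientations.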

As size is a useful feature for discriminating one species from another, and Hu moments are shape descriptors invariant under scaling, we decided to add a feature representing the elongation of the organism (in pixels) along the major inertia axis, in order to discriminate objects by their size. Another rotation-invariant feature is the ratio between the elongation along the secondary inertia axis and that along the primary inertia axis. This feature has also been added, as it does not need a long computational time and supplies further useful information on object shape. In this way we have a seven-element vector extracted from each image to be classified by the neural network.

A neural network is formed by many processing units (neurons) connected to one another. Computation develops in a parallel way, as many neurons work at the same time; because input patterns are evaluated by several processing units, neural nets are noise resistant and show greater versatility than classical algorithms. Moreover, neural nets have the capacity to generalize, so that a limited number of examples of a given object class is sufficient to allow the recognition of other objects of the same class through their common features. The structure adopted in our experiment is a three-layer network (input, hidden, output) in which the input layer is formed by seven units, the hidden layer by seven units, and the output layer by five units. It runs a Back Propagation algorithm that has been trained on a 38-pattern training set for 40 000 learning iterations. The time needed to train the network is 2 h 40 min (CPU time) on a Pentium II 300 MHz with 128 MB RAM.
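The chapter does not spell out how ‘elongation along the inertia axes’ is computed, so the following is one plausible construction, offered as an assumption: take the eigenvectors of the pixel-coordinate covariance matrix as the inertia axes and measure the extent of the shape along each.

```python
import numpy as np

def elongation_features(img):
    """Length (in pixels) along the major inertia axis, and the
    minor/major elongation ratio, via the eigenvectors of the
    pixel-coordinate covariance matrix."""
    ys, xs = np.nonzero(img)
    pts = np.column_stack((xs, ys)).astype(float)
    pts -= pts.mean(axis=0)                    # centre on the centroid
    evals, evecs = np.linalg.eigh(np.cov(pts, rowvar=False))
    major, minor = evecs[:, 1], evecs[:, 0]    # eigh sorts ascending
    len_major = np.ptp(pts @ major)            # extent along each axis
    len_minor = np.ptp(pts @ minor)
    return len_major, len_minor / len_major
```

The major-axis length carries the size information that the scale-invariant Hu moments discard, while the axis ratio is itself rotation invariant.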

Table 8.1 Evaluation of the feature-extraction algorithms

In order to have an effective network, it is necessary to supply a representative training set. The sample images within each class should be neither too similar, to avoid too great a specialization, nor too different, to allow the convergence of the net and to avoid confusion with other classes. Five possible output classes (corresponding to the five output neurons of the network) have been assumed in the available database of test images. There are three different classes of plants (Phytoplankton) – Asterionella, Ceratium, and Thalassiosira; one class of animals (Zooplankton) – Copepods; and a class corresponding to not-living matter or unidentified organisms – Floc.

The activation value of a neuron is a signal that depends on the weights of the connections between that neuron and others (depending on the network architecture) and on the input vector. The classification is performed using the activation values of the output neurons. During training, the weights are optimized so as to correctly classify the training vectors. In an ideal classifier, the output neuron corresponding to the correct class should have an activation value of 1, while the other neurons should have an activation value of 0. In this sense the activation value is a measure of ‘how well’ a given pattern is classified. Usually, to classify a pattern, the activation values of all the output neurons are compared and the object is assigned to the class corresponding to the output neuron with the highest activation value (the winning neuron). In our case, however, due to the extreme variability of the target objects, the classification task is not so simple. Some particular shapes are typical of Floc matter and the network can classify them directly. However, Floc may also have a very strange shape, due to the random aggregation of dead organisms and/or inorganic matter, and such objects may not be recognized by the net as Floc. In order to classify these objects as Floc, a check on the activation values of the output neurons has been added to the classification procedure:

Let {P1,...,P5} be the network outputs.

Let Pi* = Max [Pi], i = 1,...,5 (i* is the winning neuron), and Pj* = Max [Pi], i ≠ i*;

If Pi* > T1 and Pi* − Pj* > T2, the object is recognized by the system as belonging to class i*; otherwise it is classified as Floc. This means that whenever the winning neuron has a low activation value, or two neurons possess similarly high activations, the system classifies the object as Floc.

After several tests of classification performance, the best threshold values were determined as T1 = 0.85 and T2 = 0.15. Results with these thresholds on a test set of 100 hand-binarized images are presented in Table 8.2. At present, classification times are between about one second and two minutes, depending on the size and complexity of the presented object.

Table 8.2 Test results for T1 = 0.85, T2 = 0.15
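The decision rule above is simple enough to state directly in code. This sketch assumes the five outputs are indexed 0–4 and that Floc is the last class; both index choices are assumptions for illustration, not taken from the chapter.

```python
def classify(outputs, t1=0.85, t2=0.15, floc=4):
    """Winning-neuron rule with the Floc fallback: accept the winner
    only if its activation exceeds t1 AND it leads the runner-up by
    more than t2; otherwise classify the object as Floc."""
    ranked = sorted(range(len(outputs)),
                    key=outputs.__getitem__, reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    if outputs[winner] > t1 and outputs[winner] - outputs[runner_up] > t2:
        return winner
    return floc
```

A confident, unambiguous winner is accepted; a weak winner or two near-equal activations both fall through to Floc, matching the behaviour described in the text.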


8.6 Conclusions

This Chapter has covered some of the issues encountered in the implementation of an automated data extraction system for in situ holograms of marine particles. A scanning machine has been designed and built that can conveniently replay both in-line and off-axis holograms, producing high-quality digitized images of the reconstructed particle field. A set of image processing and enhancement routines has been written that can locate the plane of best focus of an object in a series of images and generate a suitable binarized representation for input to a neural net trained to classify objects by shape – here differentiating between several forms of phyto- and zooplankton. The combination of the software and replay machine will make it possible to determine the identity and location of every organism within the recorded sample volume without operator intervention.

So far the image capture, object tracking, and organism classification have been tested separately. However, HoloCam's in-line holograms cover a volume 100 mm in diameter and about 500 mm deep; sampling this at ×10 magnification at 0.1 mm intervals, with each frame producing a 500 KB greyscale image, yields an estimate of over 20 terabytes of potential raw data from each holographic plate. It is this sheer volume of information in each hologram that makes analysis so demanding: it is difficult to store all this data on a single computer, let alone transfer it to another for remote processing. Obviously, most of this data represents empty space, and work is currently underway to integrate the scanning and particle-extraction stages so that only the relatively small number of binarized images requires storage and later analysis. Further options, such as a low-resolution pre-scan to identify regions of interest for detailed study, are also under consideration.
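The 20-terabyte figure can be checked with back-of-envelope arithmetic, assuming the circular cross-section is tiled with non-overlapping 1 mm × 0.8 mm frames (the ×10 in-line field of view quoted earlier) at every 0.1 mm depth step:

```python
import math

def raw_data_tb(diameter_mm=100.0, depth_mm=500.0,
                fov_mm2=1.0 * 0.8, dz_mm=0.1, frame_kb=500.0):
    """Estimate the raw data per plate: frames per depth plane times
    number of planes times frame size, converted from KB to TB."""
    frames_per_plane = math.pi * (diameter_mm / 2.0) ** 2 / fov_mm2
    planes = depth_mm / dz_mm
    return frames_per_plane * planes * frame_kb / 1e9   # KB -> TB
```

This gives roughly 25 TB per plate, consistent with the ‘over 20 terabytes’ estimate in the text.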


Acknowledgement

This work was supported by the EC MAST-III initiative (MAS3-CT97-0079).


References

(1) Watson, J., Alexander, S., Anderson, S., Craig, G., Hendry, D. C., Hobson, P. R., Lampitt, R. S., Lucas-Leclin, B., Nareid, H., Nebrensky, J. J., Player, M. A., Saw, K., and Tipping, K. ‘Development, Construction and Test of a Subsea Holographic Camera (‘HoloCam’) for Recording Marine Organisms’ Proceedings of Oceanology International 2000, 7th–10th March 2000, Brighton, UK, pp. 183–192 (2000).

(2) Hobson, P. R. and Watson, J. ‘Accurate three-dimensional metrology of underwater objects using replayed real images from in-line and off-axis holograms’ Measurement Science and Technology 10 pp.1153–1161 (1999).

(3) Meng, H. and Hussain, F. ‘In-line Recording and Off-axis Viewing Technique for Holographic Particle Velocimetry’ Applied Optics 34 (11) pp.1827–1840 (1995).

(4) Green, S. I. and Zhao, Z. ‘Reconstructed Double-pulsed Holograms: A System for Efficient Automated Analysis’ Applied Optics 33 (5) pp.761–767 (1994).

(5) Bexon, R., Gibbs, J., and Bishop, G. D. ‘Automatic Assessment of Aerosol Holograms’ Journal of Aerosol Science 7 pp.397–407 (1976).

(6) Katz, J., Donaghay, P. L., Zhang, J., King, S., and Russell, K. ‘Submersible Holocamera for Detection of Particle Characteristics and Motions in the Ocean’ Deep-Sea Research I 46 pp. 1455–1481 (1999).

(7) Borrmann, S. and Jaenicke, R. ‘Application of Microholography for Ground-Based In-Situ Measurements in Stratus Cloud Layers: A Case Study’ Journal of Atmospheric and Oceanic Technology 10 pp.277–293 (1993).

(8) Kitagaki, T. ‘One Meter Holographic Bubble Chamber for TEVATRON Neutrino Experiments’ Photonics Applied to Nuclear Physics (Nucleophot) 2 pp.99–117. CERN Report 85–10 (1985).

(9) Deason, V. A. ‘Some Applications of Holography at the Idaho National Engineering Laboratory’ Industrial and Commercial Applications of Holography – Proceedings of the SPIE 353 pp.131–137 (1982).

(10) Naon, R., Bjelkhagen, H., Burnstein, R., and Voyvodic, L. ‘A System for Viewing Holograms’ Nuclear Instruments and Methods in Physics Research A283 pp.24–36 (1989).

(11) Nebrensky, J. J., Craig, G., Hobson, P. R., Lampitt, R. S., Nareid, H., Pescetto, A., Trucco, A., and Watson, J. ‘A Data Extraction System for Underwater Particle Holography’ in Optical Diagnostics for Industrial Applications, N. A. Halliwell, editor, Proceedings of SPIE vol. 4076 pp. 120–129 (2000).

(12) Barnhart, D. H., Adrian, R. J., and Papen, G. C. ‘Phase-Conjugate Holographic System for High-Resolution Particle-Image Velocimetry’ Applied Optics 33 (30) pp.7159–7170 (1994).

(13) Brown, P. R. A. ‘Use of Holography for Airborne Cloud Physics Measurements’ Journal of Atmospheric and Oceanic Technology 6 pp.293–306 (1989).

(14) Vössing, H-J., Borrmann, S., and Jaenicke, R. ‘In-Line Holography of Cloud Volumes Applied to the Measurement of Raindrops and Snowflakes’ Atmospheric Research 49 pp.199–212 (1998).

(15) Wood, J. ‘Invariant Pattern Recognition: A Review’ Pattern Recognition 29 pp.1–17 (1996).

(16) Maragos, P. ‘Pattern Spectrum and Multiscale Shape Representation’ IEEE Transactions on Pattern Analysis and Machine Intelligence 11 pp.701–716 (1989).

(17) Foresti, G., Regazzoni, C. S., and Venetsanopoulos, A. N. ‘Statistical Pattern Spectrum for Binary Pattern Recognition’ in Computational Imaging and Vision – Mathematical Morphology and its Applications to Image Processing, J. Serra and P. Soille (Eds), Kluwer Academic Publishers, pp.185–192 (1994).

(18) Xiaoqi, Z. and Baozong, Y. ‘Shape Description and Recognition using the High Order Morphological Pattern Spectrum’ Pattern Recognition 28 (9) pp.1333–1340 (1995).

J J Nebrensky and P R Hobson

Department of Electronic and Computer Engineering, Brunel University, Uxbridge, UK

J Watson and G Craig

Department of Engineering, University of Aberdeen, UK

G L Foresti, S Gentili, and G G Pieroni

Department of Mathematics and Computer Science (DIMI), University of Udine, Italy

H Nareid

Formerly of Department of Engineering, University of Aberdeen, UK. Now with Axeon Limited, Aberdeen, UK