
4 Feature Measurement and Error Analysis

Object feature measurement is often the final goal of an image analysis task and needs to be well understood. The characteristics/properties of an object region can be quantitatively described by the values of various features, and feature measurement refers to the process of obtaining such quantitative, numerical descriptions of objects. How to determine and measure suitable features accurately and precisely is critical in image analysis.

The sections of this chapter are arranged as follows:

Section 4.1 introduces the concepts of direct and indirect measures with some examples and presents some simple methods for combining multiple measures.

Section 4.2 provides some discussion on the two closely related but different terms: accuracy and precision, and their relationship to systematic and statistical errors.

Section 4.3 focuses on the connection ambiguity caused by using both 4-direction connectivity and 8-direction connectivity. Contrasting examples for different situations are given.

Section 4.4 discusses the factors affecting the measurement error in detail, including the resolution of the optical lens, the sampling density at the time of imaging, the algorithm of object segmentation, and the feature computation formulas.

Section 4.5 introduces, as an example of measurement error analysis, the analysis of the error caused by using a discrete distance instead of the Euclidean distance in distance measurement.

4.1 Direct and Indirect Measurements

Measurements of objects can be either obtained directly by quantifying the properties (measuring the quantity) of objects or indirectly derived by combining the direct measuring results.

4.1.1 Direct Measurements

Two types of direct measurements can be distinguished:

4.1.1.1 Field Measurements

Field measurements are usually collected over a specified number of fields (e.g., images, or regions in an image), which is determined either by statistical considerations of precision or by following a standard procedure.

Examples of primitive field measurements include area of features, area of features filled, area fraction, field area, field number, number of features, number of features excluded, number of intercepts, etc.

4.1.1.2 Object-Specific Measurements

Examples of object-specific measurements include area, area filled, diameter (including maximum, minimum, and average), feature angle, number of features, hole count, inscribed circle (center coordinates x and y, and radius), intercept count, perimeter, position x and y, and tangent count.

4.1.2 Derived Measurements

Two types of derived measurements can be distinguished:

4.1.2.1 Field Measurements

Examples of field measurements include the fractal dimension of a surface, stereological parameters (such as the number of points in a volume, the area of a surface, total curvature of a curve or a surface, the density of length or surface or volume), etc.

4.1.2.2 Object-Specific Measurements

The number of derived object-specific measurements is unlimited in the sense that it is possible to define any combination of primitive measurements to form a new feature descriptor.

Table 4.1 lists some commonly used (direct and derived) object-specific descriptors (measured from a particular object).

4.1.3 Measurement Combinations

There are a number of possibilities to combine (already obtained) metrics to form a new (derived) metric.

Table 4.1: Commonly used object-specific descriptors.

Descriptor: Definition
Area (A): number of pixels
Perimeter (P): length of the boundary
Longest dimension (L): length of the major axis
Breadth (B): length of the minor axis
Average diameter (D): average length of the axes
Aspect ratio (AR): length of the major axis / length of the perpendicular axis
Area equivalent diameter (AD): (4A/π)^{1/2} (the diameter of a circular object of area A)
Form factor (FF): 4πA/P² (perimeter-sensitive, always ≤ 1)
Circularity (C): πL²/(4A) (longest-dimension-sensitive, ≥ 1)
Mean intercept length: A / projected length (x or y)
Roughness (M): P/(πD) (πD is the perimeter of the circle of diameter D)
Volume of a sphere (V): 0.75225 × A^{3/2} (the sphere obtained by rotation around the diameter)

1. The simplest combination technique is addition. Given two metrics d_1 and d_2, then

d = d_1 + d_2    (4.1)

is also a metric.

2. Metrics can be combined by scaling with a real-valued positive constant. Given a metric d_1 and a constant β ∈ ℝ⁺, then

d = β d_1    (4.2)

is also a metric.

3. If d_1 is a metric and γ ∈ ℝ⁺, then

d = d_1/(γ + d_1)    (4.3)

is also a metric.

The operations in eqs. (4.1)–(4.3) may be combined. For example, if {d_n : n = 1, ..., N} is a set of metrics, then for all β_n, γ_n ∈ ℝ⁺,

d = Σ_{n=1}^{N} d_n/(γ_n + β_n d_n)    (4.4)

is also a metric.

It should be noted that the product of two metrics is not necessarily a metric. This is because the triangle inequality may not be satisfied by the product.
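These properties are easy to probe numerically. The following sketch (a numerical check on assumed random sample points, not a proof) applies the constructions of eqs. (4.1)–(4.3) to the Euclidean distance and shows that a product, here the squared distance, can violate the triangle inequality:

```python
import itertools
import numpy as np

# Numerical check of the metric combinations in eqs. (4.1)-(4.3),
# using the Euclidean distance on random 2-D points.
def d_euclid(p, q):
    return float(np.linalg.norm(np.asarray(p) - np.asarray(q)))

def d_sum(p, q):                 # eq. (4.1), with d1 = d2 = Euclidean
    return d_euclid(p, q) + d_euclid(p, q)

def d_scaled(p, q, beta=2.5):    # eq. (4.2): positive scaling
    return beta * d_euclid(p, q)

def d_bounded(p, q, gamma=1.0):  # eq. (4.3): d/(gamma + d), a bounded metric
    d = d_euclid(p, q)
    return d / (gamma + d)

def d_product(p, q):             # product of a metric with itself: NOT a metric
    return d_euclid(p, q) ** 2

def violates_triangle(metric, pts):
    for p, q, r in itertools.permutations(pts, 3):
        if metric(p, r) > metric(p, q) + metric(q, r) + 1e-12:
            return True
    return False

rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, size=(20, 2))
for name, m in [("sum", d_sum), ("scaled", d_scaled),
                ("bounded", d_bounded), ("product", d_product)]:
    print(name, "triangle inequality violated:", violates_triangle(m, pts))
# The squared distance typically fails: for collinear points 0, 1, 2,
# d(0,2)^2 = 4 > d(0,1)^2 + d(1,2)^2 = 2.
```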

4.2 Accuracy and Precision

Any measurement taken from images (e.g., the size or the position of an object, or its mean gray-level value) is only useful if the uncertainty of the measurement can also be estimated. This uncertainty can be judged according to the concepts of accuracy and/or precision.

4.2.1 Definitions

Precision (also called efficiency) is defined in terms of repeatability, describing the ability of the measurement process to duplicate the same measurement and produce the same result. Accuracy (also called unbiasedness) is defined as the agreement between the measurement and some objective standard taken as the “truth.” In many cases, the latter requires standards that are themselves defined and measured by some national standards body.

Feature measurement can be judged according to its accuracy and/or precision. An accurate measurement (unbiased estimate) ã of a parameter a is one for which E{ã} = a. A precise measurement (consistent estimate) ã of a parameter a based on N samples is one that converges to a as N → ∞ (under the condition that the estimator is unbiased). In real applications, a consistent estimate can be systematically different from the true value. This difference is the bias, and it can be invisible since the "true value" is unknown. Take a public opinion poll as an example. If the sampling is not uniform, that is, if not every member of the population has an equal chance of being selected for the sample, there will be sampling bias. If leading questions are employed (analogous to the imperfect calibration of instruments), there will be systematic bias (Howard, 1998).

For scientific purposes, an unbiased estimate is the best one can hope to achieve if a quantity cannot be directly measured. In other words, unbiasedness is the most desirable attribute a scientific method can have. In a given experiment, high accuracy can be achieved using a correct sampling scheme and proper measurement methods. The obtained precision depends on the object of interest and, in many cases, can be controlled by putting more effort into refining the experiment. Note that a highly precise but inaccurate measurement is generally useless.

4.2.2 Relationships

It is possible to have a biased estimator that is "efficient" in that it converges to a stable value quickly and has a very small standard deviation, just as it is possible to have an inefficient unbiased estimator (converging slowly, but to a value very close to the true one). For example, suppose the position of the crosshair in the middle box of Figure 4.1 needs to be estimated, and two estimation procedures are available. The first procedure yields the eight estimates shown in the left box of Figure 4.1. These estimates are not consistent, but taken together they are unbiased; they correspond to an accurate measurement. The second procedure yields the eight estimates shown in the right box of Figure 4.1. These estimates are very consistent but not accurate; they are definitely biased.

Table 4.2 suggests the difference between precision and accuracy by graphical illustrations of different shooting effects at a target. For high precision, the hits are closely clustered together, while for low precision there is a marked scatter of hits. For high accuracy, the average of the cluster of hits tends toward the bull's eye, while for low accuracy the hits are biased away from it.

Figure 4.1: Illustrating the difference between accuracy and precision.

Table 4.2: Effects of accuracy and precision when shooting at a target.

4.2.3 Statistical Error and Systematic Error

Accuracy and precision can also be discussed in relation to two important classes of errors. In Figure 4.1, the centroid of the different measurements is indicated by the black circle. The statistical error describes the scatter of the measured values (the distribution of the individual measurements) when the same measurement is repeated over and over again. A suitable measure of the width of this distribution gives the statistical error. From the statistical point of view, the example on the right side of Figure 4.1 is better.

However, this mean value may be further from the true value than the statistical error margins suggest. Such a deviation is called a systematic error, indicated by the difference between the true value and the average of the measured values. A precise but inaccurate measurement is encountered when the statistical error is low but the systematic error is high (as in the example on the right side of Figure 4.1). On the other hand, if the statistical error is large and the systematic error is low, the individual measurements scatter widely, but their mean value is close to the true value (as in the example on the left side of Figure 4.1).

It is easy, at least in principle, to get an estimate of the statistical error by repeating the same measurement many times (Jähne, 1999). However, it is much harder to control systematic errors. They are often related to a lack of understanding of the measuring setup and procedure. Unknown or uncontrolled parameters influencing the measuring procedure may easily lead to systematic errors. Examples of systematic errors are calibration errors and drift caused by temperature-dependent parameters in an experimental setup without temperature control.
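The two error classes can be made concrete with a small simulation; all numbers below are hypothetical:

```python
import numpy as np

# Repeated measurements of a known true value with random noise
# (statistical error) plus a constant offset (systematic error,
# e.g., an uncorrected calibration bias).
rng = np.random.default_rng(1)
true_value = 10.0
bias = 0.8                      # systematic error (unknown in practice)
noise_sigma = 0.3               # spread of the individual measurements

measurements = true_value + bias + rng.normal(0.0, noise_sigma, size=1000)

statistical_error = measurements.std(ddof=1)          # estimable by repetition
systematic_error = measurements.mean() - true_value   # needs ground truth

print(f"statistical error (scatter): {statistical_error:.3f}")
print(f"systematic error (bias):     {systematic_error:.3f}")
# Repetition shrinks the uncertainty of the mean (sigma/sqrt(N))
# but does nothing to the bias.
```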

4.3 Two Types of Connectivity

Regions and boundaries are formed with groups of pixels, and these pixels have certain connections among them. When defining a region and boundary, different types of connectivity should be used; otherwise, certain problems will arise. Some examples are shown in the following.

4.3.1 Boundary Points and Internal Points

In image measurement, internal pixels and boundary pixels should be judged with different types of connectivity to avoid ambiguity. This can be explained with the help of Figure 4.2. In Figure 4.2(a), the brighter pixels form an object region. If the internal pixels are judged according to 8-direction connectivity, then the darker pixel in Figure 4.2(b) is an internal pixel and the other, brighter pixels form the 4-directional boundary (as indicated by thick lines). If the boundary pixels are judged according to 8-direction connectivity, then the three darker pixels in Figure 4.2(c) are internal pixels and the other, brighter pixels form the 8-directional boundary (as indicated by thick lines). Figure 4.2(b, c) corresponds to our intuition; in fact, this is what is expected and needed.

However, if the boundary pixels and internal pixels are judged using the same type of connectivity, then the two pixels marked with "?" in Figure 4.2(d, e) become ambiguous: each could be judged as either an internal pixel or a boundary pixel. For example, if the boundary pixels are judged by 4-direction connectivity (as in Figure 4.2(b)) and the internal pixels are judged by 4-direction connectivity, too, a problem arises. The pixel in question should be judged as an internal pixel at first sight, since all pixels in its neighborhood belong to the region (see the dashed lines in Figure 4.2(d)), but it should also be judged as a boundary pixel; otherwise, the boundary in Figure 4.2(b) would become unconnected. On the other hand, if the boundary pixels are judged by 8-direction connectivity (as in Figure 4.2(c)) and the internal pixels are judged by 8-direction connectivity, too, the ambiguity problem exists again. The pixels marked with "?" are enclosed inside the boundary as in Figure 4.2(c), so they are considered internal pixels, but their 8-connected neighborhoods contain pixels that do not belong to the region (see the dashed lines in Figure 4.2(e)), so they should also be considered boundary pixels.

Figure 4.2: The connectivity of boundary pixels and internal pixels.
Figure 4.3: The connectivity of object pixels and hole pixels.

4.3.2 Object Points and Background Points

As another example, consider Figure 4.3(a), which consists of a black object with a hole on a white background. If one uses 4-connectedness for both the background and the object, then the object consists of four disconnected pieces (contrary to intuition), yet the hole is separated from the "outside" background, as shown in Figure 4.3(b). Alternatively, if one uses 8-connectedness for both the background and the object, then the object is one connected piece, yet the hole is now connected to the outside, as shown in Figure 4.3(c). A suitable solution for this example is to use 4-connectedness for the hole (the hole is then separated from the "outside" background) while using 8-connectedness for the object (the object is one connected piece), as shown in Figure 4.3(d).

The paradox shown by the above examples is called the "connectivity paradox," and it poses complications for many geometric algorithms. The solution is thus to use 4-connectivity and 8-connectivity in a complementary (dual) manner for the boundary and internal pixels of an image.
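As a quick check of the paradox, the following sketch labels a diamond-shaped ring (an assumed layout analogous to Figure 4.3, not the book's exact figure) and its background under both connectivities, using scipy.ndimage:

```python
import numpy as np
from scipy import ndimage

# A diamond-shaped ring of object pixels enclosing a one-pixel-wide hole.
img = np.zeros((7, 7), dtype=np.uint8)
ring = [(1, 3), (2, 2), (2, 4), (3, 1), (3, 5), (4, 2), (4, 4), (5, 3)]
for r, c in ring:
    img[r, c] = 1

four = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])  # 4-connectivity
eight = np.ones((3, 3), dtype=int)                   # 8-connectivity

_, n_obj4 = ndimage.label(img, structure=four)
_, n_obj8 = ndimage.label(img, structure=eight)
_, n_bg4 = ndimage.label(1 - img, structure=four)
_, n_bg8 = ndimage.label(1 - img, structure=eight)

print(n_obj4, n_obj8)  # 8 object pieces under 4-conn vs. 1 connected ring
print(n_bg4, n_bg8)    # hole separate (2 components) vs. merged (1)
# The dual choice (8-conn object, 4-conn background) keeps the ring whole
# while the hole stays separated from the outside.
```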

4.3.3 Separating Connected Components

A 4-connected curve may define more than two connected components. In the example shown in Figure 4.4(a), three 4-connected components are defined, containing the points p, q, and r, respectively.

Such a problem is resolved by using 8-connectivity as the dual to 4-connectivity, as shown in Figure 4.4(b). The 4-connected curve C defines an 8-connected interior component containing the points p and q, and the exterior is the 8-connected component containing r. It is clear that there is no 8-connected arc that connects p or q to any exterior point (i.e., a point in the exterior 8-connected component).

Figure 4.4: A 4-digital closed curve separates three 4-connected components.
Figure 4.5: An 8-digital closed curve joins two 8-connected components.

In Figure 4.5, the second type of problem arises: the 8-connected curve C does not separate the digital plane into two 8-connected components, as shown in Figure 4.5(a). As a counterexample, there exists an 8-connected arc joining the two potential interior and exterior points p and q, respectively.

However, it is clear that an 8-connected curve will define two 4-connected components as its exterior and interior, as shown in Figure 4.5(b).

In conclusion, if the internal pixels are judged according to 8-connectivity, the boundary obtained is 4-connected, while if the internal pixels are judged according to 4-connectivity, the boundary obtained is 8-connected.

4.3.4 Open Set and Closed Set

In digital image measurements, special attention should be paid to boundary pixels. The boundary B of a region R consists of all the boundary pixels that are 4-connected or 8-connected. The other pixels of the region are called internal pixels. A boundary pixel p of a region R should satisfy two conditions: (1) p belongs to R; (2) in the neighborhood of p, there is at least one pixel that does not belong to R.

The border of a digital set is defined as follows. Given a k-connected set of points R, its complement, denoted R^c, defines a dual connectivity relationship (denoted k′-connectivity). The border of R is the set B of points in R that have at least one k′-neighbor in R^c (Marchand, 2000).

Consider the digital image shown in Figure 4.6(a). Points of the 8-connected (k = 8) foreground F are symbolized as black circles (•) and points of the 4-connected (k′ = 4) background F^c are symbolized as white circles (∘).

Depending on whether the foreground or the background is chosen as an open set, two different border sets are defined. In Figure 4.6(b), the foreground is considered a closed set. Hence, it contains its border B, whose points are surrounded by a square box. The pixels belonging to B are black pixels with at least one white pixel among their 4-neighbors. Conversely, in Figure 4.6(c), the foreground is considered an open set. Hence, the background contains the border B. The pixels belonging to B are white pixels with at least one black pixel among their 8-neighbors.

Figure 4.6: Border of a binary digital image.

Although the set of border points B of a connected component is a connected component with respect to the connectivity of the set it belongs to, it generally does not satisfy the conditions for being a digital closed curve. In Figure 4.6(b), the border B of the foreground is 8-connected, but the point marked p has three 8-neighbors in B. Similarly, in Figure 4.6(c), B is a 4-connected component, but the point marked q has three 4-neighbors in B. Therefore, in neither case is B a closed curve.
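The border definition above can be expressed compactly with morphological operations. The sketch below (an illustration of the definition on an arbitrary rectangular foreground, not Figure 4.6) marks the points of a k-connected set that have at least one k′-neighbor in the complement:

```python
import numpy as np
from scipy import ndimage

def border(mask, k=8):
    """Border of a k-connected set: its points having at least one
    neighbor in the complement under the dual connectivity
    (k = 8 -> k' = 4, k = 4 -> k' = 8)."""
    four = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]], dtype=bool)
    eight = np.ones((3, 3), dtype=bool)
    dual = four if k == 8 else eight
    # Dilating the complement with the dual neighborhood marks every pixel
    # whose dual neighborhood touches the complement.
    touches_complement = ndimage.binary_dilation(~mask, structure=dual)
    return mask & touches_complement

mask = np.zeros((6, 8), dtype=bool)
mask[1:5, 1:7] = True                   # a filled rectangle as foreground
print(border(mask, k=8).astype(int))    # border of the closed foreground
print(border(~mask, k=4).astype(int))   # border on the background side
```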

4.4 Feature Measurement Error

Images are the mapping results of the real world. However, such a mapping is a degenerate projection process, meaning that different real-world entities can produce the same or similar projections. In addition, a digital image is only an approximate representation of the original analog world/information, due to a number of factors.

4.4.1 Different Factors Influencing Measurement Accuracy

In image analysis, feature measurement consists of starting from digitized data (an image) and accurately estimating the properties of the original analog entities (in the scene) that produced those data. "The ability to derive accurate measures of image properties is profoundly affected by a number of issues" (Young, 1988). Along the path from scene to data (image acquisition, object segmentation, feature measurement), many factors influence the accuracy of the measurements. In fact, this is an estimation process, so error is inevitable. Some important factors, which make the real data and the estimated data differ, are listed below (see Figure 4.7, in which the action points of the different factors are also indicated).

1. The natural variation of the scene (such as object properties and environments).

2. The image acquisition process, including digitization (sampling and quantization), calibration, etc.

3. The various image processing and analysis procedures (such as image segmentation, feature extraction, etc.).

4. The different measurement processes and feature computation formulas (including approximation error).

5. Noise and other degradations introduced during processing.

Figure 4.7: Some factors influencing measurement accuracy.

In the following, some factors listed in 2–4 will be discussed.

4.4.2 Influence of Optical Lens Resolution

Actual image acquisition uses optical lenses, and the resolution of the optical lens has a major impact on the sampled image. For a diffraction-limited optical lens, the radius of the point spread function at its first zero point in the imaging plane is

r = 1.22 λ d_i / D    (4.5)

where λ is the wavelength of light (λ = 0.55 μm is often taken for natural light), d_i is the distance from the lens to the imaging plane, and D is the diameter of the lens. According to the Rayleigh resolution criterion, two points of the source can be distinguished if the distance between their images is at least r.

The following gives some discussion on the use of several different imaging lenses in imaging apparatus.

4.4.2.1 Normal Camera

In the common case of taking a picture, the distance from the lens to the object satisfies d_0 >> d_i ≈ f (f is the focal length of the lens). Let the f-number of the lens (the ratio of the focal length to the diameter of the lens) be n_f = f/D; the radius r corresponding to the resolution of the camera is then

r = 1.22 λ d_i / D ≈ 1.22 λ f / D = 1.22 λ n_f    (4.6)

The above approximation holds in all but very close-up (macro) shots. In close-up shots, d_i is much bigger than f, and the approximation becomes relatively poor.

4.4.2.2 Telescope

If a telescope is used to observe stars, it should be noted that a star is effectively a point source: its image would be many times smaller than the radius corresponding to the first zero point of the point spread function of even the best telescope. In this situation, the stars cannot produce images of their own but only replicate the point spread function of the telescope on the imaging plane. It is therefore the size of the point spread function of the telescope that determines the resolution.

4.4.2.3 Microscope

In an optical microscope, d_i is determined by the optical tube length, which is generally between 190 and 210 mm. In contrast to the common case of taking a picture, d_i >> d_0 ≈ f holds for all but microscope lenses with less than 10× magnification. The numerical aperture of the lens is defined as

NA ≈ D/(2f)    (4.7)

The radius r corresponding to the resolution is then

r = 1.22 λ/(2NA) = 0.61 λ/NA    (4.8)

Table 4.3 gives the theoretical resolutions of some typical microscope lenses. The theoretical unit size refers to the theoretical size of a single CCD cell. The theoretical unit number along the target diameter can be obtained by considering the field of view of the microscope. The diameter of the field of view of a modern microscope is 22 mm (0.9 in.), while that of earlier microscopes is 20 mm (0.8 in.).

Consider now the calculation of the physical resolution of a CCD image. In a typical example, a simple CCD camera with a diagonal of 13 mm (0.5 in.) produces an image of 640 × 480 pixels, in which each pixel has a size of 15.875 μm. By comparing this value with the theoretical unit size in Table 4.3, one can see that the simplest camera is sufficient if no lens is added between the camera and the eyepiece. Many metallographic microscopes allow adding a 2.5× zoom lens, which lets the camera achieve a unit size smaller than the theoretical unit size discussed above.

In common cases, using an optical microscope together with a camera having a resolution higher than 1,024 × 1,024 pixels only increases the amount of data and the analysis time; it cannot provide more information.
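The quantities in eqs. (4.6) and (4.8) and the CCD pixel size quoted above are easy to reproduce. The following back-of-the-envelope sketch uses the numbers from the text; the f-number is an assumed example value:

```python
import math

lam = 0.55e-6                  # wavelength of natural light, m

n_f = 8.0                      # assumed f-number for the camera example
r_camera = 1.22 * lam * n_f    # eq. (4.6)
print(f"camera resolution radius: {r_camera * 1e6:.2f} um")

for na in (0.10, 1.40):        # NA of 4x and 100x objectives (Section 4.4.3.2)
    r = 0.61 * lam / na        # eq. (4.8)
    print(f"NA = {na}: r = {r * 1e6:.2f} um")

# CCD pixel size: 0.5-inch (12.7 mm) diagonal sensor with 640 x 480 pixels
diag_px = math.hypot(640, 480)              # 800 pixels along the diagonal
print(f"pixel size: {12.7 / diag_px * 1000:.3f} um")   # 15.875 um
```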

Table 4.3: Resolutions and CCD unit sizes of some typical microscope lenses.

4.4.3 Influence of Sampling Density

There is a profound difference between image processing and image analysis, and as a consequence, the sampling theorem (a statement that can be shown to be true by reasoning) is not a proper reference for choosing a sampling density in image analysis. Some points are discussed below.

4.4.3.1 Applicability of Sampling Theorem

As is known, the Shannon sampling theorem states that if the highest frequency component in a signal f(x) is given by w_0 (if f(x) has a Fourier spectrum F(w), then f(x) is band-limited to frequency w_0 if F(w) = 0 for all |w| > w_0), then the sampling frequency must be chosen such that w_s > 2w_0 (note that this is a strict inequality). It has also been proved that, for any signal and its associated Fourier spectrum, the signal can be limited in the spatial domain (space-limited) or the spectrum can be limited in the frequency domain (band-limited), but not in both. Therefore, the applicability of the sampling theorem must be examined.

The sampling theorem is, strictly speaking, a concern of image processing. There, the effects of undersampling (aliasing) and improper reconstruction techniques are usually compensated by the sensitivity (or lack of it) of the human visual system. Image analysis is not in a position to ignore these effects: the ability to derive accurate measurements of image properties is profoundly affected by a number of issues, one of which is the sampling density (Young, 1988).

The situation changes in image analysis. In general, the more pixels one can "pack" into an object, the more precise the boundary detection when measuring the object. The call for increased magnification to resolve small features is, in effect, a requirement for better sampling. Due to the misalignment of square pixels with the actual edge of an object, significant inaccuracies can occur when trying to quantify the shape of an object with only a small number of pixels. One example is shown in Figure 4.8, which illustrates, for three scenarios, the effect of a minute change in the position of a circular object within the pixel array and the inherent errors in size that can result.

Figure 4.8: Different measurements resulting from different positioning.

The above trouble can be diminished by increasing the sampling density. However, the problem cannot be solved completely: as indicated in Section 2.3, it is impossible to satisfy the sampling theorem in practice, except for periodic functions. Since the applicability of the sampling theorem cannot be guaranteed in general situations, the selection of the sampling density deserves dedicated consideration in image analysis.

4.4.3.2 Selection of Sampling Density

In the structural analysis of biomedical images with a high-resolution microscope image system, the spatial resolution and the sampling density are critical. On the one hand, many of the relevant details in stained cell and tissue images are smaller than 1 μm. On the other hand, the theoretical spatial resolution limit of a light microscope is a function of the wavelength of the illumination light as well as the aperture of the optics, and is approximately 0.2–0.3 μm for incoherent illumination. The theoretical resolution d of an optical microscope is given by the expression (ASM, 2000)

d = 1.22 λ/(2NA)    (4.9)

where λ is the wavelength of light (approximately 0.55 μm) and NA is the numerical aperture of the objective (e.g., 0.10 for an objective with magnification 4×, and 1.40 for an objective with magnification 100×).

The information content of a digitized image can be described mathematically by the modulation transfer function (MTF, the amplitude of the Fourier transform of the point spread function) of the complete optical and imaging system and by the spectrum of the image itself. One experimental result with cameras is plotted in Figure 4.9, showing the MTF (in cycles per μm, vertical axis) versus the sampling density (in pixels per μm, horizontal axis) (Harms, 1984).

A reasonable choice for the sampling density necessary to detect subtle/fine cellular structures with a digital image analysis system seems to be in the range of 15–30 pixel/μm:

Figure 4.9: Resolution (cycle/μm) versus sampling density (pixel/μm).

1. Conventional sampling densities of less than 4–6 pixel/μm are sufficient in systems that locate cells and determine their size and shape as well as cellular form factors.

2. The theoretical band limit of the scanning system is approximately 4–5 cycle/μm (spatial resolution, depending on the aperture of the microscope optics and the illumination wavelength). This corresponds to a sampling density of 10–15 pixel/μm; however, the information content of the digitized image is then less than that of the microscopic image as seen by the human eye.

3. At sampling densities of more than 20–30 pixel/μm, the computational costs increase with no significant increase in the information content of the image. The cutoff values of the MTF do not increase with sampling densities above 30 pixel/μm (Figure 4.9).

Based on the above observation and discussion, it can be concluded that sampling densities, which are significantly higher than what one expects from the sampling theorem, are needed for the analysis of microscopic images (Harms, 1984).

4.4.3.3 One Example in Area Measurement

In real applications, to obtain high-accuracy measurement results or to reduce the measurement error below certain predefined values, a sampling density much higher than that determined by the sampling theorem is often used. One factor is that the required sampling density is related to the size of the object to be measured. Take the example of measuring the area of a circular object. The relative measurement error is defined by

ε = (|A_E − A_T|/A_T) × 100%    (4.10)

where A_E is the measured (estimated) area and A_T is the real area of the object. Based on a large number of statistical experiments, the relationship between the relative error of the area measurement and the sampling density along the diameter of the circular object is shown in Figure 4.10. Double-log coordinates are used in Figure 4.10, in which the curve of the relative error versus the sampling density along the diameter is almost a monotonically decreasing straight line.

Figure 4.10: The curve of the relative error of the area measurement versus the sampling density along the diameter of a circular object.

It can be seen from Figure 4.10 that the selection of the sampling density should take into account the permitted measurement error. For example, if the relative error of the area measurement for a circular object is to be smaller than 1%, then 10 or more samples should be taken along the diameter of the object. If the required relative error is smaller than 0.1%, at least 50 samples should be taken along the diameter. Such sampling densities are often much higher than those determined by the sampling theorem. It has been shown that a high sampling density is needed not only for measuring the area but also for high-accuracy measurements of other features (Young, 1995).

The above discussion shows that oversampling is often needed for high-accuracy measurement of analog properties from digitized data. Such a sampling density is in general higher than that determined by the sampling theorem. The requirement here differs from that of image reconstruction based on sampling, as in image processing.
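The experiment behind Figure 4.10 can be sketched as follows. This is a minimal Monte Carlo version (pixel-center counting with random sub-pixel placements), so the exact error values depend on the area estimator used and will only roughly match the published curve:

```python
import numpy as np

# Digitize a circle at a given sampling density (samples per diameter),
# estimate its area by counting pixel centers inside it, and average the
# relative error of eq. (4.10) over random sub-pixel placements.
rng = np.random.default_rng(42)

def mean_relative_area_error(samples_per_diameter, trials=200):
    radius = samples_per_diameter / 2.0          # radius in pixel units
    true_area = np.pi * radius ** 2
    n = int(np.ceil(samples_per_diameter)) + 4   # grid large enough for shifts
    y, x = np.mgrid[0:n, 0:n]
    errors = []
    for _ in range(trials):
        cx, cy = n / 2 + rng.uniform(0, 1, size=2)  # random sub-pixel shift
        inside = (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2
        errors.append(abs(inside.sum() - true_area) / true_area)
    return 100.0 * np.mean(errors)

for d in (5, 10, 20, 50):
    print(f"{d:3d} samples/diameter: ~{mean_relative_area_error(d):.2f}% error")
```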

4.4.4 Influence of Segmentation

Feature measurements are made for object regions in an image, so the quality of image segmentation for delineating the objects from the background certainly affects the accuracy of feature measurements (Zhang, 1995). The quality of image segmentation, in turn, is determined by the performance of the segmentation techniques applied to get segmented images.

To investigate the dependence of object feature measurements on the quality of image segmentation, a number of controlled tests are carried out. In the following, the basic testing images, segmentation procedure, and testing features are first explained (the criteria for performance assessment have been discussed in Section 2.4). Then, the experiments are performed and the result assessments are presented.

4.4.4.1 Basic Test Images

Since synthetic images have the advantage that they can easily be manipulated (acquisition conditions can be precisely controlled, and experiments can be easily repeated), synthetically generated test images are used. They are produced using the system for generating synthetic images described in Zhang (1992a).

The basic image is 256 × 256 pixels, with 256 gray levels and composed of a centered circular disc object (diameter 128), whose gray-level value is 144, on a homogeneous background, whose gray-level value is 112. Particularities of test images generated from this basic image will be described along with the experiments.
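A sketch for generating this basic image (and the noisy variants used in later experiments) is given below; the function name and parameters are illustrative:

```python
import numpy as np

def basic_image(size=256, diameter=128, fg=144, bg=112,
                noise_sigma=0.0, seed=0):
    """256 x 256 image with a centered disc (gray 144) on background 112."""
    y, x = np.mgrid[0:size, 0:size]
    disc = (x - size / 2) ** 2 + (y - size / 2) ** 2 <= (diameter / 2) ** 2
    img = np.where(disc, float(fg), float(bg))
    if noise_sigma > 0:                  # Gaussian noise for SNR experiments
        rng = np.random.default_rng(seed)
        img += rng.normal(0.0, noise_sigma, img.shape)
    return np.clip(img, 0, 255).astype(np.uint8)

img = basic_image()                      # noise-free basic image
noisy = basic_image(noise_sigma=8.0)     # sigma = 8 gives SNR = 16 (Sec. 4.4.4.4)
```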

4.4.4.2 Segmentation Procedure

A segmentation procedure that can provide gradually segmented images is useful for this task. In addition, this procedure should be relatively independent of the shape/size of the objects in the images. A large number of segmentation algorithms have been proposed in the literature (see Chapter 2). Among them, thresholding techniques are popular. Global thresholding techniques differ mainly in the way they determine the threshold values. To make the study more general, many different threshold values are applied instead of a specific thresholding technique. The goal is not to compare different thresholding techniques but to investigate the dependence of feature measurements on the threshold value, in other words, on the segmentation procedure.

The actual procedure is as follows. To obtain a group of gradually segmented images, the test images are first thresholded with a sequence of values taken from the original gray levels between the background and the object. This thresholding process produces a series of labeled images. Then, one opening operation is applied to each labeled image to reduce random noise effects. Finally, the biggest object in each image is selected and the holes inside it are filled. Such a procedure is simple but effective in practice.
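A minimal sketch of this procedure, reusing the noisy image from the previous sketch, might look as follows; the threshold range spans the gray levels between background (112) and object (144):

```python
import numpy as np
from scipy import ndimage

def segment(img, threshold):
    binary = img > threshold
    binary = ndimage.binary_opening(binary)       # reduce random noise
    labels, n = ndimage.label(binary)             # connected components
    if n == 0:
        return binary
    sizes = ndimage.sum(binary, labels, index=range(1, n + 1))
    biggest = labels == (np.argmax(sizes) + 1)    # keep the largest object
    return ndimage.binary_fill_holes(biggest)     # fill internal holes

# A series of gradually segmented images, indexed by threshold value:
segmented = {t: segment(noisy, t) for t in range(113, 144)}
```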

4.4.4.3 Testing Features

In various image analysis applications, geometric features are commonly employed (see Chapters 3 and 6). Seven geometric features are considered here. These are the area (A), perimeter (P), form factor (F), sphericity (S), eccentricity (E), normalized mean absolute curvature (N), and bending energy (B) of objects. Area and perimeter have been discussed in Section 3.5.1. Form factor, sphericity, and eccentricity will be described in Section 6.3.

The normalized mean absolute curvature is proportional to the average of the absolute values of the curvature function K(m) of an object contour, given by

N ∝ (1/M) Σ_{m=1}^{M} |K(m)|    (4.11)

where M is the number of points on the object contour, and K(m) can be expressed with the chain code C(·) of the object contour by K(m) = [C(m) − C(m − 1)]/{L[C(m)] + L[C(m − 1)]}, where L(·) is the half-length of the curve segments surrounding the contour point (Bowie, 1977).

Finally, the bending energy is proportional to the sum of the squared curvature around an object contour (Bowie, 1977):

B ∝ Σ_{m=1}^{M} K²(m)    (4.12)
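With the chain-code definitions above, both features can be computed directly. The sketch below assumes a closed contour, takes the proportionality constants in eqs. (4.11) and (4.12) as 1, and uses half-lengths of 0.5 for even codes and √2/2 for odd codes:

```python
import numpy as np

def curvature_features(chain):
    """Normalized mean absolute curvature N and bending energy B
    from a closed 8-directional chain code."""
    chain = np.asarray(chain)
    # half-length L(.) of each code segment: 0.5 (even), sqrt(2)/2 (odd)
    half_len = np.where(chain % 2 == 0, 0.5, np.sqrt(2) / 2)
    # signed code difference C(m) - C(m-1), wrapped into [-4, 4)
    diff = (chain - np.roll(chain, 1) + 4) % 8 - 4
    k = diff / (half_len + np.roll(half_len, 1))   # curvature K(m)
    n = np.mean(np.abs(k))      # eq. (4.11): normalized mean |curvature|
    b = np.sum(k ** 2)          # eq. (4.12): bending energy
    return n, b

# A closed square contour as a quick check: all curvature sits at the
# four corners, so N = 8/12 and B = 16.
square = [0, 0, 0, 6, 6, 6, 4, 4, 4, 2, 2, 2]
print(curvature_features(square))
```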

4.4.4.4 Experiments and Results

The following four quantitative experiments have been carried out to study the dependence of the RUMA (relative ultimate measurement accuracy; see Section 2.4), or of its normalized or scaled absolute counterpart (AUMA), on image segmentation. The test images will be described along with each experiment. Since the segmented images are indexed by the threshold values used to obtain them, the RUMA of each feature is plotted as a function of the threshold value used in the segmentation procedure. The RUMA represents the disparity between the true and segmented objects: the smaller the value, the better the segmentation accuracy.

Feature Difference

The first experiment compares the normalized AUMA of the seven features for the same segmented images. As an example, Figure 4.11 shows the results obtained from one image that contains the same object as the basic image; the SNR of this image is 16. The resulting values of the different features have been normalized for the purpose of comparison.

Three points can be noted from Figure 4.11. The first is that all curves have a local minimum located in the region between the gray levels of the object and the background, with higher values at the two sides. Intuitively, this implies that the measurement accuracy of these features is related to the quality of the segmented images. The second, however, is that those minima are not located at the same place. This means that there exists no unique best-segmented image with respect to all these features. The third is that the seven curves have different forms. The A curve has a deeper valley and decreases or increases around the valley quite consistently; in other words, it steadily follows the change of the threshold values. The other features are not always so sensitive to such a change. It is clear that the accurate measurement of the object area depends more strongly on the segmentation procedure than that of the other features. The measurement accuracies of different features are thus a function of segmentation.

Figure 4.11: Results for feature difference.
SNR Influence

The second experiment looks at the dependence of the scaled AUMA on segmentation when images have various SNR levels. Four test images with different SNR levels are generated by adding Gaussian noise with standard deviations 4, 8, 16, and 32, giving SNR levels of 64, 16, 4, and 1, respectively. These values cover the range of many applications and are compatible with other studies. The four images contain the same object as the basic image. In Figure 4.12, the scaled AUMA curves of three features, namely A, E, and P, are presented.

The influence of noise on the results is quite different, as shown in Figure 4.12. The A curves gradually shift to the left as the SNR level decreases, though their forms remain alike. It is thus possible, by choosing appropriate algorithm parameter values, to obtain similar measurement accuracy from images of different SNR levels. On the contrary, the E and P curves gradually move up as the SNR level decreases. In other words, the best measurement accuracy of E and P is associated with the SNR level of the images: the higher the SNR level, the better the expected measurement accuracy. In Figure 4.12(b, c), the E curves are jagged, whereas the P curves are smoother. This implies that E is more sensitive to the variation of segmented objects due to noise. Among the other features, the B, N, and F curves are also smooth like the P curves, while the S curves show some jags like the E curves.

Figure 4.12: Results of SNR influence.
Influence of Object Size

In real applications, the objects of interest in images can have different shapes and/or sizes. The size of objects can affect the dependence of RUMA on segmentation, as shown in the third experiment. Four test images with objects of different sizes are generated; their diameters are 128, 90, 50, and 28, respectively. The SNR of these images is fixed at 64 to eliminate the influence of SNR. The results for three features, namely A, B, and F, are shown in Figure 4.13.

In Figure 4.13(a, b), the measurement accuracies of A and B show opposite tendencies with respect to the change of object size. When the images to be segmented contain smaller objects, the expected measurement accuracy for A becomes worse while that for B becomes better. Among the other features, E and S exhibit a similar tendency to A but less markedly, while the N curves are more comparable to the B curves. Not all features show a clear relation with object size; for example, the four F curves in Figure 4.13(c) are mixed. The P curves show similar behavior.

Object Shape Influence

The fourth experiment examines how the dependence of RUMA on segmentation is affected by object shape. Four test images containing elliptical objects of different eccentricities (E = 1.0, 1.5, 2.0, 2.5) are generated. Though the shapes of these four objects are quite distinct, the objects have been made similar in size (the object size of the basic image) to avoid the influence of object size; this was achieved by adjusting both the long and short axes of the ellipses. In Figure 4.14, the results obtained from these four images with SNR = 64 for three features, A, N, and S, are given.

The differences among the four curves of the same feature in Figure 4.14 are less notable than those in Figure 4.13. In Figure 4.14(a), for example, the four A curves almost overlap each other. This means that the influence of the object shape on the measurement accuracy is much less important than that of the object size. The other feature curves, except the B curves, behave similarly to the S curves in Figure 4.14(c), while the B curves are more like the N curves in Figure 4.14(b).

Figure 4.13: Results for object size influence.
Figure 4.14: Results for object shape influence.

4.4.5 Influence of Computation Formulas

The formula used for feature computation is also an important component in analyzing measurement errors. In the following, distance measurements based on chain codes are discussed as examples.

Generally, there is no exact representation of simple geometric objects such as lines and circles on a discrete grid. Lines are well defined only for angles that are multiples of 45°; for all other directions, they appear as jagged, staircase-like sequences of pixels (Jähne, 1997).

In a digital image, a straight line oriented at different angles and mapped onto a pixel grid will have different lengths if measured with a fixed link length. Suppose two points in an image are given and a (straight) line represented by an 8-directional chain code has been determined between them. Let N_e be the number of even chain codes, N_o the number of odd chain codes, and N_c the number of corners (i.e., points where the chain code changes direction). The total number of chain codes is N (N = N_e + N_o), and the length of the line can be represented using the following general formula (Dorst, 1987):

L = A × N_e + B × N_o + C × N_c    (4.13)

where A, B, and C are the weights for N_e, N_o, and N_c, respectively. The length computation formula thus depends on these weights, for which many studies have been made. Some results are summarized in Table 4.4.

Five sets of weights for A, B, and C are listed in Table 4.4. Introducing them separately into eq. (4.13) produces five length computation formulas for lines, distinguished by their subscripts. For a given chain code representing a line in the image, the lengths computed using the five formulas are generally different. The expected measurement errors are also different; a smaller error is expected for the formulas with larger subscripts. In Table 4.4, E represents the average mean-square difference between the real length and the estimated length of a line and indicates the errors that may be produced by these formulas; E is also a function of N.

Table 4.4: A list of length measurement formulas.

Now let us take a closer look at these formulas. In 8-directional chain codes, even codes represent horizontal or vertical straight-line segments, while odd codes represent diagonal straight-line segments. The simplest method for measuring the length of a line is to count each code in the chain as a unit length; this gives the formula for L1. It is expected that L1 is a short-biased estimator, since in an 8-directional chain code representation the odd-numbered codes have lengths greater than unity. Rescaling these weights provides the second estimator, L2. If the distance between two adjacent pixels along the horizontal or vertical direction is taken as the unit, the distance between two diagonally adjacent pixels must be √2 units; this gives the formula for L3, which is the most popular formula for chain codes. However, L3 is also a biased estimator; compared with L1, the difference is that L3 is a long-biased estimator. In fact, a line at different angles will have different lengths according to L3, as the sum overestimates the value except at exact multiples of 45°. To compensate for this effect, the weights are rescaled and another estimator, L4, is obtained. The weights A and B are selected to reduce the expected error for longer lines, so L4 is better suited to longer lines, as the error is inversely proportional to the length of the line. Finally, if the number of corners is considered in addition to the numbers of codes, a more accurate estimator, L5, is obtained.
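The estimators are straightforward to implement from eq. (4.13). The sketch below uses only the two weight sets spelled out in the text (L1: A = B = 1, C = 0; L3: A = 1, B = √2, C = 0); the remaining weight sets belong to Table 4.4 and are not reproduced here:

```python
import math

def line_length(chain, A, B, C):
    """Length of a line from its 8-directional chain code, eq. (4.13)."""
    Ne = sum(1 for c in chain if c % 2 == 0)    # even (axial) codes
    No = len(chain) - Ne                        # odd (diagonal) codes
    Nc = sum(1 for a, b in zip(chain, chain[1:]) if a != b)  # corners
    return A * Ne + B * No + C * Nc

chain = [0, 1, 0, 0, 1, 0, 0, 1, 0]     # the segment used in Section 4.5.1
print("L1 =", line_length(chain, 1.0, 1.0, 0.0))            # 9.0 (short-biased)
print("L3 =", line_length(chain, 1.0, math.sqrt(2), 0.0))   # 10.24 (long-biased)
print("dE =", math.hypot(9, 3))         # true Euclidean length: 9.49
```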

An example comparing the five estimators (in other words, the five length computation formulas) is given in Figure 4.15. Given two points p and q, different analog lines joining them are possible; only two of them (the dotted line and the dashed line, respectively) are shown as examples. The solid line provides the chain code, and the lengths computed using the five formulas for this chain code are shown on the right of the figure. The comparison is made taking L5 as the reference; that is, the error produced by each formula (listed in parentheses) is obtained by

ε = (|L_i − L_5|/L_5) × 100%,  i = 1, 2, 3, 4, 5    (4.14)

Figure 4.15: Compute the chain-code length with different formulas.

4.4.6 Combined Influences

The influence factors discussed above (as well as other factors) can have a combined influence on measurements; in other words, their effects are inter-related (Dorst, 1987). For example, the influence of the computation formula on the length estimation error is related to the sampling density. With coarse sampling, the various computation formulas give comparable results. As the sampling density increases, the differences among the results of the various formulas also increase: the estimators with larger subscripts become increasingly better than those with smaller subscripts. That is, the measurement errors decrease with increasing sampling density, and the decline is fastest for the estimators with better behavior. This can be seen from Figure 4.16, in which the curves of computation error as a function of the sampling density are plotted for three estimators.

It can be seen from Figure 4.16 that the measurement errors for L3, L4, and L5 decrease as the sampling density increases. The L4 curve declines faster than the L3 curve, and the L5 curve declines even faster than the L4 curve.

Finally, it should be noted that the rates of decrease of these estimators become smaller and smaller as the sampling density increases; the errors finally approach their respective bounds. It has been shown that these estimators all have lower error bounds (the values E in Table 4.4) that are not changed by the sampling density. To obtain measurements with even higher accuracy, even more complicated formulas are needed (Dorst, 1987).

Figure 4.16: The curves of computation error versus the sampling density for three estimators.

4.5 Error Analysis

Measurement errors arise from different sources for various reasons. Two commonly encountered measurement errors are analyzed in the following examples.

4.5.1 Upper and Lower Bounds of an 8-Digital Straight Segment

Given two discrete points p and q, the grid-intersect quantization of [p, q] defines a particular digital straight segment between p and q. However, based on chain codes and the definition of a shift operator (below), a set of inter-related digital straight segments can be defined between p and q (Marchand, 2000).

Given a chain-code sequence {c_i}_{i=1,...,n}, the shifted chain code is given by shift({c_i}) = {c_2, c_3, ..., c_n, c_1}. Given the chain code {c_i}_{i=1,...,n} of a digital straight segment P_pq, the shift operator can be applied successively n − 1 times to {c_i}, generating n − 1 shifted chain codes corresponding to different digital arcs from p to q. It can be proven that any shifted chain code defines a new digital straight segment from p to q. The union of all shifted digital straight segments forms an area, which in turn defines the lower and upper bounds of the digital straight segment, as shown by the following example.

Given the two discrete points p and q illustrated in Figure 4.17(a), an 8-digital straight segment can readily be obtained as the grid-intersect quantization of the continuous segment [p, q] (represented by a thick line). Its chain code is {c_i}_{i=1,...,n} = {0, 1, 0, 0, 1, 0, 0, 1, 0}. Consider the eight possible shifted chain codes given in Table 4.5. These possible digital straight segments all lie within the shaded area associated with the continuous segment [p, q]. The upper and lower bounds of this area are called the upper and lower bounds of the digital straight segment P_pq. These bounds are represented as thick and dashed lines, respectively, in Figure 4.17(b).

In this example, the upper and lower bound chain codes are represented by shift (1) and shift (2), respectively.
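The shift operator itself is a one-line cyclic rotation. The sketch below enumerates the successive shifts for this example; note that, because this particular code is periodic with period 3, only three distinct code sequences (and hence arcs) arise:

```python
def shift(chain):
    return chain[1:] + chain[:1]        # cyclic shift by one code

chain = [0, 1, 0, 0, 1, 0, 0, 1, 0]     # chain code of Figure 4.17
codes = [chain]
for _ in range(len(chain) - 1):         # apply shift n - 1 times
    codes.append(shift(codes[-1]))
for i, code in enumerate(codes):
    print(f"shift({i}): {code}")
# Every shifted code has the same code counts, hence the same end point q,
# but traces a different digital arc from p (cf. Table 4.5).
```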

Figure 4.17: Upper and lower bounds of a digital straight segment.

Table 4.5: Shifted chain codes.

4.5.2 Approximation Errors

The relative approximation error between Euclidean distances and discrete distances depends on the values of the move lengths. In the following, the calculation of the relative error resulting from the moves d_{a,b} along a vertical line is discussed.

First, the definition of relative error is provided. The relative error between the values of a given discrete distance dD and the Euclidean distance dE between two points O and p is calculated as

E_D(O, p) = [(1/s) d_D(O, p) − d_E(O, p)] / d_E(O, p) = (1/s) [d_D(O, p)/d_E(O, p)] − 1    (4.15)

Parameter s > 0 is called the scale factor. It is used to maintain consistency between radii of the discrete and Euclidean discs. When using chamfer distances, a typical value is s = a.

With the aid of Figure 4.18 (considering only the first octant, that is, the region between the lines y = 0 and y = x),

d_{a,b}(O, p) = (x_p − y_p) a + y_p b    (4.16)

The error E_{a,b} is measured along the line x = K with K > 0, and the value of the relative error at a point p is

E_{a,b}(O, p) = [(K − y_p) a + y_p b] / [s √(K² + y_p²)] − 1    (4.17)

A typical graph of E_{a,b}(O, p) for y_p ∈ [0, K] is shown in Figure 4.19.

Figure 4.18: Calculation of da,b in the first octant.
Figure 4.19: The graph of Ea,b in the first octant.

Since E_{a,b} is a convex function on [0, K], its local extremum is obtained at the point p where ∂E_{a,b}/∂y = 0. For all p such that x_p = K and 0 ≤ y_p ≤ K,

∂E_{a,b}/∂y (O, p) = [1/(s √(K² + y_p²))] [ (b − a) − ((K − y_p) a + y_p b) y_p/(K² + y_p²) ]    (4.18)

For p = (K, (b − a)K/a), ∂E_{a,b}/∂y = 0 and E_{a,b}(O, p) = √(a² + (b − a)²)/s − 1.

The maximum relative error, defined as E_max = max{|E_{a,b}(O, p)| : x_p = K, 0 < y_p ≤ K}, is then reached either at this local extremum or at a bound of the interval y_p ∈ [0, K] (see Figure 4.19). Now, E_{a,b}(O, p) = a/s − 1 if p = (K, 0), and E_{a,b}(O, p) = b/(√2 s) − 1 if p = (K, K). Hence,

E_max = max{ |a/s − 1|, |√(a² + (b − a)²)/s − 1|, |b/(√2 s) − 1| }    (4.19)

Finally, although the error is calculated along a line rather than a circle, its value does not depend on the line (i.e., E_max does not depend on K). Hence, the value of E_max is valid throughout the 2-D discrete plane.

Some numerical examples for several commonly used move lengths are given in Table 4.6 (note that d_4 and d_8 can be seen as particular cases of chamfer distances in the 4- and 8-neighborhoods, respectively).
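Eq. (4.19) is easy to evaluate for any move lengths. The sketch below computes E_max for d_8 and d_4 (viewed as chamfer distances with a = b = 1 and a = 1, b = 2, respectively) and for the chamfer 3-4 distance, using the typical scale s = a:

```python
import math

def max_relative_error(a, b, s):
    """Maximum relative error of eq. (4.19) for move lengths a, b, scale s."""
    return max(abs(a / s - 1),
               abs(math.sqrt(a ** 2 + (b - a) ** 2) / s - 1),
               abs(b / (math.sqrt(2) * s) - 1))

for name, a, b in [("d8", 1, 1), ("d4", 1, 2), ("chamfer 3-4", 3, 4)]:
    e = max_relative_error(a, b, s=a)
    print(f"{name}: Emax = {100 * e:.2f}%")
# d8 -> 29.29%, d4 -> 41.42%, chamfer 3-4 -> 5.72% (with s = a)
```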

Table 4.6: Maximum relative errors for some move lengths.

4.6 Problems and Questions

4-1 Under what conditions will eqs. (4.1)–(4.3) be special cases of eq. (4.4)?

4-2 What is the applicability of a measurement with high accuracy and low precision?

4-3 In Figure Problem 4-3, the locations of the concentric circles indicate the expected measurement positions and the locations of the black squares indicate the results of real measurements.

(1) Compute separately the average values of the real measurements for the two images, and compute the 4-neighbor distance between the average values and the expected values.

(2) Compute the means and variances of the four real measurements.

(3) Discuss the measurement results obtained from the two images.

Figure Problem 4-3

4-4* A given object's area is 5. After capturing it in an image, two methods are used to estimate its area. Table Problem 4-4 gives two groups of estimated values obtained using these two methods, respectively. Compute the means and variances of these two estimations, and compare the accuracy and the precision of the two methods.

Table Problem 4-4

4-5 Find examples to show that connection ambiguity also exists in 3-D images.

4-6 How do you improve the optical resolution of an image acquisition system?

4-7 Use the applicability of the sampling theorem to distinguish image processing from image analysis.

4-8 (1) Consider Figure 4.8(a). Increase the sampling rates in both the X and Y directions by a factor of two, and compute the area of the circle (suppose that the radius of the circle is 3). Compare the area measurement errors under the two sampling rates.

(2) Move the circle in Figure 4.8(a) down by 1/4 pixel and then right by 1/4 pixel. Separately compute the areas of the circle under the original sampling rate and under the doubled sampling rate. Compare the area measurement errors under the two sampling rates.

4-9 To remove (Gaussian) noise, a smoothing filter is often employed. This process can also influence the final object measurement. Consult the literature and analyze the concrete influence.

4-10* Take a pixel in an image as the center and draw a circle of radius 2 around this pixel (suppose that the pixel size is one unit). Represent this circle by an 8-connected chain code and use the five length measurement formulas listed in Table 4.4 to compute the length of the chain code.

4-11 Consider Figure 4.8(c). Obtain the 8-connected chain code of the circle, and use the five length measurement formulas listed in Table 4.4 to compute the length of the chain code.

4-12 Using d_{5,7} as a discrete distance measurement instead of the Euclidean distance, what is the maximum relative error produced?

4.7 Further Reading

1. Direct and Indirect Measurements

Many measurement metrics have been proposed and used in various applications. Besides those discussed in Section 4.1, more metrics can be found in Russ (2002).

2. Accuracy and Precision

More discussions on the statistical error and the systematic error can be found in Haralick (1993) and Jähne (1999).

3. Two Types of Connectivity

Problems caused by the existence of two types of connectivity have also been discussed in many textbooks, such as Rosenfeld (1976) and Gonzalez (1987). These problems always arise in square lattices (Marchand, 2000).

4. Feature Measurement Error

More formulas for computing the length of a line can be found in Dorst (1987).

Many global measurements are based on local masks. Designing local masks for global feature computations is different from determining feature computation formulas (Kiryati, 1993).

Using local masks for the length measurement of a line in 3-D has been discussed in Lohmann (1998). Different mask designs can be seen in Borgefors (1984), Verwer (1991), Beckers (1992), and Kiryati (1993).

Some considerations on feature measurement errors have been discussed in Zhang (1991a).

5. Error Analysis

Some other errors in feature measurement can also be studied analytically; see, for example, Dorst (1987).