
In extreme cases, a soft morphological operation reduces to the standard morphological operation. For example, if r = 1 or C = B, the soft morphological operation becomes the standard operation with structure element B. If r > |B − C|, the soft morphological operation becomes the standard operation using only the center set C.

Figure 8.34: Soft dilation for a binary image.
Figure 8.35: Soft erosion for a binary image.

For a grayscale image, let the structuring set B = {(–1, 0), (0, 1), (0, 0), (0, –1), (1, 0)}, and its center C = {(0, 0)}, then the soft erosion with structuring system [B, C, 4] is defined as:

f ⊖ [B, C, 4](x, y) = 4th min of {f(x−1, y), f(x, y+1), f(x, y), f(x, y), f(x, y), f(x, y), f(x, y−1), f(x+1, y)}    (8.63)

The output at point (x, y) will be f(x, y), unless all values in the set {f(b): b ∈ (B − C)(x, y)} are smaller than f(x, y). In the latter case, the output is the maximum value of the set {f(b): b ∈ (B − C)(x, y)}.
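
As a minimal sketch (not part of the original text), eq. (8.63) can be computed directly from its definition: the center value of C is repeated r = 4 times, the four values over B − C are included once each, and the 4th smallest value of the multiset is taken. The NumPy helper below is an assumption for illustration only; border pixels are simply copied.

```python
import numpy as np

def soft_erosion_4(f):
    """Soft erosion with the structuring system [B, C, 4] of eq. (8.63).

    B is the 4-neighborhood plus the center, C = {(0, 0)}, r = 4: the
    center value is repeated four times, each neighbor value once, and
    the 4th smallest value of the multiset is the output. Border pixels
    are simply copied (a simplifying assumption for this sketch).
    """
    f = np.asarray(f, dtype=float)
    g = f.copy()
    for x in range(1, f.shape[0] - 1):
        for y in range(1, f.shape[1] - 1):
            values = [f[x, y]] * 4 + [f[x - 1, y], f[x + 1, y],
                                      f[x, y - 1], f[x, y + 1]]
            g[x, y] = np.sort(values)[3]   # 4th minimum (index 3)
    return g
```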

8.6 Practical Algorithms of Grayscale Morphology

With the help of the basic and combined operations of grayscale mathematical morphology described previously, a series of practical algorithms can be constructed. Here, both the operating target and the processing result are grayscale images. The general effect is often most obvious in the brighter or darker regions of the image.

8.6.1 Background Estimation and Elimination

A morphological filter can change the gray values of an image; the change depends on the geometry of the filter and can be controlled by means of the structure elements. A typical example is background estimation and elimination. Morphological filtering can detect weak (less obvious) objects well, especially in images with low-contrast transition regions. An opening operation can remove regions that are brighter than the background and smaller than the structure element. Therefore, by selecting a suitable structure element for opening, it is possible to leave only the background estimate in the image. Subtracting the estimated background from the original image then extracts the required objects (see top-hat transforms).

Background estimation = f ∘ b    (8.64)

Background elimination = f − (f ∘ b)    (8.65)

A closing operation can remove regions that are darker than the background and smaller than the structure element. Therefore, by selecting a suitable structure element for closing, it is also possible to leave only the background estimate in the image. Subtracting the original image from the estimated background then extracts the required objects (see bottom-hat transforms).

Background estimation = f • b    (8.66)

Background elimination = (f • b) − f    (8.67)
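
A minimal sketch of eqs. (8.64)–(8.67), assuming scipy.ndimage and a flat (cylindrical) disk-shaped structure element whose radius is chosen only for illustration:

```python
import numpy as np
from scipy import ndimage as ndi

def estimate_and_eliminate_background(f, radius=7):
    """Background estimation by opening/closing and object extraction by
    subtraction (top-hat and bottom-hat); the radius is an assumed value."""
    f = np.asarray(f, dtype=float)
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    disk = x * x + y * y <= radius * radius          # flat "cylindrical" footprint

    background_open = ndi.grey_opening(f, footprint=disk)    # eq. (8.64)
    bright_objects = f - background_open                      # eq. (8.65), top-hat

    background_close = ndi.grey_closing(f, footprint=disk)   # eq. (8.66)
    dark_objects = background_close - f                       # eq. (8.67), bottom-hat

    return bright_objects, dark_objects
```

The same two results can also be obtained directly with ndi.white_tophat and ndi.black_tophat using the same footprint.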

For background estimation, cylindrical structure elements give good results. For filtering, however, hemispherical structure elements are better than cylindrical ones: because of its relatively sharp edges, a cylinder removes a lot of useful grayscale information, whereas a hemisphere grazes the original image surface with more moderate edges.

Example 8.24 Comparison of hemispherical and cylindrical structure elements.

One illustration of using cylindrical and hemispherical structure elements of the same radius for the opening of a 1-D cross-section is shown in Figure 8.36. Figure 8.36(a, b) shows the schematic diagram and the operating result of using a hemispherical structure element, respectively. Figure 8.36(c, d) shows the schematic diagram and the operating result of using a cylindrical structure element, respectively.

It is seen that the result obtained with the cylindrical structure element is better than that with the hemispherical one, as the cylindrical structure element leaves minimal residual structure. When the opening is performed with the hemispherical structure element, its upper portion may fit into the spike structures, so the estimated background surface does not truly reflect the gray level of the background. Thus, when the background surface is subtracted from the original image, the gray levels near the peak structures are reduced. If the cylindrical structure element is used, this problem is largely eliminated. It is also seen from the figure that, to remove light regions from the image, a cylindrical structure element with a diameter larger than these regions should be used for opening, while to remove dark regions, a cylindrical structure element with a diameter larger than these regions should be used for closing. Of course, similar results can also be obtained with a hemispherical structure element of sufficiently large radius, but the computing time will be greatly increased.
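
The comparison above can be sketched as follows, assuming scipy.ndimage: a flat footprint plays the role of the cylindrical structure element, while a nonflat structure element whose heights follow a hemisphere of the same radius grazes the surface more gently. The radius value is an assumption for illustration only.

```python
import numpy as np
from scipy import ndimage as ndi

def open_with_cylinder_and_hemisphere(f, radius=7):
    """Grayscale opening with a cylindrical (flat) and a hemispherical
    (nonflat) structure element of the same radius."""
    f = np.asarray(f, dtype=float)
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    inside = x * x + y * y <= radius * radius            # disk-shaped support

    # Cylinder: flat structure element (constant height over the disk).
    open_cylinder = ndi.grey_opening(f, footprint=inside)

    # Hemisphere: heights sqrt(r^2 - x^2 - y^2) over the same support.
    heights = np.sqrt(np.maximum(radius ** 2 - x * x - y * y, 0.0))
    open_hemisphere = ndi.grey_opening(f, structure=heights, footprint=inside)

    return open_cylinder, open_hemisphere
```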

8.6.2 Morphological Edge Detection

Many commonly used edge detectors work by calculating partial derivatives in a local region. In general, this type of edge detector is sensitive to noise and will strengthen the noise. The morphological edge detector is also sensitive to noise, but it does not enhance or amplify the noise. Here, the concept of the morphological gradient is used. The basic morphological gradient can be defined as follows (see eq. (8.56)):

Figure 8.36: Comparison of hemispherical and cylindrical structure elements.

grad1 = (f ⊕ b) − (f ⊖ b)    (8.68)

Example 8.25 Morphological gradient grad1.

One example of using eq. (8.68) to detect edges in a binary image is shown in Figure 8.37 (an 8-neighborhood structure element is used). Figure 8.37(a) is the image f. Figure 8.37(b) shows f ⊕ b. Figure 8.37(c) shows f ⊖ b. Figure 8.37(d) is the grad1 image. It is seen that f ⊕ b expands the bright region in f by one pixel in all directions, while f ⊖ b shrinks the bright region in f by one pixel in all directions, so the edge contour provided by grad1 is two pixels wide.

Relatively sharp (thin) contours can be obtained by using the following two equivalent (strictly speaking, not equivalent in discrete domain) forms of morphological gradients:

grad2 = (f ⊕ b) − f    (8.69)

grad2 = f − (f ⊖ b)    (8.70)
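
A minimal sketch of eqs. (8.68)–(8.70), assuming scipy.ndimage and the 3 × 3 (8-neighborhood) flat structure element used in the examples:

```python
import numpy as np
from scipy import ndimage as ndi

def morphological_gradients(f):
    """grad1 and the two grad2 variants with a 3x3 flat structure element."""
    f = np.asarray(f, dtype=float)
    b = np.ones((3, 3), dtype=bool)               # 8-neighborhood footprint

    dilated = ndi.grey_dilation(f, footprint=b)   # f dilated by b
    eroded = ndi.grey_erosion(f, footprint=b)     # f eroded by b

    grad1 = dilated - eroded      # eq. (8.68): two-pixel-wide contours
    grad2_bg = dilated - f        # eq. (8.69): one-pixel contour on the background side
    grad2_obj = f - eroded        # eq. (8.70): one-pixel contour on the object side
    return grad1, grad2_bg, grad2_obj
```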

Example 8.26 Morphological gradient grad2.

One example of using eq. (8.69) to detect edges in a binary image is shown in Figure 8.38 (an 8-neighborhood structure element is used). Figure 8.38(a) is the image f. Figure 8.38(b) shows f ⊕ b. Figure 8.38(c) is the grad2 image, (f ⊕ b) − f. The edge contour thus obtained is one pixel wide. As shown in Figure 8.38, this single-pixel edge contour actually lies inside the background. If eq. (8.70) is used instead, the resulting single-pixel edge contour will belong to the object.

Figure 8.37: Morphological gradient grad1.
Figure 8.38: Morphological gradient grad2.

Note that grad1 and grad2 above do not magnify the noise in the image, but they still retain the original noise. Another form of morphological gradient is:

grad3 = min{[(f ⊕ b) − f], [f − (f ⊖ b)]}    (8.71)

This morphological gradient is not sensitive to isolated point noise. Applied to an ideal ramp edge, it detects the edge with good results. Its disadvantage is that the ideal step edge cannot be detected. In this case, however, the image may be blurred first to convert step edges into ramp edges, and grad3 can then be used. Note that the scope of the blurring mask and the scope of the structure element used for the morphological operation should be consistent. When using a 4-neighborhood cylindrical structure element, for a given image f, the corresponding blurred image h is:

h(i, j) = [f(i, j) + f(i+1, j) + f(i−1, j) + f(i, j+1) + f(i, j−1)]/5    (8.72)

The morphological gradient thus obtained is:

grad4 = min{[(h ⊕ b) − h], [h − (h ⊖ b)]}    (8.73)

As a result of the blurring, the edge strength obtained is weakened, so if the noise in the image is not too strong, it is better to use grad3 directly without blurring the image. When choosing between grad3 and grad4, both the requirement of a large signal-to-noise ratio and that of sharp edges must be taken into account.
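
Under the same assumptions, grad3 and grad4 of eqs. (8.71)–(8.73) can be sketched as the pixel-wise minimum of the two half-gradients, computed either on the original image or on the blurred image h:

```python
import numpy as np
from scipy import ndimage as ndi

# 4-neighborhood ("cylindrical") structure element used for both the
# blurring mask and the morphological operations, as required in the text.
CROSS = np.array([[0, 1, 0],
                  [1, 1, 1],
                  [0, 1, 0]], dtype=bool)

def grad3(f, footprint=CROSS):
    """Eq. (8.71): minimum of the two half-gradients."""
    f = np.asarray(f, dtype=float)
    dilated = ndi.grey_dilation(f, footprint=footprint)
    eroded = ndi.grey_erosion(f, footprint=footprint)
    return np.minimum(dilated - f, f - eroded)

def grad4(f):
    """Eqs. (8.72)-(8.73): blur with the matching 4-neighborhood mask, then grad3."""
    f = np.asarray(f, dtype=float)
    h = ndi.convolve(f, CROSS.astype(float) / 5.0, mode='nearest')   # eq. (8.72)
    return grad3(h, CROSS)                                            # eq. (8.73)
```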

8.6.3 Cluster Fast Segmentation

The combination of conditional dilation and final erosion (also known as the ultimate erosion) can be used to achieve image segmentation.

The standard dilation has two extensions: one is the conditional dilation; the other is the repeated conditional dilation. The dilation of f by b under the condition X (X can be regarded as a limiting set) is denoted by f ⊕ b; X, which is defined as

f ⊕ b; X = (f ⊕ b) ∩ X    (8.74)

Repeated conditional dilation is an extension of the above operation. It is denoted by f ⊕ {b}; X (here {b} means to iteratively dilate f by b until no further change).

f ⊕ {b}; X = [[[(f ⊕ b) ∩ X] ⊕ b] ∩ X] ⊕ b ⋯    (8.75)
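
For binary images, a hedged sketch of eqs. (8.74) and (8.75) using scipy.ndimage: binary_dilation accepts a mask argument that restricts where pixels may be switched on, so one masked call gives the conditional dilation, and iterating until stability (iterations=0) gives the repeated conditional dilation. The 4-neighborhood structure element is an assumed choice.

```python
import numpy as np
from scipy import ndimage as ndi

B = ndi.generate_binary_structure(2, 1)       # 4-neighborhood structure element

def conditional_dilation(f, X, structure=B):
    """Eq. (8.74): (f dilated by b) intersected with the limiting set X."""
    f = np.asarray(f, dtype=bool)
    X = np.asarray(X, dtype=bool)
    return ndi.binary_dilation(f, structure=structure) & X

def repeated_conditional_dilation(f, X, structure=B):
    """Eq. (8.75): dilate under the condition X until no further change.
    Assumes f is a subset of X, as in the ultimate-erosion use below."""
    f = np.asarray(f, dtype=bool)
    X = np.asarray(X, dtype=bool)
    return ndi.binary_dilation(f, structure=structure, mask=X, iterations=0)
```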

Final erosion refers to repeatedly eroding an object until it disappears and keeping the result of the step just before it disappears (this result is also known as the seed of the object). Let fk = f ⊖ kb, where b is the unit circle and kb is the circle with radius k. The final erosion set gk consists of those elements of fk that would disappear in fl for l > k. The first step of the final erosion is the repeated conditional dilation:

Uk = (fk+1 ⊕ {b}); fk    (8.76)

The second step of the final erosion is to subtract the above dilation result from the erosion of f, namely

gk = fk − Uk    (8.77)

If the image has multiple objects, the final eroded object set g can be obtained by taking the union of each respective gk. In other words, the final erosion image is:

g = g1 ∪ g2 ∪ ⋯ ∪ gm    (8.78)

where m is the number of erosion steps.

The cluster fast segmentation (CFS) of an image containing objects with convex contours includes the following three steps (a minimal code sketch is given after the three steps):

1. Iterative erosion of f: use the unit circular structure element b to iteratively erode the original image f:

fk = f ⊖ kb,  k = 1, ..., m,  where fm ≠ ∅ and fm+1 = ∅    (8.79)

Here, f1 = f ⊖ b, f2 = f ⊖ 2b, and so on, until fm = f ⊖ mb and fm+1 = ∅, where m is the largest index giving a non-empty result.

Example 8.27 Repeated conditional dilation.

One example of repeated conditional dilation is shown in Figure 8.39. Figure 8.39(a) is a binary image, in which different gray levels are used for the pixels to facilitate the following explanation. In this example, a 4-neighborhood structure element is used. The first erosion removes the dark gray region and makes the other two regions shrink and separate; the result is shown in Figure 8.39(b). Figure 8.39(c) shows the result of the conditional dilation of Figure 8.39(b), taking Figure 8.39(a) as the condition. The second erosion is applied to Figure 8.39(b); it removes the medium gray region and makes the light gray region shrink, giving the result shown in Figure 8.39(d). Figure 8.39(e) shows the result of the conditional dilation of Figure 8.39(d), taking Figure 8.39(b) as the condition. A third erosion would give the empty set. The first seed is obtained by subtracting Figure 8.39(c) from Figure 8.39(a); the second seed is obtained by subtracting Figure 8.39(e) from Figure 8.39(b); Figure 8.39(d) itself provides the third seed.

Figure 8.39: Steps of cluster fast segmentation.

2. Determining the final erosion set gk: for each fk, perform the repeated conditional dilation of fk+1 under the condition fk and subtract the result from fk:

gk = fk − (fk+1 ⊕ {b}; fk)    (8.80)

Here, gk is the ultimate erosion set, that is, the seed of each corresponding object.

 The current step can also be explained with the help of Figure 8.39. If Figure 8.39(b) is iteratively dilated with Figure 8.39(a) as the constraint, Figure 8.39(c) is obtained. Comparing Figure 8.39(c) with Figure 8.39(a), the only difference is the dark gray region; subtracting Figure 8.39(c) from Figure 8.39(a) yields this region, which is the final erosion of Figure 8.39(a) and the first seed. If Figure 8.39(d) is repeatedly dilated by b with Figure 8.39(b) as the constraint, the region associated with the light gray seed is restored; subtracting this result from Figure 8.39(b) gives the seed of the medium gray region. Similarly, the seed of the light gray region can also be obtained; it is Figure 8.39(d) itself.

3. Determining the object contour: starting from each seed, the full size of each corresponding original region can be restored with the following equation:

U = gk ⊕ (k − 1)b,  for k = 1 to m    (8.81)
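
A minimal end-to-end sketch of the three CFS steps for a binary image, assuming scipy.ndimage, with the unit "circle" b taken as the 4-neighborhood structure element and the erosion by kb approximated by k successive erosions by b. Indexing here starts at k = 0 for the original image, so each seed is restored by k dilations; the helper name is hypothetical.

```python
import numpy as np
from scipy import ndimage as ndi

def cluster_fast_segmentation(f):
    """Sketch of CFS: iterative erosion (eq. (8.79)), seed extraction
    (eq. (8.80)), and restoration of each region from its seed (eq. (8.81))."""
    f = np.asarray(f, dtype=bool)
    b = ndi.generate_binary_structure(2, 1)       # unit "circle" b

    # Step 1: iterative erosion; levels[k] plays the role of f_k.
    levels = [f]
    while levels[-1].any():
        levels.append(ndi.binary_erosion(levels[-1], structure=b))
    levels = levels[:-1]                           # drop the empty set

    # Step 2: seeds g_k = f_k minus the reconstruction of f_{k+1} inside f_k.
    seeds = []
    for k, level in enumerate(levels):
        if k + 1 < len(levels):
            rec = ndi.binary_dilation(levels[k + 1], structure=b,
                                      mask=level, iterations=0)
        else:
            rec = np.zeros_like(f)                 # f_{m+1} is empty
        seeds.append(level & ~rec)

    # Step 3: restore each region by dilating its seed k times.
    regions = [ndi.binary_dilation(g, structure=b, iterations=k) if k > 0 else g
               for k, g in enumerate(seeds)]
    return seeds, regions
```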

8.7 Problems and Questions

8-1 (1) Prove eqs. (8.1), (8.4), and (8.6) are equivalent.

(2) Prove eqs. (8.3), (8.5), and (8.7) are equivalent.

8-2 (1) Draw a schematic representation of a circle of radius r dilated by a circular structure element of radius r/4;

(2) Draw a schematic representation of an r × r square dilated by the above structure element;

(3) Draw a schematic representation of an isosceles triangle with one side of length r dilated by the above structure element;

(4) Change the dilation operation in (1), (2), and (3) to erosion, and draw the corresponding schematic representations, respectively.

8-3 Prove that eqs. (8.16) and (8.17) hold.

8-4* For the original image shown in Figure Problem 8-4(a), produce the results of the hit-or-miss transform with the structure elements (the three center pixels correspond to hit, the surrounding pixels to miss) given in Figure Problem 8-4(b) and Figure Problem 8-4(c), respectively.

Figure Problem 8-4

8-5 Given the image and the structure element shown in Figure Problem 8-5, provide the steps for obtaining the thinning result.

Figure Problem 8-5

8-6 Provide the results of each step when applying thickening, instead of thinning, to Figure 8.13(b).

8-7 Given the structure element shown in Figure Problem 8-7(a) and the image shown in Figure Problem 8-7(b):

(1) Calculate the result of grayscale dilation;

(2) Calculate the result of grayscale erosion.

Figure Problem 8-7

8-8* The image f(x, y) and the structure element b(x, y) in eq. (8.44) are both rectangular; Df is {[Fx1, Fx2], [Fy1, Fy2]} and Db is {[Bx1, Bx2], [By1, By2]}:

(1) Supposing (x, y) ∈ Db, derive the intervals satisfied by the translation variables s and t in eq. (8.44). These intervals on the s and t axes define the rectangular region of (f ⊕ b)(s, t) on the ST plane;

(2) For the erosion operation, determine the corresponding intervals according to eq. (8.46).

8-9 Prove eqs. (8.48) and (8.49).

8-10 Prove eqs. (8.52) and (8.53).

8-11 (1) A binary morphological gradient operator for edge enhancement is G = (A ⊕ B) − (A ⊖ B). Using a 1-D edge model, show that the above operator gives a wide edge in the binary image.

(2) If the grayscale dilation (⊕) is achieved by calculating the local maximum of the luminance function in a 3 × 3 window and the grayscale erosion (⊖) is achieved by calculating the local minimum of the luminance function in a 3 × 3 window, provide the result of using the operator G. Prove that the effect is similar to that of the Sobel operator when the orientation is neglected (amplitude only):

g = (gx² + gy²)^(1/2)

8-12 The grayscale image f(x, y) is disturbed by non-overlapping noise, which can be modeled by small cylinders with radius Rmin ≤ r ≤ Rmax and height Hmin ≤ h ≤ Hmax.

(1) Design a morphological filter to eliminate this noise;

(2) Supposing now that the noise is overlapping, repeat (1).

8.8 Further Reading

1. Basic Operations of Binary Morphology

Introductions to the basic operations can also be found in Serra (1982) and Russ (2016).

More discussion of the case where the origin does not belong to the structure element can be found in Zhang (2005b).

2. Combined Operations of Binary Morphology

More discussion of the hit-or-miss transform can be found in Mahdavieh (1992).

Details about 3-D topology refinement can be found in Nikolaidis (2001).

3. Practical Algorithms of Binary Morphology

An example application of detecting news headlines based on morphological operations can be found in Jiang (2003).

4. Basic Operations of Grayscale Morphology

In the mathematical morphology of grayscale images, there are many kinds of structure elements. For example, stack filters are based on morphological operators with flat structure elements (Zhang, 2005b).

5. Combined Operations of Grayscale Morphology

The morphological filter is a nonlinear signal filter that locally modifies the geometrical characteristics of the signal, while the soft morphological filter is a weighted morphological filter. For example, see Mahdavieh (1992).

6. Practical Algorithms of Grayscale Morphology

The practical grayscale morphological algorithms discussed here use grayscale morphological operations to distinguish different grayscale pattern areas, so they can be used for clustering segmentation, watershed segmentation, and texture segmentation (Gonzalez, 2008).