
Activity 7: Morphological Operations

When talking about morphology, what immediately comes to mind are the forms, structures, or shapes of objects. Hence, performing morphological operations implies altering the shape or form of an object.
     In this activity, we perform morphological operations on binary images. In particular, we make use of erosion and dilation.

Erosion and dilation were performed on the following:
1. A 5×5 square
2. A triangle, base = 4 boxes, height = 3 boxes
3. A hollow 10×10 square, 2 boxes thick
4. A plus sign, one box thick, 5 boxes along each line

Using each of the structuring elements below:
1. 2×2 ones
2. 2×1 ones
3. 1×2 ones
4. A cross, 3 pixels long, one pixel thick
5. A diagonal line, two boxes long, i.e. [[0 1],[1 0]]

     When performing these operations, it is important to note the “anchor” or “origin” of the structuring element in order to give an accurate prediction of the result. For the 2×2 ones, 2×1 ones, and 1×2 ones, the origin is set at [1,1]. For the diagonal line, it is set at [2,1], while for the cross it is set at the center, [2,2].
     The results of the erosion and dilation processes were predicted and drawn on graphing paper. These were then verified using the erode() and dilate() commands of the SIP toolbox in Scilab 4.1.2.
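     For anyone who wants to reproduce this check, below is a minimal Scilab sketch of the setup. It is only an illustration of the idea, not the exact script used here: the canvas sizes, the margins of zeros, and the plain two-argument calls erode(img, se) and dilate(img, se) are assumptions, and the optional origin/center handling described above is not shown.

// Minimal sketch (not the exact script used in this activity): binary test
// images and structuring elements as 0/1 matrices for the SIP toolbox's
// erode()/dilate(). Canvas sizes and the two-argument call are assumptions.
img1 = zeros(15, 15);              // blank canvas with a margin of zeros
img1(6:10, 6:10) = 1;              // 5x5 square

img2 = zeros(20, 20);              // hollow 10x10 square, 2 boxes thick
img2(6:15, 6:15) = 1;
img2(8:13, 8:13) = 0;

se1 = ones(2, 2);                  // 2x2 ones
se2 = ones(2, 1);                  // 2x1 ones
se3 = ones(1, 2);                  // 1x2 ones
se4 = [0 1 0; 1 1 1; 0 1 0];       // cross, 3 pixels long, 1 pixel thick
se5 = [0 1; 1 0];                  // diagonal line, 2 boxes long

d1 = dilate(img1, se1);            // square should expand (5x5 -> 6x6)
e1 = erode(img1, se1);             // square should shrink (5x5 -> 4x4)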

A. Dilation
If you have an image A and a structuring element B, the dilation of A by B results in the expansion of A by the shape of B, as shown in Figure A1.
Figure A1. Dilation of A by B

     This may be described by the following mathematical expression:

A ⊕ B = { z | (B̂)_z ∩ A ≠ ∅ }

i.e., the dilation of A by B includes all z's that are translations of the reflected B which, when intersected with A, do not give an empty set.
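     As a quick toy check of this definition (an illustrative example of my own, not one of the shapes above): a 1×3 horizontal strip dilated by the 1×2 ones element grows by one pixel, since every translate of the reflected B that overlaps the strip contributes a point.

A = zeros(5, 7);  A(3, 3:5) = 1;   // a 1x3 strip in row 3
B = ones(1, 2);                    // 1x2 ones element
D = dilate(A, B);                  // the strip should now be 4 pixels long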
     The figures below show the hand-drawn predictions and the results using Scilab. The shaded regions indicate the areas that were added to the original image. Also, the broken lines represent the boundaries of the original image and the solid lines represent the sides of the new image. 
     The first column shows the original binary image. The 2nd to 6th columns are the dilated images using the structuring elements 1 to 5 (above), respectively.
As shown, all of the predictions are in agreement with the results from Scilab.

1. 5x5 Square   

2. Triangle

3. Plus sign

4. Hollow 10x10 square

B. Erosion
Unlike dilation, erosion causes the shape of the image to be reduced. The erosion of A by B results in the reduction of A by the shape of B, as shown in Figure B1.
Figure B1. Erosion of A by B
     The following mathematical expression describes the process of erosion:

A ⊖ B = { z | (B)_z ⊆ A }

i.e., the erosion of A by B is the set of all z's such that B, translated by z, is contained in A.
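     Again as a quick toy check (my own illustrative example, not one of the shapes above): eroding a 1×3 strip with the 1×2 ones element keeps only the positions where the whole element fits inside the strip, so only two pixels survive.

A = zeros(5, 7);  A(3, 3:5) = 1;   // a 1x3 strip in row 3
B = ones(1, 2);                    // 1x2 ones element
E = erode(A, B);                   // only 2 pixels should survive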
     The figures below show the hand-drawn predictions and the results using Scilab. This time, the shaded regions indicate the areas that were removed from the original image. The broken lines still represent the boundaries of the original image and the solid lines represent the sides of the new image.
     The first column shows the original binary image. The 2nd to 6th columns are the eroded images using the structuring elements 1 to 5 (above), respectively. As shown, all of the predictions are in agreement with the results from Scilab.

1. 5x5 Square

2. Triangle 

3. Plus sign

4. Hollow 10x10 square

Finally, I give myself a grade of 10/10 for being able to do all the required tasks and for understanding the concepts of morphological operations, particularly erosion and dilation.
Thank you to Ms. Mabel Saludares for the graphing paper.

Reference:
1. M. Soriano, "Activity 7: Morphological Operations," App Phy 186 Activity Sheet, 2012.
