Activity 11: Color image segmentation

In image segmentation, we want to separate a region of interest (ROI) from the rest of the image, usually to extract useful information or to identify objects. The segmentation is based on features unique to the ROI.

In this activity, we want to segment objects from the background based on their color information. However, real 3D objects in images, even when monochromatic, exhibit shading variations. Hence, it is better to use the normalized chromaticity coordinates (NCC) instead of the RGB color space, since NCC separates brightness from pure color information.

To do this, we consider each pixel in the image and let the total intensity, I, for that pixel be I = R + G + B. Then for that pixel, the normalized chromaticity coordinates are computed as:
r = R/I,        g = G/I,        b = B/I
The sum of all three is equal to unity, so it is enough to express chromaticity using only the two coordinates r and g, since b is fully determined by them (i.e. b = 1 - r - g). The r-g color space is shown in Figure 1.
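As a rough illustration, a Python/NumPy sketch of this conversion might look like the following (the original work for this activity was not necessarily done in Python; the function name and the handling of zero-intensity pixels are my own choices):

```python
import numpy as np

def rgb_to_ncc(img):
    """Convert an RGB image (H x W x 3) to normalized chromaticity
    coordinates r and g; b = 1 - r - g is redundant and not returned."""
    img = img.astype(np.float64)
    I = img.sum(axis=2)          # per-pixel intensity I = R + G + B
    I[I == 0] = 1.0              # avoid division by zero for pure black pixels
    r = img[..., 0] / I
    g = img[..., 1] / I
    return r, g
```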


Figure 1. Normalized chromaticity space. The x-axis is r and the y-axis is g.

I. Parametric probability distribution estimation

In the first technique, we segment a color by computing the probability that a given pixel belongs to the color distribution of interest. Pixel membership in the region of interest is determined from the joint probability p(r)p(g), where p(r) and p(g) are independent probability distributions along the r and g axes, respectively. We assume a Gaussian probability distribution function (PDF) for each, so we can write them as:
p(r) = (1/(σ_r √(2π))) exp(−(r − μ_r)² / (2σ_r²))

p(g) = (1/(σ_g √(2π))) exp(−(g − μ_g)² / (2σ_g²))

where μ_r, σ_r, μ_g, and σ_g are the means and standard deviations of r and g computed over a reference patch cropped from the ROI.
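A minimal sketch of this parametric segmentation, assuming the Gaussian parameters are simply the mean and standard deviation of r and g over the reference patch, is given below. It reuses the rgb_to_ncc helper from earlier; the normalization at the end is my own choice for display purposes, not part of the activity's prescription.

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """1D Gaussian probability density function."""
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

def parametric_segment(img, patch):
    """Tag each pixel with the joint probability p(r) * p(g), with Gaussian
    parameters estimated from a reference patch cropped from the ROI."""
    r, g = rgb_to_ncc(img)
    pr, pg = rgb_to_ncc(patch)
    p_r = gaussian_pdf(r, pr.mean(), pr.std())
    p_g = gaussian_pdf(g, pg.mean(), pg.std())
    prob = p_r * p_g             # r and g treated as independent
    return prob / prob.max()     # normalize to [0, 1] for display or thresholding
```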
The yellow car in the image in Figure 2 is segmented using this technique. 
Figure 2. The yellow car is to be segmented from the entire image.

I tried two different patches and compared the results (Figure 3). The first patch was taken from the hood of the car while the second was taken from its door. Clearly, the second patch gave a better result, since we were able to segment much of the car's body instead of just its outlines as with the first patch. Hence, in selecting a reference region for the ROI, it is important to take into account all possible shading variations of the color that we want to segment.



Figure 3. ROI segmented from the image using parametric estimation and the corresponding reference patches used, where (a) was taken from the hood of the car and (b) was taken from the door.

II. Non-parametric segmentation
For non-parametric probability distribution estimation, we use the image's 2D histogram to determine the membership of a pixel in the region of interest. To do this, we use histogram back-projection, in which we give each pixel location a value equal to its histogram value in chromaticity space.
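A sketch of histogram back-projection in the same spirit is shown below, again reusing rgb_to_ncc. The 32 x 32 binning follows the histograms described in this section, while the index clipping and final normalization are assumptions of mine.

```python
import numpy as np

def backproject(img, patch, bins=32):
    """Non-parametric segmentation via histogram back-projection: build a
    2D r-g histogram of the reference patch, then give each image pixel
    the value of the histogram bin its chromaticity falls into."""
    r, g = rgb_to_ncc(img)
    pr, pg = rgb_to_ncc(patch)
    hist, _, _ = np.histogram2d(pr.ravel(), pg.ravel(),
                                bins=bins, range=[[0, 1], [0, 1]])
    # Locate the histogram bin of every image pixel (back-projection)
    r_idx = np.clip((r * bins).astype(int), 0, bins - 1)
    g_idx = np.clip((g * bins).astype(int), 0, bins - 1)
    proj = hist[r_idx, g_idx]
    return proj / proj.max()     # normalize to [0, 1]
```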

The 2D histograms (32 bins per axis) of the two reference patches in Figure 3 are shown in Figure 4. We know that these are correct since the peaks fall in approximately the same region of chromaticity space as the observed color of the ROI (i.e. yellow). The histogram of the second reference patch covers a larger area since it contains more shade variations.

Figure 4. 2D histograms of the two reference patches for the ROI: (a) the patch taken from the hood and (b) the patch taken from the door.

We then segment the car in Figure 2 using the same reference patches. The results are shown in Figure 5. Once again, the second patch gave better results than the first.



Figure 5. ROI segmented from the image using non-parametric estimation and the corresponding reference patches used, where (a) was taken from the hood of the car and (b) was taken from the door.


The ROIs segmented using the first method show smoother variations in shade, since an analytic function was used to estimate the PDF, giving continuous probability values. The non-parametric method was able to segment more of the car (observe the roof), but it also tagged other parts of the image with a yellowish hue, such as the portions of land visible through the vegetation. This suggests that parametric estimation is more selective in pixel-membership tagging than non-parametric estimation.
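For completeness, a hypothetical usage sketch tying the two methods together is given below. The file names and the threshold value are placeholders, not values from this activity.

```python
# Hypothetical usage; file names and threshold are placeholders.
import imageio.v3 as iio

img = iio.imread("car.jpg")           # full image (Figure 2)
patch = iio.imread("door_patch.png")  # reference patch cropped from the car's door

prob_param = parametric_segment(img, patch)
prob_nonparam = backproject(img, patch)

# Threshold the normalized maps to obtain binary masks of the ROI
mask_param = prob_param > 0.1
mask_nonparam = prob_nonparam > 0.1
```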

I would like to thank Ms. Maria Isabel Saludares for helpful discussions. 

Finally, I give myself a grade of 10/10 for successfully segmenting a region of interest (ROI) from the background. :)

Reference:
1. M. Soriano, "A11 - Color Image Segmentation," AP 186 Manual, 2012.

Comments

Popular posts from this blog

Activity 1 - Digital Scanning

The first activity for our AP 186 class was very interesting and quite useful. I have had problems before concerning manufacturers who give calibration curves but do not give the values. It’s really troublesome when you need them and you can’t find any way to retrieve the data. Fortunately, this digital scanning experiment resolves this dilemma. Way back when computers were not yet easily accessible, graphs were still hand-drawn. In this activity, we went to the CS Library to find old journals or thesis papers from which we can choose a hand-drawn graph. Our chosen graphs are to be scanned as an image where data are to be extracted. The graph that I chose was taken from the PhD Dissertation of Cherrie B. Pascual in 1987, titled, Voltammetry of some biologically significant organometallic compounds . The scanned image was tilted so I had to rotate it using Gimp v.2 (see Figure 1). Figure 1. Concentration dependence of DPASV stripping peaks of triphenyltin acacetate usi...

Activity 2: SciLab basics

For the second activity we had a bit of practice in using the SciLab programming language. We had to produce the following synthetic images: a.        Centered square aperture b.       Sine wave along x direction (corrugated roof) c.        Grating along x direction d.       Annulus e.       Circular aperture with graded transparency (Gaussian function) But first we had to follow a sample code given by Dr. Soriano. The code produced a 100 x 100 pixel – image of a centered circular aperture with radius of 35 pixels (Figure 1). Figure 1. Code and synthetic image for centered circular aperture After doing the centered circular aperture I am ready to do the other synthetic images. The easiest was the annulus since you just have to tweak the code for the centered circular aperture. I just replaced line 7 of the code with: A(find(r...