
Activity 4 - Area Estimation for Images with Defined Edges

In this activity, we want to find the area of a certain region of interest in an image in two ways: (1) by counting the pixels enclosed by the region and (2) by using Green's theorem, whose discrete form for boundary points (x_i, y_i) taken in order around the contour is

A = (1/2) * Σ [x_i*y_(i+1) - x_(i+1)*y_i]
1. Pixel counting 
In pixel counting, we need our image to be binarized such that the region of interest is white and the background is black. To find the area, we then simply count the number of white (or 1-valued) pixels.
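
As a minimal sketch (assuming the image has already been loaded into the array I, as in the code snippet below, and using a hypothetical threshold of 127 for the binarization), pixel counting amounts to:

//binarize: pixels of the region of interest become 1, background 0
BW = bool2s(I > 127);
//the area estimate in square pixels is just the number of white pixels
area_px = sum(BW);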

2. Green's Theorem
To be able to use Green's theorem, we first need to obtain the points that make up the contour of the region. This is fairly easy to do if the image has well-defined edges: apply the edge() command in SciLab and find the indices of the array elements which are white. However, Green's theorem also requires the contour points to be traversed consistently, either clockwise or counterclockwise, and this is where the challenge lies. One technique for sorting the points is to take the angle between the x-axis and the line connecting each point to the center of the image, which acts as the origin, and then order the points by this angle.
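
As an aside, Scilab's two-argument atan(y, x) returns the full-quadrant angle directly, so this sorting idea can be sketched compactly as follows. This is only a sketch, assuming E is the binary edge image returned by edge(); the actual code used in this activity is given in the next section.

//coordinates of the white edge pixels
[i, j] = find(E);
pts = [i; j]';
//take the center of the image as the origin
s = size(E);
c_x = s(1)/2;
c_y = s(2)/2;
//full-quadrant angle of each point about the center
//(the row offset is negated so that the vertical axis increases upward)
theta = atan(pts(:,2) - c_y, -(pts(:,1) - c_x));
//order the contour points by increasing angle
[sorted_theta, k] = gsort(theta, 'g', 'i');
pts_sorted = pts(k, :);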

3. Code Snippet
The code snippet below shows the implementation of the technique in SciLab.
//Load image as array
I = imread("C:\Users\Tin\Desktop\AP_186\Act4-Area Estimation\square_250.bmp");
//Actual areas
area1 = 250*250
area2 = %pi*125^2

//Estimate area by pixelcount
pixels = size(find(I > 0));
pixelcount = pixels(2);
err_p = (pixelcount - area1)/area1;

//For Green's theorem
//Isolate edge
E = edge(I, 'canny', 0);
E = bool2s(E);
//find points on edge
[i,j]= find(E);
ind = [i;j]';

//plot(ind(:,1),ind(:,2),"bo")
s = size(I); // size of image
n = size(ind); // size of index list

//set origin at center of image 
c_x = s(1)/2;
c_y = s(2)/2;
//Calculate angles
t = n(1);//for limit of for loop
angles = zeros(n(1),1);
for i =1:t
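    //negate the row offset so that x increases upward (image row indices increase downward)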
    x = -(ind(i,1)-c_x);
    y = ind(i,2)-c_y;
    r = (x^2 + y^2)^0.5;
    if x==0 & y>=0 then
        angles(i) = 90;
    elseif x==0 & y<0 then
        angles(i) = 270;
    elseif x<0 & y<=0 then
        angles(i) = atan(y/x)*180/%pi + 180;
    elseif x>0 & y<=0 then
        angles(i) = 360 + atan(y/x)*180/%pi;
    elseif x>0 & y>0 then
        angles(i) = atan(y/x)*180/%pi;
    elseif x<0 & y>0 then
        angles(i) = atan(y/x)*180/%pi + 180;
    end
end 
//save angles (x,y) pairs in another array then sort 
B = zeros(n(1),3);
B(:,2:3) = ind;
B(:,1) = angles;
C = lex_sort(B);

//Estimate area using Green's theorem
sum_ = 0
for j = 1:t-1
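    //cross-product (shoelace) term between consecutive contour points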
    sum_ = sum_ + (C(j,3)*C(j+1,2)- C(j,2)*C(j+1,3));
end
A = 0.5*sum_;
err_g = (A-area1)/area1;

4. Results for black and white regular geometric shapes
I used both techniques on different black and white images with known areas. Note that we take the actual area of each region of interest to be its ideal geometric area; that is, we treat the edges as perfectly smooth rather than discretized into pixels. This way, the errors that arise when we delineate our regions of interest in a real-world scene are also included. The results are as follows (in square pixels):

250 pixel by 250 pixel square
Actual Area: 62500
Green's theorem: 62559.5 (0.095% error)
Pixel counting: 62500 (0% error)

100 pixel by 100 pixel square
Actual Area: 10000
Green's theorem: 10079 (0.79% error)
Pixel counting: 10000 (0% error)

Circle with radius of 125 pixels
Actual Area: 49087.4
Green's theorem: 48995.5 (0.19% error)
Pixel counting: 48936 (0.31% error)

Circle with radius of 50 pixels
Actual Area: 7854
Green's theorem: 7917 (0.8% error)
Pixel counting: 7820 (0.4% error)

Triangle with base = 401 pixels and height = 264 pixels
Actual Area: 52932
Green's theorem: 53049 (0.22% error)
Pixel counting: 53167 (0.44% error)

From the estimates obtained using Green's theorem and pixel counting, both techniques may be considered accurate, yielding percent errors of less than 1%. Let us now apply them to a real-world object or region.


5. Real-world application
The following figure shows the City Commercial Center (C3) in our city (Pagadian City) in map view and in its binarized form made using Gimp v.2.8.



Using the code above, the following edge was detected.



The contour obtained using the code above is shown below:

The theoretical area of the region in sq. pixels was determined by manually measuring each side and taking the areas of the rectangles that make up the entire region. The areas calculated in square pixels are as follows:
Theoretical: 66830 sq. pixels
Green's theorem: 67622 sq. pixels (1.19% error)
Pixel counting: 67595 sq. pixels (1.14% error)
By counting the number of pixels that correspond to 20 m in the scale bar, I obtained a scaling factor of 3.4 pixels/m, which I can use to convert my computed areas to sq. meters.
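
For example, the Green's theorem estimate converts as 67622 sq. pixels / (3.4 pixels/m)^2 = 67622 / 11.56 ≈ 5849 sq. m.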


Green's theorem: 5849 sq. m. 
Pixel counting: 5847 sq. m.


These values for the area are good estimates with percent errors less than 2%. 


I would like to thank Ms. Maria Isabel Saludares for giving me an idea on the use of angles when my first idea did not work. Also, I thank Mr. Gino Borja and Ms. Eloisa Ventura for helpful discussions on the commands and syntax in SciLab.


Finally, I give myself a grade of 10/10 for being able to estimate areas using Green's theorem and pixel counting. I have gained much skill while doing this activity.


Reference:
M. Soriano, "Activity 4 - Area estimation for images with defined edges," AP 186 instruction sheet, 2012.












