
Activity 2: SciLab basics


For the second activity, we got some practice using the SciLab programming language. We had to produce the following synthetic images:

  a. Centered square aperture
  b. Sine wave along the x direction (corrugated roof)
  c. Grating along the x direction
  d. Annulus
  e. Circular aperture with graded transparency (Gaussian function)


But first we had to follow a sample code given by Dr. Soriano. The code produces a 100 x 100 pixel image of a centered circular aperture with a radius of 35 pixels (Figure 1).

Figure 1. Code and synthetic image for centered circular aperture
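The actual code is shown in Figure 1; very roughly, it goes along the lines of the sketch below. The pixel-coordinate setup and the imwrite call (from the SIVP image toolbox, which I also use in my own snippets further down) are my own assumptions, not necessarily Dr. Soriano's exact code.

// Sketch of a centered circular aperture, 100 x 100 pixels, radius of 35 pixels
nx = 100; ny = 100;
x = linspace(-50, 50, nx);        // pixel coordinates centered at zero
y = linspace(-50, 50, ny);
[X, Y] = ndgrid(x, y);
r = sqrt(X.^2 + Y.^2);            // distance of each pixel from the center
A = zeros(nx, ny);
A(find(r < 35)) = 1;              // pixels inside the 35-pixel radius become white
imwrite(A, "circle.png");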


After reproducing the centered circular aperture, I was ready to do the other synthetic images.

The easiest was the annulus since you just have to tweak the code for the centered circular aperture. I just replaced line 7 of the code with: A(find(r<0.7 & r>0.3)) = 1. This logical statement returns true only for pixels whose radius lies between 0.3 and 0.7, so only that region gets a value of 1, giving us an annulus (Figure 2). To change the thickness of the annulus, you just have to change the two radii.

I used 500 x 500 pixels as the image dimensions for this and all the other images.



Figure 2. Code and synthetic image for annulus
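For reference, here is a minimal sketch of the annulus, assuming the same linspace/ndgrid setup as the circular aperture but in normalized coordinates; the actual code is in Figure 2.

// Sketch of the annulus: same setup as the circular aperture,
// but keep only pixels whose radius lies between 0.3 and 0.7
nx = 500; ny = 500;
x = linspace(-1, 1, nx);
y = linspace(-1, 1, ny);
[X, Y] = ndgrid(x, y);
r = sqrt(X.^2 + Y.^2);
A = zeros(nx, ny);
A(find(r < 0.7 & r > 0.3)) = 1;   // the line that replaces line 7 of the circle code
imwrite(A, "annulus.png");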


The centered square aperture was fairly easy to do. It is similar to the circular aperture, but this time you find the pixels whose |x| and |y| are both less than a value s that sets the side length of the square. The code and the resulting image are shown in Figure 3.
Figure 3. Code and synthetic image for centered square aperture
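A minimal sketch of how this might be coded is given below; the choice of s = 0.5 and the normalized coordinates are my own assumptions, and the actual code is in Figure 3.

// Sketch of the centered square aperture: keep pixels where both |x| and |y|
// are below s, which sets the side length of the square
nx = 500; ny = 500;
x = linspace(-1, 1, nx);
y = linspace(-1, 1, ny);
[X, Y] = ndgrid(x, y);
s = 0.5;                                 // controls the side length of the square
A = zeros(nx, ny);
A(find(abs(X) < s & abs(Y) < s)) = 1;
imwrite(A, "square.png");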


Next is the corrugated roof, produced using a sine function along the x direction. The code and results are found in Figure 4.
Figure 4. Code and synthetic image for sinusoid along x direction 
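Here is a minimal sketch of the sinusoid, assuming a few periods across the image; the number of periods and the rescaling before saving are my own choices, and the actual code is in Figure 4.

// Sketch of the corrugated roof: a sinusoid that varies along one axis only
nx = 500; ny = 500;
x = linspace(-4*%pi, 4*%pi, nx);
y = linspace(-4*%pi, 4*%pi, ny);
[X, Y] = ndgrid(x, y);
A = sin(X);                          // sinusoid along the first grid axis; transpose if the other orientation is wanted
imwrite((A + 1)/2, "sine_x.png");    // rescale from [-1, 1] to [0, 1] before saving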

To create the grating, I started from the sinusoid along the x direction and binarized it: values greater than 0 were replaced with 1, and values less than or equal to 0 with 0. The code and results are shown below (Figure 5).
Figure 5. Code and synthetic image for grating along x direction 
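A minimal sketch of the thresholding step is shown below, reusing the sinusoid setup from the previous sketch; the actual code is in Figure 5.

// Sketch of the binary grating: start from the sinusoid and threshold it
nx = 500; ny = 500;
x = linspace(-4*%pi, 4*%pi, nx);
y = linspace(-4*%pi, 4*%pi, ny);
[X, Y] = ndgrid(x, y);
A = sin(X);
A(find(A > 0)) = 1;       // positive half-cycles become white stripes
A(find(A <= 0)) = 0;      // the rest become black
imwrite(A, "grating.png");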

Finally, for the circular aperture with graded transparency, I multiplied the aperture matrix by a Gaussian mask matrix produced using the Gaussian function. For different values of the parameter c, which sets the variance of the Gaussian, the resulting image also varies. The code and image are shown in Figure 6.
Figure 6. Code and synthetic image for circular aperture 
with graded transparency (Gaussian function)
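Here is a minimal sketch of that idea, assuming the same normalized-coordinate setup; the exact form of the Gaussian and the value c = 0.1 are my own assumptions, and the actual code is in Figure 6.

// Sketch of the graded-transparency aperture: a hard circular aperture
// multiplied element-wise by a Gaussian mask whose spread is set by c
nx = 500; ny = 500;
x = linspace(-1, 1, nx);
y = linspace(-1, 1, ny);
[X, Y] = ndgrid(x, y);
r = sqrt(X.^2 + Y.^2);
A = zeros(nx, ny);
A(find(r < 0.7)) = 1;                   // hard circular aperture
c = 0.1;                                // variance-like parameter; try other values
G = exp(-(r.^2)/(2*c));                 // Gaussian mask centered on the aperture
imwrite(A.*G, "gaussian_aperture.png");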


Here are some other images that I tried: a criss-cross pattern and a double slit.


Code Snippet:

//cross
x2 = linspace(-7*%pi, 8*%pi, 500);
y2 = sin(x2);
grating = ndgrid(y2, y2);              // first ndgrid output: sin(x2) replicated along one axis
grating(find(grating > 0)) = 0;        // binarize the sinusoid
grating(find(grating <= 0)) = 1;
checkerboard = grating' - grating;     // combine with the transposed grating to cross the stripes
imwrite(checkerboard, "cross.png");

//double slit
nx = 200;
grid = zeros(nx, nx);                  // start from a black background
dist = 20;                             // gap between the inner edges of the slits (pixels)
width = 30;                            // width of each slit (pixels)
ind1 = nx/2 - dist/2;                  // inner edge of the left slit
ind2 = nx/2 + dist/2;                  // inner edge of the right slit
grid(1:nx, ind1-width:ind1) = 1;       // left slit
grid(1:nx, ind2:ind2+width) = 1;       // right slit
imwrite(grid, "double_slit.png");

For this activity, I give myself a grade of 12/10 because I was able to produce all the required synthetic images and took the initiative to create other patterns. I also think the code I used was efficient and effective. :D

I would like to thank Ms. Mabel Saludares and Mr. Gino Borja for helpful discussions. :)
