Hello! In this activity we process a video of a kinematic event in order to extract information such as constants, frequencies, etc. Our group took a video of a 3D spring pendulum observed in one plane. We would like to trace its path and then determine its phase-space plot. The mass was covered in masking tape with its bottom colored red to make segmentation easier. The video was taken with a Canon D10 camera at a frame rate of 30 fps.

Media 1. Video of the spring pendulum (first 50 frames only)

The frames of the video were extracted using Avidemux 2.5, and the mass was then segmented from each frame using parametric segmentation. The patch of the region of interest (ROI) used for color segmentation is shown in Figure 1.

Figure 1. Patch used to segment ROI

The segmented images were then cleaned using morphological operations, particularly the Open and Close operations; a sketch of this cleanup step is given below. The extracted frames for different observation times t and th
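A minimal sketch of the cleanup and tracking step follows. It is written in Python with NumPy and SciPy purely for illustration (not the exact script we ran), and it assumes the color segmentation of each frame is already available as a binary mask; the structuring-element size and the centroid step are likewise assumptions.

# Minimal sketch of the cleanup step (Python/SciPy, illustration only;
# the 5x5 structuring element and the centroid step are assumptions,
# not the exact parameters we used).
import numpy as np
from scipy import ndimage

def clean_and_locate(mask, struct_size=5):
    """Clean a binary segmentation mask and return its (row, col) centroid."""
    struct = np.ones((struct_size, struct_size), dtype=bool)
    # Open removes small speckles wrongly classified as part of the mass
    cleaned = ndimage.binary_opening(mask, structure=struct)
    # Close fills small holes left inside the segmented mass
    cleaned = ndimage.binary_closing(cleaned, structure=struct)
    # the centroid of the remaining blob approximates the mass position
    return ndimage.center_of_mass(cleaned)

# Collecting the centroid of every frame gives the position trace; finite
# differences at the 30 fps frame rate then give the velocity needed for
# the phase-space plot, e.g.
#   positions = np.array([clean_and_locate(m) for m in masks])
#   velocities = np.gradient(positions, 1.0 / 30.0, axis=0)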
In image segmentation, we want to separate a region of interest (ROI) from the rest of the image, usually to extract useful information or to identify objects. The segmentation is done based on features unique to the ROI; in this activity, we segment objects from the background based on their color. However, real 3D objects in images, even if monochromatic, have shading variations, so it is better to work in the normalized chromaticity coordinates (NCC) rather than in the RGB color space, since the NCC separates brightness from pure color information. For each pixel in the image, let the total intensity be I = R + G + B. The normalized chromaticity coordinates of that pixel are then

r = R/I, g = G/I, b = B/I.

Since the three coordinates sum to unity, it is enough to express chromaticity using only the two coordinates r and g, because b = 1 - r - g.
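To make the parametric segmentation concrete, here is a minimal sketch in Python/NumPy, again for illustration only: the ROI patch of Figure 1 supplies the mean and standard deviation of r and g, each frame pixel is scored by the product of the two Gaussian likelihoods, and the score is thresholded (the threshold value here is an assumption) to produce the binary mask that is cleaned above.

# Minimal sketch of the NCC conversion and parametric segmentation
# (Python/NumPy, illustrative only; the threshold value is an assumption).
import numpy as np

def to_ncc(img):
    """Return the r and g chromaticity channels of an RGB image."""
    rgb = img.astype(float)
    intensity = rgb.sum(axis=2) + 1e-12   # I = R + G + B (avoid division by zero)
    # b = 1 - r - g carries no extra information, so only r and g are returned
    return rgb[..., 0] / intensity, rgb[..., 1] / intensity

def gaussian(x, mu, sigma):
    """Gaussian probability density evaluated at x."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def parametric_mask(frame, patch, threshold=1e-4):
    """Binary mask of frame pixels whose chromaticity matches the ROI patch."""
    # learn the patch statistics in NCC space
    pr, pg = to_ncc(patch)
    mu_r, sd_r = pr.mean(), pr.std()
    mu_g, sd_g = pg.mean(), pg.std()
    # joint likelihood of each frame pixel, treating r and g as independent
    fr, fg = to_ncc(frame)
    likelihood = gaussian(fr, mu_r, sd_r) * gaussian(fg, mu_g, sd_g)
    return likelihood > threshold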