Week 1: If planning on using Matlab (recommended), watch the tutorial videos provided in the corresponding section, and run "help images" at the Matlab command line for examples of important image-related commands.
o Write a computer program capable of reducing the number of intensity levels in an image from 256 to 2, in integer powers of 2. The desired number of intensity levels must be a variable input to your program.
o Using any programming language you feel comfortable with (though the freely provided Matlab is recommended), load an image and then perform a simple spatial 3x3 average of image pixels. In other words, replace the value of every pixel by the average of the values in its 3x3 neighborhood. If the pixel is located at (0,0), this means averaging the values of the pixels at the positions (-1,1), (0,1), (1,1), (-1,0), (0,0), (1,0), (-1,-1), (0,-1), and (1,-1). Be careful with pixels at the image boundaries. Repeat the process for a 10x10 neighborhood and again for a 20x20 neighborhood. Observe what happens to the image (we will discuss this in more detail in the very near future, around Week 3).
o Rotate the image by 45 and 90 degrees (Matlab provides simple commands for doing this).
o For every 3x3 block of the image (without overlapping), replace all 9 corresponding pixels by their average. This operation simulates reducing the image's spatial resolution. Repeat this for 5x5 blocks and 7x7 blocks. If you are using Matlab, investigate simple commands to do this important operation.

Week 2:
o Divide the image into non-overlapping 8x8 blocks.
o Compute the DCT (discrete cosine transform) of each block. This is implemented in popular packages such as Matlab.
o Quantize each block. You can do this using the tables in the video, or simply divide each coefficient by N, round the result to the nearest integer, and multiply back by N. Try different values of N.
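The divide/round/multiply quantization step just described can be sketched in pure Python (a hypothetical `quantize_block` helper; a block is assumed to be a list of lists of DCT coefficients):

```python
def quantize_block(block, n):
    """Quantize coefficients: divide by n, round to nearest integer, multiply back by n."""
    return [[round(c / n) * n for c in row] for row in block]

# Larger n discards more precision, giving higher compression:
print(quantize_block([[52.6, -3.8], [1.2, 0.4]], 10))  # [[50, 0], [0, 0]]
```

The same helper can be reused for the no-transform experiment below, by applying it directly to pixel values instead of DCT coefficients.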
You can also try preserving the 8 largest coefficients (out of the total of 8x8=64), and simply rounding them to the closest integer.
o Visualize the results after you invert the quantization and the DCT.
o Repeat the above, but instead of using the DCT, use the FFT (Fast Fourier Transform). (Such block operations are easy when using Matlab; see for example the function at http://www.mathworks.com/help/images/ref/blockproc.html.)
o Repeat the above JPEG-type compression but don't use any transform: simply perform quantization on the original image, for different values of N.
o Do JPEG now for color images. In Matlab, use the rgb2ycbcr command to convert the Red-Green-Blue image to a Luma and Chroma one, then perform the JPEG-style compression on each one of the three channels independently. After inverting the compression, invert the color transform and visualize the result. While keeping the compression ratio constant for the Y channel, increase the compression of the two chrominance channels and observe the results.
o Compute the histogram of a given image and of its prediction errors. If the pixel being processed is at coordinate (0,0), consider:
  o predicting based on just the pixel at (-1,0);
  o predicting based on just the pixel at (0,1);
  o predicting based on the average of the pixels at (-1,0), (-1,1), and (0,1).
Compute the entropy for each one of the predictors. Which predictor will compress better?

Week 3: (Optional programming exercises)
o Implement a histogram equalization function. If using Matlab, compare your implementation with Matlab's built-in function.
o Implement a median filter. Add different levels and types of noise to an image and experiment with different sizes of support for the median filter. As before, compare your implementation with Matlab's.
o Implement the non-local means algorithm. Try different window sizes. Add different levels of noise and see its influence on the need for larger or smaller neighborhoods. Compare your results with those available in IPOL as demonstrated in the video lectures.
o Consider an image and add random noise to it. Repeat this N times, for different values of N, and add the resulting images. What do you observe?
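The noise-averaging experiment above can be sketched in pure Python (the flat 4x4 test image, the noise level, and N=100 are arbitrary illustrative choices; the fixed seed is only for reproducibility):

```python
import random

def add_noise(img, sigma, rng):
    """Return a copy of img with zero-mean Gaussian noise of std sigma added."""
    return [[p + rng.gauss(0, sigma) for p in row] for row in img]

def average_images(imgs):
    """Pixel-wise average of a list of equal-sized images."""
    n = len(imgs)
    return [[sum(im[i][j] for im in imgs) / n
             for j in range(len(imgs[0][0]))]
            for i in range(len(imgs[0]))]

rng = random.Random(0)                   # fixed seed for reproducibility
clean = [[100.0] * 4 for _ in range(4)]  # flat 4x4 test "image"
copies = [add_noise(clean, 20.0, rng) for _ in range(100)]
avg = average_images(copies)

def mean_abs_err(img):
    return sum(abs(p - 100.0) for row in img for p in row) / 16

# Averaging N independent noisy copies shrinks the noise std by roughly sqrt(N):
print(mean_abs_err(copies[0]), mean_abs_err(avg))
```

The averaged image is much closer to the clean one than any single noisy copy, which is the effect the exercise asks you to observe.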
o Implement the basic color edge detector. What happens when the 3 channels are equal?
o Take a video and do frame-by-frame histogram equalization and run the resulting video. Now consider a group of frames as a large image and do histogram equalization for all of them at once. What looks better? See this example on how to read and handle videos in Matlab:

    xyloObj = VideoReader('xylophone.mp4');
    nFrames = xyloObj.NumberOfFrames;
    vidHeight = xyloObj.Height;
    vidWidth = xyloObj.Width;

    % Preallocate movie structure.
    mov(1:nFrames) = struct('cdata', zeros(vidHeight, vidWidth, 3, 'uint8'), 'colormap', []);

    % Read one frame at a time.
    for k = 1 : nFrames
        im = read(xyloObj, k);
        % here we process the image im
        mov(k).cdata = im;
    end

    % Size a figure based on the video's width and height.
    hf = figure;
    set(hf, 'position', [150 150 vidWidth vidHeight])

    % Play back the movie once at the video's frame rate.
    movie(hf, mov, 1, xyloObj.FrameRate);

o Take a video and do frame-by-frame non-local means denoising. Repeat but now using a group of frames as a large image. This allows you, for example, to find more matching blocks (since you are searching across frames). What happens if you now use 3D spatio-temporal blocks, e.g., 5x5x3 blocks, and consider the group of frames as a 3D image? Try this and compare with previous results.
o Search for "camouflage artist liu bolin." Do you think you can use the tools you are learning to detect him?
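Histogram equalization, used in several of the Week 3 exercises above, can be sketched for a grayscale image as follows (a minimal pure-Python sketch using one common CDF-based mapping; the function name and the list-of-lists image representation are illustrative assumptions):

```python
def equalize(img, levels=256):
    """Histogram-equalize a grayscale image given as a list of lists of ints."""
    n = sum(len(row) for row in img)  # total number of pixels
    hist = [0] * levels
    for row in img:
        for p in row:
            hist[p] += 1
    # Cumulative distribution function of the intensities.
    cdf, c = [], 0
    for count in hist:
        c += count
        cdf.append(c)
    # Map each intensity through the normalized CDF.
    return [[round((levels - 1) * cdf[p] / n) for p in row] for row in img]

print(equalize([[0, 1], [1, 1]], levels=4))  # [[1, 3], [3, 3]]
```

For the video exercise, running this per frame corresponds to frame-by-frame equalization, while pooling all frames into one histogram before mapping corresponds to treating the group of frames as a single large image.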
Week 4: (Optional programming exercises)
o Add Gaussian and salt-and-pepper noise with different parameters to an image of your choice. Apply a median filter to the images you obtained above. Change the window size of the filter and evaluate its relationship with the noise levels. Evaluate what levels of noise you consider still acceptable for visual inspection of the image.
o Practice with Wiener filtering. Consider for example a Gaussian blurring (so you know exactly the H function) and play with different values of K for different types and levels of noise.
o Compare the results of non-local means from the previous week (use for example the implementation in www.ipol.im) with those of Wiener filtering.
o Blur an image applying local averaging (select different block sizes and use both overlapping and non-overlapping blocks). Apply non-local means to it. Observe if it helps to make the image better. Could you design a restoration algorithm, for blurry images, that uses the same concepts as non-local means?
o Make multiple (N) copies of the same image (e.g., N=10). To each copy, apply a random rotation and add some random Gaussian noise (you can test different noise levels). Using a registration function like imregister in Matlab, register the N images back (use the first image as reference, so register the other N-1 to it), and then average them. Observe if you manage to estimate the correct rotation angles and if you manage to reduce the noise. Note: registration means that you are aligning the images again; see for example http://www.mathworks.com/help/images/ref/imregister.html or http://en.wikipedia.org/wiki/Image_registration.
o Apply JPEG compression to an image, with high levels of compression such that the artifacts are noticeable. Can you apply any of the techniques learned so far to enhance the image, for example, reduce the artifacts or the blocking effects? Try as many techniques as you can and have time to do.
o Apply any image predictor as those we learned in Week 2. Plot the histogram of the prediction error. Try to fit a function to it to learn what type of distribution best fits the prediction error.
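As a reference point for the median-filter exercises above, a minimal pure-Python sketch (windows are simply clipped at the image boundaries, which is one of several reasonable boundary policies):

```python
from statistics import median

def median_filter(img, k=3):
    """Apply a k x k median filter to a grayscale image (list of lists)."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = []
    for i in range(h):
        row = []
        for j in range(w):
            # Gather the window, clipped to the image bounds.
            window = [img[y][x]
                      for y in range(max(0, i - r), min(h, i + r + 1))
                      for x in range(max(0, j - r), min(w, j + r + 1))]
            row.append(median(window))
        out.append(row)
    return out

# A single "salt" pixel in a flat region is removed completely:
noisy = [[10, 10, 10], [10, 255, 10], [10, 10, 10]]
print(median_filter(noisy))
```

Unlike local averaging, the outlier does not bleed into its neighbors, which is why the median filter works so well on salt-and-pepper noise.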
Week 5:
o Implement the Hough transform to detect circles.
o Implement the Hough transform to detect ellipses.
o Implement the Hough transform to detect straight lines and circles in the same image.
o Consider an image with 2 objects and a total of 3 pixel values (1 for each object and one for the background). Add Gaussian noise to the image. Implement and test Otsu's algorithm with this image.
o Implement a region growing technique for image segmentation. The basic idea is to start from a set of points inside the object of interest (foreground), denoted as seeds, and recursively add neighboring pixels as long as they are in a pre-defined range of the pixel values of the seeds. In other words, start from multiple points (e.g., 5) randomly located in the image and grow the regions. Consider growing always from the region that is most convenient.
o Implement region growing from multiple seeds and with a functional like Mumford-Shah, considering a penalty that takes into account the average gray value of the region as it grows (and the error it produces) as well as the new length of the region as it grows.
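A minimal sketch of the basic region growing idea described above (4-connectivity and a fixed tolerance around the seed's value are simplifying assumptions; a full solution would handle multiple seeds and update the range as the region grows):

```python
def region_grow(img, seed, tol):
    """Grow a region from seed (row, col): repeatedly add 4-connected
    neighbors whose value is within tol of the seed's value.
    Returns the set of (row, col) pixels in the region."""
    h, w = len(img), len(img[0])
    seed_val = img[seed[0]][seed[1]]
    region, stack = {seed}, [seed]
    while stack:
        i, j = stack.pop()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if (0 <= ni < h and 0 <= nj < w and (ni, nj) not in region
                    and abs(img[ni][nj] - seed_val) <= tol):
                region.add((ni, nj))
                stack.append((ni, nj))
    return region

# L-shaped object of value 1 on a background of 9:
img = [[1, 1, 9],
       [1, 9, 9],
       [1, 1, 1]]
print(len(region_grow(img, (0, 0), tol=0)))  # 6
```

With several randomly placed seeds, you would run one such growth per seed and, as the exercise suggests, always expand next from the region whose best candidate pixel fits its range most cheaply.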