
Digital image processing by sridhar pdf download

2021.12.20 17:00






















What is going on here? In fact, emu is an indexed image.

Information about your image

A great deal of information about an image can be obtained with the imfinfo function. For example, suppose we take our indexed image emu.


The fact is that Matlab does not distinguish between greyscale and binary images: a binary image is just a special case of a greyscale image which has only two intensities.


There are other data types; see the help for datatypes. An important consideration, of which we shall see more below, is that arithmetic operations are not permitted with the data types int8, int16, uint8 and uint16. A greyscale image may consist of pixels whose values are of data type uint8. These images are thus reasonably efficient in terms of storage space, since each pixel requires only one byte.


However, arithmetic operations are not permitted on this data type; a uint8 image must be converted to double before any arithmetic is attempted. We can convert images from one image type to another; table 3 lists the conversion functions. Note that the gray2rgb function does not create a colour image, but an image all of whose pixel colours are the same as before. Exercises 1. Make a list of these sample images, and for each image (a) determine its type (binary, greyscale, true colour or indexed colour), (b) determine its size in pixels, and (c) give a brief description of the picture (what it looks like; what it seems to be a picture of). 2.


Pick a greyscale image, say cameraman, and save it to files of different types. What are the sizes of those files? Repeat the above question with (a) a binary image, (b) an indexed colour image, and (c) a true colour image.


In this chapter, we investigate this matter in more detail. We look more deeply at the use of the imshow function, and how spatial resolution and quantization can affect the display and appearance of an image.


In particular, we look at image quality, and how that may be affected by various image attributes. Quality is of course a highly subjective matter: no two people will agree precisely as to the quality of different images. However, for human vision in general, images are preferred to be sharp and detailed.


This is a consequence of two properties of an image: its spatial resolution, and its quantization. This is reasonable, since the data type uint8 restricts values to be integers between 0 and 255. However, not all image matrices come so nicely bundled up into this data type, and lots of Matlab image processing commands produce output matrices which are of type double. We have two choices with a matrix of this type: 1.


The second option is possible because imshow will display a matrix of type double as a greyscale image, as long as the matrix elements are between 0 and 1. However, as figure 4 shows, this has its limits: values greater than 1 will be displayed as 1 (white), and values less than 0 will be displayed as 0 (black).
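The clipping behaviour just described is easy to check outside Matlab as well. The following NumPy sketch (Python is used here purely for illustration, since this book's examples are in Matlab; show_scaled is our own name, not a library routine) mimics how imshow treats a matrix of type double:

```python
import numpy as np

def show_scaled(m):
    """Mimic how imshow treats a double matrix: values below 0
    display as black (0), values above 1 display as white (1)."""
    return np.clip(m, 0.0, 1.0)

cd = np.array([[21.0, 200.0], [0.5, -3.0]])
print(show_scaled(cd))        # everything >= 1 shows as white
print(show_scaled(cd / 512))  # dividing compresses values toward black
```

Dividing the matrix by a large constant before display darkens the result, exactly as described above.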


In the caribou image, every pixel has value greater than or equal to 1 (in fact the minimum value is 21), so every pixel will be displayed as white. To display the matrix cd, we need to scale it to the range 0–1. We can vary the display by changing the scaling of the matrix. Dividing by 512 darkens the image, as all matrix values are now between 0 and 0.5.


Dividing by 128 means that the range is 0–2, and all pixels in the range 1–2 will be displayed as white. Thus the image has an over-exposed, washed-out appearance. The display of the result of a command whose output is a matrix of type double can be greatly affected by a judicious choice of scaling factor. We can convert the original image to double more properly using the function im2double.


This applies correct scaling so that the output values are between 0 and 1. It is important to make the distinction between the two functions double and im2double: double changes the data type but does not change the numeric values; im2double changes both the numeric data type and the values. The exception of course is if the original image is of type double, in which case im2double does nothing.
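The distinction between the two conversions can be made concrete with a small sketch. Assuming uint8 input, a NumPy version of the two behaviours (the names to_double and im2double_like are ours, chosen for illustration) might look like this:

```python
import numpy as np

def to_double(img):
    """Analogue of Matlab's double(): change the type, keep the values."""
    return img.astype(np.float64)

def im2double_like(img):
    """Analogue of im2double for uint8 input: change the type AND
    rescale so values lie in [0, 1]; leave double input unchanged."""
    if img.dtype == np.uint8:
        return img.astype(np.float64) / 255.0
    return img.astype(np.float64)

x = np.array([[0, 128, 255]], dtype=np.uint8)
print(to_double(x))       # [[  0. 128. 255.]]
print(im2double_like(x))  # values scaled into [0, 1]
```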


Although the command double is not of much use for direct image display, it can be very useful for image arithmetic. We have seen examples of this above with scaling. Corresponding to the functions double and im2double are the functions uint8 and im2uint8.

Binary images

Recall that a binary image will have only two values: 0 and 1. Matlab does not have a binary data type as such, but it does have a logical flag, whereby uint8 values of 0 and 1 can be interpreted as logical data.


A very disappointing image! But this is to be expected: in a matrix of type uint8, white is 255, 0 is black, and 1 is a very dark grey, indistinguishable from black. Now consider the 0th bit of each pixel value. Since this bit has the least effect in terms of the magnitude of the value, it is called the least significant bit, and the plane consisting of those bits the least significant bit plane.


Similarly the 7th bit plane consists of the first bit in each value. This bit has the greatest effect in terms of the magnitude of the value, so it is called the most significant bit, and the plane consisting of those bits the most significant bit plane.
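The bit planes themselves are easy to compute. The book does this with mod; an equivalent sketch using bit shifts (Python/NumPy used for illustration; bit_plane is our own name) is:

```python
import numpy as np

def bit_plane(img, k):
    """Extract bit plane k (0 = least significant, 7 = most
    significant) from a uint8 greyscale image, as a 0/1 array."""
    return (img >> k) & 1

x = np.array([[200, 3], [129, 64]], dtype=np.uint8)
print(bit_plane(x, 0))  # least significant bit plane
print(bit_plane(x, 7))  # most significant bit plane
```

The mod formulation used in the text, mod(floor(x / 2^k), 2), computes exactly the same thing.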


If we take a greyscale image, we start by making it a matrix of type double; this means we can perform arithmetic on the values. We can isolate each bit plane with the mod function. Note that the least significant bit plane, c0, is to all intents and purposes a random array, and that as we move up through the bit planes we see more and more of the image appearing. Open the greyscale image cameraman. What data type is it? View this image. What does the function im2uint8 do? What effect does it have on (a) the appearance of the image?


What happens if you apply im2uint8 to the cameraman image? Experiment with reducing the spatial resolution of the following images: (a) cameraman. However, image processing operations may be divided into three classes, based on the information required to perform the transformation.


From the most complex to the simplest, they are: 1. Transforms. We require a knowledge of all the grey levels in the entire image to transform the image.


In other words, the entire image is processed as a single large block. This may be illustrated by the diagram shown in figure 5. 2. Spatial filters. To change the grey level of a given pixel we need only know the values of the grey levels in a small neighbourhood of pixels around the given pixel.


3. Point operations. Although point operations are the simplest, they include some of the most powerful and widely used of all image processing operations. They are especially useful in image pre-processing, where an image is required to be modified before the main job is attempted. We can obtain an understanding of how these operations affect an image by looking at the graph of old grey values against new values.


Figure 5 shows such graphs. When we subtract 128, all grey values of 128 or less will be mapped to 0. By looking at these graphs, we see that in general adding a constant will lighten an image, and subtracting a constant will darken it. The difficulty is that the results may fall outside the allowed range of grey values; we can get round this in two ways. To implement these functions, we use the immultiply function; table 5 lists the relevant commands. All these images can be viewed with imshow; they are shown in figure 5.
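The range problem can be sketched concretely. The following NumPy fragment (Python used for illustration; add_const is our own name) widens the data type before adding, so the sum cannot wrap around, and then clips back into the uint8 range 0–255:

```python
import numpy as np

def add_const(img, c):
    """Brighten (c > 0) or darken (c < 0) a uint8 image, saturating
    at 0 and 255 rather than wrapping around."""
    out = img.astype(np.int16) + c          # widen to avoid wraparound
    return np.clip(out, 0, 255).astype(np.uint8)

x = np.array([[10, 200, 250]], dtype=np.uint8)
print(add_const(x, 100))   # bright pixels saturate at white (255)
print(add_const(x, -100))  # dark pixels saturate at black (0)
```

Without the widening step, uint8 arithmetic would silently wrap (for example 250 + 100 would give 94), which is exactly the problem the text warns about.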


Compare the results of darkening in b2 and b3; image b2 shows a greater loss of detail. This is because in image b2 all pixels with grey values of 128 or less have become zero. A similar loss of information has occurred in the images b1 and b4. Note in particular the edges of the light coloured block in the bottom centre; in both b1 and b4 the right hand edge has disappeared. However, the edge is quite visible in image b5.

Complements

The complement of a greyscale image is its photographic negative.


Or we could take the complement of pixels which are 128 or greater, and leave other pixels untouched. The effect of these functions is called solarization.
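Both operations can be sketched in a few lines. The following NumPy fragment (Python used for illustration; the names and the threshold of 128 are illustrative assumptions) assumes uint8 input:

```python
import numpy as np

def complement(img):
    """Photographic negative of a uint8 greyscale image."""
    return 255 - img

def solarize(img, t=128):
    """Complement only pixels at or above threshold t, leaving the
    rest untouched (t = 128 is an illustrative choice)."""
    return np.where(img >= t, 255 - img, img).astype(np.uint8)

x = np.array([[0, 100, 200, 255]], dtype=np.uint8)
print(complement(x))  # [[255 155  55   0]]
print(solarize(x))    # dark pixels unchanged, bright pixels inverted
```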


The result is shown in figure 5. Since the grey values are all clustered together in the centre of the histogram, we would expect the image to be poorly contrasted, as indeed it is. Given a poorly contrasted image, we would like to enhance its contrast, by spreading out its histogram.


There are two ways of doing this. We can stretch the grey levels in the centre of the range out by applying the piecewise linear function shown at the right in figure 5. Grey levels outside this range are either left alone (as in this case) or transformed according to the linear functions at the ends of the graph above.

Use of imadjust

To perform histogram stretching in Matlab, the imadjust function may be used.


In its simplest incarnation, the command imadjust(im,[a,b],[c,d]) stretches the image according to the function shown in figure 5. Note that imadjust does not work in quite the same way as shown in figure 5.
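The stretching function itself is just a linear map. A sketch, assuming image values already scaled to [0, 1] and ignoring the refinements that the real imadjust adds (imadjust_like is our own illustrative name):

```python
import numpy as np

def imadjust_like(img, a, b, c, d):
    """Sketch of imadjust(im,[a,b],[c,d]) for values in [0, 1]:
    map the interval [a, b] linearly onto [c, d], clipping values
    that fall outside."""
    out = (img - a) * (d - c) / (b - a) + c
    return np.clip(out, min(c, d), max(c, d))

x = np.array([0.0, 0.3, 0.5, 0.7, 1.0])
print(imadjust_like(x, 0.3, 0.7, 0.0, 1.0))  # stretch the centre of the range
```

Values in the centre of the range are spread over the whole output range; everything at or below a maps to c, everything at or above b maps to d.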


However, values less than one produce a function which is concave downward, as shown on the left in figure 5. We may view the imadjust stretching function with the plot function. Since p and ph are matrices which contain the original values and the values after the imadjust function, the plot function simply plots them, using dots to do it. A simple procedure which takes as inputs images of type uint8 or double is shown in figure 5. Sometimes a better approach is provided by histogram equalization, which is an entirely automatic procedure.


We would expect this image to be uniformly bright, with a few dark dots on it. This is far more spread out than the original histogram, and so the resulting image should exhibit greater contrast. These results are shown in figure 5.


Notice the far greater spread of the histogram. This corresponds to the greater increase of contrast in the image. We give one more example, that of a very dark image. We can obtain a dark image by taking the index values only of an indexed colour image. Since the index matrix contains only low values it will appear very dark when displayed.


To apply histogram stretching, we would need to stretch out the values between grey level 9 and the top of the occupied range. Thus, we would need to apply a piecewise function similar to that shown in figure 5. The dashed line is simply joining the tops of the histogram bars. However, it can be interpreted as an appropriate histogram stretching function. But this is precisely the method described in section 5. Thresholding is a vital part of image segmentation, where we wish to isolate objects from the background.


It is also an important component of robot vision. Thresholding can be done very simply in Matlab. Suppose we have an 8 bit image, stored as the variable X. We can view the result with imshow. The resulting image can then be further processed to find the number, or average size of the grains.


To see how this works, recall that in Matlab, an operation on a single number, when applied to a matrix, is interpreted as being applied simultaneously to all elements of the matrix; this is vectorization, which we have seen in chapter 2. This command will work on greyscale, coloured and indexed images of data type uint8, uint16 or double. As well as isolating objects from the background, thresholding provides a very simple way of showing hidden aspects of an image. For example, the image paper.
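The vectorized comparison just described is a one-liner in any array language. Here it is in NumPy, with an illustrative threshold:

```python
import numpy as np

# Thresholding is a single vectorized comparison: every pixel is
# tested at once, producing a binary (logical) image.
x = np.array([[12, 200], [90, 140]], dtype=np.uint8)
T = 100                       # an illustrative threshold
binary = x > T
print(binary)                 # logical image: True where pixel > T
print(binary.astype(np.uint8))  # the same image as 0s and 1s
```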


However, thresholding at a high level produces an image of far greater interest. Consider the following sequence of commands, which start by producing an 8-bit grey version of the indexed image spine.


Note how double thresholding brings out subtle features of the image; the result is shown in figure 5. We can obtain similar results using the im2bw function. Thresholding is useful in the following situations: 1. When we want to remove unnecessary detail from an image, to concentrate on essentials.


Examples of this were given in the rice and bacteria images: by removing all grey level information, the rice and bacteria were reduced to binary blobs. But this information may be all we need to investigate sizes, shapes, or numbers of blobs.


2. To bring out hidden detail. This was illustrated with the paper and spine images. In both, the detail was obscured because of the similarity of the grey levels involved. But thresholding can be vital for other purposes. We list a few more: 3. When we want to remove a varying background from text or a drawing. We can simulate a varying background by taking the image text. We then read in the text image, which shows white text on a dark background. The third command does several things at once: not(t) reverses the text image so as to have black text on a white background; double changes the numeric type so that the matrix can be used with arithmetic operations; finally the result is multiplied into the random matrix, and the whole thing converted to uint8 for display.


The result is shown on the left in figure 5. What happens to the results of thresholding as the threshold level is increased? If not, why not? Superimpose the image text. In each case draw the histogram corresponding to these grey levels, and then perform a histogram equalization and draw the resulting histogram.


The following small image has grey values in the range 0 to . Is the histogram equalization operation idempotent? That is, is performing histogram equalization twice the same as doing it just once? Apply histogram equalization to the indices of the image emu. Apply histogram equalization to it, and compare the result with the original image.


Using p and ph from section 5. Experiment with some other greyscale images. Spatial filtering may be considered as an extension of this, where we apply a function to a neighbourhood of each pixel. As we do this, we create a new image whose pixels have grey values calculated from the grey values under the mask, as shown in figure 6. If the function by which the new grey value is calculated is a linear function of all the grey values in the mask, then the filter is called a linear filter.


We can implement a linear filter by multiplying all elements in the mask by corresponding elements in the neighbourhood, and adding up all these products. We see that spatial filtering requires three steps: 1. This must be repeated for every pixel in the image.


The output of our working will thus consist only of nine values. We shall see later how to obtain 25 values in the output. If we continue in this manner, we will build up the following output: This can be written as a matrix. In such a case, as illustrated in figure 6, we only apply the mask to those pixels in the image for which the mask will lie fully within the image.


This means all pixels except for the edges, and results in an output image which is smaller than the original. If the mask is very large, we may lose a significant amount of information by this method. We applied this method in our example above.
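The mask-multiply-sum procedure can be sketched directly. The following NumPy fragment (filter2_valid is our own name) computes only the "valid" outputs, those where the mask fits entirely inside the image, so the output is smaller than the input, just as described above:

```python
import numpy as np

def filter2_valid(mask, img):
    """Linear spatial filtering as described in the text: centre the
    mask over each pixel, multiply elementwise, sum.  Only pixels
    where the mask fits inside the image are computed ('valid')."""
    mh, mw = mask.shape
    h, w = img.shape
    out = np.empty((h - mh + 1, w - mw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(mask * img[i:i + mh, j:j + mw])
    return out

avg = np.ones((3, 3)) / 9.0          # 3x3 averaging mask
img = np.arange(25, dtype=float).reshape(5, 5)
print(filter2_valid(avg, img))       # a 3x3 output from a 5x5 image
```

Note the shrinkage: a 3-by-3 mask applied to a 5-by-5 image yields only nine values, matching the worked example in the text.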


We assume that all necessary values outside the image are zero. This gives us all values to work with, and will return an output image of the same size as the original, but may have the effect of introducing unwanted artifacts for example, edges around the image. We can create our filters by hand, or by using the fspecial function; this has many options which makes for easy creation of many different filters. We shall discuss the use of this function later. The averaging filter blurs the image; the edges in particular are less distinct than in the original.


The image can be further blurred by using an averaging filter of larger size. Notice how the zero padding used at the edges has resulted in a dark border appearing around the image. This is especially noticeable when a large filter is being used. In such cases, too much detail may obscure the outcome. One important aspect of an image which enables us to do this is the notion of frequencies. Fundamentally, the frequencies of an image are the amount by which grey values change with distance.


High frequency components are characterized by large changes in grey values over small distances; examples of high frequency components are edges and noise. Low frequency components, on the other hand, are parts of the image characterized by little change in the grey values. These may include backgrounds and skin textures. We note that the sum of the coefficients (that is, the sum of all the elements in the matrix) in the high pass filter is zero. We shall see how to deal with negative values below.


High pass filters are of particular value in edge detection and edge enhancement of which we shall see more in chapter 8. But we can provide a sneak preview, using the cameraman image. However, the result of applying a linear filter may be values which lie outside this range.


Make negative values positive. This will certainly deal with negative values, but not with values greater than 255. Hence, this can only be used in specific circumstances; for example, when there are only a few negative values, and when these values are themselves close to zero.


Clip values. In such a case this operation will tend to destroy the results of the filter. Scaling transformation. This means the output of mat2gray is always of type double. The function also requires that the input type is double. The result can be viewed with imshow. We can make it a uint8 image by multiplying by 255 first. The result can be seen in figure 6 (using mat2gray, and dividing by a constant). The effects can be seen quite clearly in the right hand image of figure 6. The fspecial function can produce many different filters for use with the filter2 function; we shall look at a particularly important filter here.
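The scaling transformation can be written down directly from its description: subtract the minimum, divide by the range, so the smallest value maps to 0 and the largest to 1. A NumPy sketch (mat2gray_like is our own name, standing in for Matlab's mat2gray):

```python
import numpy as np

def mat2gray_like(m):
    """Linear scaling of an arbitrary matrix onto [0, 1]:
    the minimum maps to 0 and the maximum to 1 (cf. mat2gray)."""
    m = m.astype(float)
    return (m - m.min()) / (m.max() - m.min())

x = np.array([[-50.0, 0.0], [100.0, 150.0]])
print(mat2gray_like(x))   # [[0.   0.25] [0.75 1.  ]]
```

This handles both negative values and values above the display range in one step, which is why it is the preferred way of displaying filter output.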


(Figure 6 compares a large and a small value of the standard deviation.) Gaussian filters have a blurring effect which looks very similar to that produced by neighbourhood averaging. If the filter is to be square, as in all the above examples, we can just give a single number in each case.


Now we can apply the filter to the cameraman image matrix c and view the result. We see that to obtain a spread out blurring effect, we need a large standard deviation. In fact, if we let the standard deviation grow large without bound, we obtain the averaging filters as limiting values. Although the results of Gaussian blurring and averaging look similar, the Gaussian filter has some elegant mathematical properties which make it particularly suitable for blurring.
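The Gaussian mask itself is easy to construct from the formula, in the spirit of fspecial('gaussian',size,sigma). A sketch (gaussian_kernel is our own name):

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Build a normalized square 2D Gaussian mask: sample
    exp(-(x^2 + y^2) / (2 sigma^2)) on a centred grid, then divide
    by the sum so the coefficients total 1."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return g / g.sum()

g = gaussian_kernel(5, 1.0)
print(g.round(3))
print(g.sum())   # coefficients sum to 1, so flat regions are unchanged
```

A larger sigma flattens the mask toward the averaging filter, which is the limiting behaviour described above.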


Other filters will be discussed in future chapters; also check the documentation for fspecial for other filters. However, non-linear filters can also be used, if less efficiently. The function to use is nlfilter, which applies a filter to an image according to a pre-defined function. If the function is not already defined, we have to create an m-file which defines it.


The function must be a matrix function which returns a scalar value. The result of this operation is shown in figure 6. Note that in each case the image has lost some sharpness, and has been brightened by the maximum filter, and darkened by the minimum filter.


The nlfilter function is very slow; in general there is little call for non-linear filters except for a few which are defined by their own commands. We shall investigate these in later chapters. The array below represents a small greyscale image.


Compute the images that result when the image is convolved with each of the masks a to h shown. At the edge of the image use a restricted mask. In other words, pad the image with zeroes. Check your answers to the previous question with Matlab. Describe what each of the masks in the previous question might be used for. Can you now see what each filter does? Apply larger and larger averaging filters to this image. What is the smallest sized filter for which the whiskers cannot be seen?


Repeat the previous question with Gaussian filters of various sizes and standard deviations. Can you see any observable difference in the results of average filtering and of using a Gaussian filter? Read through the help page of the fspecial function, and apply some of the other filters to the cameraman image, and to the mandrill image.


Apply different laplacian filters to the mandrill and cameraman images. Which produces the best edge image? Matlab also has an imfilter function: if x is an image matrix (of any type) and f is a filter, it has the syntax imfilter(x,f). It differs from filter2 in the different parameters it takes (read its help file), and in that the output is always of the same class as the original image.


Compare the results with those obtained with filter2. Which do you think gives the best results? Display the difference between the cmax and cmin images obtained in section 6. Can you account for the output of these commands? If an image is being sent electronically from one place to another, via satellite or wireless transmission, or through networked cable, we may expect errors to occur in the image signal. These errors will appear on the image output in different ways depending on the type of disturbance in the signal.


Usually we know what type of errors to expect, and hence the type of noise on the image; hence we can choose the most appropriate method for reducing the effects. Cleaning an image corrupted by noise is thus an important area of image restoration.


In this chapter we will investigate some of the standard noise forms, and the different methods of eliminating or reducing their effects on the image. Salt and pepper noise Also called impulse noise, shot noise, or binary noise. This degradation can be caused by sharp, sudden disturbances in the image signal; its appearance is randomly scattered white or black or both pixels over the image.


The twins image is shown in figure 7. We can observe white noise by watching a television which is slightly mistuned to a particular channel. Gaussian noise is white noise which is normally distributed. It can be shown that this is an appropriate model for noise.

Speckle noise

Whereas Gaussian noise can be modelled by random values added to an image, speckle noise (or more simply just speckle) can be modelled by random values multiplied by pixel values; hence it is also called multiplicative noise.


Speckle noise is a major problem in some radar applications. Periodic noise If the image signal is subject to a periodic, rather than a random disturbance, we might obtain an image corrupted by periodic noise.


The effect is of bars over the image. Salt and pepper noise, Gaussian noise and speckle noise can all be cleaned by using spatial filtering techniques.


Periodic noise, however, requires image transforms for best results, and so we will leave the discussion on cleaning up periodic noise until a later chapter. Median filtering Median filtering seems almost tailor-made for removal of salt and pepper noise.


Recall that the median of a set is the middle value when the set is sorted. If there are an even number of values, the median is the mean of the middle two. Thus the median will in general replace a noisy value with one closer to its surroundings. The result is a vast improvement on using averaging filters.
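A minimal sketch of median filtering, operating only on interior pixels and leaving the edges alone (medfilt3 is our own illustrative name, not Matlab's medfilt2), shows why it handles salt and pepper noise so well:

```python
import numpy as np

def medfilt3(img):
    """3x3 median filter on the interior of an image; edge pixels
    are left as in the original -- a minimal sketch of medfilt2."""
    out = img.copy()
    h, w = img.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = np.median(img[i - 1:i + 2, j - 1:j + 2])
    return out

# one 'salt' pixel (255) in a flat region is replaced by the median
# of its neighbourhood; an averaging filter would smear it instead
x = np.full((5, 5), 50, dtype=np.uint8)
x[2, 2] = 255
print(medfilt3(x)[2, 2])   # 50 -- the outlier is gone
```

An averaging filter applied to the same image would spread the bright pixel over its neighbourhood; the median discards it entirely.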


Rather than take the median of a set, we order the set and take the value at a given rank. Matlab implements rank-order filtering with the ordfilt2 function; in fact medfilt2 is really just a wrapper for a procedure which calls ordfilt2.


There is only one reason for using rank-order filtering instead of median filtering, and that is that it allows us to choose the median of non-rectangular masks. To overcome this difficulty, Pratt [16] has proposed cleaning salt and pepper noise by treating noisy pixels as outliers; that is, pixels whose grey values are significantly different from those of their neighbours.


This leads to the following approach for noise cleaning: 1. Choose a threshold value. There is no Matlab function for doing this, but it is very easy to write one. Nonetheless, we introduce a different method to show that there are other ways of cleaning salt and pepper noise. An immediate problem with the outlier method is that it is not completely automatic: the threshold must be chosen.


An appropriate way to use the outlier method is to apply it with several different thresholds, and choose the value which provides the best results. The threshold value D must be chosen to be between 0 and 1. For a suitable choice of D, we obtain the image in figure 7.


Clearly, using an appropriate value of D is essential for cleaning salt and pepper noise by this method. If D is chosen to be too small, then too many pixels will be classified as noisy and replaced; this will result in a blurring effect, similar to that obtained by using an averaging filter.


If D is chosen to be too large, then not enough noisy pixels will be classified as noisy, and there will be little change in the output. The outlier method is not particularly suitable for cleaning large amounts of noise; for such situations the median filter is to be preferred.
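The outlier approach described above can be sketched as follows. Here the mean of the 8 surrounding pixels stands in for "the neighbours", and a pixel classified as noisy is replaced by that mean; the function name outlier_clean and the choice of replacement value are our own illustrative assumptions, and values are assumed to be doubles in [0, 1]:

```python
import numpy as np

def outlier_clean(img, D):
    """Pratt-style outlier cleaning (a sketch): compare each interior
    pixel with the mean of its 8 neighbours; if the difference
    exceeds the threshold D, replace the pixel by that mean."""
    out = img.copy()
    h, w = img.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            nbhd = img[i - 1:i + 2, j - 1:j + 2]
            m = (nbhd.sum() - img[i, j]) / 8.0   # mean of the 8 neighbours
            if abs(img[i, j] - m) > D:
                out[i, j] = m
    return out

x = np.full((5, 5), 0.5)
x[2, 2] = 1.0                         # an isolated 'salt' pixel
print(outlier_clean(x, 0.2)[2, 2])    # replaced by the neighbourhood mean
```

Non-noisy pixels differ little from their neighbourhood mean, so they pass through untouched; only the outliers are replaced.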


An example is satellite imaging; if a satellite passes over the same spot many times, we will obtain many different images of the same place. In such a case a very simple approach to cleaning Gaussian noise is to simply take the average—the mean—of all the images.


We first need to create different versions with Gaussian noise, and then take the average of them. We shall create 10 versions. Each time randn is called, it creates a different sequence of numbers. So we may be sure that all levels in our three-dimensional array do indeed contain different images. The result is shown in figure 7. This is not quite clear, but is a vast improvement on the noisy image of figure 7. An even better result is obtained by taking the average of 100 images; this can be done by replacing 10 with 100 in the commands above, and the result is shown in figure 7.
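The averaging experiment is easy to reproduce outside Matlab. The following NumPy sketch (the normal generator plays the role of randn; the flat 0.5 "image" and the noise level 0.1 are illustrative) shows the variance of the residual noise dropping by roughly the number of images averaged:

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.full((64, 64), 0.5)

# ten noisy versions: zero-mean Gaussian noise added to each
noisy = [clean + rng.normal(0.0, 0.1, clean.shape) for _ in range(10)]
average = np.mean(noisy, axis=0)

# averaging n images divides the noise variance by n
print(np.var(noisy[0] - clean))   # about 0.01
print(np.var(average - clean))    # about 0.001
```

This is why 100 images give a noticeably better result than 10: the residual noise variance is a further factor of 10 smaller.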


Note that this method only works if the Gaussian noise has mean 0. The larger the size of the filter mask, the closer the mean of the noise values under it will be to zero. Unfortunately, averaging tends to blur an image, as we have seen in chapter 6. However, if we are prepared to trade off blurring for noise reduction, then we can reduce noise significantly by this method.


If we can minimize this value, we may be sure that our procedure has done as good a job as possible. Filters which operate on this principle of least squares are called Wiener filters. They come in many guises; we shall look at the particular filter which is designed to reduce Gaussian noise. This filter is a non-linear spatial filter; we move a mask across the noisy image, pixel by pixel, and as we move, we create an output image the grey values of whose pixels are based on the values under the mask.


See Lim [13] for details. The filter uses the variance of all grey values under the mask; this can be very efficiently calculated in Matlab.


Suppose we take the noisy image shown in figure 7. We will use the wiener2 function, which can take an optional parameter indicating the size of the mask to be used. Being a low pass filter, Wiener filtering does tend to blur edges and high frequency components of the image.


But it does a far better job than using a low pass blurring filter. We can achieve very good results for noise where the variance is not as high as that in our current image. The result is a great improvement over the original noisy image. The arrays below represent small greyscale images. Use the outlier method to find noisy pixels in each of the images given in question 1. Write a Matlab function to implement the pseudo-median, and apply it to the images above with the nlfilter function.


Does it produce a good result? Produce a grey subimage of the colour image flowers. Which method gives the best results? In each case, attempt to remove the noise with average filtering and with Wiener filtering. Can you produce satisfactory results with the last two noisy images?


We may use edges to measure the size of objects in an image; to isolate particular objects from their background; to recognize or classify objects. There are a large number of edge-finding algorithms in existence, and we shall look at some of the more straightforward of them. In this chapter, we shall show how to create edge images using basic filtering methods, and discuss the Matlab edge function.


An edge may be loosely defined as a line of pixels showing an observable difference. This would be easily discernible in an image: the human eye can pick out grey differences of this magnitude with relative ease.


Our aim is to develop methods which will enable us to pick out the edges of an image. If we consider the grey values along this line, and plot their values, we will have something similar to that shown in figure 8. If we now plot the differences between each grey value and its predecessor from the ramp edge, we would obtain a graph similar to that shown in figure 8. It appears that the difference tends to enhance edges, and reduce other components.


We are now in the position of having to apply two filters to an image. For example, let us take the image of the integrated circuit shown in figure 8. This is a greyscale image; a binary image containing edges only can be produced by thresholding. The result is shown in figure 8. Note that the figures differ slightly; this is because the edge function does some extra processing over and above taking the square root of the sum of the squares of the filter outputs. Of the three filters, the Sobel filters are probably the best; they provide good edges, and they perform reasonably well in the presence of noise.
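The two-filter recipe can be sketched as follows, using the Prewitt masks and zero padding (filter2_same is our own name; the small step-edge image is an illustrative example):

```python
import numpy as np

def filter2_same(mask, img):
    """Correlate with zero padding so the output has the same size
    as the input."""
    mh, mw = mask.shape
    ph, pw = mh // 2, mw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(mask * padded[i:i + mh, j:j + mw])
    return out

# Prewitt masks: one responds to vertical edges, the other to horizontal
px = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)
py = px.T

img = np.zeros((6, 6)); img[:, 3:] = 1.0   # a vertical step edge
gx = filter2_same(px, img)
gy = filter2_same(py, img)
edges = np.sqrt(gx**2 + gy**2)             # combine the two filter outputs
print(edges[2])   # strongest response at the edge columns
```

The response is concentrated on the columns either side of the step and is zero in the flat regions, which is exactly the behaviour a binary edge image is thresholded from.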


To see how the second difference affects an edge, take the difference of the pixel values as plotted in figure 8. The second difference is also extremely sensitive to noise.


However, the Laplacian does have the advantage of detecting edges in all directions equally well. A particular choice of parameter value gives the Laplacian developed earlier, and its effect can be seen in figure 8.

