Image Processing Fundamentals
Part 1


In this session, part of the AIA Certified Vision Professional-Basic program, Dr. Romik Chatterjee, Vice President of Engineering at Graftek Imaging, Inc., teaches the fundamentals of vision and imaging algorithms. You'll learn about the software side of vision system requirements and its role in solving application challenges.

Transcript

Welcome to Image Processing Fundamentals. My name is Romik Chatterjee, and I'm with Graftek Imaging in Austin, Texas. I've been there for about seven years now, leading a group doing machine vision integration. We'll start with Section 1. The agenda today is to go through some objectives and motivations for image processing, then look at the various topics: enhancing images, checking for the presence of a part, locating parts, measuring features, and finally identifying and verifying components.

The class goals are to teach the fundamentals of the image processing tools available in machine vision software packages and to provide some background into the key algorithms used in this processing. This should help with the steep learning curve of machine vision. It's also important to set expectations about what you won't learn. You won't learn how to develop a complete machine vision system today; there's much more to that than just the algorithms. You won't learn about the specific proprietary algorithms present in a lot of packages. Some of the advanced topics, like 3-D vision, will not be covered here, and the benefits of different application development environments will not be discussed either.

So let's talk about image processing for machine vision. What is the objective of image processing in machine vision? Generally, it's to extract the useful information present in an image within a limited amount of time; machine vision is usually time-constrained. A secondary objective might be to display an image for users: if you have a user interface, you may want to display something the user can use to review the results or the status of an application. The objective is generally not to improve the appearance of the image. There are other packages, like Adobe Photoshop, that specifically use image processing to improve the appearance of an image; that's not what
machine vision is about.

So what is image processing used for in machine vision? A couple of key things. Image preprocessing is used to minimize variations in the information and prepare the image for the specific tasks needed in the application. Then there is application-specific processing: things like counting particles in an image, or locating and measuring features. The particulars really depend on what you're trying to do.

We'll start with the various types of images. There are grayscale images, which are typically used in machine vision. The most common type is the 8-bit grayscale image, where the pixel values are integers that range from 0 to 255. There are other types of images with greater bit depth, such as 12-bit or 16-bit images, that cover a larger range of values: 0 to 4,095 or 0 to 65,535. There are also color images, which in most cases are simply a composition of three grayscale planes: the red, green, and blue planes that carry the color information. And there are other types of images that are generally the result of image processing functions. A typical one we'll be looking at is the binary image. Binary images generally consist of pixels that are either 0 or 1 in value, and they're commonly used to identify objects of interest: once you go through a particular processing step, the pixels belonging to the interesting things you located might be turned on and the rest of the image turned off. In some intermediate steps of image processing you can also get a floating-point image; the difference is that the pixel values are not integers but decimal values. These are not very common, but again they might be used in an intermediate step.

Generally, you're looking for an image where the range of grayscale values is well spread out without clipping.
For an 8-bit image, that means you don't want pixels saturated, stuck at 255, for most applications; there are exceptions where you do want saturation. The reason you usually want to avoid saturation is that you can't distinguish between the values of saturated pixels, and the whole point of doing image processing for machine vision is to be able to differentiate between pixels, to get contrast between the parts of the image. Depending on what you're inspecting and which parts of the image are interesting, you want to maximize the contrast between your part and the background or the other parts of the image. Most importantly, you want repeatability, because you'll be using the same image, or the same type of image, over and over, and you want consistent results. If you don't have repeatable images coming in, you'll be struggling in software to create repeatable information. In short, an ideal image requires the least number of steps to get to the result you're trying for.

The class organization: we'll go through how we enhance images, how we check for parts in an image, how we locate specific regions of an image, how we measure objects in an image, and finally we'll end with a brief review of some advanced topics like OCR and 2-D barcodes.

Here's a motivating example for this section. Say I want to read the characters on a textured surface. If I did a simple threshold, I would get a result where it's very hard to segment the characters and to do OCR that identifies the part correctly. With a few simple preprocessing steps, though, I can clean up the grayscale image I started with and get much better results; I'll show you how.
The noise in this image is the texture, and that texture is periodic, so I can apply a periodic (frequency-domain) filter. Once I remove the periodic pattern, I end up with a grayscale image where the background is constant and the characters are dark, so that if I do a threshold I end up with an image that is much easier to process to extract the OCR information. So: poor results without preprocessing versus good results with preprocessing; clearly preprocessing gets you the desired result that you otherwise would not get.

With that demonstration, the objective of the image preprocessing step is to process an image so that the resulting image is more suitable than the original for a specific application. This is important because a preprocessing method that works for one application may not be the best for another; it depends on what you're trying to do in the end. Finally, preprocessing occurs before the application-specific processing, so there might be different preprocessing chains starting with the same grayscale or color image. Once the acquisition step occurs and you get an image out of the camera, you do preprocessing. Common types of preprocessing might be shading correction, deblurring of the image if you know enough about the blur, noise removal, and contrast enhancement. Application-specific processing will then include things like intensity measurement, OCR, pattern matching, gauging, reading a barcode, or measuring the size of particles or objects in the image.

The common enhancement techniques in image processing are broken up into two categories: spatial-domain (pixel-wise) operations and frequency-domain operations. In the spatial domain there are the typical brightness, contrast, and gamma adjustments, the lookup tables you apply to enhance contrast or change the grayscale values, and then there's grayscale morphology and spatial filtering.
We'll look at all of those, and then finally we'll look briefly at what frequency-domain preprocessing can look like.

So what is brightness? At the bottom is the image we started with, along with three sliders for brightness, contrast, and gamma, and a lookup table that plots output pixel values against input values; with no adjustment, it's a straight line. When I add brightness to an image, I simply add a constant value to all the pixels in the image, so the lookup table shifts up by that constant. The point about brightness is that it can improve the appearance of an image, but it's not really useful for most image processing steps: in the best case you're not changing the information already in the image and not adding any more, and in the worst case you're actually losing information. You can see that the pixel values that were in the higher range have saturated, so you've lost the ability to distinguish between those pixels. Not the most useful preprocessing step, but it's commonly present in every image processing toolkit.

Contrast is a little more useful. It's used to increase or decrease the contrast of an image. Normal contrast corresponds to a 45-degree line in the lookup table; higher contrast is steeper than 45 degrees, and lower contrast is shallower. The typical application of contrast is to improve the detection of features in a particular part of the grayscale range: you enhance the features in the range you're trying to pull out, while sacrificing contrast elsewhere. You're definitely losing some information in the bottom and top parts of the lookup table, but that might be acceptable because, for a particular application, the middle range is the range you care about.
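To make the lookup-table idea concrete, here's a minimal sketch in Python/NumPy of brightness, contrast, and gamma as a single 256-entry table; the function names and parameter conventions are mine for illustration, not from any particular vision package:

```python
import numpy as np

def make_lut(brightness=0, contrast=1.0, gamma=1.0):
    """Build an 8-bit lookup table combining gamma (power-law),
    contrast (slope about mid-gray), and brightness (additive offset)."""
    x = np.arange(256, dtype=np.float64)
    y = (x / 255.0) ** (1.0 / gamma) * 255.0   # gamma: nonlinear remap, no interior clipping
    y = (y - 128.0) * contrast + 128.0         # contrast: slope of the line through mid-gray
    y = y + brightness                         # brightness: add a constant to every pixel
    return np.clip(y, 0, 255).astype(np.uint8)

def apply_lut(image, lut):
    """Apply the lookup table to every pixel of an 8-bit grayscale image."""
    return lut[image]

# A brightness-only LUT shifts every value up and saturates at 255,
# losing the distinction between formerly different bright pixels.
img = np.array([[0, 100, 200]], dtype=np.uint8)
bright = apply_lut(img, make_lut(brightness=60))   # [[60, 160, 255]]
```

A contrast-only table, `make_lut(contrast=2.0)`, steepens the line about mid-gray, clipping both ends of the range while expanding the middle.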
Gamma is like contrast except you're applying a nonlinear lookup table, and it has an advantage over the typical linear contrast in that you don't saturate the pixels: you can get higher contrast over one range of values while not completely saturating or losing information in the other ranges. So gamma can be useful both for improving appearance and for being able to process certain parts of the range.

We've talked about lookup tables; let's look at another one here and then review a few others. Lookup tables are a useful way of computing a transformation from one set of grayscale values to another. You can generalize that idea: you can equalize an image. The top image there is the original, and below it is the result of histogram equalization. Histogram equalization redistributes the grayscale values of the pixels so that they are evenly distributed across the entire grayscale range. You can see that the brightest range of the image got made brighter and the lower part got made darker. No information was lost, but the image was remapped so that it filled out the entire range of grayscale values. It enhances contrast, and we'll look at a few reasons why that's useful. There are other lookup tables, like reversing (inverting) an image, and nonlinear lookup tables like squaring an image or, the inverse of that, taking the square root, or raising to other powers.

So where is histogram equalization useful? If I start with an image of an IC component where I'm trying to read the characters on it, and it's a nice image, I could just threshold it and get the characters. However, if I do a histogram equalization first, I get better contrast, better separation between the dark parts and the bright parts, so that when I do a threshold it's less likely to start picking up the dark parts when I'm trying to pick out the bright parts.
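Histogram equalization itself is just a lookup table computed from the image's own histogram. Here is a minimal NumPy sketch of the standard cumulative-histogram method (it assumes the image is not a single constant gray level):

```python
import numpy as np

def equalize(image):
    """Histogram-equalize an 8-bit grayscale image: remap grays through the
    normalized cumulative histogram so values spread over the full 0-255 range."""
    hist = np.bincount(image.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])   # normalize CDF to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)
    return lut[image]                            # equalization is itself a LUT

# A low-contrast image (values bunched between 100 and 120)
# gets stretched across the full grayscale range.
img = np.array([[100, 100], [110, 120]], dtype=np.uint8)
eq = equalize(img)
```

Because the mapping is derived from the histogram rather than fixed numbers, a dark image and a bright image of the same scene equalize to nearly the same result, which is exactly why it helps with lighting variation.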
This example gets more interesting if I look at it under varying lighting conditions. If I have a dark image of the same part, and I just apply a threshold, I would have to invent new threshold parameters to pull out those characters. Instead, if I do a preprocessing step where I equalize the image, the resulting image is more or less the same as the one from the bright image. My application-specific steps, the thresholding and then the OCR, do not change even though the lighting coming in changed. So I can do preprocessing to take care of part-to-part variation or lighting variation in the image. These are all tools that are there to let you handle part or lighting variation, but your goal in a machine vision system should really be to keep everything as constant as possible: lighting conditions, camera conditions, making sure the image is repeatable before you start, to minimize the amount of work you have to do by the time you get to the application-specific step. At any given moment you might get another image with lower contrast; you can again do an equalization. You don't get exactly the same grayscale values you got before, but if you threshold that image with the same thresholding parameters as before, you end up with the same information as you did with the bright image. Conversely, if you have a higher-contrast image, you can again do an equalization and get the same result. Hopefully that makes sense in terms of what lookup tables can do to help standardize the images you process.

Now we'll look at some other functions that are commonly available in image processing tools. One of them is spatial filtering. It's just like 1-D signal processing, which you may be aware of, where high-pass and low-pass filters are used to separate the frequency components of a signal.
The analog of that in 2-D image processing is the ability to do a linear high-pass or a linear low-pass filter. At the top I've shown an image of the gradient in a particular direction. You can compute the variation of the image in different directions, and pick the direction you want to look at to find edges that are aligned a particular way. The gradient is in effect a high-pass filter, because it's picking out the components of the image that vary over a short spatial scale, that is, the high spatial frequencies. For a linear low-pass you can use different kernels; one of the ones shown is the Gaussian filter. Basically, every pixel in the image is convolved with the Gaussian kernel to give you a smoothed version of the image; that's the image on the right. There are also nonlinear high-pass filters, some of which are optimized for various things; a typical nonlinear edge-detection filter is the Sobel transform, which is what's shown here. There are other transforms, like Roberts and Prewitt, that give similar but slightly different results and might be used for different applications. Again, the idea of the high-pass is that it gives you the highly varying information, the boundaries that change over a very short scale. A nonlinear low-pass, instead of doing a smooth averaging over a region of pixels, takes a median, or more generally an order (rank) function: you look at the statistics of a given region around each pixel and replace the value of that pixel based on those statistics, where the function might be the median or another order statistic.
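The linear-versus-nonlinear low-pass distinction can be shown with a tiny neighborhood filter. This is a deliberately naive sketch (a real Gaussian uses a weighted kernel, and edges need proper handling); the point is only the difference between averaging and taking a median:

```python
import numpy as np

def filter3x3(image, reducer):
    """Apply a 3x3 neighborhood 'reducer' (mean, median, min, max, ...) to every
    interior pixel; border pixels are left unchanged for simplicity."""
    out = image.copy().astype(np.float64)
    h, w = image.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = reducer(image[i-1:i+2, j-1:j+2])
    return out

noisy = np.zeros((5, 5))
noisy[2, 2] = 255                         # one bright noise pixel

# Linear low-pass: averaging spreads the spike over the neighborhood (blur).
smoothed = filter3x3(noisy, np.mean)
# Nonlinear low-pass: the median discards the outlier entirely, keeping edges sharp.
despeckled = filter3x3(noisy, np.median)
```

The same `reducer` hook also expresses grayscale morphology: `np.min` is erosion and `np.max` is dilation, which comes up in the next part.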
What is that useful for? Notice that the bright noise particles have been removed without actually changing most of the grayscale information in the larger blobs, and that's really the significance: with linear smoothing you accomplish essentially the same thing, but you blur the edges of the bigger blobs, whereas the nonlinear median function keeps the edges sharp while the linear function smooths everything.

Next is grayscale morphology. Morphology implies changing the shape of objects, and it can be done on grayscale images and on binary images; we'll look at the binary functions later. With a grayscale image you can do a couple of simple things, and then there are more complex steps built on these fundamental functions. The fundamental operations are that you can erode a particle or dilate it. Notice that I start with this grayscale image and perform an erosion: it takes every pixel and replaces it with the minimum of its neighbors in a given region. If you look at the little islands there, the channel becomes narrower and the islands shrink. Dilation is the opposite: it replaces every pixel with the maximum of its neighbors, so it expands the bright regions, and the little island becomes larger. Open and close are steps composed of erosion and dilation. An open is an erosion followed by a dilation, and a close is a dilation followed by an erosion, and they're used for different things. An open is used to eliminate bright specks: the little particles and the spurs on the edges are lost when you do an open, while the general shape of the large blobs is maintained, with only slight changes at the boundary. A close is used for the opposite: there are holes in the large blob here that I can fill in by using a close operation.
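As a sketch of the grayscale erode/dilate pair (3x3 neighborhood, borders left alone; real packages let you choose the structuring element):

```python
import numpy as np

def gray_erode(image):
    """Grayscale erosion: each interior pixel becomes the minimum of its 3x3 neighborhood."""
    h, w = image.shape
    out = image.copy()
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = image[i-1:i+2, j-1:j+2].min()
    return out

def gray_dilate(image):
    """Grayscale dilation: each interior pixel becomes the maximum of its 3x3 neighborhood."""
    h, w = image.shape
    out = image.copy()
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = image[i-1:i+2, j-1:j+2].max()
    return out

# An open (erode then dilate) removes a small bright speck entirely,
# because the erosion wipes it out before the dilation can restore anything.
img = np.zeros((7, 7), dtype=np.uint8)
img[3, 3] = 200                       # isolated bright speck
opened = gray_dilate(gray_erode(img))
```

A close (`gray_erode(gray_dilate(img))`) composes the same two primitives in the opposite order to fill small dark holes.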
Those were all spatial filtering techniques; now we'll move to frequency-domain filtering techniques. Any filtering that can be done in the spatial domain can also be done in the frequency domain, and there is sometimes an advantage to doing it in the frequency domain, either for accuracy or for performance reasons. For standard filtering or band-pass filtering, there are cases where applying it in the frequency domain gives a result that is more optimal than in the spatial domain. The sequence of steps you would normally apply when filtering in the frequency domain is this: take the input image, apply an FFT to it, multiply the resulting complex frequency-domain image by a filter function, and then transform back. In this particular image, with the periodic noise we showed in the previous example, the noise might be introduced by the camera, by cabling, or by lighting variation that happens to be periodic. Because the noise is periodic, it's better to remove this sort of noise in the frequency domain, because there it separates itself cleanly from the rest of the information. When we do a band-stop filter and remove this noise, the resulting image we get back is pretty much the original image, with very little of the loss that would have resulted from other filtering techniques.

Here are a couple more examples of what low-pass filtering can look like. There's an LCD panel where we want to isolate the characters. If you did a threshold on the original grayscale image, you would end up picking up the dark dots in the background. If you do a low-pass filter first, you can remove those and get a very clean image that you can then threshold to isolate the characters for OCR. Another example is low-pass filtering with a Gaussian filter, where we deliberately create a fuzzier image in which the dots blur into each other.
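The FFT, multiply-by-filter, inverse-FFT sequence can be sketched in a few lines of NumPy. For simplicity this example uses a crude circular low-pass mask rather than a true band-stop (a band-stop would zero only the noise peaks and keep the rest); the stripe pattern stands in for periodic noise:

```python
import numpy as np

def remove_periodic_noise(image, keep_radius=3):
    """Crude frequency-domain low-pass: FFT the image, zero all frequency
    components farther than keep_radius from DC, and inverse-FFT back.
    (A band-stop filter would instead zero only the noise peaks.)"""
    F = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h // 2) ** 2 + (xx - w // 2) ** 2)
    F[dist > keep_radius] = 0                    # discard high-frequency content
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

# A pure high-frequency stripe pattern (alternating columns) is almost
# entirely removed, while the mean (DC) level of the image is preserved.
x = np.arange(32)
stripes = 100 + 50 * np.cos(np.pi * x)           # columns alternate 150, 50
img = np.tile(stripes, (32, 1))
clean = remove_periodic_noise(img, keep_radius=3)
```

Because the stripes concentrate all their energy at one spot in the spectrum, zeroing that spot removes them completely, which is exactly why periodic noise is easier to handle in the frequency domain.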
This is again an example where the appearance of the image appears to be degraded, but the image is actually more suitable for processing: now when you threshold the image, the characters stand out as single objects, as opposed to each dot being a single object. So preprocessing can be counterintuitive, in that something that degrades the so-called appearance of the image for the user is actually the right thing to do for the machine vision application.

Here are some examples of high-pass filtering, the opposite of what we just talked about. Typically what you'll be doing with a high-pass filter is detecting edges. There are some IC components on a PCB, and if you need to see the outlines of those components, you can pick them out with a high-pass filter in a single step. A laptop inspection might be an application where you're actually finding the locations of the parts just by detecting their outlines. And here is an application like the one we looked at before, reading the characters on IC components: even though there are lighting variations and things changing between the top row and the bottom row of these components, when you sharpen the image you get good contrast for all those parts, and now you can threshold each of those parts and read the characters.

In the next section we'll be talking about image calibration. What is calibration? Calibration corrects for a couple of things we'll be talking about today: lens distortion and perspective distortion. It allows the user to take real-world measurements from an image based on the pixel locations in it. I've shown in a sketch here a camera and lens, and the angle of the camera to the object being viewed. Perspective distortion is what you get when the sensor is not looking straight down but at an angle: it gives you a widening of the image on one side relative to the other. There may also be an unknown orientation, since the entire camera might be rotated with respect to the part you're looking at.
That's something you can correct for, and then there is the basic scale question: how many pixels correspond to a particular unit of distance, say pixels per inch; that is spatial calibration. So what does it take to calibrate? Generally, what you'll be doing is acquiring an image of a calibration target, a grid of dots with known real-world distances between the dots. The software package you're using (almost all commercial packages provide this) learns the calibration mapping information from that image, including the perspective and distortion errors present in that particular optical setup, and from then on the software can apply what it learned to subsequent images of that same field of view. In the image on the right you can see the perspective distortion, and given the known distances dx and dy between the dots, the software can learn the correction. Here's an example: an image of some coins with both perspective distortion and nonlinear lens distortion, which you can then correct in software to give you an image where all the coins have the right size, even though they're in different parts of the image.

There are caveats to calibration, and you need to be sure your expectations of what you want from a calibration are realistic. The spatial calibration we're talking about here applies to a single plane. Depending on where you lay the grid of dots, the calibration is going to change if you change the plane, so it strictly applies only to that plane, and it only corrects for lens and perspective distortion in that plane. Also, calibration does not improve the resolution of the measurement: if you don't have the resolution to make a particular measurement, given your camera and your lens, you cannot improve that with calibration.
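In the simplest case (camera square to a single plane, negligible distortion), spatial calibration reduces to learning one scale factor from the dot grid. The numbers below are hypothetical, purely to show the unit conversion; real packages learn a full perspective/distortion mapping rather than one number:

```python
def learn_scale(pixel_spacing, real_spacing_mm):
    """Learn a simple single-plane scale factor in mm/pixel from a dot grid
    whose real-world spacing is known. (Assumes no perspective or lens distortion.)"""
    return real_spacing_mm / pixel_spacing

def pixel_to_mm(pixel_distance, scale):
    """Convert a measured pixel distance into real-world millimeters."""
    return pixel_distance * scale

# Hypothetical numbers: grid dots 10.0 mm apart appear 40 pixels apart.
scale = learn_scale(40.0, 10.0)        # 0.25 mm per pixel
width_mm = pixel_to_mm(220.0, scale)   # a feature spanning 220 px measures 55 mm
```

Note what this does not give you: sub-pixel resolution. If two features are less than one pixel apart, no scale factor recovers the difference, which is the resolution caveat above.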
Generally, calibration also cannot compensate for poor lighting or unstable conditions; that's not the job of calibration and image processing. There are other types of calibration we won't be covering: for example, 3-D calibration; nonuniform-lighting calibration, which basically corrects for lighting variation across the different parts of an image; and color calibration, so that you get the same color information from one image to the next.

This is the part where we'll be talking about looking at particles and making intensity measurements, the more application-specific algorithms that use image processing. Before we get into that, let's define some things. What is a histogram? A histogram provides the quantitative distribution of the pixels in an image. Irrespective of the locations of pixels in the image, if you count the number of pixels at each particular grayscale value, you can create a chart that shows you what the range of values is and how many pixels sit at each grayscale level. This is important because when you do certain steps like thresholding, you want to be looking at a histogram to understand where to apply the threshold. The thresholding step then converts each pixel in the image into either a value of zero or one, based on its original value, and it's used to extract the structures you care about. So here's a grayscale image with a wide range of values, and say I want to keep only the bright pixels. I would typically examine a few instances of this same image; you want to look at the repeatability and be sure you are looking at more than one example. Then I look at the histogram and decide that all pixels above a certain value, 125 in this case, are set to one, and all pixels below 125 are set to zero.
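The histogram-then-threshold pair described above is a few lines of NumPy; this is a generic sketch of the idea, not any package's API:

```python
import numpy as np

def histogram(image):
    """Count how many pixels fall at each of the 256 gray levels,
    irrespective of where they sit in the image."""
    return np.bincount(image.ravel(), minlength=256)

def threshold(image, value):
    """Manual threshold: pixels at or above `value` become 1, the rest 0."""
    return (image >= value).astype(np.uint8)

# Dark background pixels cluster near 30-40, bright object pixels near 200-220;
# the histogram shows the two clusters, and 125 cleanly separates them.
img = np.array([[30, 40, 200],
                [35, 210, 220]], dtype=np.uint8)
hist = histogram(img)
binary = threshold(img, 125)
```

The two-sided variant mentioned next is the same comparison done twice: `((img >= lo) & (img <= hi)).astype(np.uint8)`.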
When I apply that step, I get this result: a binary image where all the pixels that were bright are represented in red. They're not really red; they just have value one, with the other pixels turned off, and the overlay shows them in red. Thresholding can get a little more complex. There are special cases where you have dark objects, gray objects, and bright objects, and the gray objects are the ones of interest: you can set a lower bound and an upper bound on the values that you will threshold. In this case, anything below 80 is sent to black, anything above 155 is also sent to black, and pixels in the range between 80 and 155 are set to one.

There are much more interesting, automated ways of doing thresholding that you should be aware of. Some of these are proprietary implementations; it depends on which package you use. Starting with the grayscale image, I can do a manual threshold with a particular value and get this result; but I could also do what's called an automatic threshold with clustering. Given that I have two separate regions in the histogram, the dark pixels and the bright pixels, I can use an algorithm to separate those clusters and figure out the optimal threshold to apply for this particular image. I get a pretty much identical result by applying the clustering algorithm to this image. There are other automatic methods that use moments or other statistical measures of the histogram. Each of these techniques is specific to a particular application and should be used only when needed.
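One well-known clustering approach of this kind is Otsu's method, which picks the threshold that best separates the two histogram clusters by maximizing between-class variance. The transcript doesn't name the specific algorithm the package uses, so treat this as one representative example:

```python
import numpy as np

def otsu_threshold(image):
    """Automatic threshold by clustering (Otsu's method): choose the gray level
    that maximizes the between-class variance of the two resulting clusters."""
    hist = np.bincount(image.ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, -1.0
    w0, sum0 = 0.0, 0.0
    for t in range(256):
        w0 += hist[t]                  # pixels in the dark class (levels <= t)
        if w0 == 0:
            continue
        w1 = total - w0                # pixels in the bright class
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0                # mean of dark class
        mu1 = (sum_all - sum0) / w1    # mean of bright class
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Two well-separated clusters: the chosen threshold lands between them.
img = np.array([10, 12, 14, 200, 202, 204], dtype=np.uint8)
t = otsu_threshold(img)
```

Like any clustering threshold, this assumes the histogram really is bimodal; on the LCD-panel image below, where the background gradient smears the histogram, a global method like this fails and a local threshold is needed.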
There are a couple of other useful things you can do in special cases. Here's an image of an LCD panel where the background varies from the top-left corner to the bottom-right corner. If I apply a single global threshold value and try to isolate the characters, I get a poor result. What I can use instead is called a local threshold. All the statistics we talked about above are computed globally; that is, the histogram is for the entire image. With a local threshold, the algorithm instead defines a region, examines the histogram for that particular region, and computes the optimal threshold for that region. As you go from the top-left corner to the bottom-right corner and the intensity changes, the threshold value adapts accordingly, and that gives you a much more accurate result for the LCD panel. These are advanced thresholding techniques, but the concepts are all the same: how do we best separate the interesting objects from the background?

Before we get into more interesting binary processing, let's talk about the concepts of particles and connectivity. Thresholding creates binary images, whose pixels are either zero or one. A particle in an image is a group of connected one-pixels; these are the interesting objects in the image. We have some simple definitions here. Connectivity-4 is the case where two pixels are considered part of the same particle only if they are adjacent horizontally or vertically. The center pixel shown there is, under connectivity-4, connected to the two bright pixels above and below it and the two to its right and left. Connectivity-8 adds the diagonals: the example shown includes both the horizontal and vertical neighbors and the diagonal neighbors. This matters because some of the operations we'll look at behave differently depending on whether particles are defined with connectivity-4 or connectivity-8. Let's look at a couple of examples. If I count the particles in this image using connectivity-4, based on the definition of horizontal and vertical connectivity, you'll see that there are three particles.
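Particle counting under the two connectivity rules can be sketched with a flood fill; this is a generic illustration in plain Python, not any package's particle-analysis API:

```python
from collections import deque

def count_particles(binary, connectivity=4):
    """Count groups of connected 1-pixels using breadth-first flood fill.
    connectivity=4 uses horizontal/vertical neighbors; 8 adds the diagonals."""
    if connectivity == 4:
        steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    else:
        steps = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                 if (dy, dx) != (0, 0)]
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if binary[y][x] == 1 and not seen[y][x]:
                count += 1                          # new particle found
                queue = deque([(y, x)])
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    for dy, dx in steps:
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] == 1 and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return count

# Two blobs touching only diagonally: two particles under connectivity-4,
# but a single particle under connectivity-8.
img = [[1, 0, 0],
       [0, 1, 1],
       [0, 0, 0]]
```

The same image giving two answers depending on the connectivity rule is exactly the effect described in the lecture.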
If you count the particles with connectivity-8 instead, you get fewer particles, because blobs that touch only diagonally are now merged into one. This matters as we move into binary morphology: morphology is the step that allows you to change the shape of particles, and the functions you apply to the image change the particles differently based on whether they are defined with connectivity-4 or connectivity-8.

Binary morphology extracts and alters the structure of particles in a binary image. These operations are generally used to remove unwanted information from the image: things like noise particles, particles touching the border of the image, particles touching each other, and particles with uneven borders. You can think of it as a cleanup or separation step. This follows the explanation we gave before for grayscale, but with binary images the picture is clearer. Erosion decreases the size of an object: it removes the outermost layer of pixels from a particle's boundary, eliminates pixels isolated in the background, and removes narrow peninsulas. You can use erosion to separate particles for counting. In this image there are lots of little particles that you may not care about in the application, and the erosion gets rid of those; it also creates separation between particles. Notice that where these two particles were touching, the erosion broke them into separate particles. Erosion does change the definition of the particles, so you need to be careful about what you remove when using it. Dilation is the opposite: it increases the size of an object in an image, adding a layer of pixels around the boundary, including the inside boundary of objects with holes; it's not exclusive to the outside, since it applies to every pixel in the image. It's generally used for eliminating tiny holes in an object and removing narrow gaps.
up with a lot of the little particles dilated into bigger particles, and particles that were barely touching, like these here, merge and become large. So if you accidentally broke something apart, you can rejoin it with a dilation. To review again: erosion removes a layer of pixels from a particle, and dilation adds a layer of pixels. There are more complex steps, called open and close, that build on these basic binary morphology operations. Open is an erosion followed by a dilation. It removes small particles and smooths the boundaries of larger particles, and it does not significantly alter the size of particles. The word "significantly" is important: it does alter the size, just generally not significantly. The borders removed by the erosion are generally replaced by the dilation. However, it is not perfect: if you look at the little regions and peninsulas, some did not get removed in the cleanup, and particles connected only by a narrow connectivity-4 bridge can be removed when you do the erosion followed by the dilation. For the most part the larger particles are not affected, and you are left with a cleaner shape. If you are measuring area rather than counting the number of particles, this is an interesting step to apply, but you have to be careful that it does not introduce a bias, because it is a two-step process.
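As a hedged sketch of the open operation just described, again using SciPy as an assumed stand-in for the course's unnamed package: a 5x5 particle with two single-pixel noise particles is opened, the noise disappears, and the particle keeps roughly, but not exactly, its original size (the corners are rounded off, which is the "not significantly" caveat from the lecture).

```python
import numpy as np
from scipy import ndimage

# Binary image: one large 5x5 particle plus two single-pixel noise particles.
img = np.zeros((9, 9), dtype=bool)
img[2:7, 2:7] = True          # 5x5 particle (25 pixels)
img[0, 0] = img[8, 8] = True  # noise pixels

# Open = erosion followed by dilation: removes the noise, smooths the
# boundary, and slightly rounds the corners of the large particle.
opened = ndimage.binary_opening(img)
```

The opened particle has 21 pixels instead of the original 25: the four corners were removed by the erosion and not restored by the dilation, a small but measurable size change.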
was not small enough for the dilation to fill it in. Close does not significantly alter the size or shape of particles, and particles that do not connect after the dilation are not changed. Again, it is not perfect in terms of preserving the area. It is used to eliminate small holes due to noise. There are advanced morphology steps that go beyond erode and dilate, and they are used for a lot of different things. These functions will do things like separate particles, with algorithms that are optimized for that and are not as simple as an erosion and a dilation; remove particles below or above a given size; and keep or remove particles identified by morphological parameters. This is important because you don't have to just look at size: there are length, width, aspect ratio, and dozens of other parameters that you can compute for particles, and you can filter based on all of those. They can also fill holes, like the one shown in the previous image; sometimes a close does not actually fill a hole, and there are operations specifically designed to fill in all the holes in a particle. They can remove particles touching the borders of the image, and segment the image into its various touching particles, which is a very advanced operation. So we'll talk a little bit about particle filtering. Particle filtering keeps or removes particles based on the geometric features measured in the image: you can use area, aspect ratio, and other features to remove particles. For this particular image, I took a grayscale image, thresholded it, and now I want to remove the small particles; I only care about the really bright ones. Even though I tried to threshold only the really bright regions, I applied a safe threshold
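To make the close operation and the hole-filling distinction concrete, here is a small sketch under the same assumed SciPy toolchain: a particle with a 1-pixel hole and a 3x3 hole. The close fills the tiny hole but, as the lecture notes, only partially fills the larger one; a dedicated fill-holes operation fills both.

```python
import numpy as np
from scipy import ndimage

# A solid particle with a 1-pixel hole and a larger 3x3 hole,
# surrounded by a background border.
img = np.ones((7, 11), dtype=bool)
img[0, :] = img[-1, :] = img[:, 0] = img[:, -1] = False  # background border
img[3, 2] = False       # 1-pixel hole (noise)
img[2:5, 6:9] = False   # 3x3 hole

# Close = dilation followed by erosion: fills the small hole,
# but the 3x3 hole is not small enough to be filled completely.
closed = ndimage.binary_closing(img)

# A dedicated hole-filling operation fills enclosed holes of any size.
filled = ndimage.binary_fill_holes(img)
```

This is the same behavior as in the slide: the close is a smoothing step, not a guaranteed hole filler, which is why separate fill-holes functions exist.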
level, so some of the other small particles made it into the binary image, and I remove them by filtering on size. So particle filtering is used to clean up noisy images. We'll finish with a couple of application examples. This particular case is counting the number of non-broken pills in a blister pack. We start with a grayscale image like this; here's an example where there is a little bit of saturation in the image, but as we discussed earlier, that doesn't matter for what we are looking for. So I find a threshold for the image and I get something like this, where I see all the pills but also a lot of other information. I can then do filtering based on area, and then I can count the number of full particles: the whole pills. This is a real application based on everything we've covered; area is the only parameter we happened to use here, but we could have used other parameters of the particles. We'll look at one more example. So far we have only worked with grayscale images, but suppose you have pictures of different fabrics and you want to identify the quality, or do some sort of advanced quality inspection on these regions. You can take a color image and extract grayscale information out of it, by simply working on the luminance plane or on one of the color planes. Then, based on that grayscale image, I can do a simple threshold to pick the regions out of the background, and each of those regions becomes a mask. So I apply the binary image back to the grayscale image, I choose the regions I want to examine, and for each of those mask regions I compute
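The blister-pack pipeline above (threshold, label particles, filter by area, count) can be sketched as follows. This is a hypothetical reconstruction with synthetic data, not the course's actual application: the image, the blob sizes, and the MIN_PILL_AREA threshold are all invented for illustration.

```python
import numpy as np
from scipy import ndimage

# Synthetic stand-in for a thresholded blister-pack image:
# three "whole pills" (large blobs) plus some small noise particles.
img = np.zeros((20, 30), dtype=bool)
img[2:7, 2:7] = True      # pill 1
img[2:7, 12:17] = True    # pill 2
img[12:17, 2:7] = True    # pill 3
img[10, 25] = True        # noise particle
img[15, 20:22] = True     # noise particle

# Label connected particles, then measure the area of each.
labels, n = ndimage.label(img)
areas = np.array(ndimage.sum(img, labels, range(1, n + 1)))

# Hypothetical area threshold, set from the known pill size:
# only particles at least this large count as whole pills.
MIN_PILL_AREA = 20
whole_pills = int((areas >= MIN_PILL_AREA).sum())
```

As in the lecture, area is the only filtering parameter used here; in a real system you could just as well filter on aspect ratio or other particle measurements.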

a histogram, and the histogram gives me essentially a signature. So for the different fabric regions, after applying correction factors, you get entirely different histograms, and you could be looking at how the quality changes in those particular regions. It's a fairly realistic example based on these tools. So this concludes section 1 of the course, and we'll start section 2 next.
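The masked-histogram "signature" idea can be sketched in a few lines; this is an illustrative example with synthetic data (the image, mask, and bin count are assumptions, not taken from the course). The key step is indexing the grayscale image with the binary mask so the histogram covers only the pixels of one region.

```python
import numpy as np

# Hypothetical grayscale image with two regions of different brightness.
rng = np.random.default_rng(0)
gray = np.empty((8, 16), dtype=np.uint8)
gray[:, :8] = rng.integers(0, 64, (8, 8))     # dark fabric region
gray[:, 8:] = rng.integers(192, 256, (8, 8))  # bright fabric region

# Binary mask selecting the first region (as produced by a threshold step).
mask = np.zeros_like(gray, dtype=bool)
mask[:, :8] = True

# Histogram of only the masked pixels: a "signature" for that region.
hist, _ = np.histogram(gray[mask], bins=16, range=(0, 256))
```

Computing the same histogram with a mask over the bright region would give a completely different signature, which is how the two regions can be told apart for quality inspection.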
