
Image Processing Fundamentals
Part 2


As part of the AIA Certified Vision Professional-Basic program, Dr. Romik Chatterjee, Vice President of Engineering at Graftek Imaging, Inc., teaches the fundamentals of vision and imaging algorithms. You'll learn about the software side of vision system requirements and its role in solving application challenges.



Transcript

Welcome back to Section 2 of Image Processing Fundamentals. We will be looking at algorithms used for locating regions or parts in images, and we'll start with a popular topic: pattern matching.

So what is pattern matching? It locates the regions of a grayscale image that match a predefined template. It also calculates a score for each matching region, so you can verify the quality of the match, and it generally returns coordinates and, in some cases, the rotation angle and scale of each match. For example, say I have a fiducial mark on the side of a PCB. As the part rotates and its presentation changes, I need to be able to locate that fiducial mark. Even with a little obstruction in the image, so that the region is not exactly the template, it still matches well enough; and as the part rotates and moves around and the lighting conditions change, it still matches. The fact that matches can vary this much is what makes pattern matching such a powerful tool.

There are many applications of pattern matching. Presence detection: you look for a particular part of a certain shape. Counting: once you've detected the presence of a part, you increment a counter. Alignment: this is a very critical step, and we'll be talking more about it later. And inspection: you're basically checking whether certain features are there, or whether the quality of a certain region matches your expectations, without defining too precisely what you're looking for.

So how does it work? It's usually a two-step process. First, you learn a template from a given set of images. While learning the template, you extract information that uniquely characterizes that pattern. This is really important, because you don't want information that fails to uniquely characterize it; we'll see some examples of how to better define a template. The tool then organizes that information to facilitate a faster search. Even though most pattern matchers, as in this case, are correlation-based, there are proprietary enhancements that make the search faster, especially when your images are large. Step two is the part you'll run over and over again, for every image you give to the tool: it takes that template and tries to find the matching regions. The emphasis in this step is on search methods that quickly locate matches; there is a trade-off between how accurately you locate a match and how quickly you can locate it. There are also strategies for refining a match once you've found it: once you've located it coarsely, you can go back and determine exactly how much it is rotated and how well it matches.

There are different ways to perform pattern matching based on the information extracted from the template. The two common methods are correlation-based pattern matching, which relies primarily on the grayscale information in the template and computes a normalized cross-correlation between the template and the image to measure how well they correlate; and geometric pattern matching, an advanced method that does not look at all of the grayscale information, but instead extracts the edges and geometric features of the template and locates only those features in the image it is searching.

So what is correlation-based pattern matching? Say I define a template from the top of a capacitor on a printed circuit board. The tool uses that grayscale information, which is now present in multiple parts of the image, to directly find all of the capacitors.
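The normalized cross-correlation at the heart of correlation-based matching can be sketched in plain NumPy. This is a slow reference implementation run on a synthetic image, not the optimized, proprietary search a commercial tool uses:

```python
import numpy as np

def ncc_scores(image, template):
    """Normalized cross-correlation score of the template at every offset."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t * t).sum())
    out = np.zeros((image.shape[0] - th + 1, image.shape[1] - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            win = image[y:y + th, x:x + tw]
            w = win - win.mean()
            denom = np.sqrt((w * w).sum()) * t_norm
            # Normalizing by local statistics is what makes the score
            # tolerant of overall lighting changes.
            out[y, x] = (w * t).sum() / denom if denom > 0 else 0.0
    return out

# Synthetic search image: low-level noise with a distinctive patch pasted in.
rng = np.random.default_rng(0)
image = rng.integers(0, 50, (100, 100)).astype(float)
template = rng.integers(100, 255, (20, 20)).astype(float)
image[40:60, 30:50] = template

scores = ncc_scores(image, template)
idx = np.unravel_index(scores.argmax(), scores.shape)
y, x = int(idx[0]), int(idx[1])
print((x, y), round(float(scores.max()), 2))  # → (30, 40) 1.0
```

Because the score is normalized, adding a constant brightness offset to the whole image leaves the match score unchanged, which is the lighting tolerance described above.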
Even though some lighting variation is allowed, it is generally using all of the grayscale information in the template. What it computes is a normalized cross-correlation between that template and the various parts of the image, to find where it correlates best, and that is done in an optimized way that makes these things run very quickly. The standard scores generally range from 0 to 100: 100 would be a perfect match, and 0 would be no match at all, essentially no correlation.

So what is a good template, what is a bad template, and when should you use correlation-based pattern matching? The template in this case is characterized primarily by its grayscale information. Correlation-based pattern matching can work under different lighting conditions: because you are computing a normalized cross-correlation, even if the overall intensity of the image changes between the time you defined the template and the time you search, the normalized cross-correlation still picks out the right piece of information. On the other hand, because you are taking all of the grayscale information rather than extracting features, and not applying any advanced algorithms to figure out where things are, it does not handle occlusion or scale changes very well: it looks for exactly the grayscale information you gave it. So this here is a good template, while this one is a bad template, because its information is not unique; the dark pixels here could be shifted and still match. If you look at the level at which this image was taken, there is not much uniqueness, and you could get a lot of false pattern matches with a template like that. If you wanted to make it unique, you would include a boundary or something else that would allow you to uniquely define that particular region.

When should you not use correlation-based pattern matching? When you have nonuniform lighting, which changes the correlation scores so that you may not be able to pick out the parts. When you have occlusion, generally more than about 10%, which means another part or region of the image is blocking the part you are trying to identify. And when you have scale changes; this is a tough one. If for some reason you cannot control the presentation of your part, the image may change in scale, and even when it looks obvious to the eye that the part is easy to find, unless you handle the scale change, for example in a preprocessing step, you are trying to fit a template that was defined at a different scale, and it will not be found.

So what do you do in those cases? This is where geometric pattern matching kicks in and helps. It is a tool that helps you locate parts with distinct edge information. What you do not want to do is use it when the template is predominantly defined by texture, that is, by grayscale information; you really want clean edge information, as in all of these images here, where the background is white and the part is dark, so there is distinct edge information; we will see how that is used. If you had a lot of grayscale variation in these images, you would want to use correlation-based pattern matching instead. Geometric pattern matching is very tolerant of occlusion, which is a really good reason to use it for locating things like machined parts. It handles scale changes, because you have defined the particular shape of the object you want; the definition is not based on a group of pixels, so it can look across all kinds of different scales, and even though there are limits, in general scale changes are handled very well. It tolerates nonuniform lighting: as long as you can extract the edge information well, even if your lighting changes, you will be able to pick up the edges and find the part. And lastly, it tolerates background changes: even though you might use the image on top, with its very clean white background, to define the template, when the background changes, as shown in the bottom two images with more gray information, you can still extract the edge information, and the step works just as well as it did before.

So what is geometric pattern matching? It is feature-based: you take an image and extract the curves in it. This is all done behind the scenes for you with a curve-extraction tool. Then you generally go beyond that to extract features; again, this can be done behind the scenes, or you can define your own features. In this case the step would find the circles and the parallel lines, and when it tries to match this template to a given image, it computes the score with which the circles are matched and the parallel lines are matched; it keeps scores for the individual features too. So you can either use a standard set of features, or define the features you want the geometric matcher to use.

As a quick comparison between the capabilities of the two pattern matching methods, and when to use and not use each: if there is texture-like information, you want to use correlation-based pattern matching, not geometric pattern matching. Geometric pattern matching can be used for most of the other applications. If you are dealing with multiple match locations, you can use either method.
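As an illustration of the feature side, here is a minimal least-squares circle fit (the Kasa method, chosen here for illustration; commercial geometric matchers use their own proprietary feature extraction) that recovers a circle feature from a set of extracted edge points:

```python
import numpy as np

def fit_circle(pts):
    """Least-squares (Kasa) circle fit to an array of 2-D edge points."""
    x, y = pts[:, 0], pts[:, 1]
    # Linearized model: x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2)
    A = np.column_stack([2 * x, 2 * y, np.ones(len(pts))])
    b = x * x + y * y
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx * cx + cy * cy)
    return cx, cy, r

# Edge points sampled around a circle of radius 5 centred at (10, 20).
t = np.linspace(0, 2 * np.pi, 12, endpoint=False)
pts = np.column_stack([10 + 5 * np.cos(t), 20 + 5 * np.sin(t)])
print([float(round(v, 1)) for v in fit_circle(pts)])  # → [10.0, 20.0, 5.0]
```

The same fit is what turns the individual edge points found by a circular rake (covered later in this section) into a measurable circle.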
But if you are dealing with scale or occlusion variations, you want to use only geometric pattern matching, and only geometric pattern matching will give you good results under nonuniform lighting conditions, because correlation-based pattern matching looks at the grayscale values.

The next topic is coordinate systems. Before we get into that, let us define what we call a region of interest; you will hear this term a lot in connection with image processing steps. Everything we have looked at so far operated on the entire image; from here on we can talk about operating on just a section of an image, defined as a region of interest. A region of interest is a portion of an image on which an image processing step may be performed. It can be defined statically, fixed and drawn by hand or placed with respect to a corner of the image, or dynamically, based on features located in the image; you might say, find the horizon, then take only the region around it. In this example, if I take the histogram of the entire Lena image and then the histogram of just the face, I get two very different pieces of information. If I want to threshold, say to extract just the eyes or measure the size of the pupils, there is really no reason to compute the threshold from the entire image, because most of it is not relevant to what I am trying to measure. Instead I want to isolate the face, or maybe just the eye, and find the threshold based on the histogram of that portion of the image. That is what regions of interest are useful for.

In general, coordinate systems are defined by a reference point, the origin, and an angle with respect to the image; either the angle or the X and Y axes can be user-defined. What does a coordinate system let you do? It allows you to define a search area that moves around with the objects in an image. So if the part is moving, and you are given its exact location as it moves around, and you want to examine only one part of that object, you can define a way to reach that particular part of the object for the inspection.
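The idea of deriving a threshold from a region of interest rather than from the whole image can be sketched like this; the synthetic image and the ROI coordinates are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic scene: a dark background with a brighter rectangular region,
# standing in for the "face" region in the example above.
image = rng.integers(0, 80, (200, 200)).astype(float)
image[60:120, 70:130] = rng.integers(150, 230, (60, 60))

# A rectangular region of interest is just a slice of the image array.
roi = image[60:120, 70:130]

# A threshold derived from the whole image is dominated by background
# pixels that are irrelevant to the region we care about; one derived
# from the ROI reflects only the relevant intensities.
global_threshold = image.mean()
roi_threshold = roi.mean()
print(round(float(global_threshold)), round(float(roi_threshold)))
```

Comparing the two printed values shows why thresholding "just the face" behaves so differently from thresholding the entire image.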
Pattern matching is used to locate a feature of the object, and then, based on the feature you found in the image, you establish a coordinate system: you define an origin based on an easy-to-find reference feature. For example, looking at this battery clamp, I define the origin at the center of one of the poles that hold the clamp, and I set up a coordinate system, X and Y axes, based on exactly where I located that pattern. Based on that, I can define an inspection, here measuring the diameter of the inside of the clamp and the width of the clamp at the part. When the part rotates or flips around, the pattern match automatically locates it and determines how much it rotated, the coordinate system is set up again, and the inspection is performed with respect to the rotated and re-oriented part. All of this is done automatically, so you can handle, within reason, the movement and rotation of a part within an image when you cannot control the presentation of the part.

Now we move on to the next section, measuring features of objects: detecting edges, measuring distances, and calculating dimensions from them. Edge detection is a really critical step in image processing, very simple to understand, and fundamental to a lot of what machine vision and image processing do. It is the process of detecting transitions in an image, and it is one of the most commonly used machine vision tools, mostly because it is simple to understand, it is localized processing that is very fast, and it is generally applicable to many applications, as the examples will show. The parameters that go into defining what an edge is are: the contrast, which is the change in pixel value as you go from one pixel to the next; the width, which is the size of the region bordering the edge over which the mean pixel values are computed; and the steepness, which is the slope of the edge. When you go from a low pixel value to a high pixel value, the edge is located where the steepness of the curve satisfies the parameters used to find it. As you get different illuminations, a gray background, a white background, a slightly lighter background, and different edge polarities, a rising or a falling edge depending on which way you run the search, you can get different results for these edge parameters.

To detect edge points along a line, the basic operation goes through these steps. Take the pixel values along a predefined line and plot the grayscale values along that line. Then compute gradient information; to do that, I use a spatial kernel, and here is where the width becomes important: with a kernel of width one, I compute the difference between adjacent pixels, and with a slightly wider kernel the differences are averaged over that width. That turns the pixel-value information into a gradient curve, and the peaks of that curve represent the edge locations. Note that these are all different edges: I can ask for the first edge, the first and last, the best, rising edges or falling edges, dark-to-bright or bright-to-dark, lots of different options based on exactly what I need, but this is the basic set of steps you go through to find edge information along a line. Finally, there is a term, "subpixel accuracy", that gets thrown around; what is it?
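The steps just described, sampling pixel values along a line, convolving with a difference kernel, and taking the gradient peak as the edge, can be sketched as follows; the profile values are invented for illustration:

```python
import numpy as np

# Pixel values sampled along a search line: dark region, a rising edge,
# then a bright region.
profile = np.array([10, 12, 11, 13, 60, 140, 200, 205, 203, 204], dtype=float)

# Convolve with a two-point difference kernel (kernel width of one):
# each output sample is the difference between adjacent pixels.
gradient = np.convolve(profile, [1, -1], mode="valid")

# The edge sits where the gradient magnitude peaks; the peak height is
# the contrast, and how sharply it rises reflects the steepness.
edge_index = int(np.abs(gradient).argmax())
contrast = gradient[edge_index]
print(edge_index, float(contrast))  # → 4 80.0
```

A wider kernel would average several pixels on each side before differencing, which is exactly the role of the width parameter above.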
Take the information we had in the previous image and look again at the gradient computed along the line. Normally you would identify the peak value, at roughly pixel 77 here, as the edge location. Instead, I can fit a parabola to the set of points I found, and the peak of that parabola turns out to be shifted just slightly to the right of the three points it was fitted to. So now I have located the edge slightly to the right of where I observed the peak, and with a lot of assumptions and hand-waving I can claim to have located that edge to, let us say, one tenth of my actual pixel resolution. Even though I do not have another pixel there, I can interpolate with that model, and that is what gives subpixel accuracy. You do not really have more resolution; you are simply interpolating information with a model to create that subpixel accuracy. If you really need the accuracy, you should get a better camera, lighting, and lens setup instead of trading on this. It is useful for lots of things, like repeatability, but it does not improve the true resolution.

We have talked about the simple line-based edge detection tools; you can do the same thing with circular geometry. There are higher-level tools, built on groups of line-based edge detectors, called rakes, which are used to find multiple edges and then fit a line or curve through them. In this case I have a circular rake: a group of lines radiating outward from the center, and based on all the edge points it finds, I can fit the circle of the inside of the floppy disk hub. A couple of other examples: I can find edges along an arc and from them determine the angular orientation of this speedometer needle; and here is a simple group of rakes running from left to right, where I fit a line to the linear edge found in that part.
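The parabola-fit refinement can be sketched directly: fit a parabola through the peak gradient sample and its two neighbors, and take the parabola's vertex as the subpixel edge location. The sample values here are invented:

```python
import numpy as np

def subpixel_peak(g, i):
    """Refine a gradient peak at integer index i by fitting a parabola
    through (i-1, i, i+1) and returning the vertex location."""
    y0, y1, y2 = g[i - 1], g[i], g[i + 1]
    denom = y0 - 2 * y1 + y2
    if denom == 0:
        return float(i)          # flat top: no refinement possible
    return i + 0.5 * (y0 - y2) / denom

# Gradient samples around an edge; the true peak lies between samples.
gradient = np.array([5.0, 40.0, 90.0, 80.0, 10.0])
i = int(gradient.argmax())        # coarse peak at index 2
print(round(float(subpixel_peak(gradient, i)), 3))  # → 2.333
```

The refined location lands a third of a pixel to the right of the observed peak, just as in the example above, because the neighbor on the right is higher than the one on the left.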
Rakes have configurable search directions and subsampling ratios, so I can have many rake lines in a given area, or only a few if I want, as well as display settings, so I can show the various edges found or the lines fitted through those edges. Let us look at some examples of what straight-edge (line) detection is used for; this is the natural extension of the single-point edge detection we just covered. Straight-edge detection is generally based on a group of single-line edge detections, the rake-based detection we talked about; there are also other techniques, called projection-based methods, which we will not cover in detail, that look at the whole image in a different way and can also find straight edges. Here are a couple of applications to walk through. In one, I am looking for the locations of the fins: based on the edges found, I can find the vent fins and check the alignment of the entire setup with respect to the horizontal plane. Here is an application using rakes for gauging, measuring a gap: based on a group of edge-detection lines, a rake, I find the edge points on each side, fit a line to each side, and compute the distance between them; that is an advanced geometry step that comes after the straight-edge detection. Another gauging example is finding the gap between the edges of a label: by this time we know we are looking for straight lines and then finding the gap between them. And here is a more advanced example, finding a defect in a textured fabric. This is an example of where you would not use the rake-based method, because the rake-based method would find a lot of edges here; you need a different class of technique.
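A rake can be sketched as a set of parallel search lines, an edge detection on each, and a least-squares line fit through the detected points. The image here is synthetic, and a real rake tool also handles search direction and edge polarity:

```python
import numpy as np

# Synthetic image: dark background (20) and a bright region (200) to the
# right of a slightly slanted vertical boundary, the straight edge we want.
h, w = 50, 50
image = np.full((h, w), 20.0)
for y in range(h):
    boundary = 25 + 0.1 * y              # true edge drifts rightward with y
    image[y, int(round(boundary)):] = 200.0

# Rake: sample every 5th row (the subsampling ratio), detect the edge
# point on each search line, then fit a line x = m*y + b through them.
ys = np.arange(0, h, 5)
xs = [int(np.abs(np.diff(image[y])).argmax()) + 1 for y in ys]
m, b = np.polyfit(ys, xs, 1)
print(round(float(m), 2), round(float(b), 1))   # slope ≈ 0.1, the true drift
```

Fitting through many detected points is what makes the located line robust to a few bad or missing edge detections on individual search lines.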
That is where projection-based methods come in; we will not go into the details, but they exist, and they allow you to find what you can visually see as an edge but would be hard to find with other techniques. Another inspection example, this one using the rake-based methods, is checking the alignment of the top of a bottle label; you can see that this case fails because the label is misaligned. Now here is an entire application we can look at, based on several of the techniques we have covered: pattern matching, rakes, and coordinate systems. In this application we are checking several things: that we have found all four of the bottle tops matching our templates; that the liquid level inside each bottle is right; and that there is no extra molding flash on any of the samples. We use pattern matching to find the tops, and you can see the coordinate systems being drawn; each of these is inspected to verify it is there, and based on where I found each top I can do further inspection below it: I can draw a rake to verify the liquid level is correct relative to where I found the top. That is repeated four times, and everything passes except that in this image there is an extra piece of molding between the first two templates; that fails an intensity check performed in a region defined with respect to the coordinate system at the top. So, based on all the tools we have looked at so far, this is an example of a real application that could be developed.

Another example is dimensional verification. Dimensional measurements such as lengths, distances, and diameters can be measured using all of the tools we just talked about. There are a couple of ways to look at dimensional measurements. You can do in-line gauging inspections, which are generally used to verify assembly and packaging, making sure that everything is put together and that the geometry and dimensions of the part are within the ranges you expect.
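Gap gauging of the kind described, finding an edge on each side and measuring the distance between them, reduces to something like this on a single line profile; the intensity values are invented:

```python
import numpy as np

# Intensity profile across a label gap: bright label, dark gap, bright label.
profile = np.array([200, 201, 199, 60, 12, 10, 11, 13, 55, 198, 202, 200],
                   dtype=float)
g = np.diff(profile)

falling = int(g.argmin())   # bright-to-dark edge entering the gap
rising = int(g.argmax())    # dark-to-bright edge leaving the gap
gap_width = rising - falling
print(falling, rising, gap_width)  # → 2 8 6
```

A full gauging tool would repeat this on many rake lines, fit a line to each side, and report the distance between the fitted lines rather than between single points.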
You would use something like off-line gauging to really measure the quality of a part: if you are trying to verify a part to a certain tolerance, or to determine whether your process is capable to a certain level, you would not want to do that in-line; you would do it off-line, with a more careful presentation of the part. Here is an example with a machined part, where you are checking whether the part is within a certain tolerance and whether the dimensions and separations of the holes are roughly what you expect. This is an example of a measurement system that can easily give you a pass/fail result, but it is not a metrology system.

We will conclude this section by looking at some techniques to do OCR and to read barcodes. We will not go into much detail; we will just review some of the capabilities available in commercial packages. The algorithms behind these are fairly sophisticated and proprietary, so it is hard to go into details. We will look at 1D codes and 2D codes, at different marking methods and how they are read, and at some example applications. 1D barcodes have been around for more than 35 years now. Generally, the barcode carries a number that is an index into a large central database: you read the number, look it up in the database, and get lots of information. The code is easily read by laser scanners and is a well-established technology, but it has low data capacity and a very large footprint, so there are a lot of gains as you move from 1D barcodes to 2D codes. There are different types of 2D codes that can be read; Data Matrix, PDF417, and QR codes are shown as examples here. Usually camera-based vision systems are used to read these 2D codes.
They are preferred because of the robustness they bring to the application. 2D barcodes give you much higher data capacity and a smaller footprint, but they have their own challenges. As a comparison: 1D barcodes give you low data capacity, while 2D codes give you high data capacity. A 1D code is an index into a large database; a 2D code is a self-contained piece of data, actually holding all the data in the barcode itself, so you do not have to look it up in another database. The footprint of a 2D barcode is also smaller than that of a 1D barcode carrying the same data: there is a lot less redundant information, and therefore a smaller footprint. The challenge is that the contrast in a 2D barcode can be significantly harder to deal with, especially with certain marking methods, or when reading a 2D barcode with a color camera, for example.

Next we will look at optical character recognition. Many kinds of characters can be read with a vision system; shown here are some examples: LCD displays, foreign characters, characters printed with dot matrix. Pretty much anything you can define as a character can be read with most commercial packages that do character recognition. Typically there are two processes you go through. One is training, where you define what the characters are; this is again a template-based process, where you define each character manually or somewhat semi-automatically. Then you apply that template-based process online, when you are actually reading parts during inspection. In the training step, you acquire an image and specify the region of interest, either with a pattern match or manually. OCR then separates each character from the background; this is the thresholding step we talked about in other applications. Then feature information, the specific geometric features we have talked about, is extracted from each of these characters, and each character is assigned a value, often manually. All of that information is saved off to a character set file. When you are actually doing inspection online, you open an OCR session, pull in that character set file, and after you acquire the image you specify the ROI; this ROI may be different because the part orientation may be different. You go through the thresholding step to pull out the characters; then, although it is not shown here, you extract the feature information, where the corners and other parts of these characters are, compare those features to the character set file you saved, and based on that find the closest match, somewhat like the pattern matching we covered, which gives you the actual value of each character. So this builds on what we have worked through in this section, though there are subtleties here that are more intricate. OCR can be used either to recognize characters, so it outputs the actual value of what I am looking at, or to verify that the print quality is at a certain level, measuring how close the characters are to a golden template that I trained. And that is the conclusion of this section. We will finish with this: I am Romik Chatterjee of Graftek Imaging, and here is the information you need to contact us. Thank you.
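The matching step of template-based OCR can be caricatured in a few lines: compare the thresholded, segmented character against each trained template and take the best score. The 3x3 "characters" here are toy stand-ins; real OCR templates are larger and come from real character images:

```python
import numpy as np

# Tiny stand-in character set: binary templates learned during training.
templates = {
    "I": np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]]),
    "L": np.array([[1, 0, 0], [1, 0, 0], [1, 1, 1]]),
    "T": np.array([[1, 1, 1], [0, 1, 0], [0, 1, 0]]),
}

def read_char(glyph):
    """Classify a thresholded character by its best-scoring template,
    much like the matching step of a template-based OCR session."""
    scores = {c: float((glyph == t).mean()) for c, t in templates.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

glyph = np.array([[1, 1, 1], [0, 1, 0], [0, 1, 0]])  # a thresholded "T"
print(read_char(glyph))  # → ('T', 1.0)
```

The score doubles as a print-quality measure: recognition reports the best-matching character, while verification checks how far that score falls below a perfect match against the golden template.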
