
The Case for Machine Vision vs. Human Vision

by Nello Zuech, Contributing Editor - AIA

Justifying machine vision always seems to come down to dollars and cents when, in fact, machine vision is frequently the only option for assuring quality and consistency of product. A recent article in the Wall Street Journal describing findings from “tests” of airport screeners confirms that people are simply not good at performing repeated visual tasks where decisions have to be made based on the scene presented.

The job of an airport screener is not unlike that of an inspector in many industries. The screener’s responsibility is to detect and identify specific banned objects in carry-on bags, packages, etc. An inspector’s responsibility is similarly to detect unwanted variables in the scene. In many industries the inspector’s job is even harder than the screener’s. In theory, the airport screener has been trained to recognize specific objects – guns, knives, scissors, screwdrivers, etc. In many industries the inspector must detect any anomaly in the product passing before him, and there could be many variations of anomalies to be detected and, in some cases, classified.

The study of airport screener performance conducted by British authorities as the Threat Image Projection program is most interesting and relevant to manufacturing industries in general. The testers created a challenge set by digitally inserting one of 250 images of guns or other banned objects into x-ray images of bags. At first the screeners’ performance was mediocre (authorities did not release actual results, citing security reasons), but over several months it improved. Mediocre, of course, is not good enough in manufacturing. Then the testers changed the challenge set, introducing new images belonging to the same categories of banned items. The screeners’ performance dropped back to no better than it had been when the initial challenge test began.

The screeners had become “good” at recognizing the 250 images that kept popping up but were not able to generalize to other images of the same objects – a gun with a different grip, for example, or a knife in a different orientation. As noted in the article, scientists have long known that the ability to pick out a specific target in a complex scene suffers when the scene also contains many things you are not looking for. Another well-known issue is trying to identify a specific thing that is surrounded by distracters that resemble it.
As the article notes, this phenomenon should be familiar to anyone who has looked for a beer in the refrigerator. The beer could be front and center, but if the bottle looks different from the one in your mind’s eye, it may as well be invisible.

In studies funded by the TSA, cognitive scientist J. David Smith (State University of New York, Buffalo) trained scores of volunteers to learn a number of origami-like targets, in their original state as well as rotated or slightly distorted versions. The volunteers saw one shape at a time, on a computer screen, and determined whether it belonged to a target category. The volunteers made the right call just 76% of the time. Imagine that performance in the automotive industry – most cars would not get off the sales lot.

The scientists then embedded target shapes in cluttered scenes, where items overlapped, touched and took different orientations. Does that sound like a familiar scenario? The volunteers got better at finding the targets, but as soon as they had to find a target that was slightly different from the one they had learned, their performance plummeted. The volunteers spotted only targets they had seen repeatedly, not variations of them. It was as if someone learned what dogs are by studying collies and poodles, and then didn’t recognize a spaniel as a dog.

Just to be sure there wasn’t something especially difficult about the origami-like figures, the scientists had 88 participants try to spot actual knives, scissors and guns in x-ray images of cluttered suitcases. Again, there was a steep learning curve as specific targets repeated, with people eventually spotting 90% of them. I wonder how many industries would be satisfied with that performance. But as soon as slightly different guns, knives and scissors were digitally inserted into the images, scores fell – people missed three times as many novel items as familiar ones. Ironically, people did hardly any better when they had to find one target category rather than three.
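
Putting those two figures together gives a rough estimate of performance on unfamiliar targets (a back-of-the-envelope calculation on my part, not a number reported by the study):

    # Implied detection rate for novel items, derived from the
    # reported figures: 90% of familiar targets spotted, and three
    # times as many novel items missed as familiar ones.
    familiar_detection = 0.90
    familiar_miss = 1 - familiar_detection   # 10% miss rate on familiar items
    novel_miss = 3 * familiar_miss           # "three times as many" misses
    novel_detection = 1 - novel_miss
    print(f"Familiar items detected: {familiar_detection:.0%}")  # 90%
    print(f"Novel items detected:    {novel_detection:.0%}")     # 70%

That implied 70% detection rate on novel items is consistent with the 70 – 80% effectiveness figures cited for inspectors throughout the rest of this article.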

What does this tell us about people performing inspection tasks in manufacturing? Let’s look at specific industries. In the semiconductor industry, people should be quite good at inspecting unpatterned wafers, given that all they are doing is looking for any anomaly on the surface. If there are different rules defining an anomaly for different regions of the wafer, or the anomaly must be classified, their performance will fall off. This ignores other issues, such as the tediousness of the task, which also degrades performance. Certainly, as feature dimensions of integrated circuits get smaller, people have more difficulty seeing critical anomalies whose dimensions are proportionately smaller. With patterned wafers, the “busyness” of the patterns causes inspector performance to fall off further. In other words, the unreliable performance of human inspectors means that machine vision-based products are required to assure yield and product quality.

The applications in the electronics industry are more or less macro-level mirrors of those in the semiconductor industry. Bareboard circuitry is frequently very complex, and people looking for specific things anywhere on the board will probably be only 90% effective in finding them. Chances are they will entirely miss any anomaly that does not fit one of the classes of specific things. In the case of assembled boards, scene clutter further complicates the inspection task, so an inspector is likely to identify fewer than 90% of the conditions that would warrant a reject. Given all the acceptable scene variables, an inspector is likely to be only 70 – 80% effective, as was found in the study described above. Clearly a case can be made to substitute machine vision-based technology.

Applications in the food industry probably come closest to what airport screeners do. Inspectors are required to sort foreign objects from a sea of product passing in front of them on a belt at rates expressed in tons per hour. At the same time they are expected to sort out irregularly shaped products, product with unwanted visual characteristics, etc. – all while the stream of acceptable product is touching, overlapping, skewed, and so on. At best, people can be no better than 70% effective at this inspection. On the fresh-pack side, the requirement frequently calls for grading product by size, shape, color and blemishes. Given all the appearance variables that go into grading, it is amazing that people can perform this task at all. If the food industry is serious about food quality, machine vision is the only way to go.

In the pharmaceutical industry there are two critical inspection tasks – inspecting filled vials/ampoules for foreign matter and inspecting solid dosages. In the former case, one would expect people to be rather effective; the challenge, of course, comes with the inspection rates required. In the latter case, inspectors often find themselves looking at a sea of product passing before them on a belt. The application is very similar to that in the food industry, except that one does not have to contend with all the appearance variables associated with Mother Nature. Nevertheless, there are many variables to contend with – touching, overlapping, skew, etc. Given those variables, looking for foreign items, slightly deformed items, and the like is unlikely to be better than 90% effective. In the pharmaceutical industry, where quality is of paramount importance, machine vision should be easily justified.

In the print industry, one generally has to find flaws stemming from omissions from or additions to the print pattern, or from misregistration. While in theory there is a fixed set of flaw classes, given a background that changes with each print batch, the performance of human inspectors in the print industry is likely to follow the 70 – 80% experience cited in the study above, especially since flaws can fall anywhere in the pattern. Hence, to truly eliminate scrap, the print industry has to turn to machine vision.

I could go on reviewing the issues in each manufacturing industry, but the principles are the same. In most industries inspector performance is further degraded by throughput requirements. Studies have also shown that people performing inspection tasks are only fully effective for about 20 minutes; beyond that, their effectiveness declines.

Years ago I read about a study done at a university in Iowa. People were asked to perform the visual sorting task of picking out a minority of black Ping-Pong balls from a production line of white ones passing at a rate of 120 per minute. They typically allowed close to 15% of the black balls to escape. Even with two operators, they were only about 95% effective. With the process reversed – white balls in a sea of black ones – the results were the same.
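
The two-operator number is telling in itself. Here is an illustrative calculation of my own (not part of the Iowa study): if the two operators’ misses were statistically independent, a 15% individual miss rate would predict about 97.8% combined effectiveness, yet only about 95% was observed:

    # Back-of-the-envelope check on the two-operator result.
    single_miss = 0.15  # one operator lets ~15% of black balls escape

    # If the operators' misses were independent, a ball would escape
    # only when BOTH operators missed it.
    independent_combined = 1 - single_miss ** 2
    print(f"Predicted effectiveness, independent errors: {independent_combined:.1%}")  # 97.8%

    observed_combined = 0.95  # the study found only ~95% with two operators
    print(f"Observed effectiveness: {observed_combined:.1%}")

The shortfall suggests the operators tended to miss the same balls – their errors were correlated – so adding more inspectors yields diminishing returns.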

People have a limited attention span, which makes them susceptible to distraction. People are also inconsistent: individuals often exhibit different sensitivities over the course of a day or from day to day, and there are further inconsistencies from person to person, from shift to shift, and so on. The eye’s response may also be a performance limiter. Another issue is people’s ability to adapt to change, which can be either good or bad. People are flexible in that they easily move from inspecting one product to another. On the other hand, if in the course of inspecting a product produced in sheet form the color, for example, changes slowly over the course of a day, a sufficiently subtle change is likely to go undetected.

There are many reasons for the observations made about airport screener performance, and the same reasons explain the poor performance of a typical inspector in a manufacturing setting. The bottom line: one can expect far better performance from a machine vision system, and that will improve a company’s quality and consistency of product – which in turn produces positive bottom-line results.

 
