Machine Vision Metrology: Taking Nothing for Granted in 3D World
by Winn Hardin, Contributing Editor - AIA
Posted 06/26/2012

Metrology is the science of measurement. And with that in mind, most machine vision systems perform some metrology task when determining whether an object or feature is manufactured to within specified tolerances.
However, most machine vision systems that are considered primarily metrology systems focus on high-resolution measurements in the millimeter to nanometer range. These systems use a variety of methods, from laser line triangulation to fringe projection and interferometry for the highest resolution systems.
But it’s not just about measuring an object. A metrology application must know the limits of its measurement capability to decide whether the system is accurate enough for the task at hand. Throughput is also critical: in every industry, a machine vision metrology system must not become the bottleneck in the production process.
“In a court of law, one is assumed innocent until proven guilty,” says Michael Guzik, Senior Software Engineer at precision automation specialists DWFritz. “But when applying metrology, you must assume guilty until proven innocent. You can’t base a system on assumptions; you have to prove its performance.”
Demand Increases for 3D Metrology
“We’re not seeing as much demand for 2D metrology systems these days as we do 3D metrology,” says Joost van Kuijk, Vice President of Marketing and Technology at Adimec (Eindhoven, The Netherlands). “Instead of just doing X and Y measurements, you add height information, from which volume measurements can be calculated. So there’s always an algorithm behind 3D vision metrology, and it’s these algorithms where machine builders and customers differentiate themselves.”
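The volume calculation van Kuijk mentions can be sketched simply: once a 3D system has produced a per-pixel height map, volume is the sum of heights times the footprint of each pixel. The function below is an illustrative sketch, not Adimec's algorithm; all names and units are assumptions.

```python
import numpy as np

def volume_from_height_map(height_mm, pixel_area_mm2):
    """Integrate a per-pixel height map into a volume estimate.

    Each pixel contributes (height x pixel footprint); summing over the
    map approximates the volume above the reference plane.
    Illustrative only -- real systems must also handle calibration,
    tilt removal, and missing data.
    """
    return float(np.sum(height_mm) * pixel_area_mm2)

# Example: a 10x10 region of uniform 2 mm height, 0.01 mm^2 per pixel
vol = volume_from_height_map(np.full((10, 10), 2.0), 0.01)  # 2.0 mm^3
```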
Machine vision-based metrology systems generally use one of five approaches, listed here in order of increasing spatial resolution:
- Time of flight (ToF) machine vision systems that measure the time it takes light to travel to a surface and then back to a receiving camera;
- Absolute measurements, where image-processing algorithms extract measurement information directly from one or more visible images;
- Laser triangulation for applications requiring accuracies around one millimeter;
- Fringe projection systems for systems requiring micron accuracy; and
- Interferometric systems with sub-pixel accuracies in the nanometers that measure phase differences between two laser beams to measure a specific point.
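The geometry behind the first and third approaches in the list above can be sketched in a few lines. For ToF, distance is half the round-trip light travel time times the speed of light; for laser triangulation, a height change shifts the imaged laser line sideways in proportion to the tangent of the triangulation angle. The functions and example values below are illustrative assumptions, not taken from any specific product.

```python
import math

C_MM_PER_NS = 299.792458  # speed of light in mm per nanosecond

def tof_distance_mm(round_trip_ns):
    """Time of flight: light travels out and back, so distance = c * t / 2."""
    return C_MM_PER_NS * round_trip_ns / 2.0

def triangulation_height_mm(pixel_shift, pixels_per_mm, angle_deg):
    """Laser triangulation: a surface height change displaces the imaged
    laser line laterally; height = lateral shift / tan(triangulation angle)."""
    return (pixel_shift / pixels_per_mm) / math.tan(math.radians(angle_deg))

# Example: a 2 ns round trip is ~300 mm of range; a 12-pixel line shift
# at 100 px/mm with a 45-degree geometry is a 0.12 mm height step.
d = tof_distance_mm(2.0)                         # ~299.8 mm
h = triangulation_height_mm(12, 100.0, 45.0)     # 0.12 mm
```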
For metrology systems based on fringe projection methods, Adimec’s van Kuijk says the industrial camera maker focuses on developing fast cameras with resolutions of several megapixels for micron-level measurements over relatively large fields of view. For interferometric systems with nanometer accuracies, which use cameras to record phase shifts between reference and probe-scanning laser beams, customers consistently request the fastest possible frame rate and a uniform pixel-to-pixel response from the sensor to improve measurement accuracy.
When it comes to fringe projection and interferometry, “depending on the measurement method, customers want more accurate illumination triggering, the quality of CCD pixels and the speed of CMOS sensors, uniformity, and image correction,” adds van Kuijk. “All of these things affect how accurate your metrology system is. If you’re developing a system that’s on the edge of what’s possible for a metrology system, the better electro-optical system you have, the more competitive advantage you have.”
While local flat-field correction with values stored in a camera’s lookup table (LUT) is fine for medium-accuracy metrology and general machine vision systems, customers who want to push the edge of machine vision-based metrology need to perform image correction with the specific sensor and lens that will be used in the final system, applying a global flat-field correction to compensate for sensor artifacts, lens shading, and similar effects.
“If you have five different lenses with five filters for a system that will measure features with nanometer accuracy, you need twenty-five separate global flat-field corrections,” concludes van Kuijk. “The alternative is to use very large computers to do the corrections on the fly, and you will need a lot of computing power.”
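The standard flat-field formula divides out the per-pixel gain measured from a uniformly lit reference frame. The sketch below shows the common form, corrected = (raw − dark) / (flat − dark), rescaled to preserve mean brightness; it is a minimal illustration, and each lens/filter combination would need its own `flat` and `dark` frames as van Kuijk describes.

```python
import numpy as np

def flat_field_correct(raw, dark, flat):
    """Global flat-field correction.

    dark: frame captured with no light (fixed-pattern offset).
    flat: frame of a uniform target (per-pixel gain x illumination).
    corrected = (raw - dark) / (flat - dark), scaled by the mean gain
    so overall brightness is preserved. Illustrative sketch only.
    """
    gain = flat.astype(float) - dark.astype(float)
    gain = np.clip(gain, 1e-6, None)  # guard against division by zero
    return (raw.astype(float) - dark) * (gain.mean() / gain)

# A uniform scene imaged through a non-uniform gain field should come
# back flat after correction.
rng = np.random.default_rng(0)
gain = rng.uniform(0.8, 1.2, (8, 8))
dark = np.full((8, 8), 5.0)
raw = 100.0 * gain + dark          # uniform 100-count scene, shaded by gain
flat = 200.0 * gain + dark         # uniform reference target
out = flat_field_correct(raw, dark, flat)
```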
Warped Wafer Inspection: Guilty Until Proven Innocent
Semiconductor wafer inspection is one application that continually pushes the envelope of machine vision metrology. For some wafer-grown products, such as print heads for ink-jet printers, the manufacturing processes will etch grooves into the fragile wafer. This can result in a significant “bow” or warp in the wafer, making 3D measurements of key features much more challenging because the warp effect will change from point to point.
DWFritz Automation has developed a system using multiple machine vision steps combined with point interferometry and calibration techniques to accurately measure warped wafers.
“We use traditional machine vision to locate fiducials and features on the die and to align the wafer,” explains DWFritz’s Guzik. “As part of the process, we calibrate the system using a built-in target to determine camera tilt and skew and the wafer rotation around the Z axis, so we know the geometric relationship between the cameras, laser, and table at points and heights along the wafer.”
A high-resolution camera is placed above, and another below, a precision X/Y table that supports the wafer. The top camera locates fiducials and helps the system position the wafer for the lower camera, which shares a beam path with a Micro-Epsilon laser profilometer. The lower machine vision system, with its single laser probe, measures the key features in 3D while the X/Y interferometry system measures the current position of the wafer, all while in motion.
Why the combination of techniques? Guzik explains, “The features we’re looking at aren’t very machine vision friendly for Z measurements, and we’re measuring at the sub-micron level. Imagine small, narrow trenches that are virtually impossible to illuminate [for a machine vision imaging system] and only a couple hundred microns wide and deep. A laser can give us [height] measurements in one dimension, while we use the interferometer system with the X/Y table to give us measurements in the other dimensions. Integrating them all together gives us the accuracy and speed we need.”
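The integration Guzik describes amounts to pairing time-synchronized readings: the interferometer reports where the table is, the laser probe reports how high the surface is at that instant. A minimal sketch of that fusion step, with all names, units, and the assumption of already-synchronized samples being illustrative:

```python
def fuse_in_motion(xy_samples_mm, z_samples_um):
    """Pair synchronized interferometer X/Y readouts (mm) with laser
    height samples (um) into 3D points in mm.

    Assumes the two streams are already time-aligned, one Z sample per
    X/Y sample; a real system must handle trigger latency and jitter.
    """
    return [(x, y, z_um / 1000.0)
            for (x, y), z_um in zip(xy_samples_mm, z_samples_um)]

# Two samples taken while the table moves 0.1 mm in X
pts = fuse_in_motion([(0.0, 0.0), (0.1, 0.0)], [250.0, 252.0])
# -> [(0.0, 0.0, 0.25), (0.1, 0.0, 0.252)]
```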
To accommodate bow and warp in the wafer, both cameras are placed on Z-axis linear stages so the cameras can reposition to maintain focus along the changing wafer surface. The system also includes temperature and humidity sensors that will tell the system to recalibrate if ambient conditions change significantly.
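The ambient-condition check described above reduces to comparing current sensor readings against the values recorded at the last calibration. A minimal sketch follows; the threshold values are illustrative assumptions, not DWFritz's actual tolerances.

```python
def needs_recalibration(temp_c, humidity_pct,
                        ref_temp_c, ref_humidity_pct,
                        max_dtemp_c=0.5, max_dhumidity_pct=5.0):
    """Flag a recalibration when temperature or humidity has drifted
    beyond a threshold since the last calibration.

    Thresholds are hypothetical; a production system would derive them
    from the thermal expansion and refractive-index sensitivity of the
    actual metrology loop.
    """
    return (abs(temp_c - ref_temp_c) > max_dtemp_c
            or abs(humidity_pct - ref_humidity_pct) > max_dhumidity_pct)

# Calibrated at 20.0 C / 45% RH: a 1 C rise triggers recalibration,
# a 0.2 C / 2% drift does not.
hot = needs_recalibration(21.0, 45.0, 20.0, 45.0)    # True
ok = needs_recalibration(20.2, 47.0, 20.0, 45.0)     # False
```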
“When you work at a small scale, no measurement system is perfect,” concludes Guzik. “You have to go through and demonstrate using repeatability and reliability-type tests to validate your system’s accuracy. You need to develop a system that not only makes a measurement, but also is aware of its own performance limitations and can adjust for changing conditions. When we can show that to our customers, there’s a high level of assurance that they’ll be satisfied.”