Bringing Machine Vision to a 3D World – Big Time!
by Winn Hardin, Contributing Editor - AIA Posted 05/14/2003
Machine vision was initially designed to solve two-dimensional (2D) problems: label and package inspection, the presence of a hole, the diameter of a hole, the length of a rod, and so on. Complex 3D measurements were accomplished with expensive interferometers or microscopes in the case of small objects, or with handheld gauges and probe-based coordinate measuring machines in the case of larger objects. However, as production speeds increased and manufacturing lines expanded their product mix, manual or off-line solutions came to be seen less as a quality benefit and more as a bottleneck.
The needs of a 3D world have led to a convergence of automation technologies including vision, robotics and metrology. (See the RIA article, "Machine Vision and Robotics," from mid-May 2003.) For this reason, the Automated Imaging Association and Robotic Industries Association will address challenges and solutions for 3D machine vision and system integration during a special session on 3D Machine Vision on the afternoon of Wednesday, June 4th at the upcoming International Robots & Vision Show (IRVS) in Chicago, which runs from June 3 through 5.
3D laser triangulation
IRVS presentations on 3D inspection will focus on three different approaches: systems that pair coherent or structured light with imaging systems; photogrammetry, or the use of one camera with structured white light per region of interest (ROI); and stereoscopy, or the use of multiple cameras with special lighting per ROI.
According to LMI Technologies' technical and marketing advisor, Walt Pastorius, 3D systems that use optical inspection methods, and specifically laser triangulation, are more flexible than non-optical gauge systems. This flexibility is overcoming old prejudices and expanding the growth of 3D optical inspection. "There's been a greater initial adoption of laser triangulation in Europe because, in part, their plants have a much greater model mix. But as North America moves more toward flexible assembly lines, we're seeing greater interest on this side of the Atlantic," Pastorius said.
LMI systems place a laser scanner at the end of a robotic arm to create a work cell that determines 3D object coordinates while performing quality inspections. A specially designed sensor housing with a CCD camera, complete with notch filters and enclosures to eliminate the influence of ambient light, acquires images of the laser light as it reflects off the object under test. Distortions in the laser line reveal Z, or offset, values for all points along the line. During his presentation on June 4th, "Integrating 3D Vision with Robots for Measurement and Guidance," Pastorius will discuss several issues related to these applications, including the need for a single coordinate system for CAD, vision and robot data, as well as calibration and temperature compensation, among others.
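The triangulation principle behind these scanners can be sketched numerically: a change in surface height shifts the imaged laser line sideways, and the shift converts back to a Z offset through the triangulation angle. The function and numbers below are an illustrative simplification under an assumed geometry, not LMI's implementation.

```python
import numpy as np

def height_from_laser_shift(pixel_shift, pixel_size_mm, magnification, laser_angle_deg):
    """Convert the lateral shift of an imaged laser line into a height (Z) offset.

    Assumed geometry: the camera views the surface head-on while the laser
    strikes it at laser_angle_deg from the viewing axis, so a height change z
    displaces the line laterally by z * tan(angle) in object space.
    """
    # Map the shift from sensor pixels back to object-space millimeters
    shift_obj_mm = pixel_shift * pixel_size_mm / magnification
    # Invert the triangulation relation: shift = z * tan(angle)
    return shift_obj_mm / np.tan(np.radians(laser_angle_deg))

# Illustrative numbers: 12-pixel shift, 10-micron pixels, 0.5x optics, 30-degree laser
z_mm = height_from_laser_shift(12, 0.01, 0.5, 30.0)
```

Real sensors calibrate this mapping per pixel rather than using a single closed-form angle, but the scaling between line displacement and height is the same idea.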
Vision and robotics – integrating the two
Like LMI, Perceptron uses laser triangulation to guide robot systems and inspect products in the automotive industry. During his presentation on the afternoon of June 4th, "Practical Applications of a Robot Scanning System," Perceptron's marketing manager, John Kidd, will discuss a new system that replaces a manual gauge with a laser scanner at the end of a 6-axis robotic arm. The system checks the shape and weld integrity of hemmed automobile doors. Door 'hems' refer to the point where the outer sheet metal is welded to the door's support structure, usually located on the inside surface of the door within one inch of the perimeter.
According to Kidd, this application illustrates several issues common to integrating 3D inspection systems with robotics. The Perceptron system establishes a single coordinate system for part position, scan head position and CAD data by acquiring images of the locator holes, bolts and other fixtures used to hold and attach the door. "A lot of scanning systems can align the scan data with the CAD model with a 'best fit' approach, but 'best fit' approaches can disguise some errors. Whereas, if you use the same locating features as the manufacturing fixtures, then you'll get the exact alignment that you will have when you hang the door in the physical world," Kidd said. The imaging system is designed to handle +/- 25 mm of variation at a standard 200 mm standoff, which is critical for 3D applications where part location can vary based on manufacturing tolerances, or even thermal expansion of the robotic arm holding the sensor.
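Aligning scan data to CAD through the same locator features, rather than a global best fit, amounts to solving for one rigid transform from a handful of matched points. A minimal sketch using the standard Kabsch/SVD method follows; the locator coordinates and the 90-degree test rotation are hypothetical, not Perceptron data.

```python
import numpy as np

def rigid_align(scan_pts, cad_pts):
    """Least-squares rigid transform (R, t) mapping scan-frame locator
    points onto their CAD-frame counterparts (Kabsch/SVD method)."""
    sc, cc = scan_pts.mean(axis=0), cad_pts.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (scan_pts - sc).T @ (cad_pts - cc)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) solution
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cc - R @ sc
    return R, t

# Hypothetical locator holes measured in the scan frame
scan = np.array([[0., 0., 0.], [100., 0., 0.], [0., 50., 0.], [0., 0., 25.]])
# Their CAD positions: the scan frame twisted 90 degrees and shifted
R90 = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
cad = scan @ R90.T + np.array([5., -3., 2.])
R, t = rigid_align(scan, cad)   # recovers R90 and the [5, -3, 2] shift
```

Because the transform comes only from the locating features, any error elsewhere on the part shows up in the measurement instead of being averaged away, which is the point Kidd makes about best-fit alignment disguising errors.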
Structure without coherency
In addition to coherent laser light, an off-the-shelf white light source with a periodic optical grating can provide accuracies on par with laser scanner systems, according to ISRA Robotics' vice president of operations, North America, Jordan Merhib.
ISRA provides photogrammetric and stereovision systems for large-object 3D measurements to the automotive industry. "The decision between the two approaches is based on the size of the object and accuracy. If you're trying to find a large object in space and need 1-mm accuracies, then we opt for photogrammetric. If the application requires finer accuracy with smaller fields of view, we might lean towards stereoscopic. Also, with photogrammetric methods, when you're determining the offset, the accuracy is constrained by the relationship of the tolerances built up within the manufactured object. For a typical car body, the photogrammetric system anticipates the relationships between the parts and car body, and the car body and the camera, based on CAD data and the built-in tolerances. Whereas, with stereoscopic systems, we are finding the true location of these features in space," said ISRA's Merhib.
Photogrammetric systems use four or more cameras to image different regions of interest on a single large object, like an automobile. The car's location in 3D space is determined by combining offset data from all the cameras, the known positions of the cameras and CAD data. Stereoscopic systems use two cameras per ROI and determine the absolute position of the ROI based solely on triangulation of the two cameras with the object surface. Merhib will go into detail about ISRA's systems as part of his 3D machine vision presentation, "High-performance 3D Stereo Robotic Vision – Case Studies from Production," also on Wednesday afternoon.
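For rectified camera pairs, the two-camera triangulation that stereoscopic systems rely on reduces to the classic disparity relation Z = f·B/d. The sketch below uses assumed numbers, not ISRA's system parameters.

```python
def stereo_depth(disparity_px, focal_px, baseline_mm):
    """Depth of a feature seen by two rectified cameras.

    Z = f * B / d, with focal length f in pixels, baseline B in mm,
    and disparity d (horizontal image shift between the views) in pixels.
    """
    return focal_px * baseline_mm / disparity_px

# Illustrative numbers: 2000 px focal length, 400 mm baseline, 800 px disparity
z_mm = stereo_depth(800.0, 2000.0, 400.0)   # feature sits 1000 mm from the cameras
```

The relation also shows why stereo favors smaller fields of view: depth resolution degrades as disparity shrinks, so close-in, high-disparity setups yield the finer accuracy Merhib describes.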
When dealing with robotics in particular, Merhib said that ISRA often pursues a stereoscopic approach because the accuracy of the imaging system needs to be 10 times better than the robot's repeatability to allow the system to compensate, through automated calibration and other software measures, for thermal expansion of the robot during operation. "By mounting a small calibration plate at the point of the tool, we can guide the robot to where we said to go, and do a final calibration of the robot during each cycle. It's a processing-intensive approach, but makes for a very robust system."
Process feedback steps like these are helping 3D vision guide robots in "best fit" and custom-cutting applications such as the mounting plates for vehicle front ends, windshield installation and many 3D applications once considered beyond the ability of PC-based automated vision systems.