Through the Looking Glass: Differences Between Large and Small 3D Vision Applications
by Winn Hardin, Contributing Editor - AIA
Posted 09/21/2011

When Alice needed to fit through the rabbit hole, all she had to do was sip a potion or nibble on a cookie to return to her original size. For machine vision designers developing 3D solutions, there is no magic potion that resolves the differences between a 3D design for a large workcell or a part with gross features and one for a small workcell or precision assembly. The answer is more than a matter of optics and resolution, according to vision experts.
The designer of a 3D vision system with a large field of view must consider whether the process is a robotic workcell, account for the differences in mass and momentum between large and small applications, choose a method of acquiring 3D point clouds that matches the resolution and throughput requirements of the application, and put it all together in a package that is easy to reproduce, install, and maintain.
Vision-guided robotic workcells are common industrial applications that require 3D vision – and whose sizes can vary greatly. “When you talk large versus small robotic workcell applications, I think of racking or palletizing/depalletizing for large applications and assembly operations for small applications,” explains Greg Garmann, Software & Controls Technology Leader at Motoman Robotics Division, of Yaskawa America Inc. (Miamisburg, Ohio). “Certainly there are large assembly applications, but many robotic assembly processes require a vision sensor to locate a part that has a variable resting position and report the position. This allows the robot to accurately pick up and place the part to fit into another assembly. These part locations can vary by a few inches in any direction with a variable angle of position. Accuracy is obviously a big differentiator between these applications. For palletizing, we’re usually talking 3D accuracies in millimeters versus less than a few thousandths of an inch for assembly.”
LMI Technologies Inc. (Delta, British Columbia, Canada) has served the 3D vision space for more than 30 years, but, until recently, has generally focused on large-area 3D vision applications in vertical markets such as lumber, tire, and road inspection. “But the market is coming back to us saying, ‘We have a need for simple 3D with a higher resolution in a smaller field of view,’” says Barry Dashner, Vice President of Marketing at LMI Technologies Inc. “We’ve recently introduced the Gocator line of 3D profilers, which includes 5 models ranging from 14 mm fields of view to 1260 mm. Our near future plans include expanding the Gocator product line to even smaller fields of view while maintaining the ease of use and scalable tools that come with it.”
Structured light solutions such as the Gocator are well-suited to high-precision 3D imaging applications, as well as for isolating hard-to-image features such as the bottom of a screw hole where the pitch is so steep that it’s difficult to accommodate the different views from a two-camera stereovision solution.
Motoman Robotics’ Garmann prefers to use single-camera 3D solutions whenever possible, typically mounted on the robot arm itself, although the choice depends on whether the workcell is small or large. This provides the flexibility to use vision in multiple locations within the system.
“In a small space, and even in some large space applications, it may be better to use fixed cameras to locate parts in the work area,” he says. “A fixed camera mounted above the workcell can image multiple parts at a time, or you can use a single image from different cameras to generate 3D location information on a single part. For palletizing systems, we also look at using time of flight [ToF] cameras to give us low-resolution 3D location data. Or, we can mount these cameras on the robot itself if precision movement is required: taking a series of images as the robot arm approaches the part to refine the offset.”
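Garmann's point about using "a single image from different cameras to generate 3D location information on a single part" comes down to triangulation: the same feature seen from two calibrated viewpoints constrains its 3D position. The sketch below shows the standard linear (DLT) triangulation method with hypothetical camera parameters; a real workcell would use calibrated intrinsics and extrinsics, not the made-up matrices here.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation: recover a 3D point from its pixel
    projections in two calibrated cameras with 3x4 projection matrices."""
    u1, v1 = uv1
    u2, v2 = uv2
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.array([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Hypothetical stereo pair: identical intrinsics, second camera offset
# by a 100 mm baseline along x.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-100.0], [0], [0]])])

point = np.array([50.0, 20.0, 1000.0])  # mm, in camera-1 frame
h1 = P1 @ np.append(point, 1)
h2 = P2 @ np.append(point, 1)
uv1, uv2 = h1[:2] / h1[2], h2[:2] / h2[2]
print(triangulate(P1, P2, uv1, uv2))  # recovers approximately [50, 20, 1000]
```

With noisy pixel measurements the linear solution is only an approximation; production systems typically refine it with a nonlinear reprojection-error minimization.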
Automotive assembly is an example of a “hybrid” application that is both small and large, where a robotic arm must assemble a part to the moving car frame with high precision while tracking the position of the large automotive frame. The downside of relying on multiple robot-mounted images is throughput: the vision system must take and process several images to guide the robotic arm to a high-precision location to pick up the part and complete the assembly.
“Adding a second camera in a stereovision approach speeds up the process by taking two images at one time, but [it] increases the cost of the overall solution,” explains Garmann. Throughput is critical in a hybrid application like this because the robotic arm is relatively large and needs to move the parts some distance to the drop-off location. Even so, the workcell’s throughput is slower than that of, say, a gantry robot in a printed circuit board (PCB) application, where the robot moves very small parts a short distance very quickly.
Quickly collecting 3D information has been a selling point for LMI’s Gocator product line. “In general, let’s say the Gocator runs about 400 to 500 Hz in wide field-of-view mode,” says LMI’s Dashner. “However, if you have an object of interest whose position doesn’t vary much, you can window the field of view and increase the scan rate to 4 to 5 kHz. The most common range for many industrial applications is probably 1 to 2 kHz.”
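The windowing trade-off Dashner describes follows from sensor readout: reading fewer rows takes less time, so the profile rate rises roughly in proportion. The figures below are illustrative only (not Gocator specifications), and the model ignores exposure-time and processing limits that cap real sensors.

```python
def approx_line_rate(full_rate_hz, full_rows, windowed_rows):
    """Rough readout-limited model: profile rate scales inversely with the
    number of sensor rows read out. Real profilers are also bounded by
    exposure time and on-board processing."""
    return full_rate_hz * full_rows / windowed_rows

# Hypothetical sensor: 500 Hz over a 1024-row full window.
# Windowing down to 128 rows lands in the multi-kHz range quoted above.
print(approx_line_rate(500, 1024, 128))  # → 4000.0 Hz
```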
Keep It Simple
Whether it’s assembling a car or a pacemaker, 3D vision is a critical part of the engineer’s toolbox. With so many options – single camera versus stereo, or structured light versus RF-modulated time-of-flight cameras, for example – making 3D vision simple is an important part of any solution.
“Our research shows that there are three main customer types: high-end engineers that understand machine vision and can design the system they need; low-end operators; and a very large group in the middle that knows about 2D sensors or PLCs, but not as much about 3D vision,” says LMI’s Dashner. “While an SDK is available for the high-end user, we designed the Gocator with that large middle group in mind, giving the end user the flexibility to choose different vision tools through an intuitive interface and simplifying sensor network designs through our FireSync network technology. It’s not just about 3D vision, but how to synch multiple sensors within microseconds and do it simply and transparently. That’s what 3D vision users need today, whether it’s a large area or small area application.”
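Dashner's point about synchronizing multiple sensors within microseconds is handled in hardware by LMI's FireSync; the details are proprietary. As a generic software-side illustration of the problem, the sketch below pairs samples from two timestamped sensor streams by nearest timestamp within a tolerance. All names and numbers here are hypothetical.

```python
import bisect

def pair_by_timestamp(ts_a, ts_b, tol_us=50):
    """Pair each sample in stream A with the nearest-in-time sample in
    stream B, keeping only pairs within tol_us microseconds.
    Both timestamp lists are assumed sorted, in microseconds."""
    pairs = []
    for i, t in enumerate(ts_a):
        j = bisect.bisect_left(ts_b, t)
        # Nearest neighbor is either ts_b[j-1] or ts_b[j].
        best = min(
            (k for k in (j - 1, j) if 0 <= k < len(ts_b)),
            key=lambda k: abs(ts_b[k] - t),
        )
        if abs(ts_b[best] - t) <= tol_us:
            pairs.append((i, best))
    return pairs

# Three profiles from sensor A; sensor B's third sample arrived 500 µs
# late, so it is rejected as unsynchronized.
print(pair_by_timestamp([0, 1000, 2000], [10, 1040, 2500]))  # → [(0, 0), (1, 1)]
```

Doing this in software after the fact is exactly what a hardware synchronization layer avoids: with a shared trigger and clock, samples are aligned at acquisition time rather than matched afterward.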