
Vision and Motion: Better Together, Part II

by Kristin Lewotsky, Contributing Editor - AIA

Figure 1: A vision system guides a robot “arm” to pick-and-place paint cans from a pallet. This is an example of the second kind of vision-robot interaction. Credit: Faber Industrial Technologies.

Systems combining motion and vision can be grouped into four classes. In the simplest marriage of machine vision and motion control, the motion components assist the vision system, for example by presenting parts for inspection, positioning a camera to permit multiple views of an object, or building up an image by precisely moving items across the field of view of a line-scan camera. In these applications, coupling between the two systems is negligible.
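For readers curious what the line-scan case looks like in practice, here is a minimal Python sketch of building up a 2-D image row by row. The acquisition function and image dimensions are hypothetical stand-ins for a real camera driver, not any particular vendor's API.

```python
# Minimal sketch of building a 2-D image from a line-scan camera, assuming a
# hypothetical function read_scan_line() that returns one row of pixels each
# time the motion system advances the part by one line pitch.
import numpy as np

LINE_WIDTH = 2048   # pixels per scan line (assumed sensor width)
NUM_LINES = 1000    # lines to accumulate for one full image

def read_scan_line() -> np.ndarray:
    """Placeholder for the camera driver call; returns one row of pixels."""
    return np.zeros(LINE_WIDTH, dtype=np.uint8)

def acquire_image() -> np.ndarray:
    """Stack scan lines into a 2-D image as the part moves through the FOV."""
    image = np.empty((NUM_LINES, LINE_WIDTH), dtype=np.uint8)
    for row in range(NUM_LINES):
        # In a real system this read is triggered by an encoder pulse, so each
        # row corresponds to a fixed increment of part travel.
        image[row, :] = read_scan_line()
    return image
```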

At the next level of complexity, the vision system assists the motion system by helping it decide where to move (see figure 1). The vision system may recognize a fiducial on a part and send a command to a servo-driven conveyor to let it know when to stop, for example. Successfully executing these types of tasks requires a degree of interaction between the two systems (see figure 2). “It involves transforming the coordinate system of the vision system to that of the motion system so that when the vision system says, ‘It's over there,’ the motion system knows wherever ‘there’ is,” says Ben Dawson of DALSA (Billerica, Massachusetts).
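In the planar case, the transform Dawson describes is just a rotation, a scale, and a translation established during calibration. Here is a minimal Python sketch of that mapping; the function name and all parameter values are illustrative assumptions, not anything from DALSA.

```python
# A minimal sketch of the vision-to-motion coordinate transform: a 2-D rigid
# transform (rotation + translation) plus a mm-per-pixel scale, whose
# parameters would come from a prior hand-eye calibration.
import numpy as np

def vision_to_robot(px: float, py: float,
                    theta: float, tx: float, ty: float,
                    scale: float) -> tuple[float, float]:
    """Map a pixel coordinate (px, py) into robot coordinates (mm)."""
    c, s = np.cos(theta), np.sin(theta)
    # Rotate, scale (mm per pixel), then translate into the robot frame.
    rx = scale * (c * px - s * py) + tx
    ry = scale * (s * px + c * py) + ty
    return rx, ry

# Example: camera rotated 90 degrees relative to the robot, 0.1 mm/pixel,
# camera origin at (250, 120) mm in robot coordinates (all assumed values).
print(vision_to_robot(640, 480, np.pi / 2, 250.0, 120.0, 0.1))
```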

In high-speed pick-and-place applications, the absolute position of the conveyor needs to be synchronized with the vision system and the motion elements. That need for precise timing places new constraints on control and communications. “The various control components want that piece of data within a millisecond of a specific timeframe,” says Rick Tallian, consumer industry business development manager at ABB Robotics Inc. (Auburn Hills, Michigan). Depending on the process speed, such requirements can make sampling data at intervals of even 50 ms insufficient. A line running at a relatively leisurely 120 parts per minute, for example, has a total cycle time of just 500 ms. “When you're performing an entire pick-and-place operation in less than 500 ms, 50 ms can be 10 to 25% of your overall cycle time,” says Tallian. “We have to sample our control system more frequently to ensure accurate product positioning.”
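Tallian's arithmetic is easy to verify; a few lines of Python, using the article's own numbers, make the point:

```python
# Back-of-the-envelope check: at 120 parts per minute, the cycle time is
# 500 ms, so a 50 ms sampling interval consumes 10% of the cycle.
parts_per_minute = 120
cycle_time_ms = 60_000 / parts_per_minute     # 500 ms per part
sample_interval_ms = 50
print(f"cycle time: {cycle_time_ms:.0f} ms")
print(f"sampling overhead: {sample_interval_ms / cycle_time_ms:.0%}")
```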

Moving up the integration chain, we come to still more tightly coupled applications in which the motion system helps control the vision system, for example by positioning and triggering a camera at the right moment to capture an image. Examples include inspection of printed circuit boards and semiconductor wafers.
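In such systems the trigger is typically tied to axis position rather than time. The following Python sketch shows the idea; the simulated encoder and the trigger_camera() function are hypothetical stand-ins for real motion-controller and frame-grabber calls.

```python
# A minimal sketch of motion-driven camera triggering, as used for PCB or
# wafer inspection: fire the camera when the axis crosses each programmed
# inspection position. All names and values are illustrative.
import itertools

TRIGGER_POSITIONS_MM = [10.0, 25.0, 40.0]   # inspection sites along the pass

_sim = itertools.count()
def read_encoder_position() -> float:
    """Simulated encoder: the axis advances 0.5 mm per poll."""
    return next(_sim) * 0.5

def trigger_camera(pos: float) -> None:
    print(f"trigger at {pos:.1f} mm")       # real code asserts a trigger line

remaining = sorted(TRIGGER_POSITIONS_MM)
while remaining:
    pos = read_encoder_position()
    if pos >= remaining[0]:
        trigger_camera(pos)
        remaining.pop(0)
```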

Figure 2: The vision system locates the cans' openings to guide the robot's “hands” to pick up 26 cans at once. Credit: Faber Industrial Technologies.

The final and most tightly coupled class of applications, and the most challenging of all, is visual servoing. In these applications, the vision system provides feedback to the motion system to allow it to adjust its position in near real time. Essentially, such systems use vision in the way that a person might, seeing an object and reaching out to pick it up or touch it based on real-time feedback (see figure 3).

“We find the location of the target in the field of view and measure the offset in X and Y and theta to the gripped part. Based on that input, we calculate where the robot has to be to match those two points,” says Brian Powell, vice president of sales and operations at Precise Automation (San Jose, California). “We're closing the PID loop on the motor with the encoders but we’re also closing the motion loop with the vision system as our sensor to drive the part to the target. In the space of a second, we will end up iterating 12 to 15 times (see figure 4). With the friction and stiction in the system, typically the robot will overshoot a little bit but [with the iterations] we will basically zero in on it.” 
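What that outer loop might look like in code is suggested by the minimal Python sketch below: vision measures the X/Y/theta error and the robot moves by a damped fraction of it on each pass. The measurement and move functions are simulations, and the gain and tolerances are illustrative assumptions, not Precise Automation's values.

```python
# Minimal sketch of a visual-servoing loop: measure the part-to-target offset,
# command a damped fraction of the error, repeat until within tolerance.
import numpy as np

GAIN = 0.8          # move 80% of the measured error each pass (damps overshoot)
TOL = np.array([0.02, 0.02, 0.001])  # done when |x|,|y| < 20 um, |theta| < 1 mrad

error = np.array([1.5, -0.8, 0.05])  # simulated initial offset: mm, mm, rad

def measure_offset() -> np.ndarray:
    """Simulated vision measurement of the part-to-target offset."""
    return error

def move_relative(delta: np.ndarray) -> None:
    """Simulated relative robot move; real friction and stiction would
    leave a residual error, which the next iteration corrects."""
    global error
    error = error - delta

for iteration in range(15):          # ~12 to 15 iterations fit in one second
    offset = measure_offset()
    if np.all(np.abs(offset) < TOL):
        break
    move_relative(GAIN * offset)     # close the outer loop with vision
print(f"converged after {iteration} iterations, residual {error}")
```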

As promising as it is, visual servoing has one major constraint: The application must allow the vision system to see actuator and target in the same field of view. The technology is particularly well suited to applications like flip-chip bonding or microelectronics assembly. 

The need for speed
Each of the four classes imposes different constraints on the vision and the motion systems. In the latter two cases, the increased coupling between the two systems makes communications a priority. The issue is not just bandwidth, but latency. 

“If the camera is supposed to be triggered at certain points going across the product and that trigger is not always there at the right time because of latency issues, now you’ve got a system that doesn't work,” says Perry West, president of Automated Vision Systems (San Jose, California). “Likewise, if the vision system isn't providing data deterministically, in a path-following mode, you really have problems tuning your control loop and keeping your motion control optimal. You ultimately have to de-tune the system and slow it down until that latency variation isn't an issue.”

Figure 3: Visual servoing requires both target and motion element to be visible in the field of view of the vision system, as in this microelectronics application. Photo credit: Precise Automation.

Industrial Ethernet is steadily improving but can still lack sufficient determinism for motion control, depending on the protocol and the requirements of the application. This can be a particular issue for visual servoing, which requires that data pass from the vision system to the PC to the motion controller, multiple times per second, to close the loop.

“It takes 2 to 3 ms for data messaging to take place,” says Powell, “plus you still have to tack on the image processing time, you still have to tack on the motion time and do it all fast enough to have as many as 18 iterations per second.” Ordinary Ethernet isn’t up to the challenge, so companies look to proprietary protocols. “Most people these days are just using standard TCP/IP, which doesn't have a guaranteed latency. You have to do something special to pull off this kind of visual servoing.”
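A rough budget shows how little headroom is left. Powell's 2 to 3 ms of messaging comes from the article; the image-processing and motion figures below are illustrative assumptions.

```python
# Rough latency budget for one visual-servoing iteration.
messaging_ms = 3.0      # vision PC -> motion controller round trip (Powell)
processing_ms = 20.0    # image processing per frame (assumed)
motion_ms = 30.0        # robot move-and-settle per step (assumed)

per_iteration_ms = messaging_ms + processing_ms + motion_ms
print(f"budget per iteration: {per_iteration_ms:.0f} ms "
      f"-> {1000 / per_iteration_ms:.0f} iterations/s")
# Hitting 18 iterations/s allows only ~55 ms total, so jitter in any
# stage of the chain quickly becomes the limiting factor.
```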

Making it practical
Even the best technology won’t see broad acceptance if it’s too expensive to provide timely return on investment. In the past, machine vision carried a significantly higher price tag than a force sensor, encoder, or other feedback device. Beyond the capital outlay, machine vision required engineering skill and programming hours. For OEMs for whom time to market was critical, those drawbacks were enough to steer them away from the technology. Today, costs have dropped significantly and vision systems are far more widely adopted in industry.

One factor has been the reduced cost of CMOS image sensors compared to CCD detectors. CMOS image sensors don’t offer quite the same performance as scientific-grade CCDs, but they can be easily produced using standard semiconductor processing methods and have been commoditized by the cell phone and video camera markets. Meanwhile, machine vision manufacturers have begun producing smart cameras that integrate camera, lens, and microprocessor in the same package. That doesn’t just save component costs; it also reduces integration time.

Hardware is not the entire story, however. The dedicated image processors of the past are increasingly giving way to PCs, even for computationally intensive tasks. More sophisticated software has moved the processing burden away from the chip level, letting the software do the work while the sensor itself merely streams data to the PC's memory. With clock rates of 2 GHz or better, computers today are easily up to the task in most cases, although advanced applications like high-speed web processing may require more traditional hardware-oriented systems.

Figure 4: In visual servoing, repeated iterations of the feedback/control loop bring the two together (inset). Photo credit: Precise Automation.

To enable this approach, developers have produced software packages that simplify application development. A certain amount of programming is inescapable, but it is orders of magnitude less than before. Today’s cutting-edge vision software suites incorporate libraries and interfaces that let users quickly determine which algorithms they need and what data those algorithms require. “It used to be that you'd think twice about whether or not you wanted to use machine vision in your system because of the expense and what typically would have been a lot of programming to make it work,” says Powell. “So much has changed these days, though, to make vision systems easier to use. Back in the good old days it might have taken days or weeks to code. Now it just takes hours.”
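The article does not name a specific suite; as one illustration of how compact such code has become, here is a fiducial search using template matching in OpenCV, a widely used open-source vision library. The file names are placeholders.

```python
# Locate a fiducial in a captured image with a single library call.
import cv2

scene = cv2.imread("board.png", cv2.IMREAD_GRAYSCALE)        # captured image
fiducial = cv2.imread("fiducial.png", cv2.IMREAD_GRAYSCALE)  # trained pattern

# Normalized cross-correlation does the search a user once coded by hand;
# minMaxLoc returns the best-match location and its score.
result = cv2.matchTemplate(scene, fiducial, cv2.TM_CCOEFF_NORMED)
_, score, _, (x, y) = cv2.minMaxLoc(result)
print(f"fiducial at ({x}, {y}) pixels, confidence {score:.2f}")
```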

So where is the market likely to go now? The next frontier seems to be stereo vision that would allow actuators to pick up randomly oriented parts piled together. Instead of paying for expensive fixtures to singulate or orient parts, users could just heap them all into a bin and let the motion system do the work. 
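The geometry underlying stereo vision is straightforward: depth follows from the disparity between matched points in two calibrated cameras, as in the minimal Python sketch below (all parameter values are illustrative assumptions). The hard part lies above this formula, in matching points reliably between views and estimating the pose of jumbled parts.

```python
# Depth from stereo disparity: Z = f * B / d, for focal length f (pixels),
# baseline B (mm), and disparity d (pixels). Values are illustrative.
def stereo_depth(disparity_px: float,
                 focal_length_px: float = 1200.0,
                 baseline_mm: float = 80.0) -> float:
    """Depth in mm of a matched point from its disparity."""
    return focal_length_px * baseline_mm / disparity_px

# With these assumed parameters, a part 480 mm from the cameras
# produces 200 px of disparity.
print(f"{stereo_depth(200.0):.0f} mm")
```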

It sounds good in theory but it’s another of those tasks that is simple for a human and ferociously complex for a machine. “It's a really tough problem,” says Powell. “I’m sure somebody's going to figure out how to do it, and whoever does is going to be in line for a big award.” 

The solution, he predicts, will be software oriented. “That's clearly where the advances in machine vision are going to be coming from in the future,” Powell says. “The hardware is really not going to do much more because people have looked at IR, they’ve looked at bigger sensors, they’ve looked at all sorts of interesting lighting and tricks. Unfortunately, I think our bag of tricks is basically empty. Advances are going to come from intellectual property developed on a computer with some very smart guys thinking about how to manipulate the numbers and matrices.”

 
