Feature Articles

Designing Vision for Reuse, Changeover and Scalability

by R. Winn Hardin, Contributing Editor, AIA

Machine vision systems have evolved considerably since the early 1980s when the semiconductor industry first tried the technology. Since then, machine vision image processing boards, which used to cost $20,000 or more for minimal functionality, have been supplanted by complete systems in a box for a few thousand dollars.

This movement towards commoditization has increased the appeal of machine vision to the point that, today, new applications appear daily. The machine vision industry has spent a great deal of time and effort to make its technology interchangeable, compatible, and as easy to program and use as a PDA or cell phone. To a large extent, these efforts have been fruitful, even transformational – turning a niche industry into a healthy billion-dollar automation industry.

But machine vision is not a PDA or cell phone. Machine vision is not about taking a picture and making a measurement; if it were, the new Samsung picture phone would probably do the job just as well. Machine vision is about understanding and manipulating light, and digital representations of it, and then using an ever-increasing library of low-level image processing functions and high-level logic to make some determination. Because this underlying complexity often is hidden from the end user thanks to the diligence of machine vision suppliers and integrators, end users ask why one system cannot solve every problem. Heck, why not put a machine vision system on an overhead conveyor, moving the system around the plant while sowing quality and process improvements like a digital Johnny Appleseed? Or how about adding cameras to lines B and C and running cables to the empty ports on the vision controller on line A? After all, the integrator did say this system could handle up to four cameras, right?

Reuse and changeover
The answers to those questions lie within the limitations of the physical world. From minute to minute, every environment changes, whether it is on a production line near a 1,000°C sintering furnace or a sorting system in a well-lit post office. Ambient changes in lighting, temperature, air quality, vibration, and a host of other environmental factors can impact the vision system's performance. Despite these physical limitations, however, the smart user can plan for future growth or changes to a machine vision system – as long as the vision system's capabilities (and limitations) are fully understood.

"A big part of the vendor or integrator's job is to educate the customer as to the capabilities of machine vision, and specifically the limitations of the imaging and lighting side of machine vision. It's quite rare to have such a generic imaging and lighting scheme that it could be applied to many more than one family of products," said David Dechow, vision systems integrator and president of Aptura Machine Vision Systems (Wixom, MI).

Reuse and changeover are similar in intent. Reuse refers to the retasking of a machine vision system after the product line it was designed to inspect has reached an end of life. One could say changeover is reuse with forethought and planning. Changeover occurs when several products are manufactured on a single line, and a single machine vision system is trained to handle each in a different way.

The difference between reuse and changeover illustrates the paths to success and failure for using a single machine vision system to solve a variety of applications. The bottom line is simple: a machine vision system is a complex system designed to solve a specific problem. If the problem involves one manufacturing line and changeover among 10 related products, then as long as the system was designed for that task, success is likely with the help of an experienced integrator. Taking that system off one production line and putting it on another in another location, however, is risky and may require re-engineering.

"If the end user does not set out retasking as a goal, but comes back later and says, 'we want to use that system on line B now,' they're probably in trouble unless it's a very simple application such as doing dimensioning," explained Brian Smithgall, president of Image Labs International (Bozeman, MT).

As Smithgall noted, vision applications beyond simple dimensioning, bar code or data matrix readers all require some level of expertise to turn off-the-shelf equipment into a successful vision solution. Can certain components of a vision system be retasked to new applications? Certainly. Can a complete system be retasked without new optics, light sources, programming, HMI, and communication and storage procedures? Not likely. To put the problem in other terms, machine vision hardware generally accounts for one-third to one-tenth of the cost of a machine vision solution, with the remainder going to expert hardware and software selection, integration and programming.

Like reuse and changeover, scalability is best achieved by proper planning. Smart cameras or compact vision sensors that combine sensor, image processing hardware and software, power supply and communications are often presented as the ultimate answer to reuse, changeover and scalability. Proponents point out that these systems are compact, contain broad image processing libraries applicable to a variety of applications and are relatively easy to program through spreadsheet or other visual programming methods. At first blush, moving such a system or retasking it to another application would seem relatively straightforward.

However, while the capabilities of smart cameras are growing as their resolution and microprocessors improve, these systems are still limited in their functionality compared to PC host-based systems. For certain low-end applications, reuse and changeover are certainly possible given adequate planning and development. However, smart cameras still have to operate within the same physical world and have the same constraints as more expensive vision solutions (e.g., lighting, vibration, etc.). That means that moving the smart camera can require new lighting and optics, and may mean enabling new image processing techniques.

"The promise of the smart camera as a distributed vision network is that you could presumably add additional cameras and scale up the inspection, but the reality is that to make the cameras work together requires a rework of the system – particularly in the context of user interface, data reporting, even the manipulation of pass/fail results," said Aptura's Dechow.

According to Gerald Budd, president of Phoenix Imaging (Livonia, MI), scalability has been a driving force behind the development of both PC host-based and compact/smart camera vision systems. Initially, Budd said, frame grabber manufacturers were less concerned about maintaining a development environment. "When someone tried to have a sophisticated application with multiple cameras, it wasn't generic and it wasn't expandable – it was written for that application. So when the customer came back and said, 'I want two more cameras,' you told them, 'yes I can do that, but it's not as simple as just sticking another tool there,'" Budd said.

This ongoing dilemma led vision suppliers to contemplate a single, integrated vision solution that could be easily plugged into an RS-422, Ethernet, or other data network. However, the cost of a smart camera system with some form of advanced functionality is only slightly below the cost of a low-end PC host-based system that sports the power of a full Pentium. Early versions of distributed smart camera networks sometimes suffered from an inability to prioritize communication among vision cells, which could be problematic for high-speed manufacturing lines or for vision systems used in safety applications, Budd said.

Similarly, PC host-based systems were limited in the number of cameras they could accommodate – typically up to six – and in most cases also had to work under the timing constraints of the Windows operating system. In both cases, bottlenecks could arise as data streams grew with the number of sensors.
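The bottleneck Smithgall and Budd describe comes down to arithmetic: each camera pushes a fixed data rate, and the host's acquisition channel has a fixed budget. A rough sketch of that sizing calculation is below; the resolution, frame rate, and bandwidth figures are hypothetical examples, not values from the article.

```python
# Back-of-the-envelope check for whether added cameras will saturate the
# host's acquisition bandwidth. All figures are hypothetical examples.

def camera_data_rate_mb_s(width, height, bits_per_pixel, frames_per_s):
    """Raw data rate of one camera, in megabytes per second."""
    bytes_per_frame = width * height * bits_per_pixel / 8
    return bytes_per_frame * frames_per_s / 1e6

def cameras_supported(bus_budget_mb_s, per_camera_mb_s):
    """How many identical cameras fit within the available bandwidth."""
    return int(bus_budget_mb_s // per_camera_mb_s)

# Example: a 640x480, 8-bit monochrome camera at 30 fps.
rate = camera_data_rate_mb_s(640, 480, 8, 30)   # ~9.2 MB/s per camera

# Against a hypothetical 100 MB/s acquisition budget, roughly 10 such
# cameras fit before the raw data stream itself becomes the bottleneck.
print(round(rate, 1), cameras_supported(100, rate))
```

Running this kind of estimate before quoting "up to N cameras" is exactly the up-front planning the integrators quoted here are calling for.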

Despite these limitations, systems can be scaled upwards successfully – as long as adequate planning takes place. "You need to have a handle on how much information is on your data channel. If you're simultaneously pushing images from multiple sources, you have to have a way to do that. You have to manage your hardware choices," explained Image Labs' Smithgall. "You need to know up front if you intend to scale the system, and if you're going to scale it, you make different design considerations."

Of course, proper planning requires customer education. "The customer has a dilemma here," explained Aptura's Dechow. "They can either pay up front to have all the future capability and only implement some of it in hardware initially, or they can go for some low-end inexpensive approach, but very little of it will be reusable. They're not gaining any economy of scale by just putting in a temporary solution."
