
Feature Articles

Technologies Impacting Machine Vision – Vision Engines

by Nello Zuech, President, Vision Systems International, Consultancy; Contributing Editor, AIA

In the case of the vision "engines" themselves, they are clearly becoming faster, cheaper and better as they continue to adopt the advancing microprocessor, DSP and FPGA technologies on which they are based. By embracing these advances they can provide more functionality at a lower price, including greater ease of use and, in general, a lower cost of ownership.

By vision "engine" we mean any hardware that is the basis of a machine vision installation: vision sensors, smart cameras, embedded vision processors, frame grabbers, intelligent image processing boards, vision development systems and general-purpose machine vision systems. Apparent trends include more functionality in each package, faster processing, more "canned" solutions, improved graphical user interfaces that make the underlying technology ever more transparent, better overall price/performance, and a greater ability to handle color and 3D.

To gain insight into what is happening in each of these disciplines, input for this article was canvassed from the suppliers of "vision engines" of one type or another. The following contributed to this article:

  • Gustavo E. Vargas, Vision Product Manager - AROMAT CORP.
  • Amalia Nita, Product Manager – CyberOptics Semiconductors
  • Randall Henderson, VP - EDT
  • Mike Kelley, Director, Smart Camera Business – JAI Pulnix
  • Marc Sippel, Vision and Bar Code Product Manager – Omron Electronics, LLC.
  • Joe Germann, VP Business Development - Sky Computers
  • Endre Toth, Director Business Development – Vision Components
  • Vic Wintriss, President – Wintriss Engineering

Gus Vargas makes the following comments reflecting the direction his company is taking to reduce the cost of ownership of a machine vision system:

What is happening in the underlying technology used by vision engines of one type or another?
It would be rather impossible to describe all the changes taking place in machine vision and vision-engine technology, nor would I pretend to know every possible new technical advancement made in these types of products.

I can, however, provide information regarding our company, the Aromat Corp., a subsidiary of Matsushita Electric Works, Ltd. Japan (MEW), which has taken gigantic strides toward a simpler, user-friendly and less cumbersome approach to machine vision.

By the end of 2003, MEW-Aromat will be introducing a revolutionary approach to machine vision.

It will offer an ultra-compact, smarter and easier-to-set-up "vision sensor" named LightPix. This device will be capable of detecting color (7 colors), gray edges, and area for predetermined fields-of-view (FOV).

Essentially this new product will be composed of three basic components:

  • A color CCD camera (fixed focus lens) with a built-in RS485 connection for remote access to a master control unit
  • A compact controller unit, which may be connected to multiple cameras via RS485 up to 32 units per controller
  • A removable viewfinder unit that can be connected to the controller unit for setup of each camera
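The multidrop arrangement described above can be sketched in software. This is a hedged illustration: the one-byte addressing, the command string and the reply format below are invented for the sketch and are not the actual LightPix protocol; only the stated limit of 32 cameras per RS485 controller comes from the text.

```python
# Hypothetical sketch: a controller polling up to 32 camera heads that
# share one RS485 bus. Framing and commands are illustrative assumptions.

def poll_cameras(bus, addresses):
    """Query each camera head in turn and collect its inspection verdict."""
    results = {}
    for addr in addresses:
        bus.write(bytes([addr]) + b"INSPECT\n")   # address byte + command
        reply = bus.readline()                    # e.g. b"1:PASS\n"
        _, verdict = reply.decode().strip().split(":")
        results[addr] = verdict
    return results

class FakeBus:
    """Stand-in for a serial port so the sketch runs without hardware."""
    def __init__(self):
        self.last_addr = None
    def write(self, frame):
        self.last_addr = frame[0]
    def readline(self):
        return f"{self.last_addr}:PASS\n".encode()

# Poll the full complement of 32 heads on one controller.
results = poll_cameras(FakeBus(), range(1, 33))
```

In a real deployment the `FakeBus` object would be replaced by an actual serial port handle; the point is that one controller sequences many heads over a single shared pair of wires.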

Certainly a compact and network-ready machine vision system is not something new to the ever-shrinking electronics of the machine vision market. What is innovative about this product is the fact that it covers a gap in the machine vision sector, which may be defined as the area where a typical sensor is no longer capable of providing reliable results and a machine vision system is simply too costly in terms of integration, implementation, and maintenance.

Let's face it, although the hardware of machine vision systems has consistently come down in price, the inherent integration, implementation, and maintenance costs are still as high and present as ever before. For certain applications such as:

  • detecting or measuring the color of a container or bottle
  • detecting and measuring the size and presence of a label within an area
  • detecting or measuring gaps, and width on a part or container

The use of a vision system simply does not justify its implementation costs, or the need for a full-time engineer to support the product. For these types of applications, which require multiple units inspecting or measuring color, size and presence, the answer is a simple, easy-to-use "smarter sensor" offering:

  • a built-in white light source (no additional expensive lighting or lighting-hardware studies needed)
  • easy setup (the viewfinder lets the user "physically see" the area of interest without the need for expensive monitors or PCs)
  • built-in digital I/O (provides feedback signals to the main control unit)
  • an RS485 network (a proven, robust industrial communication network for multiple cameras sharing the same controller unit)
  • one-button programming (no complex software, expensive license agreements or dedicated IT personnel to support PCs).

Essentially, this is a smarter sensor, ready to inspect and provide quality control. It is this type of one-push-button technology, with multidrop capability and an easy-to-set-up interface, that will define and revolutionize the machine vision market of the future.

How will this affect vision engine performance?
As I see it, the current trend in machine vision has emphasized creating high-performance, multipurpose machine vision systems. In other words, hardware capable of providing a variety of tools to be used for several tasks (some simultaneously), such as OCR, object position tracking, color inspection, etc. Although the market for advanced machine vision certainly merits the development of such systems, these devices are based on a specific software platform and are operating-system dependent.
Such multipurpose machine vision devices are designed to provide a high level of flexibility in their programming and inspection tools. This approach is quite advantageous especially for scientific instrumentation, modular machines used for multiple stages and for end-users with a high level of experience integrating and developing vision applications.

The main drawback of such devices is that with each degree of programming flexibility, a degree of complexity is added to the system's programming, integration and maintenance. For the industrial manufacturer and facility manager, this translates directly into maintenance, programming, integration and other costs associated with the actual implementation of such "advanced multipurpose" machine vision systems.

These costs are not justified for typical (mundane) tasks in industrial and manufacturing processes such as color quality inspection, color sorting, and edge or gap measurement. For this type of simple inspection, a high-end or mid-range "Swiss army" machine vision system falls short of the mark.

It is this gap between machine performance and implementation costs that will drive the machine vision industry in the near future. Given the current economic situation and global market competition, end-users are looking not only for performance and low-cost hardware, but also for systems that are easier to implement, maintain and integrate.

It is this trend that will transform, drive and directly affect the vision engines and hardware of the machine vision industry.

How will these advances ultimately impact the machine vision industry?
See #2

Amalia Nita makes the following observations on movement from analog to digital:

What is happening in the underlying technology used by vision engines of one type or another?
A very strong trend toward digital image capture appears to be making a difference for vision. Camera OEMs have adjusted very well to a reality that could be summed up as follows: "Since as a consumer I can purchase a digital (consumer-grade) camera quite inexpensively, why would I, as a vision professional, purchase an analog (industrial-grade) camera?"

System integrators, end-users and OEMs alike seem to have gotten (at least mentally) on this bandwagon, especially because connectivity for digital vision engine components has simplified things enormously. As part of this trend I have observed two behaviors. First, digital systems (whether built from components or standalone) are evaluated for new applications more than was the case in the past. Second, when charting relative price points for frame grabbers, there appears to be much more price pressure on digital grabbers than on variable-scan analog ones (i.e., it is possible to buy a digital grabber for less than an analog one). In essence, the dream of many vision integrators, putting together a digital system for under $1,000, is becoming reality.

How will this affect vision engine performance?
Having access to digital-quality data is in itself a performance requirement for many users of this technology. As connectivity (FireWire, Camera Link) and bandwidth (PCI-X) issues are addressed, I expect vision engine components and vision systems to deliver on the "real-time" requirement, both in terms of raw data acquisition and in terms of processing and reaching a meaningful decision. Once this point is reached, we can talk about vision-based widgets that fulfill a certain task and can deal with the n degrees of variation specific to that task.

How will these advances ultimately impact the machine vision industry?
Digital vision will become more prevalent because of technical improvements and because of cost effectiveness. Eventually, most (if not all) vision systems will be based on digital technology. The industry will have to emphasize digital products from OEM to end-user, and ultimately the "food chain" between these two points will lose some of its current complexity and interdependencies.

Randall Henderson, while not answering the questions specifically, makes the following comments on advances in bus speeds:

From a high-resolution image capture standpoint, host computers, bus speeds and capture card technology continue to be a significant factor in what can be achieved with machine vision systems.  When high-resolution/high-speed is a requirement, CameraLink or AIA cameras and frame grabber cards are utilized.  For continuous high-speed acquisition, DMA cards of the type that stream data directly to host memory are the most versatile and economical way to acquire images from these cameras. But they can only push data into the computer as fast as the host's PCI bus can take it. So higher bus speeds mean higher continuous camera-to-host computer image transfer rates.

Up until a couple of years ago, the 33 MHz PCI bus was the norm, but more and more "standard" computers are coming equipped with 66 MHz buses. Similarly, many of the older capture boards were stuck at 33 MHz regardless of system bus speed, which limited vision systems to those cameras with correspondingly lower data rates. 66 MHz boards, such as EDT's line of PCI, PMC and CompactPCI CameraLink capture boards, can provide maximum transfer rates of 220 megabytes per second, nearly the theoretical maximum supported by the 66 MHz bus specification, while staying within the smaller 32-bit form factor.

In the real world this translates to, for example, 47 fps for a 2048 x 2048 x 8-bit camera, or 716 fps for a 640 x 480 x 8-bit camera. This is a significant improvement over what was achievable with 33 MHz systems and PCI boards, and it can have a direct impact on the speed and efficiency of the vision systems in which they are installed.
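The frame rates quoted above follow from simple arithmetic: sustained camera-to-host throughput divided by bytes per frame. A quick sketch, assuming 8 bits per pixel and the roughly 220 MB/s figure cited:

```python
# Ceiling frame rate = sustained throughput / bytes per frame.

def max_fps(throughput_bytes_per_s, width, height, bytes_per_pixel=1):
    return throughput_bytes_per_s / (width * height * bytes_per_pixel)

sustained = 220e6  # ~220 MB/s sustained over 66 MHz / 32-bit PCI

print(round(max_fps(sustained, 640, 480)))    # -> 716, matching the quoted rate
print(round(max_fps(sustained, 2048, 2048)))  # -> 52, the theoretical ceiling
```

Note that the quoted 47 fps for the 2048 x 2048 camera sits a little below the 52 fps ceiling this division gives, which is consistent with per-frame overhead such as blanking and setup time.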

In the future, the 133 MHz PCI-X bus will no doubt become ubiquitous, and leading-edge companies such as EDT are already developing new-generation cards to take advantage of it, yielding a corresponding increase in the potential speed and resolution of high-performance machine vision systems.

Fiber-optic technology has also found its way into high-performance vision systems. It is often desirable to locate the camera farther than the 10 meters or so from the host that is the limit for CameraLink or AIA cabling. Examples include missile launch site imaging, remote submersibles, robotic cameras that go into hostile environments such as nuclear reactors, and assembly line vision systems. Remote solutions such as EDT's PCI RCX system can couple cameras that have standard digital I/O to a host computer via fiber-optic cable that can be several kilometers long, with the added bonus of electrical isolation.

Mike Kelley shares insights on neural net-based smart cameras:

What is happening in the underlying technology used by vision engines of one type or another?
A trend has been to put more and more processing horsepower into the camera itself. Limitations include the size and number of components. Another critical issue has become the heat dissipation requirements of traditional chip sets. The usual tradeoff is a lower-capability processor running slower in the smart camera.

JAI PULNiX has approached this problem differently. Using a Zero Instruction Set Computer (ZISC), an entire neural network architecture has been built and packaged into the ZiCAM smart camera. IBM and Silicon Recognition developed the ZISC chips specifically for the purpose of performing super-high speed pattern recognition. JAI PULNiX has added to the basic hardware with a Multi-Media Recognition Engine (MUREN(tm)), which performs real-time feature extraction and manages the ZISC's data flow.

How will this affect vision engine performance?
By implementing a hardware version of a neural network processor in silicon, a massively parallel architecture can complete processing in approximately 850 nanoseconds. This equates to the ZiCAM processing a whopping 2.2 giga-instructions per second! What's unique about this is the scalability of the architecture. Additional neurons (ZISC chips) can be added in parallel, without affecting the processing time of the camera.

How will these advances ultimately impact the machine vision industry?
What's significant about using a massively parallel, neural network architecture is not the high performance, but instead the characteristic of "Teach by Example." The ZiCAM is the first smart camera that is fully "show-and-go." The ZISC allows for "overlapping" application domain knowledge that cannot be separated by traditional grayscale, gradient and threshold techniques. If desired, the ZiCAM can be allowed to continue to refine the trained model while the system is online.
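The "teach by example" behavior can be sketched as a prototype-based classifier: each stored example (neuron) holds a feature vector and an influence radius, and a new pattern is matched by distance. This is a hedged software illustration of the general idea; the class, the Manhattan metric and the radius value are illustrative choices, not the actual ZISC or MUREN design. In silicon, all prototypes are compared in parallel, which is why adding neurons does not slow the search; this Python loop only mimics that behavior serially.

```python
# Prototype classifier in the spirit of "teach by example": store
# examples directly, classify by nearest in-radius prototype.

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

class PrototypeClassifier:
    def __init__(self, radius):
        self.radius = radius       # influence field around each prototype
        self.neurons = []          # (prototype vector, label) pairs

    def teach(self, vector, label):
        """Store an example; no iterative training pass is needed."""
        self.neurons.append((vector, label))

    def classify(self, vector):
        """Return the label of the nearest prototype within the radius."""
        best = min(self.neurons, key=lambda n: manhattan(vector, n[0]),
                   default=None)
        if best is not None and manhattan(vector, best[0]) <= self.radius:
            return best[1]
        return None  # pattern falls outside every influence field

clf = PrototypeClassifier(radius=10)
clf.teach([0, 0, 0], "background")
clf.teach([200, 200, 200], "part")
```

Teaching is a single store operation, which is what makes "show-and-go" setup possible, and the same mechanism supports online refinement: new examples can simply be added while the system runs.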

As the marketplace adopts ZISC, there are logical applications and extensions, such as color and megapixel sensors, which are a perfect fit for the technology. ZISC technology truly enables the ZiCAM to "think like a human and work like a machine."

Marc Sippel observes the impact improving compute power has:

What is happening in the underlying technology used by vision engines of one type or another?
The expanded use of parallel processors and greater available memory is becoming a widely used technique in the vision engines of vision sensors to increase performance and capability. This is becoming feasible due to the continually falling cost of processors and of the memory they use or contain.

How will this affect vision engine performance?
For Omron, using parallel processors in our vision sensors means off-loading tasks not critical to the actual measurement to increase performance speed and allow a greater number of measurements to be performed in a shorter period of time. Tasks such as communications, I/O control and video monitoring can be handled separately now to minimize any effects on the measurements. Increased memory in our processors also allows for more information to be processed from multiple measurements, increasing measurement capability.
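The offloading idea described above can be sketched with a simple producer-consumer handoff: the measurement loop stays on one worker while communications run on another, so slow I/O never stalls inspection. This is an illustrative sketch of the general technique, using Python threads and a queue; it is not Omron's implementation, and the `measure` function is a stand-in.

```python
# Measurement loop hands results to a separate communications worker via
# a queue, so transmitting results never blocks the next measurement.
import queue
import threading

def measure(frame):
    return sum(frame) / len(frame)  # stand-in for a real measurement

def comms_worker(outbox, sent):
    """Drain the queue and 'transmit' each result (here: record it)."""
    while True:
        item = outbox.get()
        if item is None:            # sentinel: shut down cleanly
            break
        sent.append(f"RESULT {item:.1f}")

outbox = queue.Queue()
sent = []
worker = threading.Thread(target=comms_worker, args=(outbox, sent))
worker.start()

for frame in ([10, 20, 30], [40, 50, 60]):
    outbox.put(measure(frame))      # measurement loop never waits on I/O

outbox.put(None)                    # signal the worker to finish
worker.join()
```

The design choice worth noting is the sentinel-terminated queue: the measurement side only ever enqueues, so its timing is decoupled from however long transmission takes.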

How will these advances ultimately impact the machine vision industry?
For vision sensors, the use of multiple processors and increased memory will continue to increase their capability while lowering the cost of vision. Because vision sensors are fundamentally easy to implement and use, the shift of vision sensor users away from PC-based technology will continue. It will also increase acceptance in new applications inside and outside the traditional vision markets.

Coming at it from the high-end requirements perspective, Joe Germann observes:

What is happening in the underlying technology used by vision engines of one type or another?
In SKY's application areas, there is an emergence of several key technologies:

  • The architecture of the computers we build today is naturally decomposed into the two key elements that high-end imaging applications require to be successful in a scalable and cost-effective manner:
    • A Data Acquisition Server (DAS), which is responsible for the hardware I/O interface devices, the processors to control them, application- and system-level controls, and the buffering and data routing necessary for efficient image computations.
    • An Image Processing Server (IPS), which takes its data feed from one or more DASs and performs the algorithm-intensive computations on the image segments.
  • The Imaging Computers that we build and sell are generally used in a high-value situation where computer up-time and reliable operation are mandatory.  Applications such as Airport Security, Medical Imaging, and Industrial Imaging demand a high application availability and low cost of ownership.

DAS and IPS elements are integrated in an imaging application solution utilizing a hardware architecture based upon highly reliable Infiniband interconnects and a software architecture based upon a High Application Availability model built upon a Linux foundation.

How will this affect vision engine performance?
High-end imaging applications can now scale at the input data stream and/or at the computation level. This allows systems to be built with "just the right amount of hardware resources" needed to solve the problem, with additional resources kept "hot online," available should a fault occur in any processing element.

How will these advances ultimately impact the machine vision industry?
High Value imaging applications will require an unprecedented level of application reliability and up-time.  This must be designed into the system up-front and not added as an afterthought.

The need to address both sensitivity and throughput in our application spaces demands that we deliver systems with a high degree of scalability and a low-cost-of-ownership model.

Endre Toth provided the following insights from the perspective of smart cameras:

As evidenced by significant growth in both quantities and dollar volume, smart camera technology has demonstrated its value to the machine vision market. This success, I believe, is due to the following:

  • The large information content of images is a problem for systems sold in high volumes for applications found widely across one or more manufacturing industries.
  • Commercial and industrial systems must be cheap, and this is a challenge given the large memory and high data-transfer-rate requirements of many machine vision applications. One answer to this dilemma is data compression.

Smart cameras are another answer to this dilemma, offering a "vision engine" with remarkable feature-extraction functionality in a camera-like box, at low cost and low power, without sacrificing data. Smart cameras process images as they are captured, with no storage or only limited temporary storage. With this architecture there is no need for wide-bandwidth signal transfer or conversion, and since operations are performed "on the fly," there is generally no need to store the images. This translates into tremendous cost advantages.
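The on-the-fly architecture can be sketched as streaming feature extraction: a feature is accumulated line by line as the sensor delivers data, so no full frame ever needs to be held in memory. The bright-pixel-area feature below is an illustrative choice, not a specific Vision Components algorithm.

```python
# Streaming feature extraction: consume scan lines one at a time,
# keeping only a running total, so no frame buffer is ever allocated.

def count_bright_pixels(line_source, threshold=128):
    """Accumulate the bright-pixel area across a stream of scan lines."""
    area = 0
    for line in line_source:            # each line is discarded after use
        area += sum(1 for p in line if p > threshold)
    return area

def fake_sensor(rows, cols, bright_rows):
    """Generator standing in for sensor readout; lines exist one at a time."""
    for r in range(rows):
        yield [255 if r in bright_rows else 0] * cols

# 640 x 480 image with two fully bright rows at the top.
area = count_bright_pixels(fake_sensor(480, 640, {0, 1}))
```

Because the source is a generator, peak memory is one scan line rather than a whole frame, which is the cost advantage the architecture description above points to.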

The use of smart cameras is broadening, creating more and more niches. In many applications, the fact that a smart camera is behind the system solution and at its heart is totally transparent. Smart cameras appear as application-specific solutions: optosensors, optosorters, data matrix readers, Postnet readers, color verifiers, surveyor units, etc. In some fields, such as optosensing, smart camera solutions are a substitute for traditional optosensors.

The embedded software inside of the unit determines the type or nature of the smart camera. Generally most companies offer their smart camera product with certain embedded software. In reality, its software features define the smart camera.

Vision Components offers its smart cameras as open systems, offering full flexibility for OEMs. The availability of the development system has also opened the door for software companies to develop and offer many different solutions on the same hardware platform. This appears to be a unique approach.

New, more highly integrated processor chips and progress in FPGA technology guarantee that smart cameras will perform at least as fast as, and possibly faster than, a PC-based machine vision system. The market demands smaller, less expensive, more compact systems operating on less power. The underlying DSP and FPGA technology employed in smart cameras will continue to improve in price/performance, leading to similar improvements in the smart cameras themselves. Smart camera products such as the new low-cost units recently announced by Vision Components eliminate the argument that machine vision is too expensive a solution for an application.

Vic Wintriss made the following general observations:

Overall, the smart camera is revolutionizing the machine vision business. As smart cameras evolve and as "Wintel" continues to take over the world, in 10-15 years one can envision a complete smart camera on a chip: an imager section offering 10,000 x 10,000-pixel resolution, with supporting integrated-circuit electronics whose compute power can perform the image processing and analysis, programmable for many applications. All this at an acceptable price point.

The second major development revolutionizing machine vision is the LED. Advances in LED lighting in terms of color and brightness have recently been dramatic, and super-bright LEDs are already available.

Brightness continues to improve, as does efficiency. Many advances are emerging in response to the demands of other markets: automotive lighting, household lighting, traffic lights, etc. Machine vision will be able to make effective use of these advances, taking advantage of the low prices that result as new products address high-volume applications. Cost-effective LED arrangements can be custom designed to satisfy many machine vision applications. Wintriss is offering a super-bright linear LED light for web scanning applications.

On the sensor side, as with the lighting side, solid-state technology is having major impacts on price and performance. Another direction appears to be the elimination of the frame grabber in many applications as digital cameras become more widely used.
