Machine Vision Trends – 2006

by Nello Zuech, Contributing Editor - AIA

As I have noted in the past, the foremost trend stems from the progress that has been made in reducing the size of integrated circuits. One can say this progress has, in turn, been a direct result of machine vision itself. A bit circuitous, but that’s the way it has been. With machine vision providing alignment, monitoring work-in-process and performing the inspection associated with process diagnostics and yield management, it has been possible to improve manufacturing processes, leading to finer and finer line widths.

At the same time, it is these finer line widths that have yielded imaging sensors with more and more photosites, giving better resolution, along with the additional compute power required to handle the output from these denser imagers. With advances in UV and DUV and ultimately e-beam-based processing, integrated circuits will continue on the path described by Moore’s Law.

Beyond these trends, what else is happening? The benefits of improving compute power, whether in the form of microprocessors, DSPs or FPGAs, together with faster interfaces, will:

  • Enhance the performance of configurable machine vision products – intelligent frame grabbers, smart cameras, embedded vision processors and even vision sensors.
  • Enhance the performance of application-specific machine vision systems.
  • Reduce the need for embedded board products as more and more of the compute power is in the PC.
  • Reduce the need for proprietary image processing/machine vision hardware.
  • Make machine vision more of a software game than a hardware game.
  • Make integration easier and less risky.
  • Expand the range of applications that can be addressed.
  • Commoditize applications that have already been addressed.

Let’s look at each aspect of machine vision.

Computers
Processing speed continues to improve, 64-bit processing is arriving, and PC buses are getting faster all the time. DSPs and FPGAs are also getting easier to use, so more machine vision functions are being performed in the camera itself. In the most compute-intensive applications, these easier-to-use compute technologies are resulting in ever more intelligent frame grabbers. The result will be better performance, as new software tools are developed using algorithms that could not be considered before because they were too compute-intensive.

Frame Grabbers
More of the simple applications will be addressed with smart cameras; however, the frame grabber will not disappear in the near future. It will continue to find use in the more demanding applications – high-resolution cameras, line scan cameras, multiple cameras, etc. Surviving designs will likely incorporate more compute power. Frame grabbers will continue to migrate to newer, higher-speed buses (PCI Express, fiber optic) and to whatever buses emerge beyond them.

Cameras
CMOS will take increasing market share from CCD sensors. Driven by demand in consumer products, the properties of CMOS sensors will continue to improve, making them viable for more machine vision applications. Again driven by consumer demand, the resolution of imaging sensors will continue to improve, and megapixel cameras will give way to super-megapixel cameras. Faster clock speeds will make it possible to operate these super-megapixel cameras at 15 Hz and faster frame rates. Higher-resolution cameras generally deliver better performance and often allow fewer cameras to be used.

It is not clear which of the camera connectivity standards will be the winner. For many applications, speed and resolution are not critical, so consumer-driven connectivity standards such as FireWire and USB 2 will be quite adequate. Where speed, higher resolution and bit depths greater than 8 bits are required, Gigabit Ethernet and CameraLink will be competing. In all cases, these options have been made possible by the development of standards; with standards in place, products have been developed and, where appropriate, the respective connectivity options will be embraced. They all make implementing a machine vision system easier.
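
To make the interface tradeoff concrete, here is a rough, illustrative data-rate check (the camera is hypothetical and the interface figures are nominal peak rates, not sustained throughput): a 4-megapixel, 8-bit monochrome camera running at 15 Hz is compared against the interfaces mentioned above. It shows why USB 2 and FireWire are adequate for more modest cameras while Gigabit Ethernet and CameraLink compete at the high end.

```python
# Back-of-the-envelope data-rate check for a hypothetical 4-megapixel,
# 8-bit monochrome camera running at 15 frames per second.
# Interface figures are nominal peak rates, not sustained throughput.

MB = 1_000_000  # decimal megabytes, good enough for a rough comparison

def required_rate(width, height, bytes_per_pixel, fps):
    """Raw video data rate in bytes per second."""
    return width * height * bytes_per_pixel * fps

camera_rate = required_rate(width=2048, height=2048, bytes_per_pixel=1, fps=15)

# Nominal peak rates of the interfaces discussed above (bytes/s).
interfaces = {
    "USB 2.0 (480 Mbit/s)":        480e6 / 8,
    "FireWire 1394a (400 Mbit/s)": 400e6 / 8,
    "Gigabit Ethernet":            1000e6 / 8,
    "CameraLink (base config)":    255 * MB,
}

print(f"Camera needs about {camera_rate / MB:.0f} MB/s")
for name, capacity in interfaces.items():
    verdict = "enough headroom" if capacity > camera_rate else "too slow"
    print(f"{name:32s} {capacity / MB:6.0f} MB/s -> {verdict}")
```

The hypothetical camera needs roughly 63 MB/s of raw bandwidth, which already exceeds the peak rates of USB 2 and FireWire but fits comfortably within Gigabit Ethernet or CameraLink.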

Helping the migration to frame-grabber-free solutions will be cameras that do not have the full functionality of a smart camera but can nevertheless perform application-driven pre-processing on the raw image data at real-time rates, as well as absorb other frame grabber functions into the camera.
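
The article does not name specific pre-processing operations, so the routine below is only a minimal, hypothetical sketch of the idea: a flat-field correction followed by optional thresholding, applied to raw data before it leaves the camera.

```python
import numpy as np

def preprocess_raw_frame(raw, dark, flat, threshold=None):
    """Illustrative in-camera pre-processing (hypothetical routine):
    flat-field correction followed by optional thresholding, applied
    to the raw data before it is transmitted.

    raw, dark, flat -- 2-D arrays: live frame, dark frame, flat-field frame.
    """
    raw = raw.astype(np.float32)
    gain = np.maximum(flat.astype(np.float32) - dark, 1.0)   # per-pixel gain map
    corrected = (raw - dark) / gain * gain.mean()             # remove offset and gain
    corrected = np.clip(corrected, 0, 65535).astype(np.uint16)

    if threshold is None:
        return corrected                                  # corrected 16-bit image
    return (corrected > threshold).astype(np.uint8)       # 1-bit result, far less data
```

Shipping an 8-bit or 1-bit result instead of 16-bit raw data is exactly the kind of relief that also eases the interface bottleneck discussed above.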

What will tomorrow bring? Fibre Channel interfaces may become increasingly important where multiple cameras and long distances are required, and wireless cameras will undoubtedly become suitable for more applications. Infrared cameras will ultimately have the properties one finds in visible-light cameras today. These infrared cameras will find more machine vision applications as thermal and IR artifacts are recognized as key parameters for process diagnostics in process-intensive industries. Cameras with even greater sensitivity in the ultraviolet will probably become the camera of choice for critical dimension measurements.

Optics
Given the higher-resolution imaging sensors emerging, more optics with appropriate resolution and distortion properties will be developed, addressing more and more requirements. This is true of both area and line scan imagers. Telecentric optics are well recognized as the optics to use in applications involving critical dimension measurement, or where environmental issues like vibration can yield magnification errors. Larger and larger format telecentric optics will continue to emerge, hopefully at reasonable cost. Driven by consumer products, better plastic optics will emerge, yielding lower-cost, higher-quality optics for machine vision applications.
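
As a small worked example of the vibration point (the numbers are chosen only for illustration): with a conventional, non-telecentric lens the apparent size of a feature scales roughly as the inverse of the working distance, so even a 1 mm shift at a 200 mm standoff changes the measured size by about half a percent, whereas an object-space telecentric lens holds magnification essentially constant over its depth range.

```python
# Pinhole-model illustration of magnification error caused by a working-distance
# shift (e.g. vibration). Numbers are arbitrary; a telecentric lens avoids this
# error by design because its magnification does not depend on distance.

nominal_z_mm = 200.0      # nominal working distance
shift_mm = 1.0            # vibration-induced change in working distance
part_width_mm = 10.0

# Apparent size on the sensor scales roughly as 1/z for a conventional lens.
apparent_nominal = part_width_mm / nominal_z_mm
apparent_shifted = part_width_mm / (nominal_z_mm + shift_mm)

error_pct = 100 * (apparent_nominal - apparent_shifted) / apparent_nominal
print(f"Apparent-size (magnification) error: {error_pct:.2f}%")
# About 0.5% here -- already significant against tight dimensional tolerances.
```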

Lighting
LED-based lighting is clearly the trend of the future. As LEDs have become more efficient and brighter, they are finding their way into more and more machine vision applications. Useful LEDs with ultraviolet output will make it possible to make more accurate dimensional measurements. LEDs will make it easier to configure cost-effective, application-specific lighting arrangements that optimize performance. More lighting arrangements will come with automatic calibration to assure consistency throughout the life of the application. Blue lasers will also yield gains in the dimensional accuracy of 3D laser-scanner-based systems.

Software
Software has become more portable, and it will continue to become easier to use even as it gains more high-end functionality. With the advances in compute power, it will become possible to do more rigorous image processing at less cost, both in dollars and in compute time. Hence, software will become more “canned” – fixed suites of software targeted at specific applications. The net result is that users will need to know less and less about the underlying image processing algorithms used to address specific applications. One already sees this, as third-party software is now available for robot guidance, even sophisticated 3D robot guidance.

Software able to map the results of 3D machine vision scanning of a scene to CAD data files will run on ever-faster computers, making it possible to handle online dimensional measurement of complex geometric parts with a wide range of specularity in real time. With the higher resolution of super-megapixel cameras, dimensional measurements consistent with the accuracy requirements of more and more applications will be possible.
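
A production system would register the scan with something like ICP and compare against the CAD surfaces themselves; as a minimal sketch of just the comparison step, the hypothetical function below (assuming SciPy for the nearest-neighbour search) measures each scanned point’s deviation from points sampled off the CAD model and checks them against a tolerance.

```python
import numpy as np
from scipy.spatial import cKDTree

def deviation_report(measured_points, cad_points, tolerance_mm):
    """Compare an already-registered 3-D scan against points sampled from CAD.

    measured_points, cad_points -- (N, 3) and (M, 3) arrays in the same frame.
    Returns per-point deviations and a pass/fail flag.
    """
    tree = cKDTree(cad_points)                   # nearest-neighbour lookup on CAD
    deviations, _ = tree.query(measured_points)  # distance to nearest CAD point
    return deviations, bool(np.all(deviations <= tolerance_mm))

# Hypothetical usage: a flat 10 mm square sampled from CAD vs. a noisy scan of it.
cad = np.array([[x, y, 0.0] for x in range(11) for y in range(11)], float)
scan = cad + np.random.normal(scale=0.02, size=cad.shape)
devs, ok = deviation_report(scan, cad, tolerance_mm=0.1)
print(f"max deviation {devs.max():.3f} mm, within tolerance: {ok}")
```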

More systems will embed neural-net and fuzzy-logic-based decision-making software, better matching the performance of human inspectors in applications where decisions rest on subjective scene analysis.
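
As a toy illustration of the fuzzy-logic side of this (the features, membership limits and rule weights below are invented purely for illustration), a subjective call such as blemish severity can be graded with overlapping membership functions rather than a single hard threshold; in practice the limits would be tuned against examples graded by human inspectors.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 at a, rising to 1 at b, back to 0 at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def blemish_verdict(contrast, area_mm2):
    """Fuzzy grading of a blemish from two measured features (illustrative only)."""
    faint  = triangular(contrast, -0.1, 0.0, 0.4)
    strong = triangular(contrast,  0.2, 1.0, 1.8)
    small  = triangular(area_mm2, -1.0, 0.0, 2.0)
    large  = triangular(area_mm2,  1.0, 5.0, 9.0)

    # Simple rule base: strong-and-large blemishes reject; faint-and-small accept.
    reject = max(min(strong, large), min(strong, small) * 0.5)
    accept = max(min(faint, small), min(faint, large) * 0.5)
    return "reject" if reject > accept else "accept"

print(blemish_verdict(contrast=0.8, area_mm2=4.0))   # -> reject
print(blemish_verdict(contrast=0.1, area_mm2=0.5))   # -> accept
```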

Overall
More flexible and powerful vision tools will be developed. This includes more powerful color-based and 3D-based processing, making machine vision more like human vision. It is only a matter of time until standard machine vision incorporates both color and 3D routinely, as both will essentially be free in terms of cost and compute power. Integration will become ever more transparent. Machine vision will be solving more and more applications in the X-ray and infrared portions of the spectrum.

With more processing capacity in servers today, we may see the market for smart cameras level off in favor of a distributed approach: dumb cameras or semi-smart cameras (those with some image preprocessing capability) feeding a centralized server able to handle the final processing for multiple cameras.
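
A minimal sketch of that topology, with simulated cameras standing in for real ones (all names and numbers here are illustrative): lightly pre-processed frames from several cameras funnel into one queue, and a single server-side loop performs the final processing for all of them.

```python
import queue
import threading
import numpy as np

# Toy sketch of the topology described above: several "semi-smart" cameras
# do only light preprocessing and push frames to a central server, which
# performs the final vision processing for all of them.

frame_queue = queue.Queue(maxsize=100)

def camera(cam_id, n_frames=5):
    """Simulated camera: acquires a frame, does minimal preprocessing, ships it."""
    for i in range(n_frames):
        raw = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)
        preprocessed = raw >> 1              # stand-in for in-camera preprocessing
        frame_queue.put((cam_id, i, preprocessed))

def server(n_expected):
    """Central server: final processing (here just a mean-intensity check)."""
    for _ in range(n_expected):
        cam_id, frame_no, frame = frame_queue.get()
        verdict = "pass" if frame.mean() > 50 else "fail"
        print(f"camera {cam_id} frame {frame_no}: {verdict}")

cams = [threading.Thread(target=camera, args=(i,)) for i in range(3)]
srv = threading.Thread(target=server, args=(3 * 5,))
for t in cams + [srv]:
    t.start()
for t in cams + [srv]:
    t.join()
```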

 
