Machine Vision Technology Tackles Outdoor Applications
by Winn Hardin, Contributing Editor - AIA Posted 10/05/2010
Traditionally, outdoor applications have not been the specialty of machine vision systems. Machine vision excels under controlled conditions when illumination sources, position, and other factors are fairly constant. Unfortunately, in natural environments, change is the only constant.
First, the sun is a fantastic light source, but it is extremely bright, delivering the light equivalent of a 10,000 to 40,000 W lamp. Furthermore, fog, clouds, rain, and other natural phenomena can extend the low end of that brightness range while adding airborne interference such as water droplets and other particulates.
Despite these challenges, machine vision companies have made strides toward handling the huge dynamic range required to acquire images with sufficient detail for image processing algorithms to locate objects and make meaningful automated decisions. Examples include smart lenses, multi-sensor and multispectral cameras, and new algorithms based on color models developed with outdoor applications in mind.
Too Big for 12 Bits
One way to overcome the challenge of insufficient dynamic range in outdoor applications is to break the problem down into manageable pieces.
“The primary challenge for outdoor applications is dealing with the Sun’s high light levels along with strong reflections and shaded areas,” explains Steve Kinney, Director of Technical Pre-Sales and Support for JAI Inc. USA (San Jose, California). “The human eye has automated responses to deal with these conditions. An auto-iris lens can help, but in some cases, mechanical solutions are not the best option.”
JAI offers several ways to help the system designer working on a machine vision application in outdoor or otherwise unconstrained environments. Collectively, JAI refers to these as High Dynamic Range (HDR) applications.
First, JAI offers twin-sensor cameras that use a prism, similar to 3-sensor color cameras. Unlike traditional color cameras, which allocate each sensor to a specific color band, the AD-081 camera splits the scene into bright and dark segments and uses machine vision software to fuse the two images together.
“A good machine vision CCD camera provides around 60 dB, or roughly 10-bits of usable dynamic range,” continues Kinney. “If we stack the output of two CCDs, we can theoretically double the dynamic range, but in reality, it’s closer to 110 dB because of the noise floor. We’re talking about linear dynamic range here, not nonlinear dynamic range where the CCD uses look up tables or logarithmic compression to increase the dynamic range of specific regions rather than across the entire visible spectrum of the image. You lose precision if you go with a nonlinear dynamic range camera. You can display the image on a monitor with a nonlinear output, but you don’t have the raw values underneath to use with image processing algorithms.”
JAI also offers sequence triggering, which uses the automated gain and shutter features on their cameras to capture multiple images with different gain and shutter values with a few milliseconds in between exposures. Software then combines the images into a single image with greater contrast range across the entire image. Alternatively, multiple images captured by sequence triggering can be left “unfused” and can simply be analyzed separately to support outdoor applications where lighting conditions create shadows or reflections necessitating multiple exposures of the same scene. Rather than forcing the user to find a sub-optimal “middle ground” for a single image, the Sequence Trigger mode lets users capture a set of images with the proper exposure for the area being inspected. Triggers can be generated in response to objects as they pass, or can be used in multi-step inspections where the camera moves over the object in a pre-determined route.
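The fusion step can be illustrated with a minimal exposure-fusion sketch, assuming a simple "well-exposedness" weighting; this is an illustration of the general technique, not JAI's actual fusion algorithm:

```python
import numpy as np

def fuse_exposures(frames):
    """Fuse differently exposed frames of the same scene into one image.

    Each frame is a float array scaled to [0, 1]. Pixels are weighted by
    well-exposedness -- how far they sit from the clipped extremes -- so
    each region of the output draws mostly on the frame that exposed it
    best. (Illustrative only; not a vendor algorithm.)
    """
    frames = [np.asarray(f, dtype=float) for f in frames]
    # Gaussian weight centered on mid-gray; near-black/near-white pixels count less.
    weights = [np.exp(-((f - 0.5) ** 2) / (2 * 0.2 ** 2)) + 1e-6 for f in frames]
    total = sum(weights)
    return sum(w * f for w, f in zip(weights, frames)) / total

# Short exposure preserves the bright region; long exposure preserves the shadow.
short = np.array([[0.45, 0.90], [0.02, 0.05]])
long_ = np.array([[0.98, 1.00], [0.30, 0.55]])
fused = fuse_exposures([short, long_])
```

In the shadow pixel, the long exposure dominates the weighted average; in the highlight, the short exposure does, which is exactly the "proper exposure for the area being inspected" idea behind the sequence trigger.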
A third method for dealing with outdoor lighting situations is to coordinate multiple auto exposure capabilities, such as auto gain, auto shutter, and auto iris, into a single function that more closely mimics the response of the human eye. JAI has productized this concept as Automatic Level Control (ALC) and has enhanced the feature with programmable settings, such as when to switch between the different capabilities and how fast to react to changing light levels. Today, the company offers several cameras with ALC built in, but it also offers ALC as a post-processing function within its software development kit (SDK), enabling the feature to be used with all of its Gigabit Ethernet-enabled cameras.
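Such a coordinated auto-exposure scheme can be sketched as a feedback loop that drives the scene's mean brightness toward a target, preferring shutter adjustments (which add no noise) and falling back to gain only when the shutter is exhausted. All names, limits, and step sizes below are illustrative assumptions, not JAI's ALC implementation:

```python
def alc_step(mean_level, shutter_us, gain_db,
             target=128, shutter_max=33000, gain_max=24.0, rate=0.1):
    """One control step of a simplified automatic-level-control loop.

    Drives the scene's mean pixel level toward `target` by lengthening
    the shutter first and raising gain only once the shutter hits its
    limit -- mimicking the switch-over behavior described for ALC.
    Returns the updated (shutter_us, gain_db) pair.
    """
    error = target - mean_level               # positive => scene too dark
    if error > 0 and shutter_us < shutter_max:
        shutter_us = min(shutter_max, shutter_us * (1 + rate))
    elif error > 0:
        gain_db = min(gain_max, gain_db + rate * 10)
    elif error < 0 and gain_db > 0:
        gain_db = max(0.0, gain_db - rate * 10)  # shed gain before shutter
    elif error < 0:
        shutter_us = max(1.0, shutter_us * (1 - rate))
    return shutter_us, gain_db
```

Presetting `target`, `rate`, and the limits per lighting condition corresponds to the programmability Kinney describes below.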
“In addition to outdoor machine vision applications, there are applications such as surveillance and traffic monitoring where light levels change drastically throughout the day and the camera may or may not be controllable from a remote location. This is why we built programmability into the ALC program so you can preset ALC parameters based on various conditions and have the camera react to conditions autonomously,” said Kinney.
Pentax Imaging Company applies similar techniques in its PENTAX Atmospheric Interference Reduction (PAIR) lenses. The lenses, which are designed to work with many different machine vision cameras, include image-processing circuitry installed directly within the lens housings, with step-by-step signal processing tailored to each lens's characteristics, according to company officials. PAIR technology can greatly enhance images in surveillance applications where interference from solid particles such as snow, smoke, or sand, or from liquid particles such as fog and rain, degrades the image.
Turning Bright Scenes to an Advantage
Bright, sunlit scenes provide a tremendous amount of information that can now be exploited by technologies that examine broad spectral ranges. By dividing the intense signal into many narrow spectral segments rather than just three RGB channels, engineers and scientists achieve excellent data capture while maintaining high signal-to-noise ratios. These cameras, referred to as multispectral and hyperspectral cameras, provide a detailed spectrum for each pixel in the image. Real-time data from these cameras can be thought of as data “cubes”: essentially a stack of images of the same scene, with each layer of the stack “seeing” a specific band of the spectrum.
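The cube idea maps naturally onto a three-dimensional array, with two spatial axes and one spectral axis. A small sketch with illustrative dimensions:

```python
import numpy as np

# A hyperspectral "cube": a stack of co-registered images, one layer per
# narrow spectral band. The dimensions here are illustrative.
rows, cols, bands = 480, 640, 120
cube = np.zeros((rows, cols, bands))

spectrum = cube[100, 200, :]     # the full spectrum at a single pixel
band_image = cube[:, :, 60]      # one spectral "layer" of the whole scene

print(spectrum.shape, band_image.shape)  # (120,) (480, 640)
```

Indexing along the spectral axis recovers a conventional monochrome image for one band; indexing a pixel recovers its spectrum, which is what downstream classification operates on.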
AIA member Resonon Inc. (Bozeman, Montana) specializes in hyperspectral imaging hardware and software for government and commercial interests. “Within the broad range of outdoor devices and systems using Resonon technology, there is significant interest in agricultural applications,” explains Steve Harvey, Operations Manager at Resonon. “Many of these are airborne applications that involve monitoring plant health. Hyperspectral data can be used to identify plant stress well before it is visible to the naked eye. Our technology has also been used for outdoor applications such as identifying invasive weed infestations, monitoring melting of the Greenland Ice Cap, and aerial identification of marijuana in Central America. From sorting plastics to satellite calibration, Resonon’s hyperspectral technology provides solutions across a broad field of applications both indoors and out.”
Once considered a technology exclusively for governments because of the high price of the systems and processing hardware, hyperspectral imaging has become cost-efficient, Harvey says, thanks to advances in imaging hardware, software, processing, and I/O.
“Hyperspectral imaging generates images with spectral resolution that are orders of magnitude greater than traditional machine vision systems,” Harvey says. “This enables our customers to identify subtle changes in color that are critical in applications such as identifying disease, ripeness, or damage; all major factors in quality and costs of production. In addition to the imaging hardware you need software that can classify all that data and deal with the immense size of the data cubes. That’s where the breakthroughs have come in recent years. Improvements in every step of the process from CCDs, multicore processors, and I/O have all made this possible…innovations in our software and the algorithms used for analysis of the image data enable real-time classification in outdoor applications.”
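One common, simple way to classify cube pixels against reference spectra, illustrating the kind of per-pixel analysis Harvey describes, is the spectral angle mapper; this is a textbook technique offered here as an example, not a claim about Resonon's algorithms:

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Angle (radians) between two spectra; smaller means more similar.

    The spectral angle is insensitive to overall brightness, which helps
    under variable outdoor illumination.
    """
    cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def classify(pixel, library):
    """Assign the pixel to the library spectrum with the smallest angle."""
    return min(library, key=lambda name: spectral_angle(pixel, library[name]))

# Hypothetical 4-band reference spectra for illustration only.
library = {
    "healthy": np.array([0.1, 0.3, 0.8, 0.9]),
    "stressed": np.array([0.2, 0.4, 0.5, 0.4]),
}
result = classify(np.array([0.05, 0.15, 0.42, 0.46]), library)
print(result)  # -> healthy (same spectral shape as "healthy", just dimmer)
```

Because the comparison is angular, a dimmer pixel with the same spectral shape still matches its class, one reason such measures suit scenes where illumination cannot be controlled.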
Flexibility – the ability to sense and react to changing contextual conditions – is critical to all machine vision applications, but particularly outdoor applications in uncontrolled environments. While models such as the Buluswar and Draper daylight illumination models have gone a long way to give machine vision the chance to succeed in certain outdoor applications, more intelligence is needed before machines can function with the same visual acuity in uncontrolled environments as a human taking a leisurely stroll in the park. But it’s just a matter of time.