

Cameras in Machine Vision Applications

by Nello Zuech, Contributing Editor - AIA

Cameras are to machine vision what eyes are to human vision – the input sensor to the computer processor. The camera selection for an application is often critical to the successful deployment of the machine vision system. One of the major challenges of the machine vision industry in the early 1980s was the absence of choice for cameras. While solid-state cameras were available, they were very expensive. Hence, many of the early systems were delivered with vacuum tube-based cameras. These were prone to drift, only came in RS-170 formats and otherwise posed challenges for reliable performance in a machine vision application.

It was not until the solid-state camera became a commercial item and volume sales emerged from the camcorder market that prices came down to the point where these cameras could be considered viable for machine vision applications. The security industry became an early adopter and drove prices down further. The challenge for the machine vision industry, however, was the absence of any features consistent with the requirements of many machine vision applications. By the late 1980s several camera suppliers had identified the machine vision market as one that merited requirements assessment.

This then resulted in cameras emerging with features consistent with the requirements of many machine vision applications: higher resolution, asynchronous scanning, exposure control, etc. The net result was that by the early 1990s machine vision systems were being deployed and performing reliably. Hence, much of the success and growth of the machine vision industry can be directly attributed to evolution in the cameras.

Today the camera companies are continuing to develop product refinements targeted at the machine vision market. These include digital cameras, cameras with software-based controls, cameras with higher resolution, etc.

What follows is a review of camera issues based on a “round table discussion” conducted over the Internet. Questions were forwarded to all the leading suppliers of cameras for the machine vision market, and the responses received are reflected in what follows. The participants in this email-based roundtable discussion included:

  • Joel Bisson - PixeLINK
  • Greg Combs – Redlake MASD, Inc.
  • Boris Doncov – Sony Electronics
  • Dave Gilblom – Alternative Vision
  • Bill Mandl - Amain Electronics
  • Terry Zarnowski – Photon Vision Systems

1.  How would you segment the camera technology used in machine vision from a technology perspective, not a market perspective?

Dave Gilblom suggests the following segmentation:
A. Geometrically - Line Scan, Area Scan, TDI
B. Spectrally - Monochrome (one band - X-ray, UV, Vis, IR), Multispectral (color, more than one band of other types)
C. Sensor Technology - CCD (Frame, Interline, etc.), CMOS, CID, Muxed Array (InGaAs, etc)
D. Interface - Analog, Digital (CameraLink, parallel, FireWire, etc.)
E. Optical format - 1/10 inch to 35mm and others
F. Speed - Snapshot, standard video, fast
G. Level of integration - simple to smart
H. Environmental tolerance
I. Physical form - camera on a chip, board, box

Jeanne Miraglia: “Monochrome/color, analog/digital, standard-scan/progressive, imager size, imager type, etc.”

Joel Bisson: “Smart vs. dumb: Smart - the ability to communicate with and control the functionality of the camera via software, NOT processing within the camera. Dumb - not controllable via software; most analog cameras and some low-end digital cameras are in this category. Segmenting by photon-gathering technique (i.e., line scan, area, etc.) is only useful insofar as some techniques lend themselves more toward certain applications than others.”

Terry Zarnowski: “This is a fairly open question. Sensor technology: we do see a drive to use CMOS sensors instead of the traditional CCD for many reasons, the chief of which is that most CMOS sensors do not 'bloom' or 'streak' under excessive light conditions as CCDs tend to do. This allows machine vision systems to better tolerate environmental lighting extremes as well as more easily accommodate changes in luster or reflectivity of the object being inspected.

Frame grabber: we also see a strong drive to get rid of the added cost and complexity of the frame grabber for mid- to low-end machine vision tasks. The current trend for these systems is to use a Firewire interface, and we see that switching to USB 2.0 in the next 12 months.
Smart cameras: smart cameras are becoming better and cheaper, sometimes eliminating the need for a host processor altogether for simple tasks. As time goes on, however, these cameras are becoming smarter and able to handle more complex tasks.”

Bill Mandl refines this from the perspective of the imager itself: “The predominant technologies used for visible-light machine vision are typically monolithic sensors using photo-gate CCD (charge-coupled device), photodiode CMOS (complementary metal oxide semiconductor), photo-gate CMOS, and photo-gate CID (charge injection device).

At a much smaller level, these technologies are also used in hybrid sensors, with image intensifiers for low light or with special sensor materials, e.g. HgCdTe for infrared and other materials for UV and X-ray. All of these technologies use frame-based sampling that captures an analog picture value over an interval of time. They are analog in the sensor array, and the signal is read to the edge of the array in analog form. CMOS cameras may have analog-to-digital (A/D) conversion on the sensor chip next to the array; for CCD and CID, the A/D is usually on a separate chip. These cameras are then classified as digital. The same sensor arrays without the A/D on chip or in the camera box are classified as analog. From a technology perspective, however, analog and digital cameras built with the above technologies are the same and provide no difference in image sensitivity within the sensor. There is a fourth technology, MOSAD (multiplexed oversample A/D), that is digital in the sensor array. Each pixel responds with a digital value; in simplistic terms, this is photon-to-digital conversion.

MOSAD cameras are inherently digital. As with the other technologies, MOSAD can be built in monolithic or hybrid form for sensing in different spectral bands.”

Greg Combs: “Camera technology may be broadly categorized into three components: sensors, cameras and lenses. Requirements for lighting, resolution, dynamic range, sensitivity, frame rate, form factor, connectivity, and price are affected by these components.”

2.  What advice regarding cameras would you give to someone investigating a machine vision application?

“Buy/use quality camera since it is truly a key component of vision systems.”

Bisson:  “Number one - does the camera meet your needs?  Number two - more specifically, do you want to control the system?  If so, use cameras that are controllable.”

Zarnowski:  “First and foremost, don't be shy!  Provide as much detail as you can regarding your application and camera requirements to prospective camera suppliers. Work with them to choose the best product they have to meet your needs. If they do not have a solution, ask if they know of a competitor who may have.  Nobody likes to turn away business, but it is good practice to help someone along the way to their success.”

Mandl: “Most machine vision applications may use controlled lighting for scene illumination. Under these circumstances, the typical camera will be photon-shot-noise limited. A camera with a 50,000-electron well capacity and a 50-electron noise floor may be claimed by the manufacturer to have 10 bits of dynamic range. The reality is that the camera is photon-shot-noise limited to less than 8 bits over 50% of the dynamic range. So if the user really wants 10 bits over the dynamic range, he had better check for a well capacity of 1,000,000 electrons. This limitation is probably the least understood by the end user. Fixed-pattern noise is the next area of concern, especially if looking for low-cost CMOS cameras. A manufacturer claiming 10 bits at the pixel may not specify pixel-to-pixel variation (nonuniformity), which can limit the camera to below 7 bits. These are some of the more predominant issues that may not be resolved by reading the manufacturer's fly sheet.”
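Mandl's arithmetic can be checked directly. A minimal Python sketch (not part of the roundtable; the electron counts are the examples he gives above) of datasheet dynamic range versus shot-noise-limited bits:

```python
import math

def claimed_bits(well_capacity, noise_floor):
    # Dynamic range as often quoted on a datasheet: full well
    # divided by the read-noise floor, expressed in bits.
    return math.log2(well_capacity / noise_floor)

def shot_limited_bits(signal_electrons):
    # When photon shot noise dominates, SNR = signal / sqrt(signal)
    # = sqrt(signal); express that SNR in bits.
    return math.log2(math.sqrt(signal_electrons))

print(round(claimed_bits(50_000, 50), 1))      # datasheet claim: ~10 bits
print(round(shot_limited_bits(25_000), 1))     # at half well: ~7.3 usable bits
print(round(shot_limited_bits(1_000_000), 1))  # ~10 true bits needs a 1M-electron well
```

This reproduces his point: the 50,000-electron camera delivers under 8 shot-noise-limited bits at half well, while a true 10 bits requires on the order of a million electrons.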

Gilblom: “Figure out what you need to sense at the focal plane and then pick a camera consistent with that. This is not trivial and the camera manufacturers rarely answer all the questions posed by such an analysis so just get some cameras and test.”

Combs: “Try it before you buy it.”

3.  What does one applying machine vision have to know about their application (scene) that can influence the selection of the specific camera? Or, how does one decide which camera type is most suitable for a specific application?

Combs: “From the camera’s perspective, everything matters. The biggest factors are the objects themselves and the criteria by which they are judged. These items directly impact decisions concerning lighting, motion and color, which in turn affect sensor, camera and lens requirements. Environmental aspects such as dust, vibration and heat, or mechanical constraints such as working space and distance greatly impact what works.”

Mandl: “The three most significant conditions a user must understand about his problem are lighting, scene spectral range and target velocity. Red, green, blue and 30 Hz are historical camera characteristics chosen to closely match the response of the human eye. Machine vision based on silicon sensors can cover UV to NIR (300 nm to 1,000 nm). The spectral response of the targeted objects to be identified should set the spectral response for the camera. If the temporal response of the moving target exceeds 15 Hz, then the frame or sample bandwidth must be higher than 30 Hz. Finally, more intense light is always better. If the camera can provide the well capacity and dynamic range, objects will be more identifiable in brighter light. Spatial resolution (pixel count) can provide better object definition, but at a data-volume cost. A camera with the smallest pixel count that just identifies the targeted object is ideal. After all, a snail only requires a handful of pixels to recognize danger or food.”

Zarnowski: “Experience here counts a great deal in picking the attributes of a camera that will best meet the needs of the application. Questions that require an answer are: Color needed? Resolution needed? Total expected variance in background scene lighting? Total expected variance in object lighting and reflectance? Field of view size? Best lighting type (reflective, background, oblique, etc.)? Best wavelength(s) to use? Accuracy of measurement? The answers can be used to pick the sensor resolution, sensor type (Bayer-pattern color, 3-chip color, or monochrome), responsivity (sensitivity) at the target wavelengths, and S/N.”

Gilblom: “Spectral requirements, light level, speed, dynamic range, resolution, structure, depth of field,” as well as answers to the following questions:

  • What do you have to see?
  • How often do you have to see it?
  • Will there be parts of the scene that might affect the whole image (specular reflections, etc.)?
  • How does the object move?

Miraglia:  “Is the object moving or still? What is the smallest element inside the object (a line, dust, etc.), and how does it compare with the total image size when specific lensing (angle of view) is used?”
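Miraglia's smallest-element question reduces to a back-of-envelope calculation. A hypothetical helper sketches it in Python; the three-pixels-per-feature rule of thumb is an assumption for illustration, not something stated in the discussion:

```python
import math

def pixels_needed(fov_mm, smallest_feature_mm, pixels_per_feature=3):
    # Span the smallest feature with a few pixels so it survives
    # sampling and modest optical blur (rule-of-thumb assumption).
    return math.ceil(fov_mm / smallest_feature_mm * pixels_per_feature)

# A 100 mm field of view with 0.2 mm defects:
print(pixels_needed(100, 0.2))  # 1500 pixels across the field
```

The result feeds directly into sensor resolution and lens selection for the field of view in question.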

4.  What are the properties of cameras that can influence the selection of a specific machine vision camera arrangement? How important are S/N and minimum light sensitivity?

“Anything. It could be ruggedness of the lens coupling or any other tiny issue. It depends entirely on what needs to be measured.” Joel B. adds, “Application dependent, but in general, SNR (signal-to-noise ratio) and sensitivity are less important than resolving power and speed. In microscopy, for example, SNR is super critical, whereas in machine vision it's less critical; also, the lighting environment is controlled in machine vision. But resolving power and speed are super critical: resolving power to see smaller and smaller features (very important in semi and electronics, which is most of machine vision) and speed for throughput. If machine vision is roughly defined as 'computers that see to enhance the productivity of the shop floor,' then I would say that speed and resolving power are key.”

Combs: “The importance of camera features such as dynamic range, fill factor, temporal and spatial uniformity, bit depth, and sensitivity generally increases along with the need for accuracy and/or discrimination of low-contrast detail. Frame rate ultimately decreases as these image quality features improve. Many times, learning what minimum quality specifications will satisfy an application requires evaluating the camera in the real-world application.”

Mandl:  “Machine and medical vision generally have access to controlled lighting. The typical operating room in a hospital is illuminated to a level where on the order of a billion photons are available. The shot noise on just 10,000 photons will be 100 electrons; most cameras readily achieve better than this noise floor. A camera with a 100,000,000-electron well capacity can provide a true 13 bits of dynamic range with a 10,000-electron noise floor. Astronomy, on the other hand, is starved for photons when viewing remote galaxies, so scientific cameras providing a 2- or 4-electron noise floor are important. In a vision application with illumination, S/N is not important but well capacity is. Virtually all CMOS, CCD and CID cameras push S/N and low light because they have small well capacity, usually 20,000 to 50,000 electrons. This is a sales gimmick and not a technical solution, except for astronomy or clandestine surveillance. MOSAD provides low noise, but it also provides very large well capacity. A MOSAD camera can easily capture the photons in an operating room.”

Miraglia: “Foremost are resolution and imager size (which determines lensing). S/N is more important than minimum sensitivity, but both are secondary when external controlled illumination is used (almost always the case today).”

5.  What is the relevance of camera properties like gamma correction, AGC, selectable/adjustable gain, auto/manual white balance, anti-blooming to machine vision applications?

Miraglia: “A gamma of 1 is preferred in M/V, but 0.45 can also be used. AGC is normally detrimental. Adjustable gain is used in low-light situations (usually set on max). White balance use depends on the application in most cases, and anti-blooming is a critical feature, especially if near-IR light components are present in the illuminants.”

Combs: “Most vision applications benefit from some type of initial field calibration, especially if the proper lighting cannot be achieved. One-time manual gain adjustments may be used to improve contrast. White balancing will increase color accuracy. If possible, automatic adjustments should be avoided since consistent image analysis becomes more difficult. After making adjustments, be certain to record the camera settings!”

Mandl: “The truly important local camera control is anti-blooming, because if pixels bloom due to glare-caused saturation, other pixels are destroyed and information is irrecoverably lost. This is generally true regardless of the camera technology. CCDs have been the most prone to blooming in the column direction due to excess residual charge under saturation conditions. The other controls, if required, can be applied elsewhere, provided information is not lost. However, the typical CCD, CMOS or CID with small well capacity can readily saturate without some feedback gain control; MOSAD can provide sufficient well capacity to minimize or negate the need for gain control. Gamma correction does not have to be done at the camera and is most important for CMOS cameras to correct pixel nonuniformity. MOSAD is linear and very uniform, and except for detector nonuniformity, gamma correction is not needed.”

Gilblom: “Automatic things are rubber rulers and should be avoided.  Controllable things are valuable because loops can be closed around the processor and lighting.  Anti-blooming is often a crutch for badly designed lighting.  Any part of an image that is bright enough to bloom can also generate veiling glare in the optics that affects the whole image.  The exceptions are direct detectors (no optics or non-imaging situations).   Gamma correction is irrelevant - other linearization may be necessary with some sensors.  White balance is also a crutch for improper color correction matrices and poor control of lighting color temperature.”

Zarnowski: “Gamma correction is only important when viewing images on a monitor to make the system 'see' as a human does; it is not needed for image processing. AGC and gain are similar: they do nothing to improve S/N, they only bring the image amplitude into the range of a) the monitor for viewing, and possibly b) the A/D range of the frame grabber or camera A/D converter. White balance is good for color cameras to correct color but again does not really help S/N. Anti-blooming is very important if doing measurements, as blooming can distort an object's shape.”
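The display-versus-processing distinction Zarnowski draws can be illustrated in code. A small NumPy sketch (not from the roundtable; the 0.45 exponent is the figure Miraglia mentions) inverts gamma encoding so pixel values become proportional to scene luminance again before measurement:

```python
import numpy as np

def linearize(encoded, gamma=0.45):
    # Invert display-oriented gamma encoding (V = L**0.45) so pixel
    # values are again proportional to scene luminance.
    return np.clip(encoded, 0.0, 1.0) ** (1.0 / gamma)

# Half the code value corresponds to only ~21% of full luminance:
print(round(float(linearize(0.5)), 3))  # 0.214
```

A gamma of 1, as Miraglia notes, makes this step unnecessary, which is why it is preferred for measurement work.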

6.  What is the relevance of imager size and/or pixel size to machine vision applications?

“Imager size relates to sensitivity, and matters less if you have enough light. Pixel size determines resolving power, given that the lens can deliver it.”

Zarnowski: “The issue for most machine vision applications is the accuracy with which to capture the image, and for imager size this relates more to the optics. The larger the optical format, the more accurate image acquisition tends to be, as there are fewer optical aberrations. Similarly for pixel size: too small (around 4 um or less), and optic quality becomes an issue.”

Gilblom: “Dynamic range, depth of field for a particular working distance, size of optics, size of camera, cost of camera, optical constraints (resolution, back focal length, mount type).”

Combs: “The larger the pixel size, the more dynamic range a CCD sensor can offer. This results in an increase in camera performance for low-contrast applications, such as surface inspection. Additionally, using the sensor in binning mode increases the effective pixel size by combining the well depths of adjacent pixels. Resolution is not only affected by the sensor size; more importantly, it is affected by the lens.”

Mandl: “The issue of pixel size is most important. The pixel should match the optics blur circle for optimum resolution. In a single-chip multicolor camera, the combined size of the multiple colored detectors should match the blur circle. If the pixel pitch is slightly less than the optical blur circle, resolution can actually be lost. The image size (camera resolution) should always be kept to the minimum necessary for object identification. Larger arrays can be used to identify more objects, but data volume and processing throughput can become a limiter.”
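The binning mode Combs describes can be emulated in software. A NumPy sketch (on-sensor binning sums charge before readout, which this only approximates numerically):

```python
import numpy as np

def bin2x2(frame):
    # Sum each 2x2 block of pixels, combining the well contents of
    # four adjacent pixels into one larger effective pixel.
    h, w = frame.shape
    trimmed = frame[:h // 2 * 2, :w // 2 * 2]  # drop odd edge rows/cols
    return trimmed.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

raw = np.arange(16).reshape(4, 4)
print(bin2x2(raw))  # each output pixel is the sum of one 2x2 block
```

Resolution is halved in each axis, but each output pixel aggregates four wells' worth of signal, which is the dynamic-range trade-off described above.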

7.  When is exposure control required in a machine vision application?

Miraglia: “Usually when the object is moving and the use of electronic shutter is required to stop motion, assuming light strobing is not used.”

Combs: “For sensors incorporating electronic shutters, decreasing the integration time can effectively stop the motion of a moving part. A fast lens (low F-stop) and intense lighting may be required. Adjusting the exposure may also help counteract changes in lighting.”

Mandl:  “Exposure control is needed if large photon flux variations are expected that can exceed camera well capacity. If the camera dynamic range and well capacity are sufficiently large to absorb the variations then control is not needed.”

Zarnowski: “For most applications, the goal is to maximize the signal-to-noise ratio. Increased exposure can increase the signal. However, increased exposure time also increases time-varying noise within the system, such as that caused by dark current. The goal is to expose the imager to at least 80-90% of its saturation exposure at the maximum brightness within the scene, in as short an exposure as is reasonable, in order to maximize the S/N. So a combination of lighting, optics, and exposure is used to do this. If an object is moving at high speed, you want minimal exposure time to reduce motion blur, but more light is needed to maximize the signal-to-noise ratio.”

Gilblom: “Line-scan apps - When the transport speed is not constant. Area-scan apps - Rarely if the illumination is done correctly.”

8.  When is a standard analog camera appropriate? Where does one require or benefit from a progressive scan camera or a camera with asynchronous reset?

Gilblom: “Simple things like presence verification when only one field is needed and when the camera has to follow the machine.”

Mandl: “A standard analog camera provides the best tape recording density for a low dynamic range video. Progressive scan and asynchronous reset fit well with raster scan video display.”

Miraglia: “Async reset is almost always required when motion is inherent in the process. Progressive scanning gives preferred square pixels and also permits 60 fps processing. Standard analog cameras may also be used if such specific performance characteristics are not important.”

Greg C. indicates, “Low-res applications can benefit from the lower prices found with analog cameras. Whenever capturing a moving target, you have the choice of using the shutter, a strobe, or both to stop the motion. Using the shutter without a strobe to stop motion requires either using a progressive scan camera or using only half the resolution (one field) with an interlaced camera. Resetting the camera to snap off a shot when the part is in place is standard practice. However, in cases where the reaction time of the camera causes problems, the camera can free-run in a darkened environment, letting the quick reaction of a strobe stop the action.”

Bisson:  “Analog cameras for cheap, run-of-the-mill systems; progressive scan camera for stop action, short cycle times.”

9.  Does the camera influence the selection of other components -- lighting, optics, and frame grabber? How?

Mandl:  “Optics mounting, back focal length, blur circle and field of view requirements must match with the physical size of the sensor array for optimum image capture. Lighting must match well capacity and minimum noise floor. The frame grabber I/O, controls and data volume handling capability must also match the camera output.”

Combs:  “Each vision component affects how well the others can work. The system must start with a good image. Logically, choosing a camera comes first. Improving lighting, optics, capture electronics, or using post image processing after purchasing the camera might not overcome image quality issues that arise from the camera capabilities. “

Gilblom:  “Lighting - Generically  - forces good lighting practice so camera output is easy to use (shading, speculars, etc. minimized). Optics - Image and pixel size drive optics choices. Frame grabber - Interface type, Speed, channel configuration, dynamic range (bit depth), setup file availability.”

Zarnowski: “The selected camera dictates the optical format and lens mount as well as the interface.  If the camera is CameraLink or LVDS, then a frame grabber is required.  If Firewire or USB, no frame grabber is required and cost can be lower.  The required wavelength(s) of light should also be matched to the camera sensitivity or quantum efficiency.”

Bisson: “Complicated but in general, speed, resolving power (camera & lens) followed by exposure, gain (lighting) then the frame grabber if needed.”

10.  Do lighting, optics, or frame grabber influence camera selection? How?

“Lighting - spectral content and intensity both define the range of usable sensors. Optics - where special optics are needed (telecentric, for instance), the optics will define the mount, image size, resolution requirements, maybe sensitivity.”

Combs: “Selecting any vision equipment is based on application needs, features, quality, pricing, familiarity and support. Constraints, needs and prior investments concerning these criteria are what affect purchases. A reliance on software can affect the frame grabber choice, which, in turn, could affect camera choice. In the case where a particular frame grabber must be used, insurmountable compatibility issues between a frame grabber and a camera that directly reduce application performance will affect camera selection. This happens in reverse as well.”

Miraglia:  “If the optical format is already decided, then the camera has to match it in order that an appropriate field of view be obtained. The frame grabber's clock influences or limits camera selection to certain camera clock rates.”

11.  What resources are available to learn more about camera techniques and relevance to machine vision applications?

Gilblom:  “AIA articles, vendor literature, many websites, articles, SPIE sessions.”

Mandl:  “Both [AIA and SPIE] offer references, papers and symposiums covering most aspects of camera use and technology.”

Zarnowski:  “Most camera manufacturers offer a selection guide to start the process.  Also, System Integrators often have a great deal of experience working with different applications, and are often the best choice for custom systems. “

12.  Within the last year, have you introduced something new in cameras for the machine vision market? If so, please describe and give details.

Zarnowski: “PVS has released several OEM 'smart' digital imaging cameras: line scan cameras to a 30 kHz line rate with a Camera Link interface, and 1.3 MP cameras supporting Firewire and soon USB 2.0 interfaces. A large on-board FPGA can be programmed to perform on-camera real-time image processing and decision-making, effectively making the camera a stand-alone vision system in some applications. PVS is also releasing an 8.3 MP (4 times the resolution of HDTV) camera and sensor that operates at video rates.”

Miraglia: “Higher frame rate cams with partial scanning (XC-HR50/70/300) and UV cameras XC-EU50 and XCD-SX900UV.”

Gilblom noted Alternative Vision recently introduced their HanVision HVSOLO line scan cameras. This year they plan to introduce the first machine vision camera with a Foveon color sensor.

Mandl:  “Last year we introduced a small-array camera based on the MOSAD technology. The array format is 320 x 240 with pixels on 16-micron centers. Each pixel has a delta-sigma A/D converter that provides a linear response. Measurements taken at a nominal 60 Hz frame rate showed better than 11-bit uniformity and a well capacity of 6,000,000 electrons. The MOSAD format is not conventional picture-frame based as with the classic CCD, CMOS or CID sensors. The data is delta-sigma single-bit digital, similar to that used in modern CD players. Like a CD player, the data can be constructed into frame-based samples with much higher linearity and dynamic range than that of the classical techniques. Presently we are developing a 4,000 x 3,000, twelve-million-pixel single-color array with 8.5-micron pitch. This will be a single-color electronic 35 millimeter camera with windowed 1,000-frame sampling and true 10-bit dynamic range, 1,000,000-electron well capacity.”

Combs:  “Redlake has recently added the following camera technology.
1) By merging with Duncan Technologies, Redlake now incorporates 30-bit color, 3-CCDs, multiple remote heads, line scan, and custom Multi-Spectral configurations.
2) Redlake has introduced the ES 4.0/E, which combines Camera Link connectivity with high performance: 12 bits, 4.2 megapixels, 15 fps.
3) At 48 fps, the new ES 1020 embodies the fastest one-MegaPixel CCD sensor from Kodak. Its compact design incorporates a Camera Link interface to output two 10-bit channels.”

13.  What are advantages/disadvantages of analog vs. digital?

  “For machine vision, ultimately the signal is digitized. Some analog camera and frame grabber combinations can have 'jitter', which affects system accuracy, and most are limited to 60 fps or less. Interlaced cameras can provide offset fields for moving objects, and progressive scan cameras can operate at faster rates. Also important to note is that CCD-based cameras cannot effectively perform on-chip sub-frame readout, so the sub-frame rate is not faster than the full frame rate. Some CMOS-based cameras provide high-speed sub-frame rates, providing more alternatives for machine vision designers.”

Gilblom:  “Analog - Good - standard monitors for display, cheap interfaces; Bad - hard to recover timing, typical camera has automatic things that are hard to turn off, limits range of operating modes. Digital - Good - Time-stable interfaces, designers tend to think of them as measuring instruments; Bad - Can't see the images without a computer (generally).”

Miraglia:  “Impetus for digital is no need for a frame grabber.”


