Feature Articles

Frame Grabbers and Machine Vision

by Nello Zuech, Contributing Editor - AIA

If the PC is the “brains” of most machine vision systems, the frame grabber can be considered the “heart.” The Automated Imaging Association’s (AIA) glossary of terms defines a frame grabber as “a device interfaced with a camera storing in memory, on command, sampled video converted to digital signals.” In practice, however, a frame grabber takes many forms.  To gain additional insight into frame grabbers and application issues, input was canvassed from all suppliers known to provide products specifically for the machine vision market. What follows is a summary of the responses received by email from the respective interviewees, presented in a round-table discussion format.

The participants who provided input for this article:

Kris Raghavan, Benchmark Systems – user
John Keating, Cognex Corporation – supplier
Philip Colet, Coreco Imaging – supplier
Sal D’Agostino, Computer Recognition Systems – supplier
Amalia Nita, CyberOptics Semiconductor – supplier
Mike Kelley and Frank Evans, Electro Scientific Industries – supplier
Fred Turek, FSI Machine Vision/Fork Standards, Inc. – user
Fabio Perelli, Matrox Imaging – supplier
Jason Mulliner, National Instruments – supplier
Don Thomas, Sentonics – user

1. What constitutes a typical frame grabber today? What functions are found on a typical frame grabber?

Phil Colet: “A board composed of two things, a camera interface and a host computer interface, with some connection between these two sections for the exchange of data.”

Jason Mulliner: “A typical frame grabber consists of a digital or analog front-end.  The responsibility of the front-end is to take the data provided by the camera and prepare it for streaming to onboard buffers and the PCI bus.  For an analog front end, analog to digital converters take the signal and convert it to digital bits.  A digital camera has already digitized the signal so the front end must order and arrange the data in order to pass it to the back-end circuitry.  In both cases the digital back-end must coordinate buffering and streaming over the PCI bus using DMA.  Functions such as dynamic buffering, gain control, camera control, and variable camera compatibility are typical for a frame grabber.  In addition, the provisions for triggering and output of digital signals are important especially in demanding machine vision applications.” 
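Mulliner’s description of buffering and DMA streaming can be sketched in miniature. The `FrameGrabber` class below is purely illustrative — real boards expose this behavior through a vendor SDK, not a standard API; the point is only that when no onboard buffer is free, a frame is lost.

```python
# Toy model of the front-end/back-end split Mulliner describes: the front end
# fills a small pool of onboard buffers, and the back end drains them to host
# memory over the bus via DMA. All names here are hypothetical.
from collections import deque

class FrameGrabber:
    """Minimal sketch of a frame grabber's onboard buffering behavior."""
    def __init__(self, n_buffers=4):
        self.free = deque(range(n_buffers))   # empty onboard buffers
        self.filled = deque()                 # buffers awaiting DMA to host
        self.dropped = 0

    def on_camera_frame(self, frame_id):
        # Front end: digitize (analog) or reorder taps (digital), then buffer.
        if self.free:
            self.filled.append((self.free.popleft(), frame_id))
        else:
            self.dropped += 1                 # no free buffer -> frame lost

    def dma_to_host(self):
        # Back end: stream one buffered frame to host memory, recycle buffer.
        if self.filled:
            buf, frame_id = self.filled.popleft()
            self.free.append(buf)
            return frame_id
        return None

grabber = FrameGrabber(n_buffers=2)
for i in range(3):                            # burst of 3 frames, 2 buffers
    grabber.on_camera_frame(i)
print(grabber.dropped)                        # prints 1: third frame is lost
```

This is why, as Colet notes below, multimedia-class boards with no onboard memory are only suitable where an occasional lost frame does not matter.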

Fabio Perelli: “There is a video input stage (which can be analog or digital), a data formatting stage, some local video memory, and a host interface (PCI, AGP or PCI-X). In addition there is logic to handle the video synchronization and control (for example, trigger inputs, exposure control outputs).”

Frank Evans:  “Must-have functions are Grab, to capture one image, and Live/Halt, to start/stop a live image display.  Additional functions focus on setting parameters like camera number, gain, offset, etc.”

Amalia Nita: “The functions a frame grabber fulfills today include image standardization and buffering.  More modern versions of hardware contain DSP or CPU chips to offload image-processing tasks from the host CPU.  These have been given the name of image processors or image processing boards.  Combining image acquisition with image processing in a single device certainly offers significant economies.”

Sal D’Agostino: “What typically constitutes a frame grabber and what constitutes one today are two different things.  A frame grabber is clearly a device that can store a frame of information (picture) from a camera.  Slowly but surely this is becoming a core function of most computers with the continuing progress and ubiquity of digital imaging.  While today it is mostly a plug-in card, in the future it will be the PC itself.”

2. How do you segment the different types of frame grabbers that are commercially available today?

Perelli: “This segmentation goes along the lines of the video interface:”

  • standard analog (NTSC, PAL, RS-170, CCIR) video
  • non-standard analog video (progressive scan at various frequencies)
  • traditional digital video (RS-422, LVDS)
  • CameraLink digital video
  • IEEE 1394 / USB digital video
  • one or multiple switched video inputs
  • multiple synchronized video inputs
  • multiple independent video inputs

Colet:  “Multimedia frame grabbers are the least expensive frame grabbers on the market, but are also the poorest quality.  Typically these frame grabbers have no on-board memory and are used in applications where the loss of a frame is not detrimental to the overall application.  These frame grabbers offer no additional capabilities and are therefore used in very basic applications.

The second segment comprises frame grabbers targeted at industrial applications in machine vision, and at both color and monochrome medical imaging applications.  These frame grabbers offer on-board memory to ensure data integrity, and very high precision image digitization (for analog models) for maximum image quality. The best frame grabbers in this category also offer additional I/O and embedded timing circuitry that simplify the ‘trigger to image’ acquisition sequence.

The third segment is targeted at very high-end/high-speed industrial applications.  Examples might be wafer or flat panel display inspection and web inspection in machine vision, and fluoroscopy in medical imaging.  In addition to the capabilities mentioned for the second segment, these frame grabbers offer extra on-board processing to assist the host processor in the execution of the overall algorithm.”

Mike Kelley: “We group frame grabbers very simply into four categories: (a) low-end, multiplexed RS-170 grabbers; (b) high-performance multi-channel (simultaneous) analog grabbers; (c) digital grabbers based on LVDS technology (including CameraLink); and (d) specialty grabbers based on industry standards (IEEE-1394, USB 2.0, Hotlink, etc.). Physical configurations for platforms such as PCI, VME, PMC, etc. don’t matter.”

Nita: “A rather obvious way to segment frame grabbers is from the standpoint of the camera they interface with.  Thus we have analog and digital frame grabbers.  A distinction between ‘standard’ and ‘non-standard’ products is also usually made with respect to the size of the image acquired and the speed at which images are acquired by the interfacing camera.  Although it is possible to speak of standard size images in the digital world, that is almost never the case.  Thus, the ‘standard’ category includes analog, NTSC- or CCIR-compatible products, while the ‘non-standard’ category includes everything else, both analog and digital.  These seem to be the most relevant differentiators and tend to create mutually exclusive categories that are not usually reduced to only a few components.”

Kris Raghavan provided the following breakdown:

“(A) Dumb frame grabbers - basic types with no special features.

(B) Frame grabbers with on-board display - the on-board VGA helps free up one slot.

(C) Analog machine vision frame grabbers - grabber cards that can handle the following features:

  • Asynchronous reset
  • Programmable sync generator
  • Simultaneous grab from identical or non-identical inputs
  • Able to generate or accept triggers
  • Able to sync to progressive scan, high-resolution cameras

(D) Digital machine vision frame grabbers - there are different digital input standards, such as RS-422, LVDS, CameraLink and FireWire (IEEE 1394), and each has different requirements, so typically there are different models for different standards. These boards typically have all the features of (C) above, except for analog acquisition.

(E) Image processors - required for high-end SEM, image analysis and wafer inspection applications where PC processing is not sufficient. These have on-board DSPs to do the image processing, and typically have all the features available in (C) or (D) above.”

Mulliner suggests segmentation is based on “image acquisition devices...and by interface such as analog, parallel digital, and CameraLink.  Also, in the analog world there are distinctions made between single channel boards and multi-channel boards.”

D’Agostino: “Frame grabbers segment by camera type: standard analog (RS-170, CCIR, NTSC, PAL); megapixel (now a catch-all category for cameras with resolution greater than standard video); and linescan (often defined by camera clock rates that now exceed 40 MHz).”

Don Thomas segments “by connectivity (camera formats supported), on-board frame storage, speed, hardware platform (PC or other), OS support and application software that is offered with the hardware.”

3. When integrating frame grabbers and other imaging board-level products, what are the “gotchas”? How does one handle those “gotchas”?

While virtually all suggested the cabling, Perelli provided an interesting list:

  • Analog digitization quality
  • tolerance to out-of-spec. analog video signals
  • channel switching speed between analog inputs
  • automatic gain control (AGC) deactivation for analog capture
  • analog filtering characteristics
  • amount of onboard buffering (affects reliability of transfer to host memory)
  • image reconstruction from multi-tap cameras
  • mating connectors / cable availability

Perelli: “In order to avoid some of these issues, one needs to carefully review specifications and evaluate the product to confirm it meets the application’s specifications.”
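Perelli’s point about onboard buffering can be made concrete with a rough sizing calculation: if the camera’s burst rate exceeds what the bus can sustain, onboard memory must absorb the difference or frames will be lost. The function and figures below are illustrative assumptions, not drawn from any particular board.

```python
# Rough sizing of onboard memory needed to survive a camera burst when the
# bus cannot keep up. Figures are illustrative, not vendor specifications.
def buffer_needed_mb(frame_mb, fps, bus_mb_s, burst_frames):
    """Onboard memory (MB) needed to ride out a burst without dropping."""
    camera_rate = frame_mb * fps                  # MB/s produced by camera
    deficit = max(0.0, camera_rate - bus_mb_s)    # MB/s that must be buffered
    burst_seconds = burst_frames / fps            # duration of the burst
    return deficit * burst_seconds

# Example: a 1k x 1k, 8-bit camera at 60 fps over a bus sustaining 40 MB/s
frame_mb = 1024 * 1024 / 1e6                      # ~1.05 MB per frame
print(round(buffer_needed_mb(frame_mb, 60, 40.0, 10), 2))  # prints 3.82
```

If the sustained bus rate exceeds the camera rate, the function returns zero and a small double-buffer suffices.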

Thomas: “Software is usually the biggest issue. Getting multiple boards in a single PC to run cleanly can be an issue. Resolving this can be time consuming if you get stuck with low-level support.” Fred Turek, also speaking as a user, suggests, “The main gotcha is the equipment not working.  The fundamental causes are over-complexity due to bloating with unneeded features, being poorly disciplined or overly casual with version control, weak or non-product-specific documentation, and an overly narrow approach to the scope of support, considering that the frame grabber’s primary job is to successfully interface with other equipment.”

Evans:  “Many grabber designs are for ‘normal video’ and do not tolerate the rapid changes in intensity, frame to frame and pixel to pixel, that are encountered in high-speed automation designs.  Make sure that the analog front end can respond to changes rapidly without leaving artifacts.  When changing from one camera to another, the driver software supplied with low-end grabbers may take several frames to switch the multiplexer and re-acquire the image.  This can create slow systems.  Finally, all software drivers are not created equal.  The driver and how it is implemented can make or break a high-speed application.”

Colet: “The way to avoid interfacing ‘gotchas’ is to look for a frame grabber with the greatest amount of integrated capability.  Does the frame grabber come with extra I/O, with a trigger input, with separate horizontal and vertical sync inputs, with a strobe output?  Will all of the control be through the frame grabber (and therefore designed by a company that understands the world of industrial frame grabbers), or will you be forced to integrate cards from several different manufacturers?  On the host side, the biggest question will be the software development toolkit.  Does the company you are dealing with offer a well-rounded API to control the card?  Are they actively supporting the API?  Is it well documented?  Is an upgrade to higher-level processing libraries available?”

John Keating:  “When the frame grabber and software are provided by the same manufacturer, there are very few ‘gotchas’.  With good QA for software and hardware, integration issues with operating systems can be found far in advance of shipments to users.  With the acceptance of high-speed, large-format cameras, there can be the gotcha of overwhelming the PCI bus with too much data.  As with most application gotchas, this can be caught well in advance by working closely with the machine vision vendor so they design a complete system.”

Raghavan: “Compatibility is a big issue. Not all frame grabbers work with all PCs.  Make sure you test your PC to confirm that it works before you standardize on it. But again, motherboard models become obsolete so fast that this process has to be an ongoing one.”

Fred Turek: “Simplicity, a tight and non-casual approach to version control, complete product-specific documentation and tools.  Ruggedness and reliability.  Product design and readily available technical support which acknowledge successful interfacing to cameras and PC electronics as being the mission of the frame grabber.”

D’Agostino: “Gotchas, so to speak, can exist from incompatibility across motherboards, primarily from BIOS differences, OS driver compatibility, registry issues, and unique combinations which are noise (very low volume) in the PC world.  This means that vision vendors are the ones that will have to do the due diligence of making this work, rather than expecting vendors to have vetted every permutation of equipment that is out there.”

4. What are the important features of boards specifically for machine vision applications?

Evans: “High-speed multiplexing or switching for camera-to-camera changes.  Driver software that can stream the data and make it available for processing at the earliest possible time (low latency).  Support for sending the data directly to the display system (for live display) without using the CPU to do the copy.”

Nita: “Frame grabbers for machine vision applications should accomplish image standardization and buffering with little hassle for the type of cameras they are designed to work with.  Ease of interfacing the frame grabber with a camera (via readily available cables and software configuration utilities) becomes more of a feature than an advantage because of the economies in development time.  Since a vision system has a software component, any assistance the manufacturer provides with software development is a worthy feature to consider.  Also, the board’s ability to offload time-intensive tasks from the host system is a useful feature, especially in manufacturing environments where computer horsepower is or becomes limited.”

Thomas: “Supported vision tools, processing power (speed), camera support (connectivity), and extensibility.”

Raghavan: “Most important features are: (A) programmable acquisition or sync generation and (B) async reset or restart/reset.”

Mulliner: “The ability to control cameras is crucial.  Typical applications require dynamic and programmatic changing of camera parameters.  Being able to do this through a simple cable and through the same API where image acquisition is occurring is convenient.  Also, knowing that the image acquisition device can support the range of the camera is important; as a user you may want to also consider future imaging needs to make sure the image acquisition device will be able to scale to your next solution.”

Perelli summarizes:

  • Quality of video acquisition
  • determinism of video capture to host memory
  • full complement of synchronization and control signals (for example, trigger inputs, exposure control outputs)
  • high MTBF (mean time between failures)
  • accessibility of support
  • easy-to-use SDK that still maximizes board control
  • interoperability with other vision components

Colet: “High-quality digitization, and a front end which is flexible and will interface to a variety of cameras.  Is there more than one input?  Are there complete synchronization inputs, and timing inputs and outputs for controlling external devices?  You may want to look for a frame grabber that offers through-the-board camera power supply (for lower system cost and easier cabling).  Many cameras these days are controlled via an RS-232 port through a camera vendor’s software API.  Does the frame grabber have a serial port? Is it mapped to a system COM port?  Can the camera be controlled through the port? With the camera vendor’s API?  Cabling is also an important point to consider: is the cable interface secure?”

5. What kind of errors does a frame grabber introduce? Are they the same for frame grabbers that operate with digital and analog inputs?

Nita: “The jitter specification that comes with any analog frame grabber is an indication of an error introduced by the user’s desire to synchronize to analog devices.  A larger jitter specification is indicative of larger errors introduced by the frame grabber as it captures the images acquired by the camera and delivers them as faithfully and as close to real time as possible.  The jitter specification is a measure of the delay with which the frame grabber can synchronize to the camera’s output.  If the board itself introduces delays in standardizing the image, we can only expect another compounded delay introduced by the PCI bus (usually busy doing other transfers of information).  A digital frame grabber operates with no jitter, as does an analog board with a pixel clock input. However, a digital board has its own collection of ailments as soon as one attempts to build a cable.  The lack of standardization among digital cameras has led to numerous ‘noisy’ implementations of camera interfaces, which created the window for concepts such as FireWire and CameraLink.  Imagine soldering a line to the wrong pin and then spending a few days troubleshooting the problem.  It’s happened, and it’s happening still.”

Perelli: “Pixel jitter (analog), reduction of dynamic range (analog), aliasing effects (analog), and clock-to-pixel alignment (analog and digital).”

Mulliner: “Digital frame grabbers will not introduce any errors to the data coming from the camera. Digital solutions are very robust in this case, but of course they tend to be more expensive.  Analog solutions are a different story. For best image quality, it is important that the analog circuitry on the front end of the frame grabber be well designed. A poorly designed board can certainly affect the quality of your image. One way to ensure quality is to buy a frame grabber that is factory calibrated before it is shipped. This is especially important when deploying multiple systems, because it guarantees the frame grabbers in all of your systems will output high-quality and, just as important, consistent images for all systems. This will prevent the integrator from having to ‘tune’ the processing for each deployed system based on variation in frame grabber output.”

Colet: “Let’s start with the analog frame grabbers first.  There are two types of sampling errors introduced by analog frame grabbers: timing errors and digitization errors.  The former are introduced by variations in the pixel clock compared to the sampling in the camera, and the latter are introduced by the frame grabber’s analog-to-digital converter (ADC).

Are these errors present with digital frame grabbers?  Generally not, since digital frame grabbers receive a pixel clock frequency from the digital camera (therefore no pixel jitter), and digital frame grabbers have no ADC by definition.

Does this mean that digital frame grabbers are more accurate than analog frame grabbers?  Not necessarily.  The process of digitizing the analog signal (even a digital camera signal starts out as analog from the sensor) occurs in the camera and not the frame grabber.  Digitizing at the camera end avoids noise induction in the cable and, to a large part, pixel jitter. What customers gain in quality, however, they may also lose in flexibility, since analog frame grabbers generally allow more signal manipulation than their digital counterparts.”
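The magnitude of the timing error Colet describes can be estimated with a back-of-envelope calculation. The clock and jitter figures below are illustrative (roughly RS-170-class square-pixel sampling), not quoted from any vendor.

```python
# Back-of-envelope estimate of how much of a pixel a few nanoseconds of
# sampling jitter represents for an RS-170-class analog camera.
pixel_clock_hz = 12.27e6                      # ~square-pixel RS-170 sampling
pixel_period_ns = 1e9 / pixel_clock_hz        # ~81.5 ns per pixel
jitter_ns = 5.0                               # assumed grabber sampling jitter
print(round(jitter_ns / pixel_period_ns, 3))  # prints 0.061 (of a pixel)
```

A displacement of a few percent of a pixel is negligible for blob counting but can matter for sub-pixel gauging, which is one reason pixel-clock inputs and digital interfaces are attractive.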

6. What are the benefits of 10 bits vs. 8 bits?

Mulliner: “Put simply, 10 bits offers more digitizing resolution.  Essentially, you are able to extract and resolve more information from your camera on a pixel level.  There are many cameras which have the dynamic range to fully utilize the 10-bit capabilities of image acquisition devices.”

Perelli: “Ten bits provides greater dynamic range for better image contrast.”

Colet: “No system is better than its weakest component.  So if the camera and cable are generating less than 8 bits of data, digitizing to 10 bits gets you two bits of noise. But let’s assume that the camera and cable generate images with better than ten bits of fidelity: are there classes of applications requiring ten-bit digitization?  Certainly, and these are mostly surface inspection applications.  When inspecting the surface of very flat (in gray space) objects such as metal, paper, or processed metal such as lead frames or deposited metal substrates, the added 2 bits of accuracy can make the difference between rejection and acceptance.”
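The arithmetic behind the comparison is simple: two extra bits quadruple the number of gray levels, which corresponds to roughly 12 dB of additional ideal quantization range.

```python
# Gray levels and ideal quantization dynamic range for 8-bit vs 10-bit ADCs.
import math

def levels(bits):
    return 2 ** bits

def dynamic_range_db(bits):
    # 20*log10(2^bits): dynamic range of an ideal ADC, ignoring real noise
    return 20 * math.log10(2 ** bits)

print(levels(8), levels(10))                                 # 256 1024
print(round(dynamic_range_db(10) - dynamic_range_db(8), 1))  # 12.0 dB
```

As Colet cautions, this gain is only realized when the camera and cabling actually deliver better-than-8-bit fidelity to the digitizer.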

7. What are present bus interface standards? What are the pros and cons of each?

Keating: “PCI is still the most available and cost-effective bus interface standard today.  The standard 32-bit/33 MHz bus, however, can be overwhelmed with the amount of data coming from high-speed analog and digital cameras.  The 64-bit/66 MHz bus provides plenty of bandwidth for the majority of high-speed applications.  As the 64/66 bus becomes more widely available, it should become more affordable.  The other two high-speed buses, PCI-X and PCI Express, are finding their market niches.  PCI Express is becoming more accepted, and there is a very good chance that it will start to enter the mid-range PC market within a couple of years.  The PCI-X bus, while high-speed, seems to be targeted at the server market, and may not become a highly used bus for machine vision. The VME and cPCI buses are reliable and proven interfaces, but are not nearly as popular as the PCI interfaces.  They have speed limitations as well, and tend to be more expensive solutions.”

Perelli: “Each has pros and cons.  For PCI, the pros are: ubiquitous and relatively inexpensive.  The con: limited to 132 MB/s peak.  For AGP, the pro is higher bandwidth (up to 1 GB/s with AGP 4X).  The cons: limited to one slot per system, requires an integrated graphics controller, and not present in industrial PCs.  For PCI-X, the pro is higher bandwidth (up to 1 GB/s).  The cons: not yet ubiquitous and still relatively expensive.”

Colet: “In the world of Wintel-dominated frame grabbers, there are four common bus architectures:

  • PCI-32 - 32 bits, 33 MHz; maximum theoretical throughput is 132 Mbytes/sec
  • AGP 4x - 32 bits, 66 MHz with 4x data transfers; maximum theoretical throughput is about 1 Gbyte/sec
  • PCI-64 - 64 bits, 66 MHz; maximum theoretical throughput is 528 Mbytes/sec
  • PCI-X - 64 bits, 133 MHz; maximum theoretical throughput is about 1 Gbyte/sec

PCI-32 is a relatively mature and commonly available bus architecture. Given that it has been on the market for seven years, this standard lives up to its promise.  For most applications PCI-32 is sufficient as an interface standard.  Where it falls apart is when multiple frame grabbers are being used, each requiring a time slice of PCI bus bandwidth.

The AGP bus standard was originally designed exclusively for VGA cards, but manufacturers such as Coreco Imaging have adopted this standard for frame grabbing applications as well.  The largest advantage is a much higher bus speed and independence from other buses in the system.

PCI-64 is currently the leading edge bus standard for high-speed applications.  With four times the bandwidth of the PCI-32 architecture, this bus standard avoids some of the limitations of its younger cousin.

PCI-X, while it offers very high performance, is a very new technology.  To date there are very few systems available, and they tend to be expensive. Additionally, this technology has yet to be proven in industrial applications.”
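Colet’s throughput figures invite a quick feasibility check: does a given camera’s data rate fit a given bus? The helper below uses the theoretical peaks from the text; the 50% headroom rule of thumb is an assumption of this sketch, since sustained PCI rates fall well short of the peak.

```python
# Does a camera's raw data rate fit on a given bus? Peak figures are the
# theoretical maxima quoted in the text; the headroom factor is an assumed
# rule of thumb, not a figure from the article.
BUS_PEAK_MB_S = {"PCI-32": 132, "PCI-64": 528, "PCI-X": 1000}

def fits(width_px, height_px, bytes_per_px, fps, bus, headroom=0.5):
    """True if the camera's data rate stays under `headroom` of the bus peak."""
    rate_mb_s = width_px * height_px * bytes_per_px * fps / 1e6
    return rate_mb_s <= headroom * BUS_PEAK_MB_S[bus]

print(fits(640, 480, 1, 30, "PCI-32"))    # standard video: prints True
print(fits(1280, 1024, 1, 60, "PCI-32"))  # fast megapixel camera: prints False
```

This is the arithmetic behind Keating’s earlier warning that a single high-speed camera, let alone several, can overwhelm the 32-bit/33 MHz bus.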

8. What are the benefits of CameraLink over other digital camera standards?

Colet: “The biggest benefits of the CameraLink standard over RS-422 and LVDS are a common cable design and the capability to accommodate very high bandwidth cameras.  Pre-CameraLink, camera and board vendors had their own connectors, pinouts and cable designs. By standardizing everything with CameraLink, customers and vendors alike are spending more time on applications instead of cabling.”

Perelli: “Available bandwidth, standard connector and cable, wide industry acceptance (large selection of cameras), support for both frame and line scan cameras, and room to grow.”

Mulliner: “Until recently, parallel digital cameras were the only type of digital interface available. However, parallel digital cameras have no clear physical or protocol standards, and interfacing to digital acquisition devices can be challenging. Parallel digital cameras often require custom cables to connect with image acquisition boards. Also, you must be certain that your camera is compatible with your image acquisition device. Fortunately, a large base of parallel cameras exists on the market for almost any imaging application. CameraLink is an interface specification for cables that connect digital cameras to image acquisition boards. It preserves the benefits of digital cameras, such as flexibility for many types of sensors, yet it has only a small connector and one or two identical cables, which work with all CameraLink image acquisition devices. CameraLink greatly simplifies cabling, which can be a complex task when working with standard digital cameras.”

Keating: “To look at the benefits of any of the standards, you have to look at what market each is intended for. For applications with high-speed, large-format acquisition, CameraLink is a very attractive digital format. CameraLink has the benefit of a standard cable, which is lower cost and more readily available than the very expensive and large custom camera cables. It allows one frame grabber to handle both line scan and area scan cameras, which gives the user a lot of acquisition flexibility. However, the size of the connector limits the practical size of the cameras as well as the number of cameras per frame grabber, and the cable length limit can be an issue for some applications.”
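The bandwidth the respondents cite is easy to quantify: a CameraLink Base configuration carries 24 data bits per pixel clock at up to 85 MHz.

```python
# Peak throughput of a CameraLink Base configuration: 24 data bits per clock
# at up to an 85 MHz pixel clock.
def cameralink_base_mb_s(clock_mhz=85, bits_per_clock=24):
    return clock_mhz * 1e6 * bits_per_clock / 8 / 1e6  # MB/s

print(int(cameralink_base_mb_s()))  # prints 255
```

At 255 MB/s even the Base configuration comfortably exceeds older parallel digital links, and Medium and Full configurations add further data channels on a second cable.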

9. What are the pros and cons of host-based processing, including the elimination of the frame grabber enabled by FireWire cameras?

Raghavan:  “The pros: the Intel processor is powerful, a lot of image processing can be done on the MMX core, and the cost is lower. The cons: no real-time OS is available today other than Windows CE, which could be a limitation for many vision applications; where multiple high-resolution cameras or line scan cameras are used, PC processing may not be sufficient; and depending on the application, cycle time and throughput, host-based processing also loses out because of issues such as OS latency, PCI bus latency, IRQ allocation and other similar integration issues.

FireWire is a good standard but is not going to replace other existing ones. In some cases, customers can get rid of a frame grabber and use FireWire. But FireWire support for imaging is not available straight from the OS.”

Thomas: “The OS can be your friend or kill you with host-based processing. Image acquisition and processing speed limitations are also an issue.”

Mulliner: “Speed of processing is the most obvious advantage of host-based vision processing.  The speed scales with the PC processor speed, as well as with the number of CPUs in the system.  Embedded processors tend to lag the state of the art, and often a user can have a more cost-effective solution by dedicating a PC to the machine vision application.  While there is a speed advantage to host-based processing, it is much more difficult to make a host-based system deterministic. Although FireWire appears not to require a frame grabber, many PCs do not come standard with FireWire, and an interface card of some type is usually needed.  Now that USB 2.0 is available on most PCs, it appears to be a likely candidate for frame-grabber-less acquisition.

Being able to leverage the latest PC processor is a definite advantage, especially for imaging applications which require a large amount of processing time. PC technology is continually improving, with speed increases every six months.  Software is also improving, with new revisions of the most popular software typically released on an annual basis.”

Perelli: “The pros: less expensive (little or no additional hardware required), based on a standard general computing platform, and the easiest to program.  The cons: it can monopolize key system resources (CPU, memory, bus), performance may not be sufficient, and the standard video interfaces are not fully suited to machine vision.”

Colet: “With the advent of the PCI bus and faster host processors, more and more image processing functions have been migrated away from proprietary hardware.”


