Reduce Image Data Effectively: Concentrating on the Essentials
Silicon Software GmbH Posted 08/18/2017
Imaging is trending toward ever smaller devices that process ever greater amounts of data ever more quickly, driven, for example, by rising production speeds. In this article we discuss, with the aid of several examples, which procedures in the image processing chain are used to structure, compress, and ultimately reduce data. Extracting relevant information through data reduction has always been one of image processing's primary tasks; against the backdrop of today's available components, bandwidths, and memory resources, it now opens unexpected new possibilities.
Information reduction begins long before image data is stored in memory. A portion of the image data is already reduced by using appropriate lighting, optics, and camera electronics. Following image acquisition, the digital data delivered by the sensor must be corrected, for example by re-sorting the transferred sensor geometry, commonly known as "tap geometry sorting". This puts the image data into the correct sequence for further processing; the step itself, however, increases rather than reduces the data volume.
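Tap sorting depends entirely on the sensor's read-out geometry. As a minimal sketch, assume a hypothetical two-tap line-scan sensor whose first tap delivers the left half of each line in order while the second tap delivers the right half reversed ("outside-in"); re-sorting restores the natural pixel order:

```python
import numpy as np

def sort_two_tap_line(raw: np.ndarray) -> np.ndarray:
    """Re-sort a line from a hypothetical two-tap sensor that reads the
    left half left-to-right and the right half right-to-left."""
    half = raw.shape[-1] // 2
    left = raw[..., :half]               # tap 1: already in order
    right = raw[..., half:][..., ::-1]   # tap 2: reverse to restore order
    return np.concatenate([left, right], axis=-1)
```

Real tap geometries vary widely (interleaved, multi-segment, bidirectional), so production frame grabbers implement a configurable variant of this re-sorting in hardware.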
Here, before data leaves the imaging device, is where image processing's "ace in the hole" for data reduction comes into play: image preprocessing directly in the camera or sensor, on a graphics card's processors, or on frame grabbers. In this processing step, variables such as brightness, contrast, and noise are first corrected using appropriate algorithms and, in computed tomography for example, an image is reconstructed. Binarization limits the image to a few gray values (for example, a bit depth reduction from 256 to 16 grayscales); combined with the generation of an edge map and blob analysis, this removes a large portion of the incoming image information, concentrating on the essentials and easing the later image analysis by the CPU. Performing preprocessing in parallel with image acquisition greatly relieves the host PC's CPU in later processing steps.
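The bit-depth reduction and binarization described above can be sketched in a few lines; the threshold and level counts here are illustrative choices, not values from the article:

```python
import numpy as np

def reduce_bit_depth(img: np.ndarray, levels: int = 16) -> np.ndarray:
    """Quantize an 8-bit image (256 gray values) down to `levels` gray values."""
    step = 256 // levels              # 16 input values map to one output level
    return (img // step).astype(np.uint8)

def binarize(img: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Reduce further to a 1-bit mask, e.g. as input for an edge map or blob analysis."""
    return (img >= threshold).astype(np.uint8)
```

Going from 8 bits to 16 gray levels halves the payload per pixel (4 bits instead of 8); binarization cuts it to a single bit, which is why these steps pay off before the data ever crosses the camera interface.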
Further image preprocessing steps serve to identify objects in the images, analyze them, and extract significant object characteristics such as size and color, so that the object can be classified using quantitative and qualitative methods. Only the relevant image components are useful here. Following object segmentation, the objects' geometries, proportions, positions, structures, and patterns, as well as their movement, are described exactly. The geometry, for instance, is determined by methods ranging from the ratio of contour length to surface area (roundness) up to a Hough transformation, and the position by the location of a bounding box or the center of gravity on the pixel grid.
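As a small sketch of such object descriptors, the following computes area, axis-aligned bounding box, and pixel-grid center of gravity for a segmented binary mask (the dictionary keys are illustrative names, not an API from the article):

```python
import numpy as np

def describe_object(mask: np.ndarray) -> dict:
    """Simple shape descriptors for one segmented object in a binary mask:
    area, axis-aligned bounding box, and pixel-grid center of gravity."""
    ys, xs = np.nonzero(mask)  # coordinates of all object pixels
    return {
        "area": int(mask.sum()),
        "bbox": (int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max())),
        "centroid": (float(ys.mean()), float(xs.mean())),
    }
```

Only these few numbers per object, rather than the full pixel data, need to travel downstream for classification.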
Reduction Using Smart Preprocessing
Using blob analysis, contiguous pixel areas are separated from each other as individual forms and objects, as well as from the background. A large portion of the image segmentation has thus already taken place during preprocessing, even before the data is transferred to the host. Moreover, intelligent selection of image details via ROIs (regions of interest) makes it possible to disregard larger areas of the image, reducing algorithm processing time; dynamic ROIs enable, for example, barcode detection on moving packages during print image inspection in order picking. Hardware preprocessing offers the additional benefit of potentially reducing the number of computers in the total system.
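The core of blob analysis is connected-component labeling. A minimal software sketch (real frame grabbers implement this in FPGA logic, typically in a single streaming pass) using 4-connectivity and a flood fill:

```python
import numpy as np
from collections import deque

def label_blobs(binary: np.ndarray) -> tuple[np.ndarray, int]:
    """Label 4-connected foreground blobs in a binary image.
    Returns a label image and the number of blobs found."""
    labels = np.zeros(binary.shape, dtype=np.int32)
    count = 0
    for y, x in zip(*np.nonzero(binary)):
        if labels[y, x]:
            continue                      # pixel already belongs to a blob
        count += 1
        queue = deque([(y, x)])
        labels[y, x] = count
        while queue:                      # flood-fill this blob
            cy, cx = queue.popleft()
            for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                if (0 <= ny < binary.shape[0] and 0 <= nx < binary.shape[1]
                        and binary[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = count
                    queue.append((ny, nx))
    return labels, count
```

From the label image, per-blob features (area, bounding box, centroid) can be derived directly, so only those features, not the pixels, need to leave the device.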
For multi-camera applications, it is necessary to differentiate images meaningfully from one another. Applications such as High Dynamic Range (HDR) imaging and 3D laser triangulation already reduce data by sizeable amounts via preprocessing. In HDR, data reduction consists of combining several brightness values per pixel from several images into one; in laser triangulation, height information is computed from the pixel images as 3D profiles. Since only one relevant point per sensor column, the position of the laser line, is decisive rather than the complete image, the device can calculate the 3D profile data from these points and transmit only the required values. This enables a very high acquisition rate, saves processing on the host computer, and allows for cost-effective system design using, for example, a GigE interface connection.
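The laser triangulation step above reduces a full H × W sensor frame to just W values, one laser-line position per sensor column. A minimal sketch (taking the brightest pixel per column; real systems typically refine this to sub-pixel accuracy, e.g. with a center-of-gravity fit):

```python
import numpy as np

def extract_profile(frame: np.ndarray) -> np.ndarray:
    """Reduce a full sensor frame to one height value per column:
    the row index of the brightest pixel, i.e. the laser line position.
    An H x W frame collapses to W values."""
    return frame.argmax(axis=0)
```

For a 1024 × 1024 frame this is a roughly 1000-fold reduction per profile, which is what makes very high profile rates feasible over a bandwidth-limited link such as GigE.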
Simplified FPGA Programming for High Performance Algorithms
Preprocessing directly in the camera or sensor is necessary because camera interfaces such as GigE Vision can only transfer limited amounts of data. Moreover, preprocessing is mandatory for embedded systems with low processing capacity on a production line, where data such as product, quality, and process information must be evaluated in real time, prepared for further use, and the results reported. If preprocessing is carried out by high-performance, programmable frame grabbers, then computationally heavy operations such as filtering and color space conversion are accelerated, greatly relieving the host CPU. On embedded systems as well as frame grabbers, users can develop as many algorithms and applications as desired directly on the devices' FPGAs (Field Programmable Gate Arrays) via an easy-to-use graphical development platform such as VisualApplets.
The VisualApplets development environment enables implementation of application-specific image preprocessing tasks directly in the frame grabber's or camera's FPGA, sparing the complexity, time delay, and expense of VHDL programming by hardware specialists. This benefits camera and sensor manufacturers as well as end users who lack expertise in FPGA programming: they can respond to custom application demands flexibly, without having to develop a whole new camera. To program a camera's FPGA with VisualApplets, a dynamic IP core is implemented in the FPGA as a compatibility layer using VisualApplets Embedder. Following this one-time implementation, manufacturers of imaging devices can develop as many applications as desired and transfer them onto other devices, opening the development of proprietary applications to their customers. If programmable frame grabbers (such as Silicon Software's V Series) are installed, further complex preprocessing can be executed, reducing image data even further.
Efficient Processing Using Data Compression
Part of the image preprocessing chain can also include rapid image data compression. Along with reducing data volume, compression offers the advantage that further processing steps can operate on the compressed representation itself, making some image processing operators more efficient, as seen with run-length encoding (RLE) of binary images. Provided the compression is lossless, the original image can subsequently be reconstructed exactly and error-free by decompression.
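Run-length encoding of a binary image, as mentioned above, can be sketched as a pair of lossless encode/decode functions over one row of pixels:

```python
def rle_encode(bits):
    """Run-length encode a binary sequence as (value, run_length) pairs."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([b, 1])       # start a new run
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    """Losslessly reconstruct the original sequence from its runs."""
    return [v for v, n in runs for _ in range(n)]
```

Because a binarized inspection image consists of long uniform runs, the run list is typically far shorter than the pixel sequence, and operators such as blob area or horizontal extent can be computed directly on the runs without decompressing.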
By the time the preprocessed data is transferred to the host computer for final CPU processing, it will have been reduced to the point that bus bandwidth and storage space no longer pose a problem. The suitability of the individual procedures along the image processing chain is heavily application-specific, as is any further processing; the appropriate algorithms for processing the image data must be carefully selected and implemented accordingly.