

Embedded GenICam moves closer

Silicon Software GmbH

At its latest meeting in Hiroshima, the IVSM (International Vision Standards Meeting) Committee specified more precisely how the existing GenICam (GENeric Interface for CAMeras) standard will be further developed for embedded Vision, on the path to a global embedded GenICam standard for uniform data exchange with embedded image processing devices. Various subsections of GenICam are being enhanced to meet the new demands of embedded technologies.

GenICam operates on the principle that the software accessing devices via the API is always the same, independent of the device’s camera interface. For that reason, the goal of the planned embedded GenICam standard is to ensure standardized software-side access to embedded image processing devices such as cameras and vision sensors – identifying and interpreting the entire assembly – to ensure their parametrization, and to consistently describe their complex data outputs, such as streams and data formats, together with all processing results.
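As a minimal illustration of this principle, the following C++ sketch uses the GenApi reference implementation; the XML file name and the feature value are placeholders, error handling is omitted, and a real application would additionally attach the device’s register port before accessing register-backed features.

    #include <GenApi/GenApi.h>
    #include <iostream>

    int main()
    {
        // The same node map code works regardless of whether the device is
        // attached via GigE Vision, USB3 Vision, Camera Link, or an embedded bus.
        GenApi::CNodeMapRef nodemap;
        nodemap._LoadXMLFromFile("device_description.xml"); // placeholder file name

        // Read and write features through the standardized interface.
        GenApi::CIntegerPtr width = nodemap._GetNode("Width");
        if (GenApi::IsWritable(width->GetAccessMode()))
            width->SetValue(1024); // placeholder value

        std::cout << "Width: " << width->GetValue() << std::endl;
        return 0;
    }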

Currently, owing to the complexity of embedded image processing devices, the situation is very heterogeneous: the quantity, positioning, and quality of the processing modules (processing units), based on FPGAs, SoCs, and other processors, are diverse. In some cases, the modules are mounted in the camera head; in others, in the device. The data formats of the camera sensor and of the output stream often differ, and when several cameras co-exist, various data formats may need to be consolidated. The data transmitted from the processing modules are complex and, alongside image data, also include events, signals, and metadata such as object lists of contours, histogram results, segmentation data, or classified object data. The XMLs from various manufacturers’ processing modules represent another level of complexity and a significant challenge. The individual working groups within the IVSM Committee have worked to overcome this variety of processing modules, data formats, and XMLs.

[Figure: Data processing example]

Describing Input and Output Data

As an example, to identify processing modules for individual image preprocessing steps, or to address and configure an FPGA’s functions, a uniform description of inputs and outputs is necessary, i.e., of capacities as well as input and output formats for data transport. The concept of custom processing modules makes these individually configurable, so that, for example, an input image in the RGB color space can be output as a binary image (sketched in code below). This is implemented by connecting and synchronizing the processing modules using GenICam-compliant descriptions of input and output data for all camera interfaces. GenICam SFNC (Standard Feature Naming Convention) already specifies how the input of several images or regions of interest (ROI) from one or more sensors is handled. How, then, should multiple output data be handled?
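Picking up the RGB-to-binary example, the following sketch shows how such a custom processing module might be parametrized through GenAPI; ProcessingModuleSelector, ModuleInputPixelFormat, and ModuleOutputPixelFormat are hypothetical feature names, assumed here because the embedded extensions are still being specified.

    #include <GenApi/GenApi.h>

    // Hypothetical sketch: select a custom processing module and describe its
    // input and output formats. The feature names are assumptions, not (yet)
    // defined in SFNC.
    void configureBinarization(GenApi::CNodeMapRef& nodemap)
    {
        GenApi::CEnumerationPtr module = nodemap._GetNode("ProcessingModuleSelector");
        module->FromString("Binarization"); // assumed module name

        GenApi::CEnumerationPtr input  = nodemap._GetNode("ModuleInputPixelFormat");
        GenApi::CEnumerationPtr output = nodemap._GetNode("ModuleOutputPixelFormat");
        input->FromString("RGB8");    // RGB color space in ...
        output->FromString("Mono1p"); // ... packed 1-bit binary image out
    }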

For output data, which can consist of raw data, preprocessing data, result data, and metadata, combinable data structures are needed that can be dynamically altered within certain boundaries. One implementation concept for metadata is the abstract data format known as chunk data: information carried in addition to the image data, for example bounding boxes, described by position coordinates together with width and height, which are bundled into dynamic data structures as object lists. The chunk data and/or object lists must be described in XML in such a way that they can be interpreted using GenAPI. The chunk concept can also be used for transferring metadata, and preprocessing and result data are, like metadata, described as chunk data in XML as well. Data from processing modules that are described in SFNC can then be interpreted correctly. XML descriptions of output data are to be integrated into GenDC (formerly GenSP). With this method, processing of the input image data can be switched on and off.
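To make the object-list idea concrete, the sketch below shows one conceivable in-memory shape for a bounding-box list carried as chunk data; the struct layout and field names are illustrative assumptions, since the actual layout would be defined by the chunk XML description and interpreted generically through GenAPI rather than hard-coded.

    #include <cstdint>
    #include <vector>

    // Illustrative assumption: one possible layout for a bounding-box
    // object list transported as chunk data alongside the image.
    struct BoundingBox {
        uint32_t x;      // position coordinates of the object
        uint32_t y;
        uint32_t width;  // extent of the bounding box
        uint32_t height;
    };

    struct ObjectListChunk {
        uint32_t objectCount;           // number of entries that follow
        std::vector<BoundingBox> boxes; // dynamically sized object list
    };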

A further challenge lies in aggregating the XML descriptions of image formats, various processing modules, or image processing devices into a single addressable processing module. In this fashion, the embedded camera’s preprocessing or the FPGA functions should be interpretable as one unit. If, for example, the resolution is changed, this must be communicated to the downstream processing modules. To implement such dependencies, and to resolve the conflicts that may result from them, the XMLs will be unified: the XML parameters are set only once, using XSLT (XSL Transformations), in the XML. The resulting unified XML is incorporated into the GenICam repository as a tool and made available. This is to be implemented by the next IVSM meeting.
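The resolution dependency can be pictured with a small observer-style sketch; all class and function names here are illustrative assumptions about the behavior the unified XML is meant to encode, not an API of the standard.

    #include <functional>
    #include <iostream>
    #include <string>
    #include <utility>
    #include <vector>

    // Illustrative only: a module that republishes resolution changes to its
    // downstream modules, mimicking the dependency the unified XML must express.
    class ProcessingModule {
    public:
        explicit ProcessingModule(std::string name) : name_(std::move(name)) {}

        // Subscribe this module to resolution changes of an upstream module.
        void chainAfter(ProcessingModule& upstream) {
            upstream.listeners_.push_back(
                [this](int w, int h) { onUpstreamResolution(w, h); });
        }

        void setResolution(int w, int h) {
            width_ = w;
            height_ = h;
            for (auto& notify : listeners_)
                notify(w, h); // propagate the change downstream
        }

    private:
        void onUpstreamResolution(int w, int h) {
            std::cout << name_ << ": adapting to " << w << "x" << h << "\n";
            setResolution(w, h); // cascade further down the chain
        }

        std::string name_;
        int width_ = 0, height_ = 0;
        std::vector<std::function<void(int, int)>> listeners_;
    };

    int main()
    {
        ProcessingModule sensor("Sensor"), filter("Filter"), output("Output");
        filter.chainAfter(sensor);
        output.chainAfter(filter);
        sensor.setResolution(1920, 1080); // Filter and Output both adapt
    }

In the standard itself, such dependencies are to be captured declaratively in the unified XML rather than in application code as above.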

[Figure: XML description and parameter graphic]

Dynamic Data Structures with Consistent Descriptions

Using a concept similar to chunk data, dynamic data structures can be implemented. Now that the concepts of processing modules and unified XML are available, the next step is adoption and implementation. With this, flexible processing of system description parameters using GenAPI will be guaranteed. With a few adaptations, the GenICam standard can thus be modified for embedded Vision.

For manufacturers and users, an expanded GenICam standard for embedded Vision devices enables faster launches and, when needed, simpler parametrization of devices for which they can guarantee a certain level of performance. Since devices will be described according to the standard, better differentiation from competitors’ offerings is possible. New camera interfaces such as NBASE-T and 10 GigE are then directly compatible. Standardized access to image processing devices based on consistent descriptions opens up easier access to markets such as automation, robotics, transportation, and medical technology for image processing.
