

Software-Side Component Harmonization – Planned Standard Interface for Embedded Vision

Silicon Software GmbH

Embedded image processing systems require clear communication standards between hardware and software, as well as for connections to processing systems in production and automation. The Embedded Vision Study Group (EVSG) is addressing the development of these standards and the corresponding interfaces in three working groups, which will direct their recommendations to the G3 Future Standards Forum. GenICam and OPC UA specifications for embedded vision are the first methods of choice. This article presents several current developments in the software-side connection of components.

Embedded vision systems are complex, miniaturized devices built from components of different manufacturers. These heterogeneous systems still use incompatible data formats, which impedes data exchange within an image processing system. The additional use of sensor groups (combinations of various sensor sources) generates yet more specialized sensor data that must be integrated. Rather than developing complex individual interfaces, the goal is to standardize communication so that compatible components exchange consistent data, and to equip embedded systems with more intelligence, guaranteeing their suitability for industrial applications and their efficient use.

In the industrial arena, embedded vision systems are cyber-physical components consisting of FPGAs (field-programmable gate arrays), systems on chips (SoCs) with special processors, real-time-capable microcontrollers, and highly specialized memory units as well as multicore architectures, and they are commonly equipped with intelligent algorithms. These intelligent yet complex systems increasingly communicate autonomously as decentralized network units, in some cases already delivering fully prepared result data. Defined communication and interface standards are necessary to (a) process this data for controlling automated manufacturing and (b) use it for strategic planning.

The EVSG has set the integration of this heterogeneous system architecture as its goal, presenting its report to the G3 Future Standards Forum (FSF) in the summer of 2015. The report identified three areas of technology as Standard Candidates (SCs), for each of which a working group will develop interface standards: modular construction with sensor boards and processor units/systems on a chip (SoCs) and their compatibility (SC1); the software model (API) for communicating with embedded components and controlling them (SC2); and their integration into automation or processing environments (SC3). For SC1, the EVSG had previously made no recommendation. For SC3, the group recommended introducing GenICam into a new, yet-to-be-created OPC UA companion specification for machine vision.

At the 2016 AUTOMATICA event, the VDMA Machine Vision Group and the OPC Foundation signed a memorandum of understanding to formulate an OPC UA Machine Vision Companion Specification.

Evaluation of Software Connections

In the area of the software API (SC2), an optimized interplay of electronic hardware and intelligent software is the prerequisite for using image processing systems, since embedded vision systems exhibit particular characteristics. They consist of an arbitrary combination of components such as FPGAs, ARM CPUs and GPUs (called processor modules, each representing a complete measurement program for image conversion or a single image operator such as a filter) that preprocess images internally. As a result, inconsistent data formats arise: raw images, centers of gravity (Vector2), labels (string), timestamps (date), events and encrypted data, to name a few examples. Image preprocessing can take place in several steps, in which case the processor modules' communication must be precisely aligned. For that reason, the modules require a uniform description of their inputs and outputs and, moreover, must be easy to recognize, address, and configure. In addition to image data, other data formats such as objects, blobs and complex results can arise. This variety of data requires expanded generic description models of data formats and structures as well as their semantic information, as sketched below.
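
To make this variety concrete, here is a minimal, self-contained C++ sketch of what a generic, semantically tagged result datum could look like. All names (ResultDatum, Vector2, RawImage, the unit field) are illustrative assumptions, not identifiers from the GenICam standard.

```cpp
#include <cstdint>
#include <iostream>
#include <string>
#include <variant>
#include <vector>

// Hypothetical sketch: one generic "result datum" that can carry the
// heterogeneous outputs named above (raw image, Vector2 center of gravity,
// string label) tagged with semantic information such as the unit of
// measurement. All type and field names are invented for illustration.
struct Vector2 { double x, y; };

struct RawImage {
    uint32_t width, height;
    std::vector<uint8_t> pixels;   // e.g. an 8-bit mono payload
};

struct ResultDatum {
    std::string name;              // e.g. "CenterOfGravity"
    std::string unit;              // semantic info, e.g. "pixel", "mm"
    uint64_t timestamp_ns;         // acquisition/processing timestamp
    std::variant<RawImage, Vector2, std::string> value;
};

int main() {
    ResultDatum cog{"CenterOfGravity", "pixel", 1620000000000ULL,
                    Vector2{321.5, 240.25}};
    if (auto* v = std::get_if<Vector2>(&cog.value))
        std::cout << cog.name << " = (" << v->x << ", " << v->y << ") "
                  << cog.unit << "\n";
}
```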

Thus, the description, parameterization, control, and synchronization of the entire system stand front and center in any solution to be evaluated. A further aspect concerns the security mechanisms that must be addressed, such as data encryption and IP protection. An expanded GenICam standard for the requirements of embedded systems should harmonize this diversity of components and data, with an emphasis on supporting generic data formats and processor modules. A sketch of a module interface covering these concerns follows below.
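
As an illustration of those four concerns (description, parameterization, control, synchronization), the following C++ sketch shows one possible shape of a uniform processor-module interface, with a trivial threshold filter as an example module. The interface and every name in it are assumptions for illustration, not the interface the working group is specifying.

```cpp
#include <cstdint>
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Hypothetical uniform interface for processor modules. Each virtual
// method maps to one of the four concerns named in the text.
class ProcessorModule {
public:
    virtual ~ProcessorModule() = default;
    virtual std::string describe() const = 0;                  // description
    virtual void setParameter(const std::string& key,
                              double value) = 0;               // parameterization
    virtual std::vector<uint8_t> process(
        const std::vector<uint8_t>& in) = 0;                   // control
    virtual uint64_t lastTimestamp() const = 0;                // synchronization
};

// Example module: a trivial threshold filter on 8-bit image data.
class ThresholdFilter : public ProcessorModule {
    std::map<std::string, double> params_{{"Threshold", 128.0}};
    uint64_t ts_ = 0;
public:
    std::string describe() const override {
        return "ThresholdFilter: in=Mono8, out=Mono8, param Threshold [0..255]";
    }
    void setParameter(const std::string& key, double value) override {
        params_.at(key) = value;   // throws on unknown parameter names
    }
    std::vector<uint8_t> process(const std::vector<uint8_t>& in) override {
        std::vector<uint8_t> out(in.size());
        const double t = params_.at("Threshold");
        for (size_t i = 0; i < in.size(); ++i)
            out[i] = in[i] >= t ? 255 : 0;
        ++ts_;                     // stand-in for a real synchronized clock
        return out;
    }
    uint64_t lastTimestamp() const override { return ts_; }
};

int main() {
    ThresholdFilter f;
    f.setParameter("Threshold", 100);
    auto out = f.process({10, 120, 200});
    std::cout << f.describe() << " -> " << int(out[2]) << "\n";  // prints 255
}
```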

Uniform XML Descriptions and Object-Oriented Data

In developing a software standard, the EVSG working group drew on the very similar requirements of 3D line scan cameras with regard to image preprocessing and processor modules. The range of processor modules from various manufacturers calls for a generic description of their capabilities, consistent input and output formats for data transport, and uniform data formats and structures together with their semantics (such as the unit of measurement) in order to guarantee interoperability of the processor modules. Output data is to be treated as objects (an object-oriented data structure) rather than as pixel-based regions; in this way, dynamic object sizes, lists, stream combinations and metadata can be taken into account, as the sketch below illustrates. The complexity of the processor modules' topology of processing nodes should thus be mastered.
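
The following minimal C++ sketch illustrates the object-oriented treatment of output data: a dynamically sized list of blob objects, each with geometry, a unit-annotated area and free-form metadata, instead of a pixel-based label image. All type names are hypothetical.

```cpp
#include <cstdint>
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Hypothetical object-oriented result structure: a module emits a list of
// blob objects rather than a pixel-based region map. Names are invented.
struct BoundingBox { uint32_t x, y, width, height; };

struct BlobObject {
    BoundingBox box;
    double area_px;                              // unit: pixels
    std::map<std::string, std::string> metadata; // e.g. {"class", "screw"}
};

struct BlobList {
    uint64_t frame_id;
    std::vector<BlobObject> blobs;               // dynamic object count
};

int main() {
    BlobList result{42, {{{10, 20, 30, 15}, 287.0, {{"class", "screw"}}}}};
    for (const auto& b : result.blobs)
        std::cout << "frame " << result.frame_id << ": "
                  << b.metadata.at("class") << " at (" << b.box.x << ","
                  << b.box.y << "), area " << b.area_px << " px\n";
}
```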

To realize the envisioned solutions, the working group suggested expanding the GenICam SFNC (Standard Features Naming Convention) for embedded systems. Through this expansion, consistent description models for image data such as bounding boxes, regions of interest or centers of gravity are to be defined, and the manufacturer-specific XML descriptions of processor modules integrated (glued together) to produce a fixed syntax and uniform semantics. For the integration of the XML descriptions, two options are currently under evaluation: in the first approach, the XML descriptions as well as the parameter trees of the camera and the applications are merged; in the second approach, called GenTP and still under research, the processor modules are individually recognized, addressed and configured by a PC host, and their XML files are consequently read out separately. The sketch below illustrates this second, host-driven approach.
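
Here is a rough C++ sketch of that second, host-driven option, under the assumption that each module can hand the host an XML self-description: the host enumerates the modules, reads each description out separately, and stitches them into one combined parameter tree. The module names and XML fragments are invented for illustration and do not come from any released specification.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Hypothetical host-side view of one enumerated processor module.
struct EnumeratedModule {
    std::string id;
    std::string xmlDescription() const {
        // In a real system this would be read out from the device over a
        // transport layer; here it is a canned, invented fragment.
        return "<Module Name=\"" + id + "\">\n"
               "  <Feature Name=\"Enable\" Type=\"Boolean\"/>\n"
               "</Module>\n";
    }
};

int main() {
    // Host-side enumeration of the modules found on the embedded device.
    std::vector<EnumeratedModule> modules{
        {"Sensor0"}, {"FilterFPGA"}, {"BlobCPU"}};

    // Read each XML description out separately and glue them into one tree.
    std::string combined = "<DeviceParameterTree>\n";
    for (const auto& m : modules)
        combined += m.xmlDescription();
    combined += "</DeviceParameterTree>\n";
    std::cout << combined;
}
```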


Figure: Complex and heterogeneous architecture of embedded vision systems
