

Successfully Implementing Embedded Vision

Silicon Software GmbH

Image processing is playing an ever-increasing role in embedded systems, in electronic devices for the consumer goods industry, and in non-industrial applications such as medical technology, as well as in machine vision under the auspices of “Industry 4.0”.  The miniaturization of sensors, FPGAs, chips, and processors, together with their increased processing performance, has led to a high level of hardware complexity, yet the possibilities for using this hardware remain limited as long as standardization is lacking.  For Embedded Vision to attain its full potential, the interoperability of components and the connectivity of a network’s process participants are critical: they reduce complexity, enable decentralized vision intelligence, and guarantee real-time processing of image and signal data.

The development of well-defined standards for Embedded Vision will continue to occupy both the imaging and automation industries for the next few years: seamless communication among an embedded image processing system’s components, and between that system and the automation environment, is a prerequisite for real-time data transfer and for the economic feasibility of Embedded Vision projects.  Embedded Vision systems are transforming into decentralized, intelligent, and independently acting participants in production and automation.  In the future, such systems should automatically register with networks and know which network components require their results; the automation network can then in turn retrieve specific information, such as image and signal data, from the device.

Thanks to their high computing capacity and intelligent algorithms, Embedded Vision systems will analyze data such as product, quality, and process information locally and in real time, process it for further use, and report back results.  Such systems therefore do not simply deliver information; they also control procedures that improve the performance, efficiency, and quality of the production process.  Preventive maintenance, human-robot collaboration, and flexible system control for one-to-one (individual item) production can all be achieved more efficiently in this manner.

FPGAs provide parallel data processing in real time

What might an Embedded Vision system look like that reduces the load of ever-growing bandwidths, processes and analyzes images locally and in real time, and at the same time relieves host computers of heavy processing?  FPGAs (Field Programmable Gate Arrays), found on frame grabbers and in Embedded Vision systems such as cameras and vision sensors, promise the largest leaps forward in processing performance.  A wide range of integrated circuit designs can be implemented in an FPGA, which can then take on image processing tasks such as image optimization, image preprocessing, evaluation, and the generation of control signals.  Given their ability to process data with very high parallelism, FPGAs offer optimal conditions for real-time image data processing.  Thus, not only can barcode reading be implemented in a camera, but so can product code identification, as well as image improvements such as subtracting undesired reflections, removing dirt, or correcting geometric distortions.
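As a minimal illustration of this kind of streaming logic, the following VHDL sketch subtracts a stored reference pixel from each live sensor pixel to suppress a static reflection, clamping the result at zero.  The 8-bit mono format, the port names, and the simple valid handshake are assumptions made purely for illustration, not taken from any particular camera or frame grabber design.

  -- Illustrative sketch (assumed 8-bit mono pixels, assumed port names):
  -- one live pixel per clock is compared against a stored reference
  -- pixel at the same position; negative differences are clamped to zero.
  library ieee;
  use ieee.std_logic_1164.all;
  use ieee.numeric_std.all;

  entity reflection_subtract is
    port (
      clk       : in  std_logic;
      valid_in  : in  std_logic;
      pixel_in  : in  std_logic_vector(7 downto 0);  -- live sensor pixel
      ref_in    : in  std_logic_vector(7 downto 0);  -- reference pixel
      valid_out : out std_logic;
      pixel_out : out std_logic_vector(7 downto 0)
    );
  end entity;

  architecture rtl of reflection_subtract is
  begin
    process (clk)
      variable diff : signed(8 downto 0);
    begin
      if rising_edge(clk) then
        -- one subtraction per pixel per clock cycle
        diff := signed(resize(unsigned(pixel_in), 9)) -
                signed(resize(unsigned(ref_in), 9));
        if diff < 0 then
          pixel_out <= (others => '0');             -- clamp negative results
        else
          pixel_out <= std_logic_vector(diff(7 downto 0));
        end if;
        valid_out <= valid_in;
      end if;
    end process;
  end architecture;

Because each such block consumes one pixel per clock, many of them can be chained or replicated side by side in the FPGA fabric, which is where the high parallelism described above comes from.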

FPGAs control sensors, pixel formatting and processing, as well as interface transfers.  They enable image preprocessing that reduces image data bandwidth or performs analyses to acquire metadata or control data.  This allows companies to incorporate their intellectual property (IP) expertise in devices as well as to address new applications and markets.

Whereas FPGA hardware programming was considered labor-intensive and costly in the past, today it is strikingly easier: a graphical user interface with operators and data flow diagrams requires no hardware programming experience.  Since FPGAs are reprogrammable hardware, a great variety of special applications can be implemented in Embedded Vision systems.  FPGAs can also be coupled with processors to form a hybrid architecture, where the microcomputer is often an embedded system on a chip (SoC), frequently with a multicore architecture, combining special processors, real-time capable microcontrollers, and high-bandwidth memory with minimal power consumption.
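To hint at what the FPGA side of such a hybrid architecture can look like, the following hedged VHDL sketch exposes a control register that the SoC processor writes and a result register that it reads over a simplified memory-mapped port.  A real design would use a standard on-chip bus such as AXI; the two-register map and all names here are illustrative assumptions.

  -- Illustrative sketch of an FPGA/SoC register interface: the CPU
  -- writes a control word and reads back a result produced by the
  -- image processing pipeline. Bus protocol deliberately simplified.
  library ieee;
  use ieee.std_logic_1164.all;

  entity soc_regs is
    port (
      clk        : in  std_logic;
      write_en   : in  std_logic;                      -- write strobe from CPU
      addr       : in  std_logic_vector(3 downto 0);   -- register index
      wdata      : in  std_logic_vector(31 downto 0);
      rdata      : out std_logic_vector(31 downto 0);
      start      : out std_logic;                      -- control bit to pipeline
      result_cnt : in  std_logic_vector(31 downto 0)   -- result from pipeline
    );
  end entity;

  architecture rtl of soc_regs is
    signal ctrl : std_logic_vector(31 downto 0) := (others => '0');
  begin
    process (clk)
    begin
      if rising_edge(clk) then
        if write_en = '1' and addr = "0000" then
          ctrl <= wdata;                -- register 0: control word
        end if;
      end if;
    end process;

    start <= ctrl(0);

    -- register 0 reads back the control word; any other index returns
    -- the pipeline result (a real design would decode addresses fully)
    rdata <= ctrl when addr = "0000" else result_cnt;
  end architecture;

The SoC’s software then simply reads and writes these registers, while the time-critical pixel work stays entirely in the FPGA fabric.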

Heterogeneity calls for Embedded Vision standards

When presenting its report at the G3 Future Standards Forum (FSF) in Chicago in the summer of 2015, the Embedded Vision Study Group (EVSG) called for making this heterogeneous system architecture more transparent.  By further developing standards such as OPC UA (Unified Architecture) and GenICam for Embedded Vision in several working groups, the EVSG is driving standardization forward so that recorded (image) data can be processed without loss and evaluation results can be passed on.

In the EVSG report, three technology fields were identified as Standard Candidates (SC) for which the working groups will develop interface standards: modular construction with sensor boards and a processor unit / System on Chip (SoC) and their compatibility (SC1 Group), the software (API) model for communication with and control of embedded components (SC2 Group), and their integration into an automation and/or processing environment (SC3 Group).

SC1: One concrete question here is, for example, how a sensor can be enhanced with processor intelligence so that it automatically feeds its image data into a network.  Until now, proprietary software programs have been used to control cameras and sensors, to transfer processing results, and for device recognition and control.  Interface standards that are GenICam-compatible need to be developed in order to embed image processing devices efficiently into the production environment.  To this end, the EVSG has evaluated different interfaces between sensor and processor / SoC, in particular MIPI, PCI Express, and USB3; however, for reasons of long-term availability and technical limitations, it has thus far been unable to release a concrete recommendation for any of these technologies.

SC2: Embedded Vision systems preprocess images internally and, in some cases, enrich the resulting data.  This new type of data requires expanded generic description models of data formats and structures, as well as of their semantic information.  The EVSG recommends an expanded GenICam standard that encompasses the demands of embedded systems, with emphasis on supporting generic data formats and the processor modules that process the data.  An additional aspect is the set of security mechanisms to be observed, such as data encryption and IP protection.

SC3: Translating preprocessed and compressed image data into concrete information remains challenging.  Standardized interfaces are needed between embedded image processing systems and both automation technology and the control and planning levels.  RAMI 4.0, the Reference Architecture Model Industry 4.0, recommends the OPC UA protocol as the sole solution approach, within the framework of a companion standard that would offer a proven software model.  As a technology candidate, GenICam was evaluated as a companion standard with emphasis on semantics (adopting the GenICam SFNC, Standard Features Naming Convention), an approach the EVSG also endorses.  The image processing systems should then be integrated directly into the PLC software and, in doing so, into the production line.  In the future, a real-time extension within the context of an OPC UA specification is envisaged.

Flexibly program embedded FPGA hardware

Whereas FPGA programming was once reserved for hardware specialists, today software developers and application engineers, along with system integrators, component manufacturers, and hardware developers, are in a position to implement image processing algorithms on FPGA hardware, thanks to innovative, easy-to-use technology such as graphical user interfaces.

Silicon Software follows this approach with its award-winning VisualApplets development environment, which, with over 200 operators and over 80 preconfigured sample designs, enables the implementation of individual applications as well as control tasks at the signal level (e.g., trigger control).  The latest release, VisualApplets Expert, allows experienced users to take existing hardware code, such as image processing modules in VHDL or Verilog, and continue using it in VisualApplets as graphical operators.  The modules are inserted as pre-synthesized IP core netlists, and each IP core produces a single operator within VisualApplets.
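The following hedged VHDL sketch shows the shape of such a module, here a simple binarization stage, that could be synthesized to a netlist and imported in this way.  The valid-based handshake and port names are assumptions made for illustration; the actual operator interface conventions are defined by VisualApplets Expert, not by this sketch.

  -- Illustrative module that could be synthesized to a netlist and
  -- wrapped as a graphical operator: pixels at or above a fixed
  -- threshold become white, all others black.
  library ieee;
  use ieee.std_logic_1164.all;
  use ieee.numeric_std.all;

  entity binarize is
    generic (
      THRESHOLD : natural := 128        -- assumed fixed threshold
    );
    port (
      clk       : in  std_logic;
      valid_in  : in  std_logic;
      pixel_in  : in  std_logic_vector(7 downto 0);
      valid_out : out std_logic;
      pixel_out : out std_logic_vector(7 downto 0)
    );
  end entity;

  architecture rtl of binarize is
  begin
    process (clk)
    begin
      if rising_edge(clk) then
        if unsigned(pixel_in) >= THRESHOLD then
          pixel_out <= (others => '1');   -- foreground
        else
          pixel_out <= (others => '0');   -- background
        end if;
        valid_out <= valid_in;
      end if;
    end process;
  end architecture;

Once synthesized to a netlist, a module of this shape would appear in the graphical design as a single operator alongside the built-in ones.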


To program Embedded Vision systems with FPGAs and SoCs using VisualApplets, an interlayer such as VisualApplets Embedder is necessary.  It merges an image processing device’s available electronics with VisualApplets’ interfaces within the FPGA, which acts as the central control processor, abstracted via a dynamic IP core.  VisualApplets Embedder is implemented on an image processing device in a one-time process of just a few steps.

The VisualApplets Embedder integration process starts with the definition of the IP core’s interfaces and its integration into the overall FPGA design.  In order to use an image processing algorithm created with VisualApplets on an embedded system, an IP core that can be filled as often as desired is incorporated into the hardware platform’s FPGA design as an empty black box in VHDL.  The connection to the external hardware resources, i.e., sensor interfaces and memory controllers, takes place via glue logic.  To this end, manufacturers enter data (such as the FPGA used) and specify the ports on the IP core (memory interfaces, image data inputs and outputs, register and GPIO interfaces).  Using VisualApplets Embedder, interfaces can be flexibly combined, scaled, and configured to requirements (see illustration).
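A hedged sketch of how such an empty black box might appear in the platform’s FPGA design is shown below: only a component declaration plus glue logic routing the sensor stream through the core.  The port list here is an illustrative assumption; the real ports, including memory, register, and GPIO interfaces, are specified by the manufacturer during the Embedder integration.

  -- Illustrative platform top level: the VisualApplets IP core exists
  -- only as a component declaration (an empty black box); its contents
  -- are supplied later by synthesized VisualApplets designs.
  library ieee;
  use ieee.std_logic_1164.all;

  entity platform_top is
    port (
      clk          : in  std_logic;
      sensor_data  : in  std_logic_vector(7 downto 0);
      sensor_valid : in  std_logic;
      link_data    : out std_logic_vector(7 downto 0);
      link_valid   : out std_logic
    );
  end entity;

  architecture rtl of platform_top is
    component va_ip_core is              -- empty black box, no body
      port (
        clk       : in  std_logic;
        img_in    : in  std_logic_vector(7 downto 0);
        valid_in  : in  std_logic;
        img_out   : out std_logic_vector(7 downto 0);
        valid_out : out std_logic
      );
    end component;
  begin
    -- glue logic: route the sensor stream through the IP core to the
    -- transmission interface (memory, register and GPIO ports omitted)
    u_core : va_ip_core
      port map (
        clk       => clk,
        img_in    => sensor_data,
        valid_in  => sensor_valid,
        img_out   => link_data,
        valid_out => link_valid
      );
  end architecture;

Because the black box is only declared, not defined, the surrounding platform design stays fixed while successive VisualApplets designs fill the reserved region.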

Next, a netlist is generated from the overall FPGA design and the corresponding constraints file is created.  Finally, a hardware platform-specific plug-in for the VisualApplets programming environment is generated, which provides device support in VisualApplets.  Alongside the IP core black box, the plug-in contains all information on the hardware platform’s FPGA that is necessary to create an FPGA configuration bitstream.  For the software-side connection of the new devices, an installer that retrofits the new programmable image processing devices is created from the plug-in and can be distributed to VisualApplets users.  Once VisualApplets and the plug-in are installed, the new hardware platform is programmable using VisualApplets.

The interlayer is integrated into the FPGA hardware once, after which the resulting open platform can be programmed as often as desired.  Moreover, applications and functions can be ported onto other FPGA devices, for example to maintain a family of products or to carry out a porting across an entire product line, e.g., from frame grabbers to intelligent cameras.  Parts of the image preprocessing can be carried out efficiently via the FPGA programming directly in the device, reducing data load and system costs.  New applications and markets can thus be addressed quickly and easily with the help of VisualApplets.
