“4 Steps to a Smarter Camera”: Simple Image Processing Implementation on the FPGA
Silicon Software GmbH Posted 08/18/2017
VisualApplets Embedder is a graphical programming environment that allows FPGAs on various hardware platforms to be equipped with image processing applications.
While machine vision has become indispensable to modern manufacturing, the demands on the cameras used continue to increase. Along with ever-growing amounts of data, applications now face real-time requirements as well. To meet these demands, state-of-the-art cameras not only capture images; they also perform image pre-processing.
Integrating VisualApplets Embedder into a camera equipped with an FPGA makes it possible to load real-time-capable image processing algorithms onto the hardware quickly and as often as desired and, once created, to port an algorithm onto different hardware designs in just a few steps.
FPGA Programming with VisualApplets
Using the graphical user interface, FPGA programming is accessible even to software developers and imaging specialists with no VHDL experience. Comprehensive sets of hardware-based operators allow complex image processing tasks to be designed on the FPGA. These operators can be arranged and connected with each other as needed to fulfill concrete image processing tasks. Arithmetic and morphological operators (e.g., for pixel manipulation), logical operators, operators for complex color processing, statistical analysis, and format conversion, as well as segmentation and object classification, can be used to design real-time applications. Implementation of control tasks at the signal level (trigger control, for example) is a further core feature.
One commonly occurring requirement, for example, is 3D image processing, in which profile data or a focal point must be determined directly from a laser triangulation. Designing a robust, high-speed laser triangulation for determining an object's focal point is very easy using a combination of predefined operators, and can be implemented on Baumer LX VisualApplets cameras.
Originally developed for graphical programming of FPGAs on frame grabbers, VisualApplets Embedder allows the programming tool to be used on any FPGA-based hardware platform. To enable use of an image processing algorithm created with VisualApplets on embedded systems, an IP core is integrated into the FPGA design of the hardware platform as an empty black box.
In a one-time process, VisualApplets Embedder is integrated (into a camera, for example) in just a few steps. Essentially, the integration process requires defining the IP core's interfaces and integrating the IP core into the overall FPGA design.
The VisualApplets Embedder IP Core Interfaces
Connection to external hardware resources, e.g., to sensor interfaces and memory controllers, occurs via glue logic. With VisualApplets Embedder, a very flexible combination of scalable, configurable interfaces is possible.
The VisualApplets Embedder IP core interfaces in detail are:
- Clock: a two-phase clock system with single and double clock frequency
- Slave IF: register slave for transferring run-time parameters
- GPI, GPO: general-purpose I/O signals for exchanging signals, such as trigger or control signals, with the IP core
- ImgIn: interfaces for input image data streaming
- ImgOut: interfaces for output image data streaming
- MemWr/Rd: interfaces connecting to external memory modules
VisualApplets Embedder Integration
VisualApplets Embedder integration means, along with the generation of an embedded IP core that can be programmed as often as desired, the creation of a platform-specific plugin for the VisualApplets programming tool. Along with the IP core black box, this plugin contains all the hardware platform's FPGA information needed for generating an FPGA configuration bitstream. The required files are generated in an automated workflow. Integration takes place in the following main steps:
Integration of VisualApplets Embedder in 4 Steps
1. Specification of the hardware platform by the platform manufacturer (information on the FPGA used, logic resources, I/O requirements, etc.)
2. Automatic generation of the IP core black box (VHDL), as well as a VHDL test bench for simulation purposes, with the VisualApplets Embedder tool.
3. Integration of the generated VisualApplets Embedder IP core black box into the VHDL design of the hardware platform's FPGA. Finally, a netlist of the entire FPGA design is generated and a constraints file is produced.
4. Automatic creation of the plugin for VisualApplets, based on the previously generated FPGA netlist and constraints file.
Programming the New Hardware Platform
Following integration, a plugin for VisualApplets is available for use. The plugin is generated as an installer that can be shared with other VisualApplets users.
After the plugin is installed, the new hardware platform can be programmed with VisualApplets.
Now, a new image processing algorithm can be created directly for the specific hardware in VisualApplets, or an existing VisualApplets design can be ported onto the new hardware. When porting, incompatible interface modules are represented as dummy blocks and can simply be replaced with operators from the platform-specific library.
In this manner, as many platforms as desired can be made programmable using VisualApplets.
Parameterization During Run Time
To access the image processing algorithm’s parameters during run time, various automatically generated, design-specific files are available.
HAP file. During applet synthesis, a file (*.hap) containing all information necessary for parameter access is created. The applet parameters can then be accessed via the API of the corresponding runtime environment.
GenICam XML code for GenICam API. If the target platform is connected via a GenICam-compatible interface, the parameters are integrated seamlessly into the software using automatically generated GenICam XML code.
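The shape of such a description can be sketched in generic GenApi-style XML (an illustrative fragment, not the code VisualApplets actually generates; the feature name and register address are assumptions):

```xml
<!-- Hypothetical applet parameter exposed as a GenICam feature.
     The Integer node maps the named feature to a device register. -->
<Integer Name="Threshold">
  <pValue>ThresholdReg</pValue>
</Integer>
<IntReg Name="ThresholdReg">
  <Address>0x0010</Address>   <!-- assumed Slave IF register offset -->
  <Length>4</Length>
  <AccessMode>RW</AccessMode>
  <pPort>Device</pPort>
  <Sign>Unsigned</Sign>
  <Endianess>LittleEndian</Endianess>
</IntReg>
```

A GenICam-aware application can then read and write "Threshold" by name without knowing the register layout.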
Generic C API code. VisualApplets can generate portable ANSI-C code for parameter access at runtime. Read and write access to the VisualApplets Embedder IP core's register slave (Slave IF) is realized using callback functions that are passed in when the software initializes. Parameters can be addressed by name.