Embedded vision systems differ from traditional machine vision systems in several important ways. From their design to their use in non-traditional applications, they are built for entirely different conditions than machine vision systems, which typically sit inside a factory in a highly structured environment and capture high quality images.
Embedded vision systems need to be highly compact and function in challenging, unstructured environments while still delivering high quality images. Because of this, their processing architecture differs from what is found in most machine vision systems. While embedded vision is still an emerging technology, two main types of processors dominate embedded systems today – field programmable gate arrays (FPGAs) and graphics processing units (GPUs).
What are GPUs and Why are They Used in Embedded Vision?
GPUs are widely used in embedded vision systems because they deliver large amounts of parallel computing power, which can accelerate key portions of the processing pipeline that deal with pixel data. This is particularly useful in high resolution or high speed applications, where enormous amounts of image data are generated. General purpose GPUs (GPGPUs) are the most common form, as they are built to meet the needs of a wide range of applications.
GPUs implement imaging algorithms in software. This has many benefits: end users can tweak or change imaging functions as needed in the field, one system can perform multiple types of imaging functions, and vision systems can be fine tuned after deployment – an important consideration for the many embedded applications that run outdoors, away from a PC.
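To illustrate the data-parallel model GPUs exploit, here is a minimal sketch using NumPy on the CPU as a stand-in: every pixel gets the same independent operation, which is exactly the pattern a GPU maps onto thousands of parallel threads. The image values and threshold below are hypothetical examples, and the software-defined threshold is the kind of parameter that can be retuned in the field.

```python
import numpy as np

def threshold_image(image: np.ndarray, threshold: int) -> np.ndarray:
    """Binarize an 8-bit grayscale image: one identical, independent
    operation per pixel (embarrassingly parallel, ideal for a GPU)."""
    return np.where(image > threshold, 255, 0).astype(np.uint8)

# A tiny synthetic "frame" standing in for camera data.
frame = np.array([[10, 200],
                  [128, 127]], dtype=np.uint8)
binary = threshold_image(frame, 127)
```

Because the algorithm lives in software, changing its behavior in the field is a one-line parameter change rather than a hardware redesign.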
What are FPGAs and Why are They Used in Embedded Vision?
GPUs are a popular option, but in recent years FPGAs have been gaining favor as image processors. Their main drawback has always been, and to some degree still is, that they lack the flexibility of GPUs. FPGAs implement algorithms directly in hardware, so reprogramming or fine tuning the image processing of an FPGA-based system takes significantly more time and resources.
However, dedicated hardware is much faster than software running on a general purpose processor. FPGAs have been gaining popularity because of their extremely low latency. They also provide significantly more processing potential with far lower energy consumption, and they can accelerate several portions of a computer vision pipeline at once, where GPUs can typically accelerate only one.
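The low latency comes from pipelining: in an FPGA, each processing step is a dedicated circuit, and once the pipeline fills, one finished pixel leaves every clock cycle. The following is a simplified cycle-by-cycle simulation in Python, not real FPGA code; the three stage functions (offset correction, gain, threshold) are hypothetical examples of fixed-function stages.

```python
def subtract_offset(p):  # stage 1: dark-level correction
    return max(p - 16, 0)

def apply_gain(p):       # stage 2: fixed-point gain of 1.25 (x5/4)
    return min(p * 5 // 4, 255)

def threshold(p):        # stage 3: binarization
    return 255 if p > 100 else 0

STAGES = [subtract_offset, apply_gain, threshold]

def run_pipeline(pixels):
    """Simulate a hardware pipeline: each clock cycle, every stage
    fires simultaneously on its own pixel, so after a fill latency of
    len(STAGES) cycles, one result emerges per cycle."""
    regs = [None] * len(STAGES)  # pipeline registers between stages
    out = []
    for p in list(pixels) + [None] * len(STAGES):  # extra cycles to drain
        if regs[-1] is not None:            # result leaving the last stage
            out.append(regs[-1])
        for i in range(len(STAGES) - 1, 0, -1):   # shift data forward
            regs[i] = STAGES[i](regs[i - 1]) if regs[i - 1] is not None else None
        regs[0] = STAGES[0](p) if p is not None else None  # new pixel enters
    return out
```

In software these stages would run one after another per pixel; in FPGA fabric they all run in the same clock cycle on different pixels, which is why an FPGA can keep several parts of the pipeline busy simultaneously.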
Each form of processor comes with its own advantages and disadvantages. If your application requires a high degree of flexibility, then GPUs may be the right answer. If low latency and speed are of the utmost importance, FPGAs may be the best processor for the application.
Regardless of which type of processor is being used, embedded vision systems are disrupting the traditional vision industry and adding vision capabilities in applications that never could have leveraged older machine vision systems.
To learn more on this topic, read our article, “Embedded Vision Systems Fundamentals: A Basic Overview of Design and Functionality.”