
Feature Articles

Smart Cameras Get Smarter

by Winn Hardin, Contributing Editor - AIA

Demand for smart cameras is bigger than ever. The global smart camera market is expected to expand at a CAGR of 18.2% between 2017 and 2024, when it will reach $6.2 billion. The Asia-Pacific region and North America will primarily drive the growth.

The reason for this substantial growth is the versatility of smart cameras and the opportunities they bring to systems integrators. Unlike a traditional machine vision camera, a smart camera combines an image sensor with a processor and I/O in a single compact housing, often no bigger than a standard industrial camera. These products are usually offered with a set of software tools — either from the camera manufacturer or a third party — that allow a systems integrator to program the camera through a graphical user interface (GUI) to enable a number of computational tasks.

The definition of exactly what constitutes a smart camera, however, is rather subjective. Virtually all cameras sold into the industrial vision marketplace sport some form of intelligence, whether or not they are marketed as smart cameras. This is due to the inherent need to perform important preprocessing functions on the images before they are transferred to a PC for further processing. This firmware preprocessing is the key differentiating factor between cameras that feature the same sensor.

To perform these preprocessing functions, most industrial cameras feature field-programmable gate array (FPGA) hardware which, in combination with firmware, enables many functions such as gain correction, noise pattern correction, decimation/binning, and compression to be handled within the camera — and without putting an additional processing load on a PC. Such cameras also regularly carry their own RAM, which is used for interim storage of the images. This increases data stability and is an important factor, particularly in applications that require many images in a short time and thus a high bandwidth.
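The preprocessing chain described above can be sketched in software. The following is a minimal illustration, assuming a simple 8-bit monochrome sensor with a pre-captured dark frame and flat-field gain map; in a real camera these steps run in FPGA firmware, not Python:

```python
import numpy as np

def preprocess(raw, dark_frame, gain_map):
    """Typical in-camera preprocessing: fixed-pattern noise correction,
    per-pixel gain correction, then 2x2 binning (decimation)."""
    # Subtract the fixed-pattern (dark) noise recorded with the shutter closed
    corrected = raw.astype(np.float32) - dark_frame
    # Apply the per-pixel gain (flat-field) correction
    corrected *= gain_map
    # 2x2 binning: average each 2x2 block into one output pixel
    h, w = corrected.shape
    binned = corrected.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return np.clip(binned, 0, 255).astype(np.uint8)

# Simulated 4x4 sensor frame with uniform signal and dark offset
raw = np.full((4, 4), 100, dtype=np.uint8)
dark = np.full((4, 4), 10, dtype=np.float32)
gain = np.ones((4, 4), dtype=np.float32)

out = preprocess(raw, dark, gain)
print(out.shape)   # (2, 2)
print(out[0, 0])   # 90
```

Binning halves the resolution in each dimension, which is one reason it is done in-camera: it cuts the data that must cross the interface to the PC by a factor of four.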

Figure 1: Demand for smart cameras is bigger than ever. The global smart camera market is expected to expand at a CAGR of 18.2% between 2017 and 2024, when it will reach $6.2 billion. The Asia-Pacific region and North America will primarily drive the growth. Image courtesy: Research Nester.

Complexity Increase

From these humble processing beginnings, however, camera designers have continued to add further processing power to their cameras in many different ways. One early approach was to integrate a low-cost multicore processor into the camera. While these cameras may have had restricted computing power, they found a home in simple stand-alone applications such as gauging and barcode scanning.

According to Fabio Perelli, Product Manager at Matrox Imaging, the main differentiator between the two leading processor architectures used in smart cameras — namely X86 and ARM — is that X86 processors are built on a complex instruction set computer (CISC) architecture, while ARM processors follow a reduced instruction set computer (RISC) design. RISC instructions are typically simpler and execute faster (i.e., within a single clock cycle). Meanwhile, CISC instructions are more complex, requiring multiple CPU cycles each, though with the benefit of needing fewer instructions to perform much more complex tasks.

The architecture a camera vendor selects may have as much to do with the company's existing base of supported software as with any hardware consideration. Matrox Imaging, for example, leverages the performance of X86 architectures in its smart camera offerings because that architecture is compatible with the substantial programming expertise, and resulting software, that the company has developed and honed over decades. As enticing as a switch may appear, engineers considering a change of hardware architecture must weigh the cost of redeveloping and re-optimizing the software. The gains in performance, efficiency, and cost must be resounding to tip the balance away from the architecture the software is currently designed for.

The predominant architecture found in today’s smart cameras incorporates a CPU and an FPGA, whether that be an X86 or an ARM processor, says Raymond Boridy, Product Manager at the Industrial Products Division, Smart Vision Solutions, at Teledyne DALSA. The combination of the CPU with an FPGA enables a developer to incorporate a DSP core into the FPGA fabric, as well as one or more dedicated customer-centric IP cores, which can perform specialized image processing functions in real time in conjunction with the DSP.

Having said that, a smart camera architecture general purpose enough to serve a plethora of applications may not be the one optimally suited to a highly specialized application. In some instances, a user might find after careful performance analysis that an FPGA + DSP system offers a significant performance increase over a CPU/DSP or DSP-only configuration. Application-specific software that runs best on one hardware architecture may not be ideally suited to a more general-purpose one.
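The "careful performance analysis" mentioned above usually comes down to benchmarking the same operation implemented different ways on the candidate hardware. As a minimal, hypothetical illustration of that methodology (here comparing a naive per-pixel loop against a vectorized implementation of the same threshold operation on a CPU):

```python
import time
import numpy as np

def time_it(fn, *args, repeats=5):
    """Return the best-of-N wall-clock time for one call to fn."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    return best

def threshold_loop(img, t):
    # Naive per-pixel loop, analogous to unoptimized scalar code
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = 255 if img[i, j] > t else 0
    return out

def threshold_vector(img, t):
    # Vectorized implementation exploiting the hardware's parallelism
    return np.where(img > t, 255, 0).astype(img.dtype)

img = np.random.default_rng(1).integers(0, 256, size=(256, 256), dtype=np.uint8)
t_loop = time_it(threshold_loop, img, 128)
t_vec = time_it(threshold_vector, img, 128)
print(f"loop: {t_loop:.4f}s  vectorized: {t_vec:.4f}s")
```

The point is not the specific numbers but the method: measuring the same workload under each implementation, on the actual target hardware, before committing to an architecture.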

Figure 2: For AI applications that leverage deep learning technology, the combination of a GPU and an FPGA within the smart camera may be best suited to the task. However, some vendors already offer the capability to perform functions such as automatically categorizing image content using deep learning techniques that run on mainstream CPUs, eliminating the dependence on third-party neural network libraries and the need for specialized GPU hardware. Image courtesy: Matrox Imaging.

Deep Learning Delves Deeper

Although there may only be incremental improvements in the performance and capabilities of smart cameras in the short term, the concept of deep learning is likely to make a significant impact longer-term. According to Matrox Imaging’s Perelli, the popularity of deep learning for machine vision is quickly becoming a market disruptor, driven by the dedicated logic on the system on a chip (SoC) required to run these deep neural networks.

Eric Gross, Principal Software Engineer at National Instruments, agrees. “Nearly every part of the technologies we use every day are leveraging machine learning under the hood,” Gross says. “All the major silicon vendors are racing to include dedicated circuits in their products for accelerating these operations as well. I think it is inevitable that smart cameras will adopt this type of technology to make them easier to program, as well as much more powerful.”

Matrox’s Perelli believes that the demanding nature of artificial intelligence (AI) applications can benefit from leveraging the power of graphics processing units (GPUs), helping to accelerate processing tasks such as those used by neural network systems for deep learning. The challenge, he says, will be to create a GPU powerful enough to function within the confines of a smart camera. Because SoC devices present a range of integration options, when it comes to performing advanced image processing, the challenge will be to develop an SoC with the right functionality mix.

For AI applications that leverage deep learning technology, the combination of a GPU and an FPGA within the smart camera may be better suited to the task, says Teledyne DALSA’s Boridy. However, some vendors already offer the capability to perform functions such as automatically categorizing image content using deep learning techniques that run on mainstream CPUs, eliminating the dependence on third-party neural network libraries and the need for specialized GPU hardware.
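CPU-only inference of the kind described above boils down to matrix arithmetic that any general-purpose processor can execute. As a toy sketch — assuming a hypothetical single-layer classifier with pre-trained weights, standing in for a real deep network — image categorization on a plain CPU might look like this:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over class scores
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(image, weights, biases, labels):
    """Flatten the image, compute class scores on the CPU,
    and return the top label with its confidence."""
    x = image.reshape(-1).astype(np.float32) / 255.0
    scores = softmax(weights @ x + biases)
    return labels[int(np.argmax(scores))], float(scores.max())

# Hypothetical pre-trained weights for a two-class defect check
rng = np.random.default_rng(0)
labels = ["good", "defect"]
weights = rng.standard_normal((2, 64)).astype(np.float32)
biases = np.zeros(2, dtype=np.float32)

frame = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
label, confidence = classify(frame, weights, biases, labels)
print(label, round(confidence, 2))
```

A real deep network stacks many such layers with convolutions and nonlinearities, but the arithmetic remains CPU-executable — which is why vendors can offer deep-learning categorization without specialized GPU hardware.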
