Machine vision systems are driving advancements in robotics, drone technology, and more. One might assume these systems are complicated and take years to master, but in truth they range from the simple to the complex, and every one relies on a few basic components. Let’s learn about them now:
“Smart Cameras” Versus Multi-Sensor Systems
You can classify all machine vision systems by the number and type of image sensors they use:
- Smart Camera: A smart camera combines a single embedded image sensor and its processing in one unit. Smart cameras are usually purpose-built for specialized applications where space constraints require a compact footprint. You might find them in microscopes or in barcode readers, for example.
- Multi-Camera Vision Systems: These systems pair multiple cameras with a separate component for image processing – usually a PC, though high-end systems may also handle some processing on board. Deploying multiple sensors makes each system more versatile and lets it collect more visual data.
A Closer Look at Camera Image Sensors
The image sensor is the cornerstone of any machine vision system, and many of the system’s key specifications derive from it. Chief among these is resolution: the total number of rows and columns of pixels the sensor provides.
The higher the resolution, the more data the system can collect, and the more precisely it can measure changes in the environment. This leads to a trade-off: More data means more processing, which can limit system speed.
Accordingly, monochrome cameras often provide faster, easier processing than their full-color counterparts. Color vision systems use either a single-chip or a three-chip architecture; the latter, common in broadcast applications, splits incoming light through a prism onto three sensors for high-quality color reproduction.
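One reason monochrome processing is lighter: a 24-bit color frame collapses to a single 8-bit channel, cutting the data to handle by two-thirds. A minimal sketch of that conversion, using the standard ITU-R BT.601 luma weights on a made-up 2×2 "image" (nested lists stand in for a real frame buffer):

```python
BT601 = (0.299, 0.587, 0.114)  # standard luma weights for R, G, B

def to_grayscale(image_rgb):
    """Convert a nested list of (R, G, B) tuples to single-channel luma."""
    return [
        [round(sum(w * c for w, c in zip(BT601, px))) for px in row]
        for row in image_rgb
    ]

frame = [[(255, 0, 0), (0, 255, 0)],
         [(0, 0, 255), (255, 255, 255)]]
print(to_grayscale(frame))  # → [[76, 150], [29, 255]]
```

In practice a library such as OpenCV would do this conversion, but the principle is the same: one value per pixel instead of three.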
The Lens: Crucial to Great Performance
The lens must mount to the camera, naturally, but it should also provide the appropriate working distance, image resolution, and magnification for your project. To calculate the required magnification, you need to know the camera’s image sensor size and the field of view you want. Most full-sized vision systems today use a “C-mount” lens.
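The relationship between sensor size and field of view is simple: primary magnification is the sensor width divided by the field-of-view width. A minimal sketch, where the 6.4 mm figure (the width of a common 1/2" format sensor) and the 64 mm part are illustrative assumptions:

```python
def magnification(sensor_width_mm, fov_width_mm):
    """Primary magnification needed to fit the field of view on the sensor."""
    return sensor_width_mm / fov_width_mm

# A 1/2" format sensor is about 6.4 mm wide; to image a 64 mm wide part:
pmag = magnification(6.4, 64.0)
print(pmag)  # 0.1x – the lens demagnifies the scene 10:1
```

Run the same calculation for your own sensor and target field of view, then pick a lens whose magnification and working distance match.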
Lighting Makes the Difference in “Real World” Applications
Lighting is crucial for capturing data consistently. Unfortunately, many techniques that work in the lab prove inadequate in live applications. There are two basic approaches: backlighting, where the light source sits on the side of the component opposite the camera, and front-lighting, where light reflects off the component toward the camera. Front-lighting is more common, since component features cannot always be discerned from a silhouette. In most cases, environmental conditions such as lighting can be adjusted to maximize the consistency and clarity of the image captured by the camera.
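A backlit part appears as a dark silhouette on a bright field, which is why segmentation under backlighting can be as simple as one global threshold. A minimal sketch of that idea; the pixel values and the threshold of 128 are made-up assumptions:

```python
def silhouette_mask(gray_image, threshold=128):
    """Mark part pixels (dark, 1) versus bright background (0)."""
    return [[1 if px < threshold else 0 for px in row] for px in gray_image] or None

# Corrected: iterate rows, then pixels within each row.
def silhouette_mask(gray_image, threshold=128):
    """Mark part pixels (dark, 1) versus bright background (0)."""
    return [[1 if px < threshold else 0 for px in row] for row in gray_image]

backlit = [[250, 20, 245],
           [240, 15, 250]]
print(silhouette_mask(backlit))  # → [[0, 1, 0], [0, 1, 0]]
```

Front-lit scenes rarely separate this cleanly, which is one reason lighting that worked on the bench must be re-validated on the production line.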
The Image Processor and Software: The Brains of the Outfit
The most common image processor is a personal computer. Historically this has limited how engineers can interact with the system, though that is changing rapidly as vision software adapts to mobile technology. PCs are still preferred in many cases because they offer a wide variety of programmable software options, which can enhance the system’s functionality. Precision performance from today’s vision systems usually demands selecting the right software for the task at hand, so that custom code can help the system function within a given application.