3D Vision Standards, Technology Adapt to Changing Application Needs
by Winn Hardin, Tech B2B - AIA Posted 12/09/2020
“There are no 2D machine vision applications,” says Thor Vollset, founder and CEO of machine vision camera and software supplier Tordivel AS. And he is not wrong.
After all, the real world exists in three dimensions (length, width, and height); four if you include time. This inconvenient fact hasn’t stopped machine vision companies from using 2D imaging systems to solve a massive and growing number of 3D problems, however.
But cramming a 3D world into 2D technology poses many challenges for the machine vision system designer. Until the last decade, the low availability and high cost of computer power meant that solving 3D problems usually required throwing some useful data away, such as color information and rotational data around each axis. And until recently, the growing body of machine vision technical standards also left 3D camera and software providers to develop their own closed-loop systems, constraining adoption and growth.
Today, the inclusion of 3D data transmission in the GigE Vision and GenICam standards along with massive amounts of cheap computer power are changing the way companies are designing and using 3D vision systems, bringing the machine vision industry one step closer to living in the real world.
GigE Vision Is Quietly Changing Your 3D Tech
Technical standards provide the compatibility that customers desire and the path to reliable profitability that suppliers need in return. In 2018, the EMVA and AIA — the machine vision trade associations for Europe and North America, respectively — released GigE Vision 2.1. This newest iteration of the standard defined a new multipart payload container specifically for transmitting 3D data from sensor to processor.
“The previous approach was typically a patchwork solution,” explains James Falconer, Product Manager at sensor interface and solutions provider Pleora Technologies and Vice Chairman of the GigE Vision Technical Committee. “They would take an RGB pixel format with 8 bits for each component, for example, and put the depth image in the R component, confidence levels in the G component, and something else in the B component. Their software would have to deconstruct this container to extract meaningful information, which resulted in closed, proprietary 3D systems rather than 3D cameras that can talk to processing software from various vendors.”
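Falconer’s “patchwork” container can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not any vendor’s actual format: the channel layout, working depth range, and 8-bit quantization are all assumptions made for the example.

```python
import numpy as np

# Hypothetical pre-GigE Vision 2.1 "patchwork" container: a standard
# 8-bit RGB frame whose channels secretly carry non-color 3D data
# (depth in R, confidence in G, something vendor-specific in B).
h, w = 480, 640
frame = np.random.randint(0, 256, size=(h, w, 3), dtype=np.uint8)

# Receiving software had to know the vendor's private layout to
# deconstruct the container into meaningful planes.
depth_8bit   = frame[:, :, 0]   # coarse depth, quantized to 8 bits
confidence   = frame[:, :, 1]   # per-pixel confidence
vendor_extra = frame[:, :, 2]   # vendor-specific payload

# Rescale quantized depth back to metres over an assumed working
# range -- yet another vendor-private convention the host must know.
z_near, z_far = 0.5, 2.0        # assumed range in metres
depth_m = z_near + (depth_8bit / 255.0) * (z_far - z_near)
```

The GigE Vision 2.1 multipart payload removes the need for this kind of private convention: each plane travels as its own self-describing part, so any compliant host can interpret the stream.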
In just the past year, Falconer has seen several 3D camera and system manufacturers adopt the new 3D-defined-use camera specification in the GigE Vision and GenICam standards as part of their new product road maps. “This enables camera makers to build powerful embedded 3D cameras using technologies like NVIDIA’s low-cost Jetson that have a lot of power behind them,” says Falconer. “Add a few MIPI sensors and a nano board, then use our eBUS SDK libraries to transmit the sensor data to the processor, and you can quickly build a powerful 3D camera solution.”
This trend toward custom cameras with 3D capability is likely to accelerate, says Ed Goffin, Marketing Manager for Pleora Technologies, as Industry 4.0 pushes more machine vision to the edge of the plant network and beyond.
Leeks Point the Way
To illustrate how embedded machine vision technologies are helping users to push more automation to the edge of the plant, consider the simple act of trimming leeks. Cousin to the scallion, the high-end leek needs its root and greens trimmed to the proper height before packaging and shipping to local grocers.
“Leek trimming is typically done with bespoke machines rather than robotic solutions, but they still need a machine vision system to guide the blade to the root and cut with millimeter precision,” says Tordivel’s Vollset. “In the past, these custom 3D vision systems filled a large cabinet with an industrial PC. Now, all of that goes inside a single camera unit — the Scorpion 3D Stinger camera.”
Tordivel’s Stinger 3D platform is designed with full integration and flexibility in mind. In addition to housing multiple cameras plus structured and colored LED lights for capturing 2D color and 3D point cloud information, the Stinger can be configured with resolutions from VGA up to the GenICam limit of 29 MP, stereoscopic baselines from 35 to 220 mm, an optional IP69K food-safe washdown enclosure, and encoders for conveyor tracking. The ability to collect both 2D color images and high-resolution 3D point clouds makes the Stinger 3D particularly useful for logistics applications such as depalletizing, where the user wants to dimension each box for sortation as well as read identifying marks and codes.
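To see why paired 2D/3D data suits depalletizing, here is a deliberately minimal dimensioning sketch — not Tordivel’s algorithm. It assumes a top-down point cloud of a single box with the floor at z = 0 and measures an axis-aligned bounding box; the percentile choice and the synthetic data are illustrative only, and real systems must fit oriented boxes and segment multiple parcels.

```python
import numpy as np

def box_dimensions(points, top_percentile=99.5):
    """Estimate length, width, height (metres) of a single box from an
    N x 3 point cloud captured looking straight down, floor at z = 0.
    Axis-aligned sketch only: assumes one box, roughly aligned with
    the sensor axes, and uses a high percentile to reject outliers."""
    points = np.asarray(points, dtype=float)
    height = np.percentile(points[:, 2], top_percentile)  # robust top face
    x_min, y_min = points[:, :2].min(axis=0)
    x_max, y_max = points[:, :2].max(axis=0)
    return x_max - x_min, y_max - y_min, height

# Synthetic cloud: points sampled on the top face of a 0.4 x 0.3 x 0.2 m box.
rng = np.random.default_rng(0)
top = np.column_stack([rng.uniform(0.0, 0.4, 5000),
                       rng.uniform(0.0, 0.3, 5000),
                       np.full(5000, 0.2)])
length, width, height = box_dimensions(top)
```

The 2D color image then complements these dimensions: the same frame that sized the box can be searched for labels and barcodes before the robot sorts it.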
One of the strengths of the adaptable Stinger platform is Tordivel’s approach to 3D calibration. While some systems will use nearby calibration targets, Tordivel evaluates each application and 3D prints the best calibration target for that specific application. “This allows us to do one-button calibration for even the most demanding applications,” says Vollset.
Advanced Illumination, Control Aid Bin Picking
The ability to combine multiple illumination types for enhanced 3D machine vision functionality is also an important part of Omron Automation America’s forthcoming 3D bin picking solution (due out Q2 2021). Based on their FH Vision System, the new bin picking solution will include Active One-Shot (AOS) learning technology that projects multiple patterns simultaneously on the target, giving the system a rich feature set for fast 3D point-cloud data extraction.
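Omron’s AOS decoding is proprietary, but the triangulation step common to structured-light systems in general can be sketched generically: once pattern decoding identifies which projector column illuminated each camera pixel, depth follows from the camera–projector disparity as z = f·b/d. The focal length, baseline, and correspondences below are invented numbers for illustration.

```python
import numpy as np

# Generic structured-light triangulation (rectified geometry assumed).
# Not Omron's AOS algorithm -- just the textbook depth-from-disparity
# step that any projected-pattern system ends with.
f = 1200.0   # focal length in pixels (assumed)
b = 0.10     # camera-projector baseline in metres (assumed)

cam_cols  = np.array([320.0, 400.0, 480.0])  # pixel where each stripe was seen
proj_cols = np.array([260.0, 352.0, 420.0])  # that stripe's projector column

disparity = cam_cols - proj_cols
z = f * b / disparity   # depth in metres for each correspondence
```

Projecting multiple patterns at once, as AOS does, densifies these correspondences in a single exposure, which is what makes the point-cloud extraction fast.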
At the heart of the system is a forthcoming FH-3D camera and software that integrates sensors, light sources, and controllers into a compact footprint for mounting on the end of robotic arms. “The FH-3D’s compact size gives customers the flexibility to acquire images at multiple angles for more challenging objects and be able to pick all parts from a bin correctly,” explains Fernando Callejon, Product Manager — Vision & Laser Marker at Omron. “This also allows for a reduction in cell size, since you don’t need to install a camera at the top of the cell and build the structure around it. It saves material cost and space for the cell and also adds more flexibility for multibin applications, where the robot can just move the camera on top of each bin.”
Advances in 3D machine vision know-how, supported by standards that give customers the freedom to choose their preferred 3D solution, are helping machine vision gain a stronger foothold in the 3D “real world.” As deep learning classification and other advanced functionality are added to the 3D machine vision solution portfolio, few applications will remain out of reach for 3D automated solutions.
The Omron FH Series is a compact vision system that enhances the flexibility of 2D inspection and 3D bin picking applications.

Pleora’s eBUS SDK enables image capture, display, and transmission, providing developers with a feature-rich platform that simplifies both 2D and 3D application development, along with receive and transmit capabilities to streamline end-to-end data delivery between sensor devices and host applications.

Scorpion 3D Stinger™ for Robot Vision is designed to solve manufacturers’ classic challenge: picking parts from a conveyor belt, a pallet, or a crate. Scorpion 3D Stinger™ captures images, identifies and locates the product, and sends the ID and 3D location to a picking robot.