3D Vision: The Rise of the Service Robots

by Winn Hardin, Contributing Editor - AIA

When industry combined traditional 2D machine vision with robotic platforms, it gave birth to the first vision-guided robot and opened the door to greatly expanded sales of both technologies. Machine vision allowed the robot to work in less structured environments, such as picking and placing parts randomly positioned on a conveyor, or racking automobile components.

Recently, 3D vision has enabled industrial robots to move into less structured environments in agricultural and food processing, among other commercial industries. Unlike the first vision-guided robotic applications, these visual servoing applications anticipate that the part or product will move not just in two dimensions (2D) but in three dimensions (3D), although not with six degrees of freedom. These applications, such as the butchering of meat or the milking of cows, use a combination of mechanical fixtures and 3D vision-guided platforms to perform tasks that just a few years ago were beyond the ability of computer-based automation to accomplish. They are often described as some of the first fully autonomous service robotic applications, which the International Federation of Robotics defines as “a robot which operates semi- or fully autonomously to perform services useful to the well-being of humans and equipment, excluding manufacturing operations.”

While the food industry has used service robots like those mentioned above for several years now, the larger service robotics industry has yet to emerge. Why? For many reasons: some technical, others economic. But as 3D vision and computer processing continue to gain capability while falling in price, one thing is for sure: service robots are on the rise, and coming to a room near you.

Can Microsoft Be Wrong?
Machine vision is generally considered one of several critical enabling technologies for the nascent service robot industry. According to SRI Consulting Business Intelligence’s 2008 study, “Disruptive Technologies, Global Trends 2025,” service robotics has three main components: hardware technologies, of which machine vision is one of several necessary sensing technologies; software platforms; and cognition and artificial intelligence (AI). In each case, the technologies have to get cheaper and better before the service robot industry will flourish.

“Service robotics needs to prove its technical capability, and in addition we need to see an easy and convincing economic benefit to service robot applications before we can expect widespread adoption,” says Adil Shafi, vision-guided robotics pioneer and founder of ADVENOVATION, Inc. (Houghton, Michigan). “Right now, these innovation factors are compounded by an economic uncertainty in the marketplace, and therefore a temporary hesitation in investment.”

What a difference a few years can make. In 2006, a short recession was over, and money was cheap. So much so that Bill Gates chose 2006 to release Microsoft Robotics Studio and to predict that service robots would be the next PC, finding their way into every home. According to Gates, the PC of 30 years ago was too expensive for home use and suffered from a lack of standards and easy programming environments – just like robots today. To solve these problems and give service robotics a turbo boost, Microsoft released its Robotics Studio, only to shelve active support of the product line in light of global economic conditions.

TOF Cameras and 3D Vision
As the world rids itself of a nasty fiscal hangover, the good news for vision companies may be that – unlike past revolutions where machine vision has been the tail on the dog, benefiting from electronics and sensor technology developed for other purposes – machine vision is actually ahead of the curve when it comes to enabling service robots.

“One of the coolest things going forward is what 3D vision is going to do within the world of robotics,” explains Steve Prehn, Senior Product Manager, Material Handling and Vision at FANUC Robotics (Rochester Hills, Michigan). “A robot generally interacts with six degrees of freedom, so finding an object in a 3D world is becoming more of a requirement when parts don’t always sit flat on a surface. We’re seeing the need for 3D vision systems increasing faster than 2D systems. For more and more applications, you need to find the location of the object in 3D space to orient the robot gripper to grab the object, and that takes a lot of processing capability. Combining 2D with lasers is a nice way to quickly find objects, but lining up lasers to fall on small surfaces takes time. As processing power continues to increase, 3D position computation is going to get more practical, and that’s where I think we’re going to see the next big jump in service robotics. Time-of-flight [TOF] sensors are very interesting; stereoscopic approaches are also good, but the big problem for those systems is distinguishing common points when viewed from different perspectives, especially when the parts are rounded or contoured – an issue that doesn’t affect TOF cameras.”
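
Prehn’s TOF point can be made concrete: because each TOF pixel measures range directly, a depth image converts to a 3D point cloud by simple back-projection through the camera model, with no correspondence search between views. Below is a minimal Python sketch, assuming a pinhole camera model; the intrinsics (fx, fy, cx, cy) and the random depth image are illustrative values, not from any particular camera.

import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Convert a TOF range image (meters) to an Nx3 point cloud.

    Each TOF pixel carries its own distance measurement, so no
    correspondence search between two views is needed -- the step
    that trips up stereo on smooth, contoured parts.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx            # back-project through the pinhole model
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no return

# Illustrative use with made-up intrinsics for a 176x144 TOF sensor:
depth = np.random.uniform(0.5, 2.0, (144, 176))
cloud = depth_to_point_cloud(depth, fx=200.0, fy=200.0, cx=88.0, cy=72.0)

With the cloud in hand, the robot controller still has to fit a pose to it to orient the gripper, which is where Prehn’s point about processing capability comes in.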

[Photo: FANUC Robotics 710i Rotary Mate robot cutting ribs in an automated food processing application.]

FANUC Robotics is already seeing vision-guided robot applications that lie in the grey area between industrial and service robotics. Most recently, robots have been used in industrial food-processing environments to pick eggs, apples, tomatoes, ribs, and other organic products, and either inspect, manipulate, or cut the products for final packaging. While these applications take place in the controlled conditions of processing plants, a more recent application, involving the cleaning of cow udders and automated milking, falls squarely in the service robot application area.

Ram Mechanical developed a vision-guided robot to disinfect cow teats after milking. The system uses a FANUC M-710iC Rotary Mate and a TOF camera to verify that the milking cluster has been removed from the cow’s udder, enter the stall between the cow’s rear legs, and spray iodine disinfectant to stop cross-contamination from potentially dirty clusters. “A stationary camera, located a few stalls ahead of the robot arm, tells the computer (and the robot) if there is a cow in the stall. But more important, if there is a cow in the stall it records whether there is a milking cluster still attached, what the cow’s leg placement is, and udder height,” said Frank Dinis, herdsman for all four Ahlem dairies.

Once the computer knows the cluster is off the cow and things are clear, the robot enters the stall while the camera identifies teat placement in 3D for post-dip application. The robot arm is equipped with four nozzles to apply the teat dip and sprays two at a time on each side of the udder. Immediately after the arm pulls out from under the cow, a squirt of clean water washes any iodine off the camera lens. The spray operation takes about 6 seconds per stall as the carousel turns at 8.5 seconds per stall; the robot arm moves with the carousel. A newer version of the system also uses 3D vision to guide the robot to attach the milking clusters to each teat.
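
The sequence described above (verify the cluster is off, locate the teats in 3D, spray two at a time, rinse the lens) amounts to a simple per-stall routine, and the published cycle times leave roughly 2.5 seconds of margin per 8.5-second stall. Here is a minimal Python sketch of that loop; the record fields and robot calls are hypothetical, since the article does not describe Ram Mechanical’s actual control software.

from dataclasses import dataclass

CAROUSEL_PERIOD_S = 8.5  # carousel advances one stall every 8.5 s (from the article)
SPRAY_CYCLE_S = 6.0      # spray operation takes about 6 s per stall

@dataclass
class StallObservation:
    """What the look-ahead camera records per stall (hypothetical field names)."""
    cow_present: bool
    cluster_attached: bool       # milking cluster still on the udder?
    udder_height_m: float
    teat_positions: list         # four (x, y, z) points from the TOF camera

def disinfect_stall(obs, robot):
    """One disinfection cycle; `robot` is a hypothetical motion-control driver."""
    if not obs.cow_present or obs.cluster_attached:
        return                               # skip: empty stall, or cluster still on
    robot.enter_stall(obs.udder_height_m)    # move in between the cow's rear legs
    for pair in (obs.teat_positions[:2], obs.teat_positions[2:]):
        robot.spray_pair(pair)               # two nozzles per side of the udder
    robot.retract()
    robot.rinse_lens()                       # wash stray iodine off the camera lens

assert SPRAY_CYCLE_S < CAROUSEL_PERIOD_S     # about 2.5 s of timing margin per stall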

The $2.5 Billion Pipeline
While agriculture is one industry looking for domestic help from vision-guided service robots, it may not be the largest potential market. SRI Consulting predicts that domestic applications, including animal husbandry and house cleaning, will likely fall behind military and healthcare applications when it comes to driving the service robot industry. Both the military and healthcare industries have already invested significant capital investigating service robotics, from the semi-autonomous robots that detonate mines and improvised explosive devices (IEDs) to NIST’s HLPR Chair, which uses 3D vision to help move patients from wheelchairs to beds and back without help from medical staff.

“The defense sector is well funded, has a real, tangible need, and is challenged to save soldiers’ lives, especially in IED environments,” explains ADVENOVATION’s Shafi. “Today, soldiers teleoperate robots, but they want more automatic, hybrid controls so that mundane and dangerous jobs can be done by robots with minimal human oversight.

“Vision is a part of that, but just one part,” continues Shafi. “If you look at Stanford University’s winning solution to the DARPA Grand Challenge for autonomous vehicles, you see that they used many sensors – LIDAR, triangulation, vision – and combinatorial techniques with a rapid switching capability that uses the best technique and sensor for the terrain. Service robotics has yet to move in that direction, but it will.”
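
Shafi’s switching idea can be sketched in a few lines: rank the available sensing modes for each operating condition and fall back down the ranking when a sensor drops out. The conditions, rankings, and names below are invented purely for illustration and are not taken from the Stanford team’s system.

# Hypothetical preference table: which sensing mode to trust first under a
# given condition. The rankings are illustrative only.
SENSOR_PREFERENCE = {
    "open_desert":   ["lidar", "vision", "triangulation"],
    "dust_cloud":    ["lidar", "triangulation", "vision"],
    "low_sun_glare": ["lidar", "triangulation", "vision"],
    "night":         ["lidar", "triangulation"],
}

def pick_sensor(condition, healthy):
    """Return the highest-ranked sensor that is currently reporting valid data."""
    for sensor in SENSOR_PREFERENCE.get(condition, ["lidar"]):
        if sensor in healthy:
            return sensor
    raise RuntimeError("no usable sensor for condition: " + condition)

# Example: LIDAR is down and vision is washed out by glare,
# so the policy falls back to triangulation.
print(pick_sensor("low_sun_glare", healthy={"vision", "triangulation"}))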

 
