
Feature Articles

Machine Vision and Robots

by Nello Zuech, Contributing Editor - AIA


Machine vision and robots developed along parallel paths for a while and converged in the mid-70s, when it was recognized that rigid, application-specific fixturing could not meet the needs of robot applications requiring adaptability. As the cost of both robots and machine vision came down, applications for vision-guided robots increased. In addition, standards have emerged that make connectivity easier, and standard practices have been developed for calibration.

While many of the early technology adoptions were driven by the automotive industry, today one finds both 2D and 3D vision guided robots applied in virtually every manufacturing industry. In addition, one finds them being deployed at distribution centers for parcel sorting and handling applications.

Actual implementations for robot guidance vary. In some cases, X-Y and theta positional feedback is provided based on information gathered in a 2D image. In some cases, scale or a structured light arrangement provides 3D data. In the most straightforward applications, the vision system provides positional information back to the robot serially. In the most complex applications, feedback is derived while the robot is in motion resulting in iterative visual servoing or positional feedback and correction in virtually real time.
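The X-Y and theta feedback derived from a 2D image can be illustrated with a classic moment-based pose estimate: the object's centroid gives X-Y, and its principal axis gives theta. The sketch below is a minimal illustration under that assumption (the function name and approach are illustrative, not any particular vendor's implementation; commercial systems typically use pattern matching instead):

```python
import numpy as np

def pose_from_blob(points):
    """X, Y and theta of an object from its segmented pixel coordinates:
    centroid for position, principal axis (second moments) for orientation."""
    pts = np.asarray(points, float)
    cx, cy = pts.mean(axis=0)
    d = pts - (cx, cy)
    mu20 = (d[:, 0] ** 2).mean()
    mu02 = (d[:, 1] ** 2).mean()
    mu11 = (d[:, 0] * d[:, 1]).mean()
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    return cx, cy, theta
```

The resulting (X, Y, theta) triple is what a vision system would convert to robot coordinates and report to the controller.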

Most of the known suppliers of vision-guided robots or suppliers of machine vision systems known to offer software targeted at vision-guided robots were invited to participate in this article. What follows are insights into integration issues. The following contributed:

Mark Sippel, In-Sight Principal Product Marketing Manager, Cognex
Edward Roney, Development Manager, Fanuc Robotics America
Georg Lambert, ISRA VISION
Walt Pastorius, Technical and Marketing Advisor, LMI
Jeff Noruk, President, Servo Robot
Adil Shafi, President, Shafi, Inc.

1. Can you describe some generic robot guidance applications in which your machine vision products are embedded? Are they 2D or 3D robot guidance applications?

[Mark Sippel] In pick-and-place applications, fixed mounted In-Sight Vision Sensors are used to acquire images of an area on a conveyor carrying objects for packaging, palletizing, or assembly. The vision sensor finds and computes the location of objects on the conveyor, converts the location into x and y coordinates, and reports these coordinates to the robot/motion controller so the robot can pick up the objects. These applications are generally 2D.

Other vision-guided robotic applications are what we call 2½D. Layered bin picking, for example, presents objects in random positions and orientations in a stack of trays. When a camera views a stack of parts, the top object appears smaller as the stack gets shorter and its distance from the camera increases. In these types of applications, In-Sight uses the apparent change in size to calculate the top tray's height, so a robot can continue to add or remove parts from the stack.
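The height-from-apparent-size calculation described above follows directly from the pinhole camera model: apparent size is inversely proportional to distance. A minimal sketch, assuming a downward-looking camera and a focal length expressed in pixels (the function names and parameter choices are illustrative assumptions, not Cognex's implementation):

```python
def distance_from_size(focal_px, real_width_mm, apparent_width_px):
    """Pinhole model: distance = f * W / w, so objects look smaller farther away."""
    return focal_px * real_width_mm / apparent_width_px

def stack_top_height(camera_height_mm, focal_px, tray_width_mm, apparent_width_px):
    """Height of the top tray above the floor, for a camera looking straight down."""
    return camera_height_mm - distance_from_size(focal_px, tray_width_mm, apparent_width_px)
```

For example, a 300 mm tray imaged at 200 px by a camera 2000 mm above the floor (f = 1000 px) puts the top of the stack 500 mm up.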

[Ed Roney] The most generic application of robotic guidance is the location of an object in 2-D or 3-D space (depending on the application requirements) so that the Intelligent Robot can pick up the part and present it to another device or machine. These objects can be presented non-fixtured, loosely located, stacked or piled and they could be supplied in racks, bins, tubs or on pallets or moving conveyors.

Regardless of the presentation, an Intelligent Robot, which uses machine vision as one of its embedded senses, can use 2-D or 3-D to locate an object for further processing. We see this generic application of robotic guidance applied in industries such as automotive for the location of power train components, sheet metal body parts, complete car bodies (or features on the body), and parts used in assembly. Other general industries such as food, pharmaceutical, glass and everyday products have applied intelligent robotic guidance technology to their applications for the same generic purpose - to locate a non-constrained object for pick-up and processing by a robot.

[Georg Lambert] ISRA has addressed the following applications as examples:


  • Rim wheel alignment for painting, deburring, stacking
  • Ceramics alignment for deburring and polishing
  • Machine loading and unloading
  • Press loading and unloading
  • Bin unloading
  • Palletising and de-palletising
  • Loading of machines
  • Rivet robot guidance for aerospace parts
  • Robot assembly
  • Conveyor picking coupled to machine and press loading


  • Glass insertion
  • Roof mounting
  • Bulkhead plate assembly
  • Front-end assembly
  • Automatic fuelling
  • Automated sealing
  • Windshield assembly
  • Roof assembly
  • Door assembly
  • Robot-guided engine assembly and inspection
  • Robot guidance for sealing

[Walt Pastorius] LMI applications are primarily in automotive assembly, guiding part placement to automate assembly operations previously done manually or with hard-tooled automation. These applications generally require 3D vision.

[Jeff Noruk] Servo Robot 3D technology addresses: 3D welding (laser and arc), cutting and scoring, and palletizing applications.

[Adil Shafi] Products in boxes, bin picking, autoracking, grasping, vision servoing with objects on chains or hooks, cloth handling, electronics, conveyors (static, indexed, 2D conveyor tracking, 3D conveyor tracking), inspection, dynamic adhesive dispensing and inspection, CAD correct product path following, plastics (degating, deflashing, port docking), products in trays that are cheap, reusable, loose, warped and stacked. They are 2D and 3D applications.

2. Can you provide a general description of the approach you use in your machine vision systems to provide the required visual servoing?

[Ed] First, a vision camera(s) or sensor(s) is mounted either in a fixed position (typically above the robot's workspace) or directly on the robot's end-of-arm tooling so that it has a good view of the specific part. The vision camera is calibrated to the robot's working coordinate system. From there, the vision system is trained and used to locate the object in the robot's workspace and report the location (either in 2-D or 3-D coordinate space) to the robot controller. The parts can be either static or moving, depending on the application and industry.

[Georg] ISRA provides all the tools and methods necessary to solve industrial robot guidance tasks. This includes calibration, image processing, moving sensors, stationary sensors, photogrammetric 3D, stereo based 3D and triangulation based 3D. Most applications are based on a one-shot measurement strategy with calibrated metric coordinates sent to the robot. In some cases closed loop visual servo is of interest.

In this case, non-linear stochastic prediction methods are needed: a closed loop is set up over the image data, feeding the robot controller with displacement vectors that minimize the relative velocity between the vision data and the part, at a fast sampling rate that depends, of course, on the robot controller.
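The closed-loop idea described above — feed displacement vectors that drive the image-space error toward zero — can be illustrated with a simple proportional control step. This is a sketch under stated assumptions (the gain value and the simulated convergence loop are illustrative; a real system adds prediction, calibration, and robot dynamics):

```python
import numpy as np

def visual_servo_step(target_px, current_px, gain=0.5):
    """One proportional step: a displacement vector in image space, scaled by a
    gain; a real controller maps this through calibration into robot motion."""
    return gain * (np.asarray(target_px, float) - np.asarray(current_px, float))

# Simulated closed loop: the tracked feature converges to the image target.
pos = np.array([0.0, 0.0])
target = np.array([100.0, 40.0])
for _ in range(25):
    pos = pos + visual_servo_step(target, pos)
```

Each iteration halves the remaining image-space error, which is the sense in which the loop minimizes relative velocity between vision data and part.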

[Walt] Most current applications in automotive assembly are point to point, with offsets sent to the robot from the vision sensors.

[Jeff] We use standard laser-based 3D vision approaches employing a variety of cameras to match the application along with application-specific software modules.

[Adil] Truly general visual servoing requires tracking an object that may move randomly in 3D space. This requires multiple cameras to see the entire space. In addition, the communications loop with the robot must be fast enough to give truly real-time updates to follow the object and to act with or upon it.

3. What are some challenges in integrating machine vision with a robot?

[Georg] We understand integration in two aspects. First is to offer an easy to use coupling between a vision system and the robotic world. This implies a fast and standardized communication interface to the robots, an easy to use user interface matched to the knowledge of a robot programmer and a slim and nice hardware solution. The second aspect is about real integration into the robot controller and/or the robot programming language. Vision tasks must become part of the robot programming, e.g. calibration, part picking with offsets and others. In this case the integration is to some extent depending on the robot supplier and his environment. Integration of the vision hardware in the robot controller is the goal.

[Walt] Basic integration is often straightforward. The real issue is creating a simple user interface for all of the elements of the system with maximum commonality and simplicity for the user.

[Jeff] Developing and maintaining optimum interfaces due to frequent changes in robot hardware and software.


[Adil]
a) Easy user interface.
b) Solid mathematical integration between the vision and robot space.
c) Solid and general-purpose (non-custom) vision algorithms.
d) Ability to modify the process, products and dunnage with minimal re-work.

[Mark] In many cases, if the equipment manufacturers use proprietary-bus architectures, communications from the vision sensor to the motion controller can be very difficult to setup and use. In these instances, users must make sure that the communication port and data strings are properly formatted, or use a communications driver that provides seamless communications setup and use.

Cognex In-Sight offers configurable communication settings, and the ability to format communication data strings and drivers for robotics systems like Motoman. Many robot suppliers have also started to offer an Ethernet interface. Ethernet provides an open standard, and it costs little to connect equipment to an Ethernet cable.

Normally a system integrator would resolve any data incompatibilities. In many cases, engineers can program a vision system to communicate using the protocol--Modbus, Profibus, CANOpen, and others--a motion controller expects.
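As a concrete illustration of the data-string formatting issue, a hypothetical ASCII framing for an X-Y-theta offset might look like the sketch below. The comma-separated format is an assumption for illustration only; every controller defines its own protocol, and fieldbus options such as Modbus or Profibus exchange binary registers instead:

```python
def format_offset(x_mm, y_mm, theta_deg):
    """Build a hypothetical comma-separated ASCII frame for an X, Y, theta offset."""
    return f"{x_mm:.3f},{y_mm:.3f},{theta_deg:.3f}\r\n"

def parse_offset(msg):
    """Parse the same frame back on the receiving side."""
    x, y, theta = (float(v) for v in msg.strip().split(","))
    return x, y, theta
```

The point of agreeing on such a format (or using a vendor-supplied driver) is exactly the round-trip property: what the vision side sends is what the motion side reads.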

Calibrating the coordinates that the vision sensor detects to the robot's coordinates can also be a challenge. A vision sensor must have a means to convert its coordinate system to match that of the robot. Cognex In-Sight provides a couple of different calibration methods to do this, based on what best fits the application.

[Ed] Integration challenges depend on whether the vision product is already integrated as a standard product of the robot, or whether a general-purpose vision system, supplied by a company other than the robot manufacturer, is to be integrated with the robot. In the latter case, several significant challenges exist:

  • Communication (how will the vision system and robot exchange information)
  • Precision calibration (the matching of the vision image to the robot coordinate systems)
  • Robot math (making the vision data useful to the robot and using coordinate frames)

These integration concerns are large engineering tasks that are virtually eliminated if the end-user specifies the use of a vision system designed as a standard integrated product from the robot manufacturer. With standard integration, all the issues have been addressed and designed into the complete Intelligent Robot package that lowers cost, reduces risk and provides a more supportable automation solution.

4. Do you offer any standardized calibration approaches?

[Walt] The appropriate method for calibration depends on the application. For 'relative' applications, where vision supplies offsets to the robot, it may need only simple verification of proper operation. Where absolute vision data in global coordinates is required, more complex methods are needed, using either known-dimension parts measured on a CMM, or artifacts placed at known positions, which are used for verification of system operation.

[Jeff] Yes, for robotic laser welding we use our own Servo Robot-developed compensation package, while for part measurement we use a Dynalog system.

[Adil] Yes. Our calibration procedures are standardized, automatic, and easy-to-follow. Anyone who can follow a cell phone user interface can work with them.

[Mark] In-Sight provides several standard commands that can be used to calibrate the vision sensor's pixel coordinates to actual world coordinates. In-Sight calibration methods can use up to nine pixel-to-world associations to solve for an eight-coefficient transformation, including: translation in two dimensions; rotation about three axes; scale in two dimensions; perspective distortion; and parallelogramming, or skewing.
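The eight-coefficient transformation with nine pixel-to-world associations corresponds to a projective (homography) fit, which can be solved by least squares. The sketch below shows the standard direct-linear-transform approach under that assumption; it is not Cognex's actual implementation:

```python
import numpy as np

def fit_pixel_to_world(pix, world):
    """Solve the 8-coefficient projective transform (h33 fixed at 1) mapping
    pixel coordinates to world coordinates from point associations."""
    A, b = [], []
    for (x, y), (X, Y) in zip(pix, world):
        A.append([x, y, 1, 0, 0, 0, -X * x, -X * y]); b.append(X)
        A.append([0, 0, 0, x, y, 1, -Y * x, -Y * y]); b.append(Y)
    h, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

def pixel_to_world(H, x, y):
    """Apply the fitted homography to one pixel coordinate."""
    X, Y, w = H @ np.array([x, y, 1.0])
    return X / w, Y / w

# Example: recover a known scale-and-offset mapping from a 3x3 grid of associations.
pix = [(x, y) for x in range(3) for y in range(3)]
world = [(2 * x + 10, 2 * y - 5) for x, y in pix]
H = fit_pixel_to_world(pix, world)
```

With nine associations the 18-equation system is overdetermined, so small measurement errors average out in the least-squares solution.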

[Ed] Vision-to-robot calibration actually involves three steps. The first is robot calibration (or mastering). The robot itself must be well calibrated in order to provide accuracy. Most non-vision robot applications are specified in repeatability, not accuracy, as the return to a taught point is the most desired requirement. However, when providing a location to a robot from a vision system, what is required is that the robot not return to a taught point, but go to the specified point accurately. If the robot is not well mastered, an error in position will result. Calibration verification of the robot should be the first step.

The second step is the vision camera or sensor calibration to the robot. FANUC Robotics' vision systems offer standard tools in the vision product that are used, or combined, depending on the application and the desire for recalibration in the field. The two broad types are 'grid calibration' and 'automatic robot calibration.'

Grid calibration is where a specified grid of circles is located on a flat surface (FANUC Robotics provides a rigid card set with each vision system) and placed in the camera's field of view. Once the grid is in the camera's FOV, a single image is taken and the system will automatically determine the pixel-to-real-world units and correct for other imaging perspectives. The grid itself is also used by the robot software to train what is called a robot frame. By performing these two simple steps, the relationship between the robot and camera (sensor) is easily set.
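As a simple illustration of one quantity grid calibration recovers, the pixel-to-millimeter scale can be estimated from the detected circle centers and the grid's known pitch. This is a hedged sketch, not FANUC's tool: it ignores lens distortion and perspective, which a full grid calibration also corrects:

```python
def mm_per_pixel(grid_pitch_mm, row_centers_px):
    """Scale factor from the known grid pitch and the detected circle centers
    along one row of the calibration card, listed left to right in pixels."""
    gaps = [b - a for a, b in zip(row_centers_px, row_centers_px[1:])]
    return grid_pitch_mm / (sum(gaps) / len(gaps))
```

For instance, circles on a 10 mm pitch detected 50 px apart give a scale of 0.2 mm per pixel.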

Automatic robot calibration is another standard setup tool where the vision system and robot are calibrated together using the motion of the robot. The camera can either be fixed mounted or robot arm mounted (a selection made in the setup). The actual part or some other target is then used through a motion pattern that allows the vision system and robot to automatically calibrate. This is very useful for field re-calibrations as it can be automatically run anytime from a robot program.

The third step in calibration is overall vision to robot verification. Minute errors that may have occurred in the setup and training of the part model and the robot frames can be amplified in the application as offset error, especially when rotation of the object is part of the robotic application. Here another standard calibration tool is provided in FANUC's vision products to re-evaluate how well the overall calibration has been accomplished and to offer a correction for any errors that have accumulated. A visual presentation is provided showing the error determined from location of the part in various rotations. With the error determined, the vision system can automatically provide a correction to the calibration and robot frames, which greatly improves the overall robotic guidance accuracy. 

[Georg] Yes, robot vision without calibration is not usable. Automated standard calibration routines are delivered including calibration targets if needed. Calibration is very basic to 2D, 2 1/2D and 3D single shot measurement solutions. The standard calibration is based on modelling the physical geometry in subpixel accuracy including lens distortion and many other deviations from the simple camera model.

5. What has happened over the last couple of years to make integration easier?

[Jeff] Standard protocols - Ethernet, DeviceNet, and other bus protocols - have emerged, which makes integration easier.

[Adil] Standard off-the-shelf VGR (Vision Guided Robotic) software packages are addressing all the issues that, in the past, systems integrators used to struggle with. Now, these solutions are readily available for implementation by the integrator with minimal training, development, or risk.

[Mark] Ethernet now offers the preferred communications channel between a vision system and a robot. Integration with robotic controllers has been made easier and more flexible with the addition of networked (Ethernet enabled) vision products. This allows the use of higher-speed Ethernet communications as well as the capability, under the right conditions, to network vision sensors to the robotic controllers.

[Ed] Integration becomes easier when robot manufacturers offer pre-engineered, highly integrated standard vision systems for their robot products. Already integrated systems provide standard features for communication, calibration, application tools (which make vision useful to the robot without custom engineering) and interface commands in the robot's programming language. Over the last several years, robot suppliers, like FANUC Robotics, have invested in providing highly capable, application-specific robotic guidance vision systems that make the integration of a robotic system easier to install to the customer's application.

[Georg] Several key aspects are listed below:

  • Increase of robot controller processor speed
  • Migration of robot controller to the PC and MS-Windows based software.
  • Increased need for robot vision solutions due to cost issues, accuracy needs, quality needs and automation in general
  • Increased acceptance of vision as an established technology from the customer side
  • Higher performance computer technology
  • Faster interfaces
  • Increased experience of the users
  • More standardized applications that can be solved with an integrated system by the end user.

[Walt] Enhanced communications capability in current robots makes communications easier. Some early applications required complex, custom communications development, adding significant cost to the total system.

6. What do you see coming down the pike that will enhance the performance of vision-guided robots or make integration even easier?


[Adil]
a) Even more flexible and powerful vision tools; tools that can see flexing in a product, sense color and motion.
b) More accurate robots; cheaper and easier absolute accuracy.
c) Lower cost and higher volume business models.

[Ed] Today, many robotic applications are setup and tested offline in the office, not on the factory floor.  Many aspects of robot-to-vision setup can also be accomplished and tested prior to first imaging at the customer site. We see more tools becoming available that will make the setup and integration easier.

[Georg] Standardization of the interface between robot and vision system, true integration of vision hardware and software into the robot controller. Computer performance is still an issue for vision due to the zero cycle time requirement and the amount of intelligence which still can be added to the systems. New applications and the 'visions' of the customers for automation will drive the development.

[Walt] As more robot manufacturers move towards PC based controls, integrating vision becomes simpler, and more generic.

[Jeff] Wireless technology, which results in a reduction of wiring and connections. Standard interfaces for vision, which require less time to develop when a new robot model is introduced.

7. What is happening in the market that is fostering the adoption of more vision guided robots?

[Mark] Lower cost vision products and more seamless integration that does not require any programming are making the use of vision sensors with robotics more affordable and fostering more and more interest and implementation.

[Ed] There are three elements that encourage greater adoption of vision-guided robots: improved awareness, lower costs and high reliability. 

Many of the tools and capabilities available today in an application-specific robot guidance package have been in play for many years. In fact, in FANUC Robotics' case, many have been available since our first vision system back in 1982. With the addition of advertising and marketing by new entrant firms into the machine vision market, the awareness of machine vision as a reliable tool with robotics has greatly accelerated. Capabilities that have been known and used in some niche applications and industries are now being applied to a much broader base of companies.  

Further, the cost of vision has reduced significantly. Vision systems are a quarter of the cost they were just ten years ago, and with the higher levels of integration into the robot controller, the cost of implementation (engineering) has been significantly reduced.

Advanced reliability has also increased the adoption of vision-guided robotics. Vision tools like Geometric Pattern Matching, where the features of an object are identified and matched instead of a pixel-by-pixel comparison, have significantly added to the reliability of machine vision for robotics. For industrial robots that work in manufacturing plants making parts or products from raw materials or in-process goods, this reliability has greatly increased the applicability of machine vision and reduced the maintenance concerns once associated with earlier systems that were tremendously sensitive to lighting changes.

[Georg] The following reasons lead to increased adoption of vision-guided robotics:

  • Production demands for higher speed and higher flexibility
  • Vision-guided robots have become a standard part of production equipment and are designed into the application right from the beginning
  • The need for more flexibility
  • The need for higher automation to save labor costs
  • The need for higher accuracy in production
  • The increasing acceptance of the technology
  • Decreasing investment for simple applications
  • More user friendly MMI

[Walt] In automotive assembly, the trend toward flexible lines, capable of assembling a mix of different models, makes vision-guided robots more valuable. Flexibility makes hard tooling complex and expensive, and manual assembly often does not provide the required quality levels and can carry high costs from worker injuries.

[Jeff] Easier use, easier integration, lower costs and a greater focus on wanting to employ lean manufacturing where flexibility and high quality processes are at a premium.

[Adil] Four Things:

a) Labor cost structure (especially in the US versus overseas).
b) More consistent throughput requirements vs. manual 'line pacing' operators.
c) Rising medical costs, insurance and liability costs.
d) Desire to use the same dunnage (racks, conveyors, trays, boxes) to eliminate new costs when products are modified or changed.

8. Which industries are embracing vision-guided robotics? And for what applications?

[Ed] Today, there is no single hot spot. Application of robotic guidance is occurring in a broad range of industries. The automotive industry is using guidance for the assembly and processing of engine and body components. The food industry is using vision-guided robotics to pick products from conveyors for packaging into individual containers or cartons. The pharmaceutical industry is using vision and robots to locate medical supplies on moving belts for packing into shipping cartons. Metalworking industries are finding metal castings on pallets and loading CNC machines to make finished component products.

Industries of all types are embracing vision-guided robots because of the benefits the technologies provide:  lower costs, flexibility, reliability, and safety, just to name a few.

[Georg] Every industry with high production costs or critical production steps is potentially interested in vision-guided robotics. Critical production steps include those that pose potential injury to humans (e.g. paint shops, nuclear plants) or automation tasks that are not suited for humans (e.g. due to weight or size). Industries include:

  • Automotive and automotive suppliers: part handling, assembly, gauging, robot-guided inspection, logistics
  • Electronics: pick & place
  • Packaging: pick & place
  • Machining: loading and unloading
  • Press shops: loading and unloading
  • Aerospace: rivet-robot guidance
  • Ceramics: part handling
  • Consumer goods
  • Packaging of consumer goods
  • Warehouse logistics and object handling

[Jeff] Shipbuilding for panel and stiffener welding lines, automotive for Body-In-White and chassis welding and earth-moving equipment for large weldments.

[Adil] Automotive, Plastics, Electronics, Food, Textile, Military/Security.

9. As a supplier of machine vision for robots what are some challenges that you face in the market?


[Georg]
  • Low prices for high performance
  • No standardized robot interfaces
  • Low education level of robot programmers for vision applications
  • Full integration of vision in robot controller
  • Targeting and defining standard robot vision applications

[Walt] Challenges include working with the customer to properly define system requirements, keeping expectations at proper levels and ensuring that all parties involved in the project understand their involvement.

[Jeff] Addressing all of the niche applications. Also, unrealistic expectations stemming from consumers buying low-cost cameras for personal use, which have dropped in price 5X in the last few years. That is not easy to duplicate in niche factory applications requiring industrially hardened equipment.


[Adil]
a) A lack of good US government tax/business productivity incentives.
b) A lack of good US government training incentives to help manufacturing.

[Mark] One of the challenges facing vision sensor manufacturers is informing and educating the market on the use of vision integrated with robotics. Many new applications pop up every day, and many users do not realize the cost-effectiveness and ease with which vision sensors can now be applied.

[Ed] The market is still plagued by many past failures. It is no secret that machine vision was oversold in the early 80's; practical experience did not exist, and the technology was exciting and promising, but young. Many first adopters of machine vision were disappointed and are still hesitant today to trust the technology. In some ways this caution is healthy, because the potential to set expectations too high about what a machine vision system can do is still there, especially with the large variety of vision systems available today - from low-end sensors to premium 3-D laser-based systems.

Working with these reservations, the cost of applying a vision-guided solution to a new application can be high. Many customers require that systems be consigned on site and that prototype applications be developed; proving out the feasibility before a system can be specified is part of the process. From practical experience, however, risk is minimized, costs are reduced and good vision-guided solutions can be provided.

10. What advice would you give to a company investigating the purchase of a vision-guided robot?

[Walt] We suggest that interested companies look for an integrated solution, and involve the supplier early in the specification process to ensure a suitable solution is delivered as the final product.

[Jeff] Choose an important but fairly easy first application and make sure to bring all the players together early for the Team kickoff meeting.  Do not spend all of the lead time trying to get the very lowest capital equipment price but instead spend the time putting together the strongest team possible.


[Adil]
  • Understand and be able to measure the variation in your process very well. It is critical to know the total variation if you are considering an automated solution as a retrofit to your current process; if you are considering a new system, know what your variation will be. Make sure the vendor knows or measures this and has coverage for all of this variation at run-off time; otherwise the end-user and integrator will face a longer implementation and acceptance time in the plant, where both lose profitability.
  • Do not try to program or develop a Vision-Guided Robotic solution on the project's timeframe. Buy proven off-the-shelf solutions. Too many people lose profitability by doing this.
  • Try to eliminate the technical risk up front. Ask for free pre-sale demos which cover the worst case range of variation in the product, lighting, and presentation to cameras and robots.
  • Work with integrators that have a proven track record. This will help with pricing and deployment since integrators with a track record in this area will be able to give good prices and deliveries.

[Mark] Take the time to research the basics of how communications will work, and the basics of machine vision in addition to the robotics itself. Regardless of the manufacturer that you elect to work with, this will help you to become familiar with many of the general terms and techniques used.

[Ed] Understand where the application success lies. Is it in the vision system, the robot application or both? Consider support from the company you are purchasing your system from. How much support can they really supply and for how long? If you are buying the vision system and the robot separately, who then will be responsible for the overall success of the vision-guided robotic solution? 

Are classes available, not just for the vision system or the robot, but on vision-guided robotics, where vision and robots are taught working together? It is important to remember that in vision-guided robotics, one does not work without the other.

