3D-Based Machine Vision in Automotive Production Lines
by Nello Zuech, Contributing Editor - AIA Posted 12/18/2007
Back in the early 1980s, General Motors (GM) commissioned a corporate-wide analysis of potential opportunities for machine vision. They developed a taxonomy of generic requirements, which included: robot guidance 2D and 3D, metrology 2D and 3D, surface inspection and assembly verification. Having so segmented their requirements, GM then identified the companies that offered machine vision based on techniques and technologies most consistent with each specific set of requirements. They then proceeded to make an investment in all but one of those companies and provided funding for R&D to adapt and refine their respective techniques and technologies for one of the identified generic application sets.
The big carrot was GM’s announcement suggesting they had over 64,000 potential applications for machine vision across their operations. With this figure, several machine vision companies developed their prospectuses for initial public offerings. While not as big as the telecom bubble, there was a time when these stocks were “hot” largely because of GM’s claims and extensions of their totals to the worldwide automotive industry.
While things did not go as planned in the automotive market, the industry nevertheless embraced the technology early on and has deployed its fair share of machine vision systems.
To gain insight into some current machine vision activity in the automotive industry, we asked a number of companies to participate in this “round robin discussion.” To provide some focus for the article, we asked for input from those companies specifically engaged in online 3D-based machine vision applications. The following were kind enough to participate in this article:
- Walter Pastorius - Technical Marketing Advisor, LMI Technologies, Inc.
- Don Manfredi - Marketing Manager, Perceptron, Inc.
- Peter Nilsson - Business Development Manager, SICK IVP AB
- Matt Collins - Engineering Manager, Vision Solutions International, Inc. (VSII)
1. Can you describe your 3D machine vision product line that has specifically addressed shop floor applications in the automotive industry? Please discuss how you differentiate your products that address these applications, if you offer more than one.
[Walter Pastorius – LMI] LMI supplies 3D vision sensors for dimensional measurement, process monitoring and robot guidance for applications in automotive body assembly operations. Sensors are available in a variety of sizes with different standoffs, fields of view and packaging to suit specific measurement needs. All sensors are individually temperature compensated during manufacturing to ensure accuracy over the range of temperatures encountered on the manufacturing plant floor.
LMI also supplies 3D sensors to tire manufacturers for both in-process and final geometric inspection as well as sensors for use in automated wheel alignment at final vehicle assembly.
[Don Manfredi – Perceptron] Perceptron offers four major in-line products that serve the plant floor automotive market. They are:
- AutoGauge, which is used to measure complex sub-assemblies in line, using either a fixed sensor arch, robotic end effector mounted sensor, or hybrid configuration which combines both fixed and robotic.
- AutoGuide, which provides up to six degrees of freedom of offsets to industrial robots to guide them in doing tasks like loading closures (doors, etc.) or windshields.
- AutoFit, which measures gap and flushness of sheet metal or painted vehicles on a stationary or moving line.
- AutoScan, which is our robotic scanning product, provides customers with rich scan data that can be used for 3D color mapping or discrete inspection point measurement.
Additionally, Perceptron offers several types of sensors that can be mounted on Coordinate Measuring Machines (CMMs) and portable CMMs for offline scanning of complex assemblies. In each case, the nature of the applications being served by the products is the differentiator.
[Peter Nilsson – SICK IVP] We offer 3D smart cameras, suitable for standard applications like de-palletizing of goods, OCR/OCV reading of tires, and final quality inspection of parts. We also offer more flexible, higher performance 3D cameras for PC-based vision systems, typically used in more demanding applications, like bin-picking and high precision metrology tasks.
[Matt Collins – VSII] Our base platform is called VisionHub. It provides the basic functionality required to add 3D capability to image-based devices. This includes 3D calibration, model creation and matching, as well as locating functions. We then differentiate our products by adding an application layer that addresses specific needs such as sealant application, material handling, measurement, etc.
2. What specific 3D-based machine vision applications related to the automotive industry do your products address?
[Don] Our AutoGauge product is typically used for:
- End of line measurement
- Distributed/in-process measurement
- Robotic measurement
AutoGuide is applied to:
- Closure and roof loading
- Windshield and backlight loading
- Cockpit load
- Seam seal guidance
- Laser cut guidance
- Real time fitter-assist guidance
- Final area audit
AutoScan can be found in use during:
- Die hemmer integration
- Off-line closure panel inspection
[Peter] Our 3D-vision products are used in quality assurance of manufactured parts, robot guidance in de-palletizing and bin-picking applications, and tire manufacturing, to mention some.
[Matt] We are primarily involved in robot guidance. This includes material handling, sealant application, paint application, welding and general assembly. We are also getting more involved in gauging and inspection and robot mounted sensor applications. We see emerging interest in soft body trim operations and moving body assembly operations.
[Walter] Our applications focus on dimensional measurement and process control in automotive body assembly operations, covering large components such as the body-in-white as well as major subassemblies and other components. In addition, we offer products for in-process control and 3D error proofing in assembly operations.
3. What has been the most difficult 3D-based machine vision application in the automotive industry that you have addressed, and why? What were some of the specific application issues (throughput, appearance variables, specularity, position variables, line integration issues, etc.)?
[Peter] Random bin-picking is a great challenge. The reason is that the quality and resolution of the 3D data need to be very good, even though the object scene is very “noisy,” with lots of occlusion, specular reflections and variations.
[Matt] Our most challenging application is still in development. Whether it should be considered automotive or military is arguable, but I am going to use it anyway. The task is to automate the painting of camouflage patterns onto military vehicles when they are refurbished and redeployed, possibly to new arenas of action. Camo patterns are complex and the vehicle mix is much more extensive than that encountered by vehicle manufacturers. A vehicle such as a High Mobility Multipurpose Wheeled Vehicle (HMMWV) can come in configurations from ambulance to soft-top, and may even be fitted with in-field alterations.
The vision system must be able to identify and register the vehicle and various vehicle accessories quickly and accurately. Set up, including calibration and training of new models, must be simple and quick. This type of application does stretch the technology in new ways.
[Walter] Sensors for flexible measurement systems (sensors mounted on robots to provide programmable measurement capability in flexible assembly lines) have required a significant amount of development. These sensors must easily interface to many different robot controllers, must acquire data at very short cycle times, and must be able to make measurements on a broad variety of geometries.
[Don] One of the hardest has been measuring the gap and flushness of a painted vehicle while it is moving. You have to measure gaps and flushness on multiple vehicle styles with differing heights, colors, and vehicle positions. This was a tough application to solve, but we are now doing this type of measurement in facilities in North America and Europe.
4. Can you provide some insight into the principles embedded in your 3D-based machine vision products?
[Matt] Our systems use triangulation-based technology, both structured light and stereo. The object(s) of interest are represented as 3D feature sets. Matching, location and dimensional analysis are performed on the 3D representations of the object. This results in applications that are both more intuitive and robust when compared to 2D approaches.
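To make the idea of matching and locating on 3D representations concrete, here is a minimal sketch of one common approach to recovering an object's pose from paired 3D feature points: rigid registration via the Kabsch/SVD method. This is an illustrative technique, not necessarily VSII's actual algorithm, and the point sets below are assumed example data:

```python
import numpy as np

def rigid_transform(src, dst):
    """Best-fit rotation R and translation t mapping the Nx3 point set
    src onto the paired Nx3 point set dst (Kabsch/SVD method)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered rotation
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t
```

In a guidance context, `src` would be the trained model's feature points and `dst` the features extracted from the current scene; the recovered (R, t) pose is what gets converted into robot offsets.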
[Walter] LMI sensors for automotive applications are generally based on laser triangulation principles, often combined with 2D measurement capability resulting in full 3D measurement capability on a broad variety of surface geometries.
[Don] Our principal method of non-contact measurement relies on laser triangulation to acquire the image. However, the real work is done after we acquire the image and apply our vast library of measurement algorithms. We have algorithms that have been developed over 25 years to address traditional measurement features as well as really complex ones.
Additionally, our calibration techniques allow for us to measure in part space with a high degree of system accuracy.
[Peter] The data acquisition is based on laser triangulation. This means that the camera records the position of a laser line projected onto the object surface, viewing it from a known angle and distance relative to the laser. Knowing that angle and the position of the laser line on the imager, it is possible to calculate the distance between the camera and the object for every pixel column of the imager. The resolution of the 3D data is set by the field of view and resolution of the imager.
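As a rough illustration of the geometry described above (a generic triangulation sketch, not SICK's implementation), the range for one imager column can be computed from the observed line position; the focal length, baseline, and laser angle below are assumed values:

```python
import math

def triangulate_range(pixel_offset, focal_px, baseline_m, laser_angle_deg):
    """Range from the camera baseline to the surface for one imager column.

    Assumed geometry: camera and laser separated by baseline_m, both angles
    measured from the direction perpendicular to the baseline. The camera
    ray angle comes from the laser line's pixel offset and the focal length
    in pixels; the laser's projection angle is known from the rig design.
    """
    cam_angle = math.atan(pixel_offset / focal_px)      # camera ray angle
    laser_angle = math.radians(laser_angle_deg)         # fixed rig angle
    # Intersect the camera ray with the laser plane:
    # Z = B / (tan(cam_angle) + tan(laser_angle))
    return baseline_m / (math.tan(cam_angle) + math.tan(laser_angle))
```

With a 100 mm baseline and a 45-degree laser, a line detected on the optical axis (offset 0) would resolve to a 100 mm range; as the offset grows, the computed range shrinks, which is the height profile the camera reports per column.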
5. Can you provide some insights into the specific implementation designs of your 3D-based machine vision products?
[Don] Our current platform allows us to be a very flexible vision and gauging supplier. Our core platform is such that a customer can customize his or her application by adding software clients to a basic core platform known as the Vector Platform. For example, the customer buys the Vector Platform to do in-line measurement for gauging but later decides to add a robot guidance component to their system. Instead of having to buy a separate system, they can simply add our robot guidance module to their existing software platform and start sending offsets to robots. This flexibility is all made possible because of the unique way our software is architected.
[Peter] Our 3D-based vision products are based on proprietary CMOS imager technology. This technology allows for parallel on-chip signal processing inside the imager. We obtain the 3D data directly from the imager, and the benefit is that we avoid the time-consuming, expensive read-out functionality that is required if the 3D information is calculated externally. The result is that the products have excellent performance at a reasonable cost level. We have implemented a lot of low-level signal processing and laser imaging know-how to optimize the quality of the 3D data acquired with our cameras. From customer feedback, we know that our products offer a very competitive combination of speed, accuracy and cost.
[Matt] We use a network centric system architecture in which resources can be added and accessed in a very flexible manner. Resources can include imaging components such as smart cameras, processing elements and motion controllers. This makes for a very scalable and configurable system.
All system access is through a web interface. Any device with a web browser can be used to perform system setup. Important setup functions, such as cell calibration and model creation, have been largely automated to minimize setup time.
6. What new 3D-based machine vision products or advances to your existing products have you introduced in the past year, targeted at the automotive industry?
[Don] In addition to this software, we have created a completely new measurement platform that can be implemented on stand-alone systems or on a central server that can talk to individual “pockets” of sensors on a customer’s floor. This central server architecture helps the customer save on floor space, and gives them a central area to store measurement configurations, data, and anything else associated with their gauges. Things like software maintenance and upgrades need to be done only at the central server, no matter how many places you are measuring in your process.
[Peter] In the past year we introduced the IVC-3D, which is the world’s first 3D smart camera. This product is intended for the system integrator market, typically not experts in machine vision. The benefit for the customer is the same as with a 2D smart camera: easy to get started and providing sufficient functionality to solve standard applications.
[Matt] We are constantly expanding the range of image acquisition devices and motion control devices that we support, but in terms of application domains, we are expanding our metrology capability to more specifically address the needs of metal-forming applications, such as tube bending and stamping.
As noted earlier, we are moving into soft trim and moving body applications. To my knowledge, these are not being addressed today. Soft trim refers to objects that may lack sufficient rigidity to avoid deformation during a trimming operation. The trimming of flash on an injection molded part is an example. The interaction required between the 3D analysis engine and the motion control device is significantly different than that in registration applications. Moving body applications also require a significantly different approach.
[Walter] LMI has introduced the SmartGage™ line of sensors for 3D error proofing in automotive and other assembly operations. It provides true three-dimensional measurements for applications including dimensional verification, part or tooling position monitoring, fit certification and process variation reduction. SmartGage can be installed in-process, off-line or near-line, and is designed and configured for vision system integrators, with a wide variety of measurement algorithms adaptable to an even wider variety of applications.
7. What historically were the barriers to the adoption of 3D-based machine vision systems in the automotive industry, and what are today’s barriers to more widespread applications of 3D-based machine vision systems?
[Matt] Like most automation advances, the barriers to penetration usually come in the form of cost, complexity and reliability issues. Early systems were proprietary; hardware and software were specialized and expensive. PC-based systems lowered initial hardware costs, but still required considerable engineering content. As systems become more open, integration options become more standard and cheaper, system components come down in price, and the cost of vision-based systems becomes increasingly competitive with other alternatives.
[Walter] Cost of 3D vision systems is often an issue. Budgets are often limited, and vision must compete with other needs in the factory.
[Don] In the gauging world, the biggest early hurdle to acceptance was data confidence. Customers would do endless tests to make sure the data they were getting from their non-contact measurement system matched the data they were currently using. After data confidence, influencing the customer to dedicate people to turn the data into useable information, then to act on the information was the next barrier. Today, the reduction of skilled labor has made it tougher to get customers to dedicate resources to maintenance and support of their vision equipment. There is also a premium on space in the plants, and a strong desire to reduce capital investment.
In the robot guidance world, the biggest initial hurdle was proving that the applications would work and that the technology was robust and could be maintained by the plant personnel long term.
8. What advances associated with the technology infrastructure of your 3D-based machine vision products have led to more rigorous performance (reliability, repeatability, accuracy, etc.) in your newest products (optics, lighting, vision hardware, vision software, cameras, etc.)? And, what are the specific advantages of these advances in terms of price/performance?
[Matt] Components are becoming more reliable. For example, disk drives can be replaced by flash, swapping a lower-reliability electro-mechanical device for a high-reliability solid-state device. Our systems can now be delivered in all solid-state configurations. Networked architectures allow a reduction in the number of connectors, always a source of failure. Also, low-cost components and networked devices allow the possibility of configuring systems that are essentially fail-safe with only a marginal increase in cost over a minimal-cost configuration. In this context, fail-safe means that a system can detect a failure, but continue to operate until maintenance can be performed at a more convenient time.
[Walter] LMI has changed our approach to automotive applications implementation to that of an OEM approach. We offer the sensors and related software configured for easy implementation by machine builders and system integrators. With this approach, the end user can have vision technology supplied from any preferred supplier as part of the entire package. This simplifies the supply chain and allows for local support of the entire system, with single point responsibility.
[Don] At Perceptron, we have spent much of our engineering efforts on making our solutions even more robust and even easier to use. We feel that robustness and ease of use are more important to the end customer than chasing microns, as our systems are more than accurate enough for the applications we do. Things like self-teaching algorithms, navigation toolbars, and task wizards are ways that we are making our systems easier to use.
We are continuously improving robustness by providing RAID arrays and hot-swappable power supplies, eliminating single points of hardware failure. These types of sustainment efforts often go unrecognized but they are really important. Few people want to talk about cables, but if we were not continuously improving our cables, we never would have been able to put a measurement sensor on a robot.
[Peter] Apart from what was mentioned above regarding our core technology, without any doubt, our competence in development and design of 3D data processing is a key factor. This processing is implemented both in the hardware as well as in the software. The benefit of this is that we can provide 3D-based machine vision products at the same price level as 2D-based products.
9. What changes in the underlying 3D-based machine vision technology (vision engines, lighting, cameras) do you anticipate in the next 2 – 3 years that will yield even better performance and the ability to address even more automotive industry applications?
[Walter] We see 3D sensors for automotive applications evolving into smart sensors, with higher levels of image processing and data analysis located inside the sensor, based on our new modular, scalable FireSync platform.
[Don] Size of the sensors and cameras, processing speeds of the computers used to process images, and cost of components have and will continue to have an effect on the proliferation of in-line vision. The real trick is being able to put all of these components together into a system that the customer can use easily and effectively.
For example, if you can process images faster, you can perform multiple measurements in a cycle when troubleshooting, or measure after a robot guidance system finishes its decking job.
[Peter] We will be able to further improve the 3D data quality, allowing us to better address the most demanding applications with respect to shiny surfaces, color variations and the ability to handle varying ambient conditions.
[Matt] Changes that we anticipate are mostly evolutionary rather than revolutionary. More powerful, but less expensive imaging devices are expected. These include smart cameras and imagers operating in the non-visible spectrum, such as IR and ultrasonic systems, as well as others. We see motion control devices becoming more open. The other key aspect in expanding use of the technology is in better understanding the applications and devising appropriate solution methodologies.
10. How will those changes impact the automotive industry?
[Don] The changes mentioned above will allow more vision applications to be installed into customers’ processes. However, the negative impact will be that anyone with a couple of cameras can hang a sign and say that they are a vision company. This could lead to many customers getting sub-standard support and service, which does no favors for our industry.
[Peter] The industry will have the possibility to use robots in new applications, being more flexible than today. The cost of material handling can be greatly reduced and the process simplified. And last but not least, by introducing more quality assurance, production quality can be maintained at a higher level with less variation.
[Matt] The automotive industry continues to struggle to reduce costs, improve quality and differentiate product. Flexible and adaptive automation is sure to be a part of the solution to these problems. 3D vision will have growing roles in inspection, metrology and motion control. Vehicles and components are inherently 3D. Product designs are 3D. Six-axis robot manipulators are 3D devices, yet most in-line sensing is done with point, line or 2D image sensing devices. 3D vision will fill this void enabling more flexible, adaptive automation.
[Walter] As sensors become more “stand-alone” in operation, smaller systems that measure limited numbers of points and do not require external data analysis will allow more economical implementation, expanding areas of application.
11. What are some market/process changes that are taking place in the automotive industry that are driving the adoption of 3D-based machine vision systems?
[Peter] The market is getting more mature. 2D vision systems are now deployed as a standard technique. Now the customers look for solutions to the problems that the 2D technology cannot address.
[Matt] We believe the primary drivers for the end-user to use 3D vision are the need to lower costs and improve quality through automation, while concurrently dealing with the need to increase automation flexibility. Higher model mix and more frequent new model introductions are the bane of hard automation. Vision is a flexibility-enabling technology.
[Walter] Process monitoring requirements are being driven down the supplier chain to the Tier 1 and 2 levels. Maintaining acceptable quality standards requires the use of vision throughout the supplier chain.
[Don] The advent of more flexible body shops coupled with the reduction in labor has the possibility to impact the industry the most. I believe that we will see more and more robot guidance involved in the flexible processes that are being engineered today. If you are trying to do more complex manufacturing with fewer people, you have to use vision to assist in building the parts. Providing in-station process control after the assembly by measuring is also very important. For example, when we load a roof in a body shop, we do a quality check of the roof ditch in the same station, with the same sensors that helped load the roof. This provides true in-station process control for the customer.
Our customers are continuing to try to do more with less, while expecting to maintain and improve quality. These trends are all very positive for a company with a strong and proven track record like Perceptron.
[Peter] I would say that, historically, the reason was, as with any other new technology, a bit of suspicion and a bit of resistance. Today, one of the limitations is that 3D-based systems are not yet on the standard component lists specified by the car manufacturers.

[Don] One of the biggest breakthroughs we have been working on is a software product that turns our data into actionable information faster, and with much less human interaction. This groundbreaking software automatically detects variation patterns in a customer’s data, alerts them to issues in their build process, and then tells the customer what percentage of process improvement they stand to gain if they correct the issue. This software helps provide answers, not just point out problems.