Feature Articles

What’s Happening in the World of General-Purpose 3D-Based Machine Vision – Part 2

by Nello Zuech, Contributing Editor - AIA

The market for general-purpose 3D-based machine vision technology is expanding rapidly. By far the largest generic application is metrology. The non-contact data collection ability of machine vision-based implementations makes it possible to measure objects at very high speeds compared to contact-based approaches. The compute power available today makes it possible to process the Z-axis data along with the two-dimensional data and map it directly to CAD files fast enough to keep up with production rates in some applications. The result is that 3D-based machine vision is rapidly migrating onto shop floors, providing real-time process control.

There are many different approaches to acquiring 3D data. All 3D-based machine vision systems ultimately acquire and operate on image data. Acquisition can be based on collecting the Z-axis data using linear, area or other image sensing techniques; laser radar or other laser scanning techniques with point detectors; or other approaches. These systems incorporate the compute power to manage, process and analyze the data acquired, as well as to make decisions relating the data to the application without operator intervention. This characterizes what is meant by the term "3D-based machine vision."

The literature describes numerous approaches: triangulation, structured light, interferometry, moiré interferometry, phase shift interferometry, accordion fringe interferometry, confocal holography, laser ranging, stereo photogrammetry, etc. Some of the approaches are more suitable for small-envelope, some for medium-envelope and some for large-envelope applications; not all approaches are suitable for all measurement volumes. Some approaches are better suited for finer measurement resolutions (sub-nanometer), while others are better suited for applications requiring micron resolution. It is important that buyers understand their application requirements so that they can map the most appropriate approaches to them.

Input for this article was canvassed from virtually all the companies known to offer 3D-based machine vision products. Part 1 reflected input from companies offering more general-purpose products, while Part 2 (this article) reflects input from suppliers offering more of a complete solution. The following individuals responded, and their input appears below as answers to the respective questions posed.

  • Toni Ventura Traveset, General Manager – DataPixel
  • Lieven De Jonge – Metris
  • Tobby Y. Li – MIIC America Opton
  • Marty Chader, Product Manager – Konica Minolta Corporation
  • Iain Christie, Director of R&D – Neptec
  • Mark Hoefing, Sales Manager – Perceptron
  • Rob Stoner, Vice President, Metrology and Optical Systems, and Ron Garden, Director of Machine Vision Products – Zygo Corporation

1. What are some specific 3D-based machine vision applications that your company addresses with your 3D machine vision technology?

[Toni Ventura Traveset – DataPixel] "DataPixel products are designed for high-accuracy applications, combining high-precision 3D sensors (0.010 mm accuracy) with automatic feature extraction from 3D data and CAD-based inspection. Additionally, DataPixel products can be applied to robot guidance using 3D information and part alignment. DataPixel offers three families of 3D machine vision sensors:

  • H-Class: sensors to be integrated on Coordinate Measurement Machines (CMMs) for high-accuracy dimensional control.
  • R-Class: sensors to be integrated with industrial robots for 3D dimensional control and part alignment.
  • S-Class: sensors for in-process part inspection based on 3D scanning.

The automotive, aerospace and electronics industries are the typical users of DataPixel products, which provide the most accurate systems in the market. DataPixel products are used for 3D control of parts such as car body components, motor engine components, plastic products and soft part components, as well as for assembly control. With DataPixel solutions, the customer is able to perform high-accuracy 3D dimensional control of parts in-process."

[Lieven De Jonge – Metris] "Metris produces 3D laser scanning sensors that are interfaced to Coordinate Measuring Machines (CMMs). The main application is dimensional quality control in the automotive and aerospace manufacturing industries, as well as reverse engineering in the same industries."

[Tobby Li – MIIC America] "Reverse engineering and inspection."

[Marty Chader – Minolta] "Reverse engineering (modeling from physical objects, with sizes ranging from coins to cars) and inspection of forgings, castings, sheet metal stampings, stamping dies, etc."

[Iain Christie – Neptec] We are under contract to NASA to provide a 3D inspection system to verify the integrity of the outer thermal protective surfaces of the space shuttle when it returns to flight. The sensor will be held on the end of an extension boom, which will be manipulated using the shuttle's robot arm. The sensor will be used to detect and characterize anomalies in the shuttle's wing leading edge panels and nose cap at ranges of up to 2 m.
We are also developing the same technology for use in industrial applications to provide standoff metrology capability for quality control, inspection, and reverse engineering. We also hope to develop an in-line inspection system based on the same sensor.

[Mark Hoefing – Perceptron] "Robot guidance, dimensional measurement, and body panel gap and flush measurement."

[Rob Stoner – Zygo] "One might say that Zygo was the original machine vision company: long before the term 'machine vision' was coined, we offered a computer imaging-based system based on Fizeau interferometry to measure properties of optics offline. However, we considered it a metrology instrument rather than a machine vision system, since machine vision systems at the time were mostly 2D-based. Today our small- and large-aperture wavefront interferometers are widely used to measure glass or optical components such as flats, lenses and prisms, as well as precision components such as bearings, sealing surfaces, polished ceramics and contact lens molds. More recently, our fully automated NewView microscope with robotic material handling inspects microscopic structures on large flat panel displays. Our systems are also being used in the automotive industry to inspect precision-machined parts used in fuel injectors; in the data storage market to measure surface properties of disk heads and in flying-height testing; and in the semiconductor industry, both on the process side for thin-film measurements and on the packaging side, where they make 3D measurements on the interconnect bumps, films on the bumps and the packages themselves. Other growing application areas include 3D stacked chip packages and MEMS."

2. Can you provide a general description of the approach your products employ to arrive at 3D image data?

[Toni] "The DataPixel OptiScan basically uses laser triangulation techniques, optimized for high accuracy. OptiScan combines a laser source and an area sensor (CCD or CMOS, depending on the model) in a single sensor. Every sensor is individually calibrated and tested, offering optimal performance for high-accuracy applications. OptiScan sensors are able to acquire more than 60,000 high-accuracy 3D points per second."

[Lieven] "Structured light (i.e., a swept laser beam) is projected onto an object, a camera receives the reflected light, and a controller calculates 3D coordinates based on trigonometry."
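The DataPixel and Metris descriptions come down to the same geometric step: a calibrated laser plane (or sheet of light) is projected onto the part, and each stripe pixel seen by the camera is converted to a 3D point by intersecting its viewing ray with that plane. The short Python/NumPy sketch below illustrates this step; the function name, camera intrinsics and plane pose are illustrative assumptions and do not describe any vendor's product.

```python
# A minimal sketch of sheet-of-light laser triangulation: intersect each
# stripe pixel's viewing ray with a calibrated laser plane. All values are
# illustrative assumptions, not vendor parameters.
import numpy as np

def triangulate_stripe(pixels, fx, fy, cx, cy, plane_point, plane_normal):
    """Convert laser-stripe pixels (u, v) to 3D points in the camera frame.

    pixels         : (N, 2) array of stripe centroids in image coordinates
    fx, fy, cx, cy : pinhole camera intrinsics, in pixels
    plane_point    : any 3D point lying on the calibrated laser plane
    plane_normal   : unit normal of the laser plane (camera frame)
    """
    u, v = pixels[:, 0], pixels[:, 1]
    # Viewing-ray direction for each pixel (camera centre at the origin).
    rays = np.stack([(u - cx) / fx, (v - cy) / fy, np.ones_like(u)], axis=1)
    # Ray-plane intersection: t = (p0 . n) / (d . n), point = t * d.
    t = (plane_point @ plane_normal) / (rays @ plane_normal)
    return rays * t[:, None]

# Toy usage: a laser plane 300 mm in front of the camera, tilted 30 degrees.
theta = np.deg2rad(30.0)
points = triangulate_stripe(
    pixels=np.array([[320.0, 240.0], [400.0, 260.0]]),
    fx=1200.0, fy=1200.0, cx=320.0, cy=240.0,
    plane_point=np.array([0.0, 0.0, 300.0]),
    plane_normal=np.array([0.0, np.sin(theta), np.cos(theta)]),
)
print(points)  # one XYZ triple per stripe pixel, in millimetres
```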

[Tobby] "We project one or more gratings onto a 3D surface and use a digital area camera to take images of the grating patterns, which are deformed by the 3D surface. We then calculate XYZ coordinates for each pixel in the image. The direct outcome is a set of points with known XYZ coordinates, referred to as point cloud data. For reverse engineering, the object can be measured from different angles, by rotation or translation, until the measurement covers the whole surface of the object. The complete point cloud data can then be fed into software to reconstruct the surfaces (with human intervention), and the outcome (usually in IGES format) can be input into any CAD system. In the case of inspection, the point cloud data can be compared with a master data set (which could be a CAD model or the measurements of a good part) to see color-coded deviations. The handling of point cloud data is totally different from one application to another. For a simple case, the software can make decisions automatically; for most cases, human intervention is needed for post-processing of the data. We are more like a coordinate measuring machine (CMM), but with a non-contact optical sensor head that can digitize an area into point cloud data in a few seconds."
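The inspection step described above, comparing a measured point cloud against a master data set to obtain color-coded deviations, can be illustrated with a brief sketch. The version below computes unsigned nearest-neighbour distances to a reference cloud using NumPy and SciPy; the function name and toy data are assumptions for illustration, and production systems typically compute signed deviations against CAD surfaces rather than against a reference point set.

```python
# A minimal sketch of point-cloud inspection: for each measured point, report
# the distance to the nearest point of a master (reference) data set. A viewer
# would then color-code these deviations. Not any vendor's algorithm.
import numpy as np
from scipy.spatial import cKDTree

def cloud_deviation(measured, reference):
    """Distance from each measured point to its nearest reference point.

    measured, reference : (N, 3) and (M, 3) arrays of XYZ coordinates.
    Returns an (N,) array of deviations in the same units as the inputs.
    """
    tree = cKDTree(reference)            # spatial index on the master data
    distances, _ = tree.query(measured)  # nearest-neighbour lookup per point
    return distances

# Toy usage: a flat 10 x 10 mm master grid versus a measurement in which one
# point sits 0.05 mm too high.
xx, yy = np.meshgrid(np.arange(10.0), np.arange(10.0))
reference = np.stack([xx.ravel(), yy.ravel(), np.zeros(xx.size)], axis=1)
measured = reference.copy()
measured[45, 2] += 0.05                  # simulate a 0.05 mm high spot
print(cloud_deviation(measured, reference).max())  # ~0.05
```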

[Marty] "Combination of photogrammetry and laser triangulation sweeping a surface in a single shot."

[Iain] Our sensor uses a patented technique known as "auto-synchronous triangulation". The technique employs a double-sided mirror to fold the standard triangulation optical path to allow it to fit within a compact sensor. The sensor uses a two-axis steerable beam to provide both scanning and tracking modes.

[Mark] "Laser triangulation."

[Rob] "Our basic approach can be described as scanning white light interferometry. We do not use lasers, but rather white light. Interferometry is a technique in which a pattern of bright and dark lines (fringes) results from an optical path difference between a reference and a sample beam. The incoming light is split inside an interferometer, one beam going to an internal reference surface and the other to the sample. After reflection, the beams recombine inside the interferometer, undergoing constructive and destructive interference and producing the light and dark fringe pattern. A CCD camera generates a 3D interferogram of the object that is stored in the computer memory. The 3D interferogram is then transformed by our patented Frequency Domain Analysis into a quantitative 3D image."
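The idea behind scanning white light interferometry can be illustrated with a simplified reconstruction: because the fringes at each pixel are strongest where the optical path difference is near zero, the scan position of peak fringe modulation gives that pixel's height. The sketch below implements this coherence-peak version on synthetic data; it is only an illustration of the general principle, with assumed names and numbers, and is not Zygo's patented Frequency Domain Analysis.

```python
# A simplified coherence-peak height reconstruction for scanning white light
# interferometry: for each pixel, find the scan position where the fringe
# modulation (envelope) peaks. Synthetic data; illustrative only.
import numpy as np
from scipy.signal import hilbert

def swli_height_map(frames, step_nm):
    """Estimate a height map from an interferogram stack.

    frames  : (n_steps, rows, cols) intensities recorded while scanning
    step_nm : scanner displacement between frames, in nanometres
    Returns a (rows, cols) height map in nanometres.
    """
    ac = frames - frames.mean(axis=0)         # remove the DC background
    envelope = np.abs(hilbert(ac, axis=0))    # fringe-modulation envelope
    return envelope.argmax(axis=0) * step_nm  # scan position of peak contrast

# Toy usage: two pixels 200 nm apart in height, 550 nm illumination,
# 20 nm scan steps, Gaussian coherence envelope.
z = np.arange(0.0, 2000.0, 20.0)  # scan positions (nm)
def fringes(height):              # synthetic interferogram for one pixel
    return 1 + np.exp(-((z - height) / 300) ** 2) * np.cos(4 * np.pi * (z - height) / 550)
stack = np.stack([fringes(1000.0), fringes(1200.0)], axis=1)[:, :, None]
print(swli_height_map(stack, step_nm=20.0).ravel())  # approximately [1000. 1200.]
```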

3. What are critical 3D machine vision system performance criteria for each of the applications that you address?

[Toni] "DataPixel is in the high-accuracy market (better than 0.010 mm). In our market the most important criteria are system accuracy, scanning speed, sensor range, and stability under temperature and environmental changes. DataPixel sensors incorporate a unique (patent pending) temperature compensation method."

[Lieven] "Data coverage (point clouds instead of single points); data distribution (not single-view-line data, but well-distributed data measured from different angles); speed; accuracy."

[Tobby] "Accuracy and automation."

[Marty] "Accuracy and resolution (i.e., can it do the job?), ease of use and cost (i.e., can it save us money?)."

[Iain] Accuracy is always a primary driver. However, with a variety of solutions able to meet very demanding accuracy standards, we believe that increasing emphasis will be placed on the flexibility of the sensor. As accuracy becomes a less important discriminator between competing technologies, the efficiency with which the sensor can be coupled to the critical decision-making path will become a significant performance criterion as users seek to maximize the benefit of moving to standoff 3D sensors.

[Mark] "Calibration, accuracy, repeatability, consistent results with painted surfaces."

[Rob] "Speed and precision, which includes both repeatability and reproducibility in 3D, are the critical machine vision performance criteria for the online applications we see emerging today. It is also important to interpret the 3D data accurately, reflecting an understanding of the application, in order to make the critical measurements required."

4. What changes have been taking place in the underlying technologies that are the basis of your 3D machine vision systems that have resulted in improved performance?

[Lieven] "Faster cameras with more resolution; faster general-use PCs (going from dedicated data acquisition cards in dedicated controllers to general frame grabbers in general-use PCs)."

[Tobby] "Post-data processing software."

[Marty] "The addition of photogrammetry has enabled us to have a unique position with a full-car scanner for under $100K. The software has enabled full GD&T inspection of parts and eased the workflow for modeling."

[Iain] The dramatic improvement in the recent past has been in the price of components more than in their performance. As little as five years ago, the quality of components required to reach competitive performance would have made it difficult to compete on price. Reduced prices for electronic, optical and mechanical components allow us to provide a solution that is competitive on both price and performance.

[Mark] "Higher-resolution cameras, higher-performance computers."

[Rob] "By far the greatest change now making it possible to perform 3D metrology online has been the substantial gains in compute power over the last 10 years. In the case of interferometry-based measurements, improvements in CCD camera frame rates and noise have also been critical. Advances we have made internally in the design of piezo scanners and encoders have also contributed to making online metrology possible."

[Toni] "The most important changes are in the structured lighting and laser systems, providing optimum focus and spot size, and in the area sensors, where high-resolution, high-speed CCDs and CMOS devices are the key factor for accuracy improvement. Other very important improvements are the new algorithms for 3D point detection and thermal compensation. Calibration methodologies are also a fundamental aspect of final product performance."

5. Where do you see breakthroughs coming in the specific technologies that are the basis of your 3D machine vision systems that will result in further improvements in the near future – next three years?

[Lieven] "Faster cameras with more resolution that are small in size."

[Tobby] "More automation."

[Marty] "There are several branches one might choose when answering this question. Some technologies offer lower-cost, single-purpose scanners, such as will be found in audiologists' or dentists' offices. Others would offer additional power in multi-modal measurement."

[Iain] Breakthroughs will likely come in the processing of data and not in acquisition. The technology for acquisition of 3D data is reasonably mature. However, applications that intelligently use the 3D data to perform tasks that cannot be performed with 2D sensors are in their infancy. To date, most of the effort in 3D processing has been centered on rendering and visualization applications, which focus on converting 3D information to 2D and conveying that (albeit superior) information to a human operator. The next generation of 3D processing technologies will focus on adding value for the customer by linking the sensor and the processor to provide an intelligent and interactive acquisition system that is more autonomous and that reduces rather than increases the amount of data presented to the human operator. This will allow the human in the loop to focus more clearly on those tasks that require operator input, and will enable a proliferation of real-time 3D sensing applications.

[Rob] "With further improvements in cameras, encoders and compute power, we expect to be able to address an even larger number of applications – those where speed and size are critical. The faster the processing speed, the smaller the impact of the vibrations often found on the shop floor, and this will make even more applications possible."

[Toni] "CMOS sensors offer a promising improvement by providing direct pixel addressing and high speed using partial-scanning techniques. A very important improvement will come from 3D feature extraction techniques that automatically extract relevant dimensional information from a 3D point cloud. These techniques are the key to offering feasible, high-performance 3D inspection systems for dimensional control. DataPixel is offering the new GFX library, especially designed for high-accuracy 3D feature extraction."
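As a generic illustration of what automatic 3D feature extraction from a point cloud involves, the sketch below fits a least-squares plane to a patch of points and derives a simple peak-to-valley flatness value. It is a textbook SVD plane fit on made-up data, offered only as an example of the kind of operation such libraries perform; it is not DataPixel's GFX library.

```python
# A minimal example of extracting a plane feature (with a peak-to-valley
# flatness estimate) from a 3D point cloud. Toy data only.
import numpy as np

def fit_plane(points):
    """Least-squares plane through an (N, 3) point cloud.

    Returns (centroid, unit normal, residuals), where residuals are the
    signed point-to-plane distances.
    """
    centroid = points.mean(axis=0)
    # The plane normal is the right singular vector associated with the
    # smallest singular value of the centred point set.
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]
    residuals = (points - centroid) @ normal
    return centroid, normal, residuals

# Toy usage: a slightly tilted plane patch with about 5 micrometres of noise.
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 50.0, size=(500, 2))               # mm
zz = 0.02 * xy[:, 0] + rng.normal(0.0, 0.005, size=500)  # tilt + noise
centroid, normal, residuals = fit_plane(np.column_stack([xy, zz]))
print("flatness ~ %.3f mm" % (residuals.max() - residuals.min()))
```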

6. Are there market changes associated with those applications that you address that are driving the adoption of 3D machine vision?

[Tobby] "Yes!"

[Marty] "The shift toward over-sampling for inspection as opposed to more traditional 'point picking' techniques used by classical CMMs."

[Iain] The NASA market is very specific. To date, NASA has had very little exposure to 3D data. It is clear from early results that as 3D systems become operational, the demand for them is likely to increase dramatically. Likewise, because the need for autonomous operations in space is also increasing, we are working to develop more intelligent 3D sensors and applications.

[Mark] "Emphasis on product quality."

[Rob] "Interestingly, what is driving precision machining in the automotive market is the increased emphasis on responding to environmental concerns. Precision-machined parts in the fuel injection system reduce pollutants, and correspondingly, precision metrology systems are required. The overall emphasis on nanotechnology requires a corresponding capability in 3D metrology. Today one sees requirements for sub-nanometer or even angstrom Z-axis resolution and repeatability. There are many research activities at universities where making 3D measurements on moving objects is required, and sooner or later these same requirements will move into manufacturing."

[Toni] "Basically yes, as automatic 3D dimensional control of parts is becoming a main application field for the automotive, aerospace and electronics markets. DataPixel's forecast is that within five years, 50% of the dimensional control of parts now performed with classical touch probes on CMMs will be transferred to 3D vision systems."

[Lieven] "Increased importance of quality control and quality management systems."

7. How will 3D machine vision systems have to change to meet emerging 3D machine vision applications or to penetrate their markets more successfully?

[Marty] "Ease of use will be a driver for the occasional user, and ease of integration for the in-process inspection user. Also, although there are a variety of applications in which the potential customer has not been exposed to the ROI possible with existing systems, the cost must continue to decline to open up additional applications for which current systems are cost-prohibitive."

[Toni] "State-of-the-art 3D systems offer good performance for many applications. In order to better penetrate the market, some standardization effort is needed to facilitate integration of 3D systems with CMMs, robots or special inspection machines. DataPixel is actively working on the OSIS (Optical System Interface Standard) committee, which is taking the first steps in interface standardization for 3D sensors."

[Iain] 3D applications will need to demonstrate the ability to solve problems more efficiently through the use of 3D data. One of the principal advantages of 3D data is that it can provide simpler and more robust machine autonomy, because it provides a richer and less ambiguous data set than 2D applications. Leveraging this advantage into more efficient processes through increased machine autonomy is one route that will need to be exploited to convince customers of the value of 3D imaging technology. To date, a large proportion of 3D implementations have focused on convincing customers that they can provide data to improve the quality of existing decision-making processes. These arguments have been convincing in many areas; however, there are many other areas of application where customers are satisfied with the quality of the current decision-making process but where there is room to make the process more efficient by automating it in a robust and reliable way. 3D technologies must be packaged to address these types of applications.

[Mark] "Simplified setup, higher MTBF, simplified troubleshooting and improved analysis tools."

[Rob] "As the cost of the underlying technologies comes down and system speed improves, machine vision-based 3D metrology tools will become ubiquitous in manufacturing industries."

[Lieven] "Adapt to small vertical applications (including software) rather than horizontally changing the underlying technology."

[Tobby] "Accuracy and automation."

8. As a supplier of 3D machine vision systems, what are some challenges you face in marketing 3D machine vision systems?

[Mark] "Customer ownership of systems, customer training and proving return-on-investment."

[Iain] The value proposition is probably the most challenging feature of the market. Customers almost universally agree that the technology is impressive but they have difficulty reconciling the cost of 3D sensing and processing applications with the improvement they see in their operational or business models. The challenge for 3D sensing, going forward, is to generate a value proposition that shows how 3D sensing can enable significant cost savings by doing things differently rather than by merely doing the same things better.

[Lieven] "Changing the minds of customers to move the perception from new technology to proven technology."

[Tobby] "Price and automation."

[Rob] "One challenge in the marketplace is the term machine vision itself. Historically, metrologists don't associate the term machine vision with metrology. However, machine vision capability has definitely increased and is now better able to address precision metrology applications. Significantly, in the emerging markets for 3D metrology systems, such as flat panel displays and automotive, the term machine vision has been accepted as a technology for process control. Another challenge is making the technology more user friendly as it migrates from the laboratory to the shop floor."

[Toni] "DataPixel's challenges are both technical and commercial. From the technical point of view, our strategy is to offer feasible, high-performance sensors for high-accuracy dimensional control applications, increasing the speed and accuracy of the sensors, especially for medium and large volumes (for example, being able to control a full car body with high accuracy). From the commercial point of view, our challenge is to grow our international market, especially in Asia, but also in Europe and the United States."

[Marty] "Educating potential customers about the availability of the technology and its benefit to them."

9. What are your thoughts on the future of 3D machine vision?

[Iain] 3D sensing technology has, in large measure, achieved maturity in the last few years. There is a wide market and a large number of sensing technologies available, and 3D rendering and visualization applications have become very sophisticated. The future of machine vision lies in moving past the point of collecting 3D point clouds merely to generate "better" 2D pictures for humans in the loop, and in finding value-added applications that use the unique features of 3D sensors to build integrated systems employing a sense-process-adapt feedback loop to provide efficiencies that cannot be achieved with a human in the loop as the central processor.

[Lieven] "We believe that the market growth will be tremendous, since the potential is huge (in the Metris case, the worldwide installed base of CMMs) and the new paradigm (measuring point clouds instead of discrete points) has a proven track record and is quickly gaining acceptance."

[Tobby] "Will grow for sure."

[Rob] "3D-based machine vision in manufacturing applications for process control is in its infancy. Given the continued emphasis on size reduction in virtually every manufacturing industry and the recognition that most parts have a 3D character, the use of 3D-based machine vision will grow rapidly in the next couple of years. Given the performance improvements expected in cameras, compute power and related optomechanics, the technology will only get better and be able to handle even more applications. It is a nice convergence of capabilities rising to meet market demand."

[Toni] "3D machine vision will continue to be a fast-growing market, as countless possibilities arise from current and future technologies. Growth of more than 50% per year is expected over the next five years."

[Marty] "It is still in its nascence; early adopters account for the majority of sales at this point, but we are transitioning to a more mature market, where sales to mainstream buyers result from word of mouth from colleagues who have been successful with the technology."

10. What advice would you give to a company investigating the purchase of a 3D machine vision system?

[Tobby] "Things are changing so quickly that updates should be made as often as possible."

[Toni] "Our advice is to clearly identify all the requirements of the application and to choose the right expert supplier for the identified needs, one able to support the customer during the full cycle of the project. This is particularly important for 3D systems oriented to high-accuracy dimensional control, where full support is needed for integration, calibration and certification of the equipment. DataPixel and our group, Innovalia, offer the needed support during all project phases, including certification of systems following internationally accepted standards."

[Mark] "Clearly define how the system will be used and by whom; make sure your organization understands the commitment in people, training, and processes required to get value out of the system; plan the vision system at the same time the manufacturing process is being developed."

[Marty] "In the past six years, at any given time there have been upwards of 20 companies offering a 'general purpose' 3D scanner. The attrition rate has been nearly 50%, and newcomers continue to enter the market. My advice is to deal with a company in whose longevity and commitment you have confidence. Konica Minolta is on its sixth generation of product, with improvements coming at an increasing rate. These are signs of that commitment."

[Rob] "The customer should understand that there are many ways to make 3D measurements, but that there is also a correspondingly large variation in performance, especially with respect to precision. Customers must understand their needs – be aware of the complexity of their parts, not only shape complexity but also reflectance complexities – in order to purchase the best system to correlate 2D images to 3D measurements. They must also understand their shop floor conditions and recognize that metrology workstation areas will probably require special designs, such as vibration isolation. Customers must understand the properties of the different 3D measurement tools to assure that the output is meaningful to the application."

[Lieven] "Dare to radically change your inspection processes without being a pioneer: leverage the Metris experience and expertise embedded in the standard Metris products, which are market leading."

[Iain] 3D sensing, while impressive, is not an end in itself; it should be a means to an end. Given that 3D sensing technology is typically a high-end product, it would be wise to consider suppliers that will help customize and integrate a system to serve your particular needs. Because the true power of 3D data lies in how it is used (as opposed to how it is collected), off-the-shelf solutions may not, in the end, be the most efficient way of implementing a 3D vision system.