University Research Related To Machine Vision – Part 3

by Nello Zuech, Contributing Editor - AIA

As suggested in my earlier articles on university research relevant to machine vision, while many universities have departments or interdisciplinary centers conducting research in computer vision, most of the research today appears to be aimed at security-related and health-related applications. Security-related research is often biometric-driven, especially face recognition, and several organizations are looking at the under-vehicle inspection application. I also found that much research is being conducted on autonomous vehicles; the objective in these cases is 3D-based development that can cope with 24/7 outdoor ambient variables. For the most part, there is very little research of an applied nature targeting industrial applications.

To prepare for this article, I emailed the key researchers at more than 50 North American universities identified as conducting research in computer vision or image processing. I received only eight responses to my request for help. Consequently, I gathered information for this article from those responses and by reviewing the websites of the more than 50 institutions, culling from them descriptions of their work relevant to machine vision. What follows are descriptions of some of the research that is at least somewhat related.

Under the supervision of Dr. Firoz Kabir, Virginia Tech has conducted a good deal of machine vision research directed specifically at applications in the wood and lumber industries. One of their projects demonstrated the principles of a scanning system that can automatically identify edging/trimming defects (knots, wane, decay and voids) on rough hardwood lumber and then recommend optimal edge and trim cuts to achieve maximum lumber value for each board. Another project resulted in a Tree Measurement System designed to acquire estimates of tree volume and products more reliably and in less time. The multisensor instrument incorporated a video camera with a custom lens system, a laser rangefinder and 3-axis orientation sensors.

They have also developed a multi-modal sensor system for lumber inspection that combines color sensing, x-ray scanning and laser ranging. Their research has focused on evaluating system performance in terms of accurate lumber grading and parts-yield maximization over a range of wood species and conditions. This research has led to a commercial product offered by Nova Technologies.

They have also engaged in pioneering applications of computed tomography scanners on logs prior to initial breakdown, working with two separate industrial scanning approaches. One technology (axial tomography) can scan relatively small-diameter materials for long duty cycles. Where higher x-ray energies are needed for material penetration, tangential scanning is a viable alternative; it offers simple mechanical operation, fast scan speeds per volume, relatively low power requirements and no image artifacts. They developed a procedure that automatically interprets scan information so that the saw operator receives the information required to make proper sawing decisions.

Their approach to automatically labeling features in CT images of hardwood logs classifies each pixel individually, using a back-propagation artificial neural network (ANN) and feature vectors that include a small, local neighborhood of pixels and the distance of the target pixel to the center of the log. Their ANN was able to classify clear wood, bark, decay, knots and voids in CT images of two species of oak with 95% pixel-wise accuracy. They also investigated other ANN classifiers, comparing 2D versus 3D neighborhoods and species-dependent classifiers, and concluded that 3D neighborhoods are better for multiple-species classifiers and 2D neighborhoods for single-species classifiers.
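To make the approach concrete, the sketch below uses scikit-learn's MLPClassifier as a stand-in for their back-propagation ANN, assembling a per-pixel feature vector from a small intensity neighborhood plus the distance to the log center. The class list, window size, network size and the random stand-in data are illustrative assumptions, not details of the Virginia Tech system.

```python
# Illustrative sketch (not Virginia Tech's code): per-pixel CT labeling with a
# back-propagation ANN, using a small intensity neighborhood plus the pixel's
# distance to the log center as the feature vector, as described above.
import numpy as np
from sklearn.neural_network import MLPClassifier  # back-propagation MLP

CLASSES = ["clear_wood", "bark", "decay", "knot", "void"]  # assumed label set
WIN = 2  # half-width of a 5x5 neighborhood (assumption)

def pixel_features(ct_slice, row, col, center):
    """Local neighborhood intensities plus distance to the log center."""
    patch = ct_slice[row - WIN:row + WIN + 1, col - WIN:col + WIN + 1]
    dist = np.hypot(row - center[0], col - center[1])
    return np.append(patch.ravel(), dist)

def build_dataset(ct_slice, label_image, center):
    """Collect one (feature vector, label) pair per interior pixel."""
    X, y = [], []
    rows, cols = ct_slice.shape
    for r in range(WIN, rows - WIN):
        for c in range(WIN, cols - WIN):
            X.append(pixel_features(ct_slice, r, c, center))
            y.append(label_image[r, c])
    return np.array(X), np.array(y)

# Stand-in data: a random "CT slice" and random labels in place of a real scan.
ct_slice = np.random.rand(48, 48)
label_image = np.random.randint(0, len(CLASSES), size=(48, 48))
X, y = build_dataset(ct_slice, label_image, center=(24, 24))

clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=300)
clf.fit(X, y)                 # train on labeled pixels
print(clf.predict(X[:5]))     # classify a few pixels individually
```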

Undoubtedly one of the centers spawning some of the most important research leading to the development of the machine vision industry has been the Artificial Intelligence Lab at the Massachusetts Institute of Technology. One of the leading researchers at the lab has been Dr. Berthold Horn. He suggests, "One way to look at the focus of some of our work is 'physics-based vision'. This puts emphasis on the importance of understanding the imaging process when trying to interpret images. This stands in distinction to methods that treat the image merely as a picture or rectangular array of numbers. Nothing new here today, but it was a long battle to get this accepted. Also, if you are working in robotics or industrial applications this seems like a no-brainer.

Another thread is the calculus of variations approach, or more generally least-squares optimization-based approaches. These solve for unknown parameters, shapes, positions and so on by minimizing some well-characterized, meaningful error, such as the sum of squares of image position measurement errors. Some of these ideas have been relabeled 'regularization' in recent years.
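As an aside, here is a minimal sketch of that least-squares idea, assuming a toy problem of recovering an unknown scale and translation from noisy point measurements; the specific model and the scipy-based solver are my choices, not Dr. Horn's.

```python
# Toy example of least-squares parameter estimation: recover an unknown scale
# and 2-D translation by minimizing the sum of squared errors between
# predicted and "measured" image positions. Model and numbers are illustrative.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
model_pts = rng.uniform(0.0, 100.0, size=(20, 2))        # known reference points
true_scale, true_shift = 1.2, np.array([5.0, -3.0])      # pretend unknowns
measured = true_scale * model_pts + true_shift + rng.normal(0.0, 0.5, (20, 2))

def residuals(params):
    """Per-coordinate error between predicted and measured positions."""
    s, tx, ty = params
    predicted = s * model_pts + np.array([tx, ty])
    return (predicted - measured).ravel()

fit = least_squares(residuals, x0=[1.0, 0.0, 0.0])       # minimizes sum of squares
print("estimated scale and shift:", fit.x)
```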

Some major past efforts included 'shape from shading' (1970 and on), 'optical flow' (1978 and on, now used in camera stabilization and in 'optical mice'), 'photometric stereo' (1980 and on) and so on." Dr. Horn also notes: "I agitated strongly in the late 1970s and the 1980s for rapid advances in 'advanced automation' and the use of vision and robots in manufacturing. This included years of consulting for GM Research, testifying before congressional committees and so on. Unfortunately, little happened, at least in part because manufacturing was cheaper overseas and in part because of the legal environment in this country. Not much support for this now, and I got tired of swimming against the tide. Plus, 'spin-offs' like Cognex have done an admirable job in this area, at least when it comes to semiconductor manufacturing, which seems to be much more forward-looking than mechanical or electrical manufacturing." He suggests, "My focus these days is more on 'computational imaging' than on machine vision per se. This includes work on (i) coded aperture imaging, (ii) synthetic aperture microscopy, (iii) diaphanography, and (iv) exact cone-beam reconstruction. Other work here in the lab is in the medical imaging area and 'intelligent vehicle' control.

Major emphasis these days on 'learning', statistical methods and such..."
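Since optical flow is singled out above as one of the lab's early threads now in everyday use, the brief illustration below shows the idea of estimating per-pixel motion between two frames. It uses OpenCV's dense Farneback algorithm on synthetic images; that is a later method chosen here for convenience, not Dr. Horn's own formulation.

```python
# Synthetic two-frame optical-flow demo using OpenCV's Farneback method
# (chosen for convenience; it is not Dr. Horn's own formulation).
import numpy as np
import cv2

frame1 = np.zeros((120, 160), dtype=np.uint8)
frame2 = np.zeros_like(frame1)
cv2.rectangle(frame1, (40, 40), (70, 70), 255, -1)   # bright square
cv2.rectangle(frame2, (45, 42), (75, 72), 255, -1)   # same square, shifted

# Arguments: prev, next, flow, pyr_scale, levels, winsize,
#            iterations, poly_n, poly_sigma, flags
flow = cv2.calcOpticalFlowFarneback(frame1, frame2, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

print("mean motion estimate: dx=%.2f, dy=%.2f"
      % (flow[..., 0].mean(), flow[..., 1].mean()))
```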

At the University of Texas at Austin, under the direction of Dr. Risto Miikkulainen, the computer vision research has been more fundamental. As he describes it, "Our main focus is on computational modeling of the visual cortex. The main goal is to understand biological vision systems. By doing so, it is possible to gain insight into how effective vision systems in general can be built.

Several other projects study neural networks more generally in vision tasks for robots, for example learning to recognize objects and scenes, and building a hierarchy of visual representations useful for navigation. The hypothesis is that learned representations allow for more robust performance than those that are built by hand.

Our research has been solely basic research, and the tools and results we develop are freely available to everyone. For example, we are currently developing a simulator for cortical maps called Topographica (www.topographica.org), which will be available under the GPL (an open-source license). It would certainly be possible to take it into applications such as handwritten character recognition or even object recognition, and UT does have a licensing organization to do that.

In the near future, we plan more detailed models of the cortex, such as those including motion and color. In the longer term, we aim to constrain the models with imaging techniques such as fMRI, MEG and TMS."

Research into biological vision at the University of Florida under Dr. Paul Holloway has been in the news recently. As their press release suggests, the next generation of smart weapons may "see" targets with a manmade version of that wonder of the natural world, the insect eye. The research has been inspired by the panoramic and precise vision of flies and other insects. At the University of Florida, the focus of the "bio-optics synthetic systems" research, sponsored by the federal Defense Advanced Research Projects Agency (DARPA), is on adapting mechanisms called "photon sieves" for visual purposes.

"We think we can use this concept to make smart weapons smarter," said Paul Holloway, a distinguished professor of materials science and engineering and the project's lead researcher. Holloway said today's smart weapons rely on systems that use refractive optics, or lenses that bend light, to produce a focused view of the target. The resulting image is like what is seen through a telescope: the view of the target is good, but the surroundings are completely lost. This limits a weapon's accuracy on moving targets, as well as its ability to overcome flares or other countermeasures designed to confuse the weapon. Refractive systems are also relatively heavy, because they use mechanical systems to move the lens and keep the target in view. The added weight requires more propellant and increased size, which boosts the cost, Holloway said.

The alternative approach of Holloway's team of engineers and physicists relies on diffractive optics, which uses interference effects to redirect light in different directions rather than bending it. Their vision for the technology merges the work of the 19th-century French physicist Augustin Fresnel with a modern appreciation of how insect eyes work. Fresnel invented the Fresnel zone plate, also known as the Fresnel lens, which uses concentric circles of transparent and opaque material to diffract light into a single, marginally focused beam; the Fresnel lens became the standard on lighthouses for many years. Holloway and his colleagues have modified the zone plate, replacing the transparent rings with a series of precisely spaced holes that sharpen the focus of the beam. Although similar devices, called photon sieves, had been developed before, they had typically been used for X-rays or other electromagnetic radiation outside the visible spectrum.
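For a feel for the geometry, the short sketch below computes the ring radii of a conventional Fresnel zone plate from the standard relation r_n = sqrt(n*lambda*f + (n*lambda/2)^2). The wavelength and focal length are illustrative values only, and the UF photon-sieve hole layout is of course more involved than this.

```python
# Ring radii of a conventional Fresnel zone plate focusing light of
# wavelength lam at focal length f: r_n = sqrt(n*lam*f + (n*lam/2)**2).
# Wavelength and focal length below are illustrative assumptions.
import numpy as np

lam = 550e-9          # green light, metres (assumption)
f = 0.10              # desired focal length, metres (assumption)
n = np.arange(1, 11)  # first ten zone boundaries

r_n = np.sqrt(n * lam * f + (n * lam / 2.0) ** 2)
for zone, radius in zip(n, r_n):
    print("zone %2d boundary radius: %7.1f micrometres" % (zone, radius * 1e6))
```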

The UF team is the first to develop photon sieves for visible and longer-wavelength light, including infrared light, Holloway said. The latter can have important implications for vision systems on weapons, which sometimes use infrared light. Art Hebard, a UF physicist and member of the project team, said that although the holes help sharpen the focus of the light, they also significantly reduce the amount of light that gets through the metal plate. He and his colleagues are developing a way to combat this using another physical phenomenon: when light strikes a metal surface, such as silver, it generates electrical charge oscillations called surface plasmons. Hebard said the UF team has made progress in "reconverting" these plasmons into light by altering the surface characteristics of the metal. "If you can corrugate or structure the metal properly, you can reconvert plasmons back into light," he said. "That way, you get increased transmission of light because some of the light that is hitting the opaque part of the lens is transmitted rather than absorbed."

The team has made and tested small prototypes of the lenses. Once perfected, the next step could be to put many such lenses together - some designed for high resolution, others for lower resolution - onto a surface to produce a multiple-eye effect, Holloway said. The result would be a lightweight panoramic vision device with no moving parts, he said.

Smart weapons aren't the only potential application. Robots designed to operate autonomously, such as those used to transport nuclear materials, fight oil well fires or do other tasks too dangerous for people, could also benefit from improved vision systems, he said. Eventually, such lenses may even replace refractive lenses in consumer products such as cameras, making them lighter and potentially reducing their cost.

Probably some of the most fun machine-vision-related R&D at universities is that associated with autonomous vehicles. DARPA recently sponsored a competition in which 21 teams participated, to see whether their vehicles could be autonomously guided over a 150-mile route through the Mojave Desert in 30 hours. Only 7 survived the first trials through a 1.36-mile obstacle course at the California Speedway. In the actual race (with a million dollars going to the winner), all 7 vehicles failed within 8 miles of the start, some just yards from it.

The University of Cincinnati is one school that has invested heavily in this research niche. Their Bearcat's vision system comprises three cameras: two for line following and one for pothole detection. The line-following subsystem uses two CCD cameras and an image tracking device (I-Scan) for front-end processing of the images captured by the cameras. The I-Scan tracker processes the image of the line, finding the centroid of the brightest or darkest region in a captured image. The three-dimensional world coordinates are reduced to two-dimensional image coordinates using transformations between the actual ground plane and the image plane, and a novel four-point calibration system was designed to transform the image coordinates back to world coordinates for navigation purposes. Camera calibration is the process of determining the relationship between a given 3-D coordinate system (world coordinates) and the 2-D image plane the camera perceives (image coordinates). The objective of the vision system is to make the robot follow a line using a camera. At any given instant, the Bearcat tracks only one line, either right or left; if the track is lost from one side, the central controller switches to the other camera through a video switch.
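Such a four-point calibration can be thought of as a planar homography between the image and the ground plane. The sketch below shows one plausible way to express that mapping with OpenCV; the reference coordinates are hypothetical, and this is not the Bearcat team's actual implementation.

```python
# Hypothetical four-point image-to-ground calibration expressed as a planar
# homography (not the Bearcat team's actual code; coordinates are made up).
import numpy as np
import cv2

# Four reference points located in the image (pixels) ...
img_pts = np.float32([[100, 400], [540, 400], [480, 250], [160, 250]])
# ... and their known positions on the ground plane (metres).
world_pts = np.float32([[-0.5, 1.0], [0.5, 1.0], [0.5, 3.0], [-0.5, 3.0]])

H = cv2.getPerspectiveTransform(img_pts, world_pts)

def image_to_world(u, v):
    """Map an image point (e.g. a tracked line centroid) to the ground plane."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]

print(image_to_world(320, 300))
```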

The robot also has the ability to detect and avoid simulated potholes, represented by two-foot-diameter white circles randomly positioned along the course. A non-contact vision approach was taken because the simulated potholes are visually quite distinct from the background surface. A monochrome camera captures an image of the course ahead of the robot, and the data from the camera is fed to an imaging board. The control software for the imaging board processes the formatted data, detects the presence of a simulated pothole and determines the location of its centroid. In addition to these machine vision techniques, the Bearcat also uses two alternative solutions for collision avoidance and obstacle detection: one using a laser scanner and one using sonar sensors. The line following, obstacle avoidance and pothole detection systems are integrated for pothole detection and avoidance, with the obstacle avoidance system taking precedence over the pothole avoidance system.
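A minimal sketch of that detection step follows, assuming a simple global threshold followed by an image-moment centroid; the threshold value and synthetic test image are illustrative, since the actual imaging-board software is not described here in enough detail to reproduce.

```python
# Threshold-and-centroid sketch for the simulated white potholes described
# above (threshold value and synthetic test image are illustrative only).
import numpy as np
import cv2

frame = np.zeros((240, 320), dtype=np.uint8)
cv2.circle(frame, (200, 150), 30, 255, -1)            # fake white "pothole"

_, binary = cv2.threshold(frame, 200, 255, cv2.THRESH_BINARY)
m = cv2.moments(binary, True)                         # treat image as binary
if m["m00"] > 0:
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    print("pothole centroid at image coordinates (%.1f, %.1f)" % (cx, cy))
```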
