Vision Guides Aerospace from the Ground Up
by Winn Hardin, Contributing Editor - AIA Posted 12/04/2008
While the number of units the aerospace industry places into production will never equal that of the automobile, semiconductor, or general manufacturing industries, machine vision engineers may look at the aerospace and defense industries as their largest benefactors, at least when it comes to expanding machine vision technologies. From construction of the airframe to flight operations and landing, machine vision is helping to make aerial vehicles safer and more efficient at carrying both passengers and payloads. Machine vision also continues to open new classes of aerospace applications by maximizing communications bandwidth with new ways to process images from unmanned aerial vehicles (UAVs).
A Stealthy Solution
Lockheed Martin’s F-22 Raptor stealth fighter is the most advanced jet fighter in existence. Stealth technology is all about minimizing radar profiles, and that includes every surface on the aircraft down to the screws. With this in mind, Lockheed turned to Cognex Corporation (Natick, Massachusetts) and Delta Sigma Corp. to develop a vision system that could guide an automated drill to create 3,459 countersink holes at exactly the right depth and angle to guarantee that the screw head would not ‘stick out,’ catch stray radar signals, and reveal the F-22’s location in the air.
According to Mark Bowen, Applications Staff Engineer at Lockheed Martin, “The vision drill depth calibration project was initiated because a manual measure-and-adjust cycle was being repeated hundreds of times per day. It appears that we can take a task that takes one or two minutes each time, and reduce it to just a few seconds. If we can do that and improve our accuracy at the same time, this project could be a tremendous benefit to the F-22 program.”
Machine vision is also helping scientists to qualify the composite materials used in stealth aircraft and satellites. Using the multichannel plate amplifiers and high-speed data acquisition electronics of DRS Technologies’ Imacon 200 imagers, Professor Arun Shukla at the University of Rhode Island applies high-speed imaging and MATLAB algorithms to explore rapid events in composite materials. By tracking Mach waves, the impact waves created by a projectile striking an object under test, Shukla’s lab is able to probe a material’s strength (including its resistance to fracture, penetration and separation in the case of composites) against its weight, a critical consideration for any airborne vehicle.
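The article does not describe Shukla’s MATLAB code, but the core of wavefront tracking is simple to sketch: once the wavefront’s position has been extracted from each frame, a least-squares line fit of position against time yields the propagation speed. The function below is an illustrative Python sketch under those assumptions (function name, units, and frame-rate parameter are hypothetical, not from the source):

```python
def wavefront_speed(positions_mm, frame_rate_hz):
    """Estimate wave propagation speed (mm/s) from wavefront positions.

    positions_mm:  wavefront position measured in each successive frame (mm)
    frame_rate_hz: camera frame rate; high-speed imagers such as the
                   Imacon 200 run far faster than conventional cameras
    """
    n = len(positions_mm)
    times = [i / frame_rate_hz for i in range(n)]
    mean_t = sum(times) / n
    mean_x = sum(positions_mm) / n
    # Least-squares slope of position vs. time is the speed.
    num = sum((t - mean_t) * (x - mean_x) for t, x in zip(times, positions_mm))
    den = sum((t - mean_t) ** 2 for t in times)
    return num / den
```

A least-squares fit over many frames is preferable to differencing two frames, since it averages out per-frame localization error.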
Docking on the Ground and in the Air
What goes up must eventually come down. To extend the time between takeoff and landing, GE Fanuc developed an in-flight machine vision system to guide UAVs to refueling tankers. Developed under DARPA as part of the Advanced Airborne Refueling Demonstration (AARD) project, the system tackled the most difficult in-flight refueling task: the hose-and-drogue. In this approach the hose has some flexibility, making it especially susceptible to turbulence.
The system had to determine the hose position to within 36 inches at 100 feet, and to within 4 inches at 12 feet, during insertion of the probe into the drogue fueling line. While a high-resolution camera was expected to be the best sensor, the development time for its electronic processing elements weighed against it for the proof of concept. Instead, GE Fanuc settled on an NTSC camera with special algorithms to eliminate background imagery. A combination of centroid- and model-based algorithms improved the system’s ability to locate the solid inner hub of the drogue fueling line.
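GE Fanuc’s actual algorithms are not published in the article, but the centroid step of such a scheme can be sketched generically: threshold away the background, then compute the intensity-weighted center of the remaining pixels. The Python below is an illustrative sketch only (image format and threshold handling are assumptions, not the AARD implementation):

```python
def centroid(image, threshold):
    """Locate a bright target by intensity-weighted centroid.

    image:     2-D list of grayscale pixel values
    threshold: pixels at or below this level are treated as background
    Returns (row, col) of the centroid, or None if no pixel exceeds it.
    """
    total = sum_r = sum_c = 0.0
    for r, row in enumerate(image):
        for c, v in enumerate(row):
            if v > threshold:
                w = v - threshold  # weight by intensity above background
                total += w
                sum_r += w * r
                sum_c += w * c
    if total == 0:
        return None
    return (sum_r / total, sum_c / total)
```

A centroid alone is fast but easily fooled by clutter, which is presumably why the article describes pairing it with model-based matching against the drogue’s known shape.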
Sensor selection also proved important for guiding aircraft on the ground to disembarkation points at airports so that ramps can easily be connected to the aircraft. Working with the Fraunhofer Institute (Stuttgart, Germany), Honeywell Airport Systems used a high-dynamic-range CMOS imager originally designed for welding applications to handle both nighttime dockings and daytime dockings, when snow and glare can challenge any sensor’s ability to extract useful images. The system uses the image data to measure the airplane’s position and automatically displays course-correction data at each disembarkation point.
Find Me a Target
UAVs have become critical weapons in modern military conflicts because of their ability to stay on station for many hours at a time, watching safe houses for suspected enemies as well as finding camouflaged military assets such as tanks, missile launchers and anti-aircraft guns. Recently, the UK’s Ministry of Defence purchased image-processing software from Oxford Metrics Group to help locate targets in UAV imagery.
When it comes to analyzing UAV image streams, aerospace engineers are constantly caught between the need for high-resolution imagery, the bandwidth constraints between the UAV and ground control centers, and the need to keep the aircraft light, which means minimizing all weight, including that of image-processing computers and radio transmitters.
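The bandwidth pressure is easy to quantify with a back-of-the-envelope calculation (the sensor parameters below are illustrative, not from the article): raw bit rate is simply pixels per frame times bit depth times frame rate.

```python
def raw_bitrate_mbps(width, height, bit_depth, fps):
    """Uncompressed video bit rate in megabits per second (decimal mega)."""
    return width * height * bit_depth * fps / 1e6
```

An illustrative 1024 x 1024, 8-bit sensor at 30 frames per second produces roughly 252 Mbit/s uncompressed, far beyond a typical airborne downlink, which is why onboard processing that transmits only detections or selected regions of interest is so attractive.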
The answer is often a compromise between onboard processing and on-demand high-resolution imagery for ground personnel. Computers, both on board and on the ground, process the imagery using automatic target recognition (ATR) search algorithms. The algorithms help military personnel find hidden assets, and are then used to help guide ordnance to the location in concert with the UAV’s movement through the air. Today, military scientists are exploring a multitude of ways to parse this information efficiently, including variations on stochastic and Markov models.
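One generic way to combine multiple weighted probabilistic indicators, in the spirit the article later describes as “a combination of multiple weighted probabilities,” is weighted log-odds fusion. This is a textbook sketch, not any specific military ATR system; the function name and weighting scheme are assumptions for illustration:

```python
import math

def fuse_detections(probs, weights):
    """Fuse per-indicator detection probabilities via weighted log-odds.

    probs:   each indicator's probability that a target is present
             (each strictly between 0 and 1)
    weights: relative trust assigned to each indicator (illustrative)
    Returns a single fused probability.
    """
    # Sum weighted log-odds, then map back through the logistic function.
    logit = sum(w * math.log(p / (1.0 - p)) for p, w in zip(probs, weights))
    return 1.0 / (1.0 + math.exp(-logit))
```

With equal weights this reduces to multiplying the indicators’ odds, so several weakly confident detectors can together yield a strong detection.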
Through the Office of Naval Research, Associate Professor Jennifer Davidson at Iowa State University is one researcher seeking to improve ATR through a better understanding of how random variables affect stochastic models. In particular, Davidson has explored multiresolution/multiscale methods using partially ordered Markov models to improve target recognition, image segmentation and discrimination. Similar research on mathematical methods for finding hidden features in random scenes is underway in Dr. David Marchette’s lab at the Naval Surface Warfare Center Dahlgren Division (NSWCDD). Marchette’s work focuses on identification, spatial scan analysis, random shape analysis and multiple classifiers.
Powered by the latest computational engines, these new image processing methods strive to use ever more complex and numerous indicators to qualify random structures in digital images through a combination of multiple weighted probabilities. One can only guess how this research will impact industrial machine vision algorithms, but one thing is for sure: the machine vision industry and its customers will benefit as the aerospace and defense industries continue to push the envelope of machine vision technology.