The Show Must Go On
by Winn Hardin, Contributing Editor, AIA — Posted 07/07/2009
“Hanging out with the wrong crowd” convinced Rafael Lozano-Hemmer to trade the world of chemistry for that of a collaborative artist, creating interactive exhibitions for museums and corporations around the world. The good news is that the “wrong people” were his family. “My family has always been involved in the arts, so it came naturally,” explains Lozano-Hemmer.
Interactive artistic exhibitions are just one way machine vision is helping to make the world a better place. Machine vision is also helping elite athletes in Germany get the most out of their training and explore new moves that can help them succeed in their chosen sport. Machine vision also helps to remove the barriers between Web-based simulated environments, such as Second Life, and the legions of followers who enjoy digital social interaction in addition to “reality.”
Who’s Watching Whom?
“Artists are using sensor technologies, including computer vision, as a way to make the artwork become aware of the public,” says Lozano-Hemmer. “We’ve turned the traditional art experience upside down. Now the artwork looks at us, listens to us, and is looking for us to do something with it. The relationship is what makes it interesting.”
Lozano-Hemmer has used machine vision to detect the presence of art viewers, and sometimes much more. “We’ve worked with facial recognition, and in other cases, used computer vision to predict where a person will be in the future, and put an experience in their path…. Computer vision is changing very rapidly, which is great. We adore computer vision because there’s no tethering. You don’t have to hold a special wand to interact. Those approaches aren’t practical when you have high throughput, so machine vision has a plus when it comes to throughput.”
“Under Scan” and “Subtitled Public” are two of Lozano-Hemmer’s creations that turn on the power of machine vision. “Under Scan” is an interactive video art installation displayed in several UK cities as well as at the 52nd Biennale di Venezia in Venice (2007). Bright floodlights are aimed from above onto a large square. As viewers walk across the square, they cast shadows, which are detected by a machine vision system. The vision system controls robotic projectors that project video-portraits within each viewer’s shadow (see Image 1). Local filmmakers captured more than 1,000 video-portraits of volunteers for the exhibition. In the installation, the portraits appear at random locations. They “wake up” and establish eye contact with a viewer as soon as his or her shadow “reveals” them (see Image 2). As the viewer walks away, the portrait reacts by looking away, and eventually disappears if no one activates it. Every seven minutes the entire project stops and resets. The tracking system is revealed in a brief “interlude” lighting sequence, which projects all of the calibration grids used by the computerized surveillance system (see Image 3).
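The underlying idea — spotting a dark shadow against a brightly lit square and steering a projector toward it — can be sketched in a few lines. This is a hypothetical illustration of the general technique, not Lozano-Hemmer's actual implementation; the function names, threshold, and simulated frames are all assumptions.

```python
import numpy as np

def detect_shadows(frame, reference, threshold=60):
    """Return a boolean mask of pixels much darker than the
    reference image of the empty, floodlit square."""
    diff = reference.astype(np.int16) - frame.astype(np.int16)
    return diff > threshold  # shadow = pixel darkened by a passerby

def shadow_centroid(mask):
    """Centre of the shadow blob — the point a robotic projector
    would be aimed at to place a video-portrait."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None  # nobody on the square
    return (xs.mean(), ys.mean())

# Simulated 8-bit greyscale frames: a bright square with one shadow.
reference = np.full((240, 320), 200, dtype=np.uint8)
frame = reference.copy()
frame[100:140, 150:190] = 40  # a viewer's shadow

centroid = shadow_centroid(detect_shadows(frame, reference))
# centroid ≈ (169.5, 119.5), the middle of the shadow region
```

In a real installation the reference image would be re-captured periodically to track changing ambient light, and the image coordinates would be mapped through a calibration grid (like the ones revealed in the "interlude" sequence) to projector coordinates.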
“Subtitled Public” is both a triumph of machine vision and a foil for showing its limitations. The exhibition, which showed at Tate Liverpool in the UK as well as museums in Mexico City and Madrid, consists of an empty exhibition space where visitors are detected by a machine vision system. When people enter the space, the system generates a subtitle for each person and projects it onto him or her: the subtitle is chosen at random from a list of verbs conjugated in the third person. The only way of getting rid of a subtitle is to touch another person, which leads to the two subtitles being exchanged (see Images 4 and 5).
Unlike in industrial applications, unforeseen consequences can actually help artistic demonstrations. “One of the problems we have with computer vision is tracking,” explains Lozano-Hemmer. “We can’t discriminate targets. With ‘Subtitled Public’ the computer doesn’t know if the blob is made out of two people close to each other, or one person. We’re eager to work with someone who has a number of PhDs in pattern recognition. We’re collaborators. We believe if someone has spent five years trying to solve a particular problem, and they’ve done so successfully, we’d love to work with that kind of specialist, assuming it can be worked into art.”
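The blob-merging problem Lozano-Hemmer describes falls out directly from simple connected-component labelling: two people whose silhouettes touch become one foreground region. A minimal sketch of why, using a pure-Python flood fill over a toy foreground mask (all names and the scene itself are illustrative assumptions, not the installation's code):

```python
from collections import deque
import numpy as np

def count_people(mask):
    """Count 4-connected foreground blobs by flood fill.
    People standing close enough to touch fuse into one blob."""
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    blobs = 0
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                blobs += 1                    # found a new blob
                queue = deque([(y, x)])
                seen[y, x] = True
                while queue:                  # flood-fill it
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx),
                                   (cy, cx-1), (cy, cx+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            queue.append((ny, nx))
    return blobs

scene = np.zeros((20, 40), dtype=bool)
scene[5:15, 5:10] = True     # person A
scene[5:15, 30:35] = True    # person B, standing apart
print(count_people(scene))   # prints 2 — correctly resolved

scene[5:15, 9:31] = True     # B moves next to A; silhouettes touch
print(count_people(scene))   # prints 1 — the system now "sees" one person
```

Resolving the merged case is exactly the pattern-recognition problem the artist says he would like specialist help with: it requires a model of what a single person looks like, not just connectivity.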
When it comes to hardware, the machine vision industry is fitting the artistic bill nicely. “We work with AVT (Newburyport, Massachusetts) FireWire and Prosilica (British Columbia, Canada) Gigabit Ethernet cameras because they have never let us down. Plus they’re awesome cameras that give us a good price point, and both companies have diverse product portfolios, so if you need more resolution or a faster frame rate, you can go back and get whatever you need in the same form factor,” explains Lozano-Hemmer. (Editor’s note: for more videos of Rafael’s work, go to www.lozano-hemmer.com.)
Best of the Best
Professional athletes can win or lose by a few millimeters or milliseconds. To better understand the relationship between physique and motion, the German Sport University’s Institute of Biomechanics and Orthopaedics offers a range of 3D machine-vision-based analysis programs to help athletes and their coaches understand what they can do with their bodies, what they can’t, and how to bridge the two for athletic success.
The instruments used (3D full-body scanners, motion-capture systems, and force plates) provide information about the athlete’s anthropometry, the athlete’s movement, and the external forces acting on the athlete.
“We capture the surface of the athlete to get information about the moment of inertia of the athlete’s body segments,” explains Bjoern Braunstein, Senior Researcher at the Institute. “We use the surface information as input variables to model human movement. Then, using the models, we simulate special sports movements. As a partner of the German Research Center for Elite Sports, we cooperate with 25 national teams from Germany.”
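The anthropometric step Braunstein describes — turning scanned surface dimensions into segment inertia values that feed a movement model — can be illustrated with a deliberately simplified example: treating a forearm as a uniform solid cylinder. The cylinder approximation, dimensions, and tissue density below are illustrative assumptions, not the Institute's actual modelling method.

```python
import math

def segment_inertia(mass_kg, length_m, radius_m):
    """Moment of inertia of a uniform solid cylinder about a
    transverse axis through its centre of mass:
    I = m * (3*r^2 + L^2) / 12  (standard rigid-body formula)."""
    return mass_kg * (3 * radius_m**2 + length_m**2) / 12.0

# Forearm dimensions as they might come from a 3D surface scan
# (assumed values): length 0.26 m, mean radius 0.04 m.
length, radius = 0.26, 0.04
volume = math.pi * radius**2 * length      # cylinder volume, m^3
mass = volume * 1050                       # x assumed tissue density, kg/m^3

inertia = segment_inertia(mass, length, radius)
print(round(inertia, 5))                   # prints 0.00828 (kg*m^2)
```

In practice, research models use many segments with density profiles and shapes fitted to the scan rather than a single uniform cylinder, but the pipeline is the same: surface geometry in, segment masses and inertias out, simulation on top.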
In one recent investigation, Braunstein’s group used synchronized high-speed Basler (Ahrensburg, Germany) cameras positioned at different perspectives to reconstruct the 3D movements of table tennis players, their racquets, and the ball during competitive play with multi-body models. This approach provided qualitative analysis of the players’ motion during competition, as well as quantitative analysis of kinematic parameters such as velocity and acceleration of the wrist, elbow, shoulder, ball, and racquet; forearm-upper arm displacement angles; and changes in the center of mass in the athletes’ sagittal, frontal, and transverse planes. Braunstein’s group has done similar work for gymnastics, and has also studied stationary athletic bodies to determine asymmetries and deformities that may help or hinder certain athletic endeavors (see Image 6).
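Recovering a 3D point (say, the ball) from two synchronized, calibrated views is classically done by linear triangulation. The sketch below shows the standard direct linear transform (DLT) approach with toy camera matrices; it is a generic illustration of the technique, not the Institute's calibration data or software.

```python
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    """Linear (DLT) triangulation: given 3x4 projection matrices P1, P2
    and the same point's pixel coordinates pt1, pt2 in each view,
    solve A X = 0 for the homogeneous 3D point via SVD."""
    A = np.array([
        pt1[0] * P1[2] - P1[0],
        pt1[1] * P1[2] - P1[1],
        pt2[0] * P2[2] - P2[0],
        pt2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                 # null-space vector = homogeneous solution
    return X[:3] / X[3]

def project(P, X):
    """Project 3D point X through camera matrix P to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two toy cameras: one at the origin, one shifted 1 m along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

ball = np.array([0.2, 0.3, 4.0])   # the ball, 4 m from the cameras
recovered = triangulate(P1, P2, project(P1, ball), project(P2, ball))
# recovered ≈ [0.2, 0.3, 4.0]
```

Differentiating such reconstructed trajectories over time yields the velocities and accelerations reported for the wrist, elbow, shoulder, ball, and racquet; with synchronized high-speed cameras, the same instant is captured in every view, which is what makes the triangulation valid.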
It’s Virtually the Same Thing
Point Grey’s (British Columbia, Canada) compact Firefly MV IEEE-1394 (FireWire) camera and Bumblebee2 stereo camera are also helping people immerse themselves in virtual worlds. Small enough to be easily mounted on a head-mounted display (HMD), the Firefly has helped researchers at the Georgia Institute of Technology create an augmented reality interface for the 3D virtual world Second Life.
"We have used Fireflys, Fleas, and Dragonflys in our work, and had been using Fleas and the extended-head form factor of the Dragonfly for our previous head-worn displays," explains Blair MacIntyre, Associate Professor in Georgia Tech's School of Interactive Computing and GVU researcher. "We are now working with the current-generation Firefly MV, which provides a nice balance between size, image quality, and frame rate, at a much lower price point. It also supports automatic inter-camera image synchronization, which is important for creating stereo displays where the left and right eyes need to be synchronized."
If you’d rather be a bird than an avatar, interactive arts company Squidsoup may have the answer. Squidsoup’s artists use Point Grey’s Bumblebee2 3D stereoscopic camera to monitor the bird-like motions of people in the Driftnet exhibit. The camera monitors body and arm position, and alters a projected 3D image in response to flapping arms, tilted arms, or similar movements, giving participants a “bird’s-eye” view as they skim a variety of landscapes.
Whether it’s flying through the air like a bird or landing a new high jump record at the Olympics, machine vision will continue to help people have fun as well as do their jobs. A century ago, people placed picture cards in a handheld viewer and imagined 3D landscapes as they looked through a pair of rough optics. A few decades ago, films came without sound, in black and white, and were accompanied by a piano player or small group of musicians. Today, films are distributed in 3D while amusement parks fill their halls with simulated environments that put the viewer into the action. As the examples above show, the potential of machine vision in the entertainment industry is just beginning, and is limited only by our imaginations.