Check out this interesting video of a virtual reality police lineup. The system was developed at Stanford’s Virtual Human Interaction Lab and demonstrates several interesting ways of employing virtual reality to improve the lineup process. The VHIL’s research was performed in collaboration with the Research Center for Virtual Environments and Behavior, the National Science Foundation, and the Federal Judicial Center. The goal of the work is to apply virtual environments to deepen our understanding of how witnesses of crimes identify suspects.
The eyewitness observes the suspects by viewing a virtual environment delivered through a head-mounted display, although I can see no reason why similar functionality could not be delivered on a screen-based system. This allows the eyewitness to be at a location remote from the suspects, who need not even all be present at one location themselves.
The use of a 3D interactive environment also makes it possible for an eyewitness not only to see the individuals in the lineup from the front, but also to “fly around” them and view them from different angles, or even from just inches away.
It also makes it possible to place the lineup in a different virtual location, perhaps in a scene similar to the one where the crime was committed.
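The “fly around” viewing described above amounts to orbiting a virtual camera around a suspect’s bust. A minimal sketch of the underlying math (my own illustration in Python, with hypothetical parameter names, not the VHIL system’s actual code) computes the camera’s eye position from an azimuth, elevation, and distance around a target point:

```python
import math

def orbit_camera(target, azimuth_deg, elevation_deg, distance):
    """Return the eye position for a camera orbiting `target`.

    target: (x, y, z) point to orbit, e.g. the center of a suspect's head
    azimuth_deg: rotation around the vertical (y) axis
    elevation_deg: angle above the horizontal plane
    distance: orbit radius in world units (can shrink to "inches away")
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = target[0] + distance * math.cos(el) * math.sin(az)
    y = target[1] + distance * math.sin(el)
    z = target[2] + distance * math.cos(el) * math.cos(az)
    return (x, y, z)

# View a suspect's head (at eye height 1.7 m) head-on from 2 m,
# then swing the camera 90 degrees around to a profile view:
front = orbit_camera((0.0, 1.7, 0.0), 0.0, 0.0, 2.0)   # (0.0, 1.7, 2.0)
side = orbit_camera((0.0, 1.7, 0.0), 90.0, 0.0, 2.0)   # (2.0, 1.7, ~0.0)
```

In a real renderer the returned eye position would be fed to a look-at transform aimed back at the target; driving the azimuth and elevation from head tracking gives the walk-around effect the video shows.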
Unfortunately, the virtual lineup as implemented suffers from several weaknesses. First, the current implementation uses digital “busts” glued onto representative bodies. While this approach allows for the creation of digital “foils”, simulated persons similar in appearance to a true suspect, it also means that facial motion cannot be presented. This representation can also be misleading because the body shape, stature, and clothing may not accurately represent the suspects’ true appearance. It also limits the ability to employ realistic representations of distinguishing marks not found on the face, e.g. tattoos and scars.
Second, as implemented, the virtual busts cannot be animated in real time. Facial motion is known to be an important cue in facial recognition, and this work ignores some of the well-known results from the study of human face and person recognition. Finally, the avatar body motions are entirely synthetic, eliminating any cues related to body motion, gait, etc., which have also been shown to aid recognition.
Unfortunately, these limitations of the head-mounted virtual reality lineup are likely to prevent its use in any real-world lineups. A better approach would seem to be blue/green-screen video capture of real suspect images, possibly from multiple cameras, combined with image-based rendering to generate the virtual face views. See, for example, this research and the research at the Fraunhofer Institute into image-based rendering of faces for virtual conferencing.
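The first step of such a capture pipeline is separating the suspect from the green-screen background. A deliberately crude sketch of chroma keying (my own toy illustration, not the cited systems’ pipeline, which would use color-space distance and matting refinement) tests whether the green channel dominates the other two:

```python
def chroma_key_mask(pixels, green_thresh=1.3):
    """Return a per-pixel foreground mask for a list of (r, g, b) pixels.

    A pixel is treated as green-screen background when its green channel
    exceeds both red and blue by the given ratio. The `max(..., 1)` guard
    avoids multiplying by zero for pure-black channels.
    """
    mask = []
    for r, g, b in pixels:
        is_background = (g > green_thresh * max(r, 1)
                         and g > green_thresh * max(b, 1))
        mask.append(not is_background)
    return mask

# Tiny synthetic "frame": green-screen pixels plus one skin-toned pixel.
frame = [(0, 255, 0), (0, 255, 0), (180, 120, 90), (10, 250, 20)]
mask = chroma_key_mask(frame)
# → [False, False, True, False]: only the skin-toned pixel is kept
```

The foreground pixels surviving the mask, captured from several camera angles at once, are what an image-based renderer would interpolate between to synthesize the novel face views discussed above.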