Research at the VECG group can be split into the following categories.
Our main driver is the creation of interaction techniques that exploit the full power of interface technologies, be they immersive or non-immersive.
We are increasingly interested in moving graphics to new displays and interaction devices. We have been exploring the use of mobile devices as sensors and real-time graphical displays, augmented reality systems, and the potential for collaboration in shared ubiquitous environments where user activity is represented in many different ways.
Virtual environments are assemblies of a variety of hardware and software technologies whose purpose is to immerse a user within an interactive computer-generated illusion. Due to the nature of the technologies, this illusion is a first-person, egocentric view of an environment that is "life-like" in scale, behaviour and interaction. For example, when one turns one's head, the graphics and audio "move" as the real world normally does. Ideally this illusion creates, for the participant, a sense of "presence", such that they believe they can, and they do, interact with the illusion much as they would in a similar real situation.
There is a long technical history to such illusions, but there is also a correspondingly long development of an understanding of how and why they occur. Virtual environments are thus an exciting technology to study in and of themselves, because they create interesting illusions, but our group is more interested in which types of technology and media most easily create these illusions. Our group's own understanding of presence has evolved over time, as we have built better technology, but more importantly as we have learned how to measure the responses of users.
The field of computer graphics has advanced rapidly over the past several years; the quality and realism of computer renderings are often stunning, as can be seen in movies like Final Fantasy or Shrek. Yet many open problems remain in the area of immersive, realistic, and interactive virtual environments.
Computing high-quality, realistic images takes a very long time. A single movie frame often takes hours to render, yet many applications, such as flight simulators and architectural walkthroughs, demand immediate feedback. Solutions for these applications exist (e.g., through dedicated graphics hardware), but they require a substantial decrease in quality and realism to meet the time constraints. At UCL we are developing efficient algorithms to produce realistic images at interactive frame rates.
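To give a feel for the gap between offline and interactive rendering budgets, the following sketch works through the arithmetic. The specific numbers (a 60 frames-per-second interactive target and a two-hour offline movie frame) are illustrative assumptions, not measurements from our work:

```python
# Rough comparison of offline vs interactive rendering time budgets.
# The 60 fps target and 2-hour offline frame time are illustrative
# assumptions only.

TARGET_FPS = 60
frame_budget_s = 1.0 / TARGET_FPS      # ~16.7 ms available per interactive frame

movie_frame_s = 2 * 60 * 60            # 7200 s for one hypothetical offline frame

ratio = movie_frame_s / frame_budget_s
print(f"Interactive budget: {frame_budget_s * 1000:.1f} ms per frame")
print(f"Offline frame is {ratio:,.0f} times over the interactive budget")
```

Even under these rough assumptions, an interactive renderer has several orders of magnitude less time per frame than an offline one, which is why algorithmic shortcuts and quality trade-offs are unavoidable.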
Rendering realistic images requires realistic scene data. Currently, most data is authored by hand, which allows for flexibility but is also very tedious. Recently, image- and video-based rendering has gained popularity, due to the ease with which realism can be achieved.
In the VECG group, we have a considerable history of research into how to populate our virtual environments with human-like characters. These characters might be entirely virtual, controlled by computer, or they might be avatars representing real people in a collaborative environment. The scale of our populated environments ranges from large crowd simulations to one-on-one conversations. Our work mostly falls under two broad research questions: what is it about a virtual character that makes it believable, and what computational techniques can be used to create believable characters?
As with much of our research in virtual environments, we do not uncritically assume that high levels of realism are necessary for creating believable characters. Instead our aim is to investigate exactly which features of a character create believability: what is it that makes a cartoon character like Bugs Bunny more compelling than a very much real, but wooden, B-movie actor? To this end we have performed many experiments with a variety of different virtual characters. These experiments show that fairly unrealistic characters can produce strong emotional responses in the right context. Some results even suggest that graphical realism can be detrimental if the character's behaviour is not similarly realistic. Our current hypothesis, based on these experiments, is that one of the most important aspects of a character is the simulation of social cues, and in particular non-verbal communication (often called "body language").
This raises the question of which computational techniques to use to create characters with believable non-verbal communication. We have created characters that can use a number of different modalities for social and emotional expression, including gaze, posture, gesture and facial expression. We have developed an open-source animation system, PIAVCA, that supports these forms of expression. We have investigated two main methods for generating non-verbal communication. The first is theory-driven, using knowledge obtained from psychology and related disciplines to craft behavioural algorithms. More recently we have been investigating methods centred on the capture and statistical analysis of an actor's performance.
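As a flavour of the theory-driven approach, the toy sketch below turns one well-known observation from the psychology of conversation (listeners tend to look at their partner more than speakers do, and both periodically avert gaze) into a simple behavioural rule. The function, its parameters and the specific thresholds are illustrative assumptions for this sketch; they are not taken from our systems or from PIAVCA:

```python
# Toy theory-driven gaze rule: listeners hold mutual gaze for a larger
# fraction of each aversion cycle than speakers do. All numeric values
# here are illustrative assumptions, not measured or published parameters.

def gaze_target(role, t, aversion_period=4.0):
    """Return 'partner' or 'away' for a character at time t (seconds).

    role: 'listener' or 'speaker'. Each character cycles between looking
    at the partner and looking away; listeners spend a larger fraction
    of the cycle looking at the partner.
    """
    look_fraction = 0.75 if role == "listener" else 0.4
    phase = t % aversion_period
    return "partner" if phase < aversion_period * look_fraction else "away"

# Sample a 10-second interaction at 10 Hz and compare gaze proportions.
times = [i * 0.1 for i in range(100)]
listener = sum(gaze_target("listener", t) == "partner" for t in times) / len(times)
speaker = sum(gaze_target("speaker", t) == "partner" for t in times) / len(times)
print(f"listener looks at partner {listener:.0%} of the time, speaker {speaker:.0%}")
```

A real behavioural algorithm would of course condition on much more (turn-taking state, emotional context, interpersonal distance), but the structure is the same: psychological findings are encoded as rules that drive the character's animation channels each frame.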
Geometry and Scanning
A number of our projects have worked with scanned objects and other geometric sources, in particular scans of human bodies and scans of museum objects. We are interested both in the technologies for scanning and in the use of the resulting 3D models.