to typical questions, and for changing aspects of the avatar's appearance including outfit,
facial features, and mannerisms.
The most frequent means by which users communicate with Web-based avatars is by
typing instructions or messages into a text box. Responses from the avatars appear in
another text box. The background software running the avatar analyses a query or
response for keywords, and then searches for appropriate answers in its knowledge data-
base. The avatar's conversational abilities can be developed and made more useful and
natural by monitoring the user's behaviour and updating the knowledge database that the
avatar software queries. In order to view avatars on the Internet, users require a browser
plug-in; this is not dissimilar to the dedicated utilities (such as those made by blaxxun,
Cosmo, and others) required to view virtual reality environments.102
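The keyword lookup described above can be sketched in a few lines. The knowledge-base entries, keywords, and answers below are invented for illustration; a real deployment would draw them from the museum's own database.

```python
import re

# Each entry maps a tuple of trigger keywords to a canned answer.
# Entries and wording are hypothetical examples.
KNOWLEDGE_BASE = {
    ("open", "opening", "hours"): "The museum is open daily from 9 am to 5 pm.",
    ("ticket", "price", "cost"): "Admission is free for members.",
}

DEFAULT_REPLY = "I'm sorry, I don't know the answer to that."

def respond(query: str) -> str:
    """Return the first stored answer whose keywords appear in the query."""
    words = set(re.findall(r"\w+", query.lower()))
    for keywords, answer in KNOWLEDGE_BASE.items():
        if words & set(keywords):  # any shared keyword triggers this answer
            return answer
    return DEFAULT_REPLY

print(respond("When is the museum open?"))
# -> The museum is open daily from 9 am to 5 pm.
```

Monitoring user behaviour, as the text notes, amounts to adding or refining entries in `KNOWLEDGE_BASE` over time.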
From early, largely immobile avatars, the technology has developed towards increasing
realism, and data capture and animation are now used in the production of avatars. There
are two main stages in this process: tracking the desired motions, and visualising the captured data.
In the first stage, the movements and motions made by an avatar (or agent) are generally
tracked and stored digitally. There are currently a number of devices that can be used to
digitise a person's real-time motions, including optical systems for tracking facial expressions,
data gloves for capturing hand/finger movements, and sensor-enabled body suits for
capturing posture. These movements are captured at a high resolution, and sequences of
movements can be stored as video clips.
After these processes are complete, data from tracking systems for face, hands and body
can then be combined to achieve a realistic full-body animation. To make real-time
visualisation viable, the avatar's movements may follow predefined sequences. The
visualisation software 'glues' the captured sequences consecutively, giving the observer an
impression of continuous movement.
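The gluing step can be illustrated as simple concatenation of predefined clips. The clip names and frame labels below are invented; in practice each frame would be captured motion data rather than a string.

```python
from typing import Dict, List

# A hypothetical library of pre-captured motion clips, each a list of frames.
CLIP_LIBRARY: Dict[str, List[str]] = {
    "wave": ["raise_arm", "wave_left", "wave_right", "lower_arm"],
    "nod":  ["tilt_down", "tilt_up"],
    "idle": ["breathe", "blink"],
}

def glue(sequence: List[str]) -> List[str]:
    """Concatenate the named clips into one continuous frame stream."""
    frames: List[str] = []
    for name in sequence:
        frames.extend(CLIP_LIBRARY[name])
    return frames

# A greeting animation assembled from predefined sequences:
print(glue(["wave", "nod", "idle"]))
```

Because the clips are fixed in advance, the only run-time cost is selecting and playing them back in order, which is what makes real-time visualisation viable.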
Creating avatars using this technology is still expensive. Data capture remains a labour-
intensive and specialised task, and the image processing and animation algorithms require
top-end computing resources. The process becomes still more costly when combined
with text-to-speech functionality. Depending on the features and level of realism, avatar
software for Web deployment can (in 2004) cost as much as 25,000 to 100,000 per
annum. There are, however, companies that offer cheaper 'talking heads'-type solutions.
Robotic avatars consist of a physical mobile platform (the robot itself) and a separate
workstation, with a representation of the user's avatar appearing on-screen. The worksta-
tion will typically host a database containing information on the museum and its exhibits
at various levels of detail, depending on individual visitor preferences; Web links for
communication with televisitors and with the on-board control block; and a multimedia
Web interface through which the system can be operated over the Internet. Users can
thereby control the robot's movements from a distance, specifying directions and objects
for observation. [Roussou et al., 2001 (full ref. on page 194)]
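The commands a televisitor sends to the control block might look like the following sketch. The message fields, exhibit identifier, and controller methods are assumptions for illustration, not taken from Roussou et al.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MoveCommand:
    direction: str    # e.g. "forward", "left" (hypothetical values)
    distance_m: float

@dataclass
class ObserveCommand:
    exhibit_id: str   # the object the televisitor wants the camera to face

class RobotController:
    """Dispatches commands received from the Web interface to the platform."""
    def __init__(self) -> None:
        self.log: List[str] = []

    def handle(self, cmd) -> None:
        if isinstance(cmd, MoveCommand):
            self.log.append(f"move {cmd.direction} {cmd.distance_m} m")
        elif isinstance(cmd, ObserveCommand):
            self.log.append(f"point camera at {cmd.exhibit_id}")
        else:
            raise ValueError(f"unknown command: {cmd!r}")

ctrl = RobotController()
ctrl.handle(MoveCommand("forward", 1.5))
ctrl.handle(ObserveCommand("vase-17"))
print(ctrl.log)
```

Separating the Web-facing command messages from the on-board dispatcher mirrors the workstation/control-block split described above.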
The mobile platform will typically be based upon an on-board interface which provides
interaction with on-site visitors of the museum, and a control block which takes care of
102 http://www.blaxxun.com; http://ca.com/cosmo/. For more browser vendors see DigiCULT Technology
Watch Report 1's section on Virtual Reality and Display Technologies (pp. 95-116).