DigiCULT 29
playing visual and textual information, allowing for a
seamless presentation of displayed information. There
will be lightweight, flat and foldable displays; displays
will appear on fabric, on the surface of any object
through tiny, overlaying screens; and special paint may
turn entire walls into screens. Furthermore, projection of information onto glasses, as well as 3D presentation of dynamic information, may find broader use. A state-of-the-art industry report on flexible displays, including chapters on standard as well as new
technologies such as particle displays, OLED (Organ-
ic Light Emitting Diode) and others, is available from
Intertech Corporation (2004).
The emergence of ambient intelligence environ-
ments will be accompanied by a proliferation of
interfaces other than WIMPs (Windows-Icons-Menus-Pointing devices). In the next 5-10 years, novel interfaces should greatly increase the practicality and
convenience of digital communication, information
acquisition, learning and entertainment. For example, the MEDEA+ Applications Technology Roadmap (2003) highlights the 'overriding importance of better user-tuned interaction capabilities of nearly all …'. Such user-tuning should ideally be achieved through highly adaptive, personalised interfaces that allow devices and input/output modalities to be adjusted to each individual.
While single devices may conform to this require-
ment fairly quickly, a major challenge for AmI envi-
ronments lies in the interaction design or, rather,
the authoring of the users' experiences in an environment. A good example of this is the 'distributed browser approach' suggested by Philips Research. This
approach suggests a mark-up language to describe and
control the devices within a location. Each device acts
as part of the browser and together they render the
experience. For example, an experience described as
'warm and sunny' could involve contributions from lights, the TV, the central heating, electronically controlled blinds as well as the ceiling, wall and floor coverings.
It is envisaged that in the near future multimodal interfaces will, to a considerable degree, enable us to control and interact with the environment and various media in a natural and personalised way.
This should lead to highly convenient living environ-
ments (including opportunities such as entertainment,
learning and creativity), as well as improving working
environments in terms of productivity.
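The distributed browser approach described above can be made concrete with a small sketch: an 'experience' is written in a PML-like mark-up, and each device in the room renders only its own part. The element and attribute names here are hypothetical, since the source does not specify PML's actual syntax.

```python
# Sketch of a PML-style experience description being split into
# per-device instructions. All names (experience, device, action,
# value) are assumptions, not documented PML vocabulary.
import xml.etree.ElementTree as ET

PML_DOC = """
<experience name="warm and sunny">
  <device id="lights"  action="dim"  value="warm-white"/>
  <device id="tv"      action="show" value="sunny-landscape"/>
  <device id="heating" action="set"  value="23C"/>
  <device id="blinds"  action="open" value="half"/>
</experience>
"""

def render_experience(pml: str) -> dict:
    """Parse the mark-up and return per-device instructions; in an
    AmI environment each device would act only on its own entry,
    together rendering the overall experience."""
    root = ET.fromstring(pml)
    return {d.get("id"): (d.get("action"), d.get("value"))
            for d in root.iter("device")}

instructions = render_experience(PML_DOC)
print(instructions["lights"])
```

The point of the design is that no single device owns the 'page': the mark-up describes the intended experience, and the set of available devices jointly interprets it.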
Multimodal interaction includes voice, touch, pointing, gestures, eye movements and facial expression to communicate needs and wants. One important element of multimodal interfaces will, of course, be hands-free voice control of applications, based on natural language recognition and understanding by computers. Machine output, in the form of well-synthesised speech as well as language translation, will also be an important feature of future interactive environments. For example, experts think that by 2010 it will
be possible to achieve a reasonably good translation of
natural language. IBM, to name but one example, has
set a goal for its research laboratory to have a natural
language translator working in 28 languages by 2010.
For interacting in virtual and augmented reality
environments, future developments should also lead to
a more immersive experience. Besides more realistic …

Gesture or write your story in the air
The image below gives an example of new inter-
face and interaction designs, as developed in stu-
dies by Philips Research. Here a girl in a family's
children's playroom is interacting with an application that allows her to generate a narrative in which she plays a role herself. The scenario includes
motion capture by means of a camera and special
software. To give another example in this domain,
the Korean company MicroInfinity, which, among
other technologies, develops advanced human
interfaces (motion capture, head-mounted displays
with head-movement tracking, etc.), recently pre-
sented a three-dimensional input application. With
a special pen, words can be written in the air and
are converted into a document format.
Philips Research Technologies: Ambient Intelligence, …logies/syst_softw/ami/background.html
Intertech Corporation (2004), "Flexible Displays and Electronics: A Techno-Economic Assessment and Forecast", Portland/USA, 2004. http://www.
MEDEA+ Applications Technology Roadmap. Vision on Enabling Technologies of the Future, Version 1.0, p. 8, 25 November 2003. http://
Philips Research, "PML [Physical Markup Language] - When your room becomes your browser". http://www.
For an in-depth roadmap for RTD and design in novel interaction technologies, see: L. Norros et al., Interaction Research and Design. VTT Roadmap, Espoo, 2003. http://www.