DigiCULT 25
and proactively fetch relevant documents for the user.
For this, we developed a system called Fetch (Figure
3), which is explained below.
A query is executed via the search area (1) and
results are returned together with a query-biased
summary (2) for each link (3) in the result set. Links
can then be dragged on to the workspace (4) and
grouped together with similar documents to form
bundles (5) analogous to the way in which related
documents are placed in the same folder on a desktop.
Bundles on the workspace are also represented in the
overview panel (6) in order to complement the flex-
ibility of the workspace with a more structured view.
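The workspace objects described above can be sketched as a simple data model. This is an illustrative sketch only, not Fetch's actual implementation; the class and field names are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class Link:
    """A search result dragged from the result list onto the workspace."""
    url: str
    title: str
    summary: str           # the query-biased summary shown with each result
    updated: bool = False  # drawn red instead of green when new content is found

@dataclass
class Bundle:
    """A group of related links, analogous to a desktop folder."""
    name: str
    links: list[Link] = field(default_factory=list)
    query: str = ""        # the query the agent later formulates for this bundle
    updated: bool = False  # drawn red when the agent has new results

# Usage: drag a result into a new bundle on the workspace
workspace: list[Bundle] = []
museums = Bundle("museums")
museums.links.append(Link("http://example.org/louvre", "Louvre",
                          "national art museum in Paris ..."))
workspace.append(museums)
```

The overview panel would then simply be a second, list-structured rendering of the same `workspace` collection.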
At some future time, the agent analyses the bundles
belonging to each user and formulates a new query
for each. The system notifies the user of this new
information by changing the colour of the bundle
on the workspace from green to red (7). Double-clicking
an updated bundle does not open it; instead, it
initiates a new search using the associated query,
with results returned as before. Relevant links can
then be dragged into new or existing bundles in the
same fashion. The list of query
terms can also be edited in the query editor (8) based
on the quality of the first result set. Iterations of this
form continue for as long as the bundle's contents are
updated, and thus the user's changing information
need can be captured. The agent also checks
for updated links on the workspace, alerting the user
by changing the colour of the link icon from green
to red.
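One plausible way for the agent to formulate a new query from a bundle is to extract the most frequent terms from the bundled documents. The report does not specify the agent's actual algorithm; the sketch below is an assumption, using a simple term-frequency approach with a small stopword list:

```python
import re
from collections import Counter

# Minimal stopword list for illustration only; a real system would use
# a fuller list or an IDF-style weighting.
STOPWORDS = {"the", "a", "an", "of", "and", "in", "to", "for", "is", "on"}

def formulate_query(texts, k=5):
    """Pick the k most frequent non-stopword terms across a bundle's
    documents as a fresh query for that bundle."""
    counts = Counter()
    for text in texts:
        for term in re.findall(r"[a-z]+", text.lower()):
            if term not in STOPWORDS and len(term) > 2:
                counts[term] += 1
    return [term for term, _ in counts.most_common(k)]

# Two documents a user might have bundled together:
docs = ["The National Museum opened a new Egyptian gallery.",
        "Museum curators digitise Egyptian artefacts for the gallery."]
print(formulate_query(docs))  # terms shared by both documents rank first
```

The agent would run such a query in the background and, if it returns new results, flip the bundle's colour from green to red.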
Thus, Fetch adopts a flexible workspace environment while
incorporating a bundling technique that allows users
to develop strategies for coping with the loss of con-
text occurring when a variety of independent sources
are viewed together. Over a period of time, through
the observation of this bundling, the agent can build
an accurate profile of the multifaceted information
need and eventually recommend relevant Web pag-
es without the need for users to mark documents
explicitly.
The Fetch interface provides a good visualisation of
the information space: bundles are visible on the
workspace and can be moved around freely. In this way,
users can associate spatial cues with the bundles; for
example, all bundles in the top-right corner might
deal with museums.
EVALUATION OF PERSONALISED SEARCH SYSTEMS
The activity of evaluation has long been recognised
as a crucially significant process through
which information retrieval systems reach implemen-
tation in a real-world operational setting. Evaluative
studies are concerned with assessment of the quality
of a system's performance of its function, with respect
to the needs of its users within a particular context or
situation. The direction of such studies is commonly
determined, and thus implicitly validated, by the
adoption of some kind of structured methodology or
evaluative framework.
The traditional evaluative study of IR systems
derives from the Cranfield Institute of Technology's
projects in the early 1960s and survives in the large-
scale experiments undertaken annually under the aus-
pices of the Text Retrieval Conference (TREC).
However, the suitability of such a framework for the
evaluation of interactive systems has often been
questioned.
We have been following a task-oriented and user-
centred approach for the evaluation of personalised
IR systems. Our approach is based on the adoption
of simulated work-task environments that place the
user in a realistic information-seeking scenario. The
basic idea is to develop simulated search tasks. Such
tasks allow the user to simulate an actual working
environment and thereby better judge the relevance
of documents from an actual information-need
perspective. In addition, such a situation facilitates
realistic interaction in a laboratory setting. Systems,
tasks and users are rotated according to a proper
statistical design in order to avoid learning bias.
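The text does not detail the statistical design; a Latin square, in which each participant meets the tasks (or systems) in a rotated order so that each appears in each position equally often, is one standard way to counterbalance learning effects. A minimal sketch, assuming a simple cyclic construction:

```python
def latin_square(n):
    """n-by-n Latin square: row u gives the task order for user u, so
    every task appears in every presentation position exactly once."""
    return [[(u + t) % n for t in range(n)] for u in range(n)]

# 4 users, 4 simulated work tasks: each user gets a rotated task order
for user, order in enumerate(latin_square(4)):
    print(f"user {user}: tasks {order}")
```

The same rotation can be applied independently to systems and tasks so that no system is always seen with the same task or in the same position.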
Figure 3: Fetch interface