Spring 2006 DLF Forum: Melvyl Recommender Project, 25
3 April, 2006
Assessment
●Task-based, facilitated, and observed
●Rotated through 4 ordering methods:
–Content ranking only
–Content ranking boosted by circulation
–Content ranking boosted by holdings
–Unranked, sorted by system ID
●Grouped by naive vs. expert
●Not a TREC-style competition; user-focused. Sample size of 10. Mostly subject searching, but one known-item task. Independent variable = ordering method, rotated through on each query and assigned in random order for each participant. This happened in the background; users were not aware of the difference from query to query. Expertise was grouped as naive or expert, roughly undergraduate vs. graduate students (all UC Berkeley).
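The rotation scheme described above can be sketched in code. This is a hypothetical illustration, not the project's actual test harness: the ordering-method names and the `assign_orderings` helper are assumptions, but the logic matches the notes, with the four methods shuffled into a random order per participant and then cycled through on each successive query.

```python
import random

# The four ordering methods from the slide (labels are illustrative).
ORDERINGS = [
    "content_only",         # content ranking only
    "content_circulation",  # content ranking boosted by circulation
    "content_holdings",     # content ranking boosted by holdings
    "unranked_system_id",   # unranked, sorted by system ID
]

def assign_orderings(num_queries, rng=random):
    """Build one participant's schedule: shuffle the four methods into
    a random rotation, then cycle through it for each query so every
    method appears before any repeats."""
    rotation = ORDERINGS[:]
    rng.shuffle(rotation)
    return [rotation[i % len(rotation)] for i in range(num_queries)]
```

Seeding a fresh `random.Random` per participant would make each schedule reproducible while still varying the assignment across participants.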