
Volume 1 Number 2 July 2000

The Trouble with Tools: Designing Effective Survey Mechanisms

By Ann Marie Parsons

Revised on 9 November 2000

How simple life would be for librarians if they could stitch digital content onto a prefabricated pattern and voila - the "One Size Fits All" collection! Unfortunately, collections, like people, rarely look their best when clad in a "One Size Fits All" ensemble. Digital collections come in a variety of formats and are developed to meet the information needs of different user communities. Confronted with this diversity, libraries are frequently left wondering how to assess their very distinctive digital collections in order to determine, for example, how, and with what effect, they are being used. In an effort to assist libraries in responding to this challenge, the Digital Library Federation is conducting research into the methods that libraries are using to evaluate the use and usability of their digital collections. This feature reflects on some of the lessons that are beginning to emerge from that investigation.

Although it is too soon to make sweeping statements about the experiences the DLF is uncovering in its member libraries, it is possible to document a common idealized approach to the planning of use assessment activities. As an initial step, library staff, having selected a collection they want to evaluate, define their assessment goals and desired outcomes. They then review the user information that already exists, for example, from the email, phone, and personal inquiries that are made of public service or reference staff. Here it is important to determine whether such data are available in a manner that is suitable for formal analysis. A third step involves discussion about what additional information is required for the evaluation and the methods that may be used to collect it. Clearly there are roles for both quantitative and qualitative methods; indeed, there appears to be a preference for combining them to develop a more complete picture of how a collection is used. Methods that are commonly deployed are listed briefly below alongside some of their key strengths and weaknesses:

  • Focus groups are used to establish face-to-face dialog with users. They are helpful in determining how and why users respond well or badly to aspects of a digital collection. Focus groups also present their own unique set of problems. Managing group dynamics can be challenging, particularly where one or two participants are determined to pursue their own agenda. And one can never be completely certain that the views of a focus group represent those of the user community at large. This latter difficulty may be compounded in an era when focus-group participants increasingly need to be compensated for their time, for example, with free pizza and cookies (appropriate, we are told, for student groups) or with catered lunches (increasingly required by faculty!).
  • Survey questionnaires or personal interviews eliminate the difficulties inherent in group settings but introduce their own problems. Whether and to what extent survey respondents represent a community of users is a persistent question. In addition, surveys and interviews must be well designed to ensure that their questions are interpreted consistently by respondents and that they elicit useful data - that is, data that can be understood and analyzed by staff after respondents are long gone. Further, respondents who seem perfectly willing to share their time and their thoughts may in fact be reluctant to share their real feelings about a collection, perhaps from a sense of embarrassment or inadequacy (e.g. where they find it difficult to use or comprehend part of a collection) or from a desire to protect their privacy.
  • Web transaction logs and other statistical data are also useful. Success in using them, however, requires close collaboration between the librarians who are typically responsible for designing and leading assessments and the systems specialists who are typically called upon to collect and analyze the web transaction logs. Such data can also be misleading. Evidence of an extended web session may indicate a user who left the room rather than one who spent hours poring over content. An unusually heavy amount of web traffic may reflect a group of students completing a one-time assignment. And interpretation of statistical data will frequently require more qualitative evidence. Statistical measures of how often a site is viewed provide little information about what users find aesthetically pleasing or useful about it. (A sketch of simple session analysis over such logs follows this list.)
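To make the log discussion concrete, the fragment below shows one simple way to derive per-visitor session durations from a web server log. It is a minimal sketch, not a recommended implementation: it assumes logs in the Apache Common Log Format, treats each requesting host as one visitor (shared proxies will blur this), and uses an arbitrary thirty-minute idle cutoff to split sessions.

    # Minimal sketch: per-host session durations from an Apache
    # Common Log Format file. Host-as-visitor and the 30-minute
    # idle cutoff are illustrative assumptions, not standards.
    import re
    from collections import defaultdict
    from datetime import datetime, timedelta

    LOG_PATTERN = re.compile(
        r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] "(?P<request>[^"]*)" '
        r'(?P<status>\d{3}) (?P<size>\S+)')
    TIME_FORMAT = "%d/%b/%Y:%H:%M:%S %z"
    IDLE_CUTOFF = timedelta(minutes=30)  # gap that ends a "session"

    def session_durations(log_lines):
        """Group requests by host, split each host's requests into
        sessions wherever the gap between consecutive requests exceeds
        IDLE_CUTOFF, and return {host: [session duration, ...]}."""
        stamps = defaultdict(list)
        for line in log_lines:
            match = LOG_PATTERN.match(line)
            if match:
                stamps[match.group("host")].append(
                    datetime.strptime(match.group("time"), TIME_FORMAT))
        durations = defaultdict(list)
        for host, times in stamps.items():
            times.sort()
            start = prev = times[0]
            for t in times[1:]:
                if t - prev > IDLE_CUTOFF:
                    # Long silence: close the current session, start anew.
                    durations[host].append(prev - start)
                    start = t
                prev = t
            durations[host].append(prev - start)
        return durations

Even with such a script in hand, the caveats above stand: a two-hour "session" may be nothing more than an abandoned browser window, and the resulting numbers only become meaningful when weighed against qualitative evidence.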

Having collected data, it is important to approach its analysis with a skeptic's eye, constantly weighing alternative explanations for any usage patterns that begin to emerge. It is equally important to follow up on results: to use them, for example, to improve existing collections or to incorporate desirable features into new collections, whether they are under construction or in the design phase. Another reason to follow up quickly on results is that digital collections are often dynamic, changing in their content or their functionality in response to stimuli that have nothing to do with their evaluation. Assessment results may therefore become outdated with time, even unhelpful. There are other ways to follow up on an evaluation. Results can be used, for example, to promote a library's genuine commitment to serving patrons and meeting their needs.

While examining the fabric of their online collections, DLF members have thus far shown different tastes. Collections and services come in many sizes and shapes. In order to tailor these resources to a perfect fit, planning, co-operation, and follow-up are indispensable tools. The digital collections that are admired most are likely to be the ones that are evaluated and altered to meet their users' needs.

The study of methods that are effective in assessing the use and usability of online collections is ongoing. The DLF welcomes further input about institutional experiences from both members and non-members. Sensitive information will be kept confidential. To learn more about this initiative and how to participate, please visit Usage, Usability and User Support: http://www.diglib.org/use/useframe.htm. Alternatively, contact Denise Troll (Carnegie Mellon University). From November, Denise will be leading our efforts in this area as a DLF Distinguished Fellow.

