Volume 1 Number 2 July 2000
The Trouble with Tools: Designing Effective Survey Mechanisms

By Ann Marie Parsons

Revised on 9 November 2000

How simple life would be for librarians if they could stitch digital content onto a prefabricated pattern and voilà: the "One Size Fits All" collection! Unfortunately, collections, like people, rarely look their best when clad in a "One Size Fits All" ensemble. Digital collections come in a variety of formats and are developed to meet the information needs of different user communities. Confronted with this diversity, libraries are frequently left wondering how to assess their very distinctive digital collections in order to determine, for example, how, and with what effect, they are being used.

In an effort to assist libraries in responding to this challenge, the Digital Library Federation is conducting research into the methods that libraries are using to evaluate the use and usability of their digital collections. This feature reflects on some of the lessons that are beginning to emerge from that investigation.

Although it is too soon to make sweeping statements about the experiences the DLF is uncovering in its member libraries, it is possible to document a common idealized approach to the planning of use assessment activities. As an initial step, library staff, having selected a collection they want to evaluate, define their assessment goals and desired outcomes. They then review the user information that already exists, for example, from the email, phone, and personal inquiries made of public service or reference staff. Here it is important to determine whether such data are available in a form that is suitable for formal analysis. A third step involves discussing what additional information the evaluation requires and the methods that may be used to collect it. Clearly there are roles for both quantitative and qualitative methods; indeed, there is an apparent preference for combining them to develop a more complete picture of how a collection is used. Methods that are commonly deployed are listed briefly below alongside some of their key strengths and weaknesses:
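By way of illustration, transaction log analysis is one commonly deployed quantitative method of this kind. The following is a minimal sketch, assuming web server logs in the Apache Combined Log Format; the log file name, collection path prefix, and list of robot user-agent hints are illustrative assumptions, not recommendations from this study.

import re
from collections import Counter

# Apache Combined Log Format:
# host ident authuser [time] "request" status bytes "referer" "user-agent"
LOG_LINE = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+)[^"]*" '
    r'(?P<status>\d{3}) \S+ "[^"]*" "(?P<agent>[^"]*)"'
)

# User-agent fragments that suggest crawler rather than patron traffic
# (a hypothetical short list; a real analysis would maintain a fuller one).
ROBOT_HINTS = ("bot", "crawler", "spider", "slurp")

def summarize(log_path, collection_prefix="/collections/"):
    """Tally successful patron requests for items in one digital collection."""
    hits = Counter()   # requests per item
    hosts = set()      # distinct requesting hosts, a rough proxy for visitors
    with open(log_path) as log:
        for line in log:
            m = LOG_LINE.match(line)
            if m is None:
                continue  # skip malformed lines rather than guess at them
            if m.group("status") != "200":
                continue  # count only successfully delivered pages
            if any(h in m.group("agent").lower() for h in ROBOT_HINTS):
                continue  # discount apparent use that is really robot traffic
            path = m.group("path")
            if path.startswith(collection_prefix):
                hits[path] += 1
                hosts.add(m.group("host"))
    return hits, hosts

if __name__ == "__main__":
    hits, hosts = summarize("access.log")  # hypothetical log file name
    print(f"{sum(hits.values())} requests from {len(hosts)} distinct hosts")
    for path, count in hits.most_common(10):
        print(f"{count:6d}  {path}")

Discounting likely crawler traffic is a small instance of the skeptical analysis discussed next: raw hit counts can suggest heavy patron use that is in fact robot activity.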
Having collected data, it is important to approach its analysis with a skeptic's eye, constantly weighing alternative explanations for any usage patterns that begin to emerge. It is equally important to follow up on results: to use them, for example, to improve existing collections or to incorporate desirable features into new collections, whether these are under construction or still in the design phase. Another reason to follow up quickly is that digital collections are often dynamic, changing in their content or functionality in response to stimuli that have nothing to do with their evaluation. As a result, assessment results may become outdated, even unhelpful, with time. There are other ways to follow up on an evaluation; results can be used, for example, to promote a library's genuine commitment to serving patrons and meeting their needs.

While examining the fabric of their online collections, DLF members have thus far shown different tastes. Collections and services come in many sizes and shapes, and planning, co-operation, and follow-up are the indispensable tools for tailoring these resources to a perfect fit. The digital collections that are admired most are likely to be the ones that are evaluated and altered to meet their users' needs.

The study of methods that are effective in assessing the use and usability of online collections is ongoing. The DLF welcomes further input about institutional experiences from both members and non-members; sensitive information will be kept confidential. To learn more about this initiative and how to participate, please visit Usage, Usability and User Support: http://www.diglib.org/use/useframe.htm. Alternatively, contact Denise Troll (Carnegie Mellon University), who from November will be leading our efforts in this area as a DLF Distinguished Fellow.