SPRING FORUM 2001, 4-6 May
Extended Program
Abstract pending
Dale Flecker, Associate Director for Planning and Systems in the Harvard University Library, Harvard University
Increasingly, scholarly journals are published electronically. What will it take to keep them accessible electronically in perpetuity? Can the property rights of publishers, the access responsibilities of libraries, and the reliability assurances that scholars need be reconciled in agreements to create archives of electronic journals?
In early 2000, the DLF, along with CLIR and CNI, began to address these questions with a view to facilitating practical experimentation in digital archiving. In a series of three meetings, one each for librarians, publishers, and licensing specialists, the groups reached consensus on the minimum requirements for e-journal archival repositories.
Building on that consensus, the Andrew W. Mellon Foundation solicited proposals from selected research libraries to participate in a process designed to plan the development of e-journal repositories meeting those requirements. Seven major libraries have now received grants from the Foundation, including the New York Public Library and the university libraries of Cornell, Harvard, MIT, Pennsylvania, Stanford, and Yale.
This session will introduce some of the key issues that arise from these initiatives and demand discussion and input from the broader library community.
A further breakout session has been reserved for those who would like to follow up the discussion in greater detail.
Daniel Greenstein, Director, DLF
The past six months have been productive ones for the DLF. This session will provide an overview of initiatives recently completed and underway. It will also reflect upon and invite discussion about possible future directions for an organization that is assessing its future as it nears completion of its initially agreed term.
Panelists: Ricky Erway, Research Libraries Group; Carolyn Larson, Library of Congress; Asunta Pisani, Stanford University
Decisions taken when creating or acquiring access to a digital information resource directly affect how, at what cost, and by whom the resource will be used, maintained, and supported. Accordingly, libraries that are building digital collections are developing formal review procedures that help assess the ramifications of their digital collection development decisions. In order to capitalize on this experience, the DLF commissioned three studies that have assembled and reviewed existing practices, highlighting the most effective practices that emerged from the review. The studies (now available as pre-print drafts by following the links below) focus on three distinctive kinds of digital collections:
Participants in this "working session" of the forum will be invited by panelists to review key issues arising from the reports and to help formulate any practical next steps that may be taken by the DLF or other bodies to build on their recommendations. Reports will be available for participants' inspection prior to the session.
Kay Kane, Reference and Consultation Services Team Leader, University of Minnesota Libraries
The University of Minnesota Libraries has created complementary online instructional and reference tools that guide students through the complexities of locating and using both print and digital information resources. This Information Literacy Toolkit includes:
Research QuickStart (http://research.lib.umn.edu). Research QuickStart is a wizard-like tool that generates dynamic web pages for over two hundred subjects. Students first select a subject and then access a selective list of subject resources chosen by librarians who are information experts in their discipline. QuickStart is driven from a central database constructed so that content can be reused in other University Libraries web tools, such as our web gateway and library course pages.
QuickStudy: A Library Research Guide (http://tutorial.lib.umn.edu). QuickStudy is a web-based tutorial that teaches students the information literacy skills necessary for research in the U of MN Libraries and on the Web. QuickStudy's eight modules contain lessons on a variety of topics, including designing a research strategy and evaluating web sites. QuickStudy is also database driven, so that modules or lessons can be isolated and reused in other instructional contexts.
CourseLib: An Authoring Tool for Creating Customized Library Pages (http://courses.lib.umn.edu). CourseLib generates customized web pages that support the library research components of academic courses. The CourseLib tool is unique because it provides an easy authoring environment, does not require knowledge of HTML, and uses templates to build customized course pages. The objective is to enable library staff to create customized pages in the most efficient and scalable way possible by linking to and reusing descriptive data stored in other Library databases such as Research QuickStart and QuickStudy.
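The common thread in the toolkit is that one central store of resource descriptions drives several presentation layers. The following minimal sketch (not the Minnesota implementation; the table, field, and function names are hypothetical) illustrates that pattern: the same database rows generate both a QuickStart-style subject page and a CourseLib-style course page.

```python
# Hypothetical sketch of a database-driven subject/course page generator.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE resources (
    subject TEXT, title TEXT, url TEXT, description TEXT)""")
conn.executemany(
    "INSERT INTO resources VALUES (?, ?, ?, ?)",
    [("Biology", "Biological Abstracts", "http://example.org/ba", "Index of life-science literature"),
     ("Biology", "Web of Science", "http://example.org/wos", "Citation index"),
     ("History", "America: History and Life", "http://example.org/ahl", "U.S. and Canadian history index")])

def subject_page(subject):
    """Render a QuickStart-style page for one subject from the shared database."""
    rows = conn.execute(
        "SELECT title, url, description FROM resources WHERE subject = ?", (subject,))
    items = "\n".join(f'<li><a href="{u}">{t}</a> - {d}</li>' for t, u, d in rows)
    return f"<h2>{subject}: starting points</h2>\n<ul>\n{items}\n</ul>"

def course_page(course, subjects):
    """Render a CourseLib-style page by reusing the same records for several subjects."""
    body = "\n".join(subject_page(s) for s in subjects)
    return f"<h1>Library resources for {course}</h1>\n{body}"

print(course_page("HIST 1001", ["History", "Biology"]))
```

Because the HTML is generated from the database at request time, updating a resource record once updates every subject and course page that reuses it.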
Aaron Trehub, Director, Illinois Researcher Information Service (IRIS), University of Illinois at Urbana-Champaign
Personalized library services are a hot topic. The past few years have seen a number of articles on services of the MyLibrary type, and the December 2000 issue of Information Technology and Libraries (ITAL) was devoted exclusively to this theme. Unfortunately, libraries that might be interested in offering personalized services often lack the programming resources to build them, while libraries that have the necessary resources wind up writing their own code from scratch. This seems like a textbook case of reinventing the wheel. This presentation makes the case for establishing an archive of open-source software for personalized library services--and, perhaps, other library services as well.
Max Marmor, DLF Distinguished Fellow and Head, Art Library, Yale University
Abstract to follow
This session will introduce two important initiatives taking a closer look at services that register the existence, location, and other information about digital information objects. Both anticipate a possible role for the DLF or other library consortia. The session is being convened to introduce the work, evaluate the directions it anticipates, and consider implementation and other next steps for the DLF.
Abby Smith, Director of Programs, CLIR
Written by a task force of scholars, librarians, and archivists, this draft report addresses the critical need to preserve the valuable evidence held in research institutions and other libraries. The report reviews the state of preservation for the major media (print, audio-visual, and digital) and presents a number of recommendations pertaining to the stewardship of our cultural heritage. The report and its recommendations have potentially profound implications for the library community as a whole and for leading research and digital libraries in particular. They also anticipate the development of community-managed registry services as essential infrastructure, without which the community may be unable to manage the nation's cultural heritage systematically and cost-effectively over the longer term.
Session participants are encouraged to read the report before attending this working session of the forum. Since the report is long (nearly 70 printed pages), participants may wish to concentrate on Section 3, which will be particularly relevant to our discussion.
The report may be found at: http://www.clir.org/activities/details/artifact-docs.html
Registering digitized monographs and journals
John Price Wilkin, University of Michigan
This session reports on a meeting convened by the DLF in April 2001 to discuss the purposes, potential uses, and essential requirements of a registry service that records information about digitized monographs and journals, and to outline a process that might see such a service developed at least to some prototype stage. Participants in the meeting will discuss the uses to which such a registry service might be put (e.g. avoiding duplication of digitization effort, enabling access to digitized content), whether registered objects would have to meet some minimum or benchmark standard (and if so whether such a standard could be agreed), and whether a registry can be built upon existing services or will require new effort. Organizational, funding, and service sustainability issues will also be considered.
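As a concrete point of reference for the discussion, the sketch below illustrates the kind of record such a registry might hold and the duplicate-check it would make possible. This is a hypothetical illustration only; it does not reflect any design from the April 2001 meeting, and the field names are invented.

```python
# Hypothetical registry entry for a digitized monograph or journal.
from dataclasses import dataclass

@dataclass(frozen=True)
class RegistryEntry:
    identifier: str        # e.g. a catalog record number or ISSN for the source work
    title: str
    digitizing_institution: str
    access_url: str
    meets_benchmark: bool  # whether the digitization meets an agreed minimum standard

registry = [
    RegistryEntry("ocm00000001", "Example Gazetteer (1895)", "Example University",
                  "http://example.edu/gazetteer", True),
]

def already_digitized(identifier):
    """Check the registry before scanning, to avoid duplicating digitization effort."""
    return [entry for entry in registry if entry.identifier == identifier]

print(already_digitized("ocm00000001"))
```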
Rosalie Lack, Evaluation and Instruction Analyst, California Digital Library
John Kupersmith, Service Design Analyst and project manager for MyLibrary@CDL, California Digital Library
As a "co-library" of the University of California system, the California Digital Library (CDL) has relied on extensive input from our member campuses as a basis for decisions on collections, policies, and design issues. Originally, such input was primarily through consultation with formal advisory bodies and analysis of use statistics and user comments, supplemented as necessary with focus group research.
In addition to drawing on these important sources, the CDL is actively broadening its evaluation methods to include usability testing techniques. In 1999, we created the position of Evaluation and Instruction Analyst, whose responsibilities include working with our campus libraries to design and conduct usability tests. In evaluating "MyLibrary@CDL", a prototype user-customizable interface, we combined focus groups with user observations, and the resulting information led to a major decision. Our preliminary evaluation of Counting California, another CDL prototype, successfully used heuristic analysis to focus attention on areas for further development.
Through these projects we have learned how to conduct these evaluations in a collaborative environment for the most effective results, what kinds of staffing and expertise are needed, and how to interpret the information gathered. We hope to employ similar techniques in a series of major projects, including replacement of the MELVYL system interface.
Gale Halpern, Cornell Institute for Digital Collections
How faculty and students will use digital image collections has been of interest since at least the start of the MESL program. During the fall semester of 2000, three Cornell undergraduate classes used images from the Herbert F. Johnson Museum collection and Insight software from Luna Imaging as part of the course curriculum. This paper will describe the different ways each class used the image database, report on the results of an evaluation survey taken at the end of the semester, and discuss how staff from CIDC and the Herbert F. Johnson Museum provided instructional support on use of the image database and delivery software.
Our experience confirms the findings of some earlier studies of the use of digital images. Whether the digital library of images was useful in each class depended on:
Using Web Stats and Query Logs to Improve Your Website
Kody Janney, Digital Initiatives Coordinator, University of Washington Libraries
Do the offerings on your website match what your users are looking for, and if not, why not? Are your users getting to the pages they really want? Why is the information at the bottom of query log files sometimes as important as that at the top? And once you know those facts, what do you do with them?
Information professionals have been using web statistics for years to learn about their web sites. The statistics traditionally collected and reviewed include the number of visits, the length of visits, and the top pages visited. But a great deal of other information buried in your stats is commonly underutilized. This presentation focuses on query terms, some of the most important of this overlooked data. It will show you how, informed by web stats and common sense, you can create a better experience for your users by meeting their expectations instead of forcing them to meet yours.
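As a rough illustration of the kind of analysis the talk describes, the sketch below tallies the most frequent search terms and the queries that returned nothing at all (the "bottom of the log" that is often the most revealing). It is not the University of Washington's tooling; the tab-separated log format (query, hit count) is an assumption.

```python
# Hypothetical query-log analysis: frequent queries and zero-result queries.
from collections import Counter

def analyze_query_log(path):
    queries, zero_hits = Counter(), Counter()
    with open(path, encoding="utf-8") as log:
        for line in log:
            query, _, hits = line.rstrip("\n").partition("\t")
            query = query.strip().lower()
            if not query:
                continue
            queries[query] += 1
            if hits.isdigit() and int(hits) == 0:
                zero_hits[query] += 1   # users looked for this and found nothing
    return queries.most_common(20), zero_hits.most_common(20)

if __name__ == "__main__":
    top, misses = analyze_query_log("search_queries.log")
    print("Most frequent queries:", top)
    print("Most frequent zero-result queries:", misses)
```

The zero-result list points directly at gaps between what users expect to find and what the site actually offers, which is precisely where page titles, navigation labels, or content may need to change.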
Jerry McDonough, Digital Library Development Team Leader, New York University
Libraries' success depends critically on the metadata they create and maintain for the resources they provide to patrons. As holdings of digital materials grow, this dependence increases, and the forms of metadata required for successful operations expand. Compounding the need to produce and maintain more extensive metadata is the need to establish standards for metadata exchange if libraries are to collaborate on the development of software tools for working with digital resources and to achieve interoperability between their collections.
This session will report on a DLF initiative that seeks to develop a standard formalism for recording structural, administrative, and technical metadata for digital objects. Although in its early stages, the initiative has produced a number of formative recommendations, including an alpha-draft XML encoding scheme for the proposed standard. These will be presented for review and comment. Participants will also be invited to comment on plans for developing and reviewing the standards work as it proceeds.
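To make the three metadata categories concrete, the sketch below generates a toy XML record for a digitized book that carries technical details, a file inventory, and a structural map. It is a hypothetical illustration only; the element names are invented for the example and are not taken from the initiative's draft encoding scheme.

```python
# Hypothetical structural/administrative metadata record for a digitized book.
import xml.etree.ElementTree as ET

obj = ET.Element("digitalObject", id="book001")

admin = ET.SubElement(obj, "adminMD")                        # administrative/technical metadata
ET.SubElement(admin, "scanner").text = "flatbed, 600 dpi"

files = ET.SubElement(obj, "fileInventory")                  # the files that make up the object
for n in (1, 2):
    ET.SubElement(files, "file", id=f"img{n}", href=f"page{n:04}.tif", mimetype="image/tiff")

struct = ET.SubElement(obj, "structMap")                     # structural metadata
chapter = ET.SubElement(struct, "div", type="chapter", label="Chapter 1")
for n in (1, 2):
    page = ET.SubElement(chapter, "div", type="page", order=str(n))
    page.append(ET.Element("fileRef", fileID=f"img{n}"))     # link structure to files

print(ET.tostring(obj, encoding="unicode"))
```

The point of a shared formalism is that any repository or viewer that understands the scheme can reconstruct the object (which files exist, how they are ordered, how they were produced) without local knowledge.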
Daniel McShane, Electronic Cataloging and Metadata Coordinator, Alderman Library, University of Virginia
The University of Virginia Library is building a digital library management and delivery system based on the FEDORA specification developed by Carl Lagoze and Sandy Payette at Cornell University (see "Virginia Dons FEDORA: A Prototype for a Digital Repository" in the July/August 2000 issue of D-Lib Magazine, http://www.dlib.org/dlib/july00/staples/07staples.html). As this development progresses, we must face the task of describing and auditing the objects populating the repository and providing the means by which they can function properly. Our task is complicated by a number of factors: the sheer volume of our digital resources; the proliferation of metadata syntaxes, semantics, and vocabularies; and a staff that has not significantly increased in size and is untrained in the new markup technologies.
This presentation will examine the issue of metadata harmonization in the context of technologically mediated metadata enhancement, and will include an overview of the work currently being done by the Digital Library Research Group at the University of Virginia to create a "bionic cataloger."
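The sketch below suggests, in schematic form, what machine-assisted harmonization can mean in practice: records arriving in one local scheme are mapped onto a common element set, and anything the crosswalk cannot place is queued for a human cataloger. This is not the Virginia "bionic cataloger"; the crosswalk table and function are invented for illustration, though the MARC-like field tags used are standard ones.

```python
# Hypothetical crosswalk from MARC-like fields to Dublin Core elements.
CROSSWALK = {
    "245a": "title",        # title statement
    "100a": "creator",      # main entry, personal name
    "260c": "date",         # date of publication
    "520a": "description",  # summary note
}

def harmonize(record):
    """Map a dict of local fields onto Dublin Core; unmapped fields are kept for review."""
    mapped, leftover = {}, {}
    for field, value in record.items():
        target = CROSSWALK.get(field)
        (mapped if target else leftover).setdefault(target or field, []).append(value)
    return mapped, leftover

marc_like = {"245a": "Walden", "100a": "Thoreau, Henry David", "260c": "1854", "999x": "local note"}
dc, review_queue = harmonize(marc_like)
print(dc)            # {'title': ['Walden'], 'creator': [...], 'date': ['1854']}
print(review_queue)  # {'999x': ['local note']}  -- flagged for the human cataloger
```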
Tim Jewell, Head, Collection Management Services, University of Washington
Some larger libraries have been developing local databases to help manage their collections of licensed electronic resources. These databases often supplement or work in conjunction with local integrated online systems, and typical functions include the generation of gateway lists of databases and e-journals and keeping track of license terms, renewal dates, vendor and local contact information, and order status. These databases have much in common with one another, and there may be benefits to greater standardization and shared development work. As a step in this direction, an inventory was done of the functions and data elements found in a dozen of these local databases. The results of the inventory will be presented, along with a summary of recent efforts to promote discussion of functions, data elements, and shared definitions through a web hub and listserv.
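To give a sense of the data elements and functions involved, the sketch below models a single licensed-resource record and one typical task: flagging licenses coming up for renewal. The field names are invented for illustration and do not reflect the inventory's findings or any particular library's database.

```python
# Hypothetical record for a licensed electronic resource, with a renewal check.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class LicensedResource:
    title: str
    url: str
    vendor: str
    local_contact: str
    renewal_date: date
    license_terms: str      # e.g. permitted users, interlibrary loan, coursepack rights
    order_status: str       # e.g. "active", "on order", "cancelled"

def due_for_renewal(resources, within_days=90):
    """Return resources whose licenses must be renewed within the given window."""
    cutoff = date.today() + timedelta(days=within_days)
    return [r for r in resources if r.renewal_date <= cutoff]

catalog = [
    LicensedResource("Example Journal Archive", "http://example.org/eja", "Example Vendor",
                     "serials@library.example.edu", date.today() + timedelta(days=30),
                     "campus users only; no ILL", "active"),
]
print([r.title for r in due_for_renewal(catalog)])
```

Agreeing on even this small set of shared element definitions is what would allow libraries to exchange records or co-develop tools rather than each maintaining an incompatible local database.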
Bill Kehoe, Information Technology Services, Cornell University
"Dirty, dirty-and bad documentation." A librarian at Cornell, rolling her eyes, said those words about the possibility of the library serving up the many kinds of digital collections created by local researchers. In addition to poorly organized content and meager documentation and metadata, some of the other issues in acquiring, preserving, and providing long-term access to stand-alone digital collections are the lack of interoperability, the political environment within departments and disciplines, and the competition for funding. Moreover, researchers frequently want to keep their data semi-private and restrict access to their sites for as long as possible. After they have used the content as thoroughly as they can, they let the sites disappear.
A group of librarians and IT professionals at Cornell's Albert R. Mann Library is working to create a comprehensive solution to these problems that will ensure the survival of the researchers' content. We are developing an organizational framework in which researchers put their sites in a living trust, with the library as trustee. The presentation will illustrate the details of the trust by describing the relationship we are building with a research group in childhood language acquisition--how we are offering the library's expertise in creating digital libraries in exchange for the long-term right to archive and distribute the group's sound files and research data. Furthermore, the presentation will explore some of the opportunities we envision, some of the obstacles we are encountering, and some thoughts on the trust's impact on collection development and information technology services.
The University of Illinois Library has focused on the development of digital library outreach programs that involve collaboration with other libraries, museums, and archives, to bring digital content to K-12 schools. Designed to create model collaborative environments, the Digital Cultural Heritage Community (DCHC) and the "Teaching with Digital Content-Describing, Finding and Using Digital Cultural Heritage Materials" (TDC) projects focus on the digitization of materials from Illinois and Connecticut museums, archives and libraries for integration into elementary and middle school curricula. During the DCHC, the participants built a framework for digitizing primary source materials on common teaching themes, according to the Illinois State Board of Education Learning Standards, and provided free access to those materials, organized through a simple search interface. The TDC project is now focusing on using those materials and other more extensive digitized materials from larger partner institutions in school classrooms throughout Illinois. One of the central aims is to build a practical foundation for integrating digitization into the mainstream of digital library development in cultural heritage institutions.
The database design is based on the Dublin Core (DC) metadata scheme. The project participants made a collective decision to use DC as the metadata framework but implemented a number of qualifications that customize the metadata in useful ways. The qualified DC can easily be mapped into simple DC, as was demonstrated in the use of the dataset for the alpha-testing phase of the Open Archives Initiative protocols. Content selection for the database is driven by the teachers' social sciences curriculum units and the broad historical themes identified within them. Collaboration plays a major role in the success of both projects. This presentation will discuss areas that require significant collaboration, such as project administration; database content selection and accessibility of materials; digital capture, archiving, and delivery; metadata schemes and formats; database and search interface design; and intellectual property agreements. I will conclude with an update on the current status of the Open Archives Initiative Metadata Harvesting Protocol.
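The mapping from qualified to simple DC follows the Dublin Core "dumb-down" principle: drop the qualifier and keep the parent element, so that a simple-DC consumer (such as an OAI metadata harvester) can still use the record. The sketch below illustrates the idea; the sample record is invented and is not drawn from the project database.

```python
# Hypothetical "dumb-down" mapping of qualified Dublin Core to simple Dublin Core.
def dumb_down(qualified_record):
    """Collapse qualified DC names ("element.qualifier") to their unqualified elements."""
    simple = {}
    for name, values in qualified_record.items():
        element = name.split(".", 1)[0]          # "date.created" -> "date"
        simple.setdefault(element, []).extend(values)
    return simple

qualified = {
    "title": ["One-room schoolhouse, Champaign County"],
    "date.created": ["1910"],
    "coverage.spatial": ["Illinois"],
    "subject.LCSH": ["Rural schools--Illinois"],
}
print(dumb_down(qualified))
# {'title': [...], 'date': ['1910'], 'coverage': ['Illinois'], 'subject': [...]}
```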
Maria Bonn, Head, Scholarly Publishing Office, Digital Library Initiative, University of Michigan
Abstract pending
Dale Flecker, Associate Director for Planning and Systems in the Harvard University Library, Harvard University
Daniel Greenstein, Director, DLF
Follow-up discussion of issues as presented in Friday's plenary session.
Daniel Greenstein, DLF Director
Nancy Elkington, Research Libraries Group Member Programs and Initiatives
A stated objective of the DLF is to leverage its members' reputations and expertise in order to encourage adoption of those practices that support the development of reliable, persistent, and interoperable online collections and services.
With this mandate in view, the DLF has given considerable attention to the development of good-practice guidelines in a variety of areas. It has, for example, developed and recommended guidelines for the use in libraries of the Text Encoding Initiative; a model license for negotiating access to commercial electronic content; guidelines for producing digital images; and strategies for developing sustainable digital collections comprising, respectively, commercially acquired content, third-party public-domain Internet content, and digital surrogates for library holdings.
It is also currently working on standards for representing technical, administrative, and structural metadata associated with digital objects; criteria for assessing the "quality" of digital images; effective methods for assessing the use of online collections and services; benchmark standards for preservation reformatting; implementation guidelines for the Visual Resources Association's Core Categories for the description of works of art; and the Open Archives Initiative's metadata harvesting protocol.
Given the level, quality, and perceived importance of this work, the DLF Board agreed at its meeting in September 2000 to formally review, adopt, endorse, and promote good practice recommendations as they emerge from DLF-funded initiatives.
This working session is designed as a panel discussion and is intended to promote discussion about how, in practical terms, the DLF can implement this directive:
Nancy Elkington from the Research Libraries Group (RLG) will join the panel and seed discussion with introductory remarks about how RLG approaches a similar set of issues: how it identifies variant and common practices; establishes and documents best practices; participates in and/or leads the development of national and international standards; endorses its own and other standards and implements them at RLG and among RLG member institutions; and works to maintain (revise or drop) adopted standards through time.
Cecily Johns, Project Director, Collection Management Initiative, University of California, Santa Barbara
Collection Management Strategies in a Digital Environment is a two-year grant project awarded to the University of California on January 1, 2001 and funded by the Andrew W. Mellon Foundation. The objectives of the grant are to:
The project will be carried out in three phases: a six-month period of consultation and decision-making; implementation of the experiment and gathering of data (12 months); and finally assessment of the findings of the study, during which policies and strategies for archiving and managing collections in both print and digital form will be developed.
The presentation will address usage measurement, how to determine user behavior and preferences when users must rely on the electronic version for their research, and how we plan to incorporate user needs assessment into the design of the project.
Michael Biggins, Head, Slavic and East European Section, University of Washington Libraries
New programs sponsored by the NSF, the U.S. Department of Education, and others have begun soliciting proposals for assembling critical collections of foreign-area information using innovative technologies. These programs provide both libraries and academic programs with an important opportunity to influence the shape of things to come in the sphere of area studies information, which lends itself particularly well to digital resource development.
The Central Eurasian Information Resource (CEIR), operating out of a consortium of colleges in Western Washington State, had its origins in an attempt to assemble certain categories of current statistical data for the Russian Federation within a GIS framework. More recently the project has adapted the geographical framework of its statistical resources to organize other collections of information, including text, periodical indexing, images, and curricular guides. An innovative main interface allows users to identify their geographic and subject interests and level of expertise in order to retrieve resources most appropriate to their needs.
The remoteness of the CEIR's subjects, the relative or even absolute lack of equivalent information resources in any format, the flexibility that copyright law allows for reformatting boundary and government-produced statistical information, and the omnipresent geographical element describing nearly every unit of information in the resource: these are all factors which have worked to the CEIR project's advantage and which can work in favor of digital resource development for many areas of the world.
The CEIR has involved representatives of a crucial user group--faculty--in the design process. They have provided suggestions for and feedback on content and design, and are developing courses that both draw on and supplement CEIR's online resources. Faculty involvement from the early stages is ensuring that the CEIR will address real teaching and research needs in a user-friendly way.
Joan Ruelle, University of Virginia
As more libraries are creating, purchasing and maintaining digital collections, librarians are increasingly faced with collection development decisions about digital acquisitions.
"Supporting Digital Scholarship" is a collaborative project funded by the Andrew W. Mellon Foundation between the Digital Library Research & Development department of the Library and the Institute for Advanced Technology in the Humanities (IATH) at the University of Virginia. The Library is working collaboratively with faculty scholars to address the scholarly use of digital primary resources, library adoption of "born digital" scholarly research, and the co-creation of digital resources by scholars, publishers and libraries.
This session will present an overview of UVa's development of a process and policy to guide the collection of digital content in the University Library.
Joyce Ray
Abstract to follow