Summary of the Projects and Their
Progress
Contents
Cornell University Library: Project
Harvest
Harvard University Library: A Study of
Electronic Journal Archiving
MIT Libraries: Planning for an Archive of Dynamic
E-Journals
New York Public Library: Archiving Performing
Arts Electronic Journals
University of Pennsylvania Library: Archiving and
Preserving E-Journals
Stanford Libraries: LOCKSS, A Distributed
Digital Preservation System
Yale Library and Elsevier Science: A Digital
Preservation Collaboration
Cornell University
Library: Project Harvest
The Project as Planned
Can a major research library effectively become an archive for
numerous electronic journals, published by different publishers,
in one field? Because Cornell is a land-grant university, its
library has spearheaded national initiatives to ensure the
preservation of agricultural literature. It also has developed
expertise in creating and preserving digital library material.
Now it will try to organize, design, and implement a digital
archive for agricultural e-journals.
Project Harvest plans to invite as many as a dozen publishers
of core journals in agricultural science to work with Cornell
staff on a model agreement for archival deposit. The agreement
will define responsibilities of both the archival repository and
those who deposit journals in it, specify access policies
including copyright clearance conditions, and resolve numerous
other pertinent issues.
Simultaneously, the project staff will devise a model
architecture for the e-journal repository, providing for data
ingestion, storage, management, migration, and access. Among
other things, the project will investigate contractual, economic,
and technical implications of whether the repository should be
"dark" (the content preserved on some stable format with minimal
functionality, essentially for emergency use) or "living" (the
content no longer maintained by the publisher but regularly
accessible from the archive).
The project will also deal with questions of how to ensure
scholarly acceptance (trust and use), how to develop the
repository organizationally, how to manage its growth, and,
ultimately, how to finance it. Not least, the staff is working on
digital-repository certification standards, which Cornell would
plan to meet and which, along with the prospective model agreement
and architecture, could eventually help
others.
Project Developments as of 5 May 2001
Project Harvest has hired the publisher-relations specialist
called for in its plan and has worked its way through early
housekeeping requirements. Additionally, it has established
criteria for selecting partners, identified agricultural journals
with which it wishes to work at the outset, and started
contacting publishers to arrange meetings. Moreover, Project
Coordinator Peter Hirtle and his colleagues have begun
consideration of attributes for an ideal system. They expect to
make more information on their work available soon on a Web site
they are drafting.
Harvard University
Library: A Study of Electronic Journal Archiving
The Project as Planned
Can a major research library arrange with multiple publishers
to archive many of the varied journals and databases that it
provides in its electronic gateway? The Harvard University
Library's list of such resources exceeds 2,000. Harvard also
receives paper copies, primarily for preservation, but the Library
does not regard this costly duplication as sustainable. Now it
will plan an archive to preserve journals electronically, based
on infrastructure for the creation, storage, and delivery of
digital library collections in which it has invested heavily in
the past two years.
In its planning, Harvard will analyze a two-part question:
Which journals, and which components of them, will it archive?
Answering will involve arranging with at least one journal
publisher to provide a significant volume of material to test the
scaling of the archive, working with that publisher (and possibly
also with an actively publishing scholarly society) to develop a
model for an archiving relationship, and selecting titles to
archive from the list of journals that Harvard now acquires only
in digital copies.
Harvard's plan includes drafting a policy on the components
part of the question: will the archive contain only article texts
from journals, or also their covers, ads, letters to the editor,
book reviews, and digital links? The project also will
investigate technical requirements for accession automation,
archival formatting, on-going validation, bibliographic control,
naming systems, access management, storage strategy, and output
facilities.
The project will not now negotiate archiving licenses, but
will explore what publishers are willing to provide and under
what arrangements. A major concern is cost: designing the
archiving process to minimize marginal costs, developing a model
for cost distribution, and exploring long-term options for
financial support.
Project Developments as of 5 December 2001
Marilyn Geller, project manager of the Harvard project,
provides the following report: Since the last update this
summer, Harvard has completed a first round of business meetings
and technical meetings with our publisher-partners, Blackwell,
John Wiley, and University of Chicago Press. We have also
received a report from
Inera, Inc. on the feasibility of developing a common archival
article DTD [document type definition].
Our business meetings have helped us refine the mission of the
archive as a set of services and a logical organization for the
preservation of the significant intellectual content of a journal,
independent of the form in which that content was originally
delivered. Substantive discussions have also taken place around
the issue of the archive's stakeholders, including researchers,
authors, societies, publishers, and subscribers as represented by
libraries. This stakeholder community, however it is organized,
would have the opportunity to review and comment on policies and
procedures for the development, administration, ongoing
maintenance, and financing of the archive. Policies regarding
access and financing of the archive continue to evolve.
The project's technical team has met with each of the
publishers regarding the principles of technical development and
the specifications for ingesting content. The most significant
technical development in the last few months has been the
delivery of the Inera study on the feasibility of creating a
common archival DTD that would allow the archive to receive
material from all publishing partners tagged in the same manner.
Ten publishers participated in this study by contributing their
DTDs, documentation, and samples for review. The significant
conclusions drawn from this study are that it is possible to
create a common archival article DTD that would represent the
intersection and the union of several existing publisher DTDs and
that thorough documentation and quality assurance tools would be
essential to ensure that conversion is successful. Because this
study has so much potential for resolving ingest, storage, and
delivery issues, it is being made available to the entire
scholarly communications community. We are optimistic that this
will encourage discussion and progress in the technical aspects
of e-journal preservation.
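To make the quality-assurance point concrete, the following minimal sketch shows one way a converted article might be checked against a common archival DTD before acceptance. It assumes the third-party lxml library and hypothetical file names; it is an illustration, not anything produced by the project or by Inera.

from pathlib import Path
from lxml import etree  # assumed available; not part of the project's toolset

def validate_article(xml_path: Path, dtd_path: Path) -> list[str]:
    """Validate a converted article against the common archival DTD and
    return any error messages reported by the parser."""
    dtd = etree.DTD(str(dtd_path))
    tree = etree.parse(str(xml_path))
    if dtd.validate(tree):
        return []
    return [entry.message for entry in dtd.error_log]

if __name__ == "__main__":
    article = Path("converted-article.xml")      # hypothetical converted article
    archival_dtd = Path("common-archival.dtd")   # hypothetical common DTD
    if article.exists() and archival_dtd.exists():
        errors = validate_article(article, archival_dtd)
        print("valid" if not errors else "\n".join(errors))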
In the coming months, we hope to finalize the conceptual
agreement with our publishing partners, document technical
development, operations, and staffing of the archive, and refine
the business model that will sustain this archive over time.
Project Developments as of 31 August 2001
In the past few months, both the Steering Committee and the
Technical Team of the Harvard E-Journal Archiving Project have
made significant progress in refining their broad understanding
of the research topic and exploring the detailed implications of
this understanding. As a whole, Project Manager Marilyn Geller
reports, the project has selected and begun to discuss the
business and technical models with three publishers as partners
in this project: Blackwell, University of Chicago Press, and
Wiley.
Discussions of the business model have been centering on
the nature of access to the archive; specifically, the project
and the publisher partners are exploring who should have access
to the archives, when, under what circumstances, and how.
Initially, the project proposed three access "trigger" events:
(1) when the content is no longer available on-line, (2) when the
title ceases to be published, and (3) after a defined amount of
time has passed; and it is the third type of trigger event that
is generating comment and being refined.
The project also has delved into the issue of costs to
understand what elements of the process of building and
maintaining the archive are sensitive to size or quantity and how
this might influence a model for sustainable financing of the
archive. Project staff envision that both the number and kind of
digital objects to be deposited will increase over time and may
be difficult to estimate. To a lesser extent, the size of the
archived content will have an effect on storage costs.
Additionally, the cost of migrating formats will be dependent on
the number of digital objects to be migrated, the frequency of
migration, and the technology available to accomplish the
migration.
Harvard is basing its archive on the architectural framework
provided by the Open Archival Information System (OAIS) Reference
Model. Under the OAIS model, material from a content producer is
transmitted to the archive in a form called a Submission
Information Package, or SIP. We have put together a tentative draft proposal for the technical
specifications of the SIP that defines acceptable data formats,
file naming conventions, bibliographic and technical metadata,
and so forth. We are scheduling a round of meetings with
technical representatives of our publishing partners to discuss
and refine this proposal.
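By way of illustration only, checks of the kind such a specification calls for might look like the sketch below. The accepted formats, naming rule, and required metadata files shown here are assumptions standing in for whatever the final specification defines; they are not the draft proposal itself.

import re
from pathlib import Path

ACCEPTED_SUFFIXES = {".xml", ".pdf", ".tif", ".jpg"}        # assumed format list
NAME_PATTERN = re.compile(r"^[a-z0-9][a-z0-9._-]*$")        # assumed naming rule
REQUIRED_METADATA = {"descriptive.xml", "technical.xml"}    # assumed metadata files

def check_sip(sip_dir: Path) -> list[str]:
    """Return a list of problems found in a submission information package."""
    problems = []
    files = [p for p in sip_dir.rglob("*") if p.is_file()]
    present = {p.name for p in files}
    for required in REQUIRED_METADATA - present:
        problems.append(f"missing metadata file: {required}")
    for p in files:
        if p.suffix.lower() not in ACCEPTED_SUFFIXES:
            problems.append(f"unacceptable format: {p.name}")
        if not NAME_PATTERN.match(p.name.lower()):
            problems.append(f"non-conforming file name: {p.name}")
    return problems

if __name__ == "__main__":
    for issue in check_sip(Path("incoming_sip")):   # placeholder directory
        print(issue)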
One of the key ideas we are exploring on the technical side is
whether it is practical to design a common XML DTD that will
reasonably represent the intellectual content of archival
e-journal articles. Such a common DTD would simplify the work of
gathering content from a variety of publishers using different
DTDs. In this study, we have contracted with Inera because of
their substantial background in this area and will look at the
article DTDs being used by our publishing partners as well as a
sampling of other DTDs representing large volumes of content and
interesting elements. After determining the common elements of
these DTDs, we hope to analyze the usefulness of this approach,
paying attention to what information is common to all DTDs and
what information may be lost by using this common DTD.
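As a rough illustration of the comparison involved, the sketch below extracts declared element names from a set of DTD files and reports which names they all share and how many appear in any of them. The file names are placeholders, and a real analysis of this kind, such as the Inera study, would go well beyond element names to content models, attributes, and actual usage.

import re
from pathlib import Path

ELEMENT_DECL = re.compile(r"<!ELEMENT\s+([\w.-]+)")

def element_names(dtd_path: Path) -> set[str]:
    """Return the set of element names declared in a DTD file."""
    return set(ELEMENT_DECL.findall(dtd_path.read_text(encoding="utf-8")))

def compare_dtds(paths: list[Path]) -> tuple[set[str], set[str]]:
    """Return (intersection, union) of element names across several DTDs."""
    sets = [element_names(p) for p in paths]
    shared = set.intersection(*sets) if sets else set()
    combined = set.union(*sets) if sets else set()
    return shared, combined

if __name__ == "__main__":
    dtd_files = [Path("publisher_a.dtd"), Path("publisher_b.dtd")]  # placeholder names
    existing = [p for p in dtd_files if p.exists()]
    if existing:
        shared, combined = compare_dtds(existing)
        print(f"{len(shared)} shared elements of {len(combined)} total")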
MIT Libraries: Planning for an
Archive of Dynamic E-Journals
The Project as Planned
Can a major research library capture and archive new scholarly
"publications" called "dynamic e-journals"? These are Web sites
where scholars share their findings unbound by conventions for
articles published in periodic issues of print journals. Such
"publications" provide dynamically updated, centralized access
points for a wide-ranging variety of research information,
scholarly interaction, and teaching resources in particular
fields of study.
Convinced that "the dynamic e-journals currently published
represent the leading edge of a broad range of dynamic content
that we must learn to capture for future scholars," the MIT
Libraries are taking on the challenge.
The challenge is great because Web-site publications change
content flexibly; some parts of the content are more valuable to
preserve than others; different kinds of content may require
different archival treatment; and "look and feel" is less fixed
for replication. But to the preservation task, MIT brings both
experience with the digital repository infrastructure it already
is building and the close relationship it has with the MIT Press,
which last year launched CogNet, a central repository for
resources for cognitive and brain sciences. The project would
seek relationships with other publishers of "dynamic e-journals"
as well.
In the planning year, the project may experiment with
prototypes of the archiving process, but it will focus on the
partnerships, strategies, and plans needed for the project's
development. This work will include negotiations with publishing
partners, detailed analysis of technical and legal challenges,
identification of key technical and legal hurdles that must be
addressed, and the development of technical specifications. Thus
the project hopes to begin resolving archival issues for these
new dynamic publications while they are still evolving.
Project Developments as of 19 November 2001
The Dynamic E-Journal Archive (DEJA) Project has decided to
interface, as anticipated, with MIT's DSpace project, a digital
repository for MIT's own digital intellectual output. DEJA will
rely on DSpace's repository for long-term storage and
preservation. DEJA, which serves as the archive's engine, will
handle all data coming from publishers, verify the data's
integrity and completeness, track changes, add metadata, and
generally ready SIPs (submission information packages) for DSpace
to ingest. DEJA will similarly call files back from DSpace and
create DIPs (dissemination information packages) in response to
users' requests.
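The sketch below illustrates, under assumed conventions rather than DEJA's actual design, the kind of integrity work described here: checksumming the files received from a publisher and recording them in a simple manifest that the repository could re-verify at ingest.

import hashlib
import json
from pathlib import Path

def sha1_of(path: Path) -> str:
    """Compute the SHA-1 digest of a file in streaming fashion."""
    digest = hashlib.sha1()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(package_dir: Path) -> dict[str, str]:
    """Map each file's relative path to its checksum."""
    return {
        str(p.relative_to(package_dir)): sha1_of(p)
        for p in sorted(package_dir.rglob("*"))
        if p.is_file() and p.name != "manifest.json"
    }

def verify_and_package(package_dir: Path) -> Path:
    """Write a manifest next to the content so it can be re-checked on ingest."""
    manifest = build_manifest(package_dir)
    manifest_path = package_dir / "manifest.json"
    manifest_path.write_text(json.dumps(manifest, indent=2), encoding="utf-8")
    return manifest_path

if __name__ == "__main__":
    delivery = Path("publisher_delivery")   # placeholder delivery directory
    delivery.mkdir(exist_ok=True)
    print(verify_and_package(delivery))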
In recent weeks, we have concentrated on the interfaces
between DEJA-Depot and DSpace. We have also continued our
research into the description (using METS and XLink) of the
"spaghettiness" of web journals (the complexity of their
interconnection).
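As one illustration of what such a description might record, the sketch below builds a small METS structural-link section whose XLink attributes name connected parts of a journal site. The identifiers are hypothetical, and the encoding is not the project's.

import xml.etree.ElementTree as ET

METS = "http://www.loc.gov/METS/"
XLINK = "http://www.w3.org/1999/xlink"
ET.register_namespace("mets", METS)
ET.register_namespace("xlink", XLINK)

def struct_link(links: list[tuple[str, str]]) -> ET.Element:
    """Build a <mets:structLink> element from (from_id, to_id) pairs."""
    section = ET.Element(f"{{{METS}}}structLink")
    for from_id, to_id in links:
        smlink = ET.SubElement(section, f"{{{METS}}}smLink")
        smlink.set(f"{{{XLINK}}}from", from_id)
        smlink.set(f"{{{XLINK}}}to", to_id)
    return section

if __name__ == "__main__":
    # Hypothetical identifiers for an article, an embedded video, and a dataset.
    element = struct_link([("article-001", "video-001"), ("article-001", "data-001")])
    print(ET.tostring(element, encoding="unicode"))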
Project Developments as of 16 August 2001
The project hired Patsy Baudoin as project planner; she
reports the following:
Since May, the MIT-Dynamic E-Journal Archive (MIT-DEJA)
project has focused on born-digital journals. We have spent time
defining "dynamic," identifying what characteristics make
e-journals dynamic, including the full range of elements that
make up the "webness" of these sites (multimedia, links,
navigation systems, data-retrieval algorithms). We have traveled
several paths trying to decide what to focus on and how best to
capture such characteristics.
Two of the main issues we expect to tackle for the long run
are:
- Describing electronic journal sites: what metadata will the
archive need to describe a site and the interrelationship of its
parts to each other and to the whole?
- Understanding the quandaries of archiving versions of
journal-sites.
Preliminary discussions with MIT Press and Columbia
University's EPIC are underway. No partnership has been shaped
yet.
New York Public Library:
Archiving Performing Arts Electronic Journals
The Project as Planned
Can a premier public library with vast research collections
extend its services in a particular field by establishing secure
repositories of archived electronic journals in that field? The
New York Public Library (NYPL) will develop a plan to do so for
e-journals in the performing arts and such related areas as media
studies.
The project builds on several strengths of the NYPL: its long
experience in library preservation, its current development of a
digital library program, and its performing arts collections,
which are among the most extensive in the world. One of four
centers in The Research Libraries of the NYPL is its Library for
the Performing Arts, at Lincoln Center.
Project staff will identify and select relevant journals,
including those created digitally, those published in both print
and electronic forms, and those published as an online supplement
to print titles. Staff will then work with willing e-journal
publishers to develop agreements on archival rights and
responsibilities. Staff also will work on a technical
implementation plan for the archive, an acquisitions and growth
plan, an organizational model, staffing requirements, access
policies, and long-term funding options. Special attention will
be given to the development of methodologies that the archive
would use to validate the archival processes and assure the user
communities that the journals for which the archive is
responsible will be accessible and readable into the future.
The project faces special challenges in dealing with
performing arts e-journals because their publishers are often
small rather than major commercial operations, and they make
great use of embedded multimedia functions (such as digital sound
and video) and hypertextuality (such as links to other Web
presences). Questions will include how much commitment can be
made to preserving the original presentation style.
Project Developments as of 29 August 2001
Jennifer Krueger, deputy to the director of NYPL's research
libraries, is project officer. She reports that the project has
identified a core group of electronic journals in the performing
arts from which to derive a final list that will give the project
a range of partners. The project plans to include journals
published by a university press, by a commercial press, and by
small, independent publishers whose publications are important in
the performing arts but who are not part of an organization that
could be expected to take care of them. Also, the journals
selected will be in one or more languages besides English. Work
is under way to engage such e-journals as participants in
exploring the numerous issues involved in archiving, including
means of handling a variety of ingest formats, the legal
responsibilities of both parties, user needs and approaches to
the archive and its data, decisions about advertisements and
different media types contained in the original issues, and basic
technological issues about where and how to store.
University of Pennsylvania
Library: Archiving and Preserving E-Journals
The Project as Planned
Can a major research library take full advantage of electronic
journals by partnering with academic publishers to ensure
long-term access through creation of an e-journal archive? The
University of Pennsylvania Library will try to do so by building
on a successful relationship it already has with a major
scholarly publisher.
For more than a year, the library has been working with the
Oxford University Press to make its current book releases in
history available to the library's local community. This has
given the library experience in receiving a publisher's digital
content, converting it to a form suitable for on-line use,
maintaining it, and making it available under terms beneficial
both to the publisher and to the library's scholarly community.
Also of use in the e-journal archiving project will be hardware
already in place at the library and a database of information
about e-journals to which the library subscribes.
In the planning year, the project staff expects to select a
set of e-journals, arrange with their publishers to archive them,
and negotiate agreements for doing so that identify rights and
responsibilities. Also the project will determine and arrange to
support the metadata and workflow needed to receive, validate,
archive, and provide access to e-journals; design and begin
installing an information base for storing and accessing archived
journal content and related metadata; begin to install
documentation and support for the data formats and protocols used
by the archive, and start acquiring and archiving journals,
indexing them, providing metadata, and providing content to
authorized parties on an experimental basis.
The planning year will conclude with dissemination of results
and development of a plan for continuing and growing the archive
on a permanent basis.
Project Developments as of 1 May 2001
Oxford University Press has agreed in principle to work with
the University of Pennsylvania Library in the e-journal
preservation project, and the project staff, led by John Mark
Ockerbloom, is working on a draft agreement specifying rights and
responsibilities for both parties. At the same time, the project
is progressing in its work on metadata descriptions.
Stanford Libraries:
LOCKSS, A Distributed Digital Preservation System
The Project as Planned
Can an automated, decentralized preservation system protect
libraries against loss of access to digital materials such as
electronic journals to which they have subscribed? Fear of the
demise of journals or problems with their publishers has
inhibited library investment in electronic resources. Staff
members of the Stanford University Libraries, a major research
library system experimenting with automation, believe they have
found one solution in a system called LOCKSS.
LOCKSS, which stands for "Lots Of Copies Keep Stuff Safe,"
provides a bootable floppy disk that converts a generic PC into a
preservation appliance. The PC runs an enhanced Web cache that
collects new issues of the e-journal and continually but slowly
compares its contents with those of other caches. If damage or corruption
is detected, it can be repaired from the publisher or from other
caches. The intent is to make it feasible and affordable even for
smaller libraries to preserve access to the e-journals to which
they subscribe.
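The sketch below illustrates the comparison-and-repair idea in a much-simplified form. It is not the LOCKSS protocol (LCAP) itself, and the peer and publisher interfaces shown are placeholders.

import hashlib
from collections import Counter
from pathlib import Path

def content_hash(path: Path) -> str:
    """Hash the locally cached copy of a journal file."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def needs_repair(local: Path, peer_hashes: list[str]) -> bool:
    """True if the local copy differs from the hash most peers agree on."""
    if not peer_hashes:
        return False
    consensus, _ = Counter(peer_hashes).most_common(1)[0]
    return content_hash(local) != consensus

def repair(local: Path, fetch_good_copy) -> None:
    """Replace damaged content with a copy fetched from the publisher or a peer."""
    local.write_bytes(fetch_good_copy())

if __name__ == "__main__":
    cached = Path("issue-7281.html")                     # placeholder cache entry
    cached.write_text("<html>damaged copy</html>")
    good_copy = b"<html>good copy</html>"
    peers = [hashlib.sha256(good_copy).hexdigest()] * 3  # peers agree on the good copy
    if needs_repair(cached, peers):
        repair(cached, lambda: good_copy)                # placeholder fetcher
        print("repaired", cached)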
With support from NSF and Sun Microsystems, Stanford developed
an alpha version of the system and ran a 10-month test with a
single journal, six libraries, and 15 caches. With the Mellon
funding and continued support from Sun, a more complete beta
implementation has been developed. The beta version is being
distributed to more than 40 libraries worldwide. They will run
approximately 60 caches. Four "shadow" publisher machines at
Stanford will mirror approximately 15 GB of content from real
journals, and will simulate brief failures and permanent outages.
Failures of the caches and corruption of their contents will also
be simulated, as will attacks by simulated "bad guys."
The beta software will be released as open source. With
experience from the beta tests and further funding, Stanford
hopes to produce a production version in 2002.
Project Developments as of 20 July 2001
The project has begun its worldwide beta phase, which will
test LOCKSS security, usability, and software performance,
including impact on network traffic. More than 40 libraries with
60 widely distributed and varyingly configured caches have signed
onto the project, and 35 publishers are endorsing the beta test.
Beta test sites include major libraries, such as the Library of
Congress, and smaller ones, such as the University of Otago in
New Zealand. The publishers' Web sites are simulated on shadow
servers to isolate LOCKSS data streams, measure network traffic,
and test whether LOCKSS works when the publisher "goes away."
Once the beta software is stable, representative government
documents and other non-HighWire publisher content will be added
to the test.
In mid-July, Project Manager Victoria Reich reported that the
project had passed an important milestone. Thirty-five of the
participating library sites, almost all of them, have installed and
are running version 06122001 of the beta software, and 45 machines
are simultaneously participating in the beta test. By comparison, the
alpha test had a maximum of 18 simultaneously participating
machines.
The Stanford team has improved the security of the user
interface and tweaked the system to respond to various worldwide
network configurations. Additionally, the LOCKSS protocol (LCAP)
is working well. This larger international beta test bed is so
far using one journal (the BMJ). It has identified and repaired
content damage in some local library LOCKSS caches.
The next steps are to bring the few remaining beta sites
online and to slowly add more journal content to the system.
See lockss.stanford.edu for
more information and status updates.
Yale Library and Elsevier
Science: A Digital Preservation Collaboration
The Project as Planned
Can a major research library and a major publisher of
electronic journals on multiple subjects develop mutually
satisfactory agreements and technologies for ensuring long-term
access to those journals? The Yale University Library, which has
staff expertise in digital-information preservation and
licensing, and Elsevier Science, which publishes numerous
electronic journals varied in size and content, have agreed to
explore that question together.
Elsevier will continue to provide ordinary business access to its
e-journals; Yale will archive them and, when they are no longer
commercially viable, make them accessible for future inquiries
that could range far beyond contemporary use. For this
archive, Elsevier intends to provide coded content and metadata;
Yale intends to provide rendering software and a computer
environment for use.
Technology to be developed will include content storage with
database-management functionality, process-management software,
and a user interface (a Web server-browser for requesting,
rendering, and displaying data). Central to this will be a
technical means for separating e-journal content from original
functionality to allow "historical" researchers to use the
material in ways more congenial to them while remaining assured
of the authority and authenticity of the content.
In addition to developing the technology, the project will try
to identify the circumstances in which the library would become
the access provider, work out an agreement on intellectual
property rights in those circumstances, and answer many questions
about good design and management of a digital archive. Because of
the variety in Elsevier's publications, the resulting design may
be scalable for adaptation by other libraries and
publishers.
Project Developments as of July 2001
The following is a mid-year status report on the Yale
University Library/Elsevier Science Digital Archives Planning
Project, generously supported by the Andrew W. Mellon Foundation.
This report focuses on three lines of inquiry pursued since
January 2001, involving technical questions, preservation
metadata, and the service mission of a digital archive.
The principal investigators have been Scott Bennett and Ann
Okerson of Yale University. Paul Conway, David Gewirtz, and
Kimberly Parker are other Yale library staff who have
participated deeply and consistently throughout the project.
Karen Hunter, Geoffrey Adams, and Emeka Akaezuwa have led the
participation of Elsevier Science in the project. The partnership
built for this project between the Yale library and Elsevier
Science has been cordial, candid, and highly productive. Both
organizations are committed to understanding good archival
practice for electronic publications, and both strongly wish to
help create a functioning digital archive.
David Gewirtz visited the Electronic Warehouse that Elsevier
Science maintains in Amsterdam for its publications. He also
visited the National Library of the Netherlands. Titia van der
Werf, NEDLIB Project Administrator at the National Library,
visited New Haven for a day. Both Yale and Elsevier Science
staff met separately with William Telkowski and his colleagues at
JP Morgan-Chase to discuss that firm's digital archives service.
Karen Hunter had conversations about our project with staff at
OCLC, the British Library, the National Library of the
Netherlands, the University of Toronto, Australian academic
libraries, and some libraries in Japan. Yale staff had
conversations relating to our project with Sun Microsystems
staff in California.
TECHNICAL INQUIRIES. Early in the project, David Gewirtz
purchased and commissioned a Sun server and complementary disk
and tape subsystems. Elsevier Science sent us 18 tapes
containing, in about 500-700 gigabytes of data, all of the
e-journals it published between 1995 and November 1999. Elsevier
Science also arranged for the project to use the ScienceServer
software. With these and OpenText software in place, David and
other project participants have pursued two inquiries.
- We have documented the technical characteristics of Elsevier
Science's e-journals. We have identified the various Document
Type Definitions used by Elsevier Science and the technical
standards that have controlled the production and distribution of
its content since 1992. We have also documented the current
workflow used in the production of Elsevier Science content. See
Document 5 (in the Appendix listing project documents) and
Document 10.
- We have begun to investigate the technical configuration that
the data store for a digital archive might take. See Document 13.
David Gewirtz and Geoffrey Adams are consulting closely on this
matter.
METADATA. Paul Conway has led a consideration of preservation
metadata. He started with a review of the literature (Document 3)
and with analyses of metadata issues (Document 4) and of
preservation metadata in the OAIS model (Document 7). With this
background, Paul and others studied the metadata provided for and
actually created in the Elsevier Science EFFECT production
standard. They also compared these findings with the standards
for preservation metadata advanced by the British Library and by
the NEDLIB project at the National Library of the Netherlands and
with the MARC standards for cataloging serials. These findings
are reported in Document 12.
Project participants will next model the preservation metadata
that might be used in an archive of Elsevier Science e-journals.
We will do this by selecting a limited body of content types
(journal articles, editorials, and letters, for instance);
identifying the metadata standards relevant to these types of
information; agreeing on which metadata elements are essential to
an effective archive; determining how much of this metadata is
already being created by Elsevier Science and by serials
catalogers; and assessing the effort required to create any
essential metadata that are not now being created.
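The gap assessment described here can be pictured with a small sketch like the following, in which both the set of essential elements and the elements supplied by the producer are illustrative assumptions rather than findings of the project.

# Assumed essential preservation metadata elements for an archived article.
ESSENTIAL_ELEMENTS = {
    "identifier", "title", "journal_title", "issn", "publication_date",
    "file_format", "checksum", "rights_statement", "provenance",
}

def metadata_gap(supplied_elements: set[str]) -> set[str]:
    """Return the essential elements not already supplied by the producer."""
    return ESSENTIAL_ELEMENTS - supplied_elements

if __name__ == "__main__":
    # Hypothetical example of elements already present in a production feed.
    supplied = {"identifier", "title", "journal_title", "issn",
                "publication_date", "file_format"}
    print("still to be created:", sorted(metadata_gap(supplied)))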
MISSION OF A DIGITAL ARCHIVE. In one sense, there is no
question about the mission of an archive of digital content. It
must provide permanent access to its content. This simple truth
grows immensely complicated when one acknowledges that such
access is also the basis of the publisher's business and that, in
the digital arena (unlike the print arena), the archival agent
owns nothing that it may preserve and cannot control the terms on
which access to preserved information is provided.
Project participants have seen the question of archival
mission as turning on our ability to identify conditions that
would prompt a transfer of access responsibilities from the
publisher to the archival agent. These conditions would be the
key factors on which a business plan for a digital archive would
turn. We started by trying to identify events that would trigger
such a transfer (Document 2), but concluded that all such events
led back to questions about the marketplace for and the
life-cycle of electronic information that we could not answer.
Project team members from the Yale library and Elsevier Science
alike agreed that too little is known about the relatively young
business of electronic publishing to enable us now to identify
situations in which it would be reasonable for publishers to
transfer access responsibility to an archival agent (Document
11).
In the process of coming to this conclusion, we modeled three
kinds of archival agents: a de facto archival agent, defined as a
library or consortium having a current license to load all of a
publisher's journals locally; a self-designated archival agent;
and a publisher-archival agent partnership (see Document 14). The
first of these exists (e.g., CSIRO, OhioLink), and Elsevier
Science is actively discussing the second type with a number of
(mostly national) libraries. Whether the third type of archive,
the focus of our investigation, can now be brought into existence
turns on the business viability of an archive that is essentially
"dark"-an archive, that is, for which no access responsibilities
can now realistically be predicted. Project participants differ
on whether an archive with so uncertain a mission can be
created and sustained over time and whether, if created, an
individual library such as Yale or a wide-reaching library
enterprise like OCLC would be the more likely archival
partner.
It is possible to imagine a digital archive that, while being
"dark" for most purposes, might be "bright" for some highly
selective purposes. Two such purposes have so far been named. One
is providing access for libraries that have not renewed their
licenses, but whose former licenses provided for perpetual access
to the content covered by those licenses. A second is free or
very low cost access to content provided to third-world libraries
through some agency such as the World Health Organization. It
might be that one or both of these purposes could provide enough
of an enterprise base to make a publisher-archival agent
partnership viable.
Appendix: List of planning documents
Listed here are various documents created during the first
half of the Yale University Library/Elsevier Science Digital
Archives Planning Project. Some of these documents contain more
or less proprietary business information; many were written as
part of an investigatory process and are not fully intelligible
when dissociated from that ongoing process.
The status report to which this list is an appendix is meant
to be understood without referring to any other document. Readers
wishing to consult any of the documents listed below should
address their request to Ann Okerson, the project's Principal
Investigator (ann.okerson@yale.edu).
2000
1. October: Scott Bennett. "Proposal for a Digital Preservation
Collaboration between the Yale University Library and Elsevier
Science."
2001
2. February: Scott Bennett. "'Triggering Events' & Related
Issues for a Digital Archive."
3. March: Paul Conway. "Preservation Metadata. An Annotated
Bibliography."
4. March: Paul Conway. "Yale/Elsevier E-Journal Project. Metadata
Issues."
5. March: David Gewirtz. "Site Visit Report to Elsevier Science.
Amsterdam, Netherlands, March 26-30 2001."
6. March: David Gewirtz. "Yale/Elsevier E-Journal Project.
Koninklijke Bibliotheek - The National Library of the
Netherlands."
7. April: Paul Conway. "Yale/Elsevier E-Journal Project. OAIS
Preservation Metadata."
8. April: Scott Bennett. "Characteristics of Two Types of
Archival Arrangements."
9. May: JP Morgan Chase I-Solutions. "Building a Large Digital
Archive Network."
10. May: Chris Shillum, Science Direct. "Publication Item Types
in ScienceDirect."
11. May: Scott Bennett. "Purpose of the Digital Archive."
12. June: Paul Conway. "Metadata [Standards] Review."
13. June: David Gewirtz. "Mellon Foundation Planning Grant:
Toward a Parallel Line of Inquiry."
14. June: Scott Bennett. "Three Models of Archival Agents."