Report of Imaging Practitioners Meeting on 30 March 2001 to Consider How the Quality of Digital Imaging Systems and Digital Images May be Fairly Evaluated

S. Chapman
submitted to DLF May 23, 2001

Present (20):
Sally Bjork (UM), Stephen Chapman (HUL), Bill Comstock (HCL), Franziska Frey (IPI), Hans Hansen (Octavo), Dan Johnston (UCB), Erik Landsberg (MoMA), Lee Mandell (HUL), David Mathews (MFA), Phil Michel (LC), Stephanie Mitchell (HCL), Ron Murray (LC), Alan Newman (AIC), Steve Puglia (NARA), David Remington (HCL), David Semperger (BosPhoto), Peter Siegel (Harvard), Don Williams (Kodak), John Woolf (MFA), Mingtao Zhao (HCL)

Other invited guests:
present (a.m.): Jan Merrill-Oldham (HUL), Mark Roosa (LC)
unable to attend: J. Meyer (Digital Attributes), D. Zabriskie (Luna), K. Kallsen (HUAM)

see Appendix I, "List of Participants," for more information

Overview

By 1998, several consortia recognized that significant investments were being made to digitize pictorial collections in libraries, archives, and museums. This activity was, and continues to be, highly decentralized. In the absence of guidelines or best practices for image digitization, one concern was whether these collections would be interoperable: were images being made in formats that could be easily distributed, and to a baseline level of quality that would meet users' requirements? Looking back even earlier, to the experiences of MESL and other projects to create and exchange visual images, it seems that the focus shifted at some point from making digital images to making good digital images.

With these issues of imaging investments in mind, the Digital Library Federation and the Research Libraries Group sponsored a series of imaging guides, published in July 2000. The guides were designed to make project managers and technicians aware of the decisions (e.g., selecting and setting up equipment) that would have the greatest impact upon image quality. In parallel, a NINCH working group had been advancing its efforts to codify best practices for digitizing cultural heritage materials and was eager to receive feedback from the practitioners with the greatest amount of field experience.

The purpose of this DLF-sponsored meeting was to establish a forum for expert practitioners to exchange ideas about what is "good" and, if possible, to prioritize where tools, applications, and training would be of greatest benefit to meet our institutions' obligations to make digital reproductions of consistent quality and persistent utility.

High-level statement of the problem

Don Williams, an image scientist from Eastman Kodak who facilitated the meeting, spoke both for imaging practitioners and for imaging scientists and other members of standards committees when he noted, "There appears to be somewhat of a consensus that there is not any reasonable way right now to look at all imaging performance measures without ambiguity." [1]

Subjective assessments are notoriously flawed, due not only to differences among human observers, but also to limitations of the devices that render images (monitors, printers) and to differences in ambient lighting between viewing environments.

Objective methodologies are also imperfect. Although quantifiable metrics are well defined in the scientific community, and are receiving greater endorsement by standards organizations, it is currently very difficult to use tools to measure imaging performance in real-world production environments. Making an analogy to changes in auto manufacturing, Don pointed out that photographers no longer know "what's under the hood" in their imaging systems. It is difficult enough to find tools to interrogate imaging performance in key areas such as noise, density, and dynamic range, but even more problematic to subtract the errors that may have been inadvertently introduced by the process itself. Does a noise measurement, for example, refer to the imaging system or to the noise generated by the target that was digitized?
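One standard way to attack that last problem, sketched here for illustration (it was not a method presented at the meeting): if the target's noise is statistically independent of the system's noise, their variances add, so the system's contribution can be estimated by subtracting in quadrature. A minimal Python sketch, with hypothetical names and values:

    import math

    def system_noise(measured_sigma, target_sigma):
        """Estimate the imaging system's noise by removing the target's
        own contribution. Assumes the two sources are independent, so
        their variances add: measured^2 = system^2 + target^2."""
        if measured_sigma < target_sigma:
            raise ValueError("measured noise cannot be below target noise")
        return math.sqrt(measured_sigma ** 2 - target_sigma ** 2)

    # Example: a uniform patch measures 2.5 counts of standard deviation,
    # and the target itself is known to contribute 1.0 count.
    print(system_noise(2.5, 1.0))  # ~2.29 counts attributable to the system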

Intentional or unintentional use of imprecise terminology also creates ambiguity. For example, industry marketing literature and our community's funding guidelines routinely associate image quality with unreliable metrics, such as resolution and bit depth. These performance characteristics refer to input settings and become ambiguous if used to describe output quality. The same source could be digitized by two systems that produce the same nominal results (e.g., 3,000-pixel, 24-bit RGB images), yet the quality of the two images may differ significantly.

Finally, as Ron Murray would point out at the meeting, there is a potential to confuse measurements with judgments (how people see images and whether they will be satisfied). The challenge is to integrate these two perspectives of quality into methods for quality control.

Several challenges brought imaging experts to this forum: to improve upon the conventions used to describe and distinguish the performance of various imaging systems; to institutionalize methods to monitor performance of equipment; and to simplify the task of evaluating images, particularly those produced outside of known workflows.

I. Summary of current practices to monitor quality of systems and images

The meeting began with a tour de table, with each photographer describing the quality requirements and production practices in his or her studio. To initiate discussion on image quality, rather than production per se, Don Williams encouraged each person to answer the three questions listed below. Since viewpoints were remarkably similar among the commercial (2), museum (4), and library/archives (5) communities represented, responses are categorized by question rather than by type of studio.

Note: other questions were distributed to participants in advance. Their responses, summarized in Appendix II, provide a more complete picture of institutional practices.

1) How did you choose your equipment and with what priorities?

  • priorities
    • project-driven demands
      • budget, expectations for productivity, and end use (of images) are major factors: "...evaluate what file will be used for and what they're willing to spend... produce samples and show them with price tag attached..."
      • materials handling requirements can limit choice (e.g., UC Berkeley needed a flatbed scanner with variable focus in order to scan papyri in 1 x 14 glass mounts)
    • product support
      • availability of local service is critical; responsiveness very important, particularly when modifications are needed to keep studio up and running
    • technical performance
      • good dynamic range
      • quality of the signal (noise)[2]
      • sharpness: images must be consistently sharp corner to corner
      • support 16-bit output
      • ease of use
      • fit/integration into existing workflows
      • device-independence (openness)

  • techniques used to choose equipment
    • viable methods...
      • advice/feedback from colleagues-"perhaps the most valuable short cut"
      • research (read Seybold reports, ColorSync list, etc.)
      • demonstration and testing in one's own studio, with qualifications: although it is often possible to bring in equipment, it is very difficult to accomplish any meaningful testing in a production environment; heads nodded when one photographer said he "would love to have a separate lab just for testing"
      • evaluation of technical targets
      • soft-proofing and/or evaluation of prints (of "known outputs")
    • methods that don't work...
      • manufacturers' specifications are universally distrusted
      • evaluating equipment at trade shows-technical reps not present; difficult to scan materials at settings you want to use

2) How do you monitor your system's imaging performance (e.g., tools, frequency, methods)?

  • visual assessment
    • evaluation of images on screen
      • "if something is going wrong we can see it in the 100% view of the image" (AIC)
      • in high-production workflow, only practical choice is to review small sample (sometimes as few as 1 in 200) (LC)
      • trouble areas to watch for are dust and reproducing corners as sharp as the center (getting the original aligned flat is challenging)
        ...requires monitor calibration (generally monthly)
      • note: heads nodded with the assertion that the monitor is the weak link in this type of quality assessment
      • the X-Rite DTP92 colorimeter and OptiCal software "have produced good results" on a variety of monitors in a variety of settings (AIC, MFA, HCL, NARA), although opinions vary about the quality of the monitors themselves. HCL is satisfied with Barcos; the MFA with Mitsubishi DIAMONDTRONs; AIC has established aim point densities for highlight and shadow at 6500 K and eliminated monitors that failed to produce a bright enough white point (according to the photographer's eye)
    • calibration of ambient environment
      • varying degrees of familiarity and interest in implementing the ISO 3664 standard, Graphic technology and photography - Viewing conditions

  • target-based assessment (per setup, daily, or weekly)
    • detail reproduction
      • variety of targets/variety of uses: RIT line pairs (RT-1-71), SinePatterns Sinusoidal Test Pattern MTF target (used by LC principally to qualify vendor, but also routinely scanned in production), slanted-edge MTF target used to monitor equipment and select new components (e.g., lenses)
    • color reproduction
      • variety of targets: Macbeth ColorChecker and ColorChecker DC, Kodak Q-60

  • reliance upon "native" calibration and standard service
    • daily auto-calibration and self-diagnostics, preventative maintenance calls by professional technicians (i.e., no secondary assessment)

3) What techniques have you instituted to achieve consistency in your images? As a corollary, What standards have you integrated into your imaging workflows?

It is notable that no one mentioned specific standards. A range of practices was described. All studios include grayscales and color bars in frame with reflective source materials, but not all use them in the same ways. This raises important questions about image exchange and image evaluation.

  • use of grayscale for tone reproduction
    • all studios include Kodak grayscales and color bars in-frame when digitizing reflective media with consistent lighting. The reason? As noted by Erik Landsberg, "targets and conditions provide a thread; know they're doing things consistently"
      • alternative method for film scanning and/or variable lighting setups not discussed
    • scan to aim points-varying numbers/methods:
      • aim points to grayscale based on gamma 1.8 curve (UCB and HCL)
      • HCL measures patches (of the grayscale and Q-60) with a colorimeter, then creates curves to control contrast and gray balance during photography
      • "people know their numbers for highlight, midpoint, shadow": method is to hit aim points and keep RGB within two points throughout the scale (MoMA)
      • use aim points for shadow and highlights (AIC)
      • set middle gray to grayscale (UM)
    • MFA has developed a system for extensive soft proofing against the original, based upon the tonal scale and the characteristics of the original
    • MFA generates a "digital grayscale" to lock in RGB values calibrated to their system: the monitor is "the only set point in whole system"-image on screen matched to original artwork illuminated in GTI viewing booth
    • MFA also creates "dead-neutral" 11-step grayscale in Photoshop...

  • use of color bars for color reproduction
    • pin hopes on color management by using consistent lighting techniques; keep Q-60 scans and Macbeth color checker scans and associate them with corresponding files; hope to make input profiles from these (UCB)
    • metamerism takes place...numbers on grayscales and color bars not valid when shooting acrylics; color matching in the print comes down to perceptual issues...

  • subjective assessment
    • production-line monitoring; always scan in two-person team...two pairs of eyes (very much by observation); not using targets; using considered judgment; use best equipment we can (MoMA, UM); "[this technique] can be hit or miss, because production is such high priority"

  • other
    • periodic cleaning and maintenance (static controls, humidity, dust)

      At one point in the discussion, Don Williams asked what kinds of problems studios might have in exchanging images. The replies:

    • we would want to know something about how image was produced; would want to know which profile, target, curves were used-image metadata needed
    • one approach to manage image exchange: create a guide print (e.g., Fuji Pictrography, high-end Epson); match the image on the monitor to the print, then send out an RGB file-the huge assumption behind this technique is that the second studio will look at the print under the same light source; proofing lights vary (e.g., GTI and Macbeth tubes produce noticeable differences in yellows and greens)
    • high-end printers are the ones most resistant to digital photography...it threatens their scanning and their separation work...one museum studio described the challenges in developing workflows to deal with "a hostile environment" when sending out images for printing
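To make the aim-point technique concrete, here is a minimal sketch of the check described above (not a tool presented at the meeting; the patch aim values and the cropping step are hypothetical). It verifies that the mean RGB of each scanned grayscale patch hits its aim point within two counts per channel:

    import numpy as np

    # Hypothetical 8-bit aim points for three patches of an in-frame
    # grayscale: highlight, midpoint, shadow.
    AIM_POINTS = {"highlight": 242, "midpoint": 104, "shadow": 12}
    TOLERANCE = 2  # "keep RGB within two points throughout the scale"

    def check_patch(pixels, aim, tolerance=TOLERANCE):
        """pixels: (h, w, 3) array cropped from one scanned grayscale patch.
        Returns (ok, per-channel means)."""
        means = pixels.reshape(-1, 3).mean(axis=0)
        ok = bool(np.all(np.abs(means - aim) <= tolerance))
        return ok, means

    # Usage: crop each patch from the scan, then
    #     ok, means = check_patch(patch_pixels, AIM_POINTS["midpoint"])
    # A failure on any patch flags the scan for re-capture or recalibration.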

II. Metrics, standards, and tools-presentation and demo by Don Williams

Don Williams presented an overview of imaging criteria and some of the tools currently available to measure them. He emphasized that the main objective of image performance metrics is to maintain a genealogical thread back to the source document. In the scientific community, it is well established that performance metrics feed into image quality metrics. He reported that there is a "lot of consensus [in this community] on how to measure imaging performance" and emphasized that these metrics do not dictate how you manage or produce your images, only how to measure system performance and image quality.

Don clarified that his area of expertise is image microstructure. He is most familiar with the tools-some already sanctioned by ISO-to measure resolution, grayscale tone, and image noise. In part by working with the library and museum communities, Don has evolved from being a researcher to an applied researcher to a clinician. What he has tried to impress upon standards committees-as Franziska Frey later echoed in her comments-are the workflow issues and concerns among practitioners.

Don argues that digital photography has not fundamentally changed imaging science-there's nothing new-but in digital imaging the photographer is responsible for knowing what's going on. The idea is to get metrics out in the open so photographers can evaluate them as they see fit.

  • key performance metrics

    signal

    • tone capture/reproduction, spatial resolution (see below)

    noise

    • random, unstructured texture (1-D and 2-D) and artifacts; measure the standard deviation in uniform areas and plot it
    • what's being evaluated...very hard to separate image processor from scanner....
    • can evaluate microstructure and compression behaviors with respect to resolution with digitally-generated images....

    tone capture/reproduction

    • bits per sample is a pretty shallow spec: if there are non-uniformities in the system (e.g., corners darker than the center), several bits are used up just to accommodate them (a big problem with early Photo CD scanners)...how the bits are used for density detection is the key

    spatial resolution

    • sampling frequency (ppi, dpi) is a shallow spec-doesn't tell you much in terms of really knowing the amount of detail an imaging system can capture
    • bar targets-the only method suitable to evaluate binary scanners, but with grayscale and color the cons are that these targets are contrast-dependent and provide little diagnostic insight
    • MTF finding its way into standards development: use objective contrast criteria...plot the ratio of contrasts...tells you the extent to which contrasts degrade; an example showed three outputs with the same limiting resolution, but the differences are important: one suggests flare, another is well behaved, the third indicates sharpening (the hump at the beginning of the curve)
    • emphasized that the MTF evaluation IS a predictor for image quality...one can cast just about any kind of image process into an MTF...can take individual ones, cascade them, and come up with a system MTF (see the sketch at the end of this section)
    • ...next step: take performance metrics and come up with some kind of quality standard (take eye MTF and cascade with characteristics of imaging system...what's below curve is good)

    artifacts

    • not too many tools to measure them

    color registration

    • work seems to be cyclical (not a problem in the late 1980s, then bad again, now lessening...); another strange artifact occurring in flatbed scanners is a repetitive 2-D structure (a wave going across the image, introduced by interpolating down to, say, 600 ppi from 610; not a problem without sharpening...but introduce sharpening and it screams at you)

  • demo: Applied Image Slanted edge target and "Auto SFR Slant" software
    • virtues of system: inexpensive (all you need is an edge), designed to be used as "field tool" for quantitative analysis of noise, spatial resolution and color misregistration
    • new target is ISO 16067 compliant; now only for reflectance; may be used for both camera and scanner; redundancy built in-spatial resolution estimated twice in both the horizontal and vertical dimensions
    • software can plot noise at various density levels
    • MTF can be plotted for each color channel
    • free offer from Don to scan slanted-edge targets (see "Next Steps" at end of report)

    Q&A revealed limitations to current tool

    • cannot yet extract noise generated from the target-the goal is to reliably extract noise due to the scanner rather than noise due to the target
    • some expressed need for real-time feedback (e.g., to assist in measuring focus)
    • strong desire to measure "real dynamic range" with a target, since this cannot be seen on a monitor-would be very useful to assist in the set up phase of a shot/project
      • reply: the matter will be discussed in 3 weeks. The issue is greater for reflective than transmissive media. A proposal to measure dynamic range for cameras is embedded in the OECF standard...the scanner group is to adopt and extend it.
    • since there are competing targets and software packages being distributed from PIMA IT10, it is important for standards organizations either to adopt one set of tools or to resolve the current situation in which different image analysis packages may not give the same numbers
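As an illustration of the cascading idea noted above (a sketch under assumed component curves, not a tool shown at the meeting): because component MTFs multiply, a system MTF can be estimated as the product of the components' responses at each spatial frequency:

    import numpy as np

    # Spatial frequencies in cycles/mm (illustrative sampling grid).
    freqs = np.linspace(0.0, 10.0, 101)

    def gaussian_mtf(f, f50):
        """Toy component MTF: Gaussian falloff with 50% response at f50."""
        return 0.5 ** ((f / f50) ** 2)

    # Hypothetical components: a lens, a sensor, and a mild sharpening
    # boost (the "hump" near the beginning of the curve).
    lens = gaussian_mtf(freqs, f50=6.0)
    sensor = gaussian_mtf(freqs, f50=4.0)
    sharpening = 1.0 + 0.15 * np.sin(np.pi * freqs / 10.0)

    # Component MTFs cascade multiplicatively into a system MTF.
    system = lens * sensor * sharpening

    # Report the frequency where the system response first falls below 50%.
    mtf50 = freqs[np.argmax(system < 0.5)]
    print(f"approximate system MTF50: {mtf50:.2f} cycles/mm")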

III. Metrics, standards, and tools-putting them into practice

There were several exchanges about the applications for using targets, as well as the terms and conditions that would need to be met for managers to institute their use. Bill Comstock noted that "all of us are using targets...they're not useless [but] only useful under certain conditions and with certain types of source material." The challenge is to articulate when they're useful and when they fail. An observation was made that the worst thing one could do is use these tools in an inappropriate way-worse to do that than not to use them at all.

Thus, everything that follows for the remainder of this section should be classed as Research Issues.

Observations were made that the two main applications are to monitor imaging system hardware, and/or to establish preferred processing actions (e.g., better compression; optimized amount of sharpening).

With respect to the technique of including targets in-frame with each item being digitized, an alternative was suggested: under set conditions, in the interest of saving time, could one scan the target separately and associate it with the corresponding group of images in a database? The idea was received with equal parts interest and skepticism, the skeptics asserting that in-frame targets are needed to determine the response of the system under different conditions-specifically, the conditions (reflectivity, etc.) driven by the source material itself.

  • benefits to target-based assessment of quality
    • targets and associated imaging performance metrics can provide a "genealogical thread" back to the source
    • can establish that imaging was done consistently
    • targets function as common element to tell how far you've changed things
    • targets can serve same purpose as control strips did in the "old days" of film processing: provided that one has an appropriate level of expertise/background of experience, control files made it readily apparent where something looks different than it did "last week": one can isolate how much work has been affected...
    • agreement that image performance metrics are well understood by the scientific community
    • potential for improved troubleshooting and problem resolution: with targets and metrics, can better articulate problems with system components in discussion with vendors and manufacturers-avoid going round in circles with different eyes
      • evidence (HCL) already gathered that disproves the adage, "when something goes wrong you can see it"

  • dependencies
    • design and manufacture of targets (biggest concern):
      • practitioners insist targets must be of material similar to the source materials: spectral characteristics, contrast... comparing spectral responses of scanners to those of sources; will we ever have a target to assess a broad range of materials?
      • some practitioners also want targets to be of similar size to source material, but DW "hesitates to recommend use of large target...problem with high contrast"
      • need to be manufactured of material so "one has room for error"
      • inconsistencies in manufacturing: "there are six different Macbeth targets...not even remotely alike; they age differently"
    • instrument calibration
      • e.g., must be certain that densitometers are properly calibrated if we were to rely upon these numbers to reconstruct images
      • monitor densitometers must also be properly calibrated for soft proofing: one photographer asked, "How consistent is my X-Rite? Does it really measure consistently?" We're taking on faith that calibrator as delivered from X-Rite is working properly-should we have another instrument as a redundant check?
      • aging of lamps, particularly LEDs in new scanners, a concern: how trustworthy is auto-calibration? How do we assess consistency of lighting over the long term for such devices?
    • education and training
      • practitioners must use correct targets (e.g., group agreed that color targets that come with the grayscale are useless)
      • we may need an intensive targeting phase to figure out what the best imaging process is...versus telling us about systems...these are different objectives

  • present obstacles to implementation (even if dependencies were adequately managed)
    • two main perceptions:
      • tools not user friendly; ease of use is critical: photographers do not want to "fuss [with] idiosyncrasies on a day-to-day basis" ... "I'm still not convinced that this would be useful day-to-day...if I have to introduce another step...[my boss] will say, 'Forget it'..."
      • task more complex and time-consuming than soft proofing

IV. Other Areas for Research and Development

Although these topics received comparably less attention than performance metrics and targets, each represents a key challenge that demands further exchange of information and possibly collaborative investigation. Selected commentary is recorded below.

  • Documentation of practice/image metadata

  • Conservation guidelines
    • handling of bound materials
      • specifications about cradles and openability of books of particular interest to Octavo, but also to other participants
      • depth of field a challenge
    • tolerable levels of light exposure for various materials

  • Color management
    "...standards chair says that when people talk about color there's never any agreement"

    Use of profiles

    • Macbeth charts fail because they're mapping all bits to a large gamut (which is terrible if you are doing photography of near neutrals). Generic profiles are "virtually useless," but it was then observed that in some cases we're talking about failures in imaging, not color management. To produce consistent quality, the imaging needs to be right: black needs to be black, white needs to be white, and neutral needs to be neutral. If we separate the target issues from the color management concerns, we can make progress.
    • we're reluctant to embed profiles, concerned about what happens down the line...a question of doing something now that gives us 85% accuracy or forgoing ICC and "sticking with numbers"

    Color managed workflows

    • "pin hopes on color management by using consistent lighting techniques"
    • nothing inherently wrong with color management, but clearly problems with spectral responses and matching them to objects and sensors...color management good in a closed-loop system...there is issue with spectral sensitivity of sensors...by its very nature color management is an average and therefore a shortcoming
    • only good way to manage color is to know spectral response..."second-generation" color management systems now being developed are spectral, not averaged: will allow one to build absolute profiles

    Metamerism

    • differences between reflective spectra of commercial targets and many of the sources being digitized from libraries and museums
    • differences in ambient lighting cause problems in color matching, particularly in printing
      • create a guide print (Fuji Pictrography, high-end Epson) that is matched to the image on the studio monitors; send the RGB file to the printer: the "huge assumption" is that the print will be viewed under the same light source used in the studio; alternative: create color patches digitally matched to the studio's system...the patches lock the images into the monitor...correct reproduction on screen to match the original...also create a "dead-neutral" 11-step grayscale in Photoshop
      • found that proofing lights vary...went to GTI tubes instead of Macbeth tubes (noticeable differences in yellows/greens)
    • the issue of false color...CCDs have very steep cutoff filters...which presumes a certain light source...none render yellow, red, or orange accurately; one can read a target, then photograph an organic dye or pigment...a red spike occurs at 640-680 nanometers...film sees what the eye doesn't...a roundabout way of saying that color targets are useless
    • Franziska Frey stated that "the only way to get around this problem" will be through the development and deployment of multi-channel, multi-spectral cameras, such as the VASARI Imaging system in use at the National Gallery in London[3]

  • File size at capture (as baseline for quality)
    • cases in which imaging guidelines mandate a specific size without paying attention to the size of the original; who else has thought about this?
      • possible to tailor specifications to output project-by-project
      • Boston Photo interviews client: stated use requirements drive the file size
      • the stock photography market has established some standard measures for file size, e.g., 18-20MB suitable for print-the 8 x 10 standard (at 300 dpi on output; see the arithmetic sketch after this list)...agreement that this is not only viable, but AIC reported cases in which 20MB images were found to be cleaner than 80MB counterparts-Don noted that the imaging industry refers to this phenomenon as "empty magnification"
      • agreement that "what we need is [a] way to get bureaucrats away from monolithic thinking"
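The "8 x 10 standard" follows directly from uncompressed file-size arithmetic. A small sketch for illustration (the formula is standard; the function name is hypothetical):

    def uncompressed_size_mb(width_in, height_in, dpi=300, channels=3,
                             bytes_per_sample=1):
        """Uncompressed image size in megabytes (1 MB = 2**20 bytes)."""
        pixels = (width_in * dpi) * (height_in * dpi)
        return pixels * channels * bytes_per_sample / 2 ** 20

    # An 8 x 10 inch print at 300 dpi, 24-bit RGB:
    print(f"{uncompressed_size_mb(8, 10):.1f} MB")  # ~20.6 MB, i.e., the 18-20MB rule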

V. Next steps

The meeting produced strong consensus to move forward on several fronts, including communications, testing, and lobbying.

  1. Set up listserv

  2. Conduct experiment to test the viability of the ISO 16067 sanctioned tools that are currently available (slanted-edge target and Auto SFR software)
    • purposes: to identify differences in performance characteristics among many scanners in different environments in scanning the same source; to specify the peak performance of systems in their current configurations; to see if the tool verifies subjective assessments and vice versa
    • methodology: scan target that Don gave you at the meeting; send image(s) to Don; Don analyzes data-in part, to identify clues as to how/why variances occur-then reports results to individuals and to group
    • perceived benefits: to generate a thought process; as noted by Bill Comstock, "we're a group that uses similar devices...not other people using [the] same set...would be interesting to see how these devices perform in different environments [and to] provide us with clues about what affects system performance..."
    • depending upon the level of interest, expand the source material to include not only the slanted-edge target, but also a challenging (but typical) pictorial image that can be distributed to all parties: it was suggested that a worthwhile goal would be to define a standard instruction set of images that could be used to enlighten this group and other interested parties, such as standards organizations

  3. Facilitate standards development. Our objectives are to inform standards committees about the volume of work being done in our community, and to prove (perhaps by completing item #2 above) there is a practical benefit to integrating imaging performance metrics into the production workflows established to create quality images of persistent value. (Our belief is that too much standards work is consumer-based.) Our working premise is that standards groups first need community acceptance, then the money to develop tools and applications.

    With help from Don Williams, Franziska Frey will draft a letter, which the group will endorse, to Ken Parulski or other standards chairs.

  4. Develop targets of materials (spectral response, contrast, size) similar to the range of historic materials we are asked to digitize. (No specific proposals were made, but several participants agreed to begin exchanging information and to collaborate if possible.)

  5. Meet again! Alan Newman offered to host the next meeting at the Art Institute of Chicago: some aspect(s) of managing color likely to be the main topic of discussion and/or training.

Appendix I: List of Participants

Sally Bjork, Photographer, University of Michigan

Steve Chapman, Preservation Librarian for Digital Initiatives, Harvard University Library

Bill Comstock, Manager, Digital Imaging Group and Photography Studio, Harvard College Library

Franziska Frey, Research Scientist, Image Permanence Institute

Hans Hansen, Chief Technology Officer, Octavo

Dan Johnston, Senior Photographer (and manager), University of California, Berkeley

Erik Landsberg, Senior Fine Art Photographer (and manager), Museum of Modern Art

Lee Mandell, Programmer/Analyst, Harvard University Library

David Mathews, Photographer, Museum of Fine Arts, Boston

Jan Merrill-Oldham, Malloy-Rabinowitz Preservation Librarian, Harvard University Library

Phil Michel, Digital Conversion Specialist (and manager), Library of Congress Prints & Photographs

Stephanie Mitchell, Photographer, Harvard College Library

Ron Murray, Scientific Photographer, Librarian, Library of Congress Preservation Directorate

Alan Newman, Executive Director, Imaging Department, Art Institute of Chicago

Steve Puglia, Preservation and Imaging Specialist, National Archives and Records Administration

David Remington, Photographer, Harvard College Library

Mark Roosa, Director for Preservation, Library of Congress

David Semperger, Project Manager, Boston Photo, Inc.

Peter Siegel, Manager, Digital Imaging and Photography, Harvard University Art Museums/Harvard Fine Arts Library

Don Williams, Imaging Scientist, Eastman Kodak Company, Imaging Research and Development

John Woolf, Digital Imaging Specialist (and manager), Museum of Fine Arts, Boston

Appendix II: Institutional Practices

Photographers from eleven studios-2 commercial, 4 museum, 5 library/archives-generously set aside the time to respond to a questionnaire distributed in advance of the meeting. The responses to these 22 questions are summarized below. If there is interest among the wider community (e.g., other practitioners, DLF, NINCH, funding agencies), these responses can be mined further as the basis for a report or article.

experience

  • photo-intermediates: began scanning 1982-2000; average start date is 1993
  • direct digital photography: began 1995-2000; average start date is 1997
  • began working with digital still images (individually) in 1990-1998; average start date is 1993

source materials

  • varied...photographic prints are largest category of materials that have been selected for digitization to date; majority of studios digitizing several categories of materials (3D, fine art, photographic and photo-mechanical prints, plates, vintage film, film intermediates, papyri, and even "live" shots of exhibitions and people); two studios, however, have specialized in certain formats: LC Prints & Photographs with film scanning; Octavo with book pages

equipment

  • sensors: eight types in use, but two predominate: trilinear sensor in 100% of studios; one-chip color area sensor in 45%; three-chip color area and color-sequential area cameras less common; least common: monochrome sensors (linear and area), CMOS, and PMTs
  • lighting: varies across the board: fluorescent is top choice in four studios; tungsten top choice in five; strobe top in two; seven of eleven studios use at least two types of lighting

system monitoring

  • 10 of 11 studios use targets to monitor systems, including color calibration of monitors; several also use targets to create ICC profiles that are subsequently embedded in images

variations in quality cited due to...

  • environmental factors (vibration, lighting)
  • operator error
  • flawed systems (drifting during warm-up, imprecise/non-uniform lighting, imprecise focus, soft-proofing inconsistencies: the CCD "sees" the spectral reflectance of organic dyes and pigments differently than the human eye, film, or a spectrophotometer)

specifications for archival images

format
TIFF used in 97.3% of all cases (used exclusively in 8 of 11 studios); other formats: BMP, JPG

color space
RGB predominates; exceptions: sRGB used in two studios; Adobe RGB (1998) used for high-resolution "production archival" files in several studios

compression
no compression used in 9 of 11 studios; one commercial studio occasionally saves archival images as "maximum quality" JPEGs (at the request of clients); one library studio uses LZW compression approximately 30% of the time

file size (color images)

... largest in museum studios/Octavo:
>100 MB 100% of Octavo's and Harvard Art Museums' images
51-100 MB 100% of MoMA's and AIC's images

in one of the museums (MFA) and all of the library and archives studios (including Boston Photo, which typically serves these clients), file sizes are more widely distributed, with the majority falling below 50 MB:
> 100 MB 4%
51-100 MB 38%
21-50 MB 25%
< 20 MB 33%

file size (grayscale images-8 studios reporting; three digitizing only in color)
> 100 MB 1%
51-100 MB 2%
21-50 MB 26%
< 20 MB 71%

tone reproduction
set to aimpoint values: 7 studios; to photographer's choice: 8 studios; both: 4 studios

note: this category needs further analysis...need to relate methodology to source materials

with targets included in-frame (grayscale and color bars predominate; sometimes rulers)
10 of 11 studios include targets (LC did not, but 90% of their work was film scanning)

specifications for delivery images

format
JPEG 52% of total (used by 10 of the 10 studios that create deliverables)
TIFF 28%
SID 10% of total (but used for 99% of images created at the Univ. of Michigan)
PCD 6% of total (but used for 60% of images created by Boston Photo)
GIF 4% of total (but used for 60% of images created by NARA)

for page images of books

PDF used for 100% of Octavo's products: interface design has been very difficult; Octavo has been evaluating other options; one photographer encouraged them, and the rest of the group, to look at JPEG2000 as a possible solution for many delivery applications

with embedded ICC profile
always: 30%, never: 30%, sometimes: 40%

with targets included in-frame
yes: 7 studios, no: 4

Responses to question, "What is your definition of an 'archival' digital image?"

overarching
It can be used for a defined set of purposes and functions for a defined period of time. Its attributes can be accessed, managed, and maintained. It might be permanent, a best-copy surrogate, or a version of last resort.

An image that meets required standards, set by departmental policy or project.

pictorial quality
An archival digital image is a digital image that is created with sufficient quality so as to act as a replacement if the original is lost or damaged.

An "archival" digital image is a high-quality, lossless image that maintains the highest possible fidelity to the original. It retains the best possible color and grayscale information (no clipping). Its format is a widely accepted non-proprietary format, and it allows metadata to be associated with the file to make images easier to retrieve and understand in the future.

... I place considerable value on the following principle: Other things being equal, the best master file is the one that represents the appearance of the original most precisely and completely. I sometimes imagine a model capture system which could be teamed with a model output (printing) system, so that the output from the printing system would exactly reproduce the appearance of the original document, all without any intervention of human judgment. The image should accurately represent the subject or means should be available (the profile, target, etc.) to apply tonal and color corrections.

RGB, TIFF file scanned or captured with prepress quality device, stored on optical media (CD-R or DVD-R).

longevity
It is an image that meets certain criteria of longevity and integrity that we have defined based upon the state of technology and a consensus of opinion in our particular moment in time and history.

Will open correctly in ten years.

Uncompressed TIFF with color scales and grayscales, burned to CD with a backup CD offsite. In general: an application-independent and platform-independent file with references that indicate its characteristics, stored on "permanent media" with approximately 8-10 years life expectancy. Redundant storage with the expectation to migrate both media and storage formats within 5-7 years of archive inception.

openness, capability to generate other files
Suggest using the term "master image file"; "archival image file" is an oxymoron. Cambridge Dictionary defines master as "an original copy of something, such as a recording or film, from which copies can be made."

The parent of all derivatives, stored on the most modern media redundantly as per IT best practices. 80-100MB TIFF or smaller depending upon size of original. (AN/AIC)

We define "archival" digital images as the raw data of highest resolution, without compression, with ICC color profiling.

The digital masters for our image capture projects are by definition our "archival images." They are scanned according to capture guidelines intended to make them useful for many different purposes and to make them easy to manage; and administrative metadata for each image is maintained in a project database. Ordinarily the digital master is the capture file as scanned, with only cropping and rotation allowed as processing. However, I find a scan from a slide or negative usually requires additional processing to be useful, such as fine color adjustments, dust spotting, etc. In this case a second digital master is archived.

A file that can be returned to for future derivative generation with minimal processing. Stored as either raw scan data or in a sufficiently wide gamut of color space as to assure no clipping. The image should accurately represent the subject or means should be available (the profile, target, etc.) to apply tonal and color corrections. File format should be non-proprietary and widely-used accepted industry standard with associated metadata.

Notes

1. In this context, "imaging performance" refers to the actual outputs of a system component (such as a lens) or of the entire imaging system.

2. The Museum of Fine Arts, for example, selected a 6-megapixel area-array camera with a clean signal over available line-array camera backs. They reasoned that, if needed, an acceptable larger file could be created with interpolation.

3. See Francisco H. Imai and Roy S. Berns, "High-Resolution Multi-Spectral Image Archives - A Hybrid Approach," Proceedings of the Sixth Color Imaging Conference, Society for Imaging Science & Technology, Scottsdale, AZ, November 1998. Available online: http://www.cis.rit.edu/people/faculty/berns/PDFs/cic6_Imai.pdf

