Those who are not yet convinced that we need digital deposit of government information into Federal Depository Library Program (FDLP) libraries should read this:
- Could the Smithsonian No Longer Be Free?, by Stephanie Condon, CBS News (November 11, 2010).
The draft proposal [of President Obama’s bipartisan deficit commission], penned by commission co-chairs Erskine Bowles and Alan Simpson, suggests a number of ways to cut discretionary spending by more than $200 billion in 2015 — including reducing federal funding for the Smithsonian and the National Park Service. The commission co-chairs suggest the Smithsonian Institution should charge admission fees at its 19 museums and the National Zoo, which are all currently free, to make up for the lost funding.
And the response by the Smithsonian:
…The Smithsonian is the national museum and has been open–free of charge–for 164 years. In a sense, Americans already pay to visit the Smithsonian with their tax dollars, which provide about two-thirds of the Smithsonian’s annual budget.
The Commission’s recommendation that the Smithsonian charge admission would create a barrier for many audiences–those who are underserved and who would most benefit from exposure to the Smithsonian’s collections, exhibitions and research….
Imagine that! An institution with a long history of free public access to our heritage being suddenly told, “Sorry, but we just can’t afford to keep access free anymore — even if the public has already paid for it.”
We have been told over and over that GPO has good intentions, and we fully believe that they do — today. They don’t want to charge for government information. They do want to make everything available freely forever. But GPO does not set its own budget. Good intentions are not sufficient to guarantee preservation and access in the digital age.
None of this will be new to regular readers of FGI. What is new is that it is becoming increasingly hard for those who say we do not need digital deposit to credibly make the it-can’t-happen-here argument. That is the argument that a single repository, or even a “small number of dedicated preservation entities,” will be sufficient to guarantee long-term, free, public access.
My work over the last two years has concentrated on evaluating digital repositories using OAIS and TRAC, and I have learned and re-learned a lot of lessons from this work. One lesson is that the single immutable, undeniable thing about digital information is that preservation will not happen by accident. It requires constant attention and intention.
Another lesson is that there is too much important information to rely on a select few organizations to preserve it all. It’s not just about keeping “lots of copies” (though that is an important piece of the puzzle). It is also about lots of communities and lots of decision makers and lots of budgets. The mega-repositories, no matter how big or how well-intentioned, will not preserve (will not be able to afford to preserve) everything. And, inevitably, some of the things they weed, or choose not to select, or are forbidden by policy from collecting, will be valuable to someone. Those “someones” need their community libraries looking out for their needs. (And in the digital age, “community” no longer has to mean local, geographically-bound communities. In the digital age we can have world-wide communities of interest, discipline, subject, etc.)
Another lesson is the importance of non-technological sustainability. When we look at long-term preservation, we have to look at risk of loss. Everyone knows that digital information is fragile and needs attention to ensure it remains usable. This fact is what drives the technological end of digital preservation projects. But there are also economic risks (not having enough money), organizational risks (changing priorities, changing missions), and social risks (being sure we select materials for preservation, and that we do not withdraw those that fail to get “enough” use or generate “cost recovery”). Even GPO’s first electronic transition plan explicitly guaranteed preservation only “as long as usage warrants.”
And every risk has an associated impact. A failure could mean a new cost to recover, reformat, or retrieve damaged or unreadable information. The worst impact is irretrievable loss of information. In between there are other impacts.
In the world of government information these include privatization, imposition of fees, and imposition of restrictions on access or use or re-use. We see these every day (e.g., NTIS, STAT-USA, FBIS, PACER, Public Health Reports, the electronic LCSH, Current Industrial Reports, Schizophrenia Bulletin, and less access in general).
A risk can be small, rare, or unpredictable, but its impact can be catastrophic. This is what happened with the financial industry. The financial ‘quants’ used (bad) data and (flawed) economic models to predict that a failure of the system was very, very unlikely. They decided it was so unlikely that they didn’t need to account for it. By failing to specify the impact of a failure, and failing to have a plan for dealing with it, they gave us a recession, foreclosures, unemployment, record deficits, and near-complete economic collapse. This was their version of “it can’t happen here.”
Apart from the technical risks and impacts that are typically discussed in the digital preservation literature, the non-technological risks are at least as important, if not more so. “Surely, no one would privatize FDSys? Surely, Congress will always fund GPO adequately? Surely, the private sector won’t challenge any special deals that Portico or other GPO ‘partners’ receive? Surely, It Can’t Happen Here?”
With the attack on the Smithsonian, we know it is already happening here.
David Rosenthal, who has written extensively about the technical aspects of digital preservation, has an excellent piece on related issues: The Anonymity of Crowds.
Related FGI posts: