Joseph J. Esposito, an independent management consultant to for-profit and not-for-profit clients, has written a nice post at the Scholarly Kitchen about disintermediation:
- Disintermediation and Its Discontents: Publishers, Libraries, and the Value Chain, by Joseph Esposito, Scholarly Kitchen (April 18, 2011).
In this piece, Joe describes the “value chain” in scholarly publishing in which each link in the value chain adds something to the process: the author creates value by originating an idea and content; the publisher adds value through editorial selection, refinement, production, and marketing; and so forth. Libraries, too, contribute to the value chain:
Libraries are selective; they help guide readers to materials of higher quality. Libraries have purchasing power, which saves money for readers. Libraries provide a suite of tools for organizing publications and helping readers find what they are looking for. Libraries provide so much value that most people want them to be bigger.
Disintermediation occurs when one link in the chain is bypassed. A link may be bypassed because it has lost value, but bypassing can happen for other reasons as well, and it can destroy the value that the link provided.
How does this relate to the FDLP and government information? As we’ve noted here and here and elsewhere before, users of government information no longer need to go through FDLP libraries to get government information. And, as many others (including Ithaka S+R) have noted, FDLP libraries no longer have a monopoly on free government information, have no “purchasing power” advantage, and so do not save readers money. That particular value of libraries in the information value chain no longer exists and that results in government information users bypassing libraries for their government information needs.
Some (including Ithaka S+R) have concluded from this that libraries can rely on online access to government information stored on government-controlled web servers and create a service-only model of libraries without collections.
But this view overlooks the other important values that libraries add to the value chain. These include libraries’ selection of materials of interest to particular user communities, their tools for organizing information to make it easier to discover, their tools for making information easier to use, and their commitments to freedom to read, user privacy, and long-term free public access to information. It also overlooks the ability that libraries have (but which government agencies for the most part do not have) to build collections that combine government information with non-government information. In short, it overlooks the importance of digital library collections.
Joe makes another good point that is equally relevant to FDLP libraries: that, when we see a link in the value chain being bypassed, we should be asking what effect management decisions had in making that link lose its value.
[D]isintermediation lends itself to a version of technological determinism. Because the Internet makes it possible for an author to have a direct connection to a reader, therefore, it is assumed, authors inevitably will connect directly to readers. This ignores the role of human agency.
…[D]isintermediation, in other words, is an outcome; but the input is management strategy. Rather than talking about disintermediation, we really should be talking about the things that affect management decision-making and strategy and to take control of them.
Thus, shouldn’t we be asking, If users are bypassing libraries for their government information needs, what decisions are libraries making that make the library of less value to the user?
In some cases users may not be aware of value lost. If governments fail to preserve all the information that users need, users will not be aware of this until they find information missing — at which point it will be too late.
In other cases, users may find that getting some information easily (e.g., using Google to search the web and find some possibly-relevant government information) is better than using primitive, hard-to-use library tools to find some possibly-more-relevant government information. Librarians may say that users “should” use libraries and that they would find “better” information if they did; but making that argument will not attract users. Providing better services will.
If we look at what services users do use, we will see that they tend to be services built on top of digital collections. ProQuest, for example, does not build indexes that point to government web servers; it collects digital information and builds services on top of that.
In short, libraries could create new value in the government information life-cycle by building robust services on top of rich digital collections (FDLP: Services and Collections). Or, as Joe says,
[I] think of it as the creation of a new value chain, with the stress being given to the new value that is created.
What works and what does not
In this post we’ll examine what we can learn from the Ithaka S+R Environmental Scan about existing government information models that are working and those that are not working.
The trends that emerge in the report are, for the most part, not surprising to anyone who has followed developments in information access and libraries over the last two decades. But two developments stand out because of their huge significance to libraries and because they are developments that libraries can control. This is where our choices can make a difference to the future of libraries and the future of free access to government information.
The first development is that of disintermediation. This is the process by which users are increasingly finding that they do not need an intermediary (libraries) to identify, locate, and make use of relevant information. The report describes this development repeatedly in all types of libraries with all types of users.
The second development is the challenge of long-term preservation of digital information, which the report describes in some detail in one of its longest sections (p. 24-32).
Which responses to these developments have been effective? Which work and which do not?
One of the strategies librarians most often use and advocate to counter the trend toward disintermediation is re-intermediation. There are two ways this has been done.
1. Force people to come to you. One way to re-intermediate is to change the environment so that people will once again have to use libraries. Although most librarians do not publicly advocate such an approach, it is the foundation underneath what libraries do when they license access to commercial services (e.g., e-journals, abstracting and indexing services, audiobooks, etc.). In these cases, libraries benefit from the restricted access to information imposed by publishers, which makes libraries a seemingly necessary intermediary. The federal government also uses this model, making some of its information available only for a fee. In some cases FDLP libraries get some sort of free but restricted access to these services; in others, libraries have to subscribe to them.
GPO itself has used this model to create a new niche for itself in the digital age. At a time when printing was becoming irrelevant and a government printing office therefore unnecessary, GPO re-intermediated itself into the life cycle of government information by making it virtually impossible for the public to get whole classes of government information without going directly to GPO. It did this partly by refusing to deposit digital government information in FDLP libraries.
Effectiveness. This approach has a short-term, superficial effectiveness, since it explicitly re-intermediates the library, but we already know that in the long term it fails. The report describes users who access licensed services remotely but who do not appear to realize that the library is providing an intermediary service; the report implies that these users apparently believe they no longer need the library. In addition, negotiating and enforcing contracts that restrict access to information is not a role that requires librarians or a library. We can conclude that this approach will inevitably lead to licensing services being provided more efficiently (and less expensively) by a business or legal office of a parent institution. We can also predict that, if we rely on this approach, it will encourage more fee-based government information services (e.g., DARTS, National Climatic Data Center Online Document Library, the Homeland Security Digital Library, Public Health Reports, USA Trade Online, etc.).
In the case of GPO, its success as a sole source of access appears to be hastening, not ending, its own disintermediation. Agencies appear even less likely to use GPO as a “publisher.” This results in more information being “fugitive” (outside of GPO and Title 44 control). And, as a sole provider of digital preservation, GPO has no Congressional guarantee of long-term funding to preserve everything forever nor to provide free access forever.
2. Provide New Services. The second way to re-intermediate is more commonly advocated publicly by librarians: the idea of creating new services that will attract users. The report mentions many of these “new services”: promoting “the library as a place” and as an “information commons,” providing internet access and help using the internet, providing computers and software and help using them for specific tasks such as job-hunting, and, in general, offering “higher-value services targeting the particular needs of local constituents.” In the area of government information specifically, some librarians strongly advocate libraries promoting themselves as an intermediary between the public and faceless government bureaucracies. Some suggest that government information specialists will do this; others say that all librarians will be trained in government information and that there will be no more specialists.
Effectiveness. The report provides little or no evidence that these services are effective in attracting or maintaining users, nor that librarians are uniquely qualified to offer such services. In fact, the evidence in the report’s section on “Changing research behaviors and use of libraries” suggests that users are content with the information they can gather without the help of libraries or librarians. Although librarians may wish that users would ask for help and may believe that librarians could help users find better information, there is no reason to believe that users will change their behavior. This approach is little more than an unsubstantiated hope that users will turn to intermediaries at a time when all the evidence demonstrates that users prefer disintermediation.
“Local loading” and building digital collections
The other strategy described by the report is the building of local digital collections, which the report refers to as “local loading.” This is a relatively new approach, since many libraries have avoided building digital collections or moved slowly to do so. The report clearly states the advantages of doing so as the motivation for choosing this strategy. These include:
- researchers’ need for dynamic data repositories;
- the need for curation and digital preservation;
- the need of users of all kinds to link documents and data;
- users’ need for information systems enhanced beyond what publishers, producers, and distributors provide;
- users’ desire for integrated selections of quality resources from different sources;
- colleges’ need for enhanced course management applications;
- the need for better ILL and citation management tools; and
- libraries’ desire to offer better services and value to their users.
Effectiveness. Evidence suggests that organizations that select and acquire digital content and build digital services on top of those collections are successful. Although the report mentions some of these, it neglects to mention some of the key players in this area, giving, perhaps, a diminished impression of their importance. Some of the successful projects include commercial information vendors (e.g., LexisNexis, ProQuest), non-profits (e.g., the Sunlight Foundation, OpenCongress, Govtrack), universities (e.g., the University of Virginia’s historical census browser, the Public Papers of the Presidents project at the University of Michigan, and the University of Wisconsin’s “Foreign Relations of the US”), consortia (e.g., CIC, LOCKSS, HathiTrust, OCUL), governments (e.g., data.gov, Thomas, FDsys), special projects (e.g., the National Security Archive at George Washington University, collections at the Federation of American Scientists, OpenCRS), as well as projects in the sciences such as arXiv and the collections at the Los Alamos National Laboratory. Perhaps the biggest and most influential in this area are the Google scanning project and the HathiTrust.
All of these share the common strategy of building unique services on top of digital collections that they curate. One of the most important lessons of these projects is that their successes are based on their providing something that no one else does. This is not an artificial “re-intermediation” or a mere hope that users will recognize their need for librarians, but an actual, concrete provision of collections and services that attract users because of their actual value to users.
Benefits to users
It is worth noting that those who oppose building digital library collections often argue that the user does not recognize or care where digital information resides as long as they can access it. This argument misses two essential characteristics of local collections, however.
First, users will see the advantage of using the local digital library collection when the library does something with the content that the remote provider does not (and often cannot) do. Libraries with their own digital collections can provide search and discovery tools that integrate information from many sources; they can provide computational analysis and data mining tools; and they can provide APIs that reflect their users’ needs for using and repurposing information. Perhaps most importantly, they can combine information from many sources to build unique collections that reflect the interests and needs of their designated user communities. This will make it easier for their users to find what they need without having to sift through irrelevant material on the open web and without having to visit multiple, isolated, proprietary, deep-web/hidden-web sites.
Second, users will derive the benefit of locally maintained collections when libraries keep something online that would otherwise have disappeared. This is where libraries can address the second major trend we identified above: digital preservation. The Ithaka S+R report, unfortunately, confuses this issue by bringing up the old cliché of “access vs. ownership.” This hackneyed adage is, today, out of date and misleading. It is a false dichotomy, because we cannot ensure access unless we have control over the content. Simple “access” to information over which we have no control is at the mercy of those who control the information. In addition, preservation and access are inseparable in the digital world, and preservation is inseparable from “ownership,” i.e., control. When something is removed from the web (or altered), whether it is accidental or intentional, whether it is done for economic or political reasons, whether it is the “right” decision for one organization or not, the result is the same: the user can no longer access the information he or she needs.
Libraries can prevent information from disappearing and can ensure its long-term preservation. Even smaller libraries that do not see themselves as having long-term preservation as a primary mission will realize that their size often means that their community has information needs that are overlooked by large, monolithic preservation projects. Cooperation and coordination of many small, medium, and large collections will help ensure that information needed by even a small group of information users will not fall through the preservation cracks.
While surveys may not reveal that users understand this as a need today, users will appreciate and understand it in the long run. In the short term, it is librarians who must act now with this long-term vision in mind. Acting later will be too late. If we fail today, users of tomorrow will recognize our failure and see no reason to support institutions that did not look after the interests of their communities.
As the Ithaka Findings document notes, “GPO cannot on its own serve as the single trusted party to ensure the preservation and integrity of the digital and digitized FDLP collections.” The only question that leaves us with is, Who will work with GPO to preserve and ensure free access for the long-term?
Attempting to make libraries relevant to users in an era of disintermediation through artificial re-intermediation will fail. Building local digital collections will make libraries relevant to users (in ways that no one else can match) and accomplish digital preservation (which no one else can accomplish alone).