


EPA Tagging Results and Future Directions

Back in January we asked people to use del.icio.us to tag a sample of 32 documents taken from the 100 EPA documents posted by the Government Printing Office (GPO) to http://www.gpoaccess.gov/harvesting/index.html.
We asked people to tag documents from 1/18/2008 through /18/2008. A spreadsheet of the results is available at http://spreadsheets.google.com/pub?key=pybymZBlZ80PVat2ggty2GA.
This brief article informally discusses some of our results, offers some lessons learned, and offers suggestions for future projects. Finally, a short list of articles on other research relating to tagging is presented.

1) Findings

  • Number of tagged documents – 31
  • Average number of people tagging a given document – 2.5
  • Highest number of taggers for a document – 8, for the document “Environmental Results Under EPA Assistance Agreements”
  • Average number of deduplicated tags per document – 11.25
  • Number of documents with descriptions – 31, with a majority of documents having more than one human-generated description.
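For illustration, per-document statistics like those above could be computed from raw bookmark exports. This is a hypothetical sketch, not our actual data or the format del.icio.us exports; the records below are invented examples:

```python
# Hypothetical (document, tagger, tag) records; real exports would
# need to be parsed into this shape first.
records = [
    ("Air Sealing", "mkvs", "energystar"),
    ("Air Sealing", "keyvowel", "energystar"),
    ("Air Sealing", "keyvowel", "epa"),
]

# Collect the distinct taggers and deduplicated tags for each document.
docs = {}
for doc, user, tag in records:
    entry = docs.setdefault(doc, {"taggers": set(), "tags": set()})
    entry["taggers"].add(user)
    entry["tags"].add(tag)

for doc, entry in docs.items():
    print(doc, "-", len(entry["taggers"]), "taggers,",
          len(entry["tags"]), "deduplicated tags")
```

Averaging `len(entry["tags"])` across documents gives figures like the "average number of deduplicated tags per document" reported above.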

2) Some Promising Results

While we would have liked to see more participation (see below under “study limitations”), these initial results are somewhat positive. There is some interest in tagging, and tagged documents tended to receive meaningful descriptions beyond what a brief bibliographic record would provide. For example, for the document “Air Sealing: Building Envelope Improvements”, we have the following descriptions from five users:

* Mount Desert Spring Water was able to win a bid to provide bottled water and water coolers to the University of Maine. Mount Desert Spring Water was successful because the water coolers it provided were energy efficient and the lowest cost to the Universi – samchap

* Describes the benefits of proper air sealing for homes. EPA awards the EnergyStar when legal minimum standards are exceeded. – mkvs

* Conserving energy in your house by having it sealed correctly – bookswoman

* “Air sealing the building envelope is one of the most critical features of an energy efficient home.” “25-40% of energy” “ENERGY STAR qualified homes, constructed to exceed [building] codes with air sealing, can offer a better quality product.” – keyvowel

* This Energy Star news release describes ways homeowners can reduce home heating and cooling costs by implementing air sealing techniques. – tadamich

Without question, the first description is problematic, but the other four agree on what this document is about AND provide more relevant information than a brief bibliographic record.

For the most part, the tags we got were also meaningful and descriptive. Staying with the document “Air Sealing”, we have the following tags:

Air, air-sealing, airsealing, building-insulation, efficient, energy,
energy-efficiency, Energy-Star-Branding, energyconservation, energystar, epa, EPA-advertising, globalwarming, greenhousegases, home-building, home-building-techniques, home-construction, home-improvement, homes, hvac, indoor, leakage, money-saving, quality, sealing, ventilation

Contrast that with a brief bibliographic record that lists only title, agency, and URL. How would people know that this document is part of the EnergyStar initiative, or that it relates to home building or energy efficiency? In this instance, and in a number of other project documents, tagging added clear value.

3) Limitations of current study

Our promising results were limited by three factors, the most important of which was the lack of participation. We estimate that about ten people participated in our tagging project. The available research on tagging is firm in stating that good social tagging requires many users: some say 100 or so, others suggest higher numbers. Our numbers are clearly too low. There were also too many instances (12) in which a document was tagged by a single user. This can greatly bias how a document gets tagged. Consider if the only description of “Air Sealing” had been the mistaken one about water coolers: that would have been worse than useless. Still, even in this instance, a user pulling up this document while searching for water coolers could have provided a more accurate description.

The low number of taggers also made it difficult to see how much tag agreement existed among the various taggers.

Another problem was self-inflicted. We forgot to instruct people on tag construction. These were our original instructions:

1) Visit http://www.archive.org/search.php?query=epapilotproject and go to a document on the list. Open the pdf file in a separate browser window.
2) In del.icio.us, tag the page for the Internet Archive record (i.e. not the PDF file) after examining the PDF file.
3) In the del.icio.us “notes” field, write a one or two sentence description of what the document is about.
4) In the tags field, please use epapilotproject, for:freegovinfo and then any tags that you feel describe this document.

del.icio.us uses a space-separated tag system. In other words, a space begins a new tag. So tagging something as “air quality” results in the two tags of “air” and “quality”, and not the more helpful tag of “air quality”. This resulted in some of the tagging becoming meaningless. If we had asked people to put dots or dashes in multi-word tags, we would have gotten more meaningful tags. We still got some useful tags because some of our taggers were used to the del.icio.us system, but we shouldn’t have assumed that everyone tagging would know how to construct multi-word tags in del.icio.us. On the other hand, this problem might have been less noticeable if we had more taggers per document.
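The splitting behavior described above can be sketched in a few lines. This is an illustration of whitespace splitting generally, not del.icio.us’s actual parsing code:

```python
def parse_tags(tag_field: str) -> list[str]:
    """Split a tag field on whitespace, the way del.icio.us did."""
    return tag_field.split()

# "air quality" becomes two unrelated tags...
print(parse_tags("air quality epa"))    # ['air', 'quality', 'epa']
# ...while a hyphen convention keeps the concept intact.
print(parse_tags("air-quality epa"))    # ['air-quality', 'epa']
```

This is why instructing taggers up front to join multi-word concepts with a dash or dot matters: the splitting happens silently, and the damage only shows up later when the tags are compiled.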

Our final problem is one we think could be avoided in future projects: people tagging different files under the same document title. We asked people to bookmark the Internet Archive page for a given document, which has a link to the PDF file. We specifically asked people NOT to tag the PDF file because del.icio.us doesn’t populate the title field of bookmarked PDFs. But one person in our project consistently bookmarked a document’s PDF file instead of the Internet Archive page. This separated that person’s tagging from everyone else’s and made it more difficult to compile tagging information for each document.

4) What next? Some suggestions

Our findings indicate that tagging has the potential to add value to web-harvested documents that do not receive full cataloging, but for this benefit to be fully realized, there must be more taggers. When we realized we didn’t have the number of taggers we wanted, we turned to the literature and found the articles listed below under “References Consulted.” They offer some interesting guidance for other document-tagging efforts.

While all of the papers below talked about user motivation, I think Tim Spalding said it best in a post titled “When tags work and when they don’t: Amazon and LibraryThing”:

“Something is going on here—something with broad implications for tagging, classification and “Web 2.0” commerce. There are a couple of lessons, but the most important is this: Tagging works well when people tag “their” stuff, but it fails when they’re asked to do it to “someone else’s” stuff. You can’t get your customers to organize your products, unless you give them a very good incentive. We all make our beds, but nobody volunteers to fluff pillows at the local Sheraton.”

The EPA documents are sort of like fluffing pillows at the local Sheraton, to me at least. My primary interest isn’t environmental documents, and EPA documents are not a major component of my library’s depository collection. In addition, our particular sample was unintentionally heavy on flyers, applications, and brochures. It could be that another agency’s documents, say NASA’s or DoD’s, might get more attention.

There’s another angle too. In my anecdotal experience, librarians don’t see web material as theirs, so they don’t spend much processing time on it. Or, if they are concerned about web documents, perhaps their administrations are not. So how could we make librarians owners who think of web-harvested materials as “their stuff,” so they’ll make their “documents beds”? A few suggestions follow:

1) For the EPA documents, GPO could partner with libraries that do have a strong environmental collection. Perhaps candidate libraries could be determined through item selection analysis.

2) GPO might wish to consider doing a depository survey to see what agency depositories would most like to see web-harvested. The survey could include a question asking libraries if they would tag if the desired content was harvested.

There wouldn’t have to be a commitment to tag every document, only some of them.

While GPO should continue with web harvesting no matter what, we wouldn’t blame them for not moving forward with a documents tagging initiative if the depository community failed to register interest in such a project.

3) If GPO re-harvests EPA or moves on to another agency, it should consider setting up RSS feeds for newly harvested documents. Subject specialists from inside and outside the library community could take part in tagging. Again, GPO would need to start with some broadly popular agencies to have a chance of recruiting a significant number of taggers.

4) If GPO or another organization does a large-scale tagging project, significant thought should go into tagging conventions. Not the vocabulary itself — research seems to show that once an item reaches 100 tags or so, the proportion of tags stays constant. That is to say, agreed-upon terms appear to predominate over idiosyncratic or spam tags (see Golder and Huberman below for details). What needs to be spelled out is how multi-word tags should be constructed — is it air-quality, air.quality, or air_quality? They all mean the same thing, but del.icio.us and other tagging services interpret them differently. A consistent multi-word marker, or a choice of tagging site that supports spaces inside tags, would make any tagging project go more smoothly.
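If a convention can’t be enforced up front, variant separators could at least be reconciled when tags are compiled. This is a hypothetical normalizer, a sketch of one possible cleanup step rather than anything del.icio.us or GPO provides:

```python
import re

def normalize_tag(tag: str) -> str:
    """Collapse common multi-word separators (hyphen, dot, underscore)
    to a single canonical form, so air-quality, air.quality, and
    air_quality all count as the same tag."""
    return re.sub(r"[.\-_]+", "-", tag.strip().lower())

variants = ["air-quality", "air.quality", "Air_Quality"]
print({normalize_tag(t) for t in variants})  # {'air-quality'}
```

Normalization after the fact can merge separator variants like these, but it can’t recover a multi-word concept that was already split into separate tags at entry time, which is why an up-front convention is still the better fix.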

These are our thoughts. What are yours? Look at our spreadsheet. Check out the item pages on del.icio.us and read the articles below. Then let us know what you think about the future of social tagging for government documents.

References Consulted

– “HT06, Tagging Paper, Taxonomy, Flickr, Academic Article, ToRead” by Cameron Marlow, Mor Naaman, danah boyd, Marc Davis http://www.danah.org/papers/Hypertext2006.pdf

– The Structure of Collaborative Tagging Systems
by Scott A. Golder and Bernardo A. Huberman
http://www.hpl.hp.com/research/idl/papers/tags/
http://www.hpl.hp.com/research/idl/papers/tags/tags.pdf

– “Can Social Bookmarking Improve Web Search?” by Paul Heymann, Georgia Koutrika, and Hector Garcia-Molina
http://heymann.stanford.edu/improvewebsearch.html
http://dbpubs.stanford.edu/pub/showDoc.Fulltext?lang=en&doc=2008-2&format=pdf&compression=&name=2008-2.pdf

– “When tags work and when they don’t: Amazon and LibraryThing”
Thingology Blog, posted by Tim Spalding Tuesday, February 20, 2007
http://www.librarything.com/thingology/2007/02/when-tags-works-and-when-they-dont.php

CC BY-NC-SA 4.0 This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

