This is an amazing offer from Brewster Kahle and the Internet Archive. Kahle just wrote a letter to the House Judiciary Committee's Subcommittee on Courts, Intellectual Property, and the Internet stating unequivocally that the Internet Archive will “archive and host — for free, forever, and without restriction on access to the public — all records contained in PACER.” The “Public Access to Court Electronic Records” (PACER) system is the supposedly publicly accessible system of federal court records, but it charges exorbitant download fees that, for all intents and purposes, block meaningful access to those records. With this letter, the whole system could become actually accessible, for free and in perpetuity!
By this submission, the Internet Archive would like to clearly state to the Judiciary Committee, as well as to the Administrative Office of the U.S. Courts and the Judicial Conference of the United States, that we would be delighted to archive and host — for free, forever, and without restriction on access to the public — all records contained in PACER…
In order to recognize the vision of universal free access to public court records, the Federal Judiciary would essentially have to do nothing. We are experts at “crawling” online databases in an efficient and careful fashion that does not burden those systems. We are already able to comprehensively crawl PACER from a technical perspective, but the resulting fees would be astronomical. The Federal Judiciary has a Memorandum of Understanding with both the Executive Office for U.S. Trustees and with the Government Printing Office that gives each entity no-fee access for the public benefit. The collection we would provide to the public would be far more comprehensive than the GPO’s current court opinion program, although I must laud that program for providing a digitally-authenticated collection of many opinions.
By making federal judicial dockets available in this manner, the Federal Judiciary would enable free and unlimited public access to all records that exist in PACER, finally living up to the name of the program. In today’s world, public access means access on the Internet. Public access also means that people can work with big data without having to pass a cash register for each document.
The OpenGov Foundation just released their “Statement on Internet Archive Offer to Deliver Free and Perpetual Public Access to PACER” in which they said:
“The vital public information in PACER is the property of the American people. Public information, from laws to court records, should never be locked away behind paywalls, never be stashed behind arbitrary barriers and never be covered in artificial restrictions. Forcing Americans to pay hard-earned money to access public court records is no better than forcing them to pay a poll tax.
“The Internet Archive’s offer to archive and deliver unrestricted public access to PACER for free and forever is the best possible Valentine’s Day gift to the American people. The Internet Archive is proposing a cost-effective and innovative public-private partnership that will finally fix a clear injustice. There is no reason to do anything but accept this offer in a heartbeat.”
This just came through my Twitter feed from @MuckRock. Through a FOIA request which shook it loose from the notoriously difficult NSA, we now have access to NSA’s 2007 Untangling the Web: a guide to Internet research. It kind of reads like a Terry Pratchett novel if Terry were having a psychotic/psychedelic episode. As MuckRock notes, “you don’t have to go very far before this takes a hard turn into ‘Dungeons and Dragons campaign/Classics major’s undergraduate thesis’ territory.” Read on, you’ll thank me later!
And if you’re interested, I collected and cataloged a version for our library. The original NSA link to the document no longer resolves (and it was put up just last year!!), but there’s an archived copy in the Wayback Machine.
The NSA has a well-earned reputation for being one of the tougher agencies to get records out of, making those rare FOIA wins all the sweeter. In the case of Untangling the Web, the agency’s 2007 guide to internet research, the fact that the records in question just so happen to be absolutely insane is just icing on the cake – or as the guide would put it, “the nectar on the ambrosia.”
The End of Term 2016 collection is still going strong, and we continue to receive email from interested folks about how they can help. Much of the content for the EOT crawl has already been collected and some of it is publicly accessible already through our partners. Last month we posted about ways to help the collection process. At this point volunteers are encouraged to help check the archive to see if content has been archived (i.e., do quality assurance (QA) for the crawls).
Here’s how you can help us assure that we’ve collected and archived as thoroughly and completely as possible:
Step 1: Check the Wayback Machine
Search the Internet Archive to see if the URL has already been captured. Please note this is not a specific End of Term collection search and does not include ALL content archived by the End of Term partners, but will be helpful in identifying whether something has been preserved already.
You may type in specific URLs or domains or subdomains, or try a simple keyword search (in Beta!).
1a: Help Perform Quality Assurance
If you do find a site or URL you were looking for, please check whether it was captured completely. A simple way to do this is to click around the archived page: follow the navigation, links, images, etc. We need help identifying parts of sites that the crawlers might have missed, for instance specific documents or pages you are looking for that we haven’t archived. Please note that crawlers are not perfect and cannot archive some content. IA has a good FAQ with information about the challenges crawlers face.
If you do discover something is missing, you can still nominate pages or documents for archiving using the link in step 3 below.
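If you prefer to script step 1 rather than check URLs one at a time, the Wayback Machine exposes a public availability API at archive.org/wayback/available that reports the closest snapshot for a given URL. Below is a minimal Python 3 sketch; the endpoint and its JSON shape are as documented, but the helper names (`parse_availability`, `check_archived`) are my own, and this is an illustrative sketch rather than an official client. Note that, like the Wayback search itself, this is not scoped to the End of Term collection.

```python
# Sketch: ask the Wayback Machine availability API whether a URL has
# already been captured. Assumes Python 3 standard library only.
import json
import urllib.parse
import urllib.request

API = "https://archive.org/wayback/available"

def parse_availability(payload):
    """Return the closest archived snapshot URL from an API response
    dict, or None if no available snapshot was reported."""
    snap = payload.get("archived_snapshots", {}).get("closest")
    if snap and snap.get("available"):
        return snap.get("url")
    return None

def check_archived(url):
    """Query the availability API for the closest snapshot of `url`."""
    query = urllib.parse.urlencode({"url": url})
    with urllib.request.urlopen(f"{API}?{query}", timeout=30) as resp:
        return parse_availability(json.load(resp))
```

For example, `check_archived("epa.gov")` returns a `web.archive.org` snapshot URL if one exists, and `None` otherwise; URLs that come back `None` are good candidates to nominate in step 3.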
Step 2: Check the Nomination Tool
Check the Nomination Tool to see if the URL or site has been nominated already. There are a few ways to do this:
- View all reports here
- Check this list of everything nominated, or search here.
- You can also check our bulk lists here
Step 3: Nominate It!
If you don’t see the URL you were looking for in any of those searches, please nominate it here.
Questions? Please contact the End of Term project at eot-info AT archive DOT org.
I was honored last week to be part of a panel hosted by OpenTheGovernment and the Bauman Foundation to talk about the End of Term project. Other presenters included Jess Kutch at Coworker.org and Micah Altman, Director of Research at MIT Libraries. I talked about what EOT is doing, as well as some of the other great projects, including Climate Mirror, Data Refuge and the Azimuth backup project, working in concert/parallel to preserve federal climate and environmental data.
I thought the Q&A segment was especially interesting because it raised and answered some of the common questions and concerns that EOT receives on a regular basis. I also learned about a cool project called Violation Tracker, a search engine on corporate misconduct. And I was also able to talk a bit about the needs going forward, including the idea of “Information Management Plans” for agencies, similar to the idea of “Data Management Plans” for all federally funded research. I was heartened to know that there is interest in that as a wider policy advocacy effort!
The full recorded meeting can be viewed here from Bauman’s Adobe Connect account.
Here’s more information on the EOT crawl and how you can help.
Coalitions of government, university, and public interest organizations have been working to ensure as much information as possible is preserved and accessible, amid growing concern that important and sensitive government data on climate, labor, and other issues may disappear from the web once the Trump Administration takes office.
Last Thursday, OTG and the Bauman Foundation hosted a meeting of advocates interested in preserving access to government data, and individuals involved in web harvesting efforts. James Jacobs, a government information librarian at Stanford University Library who is working on the End of Term (EOT) web harvest – a joint project between the Internet Archive, the Library of Congress, the Government Publishing Office, and several universities – spoke about the EOT crawl, and explained the various targets of the harvest, including all .gov and .mil web sites, government social media accounts, and more.
Jess Kutch discussed efforts by Coworker.org with Cornell University to preserve information related to workers’ rights and labor protections, and other meeting attendees presented some of their own projects as well. Philip Mattera explained how Good Jobs First is using its Violation Tracker database to scrape and preserve government source material related to corporate misconduct.
Micah Altman, Director of Research at MIT Libraries, presented on the need for libraries and archives to build better infrastructure for the EOT harvest and other projects – including data portals, cloud infrastructure, and technologies that enhance discoverability – so that data and other government information can be made more easily accessible to the public.
PBS NewsHour recently ran this very good piece on the fragility of Internet information and what the Internet Archive is doing about it. This is a good short piece that succinctly explains why in the digital age lots of copies are necessary to keep information safe. And the corollary to lots of copies is that there needs to be lots of libraries continuing the work of digital collection development.
What’s online doesn’t necessarily last forever. Content on the Internet is revised and deleted all the time. Hyperlinks “rot,” and with them goes history, lost in space. With that in mind, Brewster Kahle set out to develop the Internet Archive, a digital library with the mission of preserving all the information on the World Wide Web, for all who wish to explore. Jeffrey Brown reports.