Tag Archives: data rescue
Check out Episode 01 of Data Remediations from the Data Refuge
Check out the great new podcast initiative called Data Remediations from our friends at the Data Refuge. If the first episode is any indication, this podcast will be a must-listen going forward, chock full of interviews with interesting people working on various aspects of data use and preservation. And by the way, Jim Jacobs and I will be on a future episode to talk about our Free Government Information (FGI) efforts.
In this episode, hosts Patricia Kim and Bethany Wiggin introduce Data Remediations, a podcast connecting data with people and places through stories and art. Interviews with Eric Holthaus, Michael Halpern, Denice Ross, Margaret Janz, Tad Schurr, and the Environmental Performance Agency further contextualize the origins of Data Refuge and its storytelling project.
These Advocates Want to Make Sure Our Data Doesn’t Disappear
Here’s another story about data rescue and the preservation of government information, this time from PC Magazine UK. Though the last Data Refuge event was held in Denton, TX in May, and the 2016 End of Term crawl has finished its collection work and will soon make its 200TB of data publicly accessible, there remains much interest (and not a little worry) about the collection and preservation of government information and data. And with stories continuing to come out about US government agencies scrubbing or significantly altering their websites (e.g., this one from the Guardian entitled “Another US agency deletes references to climate change on government website”), this issue will not be going away any time soon.
“Somewhere around 20 percent of government info is web-accessible,” said Jim (sic.) Jacobs, the Federal Government Information Librarian at Stanford University Library. “That’s a fairly large chunk of stuff that’s not available. Though agencies have their own wikis and content management systems, the only time you find out about some of it is if someone FOIAs it.”
To be sure, a great deal of information was indeed captured and now resides on non-government servers. Between Data Refuge events and projects such as the 2016 End-of-Term Crawl, over 200TB of government websites and data were archived. But rescue organizers began to realize that piecemeal efforts to make complete copies of terabytes of government agency science data could not realistically be sustained over the long term—it would be like bailing out the Titanic with a thimble.
So although Data Rescue Denton ended up being one of the final organized events of its kind, the collective effort has spurred a wider community to work in concert toward making more government data discoverable, understandable, and usable, Jacobs wrote in a blog post.
via Feature: These Advocates Want to Make Sure Our Data Doesn’t Disappear.
Lunchtime listen: “Storing Data Together” by Matt Zumwalt at Code4Lib 2017
Drop everything and watch this presentation from the 2017 Code4Lib conference that took place in Los Angeles, March 6-9, 2017. Heck, watch the entire proceedings, because there is a bunch of interesting and thoughtful work going on in the world of libraries and technology! But in particular, check out Matt Zumwalt’s presentation, originally submitted as “How the distributed web could bring a new Golden Age for Libraries”; he later retitled it “Storing data together: the movement to decentralize data and how libraries can lead it” in light of the DataRefuge movement.
Zumwalt (aka @flyingzumwalt on Twitter) works at Protocol Labs, one of the primary developers of the InterPlanetary File System (IPFS); grok their tagline: “HTTP is obsolete. It’s time for the distributed, permanent web!” He has spent much of his spare time over the last nine months working with groups like EDGI, DataRefuge, and the Internet Archive to help preserve government datasets.
Here’s what Matt said in a nutshell: the Web is precarious, but by using a peer-to-peer, distributed network architecture we can “store data together,” collaboratively preserving and serving out government data. This resonates with me as an FDLP librarian. What if a network of FDLP libraries actually took this on? This isn’t some far-fetched, sci-fi idea; the technologies and infrastructures are already there. Over the last nine months, researchers, faculty, and citizens around the country have already gotten on board with this idea. Libraries just have to get together and agree that it’s a good thing to collect/download, store, describe, and serve out government information. Together we can do this!
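To make the “store data together” idea concrete, here is a minimal sketch of how two libraries might replicate a rescued dataset over IPFS. It assumes a local IPFS daemon is installed and running; the dataset path and the two-library workflow are hypothetical illustrations, not part of Zumwalt’s talk or any DataRefuge tooling.

```python
"""Sketch: collaborative replication of a rescued dataset over IPFS.
Assumes the `ipfs` CLI is installed and a daemon is running locally."""
import subprocess


def add_dataset(path: str) -> str:
    """Add a directory of rescued data to the local IPFS node and return
    its content identifier (CID). `ipfs add` pins the content locally by
    default, so this node now holds one replica."""
    result = subprocess.run(
        ["ipfs", "add", "--recursive", "--quieter", path],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()


def mirror_dataset(cid: str) -> None:
    """Run at a partner library: pin the CID so this node fetches and
    keeps its own copy, adding one more replica to the network."""
    subprocess.run(["ipfs", "pin", "add", cid], check=True)


if __name__ == "__main__":
    # Library A publishes a rescued dataset and shares the CID out of band.
    cid = add_dataset("./epa-air-quality-2016")  # hypothetical dataset path
    print("Share this CID with partner libraries:", cid)
    # Library B (and C, and D, ...) would then each run:
    # mirror_dataset(cid)
```

Because any node that pins the CID can serve the content, readers can retrieve the data from whichever library happens to be online; no single server is a point of failure.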
Matt’s talk starts at 3:07:41 of the YouTube video below. Please watch it, let his ideas sink in, share it, start talking about it with your colleagues and administrators in your library, and get moving. Government information could be the great test case for the distributed web and a new Golden Age for Libraries!
This presentation will show how the worldwide surge of work on distributed technologies like the InterPlanetary File System (IPFS) opens the door to a flourishing of community-oriented librarianship in the digital age. The centralized internet, and the rise of cloud services, has forced libraries to act as information silos that compete with other silos to be the place where content and metadata get stored. We will look at how decentralized technologies allow libraries to break this pattern and resume their missions of providing discovery, access and preservation services on top of content that exists in multiple places.
New App Will Aid Data Rescue
A custom-built app will soon make it easy for anyone to check government URLs to determine whether they have been archived outside of government control and, if not, to submit them for archiving.
- Guerrilla archivists developed an app to save science data from the Trump administration, by Zoë Schlanger, Quartz (February 09, 2017).
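The article doesn’t describe the app’s internals, but one way to check whether a URL has already been archived outside of government control, and to nominate it if not, is the Internet Archive’s public Wayback Machine endpoints. The sketch below uses those endpoints as a stand-in; the actual app described in the article may work quite differently.

```python
"""Rough sketch of a check-then-archive workflow using the Internet
Archive's public Wayback Machine endpoints (availability API and Save
Page Now). Illustration only; not the app from the article."""
import requests


def is_archived(url: str) -> bool:
    """Ask the Wayback Machine availability API whether any snapshot exists."""
    resp = requests.get(
        "https://archive.org/wayback/available", params={"url": url}, timeout=30
    )
    resp.raise_for_status()
    snapshots = resp.json().get("archived_snapshots", {})
    return bool(snapshots.get("closest", {}).get("available"))


def request_archive(url: str) -> None:
    """Submit the URL to the Wayback Machine's Save Page Now endpoint."""
    requests.get(f"https://web.archive.org/save/{url}", timeout=120)


if __name__ == "__main__":
    url = "https://www.epa.gov/climate-indicators"  # example government URL
    if is_archived(url):
        print("Already archived:", url)
    else:
        print("Not yet archived; submitting:", url)
        request_archive(url)
```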
The article in Quartz also reports on “data rescue events,” all-day archiving marathons, that have been held in Toronto, Philadelphia, Chicago, Indianapolis, Los Angeles, Boston, New York, and Michigan. Scientists, programmers, professors and digital librarians are meeting to “save federal data sets they thought could be altered or disappear all together under the administration of US president Donald Trump.” Jerome Whittington, a professor at NYU who organized one data rescue event, said about data on air and water pollution, contaminated soil, toxic spills, and the violation of rules against dumping harmful waste, “If you don’t have the data, you’ll be told your problem doesn’t exist. It is in a way a struggle over what we consider reality.”