The World Wide Web Consortium (W3C) is the international community organization that develops standards to ensure the long-term growth of the Web. Its Incubator Activity, which fosters rapid development of new Web-related concepts, has formed a new group on Library Linked Data.
The mission of this group will be "to help increase global interoperability of library data on the Web."
This will bring together people involved in Semantic Web activities in the library community and beyond.
The Library of Congress and OCLC are among the members of this new group.
- W3C Library Linked Data Incubator Group.
The group will explore how the existing building blocks of librarianship, such as metadata models and schemas, standards and protocols for interoperability, and library systems and networked environments, can encourage libraries to bring their content to the Web and, more generally, to re-orient their approaches to data interoperability toward the Web, reaching out to other communities as well.
It will also envision these communities as a potential major provider of authoritative datasets (persons, topics...) for the Linked Data Web.
The charter of the new group says:
The changing landscape of librarianship increases their need for visibility and interoperability on the Web. In particular, the ubiquitous growth of digital libraries has led them to broaden their practices, standards and activities. Shared catalogues are going online on the open Web, digitized items collections are being brought together in worldwide initiatives, digital resources like online serials, preprints or web archives are driving the need for re-inventing librarianship.
...A re-orientation in the library perspective on information interoperability is needed, building on existing Web architecture and standards, in order to bring this content to the Web. A lot of structured data is already available within library systems and could be released as Linked Data, using Semantic Web technologies. Cultural heritage institutions could be a major provider of authoritative datasets (persons, topics...) for the Linked Data Web.
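To make that last point concrete, here is a minimal sketch, in Python with rdflib, of what publishing a single library subject heading as Linked Data might look like. Everything here is hypothetical: the URIs are invented placeholders rather than real authority identifiers, and SKOS is just one plausible vocabulary choice for this kind of data.

```python
# Hypothetical sketch: a library subject heading as SKOS Linked Data.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

SKOS = Namespace("http://www.w3.org/2004/02/skos/core#")

g = Graph()
g.bind("skos", SKOS)

# An invented URI standing in for a real subject authority record.
concept = URIRef("http://example.org/authorities/subjects/sh0001")

g.add((concept, RDF.type, SKOS.Concept))
g.add((concept, SKOS.prefLabel, Literal("Government information", lang="en")))
g.add((concept, SKOS.altLabel, Literal("Official publications", lang="en")))
# Cross-links to identifiers in other datasets are what make the data "linked".
g.add((concept, SKOS.related, URIRef("http://example.org/authorities/subjects/sh0002")))

print(g.serialize(format="turtle"))
```

Once headings carry stable URIs like these, any other dataset on the Web can point at them, which is exactly the "authoritative datasets" role the charter envisions for libraries.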
Wow, that was fast! The Gov 2.0 Expo in Washington, DC just wrapped up two days of speakers, panels, and discussions, and video from the Expo is already online (YouTube and blip.tv channels). Here are just two of the many interesting talks that I've only begun to absorb. Enjoy!
Tim Berners-Lee, "Open, Linked Data for a Global Community"
Carl Malamud, "Law.Gov: America's Operating System, Open Source"
How researchers enhanced Data.gov using semantic technology, by Sean Gallagher, GCN, May 18, 2010.
A three-person team at Rensselaer Polytechnic Institute, however, has demonstrated how one approach can make greater use of the massive sets of data available on Data.gov, using the power of the semantic web. The conversion project has shown how quickly and inexpensively visualization and mash-up applications can be built from government data when it’s put into a web-friendly form.
If you haven't looked at Data.gov lately, you should. It launched one year ago, recently got a makeover, and has added lots of new data.
OMB Watch has a quick overview and comment about the current state of data.gov (Data.gov Celebrates First Birthday with a Makeover, by Roger Strother, OMB Watch. 05/24/10).
Check out these highlights:
- Apps, where developers are creating a wide variety of applications, mashups, and visualizations: from crime statistics by neighborhood, to the best towns for finding a job, to the environmental health of your community...
- Semantic Web, where they highlight a set of Data.gov resources reformatted into Resource Description Framework (RDF) format. These allow new kinds of rich interaction with the data (see the rough query sketch after this list). See, for example, the White House Visitor Search. Also see the Tetherless World Weblog from Rensselaer Polytechnic Institute, where some of this work is being done.
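As a rough illustration of that kind of interaction, here is what querying one of these converted datasets might look like with rdflib's SPARQL support. The file name and predicate URI below are invented placeholders, not the actual Data.gov vocabulary.

```python
# Rough sketch: a SPARQL query over a locally downloaded RDF conversion.
from rdflib import Graph

g = Graph()
g.parse("dataset-0001.rdf")  # hypothetical file name for a downloaded dataset

results = g.query("""
    SELECT ?record ?value
    WHERE {
        ?record <http://example.org/vocab#value> ?value .
    }
    LIMIT 10
""")

for record, value in results:
    print(record, value)
```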
And don't forget, at Data.gov, "data" can mean just about anything, even the Foreign Relations of the U.S.
The State Department has an Atom feed that lists new releases of the important series Foreign Relations of the United States: history.state.gov/open/frus-latest.xml.
The Office of the Historian is responsible, under law, for the preparation and publication of the official historical documentary record of U.S. foreign policy in the Foreign Relations of the United States series. This dataset is a feed for the latest ten volumes in the Foreign Relations of the United States series. Each record in the dataset contains a volume's title, year of publication, summary, and link to the online volumes. The feed will be updated when current volumes are edited or new volumes are published.
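For those who want to watch the feed programmatically, here is a minimal sketch using Python's feedparser library; the exact fields available depend on what the feed actually publishes.

```python
# Minimal sketch: list the latest FRUS volumes from the State Department feed.
import feedparser

feed = feedparser.parse("http://history.state.gov/open/frus-latest.xml")

for entry in feed.entries:
    # Atom entries typically carry a title, a link, and a summary.
    print(entry.get("title", "(no title)"))
    print("  " + entry.get("link", ""))
```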
I had never noticed that statement (above) that current volumes might be edited after release. Does that mean that material might be deleted or changed? Or does "editing" only mean adding new content?
A good story with a slide show.
- History Detectives, by Emily Long, NextGov (05/24/2010)
- slideshow of some of the photos posted to Flickr, with sample comments submitted by the public about the images.
"In January 2008, the Library launched a pilot project with the photo-sharing website Flickr to display publicly held photography collections. The site, called the Commons, offers the public easier access to collections housed in organizations such as the Library of Congress and the Smithsonian Institution, and, hopefully, gathers more details about specific images."
Digital Dreams and Dashboards: Notable Government Documents 2009, by David N. Griffiths, Library Journal (5/15/2010).
Though budget cuts have squeezed government information services at the local level, major digital initiatives and steps toward open governance at the federal level have compensated for some of these losses.
- Digital library of the week: The National Park Service E-Library, American Library Association "ILoveLibraries.org" (May 20, 2010)
The collection houses information on all aspects of the NPS mission since its August 25, 1916, founding. Subject matter includes archeological and anthropological research, history and natural history, urban ecology, wildlife, and geology.
On a lighter note today: here are the winning recipes from the healthy recipe challenge, which was part of the FedsGetFit (FGF) wellness initiative. Really. Enjoy!
FEMA Launches New Mobile Web Site For Smartphones, news release, Federal Emergency Management Agency, April 28, 2010.
FEMA Administrator Craig Fugate announced the launch of FEMA's new mobile Web site, m.fema.gov. The mobile Web site makes it easier to access critical information regarding emergency preparedness and what to do before and after a disaster right on a smartphone.
Following the Money: How the 50 States Rate in Providing Online Access to Government Spending Data, U.S. PIRG, the federation of state Public Interest Research Groups, April 13, 2010.
This report evaluates states' progress toward "Transparency 2.0" - a new standard of comprehensive, one-stop, one-click budget accountability and accessibility. At least 7 states have become leaders in the drive toward Transparency 2.0, launching easy-to-use, searchable Web sites with a wide range of spending transparency information. Twenty-five additional states have made initial steps toward online spending transparency by launching Web sites with checkbook-level detail on state spending that nonetheless have much room for improvement.
Wish I could be in Washington, DC next Tuesday for CopyNight, when Carl Malamud will speak at the ALA Washington Office. If any of our readers go, please leave comments here with your thoughts, ideas, brainstorms, concerns, etc. Thanks!
On Tuesday, May 25, the ALA Washington Office will host DC’s “CopyNight” group for an evening with special guest Carl Malamud about the future of public information.
Malamud is the founder of Public.Resource.Org, a foundation dedicated to making public information accessible. His latest project is an effort to bring all of the United States' primary legal sources, such as legal codes and case law, online for free public access. Currently, many legal sources are available only through commercial databases that are extraordinarily expensive to use, making these materials inaccessible to most of the public. Malamud has been holding a series of public workshops and symposia, with help from a variety of thought leaders in law and technology, presenting the issues and challenges facing the project.
He’ll also talk about the International Amateur Scanning League, a group of DC-area volunteers digitizing government-produced DVDs that are currently available only from the National Archives in College Park; the resulting videos are being made available through YouTube, the Internet Archive, and Public Resource’s own Public Domain Stock Footage Library.
On January 21, 2009, as one of his first acts as President, Barack Obama released his Memorandum on Transparency and Open Government. The memorandum instructed that government should be transparent, participatory, and collaborative. On December 8, 2009, Peter Orszag, the head of the Office of Management and Budget (OMB), issued the Open Government Directive (PDF) establishing deadlines for applying those three principles of open government -- readers will remember that federal CIOs were only lukewarm about the administration's transparency goals. The directive requires executive departments and agencies to take the following steps toward the goal of creating a more open government:
- Publish Government Information Online
- Improve the Quality of Government Information
- Create and Institutionalize a Culture of Open Government
- Create an Enabling Policy Framework for Open Government
A group of non-profit government transparency organizations -- including OpenTheGovernment.org, the Sunlight Foundation, the American Library Association, the American Association of Law Libraries, the Center for Democracy and Technology, and several other groups -- got together to measure how well federal agencies were meeting the Open Government Directive. They evaluated federal agencies against a set of criteria (here's their methodology for how the scores were derived) and found that NASA, the Department of Housing and Urban Development, and the Environmental Protection Agency scored highest, while the Department of the Treasury, the Department of Defense, the Office of Management and Budget (OMB), the Department of Energy, and the Department of Justice ranked last in meeting the goals of the open government initiative. Those interested should check back at the site, as the organizations will continue to evaluate agencies' improvements over the next year. By the way, here's more on the Mendoza Line, baseball's benchmark for the threshold of incompetence.
We commend the President for his commitment to openness and for providing detailed elements in the OGD that can be used to hold federal agencies accountable. Many of the federal agencies have approached implementation of the OGD requirements with energy and enthusiasm and some have taken innovative steps in their plans. If implemented with spirit, vigor, and innovation, the Open Government Plans can serve as a vehicle for fundamentally changing the way the federal government interacts with the public. This, in turn, may prove to be a catalyst for shifting public trust in government.
At the same time, many of the agency plans as unveiled on April 7 have a long way to go to create this transformational potential. As this audit demonstrates, there is wide variation in the agency plans. Some are exceptional; others are quite weak. Most are somewhere in between. Many of the plans that currently do not meet the minimal requirements identified in the OGD can do so with only modest improvements, such as providing more specificity on deadlines or identifying where certain items mentioned in the plans can be found. An overview of what we found is below.
I've been going through documents from last month's Spring 2010 Depository Library Council meeting and was giddy to find the following section in the Spring 2010 Library Services & Content Management Update (statistical findings not bolded in original).
DOCUMENT DISCOVERY (LOSTDOCS)
Locating all content that falls within scope of the FDLP that has not yet been incorporated into the FDLP is an important initiative. For about a year now, GPO staff have been examining how these documents are brought into the Program in order to track, measure, and improve our business processes.
In quantity, the monthly lost/fugitive submissions continue to rise. Last year, GPO was receiving an average of about 80 lost/fugitive document submissions per month. This year, so far the average is about 125 per month, an increase of more than 50 percent.
The number of submissions undercounts the titles, because some single submissions for documents can represent multiple publications—it is not unusual to receive an entire web page listing or a bibliography in one lost/fugitive request. GPO staff work to unitize the submissions, research them, and consider each title for possible addition to the CGP.
GPO staff are analyzing the current lost/fugitive document workflow to better understand where a title may get stalled. To establish a baseline for how long it takes a typical lost/fugitive document request to get through technical processing from beginning to end (with current methods), staff took a sample of records that were cataloged in the last three months. The entire technical process includes scope determination, research, a brief preliminary record, classification, cataloging for the CGP, and creation of an OCLC record.
Results from the study included:
• It can take as little as two days for the entire process, but there is a wide variation, depending upon the title, the agency, and other factors such as additional required research with the agency, requiring a new class, requiring management review, and identifying a title based on partial submissions, to name a few.
• 20% were completed within 20 working days.
• 40% were cataloged within 40 working days.
• Within 60 working days, about half went through the process, most within 40 working days.
• The other half took much longer, up to 100 to 120 working days (see 1st bullet).
GPO anticipates that this processing time can be reduced with new procedures. Staff will continue to monitor and track the lost/fugitive documents through the workflow to verify whether the new procedures are helping to move titles through the technical processing steps more quickly.
For some time, GPO staff have been looking at making a number of technical processing improvements including utilizing tracking and management reporting tools. We are
• Mapping the workflow;
• Creating new forms to more precisely identify these titles and to elicit more information that will reduce research time; and
• Identifying key points in the process when FDLP librarians may want status reports.
The goal is to generate management reports for GPO to be able to identify where in the workflow lost/fugitive requests are at any time, and how many requests may be waiting for some specific action in the technical processing workflow.
As a future step, GPO is looking at ways to utilize the askGPO system to track and report on all lost/fugitive submissions and serve as one point of submission.
As we undertake improving the LostDocs processes, we also want to improve communications with FDLP librarians and with federal agencies that help us locate content not yet incorporated into the FDLP system.
The input form for submissions from librarians will be revised. Clear definitions for what is considered lost/fugitive documents will be provided. The process for handling submissions will also be clarified. Additionally, GPO will identify key points in the workflow when librarians would like to receive feedback in the form of emailed status reports. GPO will also develop improved methods of outreach and documentation of agency information for staff to use.
As part of this revitalization, GPO will be changing the name of the LostDocs Program to “Document Discovery Program.”
To me, this is terrific news for a number of reasons:
- GPO has gone public about what has happened to a number of "document discovery" submissions.
- They've admitted that the current workflow is a problem and that about half of reported documents are taking many months to catalog.
- They've outlined steps that, if followed, will probably result in better document discovery.
- They seem to have committed to better public reporting of what happens to documents submitted to them.
We at FGI will be watching with eager anticipation to see how these steps are carried out and will encourage our readers and submission heroes to follow the new guidance when GPO issues it. We also await the new Document Discovery reports with anticipation and may have a few suggestions about them in the coming weeks. In the meantime, we just want to say THANKS, GPO, for taking a hard look at this problem, admitting it to the community, and starting the process of making things better.
We'd also like to encourage people who submit links to whole publications pages to GPO to instead submit one askGPO report per document. At the very least, identify your top three to five titles for cataloging and then note where GPO can find the other publications. There are more of us documents librarians and document enthusiasts than there are GPO acquisitions staff, so we should do some of the title-level separation work ourselves. Perhaps dividing huge publication pages into manageable chunks could be a multi-library or library school project.
Do you have any reactions to this news? What kind of statistics do you want to see? What points in GPO's process should trigger an e-mail notice? Should submitted titles be posted publicly as soon as they're received? Leave a comment or drop a line to lostdocs AT freegovinfo DOT info.
Some good news for those who value public policy based on well-informed science: "the possibility of reconstituting OTA itself is gaining new momentum."
- A New Push for the Office of Technology Assessment, by Steven Aftergood, Secrecy News (May 12, 2010).
Steven notes that there is a comprehensive archive of OTA publications from 1972-1995 available on the Federation of American Scientists web site.
There is also, of course, the "OTA Legacy" collection at the University of North Texas Libraries' "CyberCemetery."
Milestones: Yale University Library Celebrates 150 Years as a Government Documents Depository, Resource Shelf, May 10, 2010.
Gary has been very busy assembling an excellent set of links on the new Supreme Court nominee over at Resource Shelf:
In the course of my work of maintaining the Lost Docs Blog, I came across the following publication:
Substance Abuse and Mental Health Services Administration. After an Attempt [i.e., a suicide attempt]: A Guide for Taking Care of Yourself After Your Treatment in the Emergency Department. (SMA 08-4355; CMHS-SVP06-0157), Rockville, MD:
Center for Mental Health Services, Substance Abuse and Mental Health Services Administration, U.S. Department of Health and Human Services, 2006. Reprinted 2009.
I noticed it had this public domain notice that I've seen on some government publications:
Public Domain Notice
All material appearing in this publication is in the public domain and may be reproduced or copied without permission from SAMHSA. Citation of the source is appreciated. This publication may not be reproduced or distributed for a fee without the specific, written authorization of the Office of Communications, SAMHSA, HHS.
Not enough people realize that most U.S. government materials are in the public domain, so my initial reaction to this notice was very positive. I instinctively like the idea of labeling govdocs as public domain so that people and organizations (Google, I'm talking to you!) feel free to reuse and remix without fear of consequences, rather than locking up content that was never meant to be locked up.
On the other hand, if only a handful of agencies use such notices for public education, it is conceivable that an environment would develop in which only govdocs bearing public domain notices were treated as public domain. I'm not sure that's a real danger, but I worry. The danger would be smaller if a public domain notice were required governmentwide.
What do you think? Are public domain notices on govdocs a good idea? Are they a good idea whether done governmentwide or by a few agencies? Would we be better off if there was a governmentwide policy to label the minority of copyrighted material in govdocs?
Note: Thanks to Vicki Tate for reporting this document to GPO and sending a copy of her receipt to the Lost Docs blog.
Since 2007, on behalf of EMC Corporation, IDC has been sizing what it calls the Digital Universe, or the amount of digital information created and replicated in a year. The newest report is now available:
- The Digital Universe Decade – Are You Ready?, by John Gantz and David Reinsel, IDC, sponsored by EMC Corporation (May 2010) [PDF, 16 pp., excerpted from the IDC multimedia presentation "The Digital Universe Decade – Are You Ready?" (May 2010)]
- The Digital Universe Decade – Are You Ready? (The multimedia content)
These reports estimate the size of everything digital. IDC looks at the installed base of devices or applications that could capture or create digital information and estimates (based on its research and "other sources") how much information was created in a year. It also estimates the number of times a unit of information is replicated. The count includes devices such as mobile phones, bar code readers, and video game consoles, as well as cameras, scanners, email, office applications, databases, GPS, medical imaging, and lots more. Much of this is estimation, and I found it hard to tell how much was gathered evidence and how much was speculation (see the methodology in the first IDC Digital Universe paper, published in 2007). To me, this means the figures they come up with may not be very accurate, and predictions of the future based on these estimates are, I think, very speculative.
Nevertheless, I've been following these ever since I noticed that Fran Berman quoted an earlier report and referred to 2007 as the "cross-over" year: the year in which more digital data was created than there was data storage to host it. (Berman, Francine, Got data?: a guide to data preservation in the information age, Commun. ACM, 51 (2008), 50-56.)
Even if you don't believe that the IDC numbers are 100% accurate, the general ideas that they promote are probably not that far off the mark. Some of those ideas:
- Last year, despite the global recession, the Digital Universe set a record, growing by 62% to nearly 800,000 petabytes (about 0.8 zettabytes).
- The average file size is getting smaller: the number of things to be managed is growing twice as fast as the total number of gigabytes (see the back-of-the-envelope check after this list).
- The growth of the Digital Universe is like a perpetual tsunami. How will we find the information we need when we need it?
- How will we know what information we need to keep, and how will we keep it?
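As a back-of-the-envelope check on that second bullet (my arithmetic, assuming "twice as fast" means twice the annual growth rate), the shrinking average file size follows directly from the two growth figures:

```latex
% If total bytes grow 62% in a year while the number of files grows
% twice as fast (124%), average file size changes by the ratio of the
% two growth factors:
\[
  \frac{\text{average size this year}}{\text{average size last year}}
  = \frac{1 + 0.62}{1 + 1.24}
  = \frac{1.62}{2.24}
  \approx 0.72
\]
```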
That last list item is my favorite. Regardless of exactly how much digital information is created each year, regardless of how much storage space we have, regardless of the fact that a lot of the "digital universe" that IDC describes is throw-away information that no one would think is worth keeping, we are still faced with Lots of Stuff and we need to figure out What to Preserve. That, I believe, is the next big challenge for digital preservation.
One way to face that challenge is to rely on producers to decide what to save. If a government agency produces something digital, allow that agency (or GPO, or LoC, or NARA, or OMB or OPM, or your favorite TLA) to decide for you if that information is worth saving.
Another way to face the challenge is to rely on a few big organizations. That is: pool our resources and outsource preservation to a few big organizations that will do this for us. Some of the same players pop up here: LoC and NARA, for example, but there are also organizations like Portico, and the Internet Archive, and ICPSR.
Both of the above solutions hope that someone else will take into account the needs of all possible users and make the right decisions. That model can work for some classes of information with appropriate governance and decision-making structures in place.
But, I believe, the lesson from the IDC report is that the "digital universe" is so large that we should not assume that any single solution will be enough. There is just too much information and there are too many decisions to make about what is worth saving. While information producers and a few big preservation organizations can do a lot, they cannot do everything. And, their size alone will constrain their decisions. It will be harder for big organizations to respond to the needs of smaller communities of interest.
What is the alternative? I think that we need (what shall we call them...?) Libraries. Public Libraries, Special Libraries, College and University Libraries, and School Libraries. These can work together or independently. They can address the needs of their particular communities of interest. This will accomplish three things:
- It will aid preservation by making the preservation community bigger. This will increase redundancy and reduce the chance that a single system failure, financial failure, or governance failure means the loss of all information.
- It will help deal with the scale of the preservation problem (as identified by the IDC report). With more players and more stakeholders, there will be more voices and more variety in the decision-making process when we collectively decide what to save. This means, for example, that a group of School Libraries working together on digital preservation could ensure that an item essential to K-12 users is saved even if no university saves it. And vice versa.
- It will help users find and use the information they need. Today, it seems that everyone understands what librarians have always known: there is a lot of information in the world. It used to take a library degree to gain an appreciation of all the world's sources of information; today, everyone who uses the Web has that same appreciation. It seems like every day there is another newspaper article or blog post about how great it is to have access to "everything." But the "everything" people see on the Web is really only a subset of everything; it only appears to be "everything" because this subset is so large. And when your only option is to search "everything," you quickly discover that it is not always the best way to find just what you want. (Even Google has segmented information into categories like movies, blogs, books, and scholarly information.) Having community-of-interest collections will enable libraries to build user interfaces that work best for those communities and that provide access to the information those communities most want.
Libraries won't replace unfocused "everything" collections; the two will complement each other. Together they will enrich us all and help ensure that we preserve what needs to be preserved as the "digital universe" expands faster than we could otherwise cope with.
[Update 5/9/10: Thanks to Debbie Rabina for sending me a copy of her article and allowing us to post here (PDF). On a side note, how long will it be until ALA goes open access with all of their publications? Librarians should be walking the open access walk!]
I'd like to briefly commend this article from the Spring 2010 issue of DttP: Documents to the People:
Rabina, Debbie. "Ted Kennedy's Speech at the 1980 Democratic National Convention: Researching Pre-digital government information in the Digital Age." DttP: Documents to the People (2010) v. 38, no. 1: 18-22.
This article is notable for two reasons. It is a fine example of using current events to leverage interest in government information. The article also serves as a good "how-to" guide on evaluating factual claims past and present. Aside from these two main benefits the article demonstrates the continuing relevance of print resources while showing the usefulness of electronic resources. It rejects a "paper vs. electronic" version of the world in favor of a "both/and" approach.
As far as I can tell, this article is not available electronically, but could be acquired through interlibrary loan at your local library.
DttP: Documents to the People is aimed at government information librarians, but I believe it would be useful to transparency advocates and researchers of all stripes. Check it out if you can. I find it an important benefit of my membership in the Government Documents Roundtable of ALA.
Our friend Gary Price sent a lot of great links you'll want to know about from Resource Shelf.
***Top of the List***
New: Extremely Useful: NARA Releases List of Digitized Records (NARA Partners & Their Records) http://bit.ly/bjOVyI Source: NARA
1) List: Most Popular Baby Names of 2009 and Two Tools to Get "Most Popular Names" back to the Late 19th Century. Source: SSA http://bit.ly/baby2009
2) New: Searchable Database: Venomous Snakes and Antivenoms Search (yes, a specialized dbase for every topic) Source: World Health Organization http://bit.ly/cRFiEm
3) California (3 Items): 2010 State Fault Activity Map, State Geological Map, 150 Geological Facts About California Source: CA Department of Conservation http://bit.ly/cali2010map
4) Interview with Archivist of the U.S., David Ferriero: What Happens to Social Media Records? Source: Smart Planet http://bit.ly/interview121
5) Public Printer with GPO Budget News & Graph: 10 Years of GPO Financial Performance Source: GPO http://bit.ly/gpobudget
6) EPA Launches New Web Tools to Inform the Public About Clean Water Enforcement Interactive Web tool allows the public to check water violations in their communities http://bit.ly/b5IwbE Source: EPA
7) NOAA Incident News In Left Column, access to database of Oil Spills NOAA has been involved with since late 50's (Pre-NOAA) http://bit.ly/d8WzGG Source: NOAA
8) Research Paper: From Obscurity to Prominence in Minutes: Political Speech and Real-Time Search http://bit.ly/bTMBwm Source WebSci 2010 Conf.
9) FBI Now Accepts Freedom of Info Requests on the Web http://bit.ly/av7XTB Source: FCW
10) U.S. Embassy in Haiti Now on Twitter http://bit.ly/cXs5Vh
also on Facebook http://bit.ly/ayXXij
and Haiti: Legal Bibliography from Law Library of Congress http://bit.ly/haitilawbib
The National Digital Information Infrastructure & Preservation Program now has its own Facebook page: facebook.com/digitalpreservation.
courtlistener.com is a new service that provides daily coverage of all precedential opinions issued by the 13 federal circuit courts and the Supreme Court of the United States, as well as the non-precedential opinions of all the circuit courts except the D.C. Circuit.
The site was created by Michael Lissner as part of a master's thesis at the University of California, Berkeley School of Information.
The site also supports advanced Boolean queries.
Anil Dash leads a nonprofit, nonpartisan project called Expert Labs, an incubator that funds technologies to bring citizens and the government closer together.
Watch his stirring twelve-minute presentation from Fast Company's sold-out Innovation Uncensored conference on April 21. He talks about how networks allow governments to listen as well as talk, and gives an example: using Twitter and Facebook, the White House got 2,000 responses in three days to a question about the big challenges of science. Compare that to the usual 200 responses in four months.
- Government in the Digital Age: How Anil Dash's Expert Labs Is Speeding Democracy, Noah Robischon, Fast Company (May 5, 2010)
As I suggested in my tweet a few minutes ago, wouldn't it be great if lots of depository libraries bought cheap book scanners like the Decapod (a Mellon-funded project), digitized government documents, and uploaded them to the Open Library? There are tons of records for government documents just waiting for the attachment of a digital file. And GPO could help by sharing its records from the Catalog of Government Publications (CGP) with the Open Library, where librarians and others could enhance them into more robust metadata (which could be fed back into the CGP!). Lots of libraries with Decapods make light work!
(Full disclosure: I'm on the board of QuestionCopyright, a 501(c)(3) non-profit which has its own book scanning hardware/software project called Book Liberator. BL developers are in close contact with Decapod folks. But I get no economic benefit from either Book Liberator or Decapod.)