Rethinking Library Linking: Making OpenURL Better with Data, Data, and More Data

October 4, 2010

OpenURL link resolvers have become a vital part of many libraries’ offerings, especially those of academic libraries. As resolvers have become more important, they have undergone the same iterative usability testing and interface improvements that are common for library websites and catalogs.

Only recently has effort been devoted to improving the functionality of resolvers by examining in detail the accuracy of the data that drives them. Also of critical importance is how the standard is implemented within the source databases from which OpenURLs originate. The solutions to OpenURL failures vary widely from library to library and depend on local citation database use and the scope of each library’s collection. Improving the resolver at a library that licenses many custom electronic journal packages directly from publishers might require a different approach than improving it at a library that relies more heavily on aggregated databases for full text.
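To make the failure modes concrete, it helps to see what a well-formed source OpenURL looks like. The sketch below is a generic illustration, not drawn from any particular vendor; the resolver address and citation values are placeholders. It assembles an OpenURL 1.0 query string in KEV (key/encoded-value) format for a journal article:

```python
from urllib.parse import urlencode

# Hypothetical resolver base URL; each library advertises its own.
RESOLVER_BASE = "https://resolver.example.edu/openurl"

# A journal-article citation expressed as OpenURL 1.0 key/value pairs
# (KEV format). The rft.* keys describe the referent, i.e., the cited item.
# All citation values below are placeholders.
citation = {
    "url_ver": "Z39.88-2004",                       # OpenURL 1.0 version
    "url_ctx_fmt": "info:ofi/fmt:kev:mtx:ctx",      # context object format
    "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",  # referent is a journal item
    "rft.genre": "article",
    "rft.atitle": "An Example Article Title",
    "rft.jtitle": "Journal of Examples",
    "rft.issn": "1234-5678",
    "rft.volume": "12",
    "rft.issue": "3",
    "rft.spage": "34",
    "rft.date": "2009",
    "rfr_id": "info:sid/sourcedb.example:citations",  # identifies the source database
}

print(RESOLVER_BASE + "?" + urlencode(citation))
```

A source database that omits or garbles any of these elements hands the resolver an unsolvable puzzle, no matter how good the knowledge base behind it is.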

In “The Myths and Realities of SFX in Academic Libraries,” published in The Journal of Academic Librarianship, the authors summarized user expectations of Ex Libris’s SFX resolver, with an eye toward librarians’ opinions of the service as well as its impact on the user experience. The authors, librarians at two California State University campuses, analyzed data gathered in an online survey and an in-person focus group, then compared these findings with those garnered from SFX use statistics and test searches. They found that the most important issue for users was the availability of full-text articles, while librarians were more concerned with the accuracy of results.

This concern eroded the librarians’ confidence in SFX; they often felt the need to double-check its results by searching a citation database or the library catalog. The article concluded that user expectations were “slightly higher than” what the statistics showed their actual experiences to be. Causes of linking failures included inaccurate holdings data, the absence of selected articles from a target database, and incorrectly generated OpenURLs from a source database. These categories are useful in understanding the inner workings of SFX, but the authors did not analyze their data more deeply to identify the exact causes of the errors in each category or where responsibility for those causes lies.

Industry initiatives

In 2008, NISO and the United Kingdom Serials Group (UKSG) launched a joint working group charged with creating a set of best practices to address specific problems identified in the UKSG report “Link Resolvers and the Serials Supply Chain.” The group, dubbed KBART (Knowledge Bases and Related Tools), published its “Phase I Recommended Practice” document in January 2010, aimed at assisting content providers in improving the serials holdings data that they supply to link resolver vendors. This document contains an excellent summary of the OpenURL process and format specifications that knowledge-base supply-chain stakeholders can employ for the consistent exchange of metadata. Stakeholders include publishers, aggregators, subscription agents, link resolver vendors, consortia, and libraries. Phase II of KBART’s work will expand the data exchange format to encompass e-books and conference proceedings, actively seek publisher endorsement and adoption of the best practices, and create a registry and clearinghouse for KBART-formatted data files. See chapter 5 for links to all these resources.
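To give a sense of what machine-checking a KBART title list involves, here is a minimal sketch that validates a tab-delimited file against a set of Phase I column headings. The file name and the choice of which fields to treat as required are assumptions made for the example, not requirements of the recommended practice:

```python
import csv

# Column headings from the KBART Phase I recommended practice
# for tab-delimited serials title lists.
KBART_FIELDS = [
    "publication_title", "print_identifier", "online_identifier",
    "date_first_issue_online", "num_first_vol_online", "num_first_issue_online",
    "date_last_issue_online", "num_last_vol_online", "num_last_issue_online",
    "title_url", "first_author", "title_id", "embargo_info",
    "coverage_depth", "coverage_notes", "publisher_name",
]

def check_kbart_file(path):
    """Report rows in a KBART-style title list that lack the fields a
    resolver knowledge base minimally needs. Treating title, URL, and
    one identifier as required is an assumption for this example."""
    problems = []
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f, delimiter="\t")
        missing_cols = set(KBART_FIELDS) - set(reader.fieldnames or [])
        if missing_cols:
            problems.append(f"missing columns: {sorted(missing_cols)}")
        for n, row in enumerate(reader, start=2):  # line 1 is the header
            if not (row.get("publication_title") or "").strip():
                problems.append(f"line {n}: no publication_title")
            if not (row.get("title_url") or "").strip():
                problems.append(f"line {n}: no title_url")
            if not ((row.get("print_identifier") or "").strip()
                    or (row.get("online_identifier") or "").strip()):
                problems.append(f"line {n}: no print or online identifier")
    return problems

# Hypothetical file name for the example.
for problem in check_kbart_file("provider_titles.txt"):
    print(problem)
```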

In the final report of a 2009 Mellon planning grant, Adam Chandler of Cornell University investigated the feasibility of a fully automated OpenURL evaluation tool. He recommends that librarians, publishers, NISO, and OCLC develop this tool jointly. Such a tool would fill “a critical gap in the OpenURL protocol: objective, empirical, and transparent feedback [on OpenURL quality] for supply chain participants.” To this end, Chandler proposes that libraries work with vendors to analyze OpenURLs created in source databases, identifying the elements required for successful linking and the frequency with which those elements appear. This analysis of OpenURLs sent from a source database to a link resolver could increase the rate of successful linking. A NISO working group created in 2009, Improving OpenURLs Through Analytics (IOTA), is building on this work, devising and testing a program to analyze libraries’ source URLs so that vendors can improve the metadata they send to resolvers.
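In miniature, an analysis of that kind might look like the sketch below, which counts how often each metadata element appears across a set of logged source OpenURLs. The sample URLs and the element list are invented for illustration; a real analysis would run over full resolver or vendor log files:

```python
from collections import Counter
from urllib.parse import urlparse, parse_qs

# Invented sample of logged source OpenURLs; real input would be
# thousands of entries pulled from log files.
logged_openurls = [
    "https://resolver.example.edu/openurl?url_ver=Z39.88-2004&rft.genre=article"
    "&rft.issn=1234-5678&rft.volume=12&rft.spage=34&rft.date=2009",
    "https://resolver.example.edu/openurl?url_ver=Z39.88-2004&rft.genre=article"
    "&rft.atitle=Some+Title&rft.jtitle=Some+Journal&rft.date=2008",
]

# Elements that commonly matter for journal-article linking.
ELEMENTS = ["rft.atitle", "rft.jtitle", "rft.issn",
            "rft.volume", "rft.issue", "rft.spage", "rft.date"]

counts = Counter()
for url in logged_openurls:
    query = parse_qs(urlparse(url).query)
    for element in ELEMENTS:
        if query.get(element):  # element present with a non-empty value
            counts[element] += 1

total = len(logged_openurls)
for element in ELEMENTS:
    print(f"{element:12s} present in {counts[element]}/{total} OpenURLs")
```

Aggregated over a large log, output like this shows a vendor exactly which elements its OpenURLs routinely omit.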

The two initiatives described above primarily address the early steps in the OpenURL process: building the knowledge base and processing the source URL. A piece not yet addressed is the standardization and quality of how target URLs are parsed by target databases. This is inarguably the least standardized component in the link resolution chain, and it deserves at least as much attention as the preceding elements. If more publisher platforms were configured to support incoming links that conform to the OpenURL standard, we could expect to see a significant improvement in target link success rates. Combining an indicator of a publisher’s ability to accept standard target URL syntax with the KBART publisher registry would be a significant first step.
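To see why this matters, consider the receiving end. A target platform that accepted standard OpenURL syntax could parse any conforming inbound link with generic code like the sketch below; the find_article function is a hypothetical stand-in for whatever internal lookup a publisher platform actually performs:

```python
from urllib.parse import parse_qs

def parse_inbound_openurl(query_string):
    """Extract the citation fields a target platform would need in order
    to locate an article from a standard OpenURL 1.0 KEV query string."""
    q = {key: values[0] for key, values in parse_qs(query_string).items()}
    return {
        "issn": q.get("rft.issn"),
        "volume": q.get("rft.volume"),
        "issue": q.get("rft.issue"),
        "start_page": q.get("rft.spage"),
        "title": q.get("rft.atitle"),
    }

def find_article(citation):
    # Hypothetical stand-in for a platform's internal article lookup.
    print(f"looking up ISSN {citation['issn']}, "
          f"vol. {citation['volume']}, p. {citation['start_page']}")

find_article(parse_inbound_openurl(
    "url_ver=Z39.88-2004&rft.issn=1234-5678&rft.volume=12&rft.spage=34"))
```

Because every platform currently expects its own proprietary inbound syntax, resolver vendors must instead maintain a separate target parser per platform, which is precisely the fragility this section argues against.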
