LJ Index: Too Much Too Late

March 3, 2009

Back in January 1999, American Libraries published the first installment of Tom Hennen's HAPLR Index, ranking America's public libraries using statistics collected by the Federal-State Cooperative System (FSCS). We made the decision to publish Hennen's study based on our assessment of his research and the conclusion that the rankings would be useful to the libraries that came out on top. We were careful to characterize the rankings as the work of an independent researcher, and the article was replete with caveats about the shortcomings of the FSCS data. "Data measurement cannot capture a friendly smile and warm greeting at the circulation desk," Hennen said at the time, "nor can data alone measure the excitement of a child at story time or a senior surfing the internet for the first time."

At the 1999 Midwinter Meeting, John N. Berry, then Library Journal editor-in-chief, joked with me that he was wildly jealous that American Libraries had published the HAPLR Index, and wondered how we'd pulled it off. He knew, of course, that opposition from ALA's Office for Research and Statistics (and just about everyone else in the building) to publishing anything that carried even a hint of ALA rating libraries would be strong, no matter how many times I insisted that the rankings were Hennen's work and Hennen's work alone (something we now call a "branding" issue).

That said, we expected libraries to be able to use their rankings as a publicity hook for local media. And use them they did. I have never fielded as many media inquiries over an article in American Libraries as I did for the Hennen ratings. And to almost every reporter, I emphasized that these rankings were the result of an independent researcher and not ratings by the American Library Association.
Now, 10 years later, here comes Library Journal with the "LJ Index of Public Library Service," a new ranking system that LJ Editor-in-Chief Francine Fialkoff announces in a headline is "Better Than Hennen" and dubs "America's Star Libraries." Following a build-up that lasted longer than the last U.S. presidential campaign, the new database (sponsored by Baker & Taylor's Bibliostat) claims to be "an index of public library service output only…determined equally by four related per capita output indicators: visits, circulation, program attendance, and public internet computer use." Fialkoff says the index creators, Keith Lance and Ray Lyons, claim that "by combining and weighting so many variables, from input stats like funding to output data like circulation, the [Hennen] rankings obscured the most important measure of all: public service."

I talked with Tom Hennen this morning about the new rankings (which take up eight pages in the February 15 issue of LJ). "Imitation is the sincerest form of flattery," he laughed, but he also said he was "perplexed" by many of the claims that Lance and Lyons make for the superiority of their system. "They make a big point of saying my weighting of the factors is arbitrary," Hennen said, "but not weighting them is just as arbitrary, because they end up saying that visits, circulation, electronic resource use, and program attendance are all of equal weight, which is in itself a value judgment." Hennen also noted that Lance once said that the right way to rank is "to figure out what it takes to make a good library and use those elements and not just take readily available elements and turn them into an index." But readily available elements are precisely what he and Lyons have used. There are many differences between the HAPLR rankings and the LJ Index, Hennen told me, "but the fundamental difference is that HAPLR includes input measures while the LJ Index does not.
The LJ Index looks at only one side of the library service equation, while HAPLR looks at both sides." The new index winds up saying that input measures such as staffing, materials budget, and funding levels are not essential to the measurement of the all-important output: public service.

I'll leave further comparisons and criticisms of the methodology to the statisticians, but I will say that after 10 years of criticism, Hennen's major detractors have come up with a ranking system that adds little to our understanding of what makes a public library successful. Lance and Lyons have both been after me for years to renounce Hennen and to publish a superior system of their own design instead. Library Journal has figured out how to turn their criticism into a competing system; however carefully constructed, the new system seems like too much fuss way too late. At a time of alarming economic uncertainty, how a public library ranks in a new survey may be the least of its concerns. And at a time when statistical studies are clearly better suited to electronic publication than print, dumping money into yet another ranking of public library performance based on three-year-old data (2006 is the latest available) seems far less important than advocating for libraries' future. This new system has landed on my desk with a dull thud.