Public library ratings

There are several national systems for rating the quality of public libraries.

The basic public library statistics (not rankings) are published by the National Center for Education Statistics; the most recent version was published in July 2006, using data from fiscal year 2005. As of October 1, 2007, the Institute of Museum and Library Services assumed responsibility for publishing public library statistics.

A commercial product, Hennen's American Public Library Ratings (HAPLR), is prepared by Thomas J. Hennen Jr. [3], Director of the Waukesha County Federated Library System in Wisconsin. Published annually in the November issue of American Libraries, it rates over 9,000 public libraries in the United States using these federal statistics. Libraries are ranked on 15 input and output measures, with comparisons made within broad population categories.

An alternative system, the LJ Index of Public Library Service, developed by Keith Curry Lance and Ray Lyons, was introduced in the June 15, 2008 issue of Library Journal. Libraries are rated on four equally weighted per-capita statistics, with comparison groups based on total operating expenditures. The four statistics, library visits, circulation, program attendance, and public internet computer uses, were chosen on the basis of correlation analysis. Because of the recognized imprecision of library statistical data, the system awards 5-star, 4-star, and 3-star designations rather than numerical ranks. The LJ Index measures how the level of service a library provides compares with that of other libraries. Its creators stress that it does not measure service quality, operational excellence, library effectiveness, or the degree to which a library meets existing community information needs.
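The article does not give the LJ Index's exact formula, only that four per-capita statistics are equally weighted and that libraries are compared within expenditure-based peer groups. A minimal sketch of one plausible approach, assuming each per-capita measure is standardized against its peer group's mean and standard deviation before the four standardized scores are summed (the measure names and data layout here are illustrative, not the published method):

```python
# Hypothetical sketch of an LJ-Index-style composite score.
# ASSUMPTION: each per-capita measure is z-scored against its peer group
# and the four z-scores are summed with equal weight; the actual published
# formula may differ.
from statistics import mean, pstdev

MEASURES = ["visits", "circulation", "program_attendance", "internet_uses"]

def per_capita(library):
    """Convert raw counts to per-capita rates for the four measures."""
    return {m: library[m] / library["population"] for m in MEASURES}

def composite_scores(peer_group):
    """Score each library in an expenditure-based peer group.

    Returns one composite score per library, in input order.
    """
    rates = [per_capita(lib) for lib in peer_group]
    scores = []
    for r in rates:
        total = 0.0
        for m in MEASURES:
            vals = [x[m] for x in rates]
            sd = pstdev(vals) or 1.0  # guard against a zero-variance measure
            total += (r[m] - mean(vals)) / sd  # equal weight per measure
        scores.append(total)
    return scores
```

Scores computed this way only order libraries within a peer group; converting them into 5-, 4-, and 3-star tiers would be a separate cutoff step, consistent with the system's choice of coarse designations over precise ranks.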

The HAPLR ratings have drawn significant criticism and praise from members of the library community.

In Library Journal, Oregon State Librarian Jim Scheppke notes that the statistics HAPLR relies on are misleading because they lean too heavily on output measures, such as circulation and funding, and not on input measures, such as open hours and patron satisfaction. He adds, "To give HAPLR some credit, collectively, the libraries in the top half of the list are definitely better than the libraries in the bottom half, but when it gets down to individual cases, which is what HAPLR claims to be able to do, it doesn't work."
