Way back when, a library I worked at had a standard survey we gave to every student at the end of an instruction session. It included a bunch of Likert-scale questions like “How satisfied were you with the session?” and “How useful was the session?” We dutifully collected the surveys and someone went through each one and entered the responses into a spreadsheet.
And there it sat.
We never used the data for anything, and I’m not sure what changes we could have made based on a satisfaction survey that didn’t tell us whether the students learned anything, or what.
Putting data to use
These days, more and more people understand that assessment data can be used to improve library services and that it’s a critical tool in the effort to remain a vital part of our communities. It can also help us learn more about our patrons, demonstrate the value the library provides to the community, and support advocacy for things like increased staffing, improved facilities, and more library instruction.
We already collect a lot of data in libraries: collection size, book checkouts, database hits, gate counts, reference desk transactions, and much more. While the data most libraries collect tells a story, it rarely tells us how the library is used, by whom, and what impact it has on their lives and learning. Often the things that are easiest to measure are not the ones that provide the information we need most.
Now, I often use minute papers in my teaching. A minute paper asks students to quickly answer two or three questions about the instruction session, such as “What is the most valuable thing you learned today?” and “What was unclear or what do you still have questions about?” They’re quick and easy for students to fill out, whether on paper or online, and I learn so much from them. I discover what topics I covered that students found valuable and what I either didn’t cover well enough or should cover the next time I teach.
Sometimes the problem isn’t that you’re collecting the wrong data; it’s that you’re not sharing it with the right people. For example, interlibrary loan data is incredibly useful to share with librarians responsible for collection development. At many libraries, this simply isn’t a regular part of the workflow. ILL data gives me, as a subject librarian, a clear sense of the subject areas in which our collection is not meeting patron needs and where I should be focusing my purchasing efforts.
As these examples demonstrate, assessments don’t always have to be intricately designed and time-consuming for respondents. Librarians should approach any assessment effort by first asking themselves what information they are seeking. They may find that they already have the data they need; if not, they will be better able to design an assessment tool with a specific goal in mind.
Meaningful assessment requires a work environment where it’s okay to fail, so long as you learn from it. Assessments will sometimes tell you that your project did not have a positive impact, and that may scare people away from doing valuable assessment work. In a true learning culture, where experimentation and failure are accepted, assessment will be focused on improvement, not accountability, and people will not fear what they may learn from the results.
In this era of accountability and accreditation, it’s easy to lose sight of why we collect data and do assessment. Keeping the focus on learning and improvement is the key to doing meaningful assessment that will make your library better. And in an ever-changing information environment, any library not assessing its services runs the risk of becoming irrelevant to its community.
MEREDITH FARKAS is head of instructional services at Portland (Oreg.) State University. She blogs at Information Wants to Be Free and created Library Success: A Best Practices Wiki. Contact her at librarysuccess[at]gmail.com.