What is the right amount of data to share with vendors? Who sets the defaults? And, at a time when most users have already opted into corporate platforms that aggregate and share data for profit, is the cat already out of the bag when it comes to protecting users’ library data?
Panelists at “Identity and Authentication: Enhancing the Research Experience with Responsible Use of Patron Data,” moderated by Justine Withers, electronic resources librarian at Western New Mexico University in Silver City, addressed these and other questions around privacy policies in the academic library. The panel took place on Saturday, June 30, at the American Library Association’s 2024 Annual Conference and Exhibition in San Diego. Withers was joined by Adam Chandler, director of automation, user experience, and post-cataloging services at Cornell University, and Kim Hill, strategic account manager for Emerald Publishing, who shared the academic librarian and vendor perspectives, respectively.
Withers kicked off the session with an explanation of authentication basics. At the most basic level, vendors look at user data in order to authenticate access: Who is the user, are they actually that person, and what are they allowed to access? This is the basis for most traditional single sign-on and IP-authenticated solutions. In these cases, Hill noted, vendors use the data only to verify that a user is allowed to access given content and don’t store anything additional.
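The three questions Withers described map onto identification, authentication, and authorization. As a rough illustration only (this is a hypothetical toy, not any vendor’s or institution’s actual system; the directory, names, and plaintext passwords are invented for clarity), the checks might look like:

```python
# Toy sketch of the three access checks: identification, authentication,
# and authorization. Real systems use hashed credentials and protocols
# such as SAML-based single sign-on, not a plaintext dictionary.

# Hypothetical "institutional directory": username -> (password, licensed resources)
DIRECTORY = {
    "jdoe": ("s3cret", {"journal-archive", "ebook-collection"}),
}

def can_access(username: str, password: str, resource: str) -> bool:
    """True only if the user exists (who are they?), the credential
    matches (are they that person?), and the resource is licensed
    to them (what are they allowed to access?)."""
    record = DIRECTORY.get(username)      # identification
    if record is None:
        return False
    stored_password, licensed = record
    if password != stored_password:       # authentication
        return False
    return resource in licensed           # authorization
```

Note that nothing in this flow requires the vendor to retain the user’s name or email; the point the panelists made is that verification can succeed without storing additional personal data.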
However, vendors are increasingly offering personalized tools and “enhanced research” services, like recommendations, that require additional user information, whether supplied through an institution’s authentication services or a separate user ID and password. Libraries need to be aware of what their institution is sharing with vendors, make an effort to limit that information to only what is necessary, and help patrons, students, and others understand what they are opting into when they use these services.
Chandler shared the story of a Cornell student who had chosen to keep their records private under the Family Educational Rights and Privacy Act (FERPA). However, they were unable to access a platform that was necessary for one of their courses because—in accordance with FERPA—the library would not disclose their name and email address to the vendor. “The student gave up and said, ‘Just turn it on,’ because it gave them the content,” Chandler said. But that compromise “shouldn’t be the case for library content.”
Vendor privacy policies are often long and opaque. In an effort to give students useful information about what would be shared when they use a platform, Chandler attempted to analyze and compare 192 privacy policies. “It’s pretty wild,” he said. In many cases, the information on how a user’s data is handled by a particular platform is folded into the vendor’s overall privacy policy. “It’s not clear if they’re talking about their website or the product,” he noted. “This is actually very problematic if one is trying to summarize that and present the information to the patron.”
Chandler also discovered that the university’s single sign-on solution defaulted to sharing more personal information than is strictly necessary for students and faculty to access tools and resources.
Why does this matter?
One of the dangers of personalization is “siloization,” according to Hill. “You’re prompted down a certain rabbit hole based on the information they know about you. That, as a librarian, is dangerous to me. I want people to have all of the information to make their own decisions.”
While some campuses may have the purchasing power to push vendors to modify privacy policies, many are instead pressured to provide seamless access at the expense of user data. There’s much education still to be done on this topic, and Chandler acknowledged that he, too, is still learning. “I’m persistent and a little ornery,” he said. “You can ask questions, and keep going.”