We Can’t Ignore AI

Library workers tackle bias in AI and slow adoption of the technology

June 26, 2022

Martha Alvarado Anderson, director of diversity, equity, and inclusion and head of the digital services department at the University of Arkansas, addresses a packed session. Photo: Rebecca Lomax/American Libraries

“Librarians have a long and proud history in applying and adopting emerging technologies,” said Soo Young Rieh, professor and associate dean for education at the University of Texas at Austin, at the American Library Association’s 2022 Annual Conference and Exhibition in Washington, D.C. Artificial intelligence (AI) is part of our lives now: if you’ve used a digital assistant like Siri or Alexa, or taken an Uber or Lyft, you’ve interacted with AI-enabled technology, noted Clara M. Chu, director of the Mortenson Center for International Library Programs at the University of Illinois at Urbana-Champaign. And yet libraries have been slow to adopt AI.

Chu and Rieh, alongside other collaborators, developed a one-week professional development workshop designed to bring library workers up to speed on AI through collaborative learning and a clear-eyed look at the technology’s shortcomings. At the June 25 session “Artificial Intelligence (AI) in Libraries: From Training to Innovation,” Chu and Rieh discussed some of the takeaways from the first IDEA Institute on Artificial Intelligence, and two of the institute’s fellows shared details of their projects.

Libraries may be lagging on AI adoption for several reasons: a lack of expertise, the social issues AI raises (including privacy, safety, and algorithmic bias), and financial constraints, Chu said. The IDEA Institute (the name stands for Innovation, Disruption, Enquiry, and Access) brought together 17 fellows to learn about AI while designing with a human-centered approach, mitigating data and algorithmic bias, and ensuring equity, diversity, and inclusion. Fellows worked together on small projects they could implement at their libraries. “Finding out the questions is more important than trying to find the solutions immediately,” Rieh said.

“Would you like AI to simply be applied to you, rather than you being a contributor in the decision making?” asked Martha Alvarado Anderson, director of diversity, equity, and inclusion and head of the digital services department at the University of Arkansas and a 2021 IDEA Institute on AI fellow. Her project explored AI and artificial cultures (representations of what it means to be human) and what can happen when people at all levels of society don’t have input.

“Who decides what it means to be human?” she asked. She highlighted problematic AI implementations: a hiring tool that rejects women and facial recognition software that can’t recognize Black women. “I wonder if some of those people deciding what to enter the algorithm, if those were women, would this happen?”

Still, “when we bring ourselves into this decision making, we are bringing our own biases,” Anderson added. It’s important to document the decision-making process and a project’s intended outcomes so that errors of bias are easier to fix. “Just because it’s created by a computer doesn’t mean that it’s perfect and unbiased.”

Trevor Watkins, teaching and outreach librarian at George Mason University Libraries in Fairfax, Virginia, and a 2021 IDEA Institute on AI fellow, used the institute to work on the university’s orientation chatbot, MOCA (Mason-Libraries’ Orientation Conversational Agent), and to find a way for it to recognize and reject trolling. “Most of you, if you have a social media account, you’ve probably been trolled,” he said. He recounted working virtual reference and receiving nonsense questions and one-word requests like “book.”

The MOCA chatbot uses an “expert system” design, which, while somewhat dated technology, was preferred because more complex chatbots have well-known problems with repeating racist and offensive commentary picked up from trolls. An expert system models an existing human expert and uses a logic engine to parse requests. MOCA has two modeled experts: a staff member who has been with the university for more than 20 years and a student who considers herself an expert on trolling. In addition to the orientation information that forms the chatbot’s core purpose, the team built a taxonomy of trolling and is using machine learning to expand it.
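In outline, an expert system pairs a hand-built knowledge base, elicited from the modeled experts, with a simple inference step that matches each request against rules. Below is a minimal sketch of that pattern in Python; the article doesn’t describe MOCA’s implementation, so every rule, answer, and trolling category here is an invented placeholder, not MOCA’s actual data or code.

```python
# Illustrative sketch only: an expert-system-style chatbot with a trolling
# filter. All rules, answers, and taxonomy entries are hypothetical, not MOCA's.
import re

# Knowledge base: request patterns (as a modeled expert might phrase them)
# mapped to canned orientation answers. All entries are made up.
ORIENTATION_RULES = [
    (re.compile(r"\b(hours|open|close)\b", re.I),
     "Semester hours are posted on the library website."),
    (re.compile(r"\b(borrow|check ?out|loan)\b", re.I),
     "You can borrow most books with your student ID at the front desk."),
    (re.compile(r"\b(study room|reserve)\b", re.I),
     "Group study rooms can be reserved through the online booking system."),
]

# A toy trolling taxonomy: named categories of unproductive input, each with
# a test. A real taxonomy would be far richer and refined by machine learning.
TROLL_TAXONOMY = {
    "one_word": lambda text: len(text.split()) == 1,
    "gibberish": lambda text: not re.search(r"[aeiou]", text, re.I),
    "abusive": lambda text: bool(re.search(r"\b(stupid|useless)\b", text, re.I)),
}

def classify_trolling(text: str) -> str | None:
    """Return the first matching trolling category, or None if none match."""
    for category, test in TROLL_TAXONOMY.items():
        if test(text):
            return category
    return None

def respond(text: str) -> str:
    """Inference step: screen out trolling, then fire the first matching rule."""
    if (category := classify_trolling(text)) is not None:
        return f"(flagged: {category}) Could you rephrase that as a full question?"
    for pattern, answer in ORIENTATION_RULES:
        if pattern.search(text):
            return answer
    return "I don't know that one yet; a librarian at the reference desk can help."

if __name__ == "__main__":
    for query in ["What are the library hours?", "book", "How do I reserve a study room?"]:
        print(f"> {query}\n{respond(query)}\n")
```

One plausible reading of the design Watkins described: a rule-based core stays predictable, while the machine-learning component grows the taxonomy of inputs to reject (such as the one-word “book” requests he recounted) rather than generating replies itself.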

“We can’t just leave this to big tech companies,” Watkins said. “Libraries have an opportunity to become kind of gatekeepers of AI.”
