Last October, President Joe Biden issued an executive order detailing guidelines for various aspects of artificial intelligence (AI), with the aim of driving inquiry, regulation, and policy around current and emerging tools.
A hot topic in many industries, generative artificial intelligence (generative AI) has increasingly occupied our cultural consciousness since the large language model ChatGPT debuted for public use in November 2022. Some libraries are playing a unique role in charting a path through this new technological territory as the boundaries of AI’s uses and impacts continue to shift.
“Librarians are asking if AI will render us obsolete—it won’t,” says Nick Tanzi, library technology consultant, author, and assistant director of South Huntington Public Library in Huntington Station, New York. “We are information professionals, and our information landscape has just grown in complexity.”
AI’s critics have sounded the alarm about the models’ tendency to reinforce and amplify any biases found in the data they are trained on. Others have raised concerns about false information and privacy, as well as plagiarism and copyright, issues of particular concern to academic and school libraries. How can users be sure the output generated by AI tools is legal, ethical, and accurate?
“There’s an old saying: ‘Garbage in, garbage out,’” says Elissa Malespina, teacher-librarian at Union (N.J.) High School, who writes the AI School Librarians Newsletter. “In the world of AI, it’s a matter of ‘data in, data out.’ Make sure you’ve got a clear sense of not just how AI operates but also where it’s drawing its knowledge from. It’s all about being an informed user.”
American Libraries spoke with five technology experts, educators, and librarians who are pioneering the use of generative AI at their institutions. They discuss how it’s being used in libraries, what ethical concerns have emerged, and how librarians can educate their communities on navigating these powerful technologies.
What top questions have librarians and educators raised about AI?
Tanzi: Perhaps it’s because I started working in public libraries in the 1990s, but I see many of the same questions and concerns that emerged with the rise of the internet. Librarians are approaching AI with a high degree of skepticism, which is good and necessary. They want to know how we can trust the quality of the information, since AI systems can hallucinate. We often don’t know where these systems are getting the information they provide us, and they can make up fake sources complete with fake citations.
Malespina: Every time new technology rolls out, I believe there are two kinds of educators. Some are like kids in a candy store, eager to bring it into the classroom. Others are more hesitant, thinking, “Maybe this is not for me.” I’ve seen both types right in my school. There are those who are using it in really creative ways, and then some who’d rather keep generative AI at arm’s length. There’s also this sizable group in the middle, kind of on the fence. They think the tech is cool but are scratching their heads on how to integrate it. At the same time, they are leery about some of the negatives associated with AI.
How are you or how is your institution or community already engaging with generative AI?
Watkins: We created an AI Salon Series at George Mason University (GMU), led by our computing librarian Heidi Blackburn, to discuss AI and the use of AI tools in research and the classroom. We have also created an AI Community of Practice and an AI Task Force. The salon series provides a platform that brings together librarians and staff at GMU libraries with varied AI experience and facilitates discussion. In our AI Community of Practice, we provide a space for discussion and time to use generative AI tools like ChatGPT and Bard for professional and research purposes. It gives staff the chance to acquire hands-on experience with tools they may use in the classroom or in their jobs. We opened the AI Salon Series and AI Community of Practice sessions to the wider GMU community. I also recently used ChatGPT and Bard in the classroom for the first time with an English professor I work with each semester. While students liked using those tools, they preferred the traditional research practices I taught. I am working on an article that details that project.
Malespina: I’m a big fan of generative AI, and I lean on it quite often. I had it help me edit the answers to these questions. I also love using it when crafting social media posts for the library. I shoot over a basic idea, and AI generates the content. It’s also helpful when I want to tweak the tone or style of something I’ve written. Thinking of titles for my presentations or drafting conference proposals is something I hate to do, and AI does it for me.
Have you received any pushback about the use of AI in the library? What has been your response?
Hennig: When University of Arizona Libraries (UAL) published our Student Guide to ChatGPT last summer, one writing instructor got in touch with us. His main concern was that one of the items on our list suggested using ChatGPT to come up with topic ideas for a research paper. He felt strongly that topic ideas should come from the student’s own mind. He didn’t mind the use of ChatGPT to narrow a topic, but he wanted students to start with their own ideas. After I learned more about his teaching, I volunteered to remove that bullet point, since I didn’t want to make life more difficult for instructors. However, when I showed him the sample assignment UAL staff came up with about generating research paper topics with ChatGPT, he decided what we described was acceptable, since it was mostly about narrowing down a topic and coming up with keywords for searching in library databases.
Watkins: What I like more than anything about working for GMU is that there is a culture of innovation, where trying new things is encouraged. However, there is a big difference between encouraging innovation and providing the support needed to bring that innovation to life. It requires buy-in from the top down, and if you don’t have that initial buy-in, you will need a strong advocate and the intestinal fortitude to secure those resources on your own. I am currently working on a conversational agent—I hate the term chatbot. That project has expanded into an interactive 3D tour of GMU’s Fenwick, Mason Square, and Mercer libraries with the conversational agent embedded as a tour guide, using both virtual reality and augmented reality technology. As in most academic and public libraries around the country, budget cuts affect what can and can’t be done.
What ethical questions has generative AI posed at your institution? Should libraries be establishing policy guidelines for use?
Tanzi: I, along with some colleagues I’ve spoken with, have ethical concerns about how these training models are being built. Generative AI is often trained on artists’ copyrighted works, and some text-to-image generators can easily imitate the style of a living artist. We have concerns about algorithmic bias, where one group of users is privileged over another. For example, AI detection tools will consistently flag book reports written by nonnative English speakers as AI-generated. Limited-memory AI systems sometimes use human and environmental data to improve the quality of their training model, but this can come at the expense of user privacy.
Boughida: Internally, we at Stony Brook University Libraries (SBUL) are actively deliberating over establishing a university-wide AI governance framework in which SBUL assumes a central role. Leveraging the high trust that campus stakeholders traditionally place in libraries, we aim to act as mediators, contributing to the collaborative development of policies through a shared governance approach.
Karim, as former dean of libraries at University of Rhode Island (URI), you helped establish the first-ever multidisciplinary AI lab at an academic library in 2018. What prompted its development, and what AI functions were the focus?
Boughida: The lab served a dual purpose. It provided tutorials and workshops for students at various skill levels, covering topics such as robotics, natural language processing, and machine learning. Simultaneously, it functioned as a hub for faculty, students, and the community to examine AI’s social, ethical, economic, and artistic implications, with an underlying emphasis on preparing students for the workforce of the future. This strategic move emphasized responsible and ethical AI integration and a commitment to a balanced approach.
URI Libraries has a traditional ethos of caring about diversity, ethics, privacy, fair use, and so on. We aimed to position ourselves as a pivotal intermediary stakeholder. We didn’t use ethics and privacy as an excuse to avoid engaging with rapidly evolving AI. We observed that some peers resort to the ethics framework to justify inaction.
As more students and patrons use different types of AI in the library, what are some potential risks and benefits?
Watkins: One benefit is that we are serving as a resource for students and patrons who may not have the technology needed at home to access these tools, or someone who can demonstrate how to use them effectively. They will get that assistance in the library. For students who are about to enter the job market, having a working knowledge of AI in their field is crucial. One potential consequence is overreliance on the technology, specifically generative AI. Technology comes and goes, and its longevity is unfortunately predicated on the profit margins of the companies that create it.
Boughida: Benefits include enhanced user experiences through personalized recommendations, streamlined searches, productivity enhancement, language learning, and study assistance. We know that students integrate AI into their daily routines through widely used applications such as TikTok, Snapchat, Google Maps, and others. Potential consequences include concerns regarding user privacy, algorithmic bias, and data security. In particular, students employ AI extensively and prefer to keep their usage hidden from instructors and faculty. Some students have expressed the perception that instructors are unaware of their AI use, highlighting a distinct digital divide in this context. We need AI literacy for all.
There have been concerns about copyright infringement and plagiarism with large language models. How do you advise faculty, students, and researchers on those issues?
Hennig: When it comes to plagiarism issues, we refer students to their instructors. Each faculty member will have different attitudes and policies about the use of generative AI. In our Student Guide to ChatGPT, we discuss academic integrity, giving credit, and what to do if you are falsely accused of cheating with generative AI, because sadly, that happens. Tools advertised as being able to detect AI writing have been found to be very unreliable, especially if your first language isn’t English. For instructors, we recommend thinking about ChatGPT as a pedagogy problem, rather than a plagiarism problem.
As for copyright, we discuss those issues in our LibGuide for instructors, AI Literacy in the Age of ChatGPT. Since the question of copyright is not settled yet, it’s difficult to create specific policies. But we can point people to this information and have discussions with students about it. Some lawyers say that it may take years for this to be determined in the courts.
Watkins: This goes back to AI literacy. As library professionals, we must create learning objects that focus on critical thinking, especially when analyzing any AI-generated content. We also must reinforce what plagiarism and copyright are and the consequences of violating them. Plagiarism and copyright are viewed differently globally, so we should take that into consideration when we consult with faculty, students, and researchers who may come to the institution with a different perspective than what is taught in America. When I work with faculty, students, or researchers interested in using ChatGPT, I point to the bottom of the interface and remind them that according to OpenAI, “ChatGPT can make mistakes. Consider checking important information.” So, I encourage everyone to use it ethically.
Nicole, you have drafted resources to help librarians navigate AI, including a checklist for evaluating generative AI for purchase. What prompted its creation?
Hennig: I recently met online with a group of faculty librarians at another academic library to lead a discussion about generative AI. I learned that they had been approached by a vendor of an AI tool to consider purchasing it. So that got me thinking: “How will we, as library professionals, have enough knowledge of generative AI technologies to effectively evaluate these tools?” That led me to think about how useful it would be to have a list of questions to ask when purchasing a tool based on generative AI models. So I created one. It’s an early draft at this point and open for comments.
What is your response to President Biden’s recent executive order regarding AI?
Watkins: Prior to President Biden’s AI executive order, California Gov. Gavin Newsom, followed by Pennsylvania Gov. Josh Shapiro and Virginia Gov. Glenn Youngkin, issued executive orders for their respective states. What I find troubling in all of them is the ambiguity of their language, which creates potential loopholes that corporations could exploit. Corporations have armies of lawyers that could easily pick these orders apart, which throws enforcement out the window. I do not understand why libraries are not more involved.
Tanzi: In part, the executive order is a plan to develop a plan. It directs a number of federal agencies to develop standards and best practices involving artificial intelligence, as a way to minimize its harms and maximize its benefits. Some of the more consequential actions from the government will likely take place in the coming months, as the aforementioned federal agencies weigh in. For example, we should all keep an eye on the US Department of Education, as it will need to produce a toolkit on the safe, nondiscriminatory use of AI in the classroom within the year. Those types of actions will prove highly impactful and can build some guardrails as the regulatory environment takes shape.
As is the nature of executive orders, it is no replacement for legislation; it can be undone by this or any future president at the stroke of a pen. Overall, I think it’s a positive development, in that it concerns itself with things we should be concerned about: privacy, algorithmic bias, job market disruption, disinformation, and safety.
What do you think is most exciting about generative AI technology? Most concerning?
Malespina: I’m thrilled about the timesaving perks of AI and its potential to level the playing field for everyone. However, my excitement is tempered with a bit of caution. I worry about students becoming too dependent on it, which could lead to them not grasping material as deeply as they should. From an educational perspective, AI throws open the doors to reimagining how we evaluate students. The old standbys, like five-paragraph essays, won’t cut it anymore. We’ll need to shift our focus from the end product to the journey: in other words, assessing the student’s process and approach. This change could pave the way for more inventive and varied methods for students to demonstrate their understanding. Frankly, it’s a transformation I believe is overdue, and I’m eager to see where it takes us.
Boughida: As the foremost generative AI application, ChatGPT has fundamentally altered how AI is perceived and utilized. It demonstrates AI’s ability to generate content traditionally associated with human capabilities. The recent advancements in multimodal AI and the emergence of more tailored applications are particularly noteworthy. The accelerated turnaround time for generating software code is also a significant advantage. I remain unconcerned about artificial general intelligence, viewing it as a fabricated concept designed to generate hype around the sale of AI products. On the other hand, I am vigilant about potential issues such as hallucinations, misinformation and disinformation, potential harm to marginalized populations, and the rise of deepfakes.
Tanzi: AI and its contextual awareness capabilities can be used to perform diversity audits on library collections but can also be used as a tool of exclusion—particularly concerning in a time of book bans and library collections composed of licensed (rather than owned) digital media. We have seen examples of ebook publishers modifying content after it was sold. AI could conceivably be used to conduct particularly insidious forms of censorship by modifying, rather than removing, a title. The tone or theme of a book could be altered after purchase. A character could be erased. AI represents an extraordinarily impactful and disruptive technology that will transform our information landscape.
Librarianship, with its commitment to privacy and providing accurate, unbiased information, will prove essential. This is the next chapter of information literacy. AI represents both an enormous challenge and opportunity for libraries, and I am confident we will rise to the occasion.
Glossary
Artificial general intelligence » Artificial intelligence that could match or even surpass human or animal performance on any intellectual task; contrasted with artificial narrow intelligence, which focuses on a specific type of task.
Artificial hallucination, or hallucination » Inaccurate or misleading information that an AI platform presents as fact.
Augmented reality » An interactive experience in which a real-world environment is enhanced with computer-generated elements.
Bard » A free generative AI chatbot, similar to ChatGPT, that draws from internet resources; released by Google in spring 2023.
ChatGPT » A free generative AI platform based on a large language model, created by OpenAI and released to the public in fall 2022.
Deepfake » Imagery, video, or audio, now often AI-generated, that convincingly represents a person’s likeness. It can be used to spread malicious or false information.
Generative AI » AI technology that can create text, images, and other types of content based on a pool of data.
Multimodal AI » AI that can simultaneously understand and generate various forms of data input, including text, images, and sound.
Natural language processing » A subtype of AI technology that gives computers the ability to understand, analyze, and manipulate language much as humans can.