Peter Broadwell and Lindsay King write: “Facing a digital accessibility compliance deadline, we wondered whether and how AI tools could be implemented to generate alt text for existing online images within the Stanford Libraries’ digital exhibits. New large language models called ‘vision language’ models have a sophisticated understanding of the relationship between text and image. Could we bring vision language models’ understanding of the images together with existing metadata to create alt text that would be useful to patrons who need it?”
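The approach the question describes — folding existing exhibit metadata into a vision language model prompt to produce alt text — might be sketched roughly as below. This is only an illustration of the plumbing, not Stanford Libraries' actual pipeline: the model call is stubbed out, and every function and field name here is hypothetical.

```python
# Sketch: combine existing catalog metadata with an image when asking a
# vision language model (VLM) for alt text. The VLM call is a stub; a real
# deployment would substitute an actual model client. Names are illustrative.

def build_alt_text_prompt(metadata: dict) -> str:
    """Fold whatever catalog metadata exists into the instruction for the VLM."""
    context = "; ".join(f"{k}: {v}" for k, v in metadata.items() if v)
    return (
        "Write concise alt text (one sentence, no 'image of' prefix) for the "
        f"attached exhibit image. Known metadata: {context}."
    )

def generate_alt_text(image_path: str, metadata: dict, vlm_call) -> str:
    """vlm_call stands in for a real vision-language-model client
    that accepts an image reference and a text prompt."""
    prompt = build_alt_text_prompt(metadata)
    return vlm_call(image_path, prompt).strip()

# Demonstration with a stubbed model, showing only the prompt assembly:
fake_vlm = lambda image, prompt: "A hand-colored 1850s map of the San Francisco Bay shoreline."
alt = generate_alt_text(
    "exhibits/map_001.jpg",
    {"title": "Map of San Francisco Bay", "date": "ca. 1855"},
    fake_vlm,
)
```

The point of routing metadata through the prompt is that a title or date the curators already recorded can anchor the model's description, rather than asking it to identify the object from pixels alone.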