Content Authenticity and Provenance in the Age of Artificial Intelligence: A Call to Action for the Libraries, Archives and Museums Community
Today’s guest post is from Kate Murray of the Digital Collections Management & Services Division, co-founder of the C2PA for G+LAM Community of Practice.
Released in February 2026 as a product of the C2PA for G+LAM Community of Practice, the white paper “Content Authenticity and Provenance in the Age of Artificial Intelligence: A Call-to-Action for the LAMs Community” (download PDF) advocates for libraries, archives, and museums (LAMs) to take proactive, pragmatic steps to ensure that digital collections content, especially content impacted by AI at any point in its lifecycle, remains authentic, transparent, and verifiable from creation through access. Doing so is essential to the LAMs community’s mission of sustaining public trust.
While content authenticity and provenance (CAP) have long been archival principles, existing processes are increasingly impacted, or have the potential to be impacted, by AI-mediated workflows. Researchers, donors, the public, and heritage practitioners increasingly expect these impacts to be documented comprehensively and consistently by extending traditional content authenticity and provenance data to account for AI.
This is a critical and decisive moment. The impact of AI on collections poses risks that demand thoughtful collective attention. Even with the best intentions of maintaining transparency with CAP data, AI technologies introduce novel ethical, legal, and privacy threats. At the same time, AI is transforming, in real time, the creation, organization, and analysis of data at a pace that defies the LAMs community’s traditionally deliberative response to change.

The white paper is not a “how-to” manual, nor does it advocate for or against the use of AI. Rather, it examines a central question: Why should LAMs institutions and users care about CAP, especially for collections material impacted by AI? And, more directly: What should my organization do? A few examples to ground the discussion are laid out in the introduction:
For example, if you are an archivist receiving a set of photographs from a photojournalist documenting a newsworthy event, how can you verify that the images were not altered by AI in such a way as to distort what occurred? If you’re a researcher studying a digital capture of an ancient artifact, how can you confirm that the capture device and settings are accurately documented? If you’re a special collections librarian receiving a set of chatbot transcripts illustrating an author’s creative process, how should you document their provenance? If you are indexing and transcribing a recorded interview, what measures should be taken to guarantee that the voices are authentic and that words or whole passages have not been synthetically inserted or deleted? If you’re a documentary filmmaker, what steps should you take to distinguish historical from AI-generated content?
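None of these questions has a single technical answer, and the white paper does not prescribe one. But a common building block behind several of them is a verifiable fixity record made at the point of acquisition, so that any later alteration, AI-driven or otherwise, is at least detectable. As a minimal illustration only (the record fields and filenames below are hypothetical, not a schema from the white paper), a sketch in Python:

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def make_provenance_record(path: Path, source: str) -> dict:
    """Build a minimal provenance record with a SHA-256 fixity hash.

    The record layout here is illustrative, not a standard schema
    (real workflows would use e.g. PREMIS or C2PA manifests).
    """
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return {
        "file": path.name,
        "sha256": digest,  # fixity value: any byte-level change alters this
        "source": source,  # who supplied the content
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }


def verify_fixity(path: Path, record: dict) -> bool:
    """Re-hash the file and compare against the stored fixity value."""
    return hashlib.sha256(path.read_bytes()).hexdigest() == record["sha256"]


if __name__ == "__main__":
    # Hypothetical example: a donated photograph is hashed on receipt.
    sample = Path("photo.jpg")
    sample.write_bytes(b"example image bytes")
    record = make_provenance_record(sample, source="photojournalist donation")
    print(json.dumps(record, indent=2))
    print("fixity intact:", verify_fixity(sample, record))
```

A checksum alone cannot say *how* a file was changed or whether AI was involved; that is precisely the gap that richer CAP data, such as C2PA content credentials, aims to fill.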
The first section of the white paper provides a brief primer on trust as LAMs’ most essential currency, followed by a history of CAP principles as applied to digital preservation. After this scene-setting overview is a selection of recent and ongoing research projects experimenting with how to secure CAP either using, or in response to, emerging AI technologies. The next section covers some of the overarching risks that AI poses for the LAMs community, culminating in a set of four “pillars” that serve as the foundation for a collective call to action.
In summary, the four pillars are:
Research and Development: The LAMs community must invest in sustained R&D that extends established content authenticity and provenance principles to the new AI realities while ensuring that humans remain meaningfully in the loop across workflows and institutional contexts of all sizes.
Partnerships and Collaboration: LAMs must deepen cross-institutional and cross-sector collaborations to avoid duplication, accelerate learning and co-develop shared frameworks, tools and standards that reflect diverse digital preservation needs.
Advocacy with Industry, Vendors and User Communities: LAMs should actively shape standards, specifications, practice models and technologies by asserting their requirements for open, vendor-agnostic and trustworthy CAP data, leveraging their unique authority as long-standing stewards of public trust.
Open Distribution of Results and Lessons Learned: The rapid pace of AI innovation demands more transparent and collaborative modes of sharing experimental approaches and outcomes to complement traditional scholarly channels.
It’s important to acknowledge that the research and perspectives represented within the report are a snapshot of the state of these issues as of early 2026. Like the development of AI itself, this is a fast-moving field subject to change. To date, the white paper has generated significant community interest in knowledge sharing about these issues and inspired further commitment to collective discussion. Comments are welcome via this blog post or to [email protected].
“Content Authenticity and Provenance in the Age of Artificial Intelligence: A Call-to-Action for the LAMs Community” is co-authored by Kate Murray (Library of Congress) and independent scholar Joshua Sternfeld (see also Josh’s take in his Substack article on Encoding the Past). Additional contributors included David Cirella, Head of Digital Preservation at Yale University; Ann Hanlon, Head of Digital Collections & Initiatives at the University of Wisconsin-Milwaukee Libraries; Nick Krabbenhoeft, Assistant Director of Digital Preservation at the New York Public Library; and Eric Lopatin, Product Manager for Digital Preservation at the California Digital Library.