Safeguarding Trust and Dignity in the Age of AI-Generated Media

These recommendations stem from PAI’s analysis of case studies submitted by 18 organizational supporters of PAI’s Synthetic Media Framework. The themes and recommendations are PAI’s own and derive from this limited sample of multistakeholder input.
In May 2019, years before ChatGPT brought AI into public consciousness, PAI convened a cohort to grapple with a then-emergent threat to information quality, public discourse, and civic life: AI-generated media. At the time, political deepfakes were only beginning to affect democratic discourse, AI-generated intimate imagery was victimizing women, and newsrooms were increasingly concerned their authentic content would be discredited as fake. Misrepresentation was not new, but AI was adding fuel to the fakery fire.
Now, in 2025, that fire continues to burn: synthetic media affects not only the technology and news industries, but also fields like finance, online dating, film, and advertising, touching nearly every part of a person’s life. That’s why PAI has worked to ensure audio-visual AI technologies support trustworthy information and human dignity.
We brought together experts across technology, academia, civil society, and media who understood not only the technical realities of creating and responding to AI-generated media, but also the ways it affects civil liberties and human flourishing.
This collaboration resulted in our Synthetic Media Framework, which puts forward guidelines for the responsible creation, development, and distribution of synthetic media technology. The Framework offers actionable practices that prioritize transparency, prevent deception, and mitigate audio-visual harms.
With commitments from 18 diverse supporters, we documented real-world implementation through detailed case studies spanning journalism, entertainment, human rights advocacy, social media, and AI tool design.
These case studies allowed us to expand on our Framework practices and document the guidance’s real-world impact. A dating app strengthened its authentication of real profiles and prepared for an influx of AI-generated ones. A newsroom better grasped how to weave AI risks into its existing journalistic standards. A social media platform was empowered to explain synthetic media to audiences and identify associated harms. A human rights organization could better articulate how artistic projects should obtain consent for synthetic media.
Just as important, through in-depth analyses of these case studies and summarized policy recommendations, we identified paths forward for the AI field: how to responsibly differentiate between creative and malicious content, what context different stakeholders owe audiences trying to make sense of media, and how to build infrastructure that supports trustworthy media overall.
Now, based on this body of work and our years of gathering insight into the Framework’s implementation across sectors, we’ve crystallized key actionable takeaways for specific stakeholders within seven themes.
Through collaborative commitment from policymakers, industry, civil society, media, philanthropy, and academia to enact the recommendations below, we can build a future where synthetic media serves creativity and communication while preserving truth, trust, and shared reality.
These recommendations distill how the case study findings and Synthetic Media Framework themes can be put into practice.
For descriptions of stakeholders and other terminology, see the Synthetic Media Framework itself and our glossary of transparency terms.
The path forward is clear: we need coordinated action across every sector to ensure AI supports human flourishing. Technical standards, user education, industry alignment, and human-centered design must evolve together as AI becomes increasingly sophisticated, imitative, and convincing.
The window for proactive solutions is narrowing. PAI is responding by advocating for these recommendations and forecasting what newer, more “person-like” AI systems mean for misrepresentation, deception, and communication, and how existing guidelines may need to adapt.
Ready to make an impact? Join the conversation. Share your expertise. Implement these recommendations in your organization.
The future of trust in digital communication and information depends on what we do today, not tomorrow.