Guide

Navigating the AI landscape: How news organizations are creating trust-based AI policies

By David Chivers

February 6, 2025

This image was generated using DALL-E.

We launched the Lenfest Institute AI Collaborative and Fellowship Program to help news organizations explore how artificial intelligence could unlock new possibilities for journalism and drive business sustainability. From enhancing local government coverage and mining decades of archives to transforming advertising training and go-to-market processes, the participating news organizations have ambitious plans to push the boundaries and experiment with new solutions.

Harnessing AI’s potential requires more than innovation. It demands that the solutions uphold high ethical standards. In our earliest session, one grantee captured the balance perfectly: “We want to go into this with an open mind, giving our teams room to experiment while ensuring we’re mindful about how AI aligns with our values.”

During this same gathering, the conversation quickly turned to organizational AI policies: What principles should we codify to protect trust, encourage transparency, and support the ethical use of AI in newsrooms?

Whether your newsroom has an established AI policy or is just getting started, here are 5 key challenges to consider and 5 practical recommendations for establishing a trust-based AI policy, drawn from insights from the publishers participating in the AI Collaborative and Fellowship: Chicago Public Media, The Minnesota Star Tribune, Newsday (Long Island, New York), The Philadelphia Inquirer, and The Seattle Times.

5 key challenges in AI policy for journalism 

Trust and brand integrity – At its core, journalism is built on trust. Audiences rely on newsrooms to provide accurate, fair, and authentic information. That trust can erode if AI-generated content is mishandled, misrepresented, or inaccurate. As one grantee explained, “One of our key tenets is we never publish anything without somebody looking at it. AI can help get us there faster, but it’s not a replacement for human judgment.” 

Example: Newsday’s AI policy explicitly bans publishing fully AI-generated content under its brand to preserve authenticity. 

Bias and fairness – AI systems can inherit the biases of the data they’re trained on. For newsrooms, this raises risks of amplifying stereotypes or unintentionally excluding underrepresented voices. Addressing these biases is essential to ensure fairness. 
 
Example: In 2018, Amazon’s AI recruiting tool was found to favor male candidates because it was trained on biased hiring data, underscoring the need to carefully monitor AI outputs. 

Transparency and explainability – Transparency builds trust with audiences and within newsrooms. Both staff and readers need to know when AI is being used and how it contributes to reporting. “We’re building transparency into our workflows. If AI helps with a story, that fact is disclosed to our readers, and internally, everyone knows how tools are being used,” said one grantee. 

Example: The Seattle Times’ AI policy explains that readers can expect to find clear labeling on any content in which AI tools played a significant role in production: “We may use AI tools for limited tasks (such as transcribing notes or helping analyze data), but only after they go through a rigorous process of evaluation for potential risks and guardrails.”

Data privacy and security – Newsrooms often handle sensitive information, from confidential sources to proprietary archives. AI tools can introduce risks, especially when terms of service permit third-party access or legal discovery. “Tools like Otter are great for transcription, but their terms of service explicitly allow transcripts to be shared in legal discovery. That’s a big concern for us when dealing with whistleblowers or confidential sources,” shared one participant.
 
Example: Lenfest Institute AI Collaborative and Fellowship grantees are prioritizing legal reviews of AI tools to avoid exposing sensitive data. 

Workforce impact – The rise of AI raises questions about its impact on newsroom roles. While automation can improve efficiency, it shouldn’t come at the cost of sidelining human expertise. “AI is a tool, not a replacement. We see it as a way to free up our reporters’ time so they can focus on the investigative stories that matter most,” one participating newsroom leader shared. 
 
Example: The Philadelphia Inquirer is testing AI to help reporters research their stories using previous Inquirer coverage, as seen in this demo video.

Practical recommendations for newsrooms 

Start with clear guidelines – Have a policy! Even if it’s basic, a policy that encourages experimentation while safeguarding ethical standards is essential. Here’s an AI Policy Template from the Poynter Institute to get started.  
 
Tip: Think of AI as a teammate or “intern.” It can help start the work, but all outputs should be reviewed and fact-checked by humans before publication. 

Be transparent – Disclose AI’s role to both audiences and staff.

Examples of disclosure:

  • Translation: “This translation was generated using AI and reviewed by a human for accuracy.” 
  • Images: “This photo was generated using AI tools and curated by our editorial team.” 
  • Quizzes: The Seattle Times recently launched a weekly News Quiz that is generated by AI (and then edited by their team). Their disclosure: “News Quiz questions and answers were created with help from an artificial intelligence tool that’s carefully steered and overseen by our very human editors.” 

Build governance structures – Assign managers and legal teams to oversee AI usage. All experiments should have sign-off before deployment. “Our legal team reviews terms of service for any tool before we even start experimenting. It’s a non-negotiable step for us,” said one grantee. 

Prioritize data security – Work with IT and legal teams to ensure no proprietary or sensitive data is compromised. 

Train your team – Provide training to help staff understand AI’s capabilities and limitations. 

AI represents an incredible opportunity for journalism if we approach it responsibly. As one grantee reflected, “We don’t know how people are going to use these tools yet. That’s why having a clear policy and open lines of communication is so critical.” By embracing transparency, safeguarding data, and fostering collaboration, newsrooms can unlock AI’s potential while staying true to their mission.

About The Lenfest AI Collaborative and Fellowship Program  

The Lenfest AI Collaborative and Fellowship Program is an initiative led by The Lenfest Institute for Journalism in partnership with OpenAI and Microsoft. The program supports local news organizations in exploring and implementing artificial intelligence solutions to enhance business sustainability, audience engagement, and newsroom innovation. Through two-year fellowships, selected newsrooms receive direct funding, AI expertise, and Microsoft Azure and OpenAI credits to develop tools that improve reporting, data analysis, content discovery, and revenue generation. The program fosters cross-industry collaboration, enabling participating organizations to share best practices, product developments, and technical insights to benefit the broader news ecosystem. By equipping local newsrooms with cutting-edge AI capabilities, the Lenfest AI Collaborative and Fellowship aims to create a more sustainable, ethical, and innovative future for independent journalism. 
