Artificial intelligence is reshaping how news is gathered, verified, written, and distributed, and this transformation brings both extraordinary promise and serious risk. AI tools can help journalists uncover stories they couldn't find alone, identify bias in coverage, monitor misinformation at scale, and make information more accessible to more people. But the same technology that empowers journalism can also accelerate disinformation, blur the line between authentic and synthetic content, and erode the public trust that democracy and human dignity depend on. A well-informed public able to access reliable and fairly represented information is a precondition for meaningful democratic participation and for the recognition of every person's worth in public life. When that information environment is distorted, the consequences fall on real communities and real people. AI deployed without care can deepen those harms.
For this reason, trust must always be safeguarded and actively promoted. Deployed responsibly, AI tools can help rebuild a culture of trust that is under pressure from many directions at once. AI must therefore serve journalism, not replace the human judgement that makes it trustworthy. The AI News Analysis Suite is built in that spirit, and the principles below govern how we develop, deploy, and maintain our tools. The tools in the Suite not only adhere to these principles but also seek to promote them within the journalistic field itself.
Users of the AI News Analysis Suite (both journalists and media consumers) have a right to understand what the tool is doing, how it reaches its conclusions, and what data it draws on. Opacity in AI systems – even well-intentioned ones – undermines trust. Meaningful transparency is a foundation for responsible AI use in journalism and public life.
This means being clear about what our tools produce, their limitations, the sources they analyse, and the assumptions built into their design. Where AI is used substantially in generating an output, that should be visible to the user, not buried in documentation. Users exploring how news is framed and presented deserve to know how those insights are reached.
Our tools clearly identify which analyses are AI-generated and explain in plain language how those analyses are produced; a sketch of how each output might carry this disclosure follows this list.
Users are informed which data sources are being analysed; no hidden datasets or undisclosed inputs.
We publish accessible documentation on how our models are trained, updated, and evaluated.
Where our tools have known limitations or weaknesses, we say so clearly rather than overstating accuracy or capability.
Any significant update to our methodology will be communicated openly to users.
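To make the commitments above concrete, the sketch below shows one way an analysis output could carry its own disclosure: whether it is AI-generated, how it was produced, its known limitations, and the sources consulted. Everything here (the AnalysisResult and SourceRecord names, their fields, and the sample values) is a hypothetical illustration under our stated principles, not the Suite's actual API.

```python
# A minimal sketch of output-level disclosure. All names are hypothetical
# illustrations of the transparency commitments above, not the Suite's API.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class SourceRecord:
    """One disclosed input to an analysis: no hidden datasets."""
    name: str          # human-readable source name
    url: str           # where the source can be inspected
    retrieved_at: str  # ISO 8601 timestamp of collection

@dataclass(frozen=True)
class AnalysisResult:
    """An output that carries its own disclosure alongside the finding."""
    finding: str                  # the analysis shown to the user
    ai_generated: bool            # surfaced in the interface, not buried in docs
    method_summary: str           # plain-language account of how it was produced
    model_version: str            # which model/methodology version ran
    known_limitations: list[str]  # stated weaknesses, shown with the result
    sources: list[SourceRecord] = field(default_factory=list)

result = AnalysisResult(
    finding="Coverage of the policy skews toward official sources.",
    ai_generated=True,
    method_summary="A language model classified attributed quotes by source type.",
    model_version="framing-classifier v0.3",
    known_limitations=["Attribution detection is weaker for translated articles."],
    sources=[SourceRecord(
        name="Example Outlet archive",
        url="https://example.org/archive",
        retrieved_at=datetime.now(timezone.utc).isoformat(),
    )],
)
```

The design choice this sketch illustrates is that disclosure travels with the result itself, so an interface cannot display a finding without also having the provenance to show alongside it.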
AI can assist analysis, surface patterns, and accelerate research. But it cannot replace the expertise, ethical judgement, and contextual understanding that journalists, editors, and informed readers bring to their work. Every output of the AI News Analysis Suite is a starting point for human reflection, not a final verdict. The tool assists, but ultimately the journalist or reader decides.
Those using our tools retain full responsibility for how they interpret and act on results. And we, as developers, are equally accountable – for the choices made in training, the categories used in analysis, and the impacts of our outputs on public understanding. Accountability runs in both directions.
All outputs are designed to support human decision-making, not to automate editorial conclusions.
We maintain an ongoing programme of review and audit, including regular testing for errors, inconsistencies, and unintended effects; a sketch of one such regression check follows this list.
Users are encouraged and supported to critically assess, challenge, and report any result that appears inaccurate, misleading, or unexpected.
We take responsibility for errors in our tools and commit to transparent correction processes when problems are identified.
Feedback from journalists, researchers, educators, and civil society is systematically gathered and incorporated into how the tool develops.
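As one illustration of the review-and-audit commitment above, the sketch below re-runs a model on a fixed evaluation set and flags any output that differs from the last approved baseline, so changes are reviewed by a person before release. The function names, the toy model, and the data are assumptions for illustration only, not our audit pipeline.

```python
# A minimal sketch of a regression audit: compare current outputs against a
# previously approved baseline and surface any drift for human review.

def toy_model(text):
    # Stand-in for a real classifier, for illustration only.
    return "critical" if "fails" in text else "neutral"

def audit_against_baseline(model, cases, baseline):
    """cases: list of {'id': ..., 'text': ...}; baseline: {case_id: approved
    output}. Returns the cases whose output changed since the last release."""
    drifted = []
    for case in cases:
        current = model(case["text"])
        previous = baseline.get(case["id"])
        if previous is not None and current != previous:
            drifted.append({"id": case["id"], "was": previous, "now": current})
    return drifted

cases = [
    {"id": "c1", "text": "The minister fails to answer."},
    {"id": "c2", "text": "Parliament met on Tuesday."},
]
baseline = {"c1": "critical", "c2": "critical"}
# "c2" now differs from the approved baseline, so it is flagged for review.
print(audit_against_baseline(toy_model, cases, baseline))
```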
A tool designed to support fair journalism must itself be fair. The AI News Analysis Suite analyses media coverage across a wide range of topics, outlets, and communities. AI systems can embed and amplify the biases present in their training data, and the effects of biased analysis fall unevenly, most often on communities already underrepresented or misrepresented in the media. We take this risk seriously.
We are equally committed to protecting the privacy of individuals whose data may be processed in the course of analysis. Our tools exist to help users better understand how news is framed, presented, and distributed. The common good, rather than commercial gain or convenience, guides every design decision we make.
We actively test our models for bias across demographic groups, media outlets, and topic areas, and publish the results of those evaluations; a sketch of such a per-group test follows this list.
The tool does not produce outputs that identify, profile, or expose private individuals; analysis focuses on media content and institutional patterns, not personal data.
We comply fully with applicable data protection legislation, including GDPR, and apply privacy-by-design principles throughout the tool's architecture.
Our tools are designed to help users explore how news content is framed and presented across diverse sources, supporting the transparency and informed engagement that a healthy information environment requires.
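The bias-testing commitment above could, in its simplest form, look like the sketch below: compute per-group error rates on a labelled evaluation set and flag any group that performs notably worse than the best. The grouping keys, the 0.05 disparity threshold, and the sample data are illustrative assumptions, not our published methodology.

```python
# A minimal sketch of per-group bias testing. The groups, threshold, and
# sample data are illustrative assumptions, not the Suite's evaluation suite.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted_label, true_label) tuples.
    Returns {group: error_rate} so disparities across outlets, topics, or
    demographic categories can be compared and published."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

def flag_disparities(rates, max_gap=0.05):
    """Flag any group whose error rate exceeds the best group's by max_gap."""
    best = min(rates.values())
    return {g: r for g, r in rates.items() if r - best > max_gap}

sample = [
    ("outlet_a", "critical", "critical"),
    ("outlet_a", "neutral", "neutral"),
    ("outlet_b", "critical", "neutral"),
    ("outlet_b", "neutral", "neutral"),
]
rates = error_rates_by_group(sample)
print(rates)                    # {'outlet_a': 0.0, 'outlet_b': 0.5}
print(flag_disparities(rates))  # groups needing review before publication
```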
Our ethical vision draws, in particular, on the following sources:
Reporters Without Borders, Paris Charter on AI and Journalism (November 2023): a ten-principle framework establishing that journalism ethics must guide all AI use in media, with particular emphasis on human agency, transparency, accountability, content traceability, and the protection of the right to information. https://rsf.org/en/paris-charter-ai-and-journalism
Full Fact, AI Programme and 2024 Policy Report: Full Fact's public documentation of how AI tools are built and governed in a fact-checking context, and their recommendations to governments and platforms on transparency, disclosure, and access. https://fullfact.org/ai/ and https://fullfact.org/policy/reports/full-fact-report-2024/
BBC Editorial Guidelines on Artificial Intelligence (June 2025): the BBC's published requirements that AI use never undermine audience trust, always involve human oversight, and remain consistent with editorial values of accuracy, impartiality, fairness, and privacy. https://www.bbc.co.uk/editorialguidelines/guidance/use-of-artificial-intelligence
New York Times, Principles for Using Generative AI in the Newsroom (May 2024): guidelines affirming that AI must serve the Times' journalistic mission, always with human guidance, review, and transparent disclosure to readers. https://www.nytco.com/press/principles-for-using-generative-ai-in-the-timess-newsroom/