
What Works: Effective strategies for reporting on AI

11 June 2024
Felix M. Simon, Contributing Author, EBU News Report

This short guide on what works – and what to avoid – when reporting on AI is taken from the upcoming EBU News Report 2024: Trusted Journalism in the Era of Generative AI, available to download in June 2024.

When reporting on rapidly developing technologies like AI, it can be difficult to balance accounts of real progress against criticism of potential effects and questions of power, control, and benefit. However, accurate reporting on generative AI is crucial: it influences public perception, policymaking, and industry decisions, especially during this pivotal adoption phase.

To help reporters and managers cover this complex topic effectively, here’s our short guide to the essential dos and don’ts! 

DO

1.) Develop a basic technical understanding

While you don’t need to be a computer scientist (ignore those who suggest otherwise), aim to grasp the technical fundamentals of AI and its capabilities. You don’t need to fully understand how machine learning works mathematically, but it is useful to understand, for example, the distinction between various types of AI and AI systems, such as machine learning, neural networks, and foundation models. It’s also important to differentiate between genuine applications of AI and simpler statistical models to avoid misrepresentation. Not everything that is labelled ‘AI’ actually contains artificial intelligence!

2.) Consult a diverse range of experts 

Avoid relying solely on contributions from AI companies or a single expert’s view. To gain a comprehensive understanding of AI and provide balanced coverage, seek input from a variety of sources, including academics, regulators, and industry professionals. When evaluating the intelligence of an AI system, don’t limit yourself to computer scientists. Involve cognitive scientists, experts in child development and learning, and linguists in your reporting, as they often bring important perspectives to current debates. Be cautious of company representatives aiming to promote their products and services.

3.) Pay attention to both benefits and risks

Whenever you highlight the potential advantages of AI, pay the same attention to discussing ethical concerns, risks, and challenges, including questions of bias and fairness, privacy, copyright, and harm. Or as the AP’s Garance Burke puts it, ask: “Where are they [AI systems] deployed? How well do they perform? Are they regulated? Who’s making money as a result? And who's benefiting? And also, very importantly, which communities may be negatively impacted by these tools?” [1]

4.) Recognize the dynamic nature of AI

When reporting on AI, it is crucial to understand and convey the rapidly evolving and dynamic nature of this field. The development of AI systems is an ongoing process characterized by constant advancements, setbacks, and shifts in trajectory. One key challenge stems from the uncertainty surrounding the long-term effects of many decisions being made in the AI domain at this moment. Choices made today, whether related to model deployment and use, regulation, data policies, open-source initiatives, corporate acquisitions, or research partnerships, can have significant long-term ramifications that are difficult to fully anticipate. As a result, AI coverage must strike a balance between providing timely and accurate information while acknowledging the inherent uncertainty in this field. 

DON’T

1.) Overhype capabilities

The story of AI is rife with examples of exaggerated claims about AI’s abilities, including many in the media. This can create unrealistic public expectations, but also lead to over-regulation or misdirected regulation. It can also lead to poor investment decisions, including in journalism, when reality fails to match expectations. It’s important to report on setbacks and failures in AI development to provide a balanced view.

2.) Humanize the technology

It is easy to ascribe human feelings or capabilities to AI systems or to imply that these systems can ‘think’. Avoid terms like ‘AI is thinking’ or ‘AI feels,’ which anthropomorphise the technology and can mislead about its nature and its limitations. Instead, describe AI in terms of its algorithms, data processing abilities, and programmed functions to provide a more accurate representation of how these systems work and what they do. This goes for the depiction of AI, too. As researcher Maggie Mustaklem reminds us, all too often there is a ‘one-size-fits-all sci-fi fantasy’ around AI [2], with the technology portrayed as “white robots typing on a keyboard” or “a blue graphic of a human brain connected to some colourful lines” (the artist and technologist Neema Iyer has collected some of these tropes on her website [3]) – yet neither does the complexity of the technology justice, and both depictions are misleading.

3.) Ignore the human element and contextual factors

Do not neglect the role of humans and the implications for human lives. AI is not just a story of technology, but a story of technology working in society. And AI does not exist in the ether. To enable AI, a multifaceted supply chain [4] must function, from mining rare minerals for chips to data centres consuming energy and water for cooling. Humans are integral to every aspect of this chain and are affected by its components. It’s crucial to acknowledge these factors and to incorporate into your coverage the individuals who work ‘behind the scenes’ to facilitate AI, as well as those directly affected by its deployment. This includes broadening the lens beyond developments in Western countries, which often take centre stage in discussions of AI.

4.) Treat AI just as an innovation or technology issue

AI’s implications extend far beyond just questions of technological innovation in some specific domains. Instead, like climate change, AI is a topic that cuts across numerous domains, including business, law, healthcare, education, politics, and the environment. Comprehensive AI coverage requires recognizing this, exploring how AI does and doesn’t reshape various areas, and helping audiences arrive at a more holistic understanding of both the opportunities and challenges AI presents. Treating AI solely as a technological advancement is no longer enough.

[1] https://reutersinstitute.politics.ox.ac.uk/news/focus-humans-not-robots-tips-author-ap-guidelines-how-cover-ai

[2] https://www.oii.ox.ac.uk/news-events/can-ai-visuals-move-away-from-blue-brains-and-cyborgs/

[3] https://neemaiyer.com/work/how-do-we-picture-ai-in-our-minds

[4] https://www.adalovelaceinstitute.org/resource/ai-supply-chains/


Written by

Felix M. Simon

Research Fellow in AI and News, Reuters Institute, University of Oxford

Resources

The 'AI Myths' website debunks common myths and misunderstandings about artificial intelligence. It is structured into eight distinct sections, each exploring different facets of AI, including its portrayal, definition, governance, and practical applications. Each section provides links to additional resources on the topics discussed.

The Leverhulme Centre for the Future of Intelligence at the University of Cambridge has collected a range of guidelines for reporters on how to better cover AI and what to look out for, each of them worth reading in detail. They also provide links to various databases of AI experts and voices.

The ways AI is depicted can obscure the real and significant societal and environmental impacts of the technology, set unrealistic expectations, and misrepresent its actual capabilities. They can also obscure the responsibility of the humans behind the technology. The Better Images of AI project provides alternatives.