“We have a moral duty to be optimists”

17 April 2024
Ezra Eeman, Strategy and Innovation Director, NPO

  • We talk to Ezra Eeman, Strategy and Innovation Director, NPO, as part of our series of interviews with contributors to our forthcoming news report – Trusted Journalism in the Age of Generative AI
  • The lead author and interviewer is Dr Alexandra Borchardt. 
  • The EBU News Report 2024 will be available to download from June 

In which ways do you think generative AI is a game-changer for journalism?

Down the line it will certainly impact every aspect of the journalism value chain. It will take a while to move from shiny new things to changes in workflow, but it will redefine what media is, how it is created, who is a media maker, and how value is created.

What doesn’t change?

AI is not good at reporting, the human-centred side of journalism. It cannot handle hard and live news very well. It is good at structuring language, which means it is good at text analysis; it can do summaries and service content, and it works well for search optimization. We will see workflows that skip the creative process, because AI models can generate output directly from raw data. It affects personalization and how news is presented. Why present it in article form when you can have a conversation? Then again, people like familiar experiences and will go back to them. People like newspapers; there is even a revival of magazines.

Are you rather delighted or worried about generative AI for your company and in general?

I would say I am a pragmatist. We have a moral duty to be optimists and convey a sense of opportunity rather than despair. With generative AI we can fulfil our public service mission better: it will enhance interactivity, accessibility and creativity. AI helps us bring more of our content to our audiences. My biggest concern is that it will erode trust in information systems even further. The feeling that you cannot believe your eyes any more will also reflect on trusted brands.

You are talking about deep fakes and misinformation?

The danger is that the narrative of misinformation itself will impact trusted environments. Distrust then becomes the default modus for any news.

How can news organizations fight this potential loss of trust?

The inherent danger with AI is that it creates more distance. It takes out the human element, the boots on the ground, the assurance of reporters who say, ‘I understand your reality, I am here to listen.’ That is the role public service media could and should play. We also have a responsibility to bring everyone along. Technologies create gaps. There will always be a part of the population that doesn’t understand it. Explain how you use it. 

What kind of mindset and behaviour do you encourage in the newsroom and company?

We usually see five to ten percent early adopters and geeks, then a broader group that is not very negative but reluctant. At this point we are trying to foster as much understanding as possible. We are doing a lot of training in the company, and people get hands-on experience. From there we need to have a more strategic dialogue: how can we take away pain points?

Can you tell us a little bit about what you are already using AI for and what you are exploring? 

We are using AI for backend processes. This is about productivity and efficiency; we reap the low-hanging fruit where machines can do a better job: transcription, archives, metadata, subtitling. The second category is where we can add intelligence. We are exploring how to unlock the video archive of one of our consumer programmes with a generative AI interface that allows you to ask questions of that specific archive. The idea is for users to have a conversation with our archive rather than entering search queries. Additionally, we are reaching out to audience groups that we haven't served very well by translating news into simpler language. We use AI tools to improve our workflow by breaking down complex words into easier alternatives. To help hearing-impaired kids, we produced podcasts with generated video so they could follow the narrative better. We are also exploring synthetic radio voices but have yet to define where and how they could be of use.

What’s your favourite GenAI product or use case – in your company or beyond?

The easy-language offer is a very nice example. But my favourite is perhaps this: we had a recent podcast on the murder of JFK. William Altman, a Dutch journalist who recently passed away, had direct leads to witnesses and kept extensive diaries about this. On the anniversary of the assassination, we recreated his voice to reconstruct the investigation; it was fascinating. We consulted the family first, of course.

What is the biggest challenge in managing AI in your organization? 

Encouraging people to experiment without putting the results into production can be a challenge. We allow for failure and don't expect perfect output. Still, we had a warning shot: one of our TV news bulletins used an image from an image bank that turned out to be a generative AI picture. Some of our viewers spotted it.

Do you have AI guidelines – and what’s special about them?

NPO is an umbrella organization with 13 broadcasters, which are all independent. The newsroom guidelines are set by those independent organizations, but we have umbrella principles that define how we want to work with generative AI as public service media. There are three broad categories. First, we care about our audience by being transparent and not neglecting the human aspect; second, we are committed to quality, such as reliability and accuracy; and third, to ethical values. This means we make sure that we use it for good and minimize bias and harm. We are worried about the climate impact of these technologies, for example, and have an institute monitoring sustainability.

Let’s talk about the business side of this. Do you think companies should do deals with Open AI or others as the news agency AP and German publisher Axel Springer have done?

This makes sense for these big companies, but there are only a few companies out there that will be able to negotiate such deals.

Do you see a space for the EBU to negotiate on behalf of public service media?

I was Head of Digital at the EBU for five years. You get a seat at the table, but it is difficult to have a common agenda. The bigger parties, like the BBC or France Télévisions, have their own agendas. In some countries, like the Nordics, the debate about technology is more advanced. Others are more reluctant.

Some of the dynamics are beyond the influence of the media industry. In which ways do you think AI should be regulated?

It helps for the EU to set certain guardrails. Companies look to the EU to set safety regulations and transparency requirements. It will never be quick enough, but it is good that it is there. The agenda I am more interested in is: how can we shape a stronger European media innovation landscape? There could be European language models, collaboration on data sets, and more media innovation funding. With the European elections there is an opportunity to shape that innovation agenda.

There is a huge AI hype going on in the media industry. What is missing from current conversations?

A sense of reality. We are still surfing the hype wave. We talk too little about the nitty-gritty details that are needed to go from strategy to implementation. More importantly, we have to ask: how can we deliver value with this, what is really necessary, and what is our ambition and vision? Our aim should be to deliver value in a more granular way to those people we are missing now. We still have a long way to go from a broadcast model to a model that is involved in people's lives.

Jo Waters

Head of Content Communications