“AI is an incredible accelerator of change…it’s up to us to use this new technology responsibly.”
03 June 2024
We talk to Kai Gniffke, Director General of Germany's SWR and Chairman of ARD, in one of a series of interviews with contributors to the EBU News Report: Trusted Journalism in the Age of Generative AI.
The lead author and interviewer is Dr Alexandra Borchardt.
The EBU News Report 2024 will be available to download from June.
Would you call generative AI a game-changer for journalism?
It is definitely an incredible accelerator of change. It is now up to us to use this new technology responsibly, especially as public service media.
Are you delighted or worried about what it can do?
I always try to stay curious and ask: what are the opportunities? How can AI help our journalists with their research, for example? But of course, it is in our DNA as journalists to be sceptical and consider the risks. I recently started a meeting with a video of myself greeting everyone. I had recorded it and then had an AI read it in six different languages. I admit, I enjoyed watching myself speaking fluent Mandarin, and my lips didn’t miss a beat. At the same time, I imagined it could just as easily be Joe Biden “announcing” that he was about to attack Russia. That is not a future scenario of what these technologies might be able to do; that is what we are dealing with today.
You are Director General of the southwest German broadcaster SWR and Chairman of ARD, one of the biggest public service networks in the world. What kind of mood do you sense in your organization when it comes to generative AI?
It varies, and it is not even a generational issue. Some people's eyes light up and they say: great, things are moving forward here. But there are also sceptics who say it's very dangerous. My job then is to remind them that if we don't tackle AI and learn how to deal with it, we'll leave it to others who won't feel as committed to democracy or to a sense of togetherness and community as we are.
What do you hope to achieve with AI?
Overall: improving quality. First, it could help us detect misinformation. One of our main tasks is to distinguish between fakes and facts, truth and falsehood. This is our job; this is the service to our audience. AI could be the basis for perfect fakes, but also for debunking lies and misinformation.
Second, data journalism. AI will help us deal with very large amounts of data. I’m convinced that this will deepen and therefore improve our research.
Third, regional reporting. With the help of AI, we can cater much better to local needs, for example, tell people in the Eifel region or the Black Forest what a certain development means for the future of their area.
Fourth, efficiency gains. AI relieves us of routine tasks, for example, evaluating content from our archives. AI can transcribe speech to text. It can also recognize and index visual content. Today we still need staff to do that. AI can shorten and summarize texts, which often makes them clearer and saves time.
Last, but not least: accessibility. We can use AI to generate audio descriptions of visual content.
Are you worried about the role of humans in all of this?
Of course, we are discussing the role of humans in the future. For example, we are debating whether we can use AI to automate weather and traffic reports in night-time radio programmes. We believe this is responsible if it is done transparently. However, it would also be possible to have these reports read with the voices of the most popular presenters. But then we're entering a grey area because that could confuse people. They might think: “What, is she also awake at night when I always hear her on the morning programme?” If you listen to your favourite voice 24/7, it might lose its value.
This sounds like job cuts, too. Will job profiles change?
Journalists have had to deal with technology for many years. Today, they need to know much more precisely than before: what audience are we working for and on what platform? AI adds a new quality. But that won't change the basic virtues. The professional handling of information, conscientious research – that will stay.
What specific actions have you already taken in your organization?
At SWR we have developed guidelines for dealing with AI. We are now in the process of doing this throughout ARD with its nine independent media institutions. We need common standards for all of them. We have also set up a competence network to bring together employees in different roles and in various departments. All member broadcasters of ARD who are experimenting with AI should know about each other. Not everyone needs to reinvent the wheel. It's all about sharing knowledge, lessons, and experience. There are different speeds within organizations, of course; the know-how of the front runners should spread quickly. What we have also set up at ARD level is a second competence centre, for reporting: this is where journalists who report on AI and AI-related trends and developments meet to keep each other up to date.
What is the biggest challenge in managing AI in your organization?
Actually, getting the people who want to work with it to do so. They shouldn't have to wait for instructions from top management.
Are there red lines in the use of AI that your colleagues are not supposed to cross? Some news organizations have clearly defined those.
I’m not in favour of limiting change processes with red lines. But transparency is crucial. We must clearly inform the audience about every instance where we have resorted to AI. For example, we will not use images that have been modified by AI in news programmes such as “tagesschau”, Germany’s most prominent news programme. And when we do make changes, we will indicate this.
Do you think AI will help journalism develop from being a push activity – news is pushed at people – to a pull activity: people will demand customized news that fits their needs? Some hope that this kind of on-demand journalism will help newsrooms reach different audiences, particularly younger ones.
AI offers huge opportunities for personalization, as long as we offer our users a variety of perspectives. Then again, personalization might lead to isolation. This makes large events where society comes together even more important. I’m thinking about big sports events, major shows and so on. But I'm actually quite optimistic. Just imagine how the media world has changed since the launch of the iPhone in 2007. In a tiny amount of time, we have massively changed our communication behaviour. We take photos everywhere and post them, use social media and apps for everything. And so far, we've managed quite well. The venerable “tagesschau” is the most successful German media brand on TikTok and Instagram. What we do have to worry about, however, is the incredible acceleration of technological development.
Will the dependence on big tech increase with AI, or can big media organizations even gain something? After all, they are sitting on a huge treasure trove of content.
We have been confronted with this development for many years. Most of us work with Microsoft products. Of course, this makes us dependent on one company. Nevertheless, it is easier if everyone is on Microsoft Teams. The same applies to social media. The platforms with the highest reach belong to Meta. But what would the alternative be? Saying goodbye to the audiences whom we can only reach via these platforms? It will ultimately come down to what regulation will look like. Only the EU can protect us from total dependency. However, as a major producer of valuable content, it must also be in our interest to make it available to AI, at least for training purposes.
When it comes to copyright, do you lean more towards the German publisher Axel Springer and others, who have struck deals with OpenAI, or the New York Times, which has filed a lawsuit against the company?
We are somewhere in between. We certainly won't be going to court.
In most countries, public service media enjoy the highest levels of trust with audiences. In the context of AI, there are two schools of thought: One says that AI will destroy trust in the media altogether, because no one will be sure what is true and false anymore. The other argues that this is a great opportunity for quality media, particularly those brands who enjoy a high level of trust.
I am still struck by what the Club of Rome said last year: the biggest threat to our societies is the increasing inability of people to distinguish reality from fabrication, facts from lies. This could destroy societies and communities. I would count ARD among the institutions that people in this country trust, so that they don't have to question the truthfulness of every video, so that they say: “These are big brands, they've never lied to me. If something is important to me, I’ll go to them.”
That was the pandemic effect: in the first year of Covid, trust levels in traditional media skyrocketed.
That is true. We must guard this trust like the apple of our eye. We must be and remain a reliable companion and verifier for people.