The AI Act's final phase, the trilogue negotiations, has kicked off in Brussels. Below, we explore how public service media are using AI and highlight particular areas of concern as the AI Act evolves.
The recent acceleration in AI development and the rise of generative AI are bringing new challenges that will profoundly affect media: from the use of new AI tools to support journalism to the need to update ethical frameworks.
The EU's AI Act, the first cross-border AI rulebook, is reaching a critical phase. While an agreement is expected by the end of the year, many important issues remain to be decided, with the European Commission, Council and Parliament taking divergent approaches on key topics such as biometric identification and generative AI.
Public service media have specific concerns about the protection of intellectual property rights. Generative AI systems are powerful tools, but it is now well known that they are trained on, and continually fed with, content scraped from across the web, including quality content and news produced by public and private media organizations.
Media organizations are therefore facing a dilemma. On one hand, allowing the use of their content to train generative AI systems can improve the quality of the information provided to citizens, while offering a way to remain findable (if the content or URL is sourced in the output). On the other hand, there is a serious risk of cannibalization, as the analysis and reprocessing of media organizations' content feeds generative AI companies' business models. Over time, generative AI models and assistants could become the default access point for information, cutting media services off from their audiences and revenues. This has led a number of public and private media organizations to opt out of ChatGPT's crawler, to prevent their data from being harvested for AI training.
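In practice, this opt-out is typically declared in a site's robots.txt file, since OpenAI's crawler identifies itself with the GPTBot user agent. A minimal sketch (the choice of which paths to block is each publisher's own):

```
# robots.txt — ask OpenAI's GPTBot crawler not to fetch any page on this site
User-agent: GPTBot
Disallow: /
```

Compliance with robots.txt is voluntary on the crawler's side, which is one reason media organizations are also pressing for binding transparency rules in the AI Act itself.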
Crucially, there is a lack of transparency about generative AI's training data. Since the launch of GPT-4, OpenAI no longer provides information about which datasets are used to train its models, citing both competitive and safety reasons. For media organizations, this means they have no way to assess whether their content has been processed.
Notably, the proposal made by the European Parliament mandates the disclosure of information on copyright-protected data used to train generative AI models. This is a step in the right direction. The European Broadcasting Union stands ready to support the EU co-legislators in fine-tuning the provision to turn it into an actionable and practical tool.
Public service media likewise support the obligation to inform users when certain AI systems are being used, as a safeguard against disinformation, and welcome the co-legislators' efforts to make this provision workable while protecting the user experience.
Remaining open issues in the AI Act could also inadvertently impact the media sector. For example, the proposed extension of the prohibition on using AI for biometric identification could affect:
the emerging use of such tools for journalistic investigations.
the increasingly automated tagging and archiving of content.
Time will tell whether and how these challenges are overcome, but it is indisputably an exciting time for the media industry.