GUIDES published on 31 Jan 2024

AI regulation and public service media: a look ahead

In Europe, we are currently witnessing a scramble for artificial intelligence (AI) regulation, both nationally and internationally. The European Union (EU) is finalising a proposal to establish common rules and obligations for providers and deployers of AI-based systems in the EU internal market. In parallel, the Council of Europe (CoE), a different inter-governmental organisation made up of 46 member states, including the 27 EU member states, is negotiating an international treaty – the so-called “Framework Convention” – on the development, design and application of AI systems based on the Council of Europe’s standards on human rights, democracy and the rule of law. The Framework Convention is expected to become the leading global normative instrument for AI, as states such as the United States, Canada, Israel and Japan have committed to ratifying it.

At present, the main challenge of AI regulation is to foster innovation and encourage the use of AI while protecting fundamental rights. This is why the EU and the Council of Europe are proposing a "risk-based" approach, based on AI's potential to harm individuals and society: the higher the risk, the stricter the rules. The rapid development of artificial intelligence technologies poses challenges in all regulatory areas affecting public service media (PSM), as well as opportunities. The key is getting the balance right. In our new series, the EBU will shed light on all the relevant developments. Here is a first overview.

The rise of disinformation and the role of public service media

Recent developments around generative AI will make disinformation the main challenge for PSM and our societies in the years to come. Regulating disinformation carries significant risks for freedom of expression, and EU policymakers have approached the issue cautiously by encouraging transparency and self-regulation.

For instance, the Digital Services Act requires online platforms to disclose the main parameters of their algorithmic content recommendation systems and asks major online platforms to implement risk assessments and mitigation measures. Similarly, the AI Act imposes various transparency obligations on providers of AI systems to inform users that they are interacting with an AI system or that content has been generated by AI. Generative AI systems posing a systemic risk will also be required to carry out risk assessments and put mitigation measures in place.

Some estimate that up to 90% of online content could be synthetically generated by 2026. Such a proliferation of synthetic content will require PSM to adapt their role to preserve access to reliable and authentic information. It will be vital for PSM to introduce strict ethical guidelines on AI in order to continue to uphold the highest journalistic standards and ensure that editorial control and creativity remain human. But where such self-regulation is insufficient to preserve the integrity of public discourse, stronger regulatory interventions may emerge. It will be essential to ensure that these do not threaten media freedom and pluralism.

Ensuring respect for copyright and fair remuneration

All the major generative AI tools, such as OpenAI's ChatGPT or Google's Bard, are trained on huge datasets containing PSM content. This raises important copyright issues, including whether PSM should allow AI developers to freely include their content in training data, or whether they should instead make this activity subject to their prior authorisation.

EU law provides for a copyright exception (the text and data mining exception) that allows AI systems to freely copy publicly available content into their training datasets unless the rightsholder decides that prior authorisation must be obtained (the so-called opt-out). Each public service media organization must assess whether to opt out in light of its general policy, internal evaluation and business strategy. Where a PSM does opt out, AI developers must abide by that decision and seek prior authorisation; this also includes paying a fair remuneration proportionate to the proceeds obtained from exploiting PSM content.

So far, the EU has addressed some aspects of the use of artificial intelligence in its recent copyright legislation and is about to adopt new transparency provisions for AI systems using copyright-protected content in its AI Act. These interventions are a first attempt to regulate AI and copyright, and we expect this topic to be dealt with more holistically during the next mandate of the EU institutions.

Unfair behaviour could threaten the media ecosystem

Digital markets are often marked by anti-competitive strategies and “winner-takes-all” outcomes. For instance, tech giants sometimes acquire emerging start-ups not for their existing businesses, but to eliminate potential future competition. Similarly, AI systems demonstrate high levels of market concentration due to scale effects. Merger control will be essential to prevent digital incumbents such as Google, Amazon or Microsoft from using the acquisition of nascent AI technologies to strengthen their market power.

In the media sector, AI technologies will redefine the way we experience and consume media content. AI systems, particularly generative AI, are expected to be integrated into other digital platform services over time. Some response engines or virtual assistants could become a one-stop shop for accessing media content, drawing users into the AI provider's ecosystem through personalised services and content recommendations. This could jeopardise PSM's relationship with their audiences and, more generally, media pluralism.

Various competition authorities are already trying to understand how the development of AI technologies – in particular generative AI – could impact business users such as PSM, as well as end-users. For example, the European Commission is examining how OpenAI or Google might limit the development of competing generative AI models or favour the integration of their AI applications into other products and ecosystems. Given the transformative potential of AI and the power it gives to digital incumbents, competition authorities will have to intervene firmly to preserve the independence and pluralism of the media sector.

What next?

AI is set to transform almost every area of human activity, including the media sector. Its effective regulation will require a broad and holistic framework complemented by targeted regulatory interventions on specific issues. While the Council of Europe Framework Convention and the EU AI Act provide a horizontal framework, we expect the EU to propose targeted initiatives on issues such as misinformation, copyright and competition in the next legislative cycle. The EBU is there to help and guide public service media through these changes.

Contact Details

Richard Burnley
Legal and Policy Director
Legal & Policy