EMIL puts spotlight on Algorithm Literacy

EMIL is the EPRA Taskforce on Media and Information Literacy (MIL), bringing together media regulators and other organisations committed to the promotion of MIL in Europe. Created as part of the EPRA network of European media regulators, EMIL aims to foster international cooperation on MIL-related topics.

In May 2023, just as Artificial Intelligence tools – such as ChatGPT or DALL-E – were starting to make headlines and prompt questions about privacy, copyright and the impact of AI on online information, EMIL hosted a joint meeting on algorithm literacy and, more specifically, on the explainability of algorithms, explored from the complementary perspectives of media literacy practitioners, artificial intelligence experts and the media.

The aims of this first joint meeting between EMIL and the EPRA Roundtable on AI and Regulators were to:

  • raise awareness of the interplay between MIL, AI and algorithms;
  • emphasise that algorithmic literacy is crucial if the regulatory provisions on the
    transparency of algorithms are to be effective;
  • highlight the importance for media regulators of being actively involved in both spaces.

The perspective of the media literacy practitioner was presented by Divina Frau-Meigs of Savoir*Devenir, who explored the new frontiers of MIL via the algo-literacy Crossover project and concluded that algo-literacy forms an integral part of Media and Information Literacy, which requires:

  • a user-based approach;
  • an adapted framework for competencies;
  • easy-to-use examples and sensible practices.

It was also noted that there was a need to push for more transparency on the use of algorithms to debunk the “black box” myth.

Algorithm literacy from an Artificial Intelligence expert's perspective was presented by Ansgar Koene, EY Global AI Ethics and Regulatory Leader, who described AI as a blend of technologies emulating human cognitive functions that generates four types of outputs: prediction of numbers, categorisation, grouping and detection (see the sketch after the conclusions below). The presentation also considered why the explainability of AI is important.

This presentation concluded that:

  • AI explainability can only be achieved by focusing on the specific aspects that need to and should
    be explained.
  • There is a clear need to use and push for better, i.e. more appropriate, language in the way these
    systems are described and advertised; in particular, to refrain from describing AI in inaccurate
    anthropomorphic terms.
  • Descriptions of AI systems should also clarify their limitations in order to demystify AI, while the
    responsibility for explaining them should rest with all actors in the chain (including those creating
    the tool).
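
To make those four output types concrete, the following minimal Python sketch (an illustration added here, not part of the presentation; the toy data and scikit-learn model choices are assumptions) pairs each output type with a common technique: regression for prediction of numbers, classification for categorisation, clustering for grouping and anomaly detection for detection.

    # Illustrative sketch of the four AI output types described above.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.cluster import KMeans
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 2))  # toy data standing in for any real input

    # 1. Prediction of numbers (regression): estimate a continuous value.
    y_numeric = 3 * X[:, 0] + rng.normal(scale=0.1, size=100)
    regressor = LinearRegression().fit(X, y_numeric)

    # 2. Categorisation (classification): assign inputs to known labels.
    y_labels = (X[:, 0] > 0).astype(int)
    classifier = DecisionTreeClassifier().fit(X, y_labels)

    # 3. Grouping (clustering): find structure without any labels.
    groups = KMeans(n_clusters=2, n_init=10).fit_predict(X)

    # 4. Detection (anomaly detection): flag inputs that deviate from the norm.
    flags = IsolationForest(random_state=0).fit_predict(X)  # -1 marks an anomaly

    print(regressor.predict(X[:1]), classifier.predict(X[:1]), groups[:5], flags[:5])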

Algorithm literacy from the legal and media perspective was presented by Max Van Drunen (University of Amsterdam, AI Media and Democracy Lab), who explored the potential use of AI by journalists and its impact on editorial standards, as well as the media's access to and control of AI, and how AI could be explained to journalists, editors and the audience.

The conclusions suggested that:

  • Understanding the impact of explainability on trust requires more legal and empirical perspectives.
  • Editors and journalists need to understand how the AI systems they use work in order to preserve
    editorial values.
  • Accountability (and therefore transparency) to the audience is crucial, given the media’s limited
    accountability to the State.

The full summary of the meeting is available on the EPRA website.