We Expect Ethical Artificial Intelligence – Who will deliver it?

Artificial Intelligence (AI) is present in many of our media devices and services – from smart televisions to smart speakers, from our search engines to our mobile games. It is also central to how media content is created, how online communication is governed and how information spreads.

Research organisations and companies in Ireland and the UK, and the European Commission, are investing heavily in AI and big data technology development. Ireland added AI to its list of national research priorities in ICT in 2018, reflecting a belief in the potential benefits of data analytics and AI across a range of sectors, from agriculture and transport to the media.

The Cambridge Analytica scandal in 2018 revealed how mass data collection could be used to manipulate and target unsuspecting voters. For many people, the scandal raised a range of questions. Can we trust these technologies, or those who develop them, to respect human rights and democratic processes? Can we trust automatically generated or retrieved information and content to adhere to established social norms and professional media practices?

In a recent journal article, we explored how public and private documents seek to shape public expectations of AI, and we surveyed members of the public in Ireland to assess their awareness of the ethical issues raised by AI. While both positive and negative expectations of AI emerge from our data, the most significant ethical issues for the public were transparency, privacy, safety, and security. Their salience varied from domain to domain: safety was highly significant in relation to autonomous cars, while transparency and privacy mattered most in relation to health. In the final part of the article we explored the use of AI in the media and the gap between public expectations of AI and practice. In this post we highlight the findings of relevance to the Media Literacy Ireland community.

Public expectations of AI and the rise of ethics

Existing academic literature suggests that public expectations can influence the dynamics, direction and focus of technological innovation. Expectations are constructed by both formal mechanisms (research prioritisation exercises and public consultations) and informal ones (images, statements, prophecies). Both experts and non-experts play a role in shaping expectations, and the media can amplify or question them.

Expectations are also performative, but crucially we need to distinguish between generic and effective performances. Effective performances are those that have a real impact on everyday practice, while generic performances remain at the level of discourse. In our paper we distinguish between generic (weak) and effective (strong) performances of expectations of AI.

In our paper we reviewed 40 public documents on AI released since 2011 in Ireland and the UK and by the European Commission. These were published by international consultancies, governments, research organisations, whistle-blowers and workers. We identified a dominant set of positive expectations around AI and a focus on legitimising public and private expenditure on AI research and innovation. We also found a recurring emphasis on AI delivering "cost reductions" and "efficiencies."

In Ireland the government relies heavily on AI reports commissioned from international consultancy firms. In the UK and at the European Commission, this approach is supplemented by regular surveys of public opinion. Public consultation in Ireland is limited to a small number of academics, existing research centres and companies.

However, over the period under analysis the documents increasingly acknowledged negative issues relating to AI. Beginning with the Snowden revelations in 2013, and continuing through the Cambridge Analytica scandal in 2018 and high-profile interviews with technology company workers, a set of negative stories emerged revealing problematic uses and impacts of AI. While a small number of these stories focused on the potential for machines to become superintelligent, more often the focus was on the negative impacts of AI on law enforcement, elections and jobs.

From 2016 we observed increasing attention being given to the ethical issues raised by AI. New research initiatives were launched in the UK to inform the public about AI and to create a "shared understanding." Agencies began to call for ethical codes, guidelines and training. Professional bodies such as the IEEE and the ACM issued new ethical codes and guidelines that called for more attention to human harms and values in technology projects. By 2018 European Commission documents had begun to refer to a "European approach to AI," and in 2019 the Commission's High-Level Expert Group issued its Ethics Guidelines for Trustworthy AI.

What do members of the public in Ireland think?

Public surveys on attitudes to technology, robots and AI have been carried out across Europe by Eurobarometer and in the UK by the Royal Society. In Ireland, however, there has been a distinct lack of widespread public consultation on technology-related issues and AI. We conducted an exploratory face-to-face survey of 164 members of the general public to explore their awareness of AI and to establish what they considered to be the key positive and negative impacts. The survey was carried out in the Science Gallery Dublin, and respondents came from 25 countries with diverse demographic profiles.

Our analysis reveals that while people associate AI with robots and self-driving cars, most have only directly engaged with AI through videogames, chatbots and search engines. While respondents felt AI could contribute to economic and social progress and replace repetitive and dangerous work, they were concerned that automation would mean a loss of jobs and would affect human security and privacy. Further concerns included the lack of oversight of AI and its impact on human subjectivity.

When offered a list of ethical principles to rank in importance, respondents chose safety, privacy, transparency and security as the most important. Privacy and transparency dominate formal and informal statements on the ethics of AI in Europe and are the focus of attention in current research projects and the media. The significance of safety to our respondents is notable, however, and relates to concerns about location-based services and autonomous cars.

A final finding to highlight is that almost two-thirds of respondents agreed that it is possible to design AI in an ethical manner. They felt that AI designers should be responsible for the use and misuse of AI systems, and that such systems were largely under the control of industry and academic researchers. When asked who should oversee AI development and use, respondents favoured a multi-stakeholder or public/private co-regulation approach.

What does this mean for Media and Communication?

Across both sources of data there was an expectation that we could design ethical AI. In the final part of the paper we examined how this expectation relates to current practice in online media and communication.

AI is currently deployed by media companies for content generation, service personalisation and the removal of malicious or illegal content. However, academic and public commentary offers numerous examples of AI systems failing to deal with the social and cultural complexity of media and communication. In other words, current AI systems are not as efficient and smart as we are led to believe. That is why many systems of communication rely heavily on users to flag content, and why many low-paid and difficult jobs have been created in content moderation and community management.
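To illustrate why human judgement remains central, here is a minimal, hypothetical sketch in Python of the kind of hybrid pipeline described above: an automated classifier acts only on high-confidence cases and routes ambiguous content to a human review queue. The toxicity_score function, the threshold values and the queue are all invented for illustration and do not describe any real platform's system.

```python
# A minimal sketch of hybrid (AI + human) content moderation.
# All names and thresholds here are hypothetical, for illustration only.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ModerationQueue:
    """Holds items the classifier cannot confidently decide on."""
    pending: List[str] = field(default_factory=list)

def toxicity_score(text: str) -> float:
    """Toy stand-in for a trained classifier; returns a score in [0, 1]."""
    flagged_terms = {"abuse", "threat"}  # illustrative only, not a real model
    words = text.lower().split()
    return sum(w in flagged_terms for w in words) / max(len(words), 1)

def moderate(text: str, queue: ModerationQueue,
             remove_above: float = 0.9, review_above: float = 0.3) -> str:
    """Auto-act only on high-confidence cases; defer the ambiguous middle."""
    score = toxicity_score(text)
    if score >= remove_above:
        return "removed"            # high confidence: automated removal
    if score >= review_above:
        queue.pending.append(text)  # ambiguous: a human moderator decides
        return "pending review"
    return "published"

queue = ModerationQueue()
print(moderate("great match last night", queue))      # -> published
print(moderate("stop this abuse you threat", queue))  # -> pending review
print(queue.pending)                                  # awaiting human review
```

Everything in the ambiguous middle band, which in real systems is very large, becomes work for the content moderators and community managers whose conditions are discussed next.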

For example, media interviews with Facebook moderators in Ireland have revealed the difficult conditions under which content moderators often work (O’Connell, 2019). Interviews with community managers of online games located in Ireland also point to the long hours and negative experiences that workers encounter with users (Kerr and Kelleher, 2015). Other academic studies now reveal how platforms and their AI systems can result in gendered or racially biased outcomes.

Meanwhile, approaches to transparency, accountability and responsibility vary widely across service providers. Most AI developers have not been trained to consider the social and ethical impacts of their systems, and their work processes are not designed to accommodate such considerations. The companies that deploy these systems are largely unregulated, and the governments and agencies that might regulate them are slowly realising that action is needed but remain unsure how to address the issue. Most are still grappling with implementing the European General Data Protection Regulation (2018).

In Ireland there has been no national attempt to engage with the social and ethical implications of deploying AI systems across the public and private sectors. Our survey found good awareness of AI among respondents, but they may not be representative of the wider public. At present, public expectations of AI in Ireland are largely shaped informally by the media and by science fiction. AI education across disciplines and levels rarely engages with the ethical implications of AI.

Our findings suggest that it is time to put ethics guidelines into practice and to develop meaningful multi-stakeholder systems of accountability and responsibility for AI. It is also time to prioritise making AI systems more transparent. This means understanding how these systems gather our data and what they do with it, and evaluating the differential impacts of these systems on workers and citizens. This knowledge is essential if the turn to ethics in AI is to go beyond the level of discourse and generic performance.

The full paper is available open access in the journal Big Data & Society at journals.sagepub.com/doi/full/10.1177/2053951720915939

Bios:

Dr. Aphra Kerr is an Associate Professor in the Department of Sociology at Maynooth University, and institutional lead and collaborator in the SFI ADAPT Research Centre for Digital Content Technology (http://www.adaptcentre.ie/). She is a supervisor in the SFI Centres for Research Training in Advanced Networks for Sustainable Societies (https://www.advance-crt.ie/) and Digitally Enhanced Reality (http://d-real.ie). She has published on digital and data governance and policy, creative and cultural workers, digital inclusion and digital games. She is on the Media and Technology Committee of the AI4People Initiative (https://www.eismd.eu/ai4people/), and is a member of the Media Literacy Ireland Network.

Dr Marguerite Barry is Assistant Professor and Programme Director of the MSc in Communication & Media at the School of Information & Communication Studies at UCD. She is a collaborator with the SFI ADAPT Research Centre for Digital Content Technology (http://www.adaptcentre.ie/) and a supervisor in the SFI Centres for Research Training in Digitally Enhanced Reality (http://d-real.ie) and Machine Learning (https://ml-labs.ucd.ie). She has published on topics including interactive design, ethical design processes and information sharing.

Prof. John D. Kelleher is the Academic Leader of the Information, Communication and Entertainment research institute at Technological University Dublin, and institutional lead at the SFI ADAPT Research Centre for Digital Content Technology (http://www.adaptcentre.ie/) and the SFI Centre for Research Training in Digitally Enhanced Reality (http://d-real.ie). He has published extensively on artificial intelligence, machine learning and natural language processing; his books include Data Science (2018, https://mitpress.mit.edu/books/data-science), Deep Learning (2019, https://mitpress.mit.edu/books/deep-learning-1) and Fundamentals of Machine Learning for Predictive Data Analytics (2020, https://mitpress.mit.edu/books/fundamentals-machine-learning-predictive-data-analytics-second-edition).