This article was written by Niamh Lenihan, Principal Solicitor at NLG Solicitors, a boutique practice specialising in EU digital regulation, AI governance, and data protection. She is also a doctoral researcher in legal philosophy, writing on the Harm Principle in the digital age.
As digital systems increasingly shape how we think and decide, traditional media literacy must evolve into autonomy literacy – the ability to recognise and resist subtle forms of manipulation that undermine independent thought, meaningful consent, and democratic participation.
In recent years, conversations about media literacy have focused on helping people distinguish fact from falsehood. Yet in the age of artificial intelligence and algorithmic personalisation, the problem runs deeper. The challenge is no longer just identifying misinformation; it is recognising when our capacity for independent thought and choice is being quietly eroded.
Digital environments today are designed to capture attention and influence behaviour. Recommendation systems, targeted advertising, and emotionally charged content operate through what philosopher Shoshana Zuboff calls ‘instrumentarian power’: a form of control that shapes not what we think, but how we think. Every like, share, and click refines an invisible model of who we are, predicting and nudging our next move. These mechanisms do not simply mislead; they manipulate the conditions of human autonomy.
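To make that feedback loop concrete, here is a deliberately simplified sketch in Python. The categories, numbers, and update rule are hypothetical illustrations, not the workings of any real platform; the point is only to show how an engagement-optimising system refines its model of a user, with every click raising the weight of similar content.

```python
import random

# Toy illustration only: real recommender systems are vastly more complex.
# The system starts with no assumptions about the user...
categories = ["outrage", "lifestyle", "news", "humour"]
weights = {c: 1.0 for c in categories}

def recommend():
    """Show content in proportion to predicted engagement."""
    total = sum(weights.values())
    return random.choices(categories, [weights[c] / total for c in categories])[0]

def register(category, clicked):
    """...and every interaction refines it: clicks amplify, ignores dampen."""
    weights[category] *= 1.3 if clicked else 0.9

# Simulate a user only slightly more likely to click emotionally charged content.
for _ in range(500):
    shown = recommend()
    register(shown, random.random() < (0.6 if shown == "outrage" else 0.4))

print(weights)  # the 'invisible model', now heavily skewed toward what drew clicks
```

Even that mild behavioural bias compounds: after a few hundred interactions, the feed is dominated by the category that provoked the most clicks, without the user ever being asked what they want.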
From a legal and ethical standpoint, this shift has profound implications. Under frameworks such as the EU General Data Protection Regulation (GDPR) and the Digital Services Act (DSA), consent and transparency are intended to safeguard user choice. However, when design patterns exploit cognitive biases or emotional vulnerabilities, ‘consent’ risks becoming coerced compliance. Autonomy, the ability to reflect, decide, and act on one’s own reasons, is weakened when our informational environment is engineered to anticipate and pre-empt us.
As part of my research, I explore how traditional liberal ideas of freedom and harm translate into the algorithmic era. John Stuart Mill argued in On Liberty that the only justification for restricting liberty is to prevent harm to others. But what if the harm is not visible, physical, or immediate? What if it lies in the gradual atrophy of independent judgement, the quiet substitution of machine-mediated convenience for human reasoning? When technology mediates our experiences, autonomy becomes a collective concern, not just an individual one, and its erosion carries democratic consequences.
That is why we may need to evolve from media literacy to autonomy literacy. Media literacy teaches critical evaluation of content; autonomy literacy goes further, helping individuals recognise when their decision-making processes are being subtly shaped. It involves understanding the architectures of influence (algorithms, persuasive design, and the economics of attention) and developing resilience against them. Schools, workplaces, and civil-society organisations all have a role to play in nurturing this deeper form of digital self-awareness.
My practice advocates an integrated approach in which legal protection, ethical design, and public education reinforce one another. Regulation alone cannot secure autonomy; it must be accompanied by cultural literacy and institutional accountability.
The future of media literacy lies not only in spotting falsehoods, but in defending the freedom to form authentic beliefs and make meaningful choices. In an era where manipulation can be automated and scaled, protecting autonomy may be the most urgent literacy of all.