Although media literacy has no globally accepted definition (Yue 2022), it is commonly understood as the ability to access, analyse, and produce information (Aufderheide 1993). Digital media literacy content can include explanations of algorithmic tracking and data protection on platforms. Competencies range from learning to fact-check images through reverse image searches to training in “netiquette” norms.
Like most theories, media literacy is largely “a value-added export from Euro-America” (Jackson 2019, 63) with regional variations in practice. North American models stress “critical autonomy in relationship to all media” (Aufderheide 1993, 1), demanding informed consumption and freedom to issue counter-messaging, usually as a critique (Jeong et al. 2012). In the Asia Pacific region, programs stress media literacy’s importance to the “participation, connection, and well-being” of citizens (Yue 2022, 191).
Wherever taught, the civic wager of literacy programs is that once people understand the relevant systems, skills, and norms, they will interact with greater rationality. To settle arguments online, students learn debunking, which involves evaluating evidence against the authority of sources trusted by all. During the pandemic, digital media literacy experts advocated “pre-bunking” games as psychological inoculation (Roozenbeek et al. 2020)—an epidemiological spin on the hope that with enough communicative rationality, the internet can function like a true public sphere (Schaefer et al. 2013).
Online and off, there is value in teaching people how to check facts, evaluate arguments, and deliberate politely. Of rapidly diminishing value, however, is teaching that ignores the realities of social media platforms, where misinformation is frequently circulated by users who insist they are making rational arguments, endorsed by authorities they trust (Boyd 2017). In a recent study of online political discussions across Asia, researchers offer two possible reasons for this dynamic, which they call misinformed choice: the power of self-reflective information loops and the ferocity with which users condemn others as fake news spreaders (Berger and Rother 2023; Majumdar 2024a, b).
Far from being exclusive to Asia, the blaming and shaming of misinformed choices increasingly typifies public online discussions of Western academic research. A recent article for The Conversation discussed a viral challenge called #FilmYourHospital, where users shared videos of empty hospitals at the pandemic’s height. Originally thinking online bots were responsible for its spread, researchers later learned #FilmYourHospital’s main drivers were influencers with interests in conspiracy theories, using evidence not faked, but misunderstood. With protocols like visitor bans in place, a packed hospital can feature empty hallways and parking lots. Doubling down, online commenters insisted, “Calling people conspiracy theorists…seems nothing more than an attempt to discredit those with whom the authors disagree” (Gruzd and Mai 2020).
Here, we see digital media literacy’s first shortcoming: by valorising the power of rationality above all, it leaves people largely unprepared to manage moments where influence operates over social platforms by affecting affect—eliciting, spreading, and modulating “social emotions” primarily related to fear, trust, shame, and esteem.
I am not suggesting media literacy scholarship avoids emotion, a concern as old as Aristotle’s Rhetoric. Rather than a scarcity of theoretical focus, the challenge is one of pedagogical locus. In early education, children are taught that feelings begin in the body and that their emotional reactions impact others (Gilliam 2021; Becerra and Campitelli 2013). To adapt successfully, they learn to regulate their emotions within mediated and other environments (Cracco et al. 2017). Indeed, a child’s ongoing inability to self-regulate is considered a hallmark symptom of possible neurodevelopmental disorders and/or mental health conditions.
After childhood, educational conversations about emotion move from engagement with specific bodies to concerns for abstracted publics. In classes on news, propaganda, and/or the attention economy online, students learn to spot trolling, clickbait, and rage farming, mimicking the cool detachment of their teachers as they worry out loud about the emotional influence of these tactics on unnamed others. In Big Critique (Burgess 2023) lectures on algorithmic control, the anxiety raised by warnings that nobody escapes techno-determinist abstractions like data capitalism is soothed by workshops on keeping one’s online networks as free as possible from unwanted surveillance, bots, and sources. Left unquestioned, hopes regarding the protective powers of detached critique can ossify into convictions about the unreliability of anything beyond one’s personally arranged online networks, as in a recent study showing people are more willing to believe misinformation shared by online strangers to whom they are connected than by offline friends (Osatuyi and Dennis 2024).
This mindset has been linked to some troubling political effects worldwide, especially among young people. In the U.K. and Australia, educated young people made up the largest cohort of COVID-19 deniers during the pandemic, in part due to their information-sharing patterns online (Duffy and Allington 2020; Pickles et al. 2020). In Asia, these patterns have likewise been linked to young people’s growing turn toward political conservatism (Laksana 2020), illiberal populism (Said and Berghaus 2024), and “anti-woke” rhetoric (Keegan 2023). Around the world, young people are at record-high levels of risk for recruitment by extremist groups online, which frequently begin by extending emotional validation for scepticism of all stripes, saving the highest esteem for those willing to reject all authority outside the group (Lederer 2021; Adams and Sally 2021; UNESCO 2017).
If traditional media literacy spends too much time advocating critical disengagement from emotion, the opposite problem exists in classes focused on advertising, public relations, and strategic communication. There, appeals to emotion are touted as the premier mechanism for capturing the attention of consumers, clients, patients, and/or stakeholders. On platforms, we are encouraged to view ourselves as a target market for newsfeeds and “for you” pages designed to provide us with more content similar to what we have clicked on, dwelled on, shared, and commented on in the past, and less of what we have skipped. In a recent article on social media literacy competencies, Schreurs and Vandenbosch (2021, 323) rightly argue that the global reach of platforms means “being literate cannot be about possessing the ‘right’ affective responses but instead relates to managing one’s affective responses”. To this end, they suggest social media literacy competency ought to include increased awareness of the emotional effects of one’s feed, and the skills to curate with an eye toward predominantly positive interactions. It is benign enough advice, but for the inference that with a bit of training not unlike cognitive behavioural therapy, people should be able to cultivate their emotional landscapes online as simply as they do off. In actual practice, cognitive behavioural therapy requires substantial modifications to serve all sorts of individuals, especially trauma survivors (Arellano et al. 2014).
This brings us to media literacy’s second problem: its structuring assumption that by and large, people live their lives in emotional neutrality offline, and would do so online, were it not for the informational pollution and incivility permitted on platforms.
Decades ago, proponents of the now discredited hypodermic needle model argued that mass media exert influence by infusing feelings into otherwise peaceful minds (Tones 1996). Digital literacy programs largely continue to frame feeling as a sort of toxin to be removed, rather than the substance residing at the core of cognition and communication. Consider how phrases like “information overload” miss the vulnerability that comes with having one’s resources overstretched, insinuating that if messages were properly managed, people would receive them in a state of equanimity (Senft and Greenfield 2023).
It is perhaps ironic, given a coinage like “infodemic”, that discussions of trauma are still largely absent from digital literacy programs. Trauma is commonly understood as an ongoing stress response to disaster, violence, abuse, neglect, or other harms. A defining feature of post-traumatic stress disorder (PTSD) is the feeling of being triggered by a memory-related image, sound, or scent, ushering in emotions like panic, anger, and/or overwhelm (Wright 2021). In environments where PTSD is endemic, feelings like fear, shame, self-blame, alienation, and betrayal dominate (DePrince et al. 2011). Some characterize the pandemic as mass trauma, noting how “people today seem to be gradually moving into a hyper-vigilant stance…” (Horesh and Brown 2020, 322).
Of course, one need not be traumatized to turn to platforms hoping to relieve stress, only to become more emotionally distraught. Here, we see digital media literacy’s third shortcoming: its failure to equip users to communicate on platforms fuelled by the algorithmic generation, circulation, amplification and monetization of “emotions on the move” (Boler 2018). This process is detailed later, with four axioms worth underscoring now:
1. Social media’s advertised ethos is one of democratic social connection, but the guiding principle of platforms is algorithmic connectivity: the collection, packaging, and sale of user data for profit.
2. Hoping to keep users generating data for as long as possible, social media platforms frequently deploy algorithms tasked with recognizing content most likely to elicit reactions like outrage, disgust, and sometimes laughter among users, amplifying its reach through mechanisms like recommendation pages and news feeds.
3. Behind every user who has gained influence online by “gaming” social media algorithms to spread a message, attract followers, or make money, there is a user who has “been gamed” (made anxious, shamed, harassed, threatened) by the spread of emotion online.
4. Just as positive engagement online can lead to positive experiences offline, negative engagement online can lead to suffering, threats, and even physical harm offline.
To summarize, in its current iteration, digital media literacy fails to address the dynamics of social platforms in three respects. First, its continued advocacy of rational argument as the best defence against unwanted influence leaves people largely unprepared to manage influence stemming from the activation of “social emotions” like fear, trust, shame, and esteem. Second, its continued assumption that users begin their time online in a state of emotional neutrality is at odds with how frequently people scroll platforms hoping to alleviate offline stress, and ignores people negotiating conditions like trauma, depression, anxiety, and/or neurodivergence-related challenges, for whom elevated baselines of emotional reactivity are common. Finally, digital media literacy’s growing penchant for Big Critique can leave students convinced the effects of algorithmic influence lie entirely beyond human control, leading them to sidestep consideration of how their own arousal might be contributing to a range of troubling behaviours that start online and move off.
Rather than continuing to encourage what increasingly seem like fantasies of rational autonomy within platformed environments dominated by emotional immersion, virality, and misinformed choice, we might instead consider what a digital media literacy stressing critical embeddedness could look like. This could start with the question of how people come to assume the identity of a “user” on social platforms through a series of interactions designed to invoke “the self and its motivations, choices, and networks” (Cho et al. 2024, 909). But first, we need to understand emotion.