
The warped self

Social media makes us feel terrible about who we really are. Neuroscience explains why – and empowers us to fight back

by Mark Miller & Ben White

Shoreditch, London, 5 October 2016. Photo by Stefan Wermuth/Reuters

Levi Jed Murphy smoulders into the camera. It’s a powerful look: piercing blue eyes, high cheekbones, full lips and a razor-sharp jawline – all of which, he says, cost him around £30,000. Murphy is an influencer from Manchester in the UK, with a large social media following. Speaking about his approach to growing his fanbase, he says that, if a picture doesn’t receive a certain number of ‘Likes’ within a set time, it gets deleted. His surgeries are simply a way to achieve rapid validation: ‘Being good-looking is important for … social media, because obviously I want to attract an audience,’ he says.

His relationship with social media is a striking manifestation of the worries expressed by the French philosopher Guy Debord, in his classic work The Society of the Spectacle (1967). Social life is shifting from ‘having to appearing – all “having” must now derive its immediate prestige and its ultimate purpose from appearances,’ he claims. ‘At the same time all individual reality has become social.’ Debord recognised that individuals were increasingly beset by social forces, a prescient observation in light of the later rise of social media. But as a political theorist writing in the 1960s, Debord would have struggled to see how this shift towards appearances could affect human psychology and wellbeing, and why people such as Murphy might feel the need to take drastic action.

Today, social media is implicated in an array of mental health problems. A report from the Royal Society for Public Health in 2017 linked social media use with depression, anxiety and addiction. Some former influencers have turned against their platforms and chosen to highlight the dangers of curating a self-image with little purchase in reality. Meanwhile some platforms have trialled design tweaks aimed at protecting users’ health, such as limiting the visibility of ‘Likes’ on a post.

Concerns around social media have become mainstream, but researchers have yet to elucidate the specific cognitive mechanisms that explain the toll it takes on our psychological wellbeing. New advances in computational neuroscience, however, are poised to shed light on this matter. The architecture of some social media platforms takes the form of what some scientists are now calling ‘hyperstimulators’ – problematic digital delivery systems for rewarding and potentially addictive stimuli. According to a leading new theory in neuroscience known as predictive processing, hyperstimulators can interact with specific cognitive and affective mechanisms to produce precisely the sorts of pathological outcomes we see emerging today.

Predictive processing casts the brain as a ‘prediction engine’ – something that’s constantly attempting to predict the sensory signals it encounters in the world, and to minimise the discrepancy (called the ‘prediction error’) between those predictions and the incoming signal. Over time, such systems build up a ‘generative model’, a structured understanding of the statistical regularities in our environment that’s used to generate predictions. This generative model is essentially a mental model of our world, including both immediate, task-specific information, as well as longer-term information that constitutes our narrative sense of self. According to this framework, predictive systems go about minimising prediction errors in two ways: either they update the generative model to more accurately reflect the world, or they behave in ways that bring the world better in line with their prediction. In this way, the brain forms part of an embodied predictive system that’s always moving from uncertainty to certainty. By reducing potentially harmful surprises, it keeps us alive and well.

Consider the healthy and expected body temperature of 37°C for a human being. A shift in either direction registers as a spike in prediction error, signalling to the organism that it’s moving into an unexpected, and therefore potentially dangerous, state. This rise in prediction error is fed back to us as feelings of discomfort, stress and an inclination to do something to get a better predictive grip on reality. We could just sit there and come to terms with the changing temperature (update our generative model), or we might reach for a blanket or open a window. In these cases, what we’re doing is acting upon our environment, sampling the world and changing our relation to it, in order to bring ourselves back within acceptable bounds of uncertainty.
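
To make the two routes concrete, here is a toy sketch in Python of a predictive agent keeping body temperature within expected bounds. It is purely illustrative – the class, the numbers and the ‘actions’ are our own assumptions, not a model drawn from the research discussed here.

```python
# Purely illustrative sketch of the two routes to minimising prediction error
# described above: revise the model, or act on the world. All values invented.

class PredictiveAgent:
    def __init__(self, expected_temp=37.0, learning_rate=0.1):
        self.expected_temp = expected_temp   # prediction from the generative model
        self.learning_rate = learning_rate   # how readily the model revises itself

    def prediction_error(self, sensed_temp):
        # Discrepancy between what the model expects and what the senses report
        return sensed_temp - self.expected_temp

    def update_model(self, sensed_temp):
        # Route 1: 'come to terms with' the change by revising the expectation
        self.expected_temp += self.learning_rate * self.prediction_error(sensed_temp)
        return self.expected_temp

    def act_on_world(self, sensed_temp):
        # Route 2: sample and change the world so it matches the prediction again
        error = self.prediction_error(sensed_temp)
        if error < 0:
            return 'grab a blanket'   # colder than expected
        if error > 0:
            return 'open a window'    # warmer than expected
        return 'do nothing'

agent = PredictiveAgent()
print(agent.prediction_error(36.2))   # roughly -0.8: a surprising, uncomfortable state
print(agent.act_on_world(36.2))       # 'grab a blanket'
print(agent.update_model(36.2))       # or drift the expectation towards 36.2
```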

According to the emerging picture from predictive processing, cognition and affect are tightly interwoven aspects of the same predictive system. Prediction errors aren’t merely data points within a computational system. Rather, rising prediction errors feel bad to us, while resolving errors in line with expectation feels good. This means that, as predictive organisms, we actively seek out waves of manageable prediction error – manageable uncertainty – because resolving it results in our feeling good. The recent rise in jigsaw puzzle sales during the COVID-19 lockdown testifies to our love of manageable uncertainty. These feelings evolved to keep us well tuned to our environment, helping us to curiously feel out novel and successful strategies for survival, while also avoiding all of the stress and unpleasantness that comes with runaway uncertainty. This active, recursive and felt relationship with the environment is crucial to grasping how social media can be detrimental to our mental health, and why we often find it so hard to stop using it.

Living well, in predictive processing terms, means being able to effectively manage uncertainty – and that’s predicated on having a generative model that represents the world accurately. A generative model that poorly reflects the regularities of the environment would inevitably lead to an increase in bad predictions, and a flood of difficult-to-resolve errors. Predictive processing theorists are beginning to develop novel accounts of mental health conditions, focusing on the effectiveness of a person’s generative model. Depression, for instance, has been described as a form of ‘cognitive rigidity’, where the system fails to adjust how sensitive it is to corrective feedback from the world. People in good mental health use emotional feedback to flexibly tune their expectations: sometimes it makes sense to ‘write off’ a prediction error as just noise, rather than see it as something that demands a change in their generative model of the world; other times, it makes sense to change the model because of the error. In depression, researchers hypothesise, we lose this ability to move back and forth between more and less ‘sensitive’ states, which results in rising and unmanageable prediction error. Eventually, we come to predict the inefficacy and failure of our own actions – a prediction that becomes self-reinforcing, since confirming it delivers at least some minimal satisfaction. At the level of the person who is depressed, this manifests in feelings such as helplessness, isolation, lack of motivation and an inability to find pleasure in the world.
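
The idea of ‘cognitive rigidity’ can be illustrated with a small simulation – our own construction, with invented numbers rather than a published model. A flexible agent re-tunes how much weight it gives to corrective feedback when errors stay persistently large; a rigid agent cannot, and keeps paying for a change in the world that it never takes seriously.

```python
import random

# Toy comparison (invented numbers, not a published model) of a flexible
# versus a 'rigid' predictive agent when the environment genuinely changes.

def run(flexible, steps=300, seed=1):
    random.seed(seed)
    belief = 0.0          # the agent's expectation
    precision = 0.05      # how much weight corrective feedback gets
    avg_abs_error = 0.0   # running sense of how surprising the world has been
    total_error = 0.0
    for t in range(steps):
        world = 0.0 if t < 150 else 3.0          # the regularities shift halfway through
        sample = world + random.gauss(0, 0.5)    # noisy evidence
        error = sample - belief                  # prediction error
        avg_abs_error = 0.9 * avg_abs_error + 0.1 * abs(error)
        if flexible:
            # Treat persistently large errors as informative, mere noise as ignorable
            precision = min(0.8, max(0.05, avg_abs_error / 3))
        belief += precision * error              # precision-weighted update
        total_error += abs(error)
    return round(total_error)

print('flexible agent, accumulated error:', run(True))   # adapts once the world changes
print('rigid agent, accumulated error:', run(False))     # error keeps mounting after the shift
```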

‘Snapchat surgery’ makes perfect sense within the predictive processing framework

Social media is a spectacularly effective method for warping our generative models. It overloads them with bad evidence about both the world around us and who we are. The space between being and appearing is potentially vast – with a few swipes, we can dramatically alter our appearance, or retake the same picture 20 times until our face exudes precisely the calm mastery of life we wish to project. As social media platforms develop features that allow us to present ourselves inauthentically, those platforms become all the more powerful bad-evidence generators, flooding the predictive systems of their users with inaccurate information, telling us that the world is full of impossibly beautiful, happy people, living wonderfully luxurious and leisurely lives. Typically, in the offline world, our generative model and expectations are tuned by information coming in from the immediate (unfiltered) environment, which means that most of the time the model accurately reflects the world. In cases of regular and heavy engagement with social media, however, incoming information about the world is carefully selected, curated and altered – we’re potentially engaging with a fantasy. In regard to Debord’s feared shift from authenticity to appearances, social media platforms act as a digital crowbar, prising apart our generative model from the offline environment. Instead, our model of the real world comes to take on the expectations generated through the online one, and the result is increasingly unmanageable waves of prediction error that the system must now strive to minimise.

The seemingly extreme actions of Murphy, the influencer above, are one strategy for resolving this kind of prediction error. A recent survey found that more than half of cosmetic surgeons had patients explicitly asking them for procedures that would enhance their online image, while some surgeons have also reported patients using enhanced images of themselves as an example of how they’d like to look. Murphy describes how filters allowed him to ‘preview’ the effects of cosmetic procedures and, while Instagram has now banned that specific filter, many other apps perform similar functions.

So-called ‘Snapchat surgery’ makes perfect sense within the predictive processing framework. If we become accustomed to our own doctored appearance, and to receiving all of the feedback associated with it, then the lower level of validation available offline will soon register as mounting prediction error. That’s likely to result in feelings of stress and inadequacy. Through the lens of predictive processing, we see that getting surgery to look more like a filtered image is just the system doing what it always does: it’s no different from grabbing a blanket as the temperature begins to drop. We’re sampling the world to bring ourselves back into an expected state. But social media is capable of displacing our self-image so much that the only way to rectify the error and meet those expectations is to surgically alter the way we look.

Note, though, how high the stakes are in this scenario. If we’re unable to resolve the error, and continue to engage with social media, then this consistent failure is fed back to us, eventually teaching us to expect our own failure. Through a cascade of second-order predictions – predictions about our ability to predict accurately (or not) – we stay attuned to the expected utility of our own actions. And when our actions consistently fail, we lose confidence in them. Eventually, we lower our predictions for success, which we feel as complete hopelessness. This is precisely the scenario described by neuroscientists working on the computational accounts of depression mentioned earlier: if we consistently fail to reach our expectations, and then fail to readjust those expectations, we come to expect the failure of our own actions. The inauthentic content on social media – images of beauty and luxury – can pin those predictions in place, making it even harder for us to readjust our expectations for our own lives based on real-world feedback. Thus, social media can put us in a bind: either we bring the world into line with our new expectations, or we risk sliding into depression and despair.

Of course, there’s a more obvious way to alleviate these problems: spend less time online. For some of us, this is easier said than done, as mounting evidence supports the suspicion that social media can be addictive. A comprehensive review in 2015 defined social media addiction as a disproportionate concern with and drive to use social media that impairs other areas of life, and found that roughly 10 per cent of users exhibit symptoms of addiction. Interestingly, this is around the same percentage of people who have problems with alcohol – but while the addictive hooks of alcohol are relatively well understood, those of social media are not. Predictive processing might once again hold the key to understanding exactly how the features of particular platforms come to have such an effect.

Predictive processing offers a new understanding of addiction as a derailment of the alignment between predictive systems and their environment. Life contains many rewards, and the brain experiences these rewards as achieving a decreased prediction error: contrary to popular belief, it isn’t dopamine per se that’s rewarding, but the reduction in error that accompanies it. Neurotransmitters such as dopamine simply encode and entrench the behaviours that we learn to anticipate will deliver these rewards. Now, just like various addictive drugs, the developing landscape of digital technology is disrupting this relationship between reward and behaviour. In his important book Your Brain on Porn (2014), the science writer Gary Wilson argues that internet pornography presents itself as dangerously rewarding, an example of a ‘hyperstimulator’. Wilson points out that, in one evening, internet porn facilitates levels of sexual novelty that would have been unavailable to our ancestors across an entire lifetime: multiple tabs or windows, hundreds of different models, escalating fetishes, all of which conspire to have our reward circuitry screaming ‘Hell, we’re doing far better than we ever thought possible!’ when in reality we’re just staring at a screen, alone. The novelty is particularly enticing, as our brains are always seeking new ways of reducing error, novel ways of doing better than expected. Our brains register this as a huge resolution of uncertainty, and our reward circuitry goes into overdrive, reinforcing these particular reward-seeking behaviours.
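
The logic can be seen in a few lines of code. The sketch below – our own toy example, with invented reward values rather than anything from the literature – uses a simple Rescorla-Wagner-style update: a steady, ordinary source of reward quickly stops being surprising, while an escalating ‘hyperstimulating’ stream keeps beating the learner’s expectations, so the ‘better than expected’ signal never dies down.

```python
# Toy Rescorla-Wagner-style learner (invented values, purely illustrative).
# delta is the reward prediction error: the 'better than expected' signal
# that dopamine is thought to report.

def average_positive_surprise(rewards, alpha=0.2):
    V = 0.0            # expected reward
    positives = []
    for r in rewards:
        delta = r - V              # reward prediction error
        if delta > 0:
            positives.append(delta)
        V += alpha * delta         # behaviours that keep delta positive get entrenched
    return sum(positives) / max(len(positives), 1)

ordinary_socialising = [1.0] * 20                  # steady, roughly expected reward
hyperstimulating_feed = [1.0, 3.0, 5.0, 8.0] * 5   # escalating novelty keeps beating expectations

print(average_positive_surprise(ordinary_socialising))   # small: the expectation soon catches up
print(average_positive_surprise(hyperstimulating_feed))  # large: we keep 'doing better than expected'
```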

What pornography is to sex, social media platforms are to our intrinsic appetite for socialising. Engaging in meaningful interpersonal bonding draws on all of the reward circuitry mentioned above: it feels good to socialise, and dopamine encodes learning for successful social behaviours. One major similarity between social media and pornography is that both are a powerful vehicle for taking carefully curated fantasy, and presenting it as an attainable reality. These presentations of ‘better than real life’ scenarios (eg, carefully staged and filtered images; maximally exciting sexual encounters in pornography) are highly alluring for predictive agents always on the lookout for ways to improve. On social media – just as with online porn – high levels of novelty and excess mean that the reward system is kicked into overdrive. It’s no wonder that a report in 2019 found that the average teenager in the US now spends more than seven hours a day looking at a screen. Through social media, hyperstimulation works to reorganise our predictive model and restructure our habits: we wake up and reach for our phone, never leave home without it, and constantly feel drawn toward our phones even when in the company of friends.

The hyperstimulating effect of social media, however, doesn’t emerge only from an excess of carefully edited content and potentially massive social feedback. It also comes from deliberate design features – features that place social media much closer to gambling than to pornography. In gambling, what’s so arousing (and habit-forming) is the anticipation of reward, or the expectation of an uncertain reward. Yes, offline social interactions are often unpredictable too, in that we don’t know when someone might contact us or interact with us in rewarding ways – but social media sites are engineered to compound this anticipation through gamification, in which features such as progression, points-scoring and risk-taking are introduced into a nongame setting. Social media gamifies social interaction, primarily through various highly interactive systems of ‘Likes’, ‘shares’, ‘upvotes’, comments and so on, which apply to user-created content. This feedback is the direct measure of the ‘success’ of a particular post, and allows for comparisons in popularity between posts and posters.

Debord was right: the detachment of appearance from reality can cause profound harm

Moreover, when feedback does come, it isn’t immediately communicated to the user. Rather, we receive notifications in the form of a shining button or exciting sound that delays the discovery of the precise nature of the incoming content. The simple act of pushing a button to reveal information has been shown to trigger arousal and compulsive behaviour, and newly developed features on smartphones add further layers of anticipation. The ‘swipe to refresh’ feature of the Facebook app’s news feed, for example, where users physically swipe the screen to generate a new stream of information, is startlingly similar to pulling the arm of a casino slot machine. In each case, users don’t know for sure what kind of content will spring up until they swipe. This feature, coupled with the fact that Facebook’s feed is now effectively infinite, has led to the app being described as ‘behavioural crack cocaine’.
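
A small simulation makes the point about uncertain feedback – again with our own toy numbers, not anything measured from a real platform. When every refresh reliably delivers a ‘Like’, there is soon nothing left to anticipate; when the same reward arrives unpredictably, prediction errors never settle, and each swipe still carries a jolt of anticipation.

```python
import random

# Toy comparison (invented numbers) of predictable versus slot-machine-style
# feedback schedules for the same simple learner.

def remaining_surprise(schedule, alpha=0.2):
    # Average absolute prediction error over the final 50 'refreshes'
    V = 0.0
    recent = []
    for reward in schedule:
        delta = reward - V
        recent.append(abs(delta))
        V += alpha * delta
    return round(sum(recent[-50:]) / 50, 3)

random.seed(0)
predictable = [1.0] * 200                              # a 'Like' on every refresh
intermittent = [1.0 if random.random() < 0.2 else 0.0  # a 'Like' on roughly one refresh in five,
                for _ in range(200)]                   # at unpredictable moments

print(remaining_surprise(predictable))    # close to zero: nothing left to anticipate
print(remaining_surprise(intermittent))   # stays well above zero: each swipe still surprises
```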

It’s worth noting how digital space dissolves the temporal and spatial constraints that govern offline interaction, offering an excess of novelty and validation that simply isn’t available in the real world. Even moderately successful Instagram profiles can count between 40,000 and 100,000 followers; users can instantaneously exchange direct messages with people who might well be complete strangers; and when users get bored of the content they’re currently interacting with, a quick swipe or message generates new, exciting, unpredictable content. These structural features – which deliberately elicit anticipatory states and facilitate near-endless potential for novelty – are something that deflationary accounts of social media addiction often fail to recognise.

Debord was right to be concerned about us: the detachment of appearance from reality can cause profound harm to our wellbeing, and drive us to take drastic action. Perhaps Debord’s take on these anxieties is most pithily summed up by the internet slang ‘pics or it didn’t happen’: experiences themselves are fully constituted by their appearance circulating across social networks. Moreover, Debord recognised that these harms don’t emerge from a vacuum. The danger of social media lies not only in the inauthenticity of its content, but in its ability to grip us. There’s a powerful force driving the deliberate design of social media: its immense potential for monetisation. As the design guru Nir Eyal writes in Hooked (2014): ‘Companies increasingly find that their economic value is a function of the strength of the habits they create.’

If it turns out that engagement with hyperstimulators can lead to conditions such as addiction and depression, and as long as it remains the case that more engagement means more profit, then designers of social media will have a de facto interest in implementing designs that lead to human misery. This emerging scientific picture adds to the growing consensus that digital hyperstimulators are a threat to our wellbeing – and lends weight to those voices calling for change in how social media is designed, operated and regulated.

To read more about mental health, visit Psyche, a digital magazine from Aeon that illuminates the human condition through psychology, philosophical understanding and the arts.