Aeon

The unseen

Our crisis of work and technology is one in which too many people feel that nobody sees them as a fellow human being

by Allison J Pugh 

Paul was a gig worker in the San Francisco Bay Area.1 A former project manager in tech, he was laid off by several companies in a row and began working entirely for platforms like Lyft, Uber and TaskRabbit. He managed to eke out a living, but the jobs posed a different problem.

‘Honestly, a lot of times, I go out and the person doesn’t even know my name, even though I introduced myself as Paul,’ he told me. ‘Instead, customers just point and say: “OK, yeah, just put it over there,” and then I drop off the stuff, and they just tap it. I think they see it as more of an – I think they see it as automation. They see you as just a system.’ He paused. ‘I have friends that tell me: “You’re essentially working as a vending machine.”’

For Paul, it was his newfound invisibility that was searing. ‘I feel as if I don’t want to be a robot. I want to have some sort of – ’ he broke off. ‘It’s so much more enjoyable for me to talk to somebody.’ Paul’s struggle reflects a contemporary emergency that some are calling a crisis of loneliness.

There is widespread concern about loneliness, which scientists define as the feeling that one’s need for social connection is not being met. (They differentiate loneliness, a subjective experience, from social isolation, the objective fact of how many social contacts one has.) In 2023, the US Surgeon General declared a loneliness epidemic, and the World Health Organization established a Commission on Social Connection, recognising loneliness as a ‘global public health priority’. The United Kingdom and Japan have each appointed a Minister for Loneliness. Worldwide, loneliness has attracted enormous attention from policymakers and researchers alike.

Thanks to research, there is much we know about loneliness: first and foremost, that it has an enormous impact on wellbeing. Studies link loneliness and social isolation to increased mortality, dementia and stroke. Among adults, loneliness is linked to chronic diseases such as heart disease and obesity, and it is known to worsen job performance and commitment. Lonely children and youth are more likely to be anxious or depressed, addicted to games, or to suffer from sleep problems. Loneliness also worsens academic achievement, and lonely kids are more likely to drop out of school. Being lonely is undeniably bad for our physical and mental health.

While the consequences of loneliness are not controversial, its causes are. Scientists do not agree about the impact of screen time or age, nor do they agree about loneliness trends over time. While it is certainly important and pervasive, the ‘epidemic’ might not be new. Loneliness in older adults appears largely stable over time, although there has been a small, gradual increase in loneliness among young adults over the past 50 years. A glimpse at Australian government data tells the story: although there are differences by age group, the proportion of people aged 15 and over who are lonely is similar to two decades ago, at about 15 per cent. Even the COVID-19 pandemic did not have the devastating effect on people’s social relations that might have been predicted: recent research suggests that, overall, people’s social-gathering habits and number of confidants were resilient, dipping during the pandemic but then bouncing back to pre-pandemic levels. Loneliness is a serious health risk, then, but we are likely not lonelier now than we have ever been before.

Pundits and policymakers are instead reaching for the word ‘loneliness’ to name a real and growing problem, but it is the wrong diagnosis. What they might call ‘loneliness’ is actually a different sort of crisis, one of depersonalisation. Depersonalisation is what happens when people feel not exactly lonely, but rather profoundly invisible. What is missing here is what scholars call ‘recognition’, ‘mattering’ or ‘being seen’ – the notion that you are seen and heard, even emotionally understood, by the people around you, as opposed to feeling insignificant or invisible to others.

The depersonalisation crisis reflects changes in both the supply and demand for this kind of attention. Anonymity has long been the curse of modernity, given enduring trends like industrialisation and urbanisation, but even contemporary developments such as the spread of standardisation in service work – like when the grocery checkout clerk asks ‘paper or plastic’ or the call-centre worker races through their closing spiel to get it in before you hang up – can make us feel like a number. At the same time, while infants may have a basic need for it, the sense that we deserve or require emotional recognition by others is historically new, reflecting the rise of a therapeutic culture and changes in what counts as ‘good enough’ parenting, among other trends. When Paul talks wistfully about not being a robot or about having customers just point out where he should put his delivery, he is talking about depersonalisation.

Sarah, a therapist at a veterans’ hospital, told me how depersonalisation led to a surprising realisation in her practice about the power of mistakes. She’d had a patient, a woman who had experienced sexual trauma in the military, and, as Sarah told the story, at the end of her third or fourth week of therapy, the woman left the session with a comment that she might not be able to return because she ‘might get busy’.

She seemed to come from a lifetime of not mattering enough, of enduring misrecognitions from practitioners

‘Something was just kind of off,’ Sarah recalled. ‘It didn’t feel like the same. It just didn’t feel right.’ So Sarah called her before the next week, and told her about her sense that something was wrong. ‘I think I said something like: “The session felt different today. I’m wondering if maybe I missed something, or didn’t hear something right. I think it could be helpful to talk about that if you’re able to come in again.”’

That moment ended up being pivotal, Sarah said. ‘She ended up coming in again and we were able to talk, and at that point the relationship really shifted. She ended up being one of the most consistent people and she ended up making tons of progress that year.’

As the treatment came to an end, Sarah asked the woman what she thought had worked for her. ‘And she was like: “There was this point where you noticed that I wasn’t happy with whatever you did. The fact that you even noticed that was a big deal.”’ It was actually the rupture itself, and Sarah’s attempt to rectify it, that had helped the relationship. In fact, it was partly the client’s low expectations that led to this result, Sarah said. She seemed to come from a lifetime of not mattering enough, of enduring misrecognitions from other practitioners who did not stop to notice.

‘I think this was someone who hadn’t had her needs seen for much of her life,’ Sarah said. ‘So she was already used to it, and expected people to just not know or care to notice what was happening for her. And so the experience of actually having someone who was attuned enough to notice that there was something off, and then bring it up, I think, was very powerful for her.’ When Sarah corrected her mistake, she cut through the fog of depersonalisation.

There is ample evidence that being seen is in too short supply, that many are like Paul, or Sarah’s patient, the walking wounded beset by depersonalisation. A sense of feeling invisible clearly animates working-class rage in many countries, and may have powered Donald Trump to victory in the US presidential election last fall. One study analysing his speeches found that he systematically aimed to appeal to this group by affirming their worth as workers; in the 2024 election aftermath, an op-ed in The New York Times declared: ‘Voters to Elites: Do You See Me Now?’ Research finds that low-income people are more likely to feel isolated and depressed, and stigmatised due to their socioeconomic status, with some choosing to isolate themselves because of their feelings of self-doubt. But while the working class and the poor may endure more invisibility, the spread of being subject to someone else’s data collection, of scripted and standardised interactions with chatbots and AI agents, affects people up and down the class ladder. Depersonalisation has come for us all.

When we feel invisible, it can lead to a desperate yearning for recognition, one that targets those whose job it is to see others. ‘My patients – it’s just like they’re singing their siren song to whoever will listen because no one will take care of them,’ said Jenna, a primary care physician at a community clinic in the San Francisco Bay Area. She told me that her patients were frantic for her attention. ‘They’re used to not getting their needs met, and they’re just desperate.’

Their longing was so intense, so unrelenting, she said, that it overwhelmed her ability to meet it, given time constraints imposed by the extraordinary patient loads and limited resources at her community clinic. Researchers report that such working conditions contribute to clinician bias and stereotyping: in other words, to practitioners’ inability to see ‘the other’ well. To Jenna, the tragedy of these limits felt like a heartbreak. Patients ‘want so much more from me than I can give them,’ she said.

‘I don’t invite people to open up because I don’t have time. And that is such a disservice to the patients,’ she told me. ‘Everyone deserves as much time as they need, and that’s what would really help people, to have that time, but it’s not profitable.’

In an era in which AI is being proposed to do many human jobs, it takes a human to bear effective witness to humanity

Not everyone necessarily wants recognition, as the sociologist Freeden Blume Oeur discovered when he studied a school serving mostly Black, low-income boys. As described in his book Black Boys Apart (2018), Blume Oeur found that while some sought respect or dignity, others actually wanted to ‘be unknown’, an urge strongest among those boys with prior formal contact with the criminal justice system. To them, relative anonymity felt like a privilege, the privacy of being free from others’ presumptions, a way of belonging to their communities without the mark of a criminal.

Despite some exceptions, however, the longing to be seen is widespread, acknowledged in popular culture, and supported by research. Depersonalisation overlaps with loneliness – it is surely disheartening to feel invisible, and it can make people feel alone – but it is not the same thing. And there is evidence that increasing numbers of people feel unseen or unheard by others, making it an unnoticed emergency, a crisis of the unsung.

Paul, Sarah and Jenna were among the scores of people I interviewed and shadowed while researching my recent book The Last Human Job (2024). I looked at the work that people do to connect to others, and discovered that many of them use some version of seeing the other – or what I call ‘connective labour’ – to achieve valuable outcomes, from helping someone manage their chronic illness to teaching someone how to write an essay. I ended up talking to more than 100 people – most of them connective-labour practitioners such as therapists, teachers or physicians – and observing them for more than 300 hours.

While surveys can help us answer questions about the prevalence of something or its correlations with particular demographic characteristics, the kinds of stories people told me are available only through in-depth qualitative research. The experience of emotional recognition is one that involves sending and receiving messages, sometimes verbal ones that we can hear, and other times bodily ones through gestures as elusive as a nod, a chuckle or a wrinkle, or a ‘vibe’ or an ‘energy’. I was lucky to be able to experience these interactions firsthand, and to hear how people talked about these connections, how they accomplished them, what they got out of them, and what they hoped others did, too. To capture such an elusive, emotional target as the connections between people, there is no substitute for close observation and in-depth conversation. In an era in which AI is being proposed to do many human jobs – including in-depth interviewing – it takes a human to bear effective witness to humanity.

Where might the depersonalisation crisis come from? For many people, the answer must be technology. Surely our devotion to screens – the average global internet user spends 6 hours and 40 minutes online every day – is getting in the way of our capacity to see each other. Yet while technology has a role to play (more on that later), it is not the whole of the problem.

Katya, a therapist I spoke to at a busy veterans’ hospital in California, offers a glimpse of a different answer. Tasked with screening patients for mental health problems, she was supposed to use a questionnaire that the hospital assumed would take 15 minutes, and she hated that her job forced her to standardise her interactions with clients.

‘I’m the first person they’re talking to about mental health, and we have to do some stupid questionnaire, and we have to ask about suicidal ideation. I’ve had somebody totally shut down during that part,’ Katya recalled. ‘I was doing a suicide risk assessment, and when it came to the gun part, he said: “I’m not answering anymore.” I thought: “Oh shit, I’ve lost him.” Whatever connection we had was just severed in that moment.’

As Katya and her guarded patient might attest, one major culprit behind the wave of depersonalisation is the widespread reduction of individuals to data. Feeling invisible can stem from repeatedly experiencing standardised interactions – as a client, patient, student or even worker – a growing trend in even the caring professions, as clinics and firms have tried to systematise the messy interactions or creative inputs that are unpredictable or inefficient but can help workers and their clients feel like human beings.

We are dividing ourselves into two groups: the watched and the watchers

Depersonalisation can come from living in excessively standardised environments, such as the military or other mass institutions, as Sarah’s patient might attest. It can also emerge when people live in a community but are not of a community, perhaps due to a marginalised identity or a recent move.

Take the example of Courtney, a Black woman and a pregnant graduate student, who met her obstetrician for the first time at a prenatal visit, according to Patrice Wright, a reproductive justice scholar at Howard University in Washington, DC. The doctor made some comments about Courtney keeping the weight off and about the government’s WIC programme, a food subsidy for poor mothers and children. The comments revealed to Courtney that the doctor assumed she was receiving WIC assistance, didn’t understand basic nutrition, and would tend to gain too much weight. But while Courtney may have been low-income, she was not receiving WIC, knew a lot about nutrition, and was not overweight. Courtney felt profoundly unseen and did not return to the doctor; the misrecognition on display there – what therapists would call an empathic failure – caused her stress, anger and anxiety, Wright reports.

Finally, as we might surmise, screens matter, shaping and blocking what we see of each other, how we are viewed, and whether we are seen at all. As it turns out, the way we participate in online spaces contributes to their impact. For instance, while staying in touch with friends and family is the most common reason people give for using social media, around half maintain this is not their primary motivation; indeed, almost 40 per cent report they use social media to ‘fill spare time’, testament to the growing use of social media as entertainment as much as connection. Ultimately, depersonalisation can stem from endlessly scrolling past other people’s posts, serving as merely an audience for their experiences, bearing witness to other people while never being witnessed in return.

These trends at the root of depersonalisation – the standardisation of interactions and contexts, the exclusion of the marginalised in fractured communities, and the proliferation of screen time for a perennial audience – are not distributed evenly. The less advantage you have as a client, patient, student or worker, the more standardised your environment, the more likely you are to be subjected to bias, the more likely you are to be excluded. And while excessive screen time is surely an affliction for people up and down the class ladder, we are dividing ourselves into two groups: the watched and the watchers. Depersonalisation trends have come together to create a new class of the unseen.

If we are facing a depersonalisation crisis, why is all the talk about loneliness? I think this is, in part, because the focus on loneliness serves the interests of those who would sell us its solution – ironically, some of the same characters who are helping to cause the problem in the first place.

In spring 2025, Meta’s CEO Mark Zuckerberg made headlines when he reached for loneliness discourse to sell AI. In an interview with the podcaster Dwarkesh Patel, he said that most Americans have fewer than three friends, but want more like 15. ‘The average person wants more connection than they have,’ he said, suggesting that AI companions might step into the breach.

Technologists want us to focus on loneliness, not depersonalisation. Of course, social media platforms like Facebook, Reddit or Instagram are a bundle of contradictions for people’s relationships with friends and family, with plenty of implications for loneliness. The screens take us away from being fully present with people in our immediate environs, even as social media also enhances ties further afield. In fact, researchers say social media provides a form of ‘social snacking’, offering brief connections to other people that can help users tolerate a lack of ‘real’ (long-term or in-person) social interaction for longer. Particularly for lurkers and other passive users, social media contributes to both connecting and feeling disconnected, which explains the confusing fact that researchers report: social media use increases both satisfaction and dissatisfaction about one’s relationships. Just like the not-quite-filling calories from a fast-food snack, the social snacking of social media ensures a continued supply of connection-hungry customers coming back for more.

It is this contradictory knot of ambivalence that brings people back, again and again, to find interactions on these platforms, whose billionaire owners have a continued interest in stoking the so-called loneliness crisis. Marketers know: ‘Sell the problem you solve, not the product.’ This aphorism captures their one-two punch: before consumers will buy your solution, they first have to be convinced that they need it. Perhaps that is why Meta’s own research teams studied Facebook’s impact on loneliness, only to conclude that the platform was a ‘net positive’. The New Yorker recently quoted the tech entrepreneur Avi Schiffmann, whose startup is creating an AI wearable device dubbed ‘Friend’, as saying: ‘I do think the loneliness crisis was created by technology, but I do think it will be fixed by technology.’ Just like the purveyors of ‘feminine hygiene’ products, educational toys or body deodorant, then, technologists both sell a widely touted crisis and profit from its solutions. They have become merchants of loneliness.

A machine makes people less worried about its judgment, but also less interested in its opinion

When we understand the problem as loneliness, then it might make sense to assert that all kinds of connections, even those with machines, might help. But when we understand the problem as depersonalisation, the mechanised relationship becomes a harder sell. Of course, technologists do their best, apparently recognising the widespread yearning to be seen; their solutions, however, invite even more data and technology to step in.

They urge a strategy that is widely called ‘personalisation’, involving a process of ever more precise tailoring, in which data is harnessed by technology to analyse a person’s health history, how a person likes to drive, or even the content of someone’s sweat. ‘Personalised medicine’ and ‘personalised education’ – perhaps better called ‘customised’ – are each an effort to assess someone’s needs and produce recommendations tailored to the individual: akin to being seen, but by a machine.

Since ChatGPT burst onto the scene, large language models (LLMs) have taken mechanised recognition to a new level. Most recently, chatbots have been designed to teach, provide therapy, give medical advice and conduct qualitative interviews, in each case, allegedly better than humans. For example, researchers who designed a chatbot interviewer claimed that it demonstrated ‘cognitive empathy’, using follow-up questions to try to understand an interviewee ‘close to how they understand themselves’. People find bots convenient, relatively cheap, and better than nothing, but also less judgmental and sometimes even warmer than humans, who reflect all the time constraints and efficiency pressures that Jenna lamented at her clinic. Somehow, we have found ourselves at a particularly absurd moment in the industrial timeline, when people are too busy for us while machines have all the time in the world. And the tech solution to problems of peremptory or dismissive clinicians is sold as easier or cheaper than giving practitioners more time to do their work well.

A recent report on chatbot therapy is typical: researchers found an uneven impact. Some users suggested that they did indeed feel seen by the bot: ‘This app has treated me more like a person than my family has ever done,’ one client reported. But others complained about how the app’s imperfect witnessing made them feel invisible: ‘While I was in crisis, the responses do not make sense and do not really relate to what I wrote. It makes me feel like I am not being listened to. I know it is an AI program and not a real person but it still ends up making me feel worse and not better,’ wrote another user. Being unseen hurts even when it is a machine that is doing the seeing.

In these heady days of the proliferation of empathic chatbots, it is tempting to believe that machines can do the work of seeing the other – and that any imperfections are momentary blips on their way to being ironed out. But as engineers struggling with plummeting app retention know too well, it is other humans who hold our interest, even at the risk of their judgment. When Jenna’s patients determinedly seek her out, they do so because they value her opinion, and they value it, in part, because her expertise brings with it the risk of judgment. When machines pretend to ‘see’ us, the fact that it is a machine on the other end, not a human being, matters: a machine makes people less worried about its judgment, but also less interested in its opinion.

When I asked Peter, an engineer, what he thought humans still had to offer in this work, he said: ‘an audience that matters’. In his view, robots would someday do most everything humans could do – in education, for example, that would include grading papers and answering questions about the material. He still wasn’t sure, however, if one could ‘project enough humanness onto a robot that you want to make it proud of you’.

Most important, however, even if machines could do that work well, why would we want them to? Carrie, a therapist at a veterans’ hospital, questioned whether apps or agents could ever offer the nonverbal acuity that she considered crucial to good therapy. And even if they could, she viewed their development as a political decision: ‘Even if machines could pick up nuances and facial expression and that kind of stuff, why are we doing that though? So that people can make money in tech? So that big, huge industry can continue to blossom? Why do we have to do that? So, that would be my question.’

Seeing others is how we experience connection, forge community, and even conduct democracy. Among the myriad human activities that we might ‘disrupt’, it is not clear why we would want to mechanise the relationships that give life meaning. The depersonalisation crisis is a social malady that cries out for human intervention.

We are at a critical juncture where the decisions we make or fail to make will affect the trajectory of AI and connective labour. On the one hand, we are living in an AI spring, a moment in which artificial intelligence is being deployed to solve problems we thought were intractable, such as how to conquer drug-resistant bacteria in hospitals, how to predict earthquakes, or how to decode the language of sperm whales, with sometimes magical results. AI has ushered in a new era of enormous possibility. On the other hand, AI is also being actively deployed as an alternative to human witnessing, in fields from therapy to teaching to medicine. It cannot do everything, nor should we want it to.

We know that AI brings serious problems: the most common criticisms have involved algorithmic bias, surveillance and privacy, and job loss. We hear how AI turns historical correlations, often based on bias and stereotyping, into built-in assumptions, so that sentencing algorithms are more likely to predict recidivism for Black defendants than white ones, for example. We hear that apps track whether Amazon drivers look away from the road, or that the Chinese government has deployed a ‘social credit’ algorithm to assign citizens a risk score determining their ability to book a train ticket. We hear that AI will radically reduce many occupations, dermatologists and truck drivers alike. These are all worthy concerns. Yet there is something that critics of AI often overlook: the impact of AI on relationships, on the connections between people that are forged in emotional, interpersonal work, work like teaching, counselling or primary care.

Rather than capitulate to ‘better than nothing’, we need to make it more possible for people to see each other

Turning to tech replacements for socioemotional work is likely to have serious consequences, including the radical shrinking of the connective-labour workforce; the destruction of education as we know it, as students opt for bots to do their work; the extreme stratification of human contact, with personal connective labour as a luxury; and the loss of human-to-human bonds that underlie our civic life. Engineers trying to solve these problems with AI do so because they focus on the individual patient, client or worker. But by leaving the implications for relationships out of the conversation, we make it impossible to treat the depersonalisation crisis.

Addressing the depersonalisation crisis requires us to take seriously its roots: in standardisation, in exclusion, in screen time. Rather than capitulate to a mechanised recognition as ‘better than nothing’, we need to make it more possible for people to see each other well. Instead of scripting encounters to save time and money, for example, this means improving training and investing in staffing so that Jenna and others like her can offer the balm of their witnessing.

‘To me, the art of medicine is being fully present,’ Ruthie, another physician, told me. ‘My real passion is my elderly patients. I love them. They want to talk, they want the stories, the connections, that are so desperately needed. If you don’t hear about why they’ve got into their situation, you’re not healing, it’s basically creating more illness. That’s what I’m really doing in medicine.’ Ruthie spent years looking for the right kind of setting before she found one – her own practice – that let her do medicine in this way. In my research, I found several examples of clinics and schools with the social architecture – the dedicated resources, visionary leadership, and culture of connection – that enabled them to prioritise recognition.

We also need to address the disparities in who gets witnessed and who is presumed to be a perpetual audience, a dynamic that will take work to upend. Mariah ran a programme for former inmates to learn business skills, and told me that students had to adjust to the programme. ‘It takes a while for our entrepreneurs to begin to feel comfortable having that much attention on them, like: “You mean, you just want to know about what I think? You mean, you just want to be here and invest in my plan? We’re just going to be talking about what I want to do?”’ Shocked by the novel experience of being seen, the students asked plaintive questions that revealed just how uncertain they were, how unsure they were that they even warranted all that focus from others. ‘All that stuff is [a form of] undoing,’ Mariah said, ‘particularly for those people who have been inside for so long, where they were completely disempowered.’ Part of that undoing also means making cultural space for diverse voices in books and movies, and training and hiring people from disadvantaged backgrounds for jobs in medicine and teaching, where they can help elicit these stories.

Finally, we need to forestall the mechanisation of recognition. In this freewheeling era of so little regulation, when the tech industry meets every criticism with accusations of being ‘against progress’, it can be difficult to differentiate between what is valuable and what is not in the field. But we can commend some uses of new technology while discouraging others. First and foremost, we can implement a ‘connection criterion’, in which we evaluate technology by how much it replaces, impedes or enables human relationships. The depersonalisation crisis demands from us that kind of vigilance; our social health requires no less.

1. The names of interviewees have been changed to protect confidentiality.