Embracing the robot

Robot relationships need not be kinky, exploitative or fake. In fact they might give human relationships a helpful boost

by John Danaher

Azuma Hikari, the virtual home robot from Gatebox. Photo by Tomohiro Ohsumi/Bloomberg/Getty

There is a heartbreaking scene in the middle of Blade Runner 2049 (2017). The hero of the movie, a replicant called K, lives a drab existence in a dystopian future Los Angeles. The one bright spot in his life is his patient and sympathetic partner, Joi. They share many affectionate moments on screen. But then she is killed, in the midst of declaring her love, in one of the movie’s most gut-wrenching moments. I know I shed a tear when I first saw it.

There is, however, something unusual about Joi. She is a mass-produced, artificially intelligent hologram, designed to be the perfect partner. She learns from her interactions with K, and shifts her personality to suit his moods. Her ‘death’, such as it is, comes about because she can exist only in the presence of a particular holographic emanator. When that device is destroyed, so is she.

Joi would be little more than a science-fiction curio if it were not for the fact that real-world companies are trying to create versions of her. The Japanese company Gatebox, for instance, sells Azuma Hikari. ‘She’ is a holographic AI, projected inside a cylindrical tube, who is intended to be an intimate companion. In an advertisement, we see her waking up her (male) user in affectionate tones and greeting him when he comes home at the end of the day. She provides a simulacrum of married life for the growing population of single Japanese men. And it’s not just emotional support that is on the cards – sex is, too. Although this is not a feature of Azuma Hikari, other companies are eagerly racing to create robotic lovers and sexual partners.

Is this a welcome development? A number of critics have voiced their concerns. They claim that relationships with robots would be fake and illusory: perceptual tricks, foisted on us by commercially driven corporations. They are also concerned about how these robotic partners will represent real people, particularly women, and the consequences that their use will have for society.

Contrary to the critics, I believe our popular discourse about robotic relationships has become too dark and dystopian. We overstate the negatives and overlook the ways in which relationships with robots could complement and enhance existing human relationships.

In Blade Runner 2049, the true significance of K’s relationship with Joi is ambiguous. It seems that they really care for each other, but this could be an illusion. She is, after all, programmed to serve his needs. The relationship is an inherently asymmetrical one. He owns and controls her; she would not survive without his good will. Furthermore, there is a third party lurking in the background: she has been designed and created by a corporation, which no doubt records the data from her interactions, and updates her software from time to time.

This is a far cry from the philosophical ideal of love. Philosophers emphasise the need for mutual commitment in any meaningful relationship. It’s not enough for you to feel a strong emotional attachment to another; they have to feel a similar attachment to you. Robots might be able to perform love, saying and doing all the right things, but performance is insufficient. As the moral philosophers Sven Nyholm and Lily Frank at Eindhoven University of Technology in the Netherlands put it:

If love boiled down to certain behavioural patterns, we could hire an actor to ‘go through the motions’ … But, by common conceptions, this would not be real love, however talented the actor might be. What goes on ‘on the inside’ matters greatly to whether mutual love is achieved or not.

Furthermore, even if the robot were capable of some genuine mutual commitment, it would have to give this commitment freely, as the British behavioural scientist Dylan Evans argued in 2010:

Although people typically want commitment and fidelity from their partners, they want these things to be the fruit of an ongoing choice …

This seems to scupper any possibility of a meaningful relationship with a robot. Robots will not choose to love you; they will be programmed to love you, in order to serve the commercial interests of their corporate overlords.

This looks like a powerful set of objections to the possibility of robot-human love. But not all these objections are as persuasive as they first appear. After all, what convinces us that our fellow human beings satisfy the mutuality and free-choice conditions outlined above? It’s hard to see what the answer could be other than the fact that they go through certain behavioural motions that are suggestive of this: they act ‘as if’ they love us and ‘as if’ they have freely chosen us as their partners. If robots can mimic these behavioural motions, it’s not clear that we would have any ground for denying the genuineness of their affection. The philosopher Michael Hauskeller made this point rather well in Mythologies of Transhumanism (2016):

[I]t is difficult to see what this love … should consist in, if not a certain kind of loving behaviour … if [our lover’s] behaviour toward us is unfailingly caring and loving, and respectful of our needs, then we would not really know what to make of the claim that they do not really love us at all, but only appear to do so.

The same goes for concerns about free choice. It is, of course, notoriously controversial whether humans have free choice, rather than just the illusion of it. But if we need to believe that our lovers freely choose their ongoing commitment to us, it is hard to know what could ground that belief other than certain behavioural indicators, eg their apparent willingness to break the commitment when we upset or disappoint them. There is no reason why such behavioural mimicry needs to be out of bounds for robots. Elsewhere, I have defended this view of human-robot relations under the label ‘ethical behaviourism’: the position that the ultimate epistemic grounding for our beliefs about the value of relationships lies in the detectable behavioural and functional patterns of our partners, not in some deeper metaphysical truths about their existence.

Ethical behaviourism is a bitter pill for some. Hauskeller, to take just one example, expresses the view well in the passage above, but ultimately rejects it when it comes to human-robot relationships. He argues that behavioural patterns are enough to convince us that our human partners love us only because we have no reason to doubt the sincerity of those behaviours. The problem with robots is that we do have such reasons:

[A]s long as we have an alternative explanation for why [the robot] behaves that way (namely, that it has been designed and programmed to do so), we have no good reason to believe that its actions are expressive of anything at all.

Put differently: (i) because the robot has a different developmental origin to a human lover and/or (ii) because it is ultimately programmed (and controlled) by others, who might have ulterior motives, there is no reason to think that you are in a meaningful relationship with it.

But (i) is difficult to justify in this context. Unless you think that biological tissue is magic, or you are a firm believer in mind-body dualism, there is little reason to doubt that a robot that is behaviourally and functionally equivalent to a human could sustain a meaningful relationship. There is, after all, every reason to suspect that we are programmed, by evolution and culture, to develop loving attachments to one another. It might be difficult to reverse-engineer our programming, but this is increasingly true of robots too, particularly when they are programmed with learning rules that help them to develop their own responses to the world.

The second element (ii) provides more reason to doubt the meaningfulness of robot relationships, but two points arise. First, if the real concern is that the robot serves ulterior motives and that it might betray you at some later point, then we should remember that relationships with humans are fraught with similar risks. As the philosopher Alexander Nehamas points out in On Friendship (2016), this fragility and possibility of betrayal is often what makes human relationships so valuable. Second, if the concern is about ownership and control, then we should remember that ownership and control are socially constructed facts that can be changed if we think it morally appropriate. Humans once owned and controlled other humans, but we (or at least most of us) eventually saw the moral error in this practice. We might learn to see a similar moral error in owning and controlling robots, particularly if they are behaviourally indistinguishable from human lovers.

The argument above is merely a defence of the philosophical possibility of robot lovers. There are obviously several technical and ethical obstacles that would need to be cleared in order to realise this possibility. One major ethical obstacle concerns how robots represent (or performatively mimic) human beings. If you look at the current crop of robotic partners, they seem to embody some problematic, gendered assumptions about the nature of love and sexual desire. Azuma Hikari, the holographic partner, represents a sexist ideal of the domestic housewife, and in the world of sex dolls and sexbot prototypes, things are even worse: we see a ‘pornified’ ideal of female sexuality being represented and reinforced.

This has a lot of people worried. For instance, Sinziana Gutiu, a lawyer in Vancouver specialising in cyberliability, is concerned that sexbots convey the image of women as sexual tools:

To the user, the sex robot looks and feels like a real woman who is programmed into submission … The sex robot is an ever-consenting sexual partner …

Gutiu thinks that this will enable users to ‘act out rape fantasies and confirm rape myths’. Kathleen Richardson, a professor of ethics and culture of robotics at De Montfort University in Leicester and the co-founder of the Campaign Against Sex Robots, has similar concerns, arguing that sexbots effectively represent women as sexual commodities to be bought and sold. While both these critics draw a link between such representations and broader social consequences, others (myself included) focus specifically on the representations themselves. In this sense, the debate plays out much like the long-standing debates about the moral propriety of pornography.

Let’s set the concerns about consequences to one side for now, and consider whether there is something representationally disturbing about robot lovers. Do they necessarily convey or express problematic attitudes toward women (or men)? To answer that, we need to think about how symbolic practices and artefacts carry meaning in the first place. Their meaning is a function of their content, ie what they resemble (or, more importantly, what they are taken to resemble by others) and the context in which they are created, interpreted and used. There is a complex interplay between content and context when it comes to meaning. Content that seems offensive and derogatory in one context can be empowering and subversive in another. Videos and images that depict relationships of subordination and domination can be demeaning in certain contexts (eg, when produced and consumed by purveyors of mainstream hardcore pornography), but carry a more positive meaning in others (eg, when produced and consumed by members of the BDSM community or by proponents of ‘feminist pornography’).

This has implications for assessing the representational harms of robot lovers because neither their content nor the context in which they are used is fixed or immutable. It is almost certainly true that the current look and appearance of robot lovers is representationally problematic, particularly in the contexts in which they are produced, promoted and used. But it is possible to change this. We can learn here from the history of the ‘feminist porn’ movement, a sub-culture within pornography that maintains that pornographic representations of women need not be derogatory or subordinating, and that they can play a positive role in sexual self-expression.

To do this, proponents of the feminist porn movement pursue three main strategies: (i) they try to change the content of porn, so that it is not always from the male gaze, and so that it depicts a more diverse range of activities and forms; (ii) they try to change the processes through which porn is created, making it more ethical and inclusive of female voices; and (iii) they try to change the contexts in which it is consumed, creating networks of feminist sex shops and discussion groups for marketing and interpreting the content.

A similar set of strategies could be followed in the case of sexbots. We could work to change the representational forms of sexbots so that they include diverse female, male and non-binary body shapes, and follow behavioural scripts (pre-programmed or learned) that do not serve to reinforce negative stereotypes, and perhaps even promote positive ones. We could also seek to change the processes through which sexbots get created and designed, encouraging a more diverse range of voices in the process. To this end, we could work to promote women who are already active in sextech. This would include people such as Cindy Gallop, who founded the website MakeLoveNotPorn and has recently started a venture-capital fund for sextech entrepreneurs; Stephanie Alys, who co-founded the sex-toys company MysteryVibe and has spoken positively about the role of sexbots in conversations about human sexuality; and Kate Devlin, a lecturer in computing at Goldsmiths, University of London, who has criticised Richardson’s Campaign Against Sex Robots, and argued that sex robotics could allow us to explore sexuality ‘without the restrictions of being human’. Finally, we could also create better contexts for the marketing and use of sex robots. This would require greater ‘consciousness raising’ around the problems of gendered harassment and inequality, and a greater sensitivity to the representational harms that could be implicated by this technology.

We are already starting to do this, but it is undoubtedly an uphill battle that requires more effort. Given this difficulty, it is going to be tempting to slip back into calling for bans on the production of such content, but censorious attitudes are unlikely to be successful. We have always used technology for the purposes of sexual stimulation and gratification, and we will continue to do so in the future.

Concerns about the representational harms of robots often translate into concerns about their consequences. If robots represent or express misogynistic attitudes, the worry is that users will absorb those attitudes and carry them into their interactions with real people: they will be inclined to sexual aggression and violence, unwilling to compromise, and possibly more withdrawn and misanthropic.

Obviously, the consequences of robot lovers would be extremely relevant to any debate about their desirability. If the consequences were clearly and uncontroversially negative, then this would reinforce any negative social meaning they might have, and provide us with strong reasons to discourage their use. If the consequences were clearly and uncontroversially positive (eg, because their use actually discouraged real-world sexual violence), then their negative social meaning could be reformed, and we might have strong reason to encourage their use.

The problem is that we don’t know which of these two possibilities is more likely right now. We don’t have any empirical studies on the effects of robot lovers. We can draw inferences only from analogous debates, such as those about the real-world effects of exposure to pornography. But those debates don’t provide much guidance. In a recent book chapter, I reviewed the empirical evidence on the effects of exposure to hardcore pornography and noted that the overall picture was, at best, ambiguous: some studies suggested that there were harmful effects, others suggested that there weren’t, and yet others suggested that the effects were positive. On top of this, many of the researchers lamented the poor quality and often biased nature of the current research literature.

The picture is much the same if you look at other ‘media-effects’ debates, such as the one about exposure to violent video games. This is discouraging because it suggests that debates about the consequences of robot lovers are likely to become similarly mired in controversy and uncertainty. This should not be surprising: complex socio-behavioural phenomena such as sexual aggression or misogyny are likely to be causally over-determined, and subject to many different contextual and individual variations. Assuming that there will be a clear, linear cause-and-effect relationship between the use of robot lovers and other interpersonal behaviours – one that can meaningfully guide public policy around their use and development – is probably naive. The reality will be messier and less prescriptively useful.

Thus far, I have pushed back against critics and argued that genuinely meaningful relationships with robots are possible, and that the representational and consequential harms of those relationships can be overstated. I want to close by defending a more positive stance, based on different possible futures with robotic partners.

One possibility follows directly from the claim that meaningful relationships with robots are possible. If this is correct, it means that the goods we currently associate with human relationships are also realisable in robot relationships. This could be a positive consequence because it would enable us to distribute these relationship goods more widely. The philosopher Neil McArthur at the University of Manitoba makes this point specifically in relation to sexual relationships, arguing that there are many people who are excluded from the possibility of entering into valuable sexual relationships with other human beings. If we grant that sexual experiences are part of the well-lived life, and that there might even be a right to sex, this should be seen as a problem. Furthermore, the problem goes beyond sex: people are shut out from other relationship goods too, such as companionship and care. It is not possible to resolve this imbalance in the distribution of relationship goods by trying to find a human partner for everyone, since doing so would probably require mass coercion or compulsion, but it might be possible to do so with robotic relationship partners.

In addition to this, it is a mistake to always think of robots as replacements for human lovers; we could also view them as complements to existing relationships. The ideal of human intimacy holds that we should relate to one another on terms of equality. But this is often not possible. One partner might demand too much from the other, leading the other to withdraw or retaliate. This dynamic can wax and wane over the lifetime of a relationship, with one partner being overly demanding at one time and the other being overly demanding at a different time. Robotic partners could help to redress these imbalances by providing third-party outlets that are less destructive of the human-to-human relationship because they might be less likely to be perceived as rivals.

One obvious application is in addressing desire discrepancy and the need for variety in sexual relationships. But, again, the possibilities go beyond just sex. ‘Triadic’ relationships between humans, robots and other humans could ease the tension and pressure across many relationship dimensions. Of course, whether this comes to pass depends on how people perceive and respond to the presence of robotic lovers in intimate contexts. As the economist Marina Adshade at the University of British Columbia argues, one plausible consequence of the widespread availability of robot lovers would be the normalisation of non-monogamy, and the reorientation of intimate relationships to focus less on sexual and emotional exclusivity, and more on companionship, care and shared life-plans.

In the coming decades, people will almost certainly be having relationships with more sophisticated robots, whatever we think about this. There is nothing intrinsically wrong with loving a robot, and some forms of human-robot love could complement and enhance human relationships. At the same time, some could be socially destructive, and it is important that they are anticipated and discouraged. The key question, then, is not whether we can prevent this from happening, but what sort of human-robot relationships we should tolerate and encourage.

Robot Sex: Social and Ethical Implications (2017) by John Danaher is published by The MIT Press.