
Trolls be gone

Anonymous users generate most of the toxic abuse and conspiracy theories online. The right to be anonymous should be curtailed

by Stephen Kinsella

The North Macedonian town of Veles (population c.50,000) became an epicentre of fake news website production during the US elections in 2016. Photo by Jonas Bendiksen/Magnum

We have come a long way from the optimism that surrounded the internet in the early 1990s. As Tim Berners-Lee has remarked several times, there was a ‘utopian’ view of its potential to democratise news and reinforce social cohesion. Indeed, only 10 years ago, we were celebrating the role that online communications played in the Arab Spring. Now, when the subject of social media is mentioned, it is far more often associated with movements such as QAnon or with the riot at the United States Capitol; with wild conspiracy theories or the bullying and silencing of women and minority groups.

As a European Union antitrust lawyer whose work in recent years has largely involved advising or challenging the large tech platforms, I take a professional as well as a personal interest in how they have developed. I am also focused on practical solutions. With that in mind, I founded Clean Up the Internet, a campaigning organisation that seeks to improve the overall level of discourse online and to tackle abuse and disinformation.

A particular feature of many online platforms is that, whether explicitly under their terms and conditions, or in practice because of lax enforcement of those terms, they allow users to be anonymous and to conceal or even misrepresent their identities. There are of course many good reasons why a social media user might not want to be identifiable. But allowing uncontrolled use of anonymous accounts presents challenges in terms of whether users can trust the posts they see. In its 2020 report on social media manipulation, the NATO Strategic Communications Centre of Excellence revealed just how easy it is for foreign governments, antidemocratic groups and commercial companies to manipulate public debate through campaigns using networks of fake accounts. For just €300, it was able, via ‘social media manipulation service providers’, to generate inauthentic engagement, including 1,150 comments, 9,690 likes, 323,202 views and 3,726 shares across Facebook, Instagram, YouTube, Twitter and TikTok.

A growing concern for many governments and for civil society generally has been the role that social media has played in amplifying disinformation around COVID-19. An early example was the attempt to link the coronavirus disease with the rollout of 5G telecoms networks, leading to attacks both on wireless masts and on those installing and maintaining them. Our own Clean Up the Internet study in 2020 showed that anonymous accounts were disproportionately active, when compared with the general population of users, in pushing disinformation around this topic, and were also far more likely to be involved with promoting the most extreme theories.

In addition to the concerns around disinformation, numerous studies show that users who believe they cannot be identified are more likely to behave aggressively: a toxic form of what is commonly described as the ‘disinhibition effect’. And recent research conducted by Opinium for the Compassion in Politics group found that 72 per cent of people who have experienced online abuse had been targeted by anonymous or false accounts.

In the Republic, Plato considered whether a man might be expected to remain virtuous even if he could be sure that he would escape detection and punishment for unvirtuous acts. He referred to the mythical Ring of Gyges that rendered its wearer invisible, allowing him to depose and replace the king. The modern monarchs of social media have updated that thought experiment, giving almost everyone the ability to abuse both acquaintances and complete strangers online with little fear of retribution. And it would appear that many cannot resist the temptation to do so.

Faced with these levels of abuse, a growing number of public figures have been campaigning to remove entirely the right to be anonymous online. Sports figures such as the US tennis player Sloane Stephens have spoken openly about the volume of abusive, angry posts they receive after a defeat, and the impact this has on their mental health. Earlier this year, a petition by the UK celebrity Katie Price, prompted largely by vicious online attacks on her son, called for everyone on social media to be identifiable; it attracted more than 500,000 signatures within a matter of days.

The evidence of the dangers posed by unrestrained anonymity is undeniable. Yet there are many who argue that an outright ban on anonymity would be a disproportionate response. While it is true that some users hide behind anonymity specifically in order to harass or troll, others have wholly legitimate reasons for withholding their identity. They could be whistleblowers revealing corporate or departmental wrongdoing, who would otherwise face retribution. They might be political dissidents or individuals trying to avoid an abusive partner. Or they might simply have far less dramatic but equally valid reasons for wanting to be able to explore certain ideas online without having to face the consequences. We need to find a way of reconciling these legitimate but conflicting public interests.

When confronted with demands to keep their users safe, platforms face a clear conflict of interests

Unfortunately, the response by the platforms has normally been to place the bulk of the responsibility upon individual users to detect and block harmful content. As the Scottish footballer Leigh Nicol has pointed out, that imposes a huge burden on her and lets the social media companies off the hook far too easily. Ideally, the platforms would have focused more upon efforts to reduce the incidence of harm. But, in the main, they seem reluctant to take any steps that could appreciably diminish ‘engagement’. As the Facebook whistleblower Frances Haugen testified this October: ‘Engagement-based ranking is a problem across all sites. It is easier to provoke humans to anger. Engagement-based ranking figures out our vulnerabilities and panders to those things.’ When confronted with demands to keep their users safe, platforms face a clear conflict of interests. They can see that abusive or extreme content generates more engagement, and also that anonymous accounts inflate the user numbers they can present to advertisers, which translates into income – the overriding consideration for at least some platforms.

It’s a common deflection technique to argue that any single action can’t fix all the problems. It’s certainly true that curtailing anonymity would not, at a stroke, end online mischief. For example, when we look at online disinformation around COVID-19, the vast majority of the harmful traffic is promoted by a dozen or so ‘misinformation superspreaders’ who for whatever reason are happy to peddle these stories in their own names. But even looking at that ‘dirty dozen’, their impact was often magnified by inauthentic or suspicious accounts that echoed their messages and inflated their follower counts to make them appear more widely respected than they really were. Without those hordes of fake followers amplifying their disinformation, many COVID-19 deniers would have struggled to build the platform they now possess.

The scale of the challenge posed by abuse and disinformation can appear daunting. However, there is clear public demand for regulatory intervention to force platforms to change their approach on issues such as identity and authenticity of accounts. One solution could be to turn the problem of anonymity on its head. Rather than seeking to limit the right or ability of users to be anonymous online, instead every user could have a right to have their account authenticated.

To take Twitter as an example, it currently gives a ‘blue tick’ to a tiny number of accounts (applying Twitter’s own extremely opaque criteria). That could be expanded to give every user a right to go through some form of verification process and be awarded, perhaps, a ‘green tick’. One advantage of this approach would be that every user would be able to see whether another had been verified, which should help them assess whether the content is trustworthy. In addition, every user could then have a right to decline interactions with unverified accounts. What that would mean is that, rather than having to block the accounts of people who are being disruptive or abusive, as the footballer Nicol was left to do, a user could simply decide that she does not want to see replies from any unverified accounts and could screen them all out as a category. She would still be able to broadcast her message to the whole world, but she wouldn’t see any replies from non-authenticated accounts. That way, she would to a far greater extent than at present receive only the more helpful or constructive posts, while most if not all of the seriously abusive or irrelevant ones would be filtered out. And any ‘fan’ who remained determined to be abusive would need to do so from an authenticated account, knowing that they would be easily identifiable and traceable.
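To make the mechanics of that filter concrete, here is a minimal sketch in Python. The names (Account, Reply, visible_replies) are hypothetical, invented purely for illustration; no platform exposes this exact model, but screening out unverified accounts as a category would amount to something like this:

from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    verified: bool  # has this account passed the optional 'green tick' check?

@dataclass
class Reply:
    author: Account
    text: str

def visible_replies(replies: list[Reply], hide_unverified: bool) -> list[Reply]:
    """Return only the replies this particular reader has chosen to see.

    If the reader has opted to screen out unverified accounts, their
    replies are dropped as a category: the authors can still post to
    the world at large, but this reader never sees them.
    """
    if not hide_unverified:
        return list(replies)
    return [r for r in replies if r.author.verified]

# A reader who switches the filter on sees only verified accounts' replies.
replies = [
    Reply(Account('@supporter', verified=True), 'Unlucky today, great effort'),
    Reply(Account('@mickeymouse123', verified=False), '...abuse...'),
]
print([r.text for r in visible_replies(replies, hide_unverified=True)])
# -> ['Unlucky today, great effort']

The design point worth noting is that the filter sits with the reader rather than the poster: nothing is deleted from the platform, and anyone with a legitimate reason to stay anonymous can keep posting; they simply lose the guaranteed audience of those who have opted out of hearing from unverified accounts.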

Any user could open a Twitter account using an email such as mickeymouse@gmail or a burner phone

Those who have good reason to be anonymous and to interact online in ways that cannot easily be traced could still continue as at present and in security. It’s just that one side-effect of anonymity – vicious trolling – would lose a great deal of its power as ordinary users could simply choke off the principal source.

It isn’t easy to get hard data on what percentage of racist or otherwise abusive posts on social media come from anonymous accounts. That is largely because the social media companies, who otherwise promote ‘sharing’, are famously reluctant to provide access to their data, even to independent academics and researchers. Yet they issue public assertions that are barely credible. For instance, Twitter responded to widespread concern about racist abuse of England footballers following the UEFA Euro 2020 final with a widely publicised claim that 99 per cent of the problematic posts were not from anonymous accounts. That claim jarred with a report conducted last year by the highly respected company Signify, which has worked with the football authorities to track down offenders, and which paints a very different picture.

After numerous requests for clarification, Twitter finally confirmed to me that its definition of a ‘non-anonymous’ account is one that is linked to an email address or phone number, something that Twitter now requires to open an account. The 1 per cent described as anonymous were apparently ‘legacy’ accounts dating back to a period before that requirement. But as Twitter must know, any user could open a Twitter account using an email account with a transparently inauthentic name such as mickeymouse@gmail or a non-traceable burner phone. The claim that such accounts are ‘non-anonymous’ or even ‘verified’ is therefore disingenuous at best, and does not conform to how the average citizen would understand those terms. Certainly, many offending individuals on the platform are not verified or identifiable to other users, and remain highly susceptible to the disinhibition effect.

There is no doubt that the issue of verification or identification can raise its own concerns regarding civil liberties. For instance, many are concerned that any such process would simply amount to handing over a lot more data either to governments or to the platforms themselves, which they could then use to target and monetise citizens. We know that this is how surveillance capitalism works. However, such an outcome is not inevitable. For example, in the Nordic countries, the ‘BankID’ solution relies on the data that citizens have already provided to their bank, which they clearly trust for these purposes. It then generates a digital identity that can be used to create accounts on other platforms without having to share the underlying data with those platforms, or anyone else. In short, there do exist privacy-maximising, hassle-reducing solutions that should appeal to many users, though perhaps not to the platforms, which would not then get the opportunity to collect and store our data.
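The essential property of such a scheme, attestation without sharing the underlying data, can be sketched in a few lines of code. What follows is not BankID’s actual protocol, merely an illustration of the general idea using public-key signatures; it assumes the third-party Python cryptography library, and it glosses over real-world necessities such as certificates, key distribution, revocation and replay protection:

import json
from cryptography.hazmat.primitives.asymmetric import ed25519

# --- At the bank, which already holds the customer's identity data ---
bank_key = ed25519.Ed25519PrivateKey.generate()
bank_public_key = bank_key.public_key()  # published, so platforms can check signatures

# The assertion deliberately contains no name, address or account details:
# only the claim that a given pseudonymous handle is backed by a verified person.
assertion = json.dumps({'platform_handle': '@some_user', 'verified_person': True}).encode()
signature = bank_key.sign(assertion)

# --- At the platform ---
# The platform sees only the assertion and the signature, never the customer's
# underlying data. verify() raises InvalidSignature if the assertion was forged.
bank_public_key.verify(signature, assertion)
print('green tick granted: identity attested without any data changing hands')

The point of the separation is that the platform learns only that a trusted institution vouches for the person behind the handle; the identity data itself stays with the bank.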

It’s true that there will still be a relatively small number of users who do not have online banking, and other routes would have to be found for them. But this need not be an insurmountable problem. One alternative is the Yoti app, which allows users to store and encrypt their own data and minimise what they have to share with others. Other solutions keep appearing, and will continue to do so as demand for safer online interactions grows. And if there are circumstances where users need to verify themselves with the platform itself, the law could stipulate that platforms keep the data secure and separate, and not use it for any purpose other than the initial verification. Further, it is important to bear in mind that, in any event, this need for an identity check would apply only to those who want to be verified, and the vast majority of them will be relatively well placed to do so. The aforementioned poll for Compassion in Politics showed that some 80 per cent of users would welcome the right to be verified, and that more than 70 per cent would take advantage of the ability to limit interactions with unverified accounts.

Some question whether individual countries or regions can really take effective action, or whether we need to wait for a global solution. One obvious response is that the US, as the jurisdiction from which most of the large platforms originate, seems to be stirring itself to action that has the potential to dramatically alter the balance between larger players, smaller competitors and users. When it comes to other regions, as so often, the ‘global solution’ is seized upon by those who see it as a way to kick the idea into the long grass. But this ignores the reality that, even if the platforms are not based within a given jurisdiction, their users are. It is therefore perfectly possible for the EU or the UK to legislate to control behaviour or designs that affect their citizens. Realistically, if the US, the EU or the UK takes decisive action in this area, any solution will probably have to be applied near-globally (China possibly excepted), and we can therefore have a highest-denominator approach to online safety rather than a lowest-denominator one. A good recent example of this domino effect is that Instagram, in response to the UK’s Age Appropriate Design Code, has announced global changes to its platform that will limit sensitive content and reduce the amount of targeted advertising directed at children.

Finally, how could one bring about these improvements in terms of verification and trust? Ideally, this would be achieved through dialogue with the platforms to encourage them to make the necessary changes voluntarily. That would be in their interests if it helps them stave off intrusive government intervention into how they run their businesses. But, ultimately, if the platforms don’t show greater willingness to do so, there is a growing drumbeat for legislation in a number of territories to offer greater protections to users.

The right to identification online is only one possible means of achieving a better online world. But it would be an important first step, and a proportionate one.