A doorman in uniform at the entrance of a building looks out while people walk past.

Photo by Andrew Caballero-Reynolds/AFP/Getty


Battling implicit bias

Training is a cheap solution to a hard problem. It is the systems that allow for biased behaviour that need to change

by Jeffrey To


On a Thursday afternoon in April 2018 in a Starbucks in downtown Philadelphia, police handcuffed two African American entrepreneurs, Rashon Nelson and Donte Robinson. A manager had reported them for waiting inside the coffeehouse while not having purchased anything. About a month later, on 29 May, Starbucks closed its 8,000 stores nationwide – at a cost of an estimated $16.7 million in sales – so that its 175,000 employees across the United States could participate in a four-hour ‘implicit bias’ training session that day.

Implicit bias was once jargon that academic psychologists used to refer to people’s automatically activated thoughts and feelings toward certain groups rather than others. Now, it’s a buzzword that regularly appears in news articles and, occasionally, presidential debates. Implicit biases stand in contrast to explicit biases, people’s conscious or self-reported thoughts and feelings toward certain groups over others, such as when people overtly voice dislike toward Asian people. Implicit biases are more subtle. You can think of them as tiny stories that flicker in our minds when we see other people. A pharmacy employee might see a Black woman crouching on the floor and zipping up a bag, and immediately think she’s attempting to steal, as indeed happened in 2015 at a Shoppers Drug Mart in Toronto (which was later fined $8,000 for the discrimination). Or a border patrol officer might single out Black citizens for identity checks, thinking they pose a threat, as happened in the Netherlands in 2018; a Dutch appeals court ruled the practice unlawful this year.

The concept of implicit bias has captivated social psychologists for decades because it answers a perennial question: why is it that, while most people espouse diversity, they still discriminate? And why is it that, though they say – and genuinely believe – they want equality, they behave in ways that favour some groups over others? Indeed, a research study with more than 600,000 participants demonstrated that, while white participants self-report relatively neutral explicit biases toward Black people, they still hold anti-Black implicit biases; another research study found that citizens of 34 countries implicitly associate men with science, more so than they do women. The assumption that drives implicit bias research, then, is that these biases, unchecked, can substantially influence thoughts and behaviours, even among well-meaning people. For instance, minority job applicants with foreign-sounding names receive fewer call-backs for job interviews than equally qualified white counterparts; men dominate leadership positions in fields like medicine even when there is no shortage of women.

So, implicit bias is a problem. What do most organisations do to solve it? Implicit bias training, sometimes known as ‘anti-bias training’ or ‘diversity training’, aims to reduce people’s implicit biases (how people think), and thereby presumably reduce discrimination (how people act). While the structure and content of these trainings can vary substantially, what typically happens is that, in one or two hours, an instructor provides attendees with a primer on implicit biases, explaining, for instance, the theory and evidence behind the concept; attendees then complete an Implicit Association Test (IAT), used to measure implicit biases, and reflect on their scores; and, finally, the instructor briefs attendees on ways to mitigate these biases (for instance, the National Institutes of Health’s online implicit bias training module suggests employees ‘be transparent’ and ‘create a welcoming environment’). These trainings have become a burgeoning industry: McKinsey & Company estimated in 2017 that implicit bias training costs US companies $8 billion annually.

Scores of criticisms of these trainings already exist online, but I can give you my sense of why they’re so ineffectual. I completed an ‘unconscious bias training’ module as part of a work orientation at my alma mater. (Note: unconscious bias and implicit bias are not actually the same.) After spending about 30 minutes watching three modules of content that were supposed to last 90 minutes (I fast-forwarded most of the videos), and completing the quizzes after each module, I was left feeling the way workplace orientation modules always leave me: bored, exasperated, like I had wasted my time on another check-box exercise or diversity-posturing activity.

I’m also an implicit bias researcher, and here’s what the scientific literature says about these trainings: they largely don’t work. There are three main reasons why. First, the trainings conflate implicit biases with unconscious biases; this risks delegitimising discrimination altogether by attributing biased behaviour to the unconscious, which releases people from responsibility. Second, it’s very difficult to change people’s implicit biases, especially because social environments tend to reinforce them. And third, even if we could change people’s implicit biases, it wouldn’t necessarily change their discriminatory behaviours.

Here’s where I land: while trainings, at best, can help raise awareness of inequality, they should not take precedence over more meaningful courses of action, such as policy changes, that are more time-intensive and costly but provide lasting change. If organisations want to effect meaningful societal change on discrimination, they should shift their focus away from implicit biases and toward changing the systems that perpetuate biased behaviour.

To understand all of this, it’s important to know how the common measurement tool for implicit biases – the IAT – works. (My lab is devoted to improving these kinds of tools.) The easiest way to understand what the test entails is to do one: the standard version measuring racial biases is publicly available through the website Project Implicit, which hosts IATs on a variety of topics (race, gender, sexual orientation). Otherwise, here’s a quick rundown. The IAT flashes on your screen two kinds of stimuli: faces, either of Black people or white people, and words, either good words (‘smile’, ‘honest’, ‘sincere’) or bad words (‘disaster’, ‘agony’, ‘hatred’). In some trials, you’re asked to press ‘E’ on your keyboard if either a Black face or a bad word is shown, and ‘I’ if either a white face or a good word is shown.

But here’s where it gets tricky: the category paired with each key switches as you progress. If in earlier trials ‘E’ meant Black or bad, it can now mean Black or good (and ‘I’ white or bad). Let’s say that you’re now slower to press ‘E’ when it pairs Black with good than when it paired Black with bad. That could suggest you hold more negative implicit biases toward Black people compared with white people, because you’re slower to respond to Black when linked with good than with bad. (The ‘compared with’ is important here; the standard IAT evaluates one group relative to another.)
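How do those reaction times become a score? Here is a minimal sketch of a simplified ‘D-score’ computation, the general approach behind IAT scoring, using hypothetical latencies; the actual algorithm used by Project Implicit adds further steps, such as error penalties and the filtering of implausibly fast or slow trials.

```python
from statistics import mean, stdev

def iat_d_score(block_a_ms, block_b_ms):
    """Simplified IAT D-score: the difference in mean response latency
    between the two key pairings, scaled by the pooled standard deviation.
    Positive values mean slower responses in block B than in block A."""
    diff = mean(block_b_ms) - mean(block_a_ms)
    pooled_sd = stdev(block_a_ms + block_b_ms)  # simplification of the pooled SD
    return diff / pooled_sd

# Hypothetical latencies (milliseconds) for one participant
black_or_bad = [612, 655, 590, 704, 631, 668]   # 'E' = Black face or bad word
black_or_good = [745, 802, 718, 769, 838, 791]  # 'E' = Black face or good word

# A clearly positive D-score here would be read as slower responding
# when Black is paired with good, i.e. a relative anti-Black implicit bias.
print(f"D-score: {iat_d_score(black_or_bad, black_or_good):.2f}")
```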

At the end of the test, people receive their IAT score, which tells them which group they hold an ‘automatic preference’ for. This is the part that can incite shock or horror because, when people see that they hold an automatic preference toward white people, it might lead them to believe that, while they thought they preached equality, they were subconsciously biased the entire time.

What some people get wrong, though, is that an automatic preference is not the same as an unconscious bias. Unconsciousness presumes an absence of awareness and thus conscious control. But an automatic preference doesn’t necessarily require either of those qualities. It’s like a habit, say nail-biting: you’ve associated stress with nail-biting so strongly that it doesn’t take long for stress to trigger you to bite your nails, but that doesn’t mean you’re not aware of it, that you can’t predict when it happens, or that you can’t, with effort, stop it when it happens.

We generally pardon wrongdoers if their offence was accidental as opposed to intentional

Numerous studies have shown that people can be aware of their implicit biases. One 2014 study by the psychologist Adam Hahn and his colleagues shows that people can generally predict their own IAT scores with a high degree of accuracy. They found an average correlation of r = .65 between participants’ predictions of their IAT scores and their actual IAT scores – a correlation typically considered large in psychological research; estimates of the heritability of IQ and of educational attainment, for instance, sit around that mark. If people generally weren’t aware or conscious of their implicit biases, they wouldn’t be able to predict their IAT performance. Insofar as the IAT measures implicit biases, these biases are likely not unconscious.

Unfortunately, this misunderstanding remains widespread. For instance, an article by Christine Ro on the BBC in 2021 uses ‘implicit biases’ and ‘unconscious biases’ synonymously, as do an article on the website of the Office of Diversity and Outreach at the University of California San Francisco, an article by David Robson in The Guardian in 2021, and an article by Francesca Gino and Katherine Coffman in the Harvard Business Review in 2021.

To be clear, unconscious biases may exist, and just because someone might be aware of their implicit biases doesn’t mean they’re conscious of the effects of those biases on other people, or that they can effectively control them.

But here’s why it’s important not to conflate ‘implicit bias’ and ‘unconscious bias’: claiming that discrimination arises from the unconscious psychologises it, presenting discrimination as an unintentional act rather than a preventable consequence – and thereby enabling people to feel less morally culpable for discriminating. One study from 2019 demonstrates this experimentally. The social psychologist Natalie Daumeyer and her colleagues at Yale showed participants a fabricated article in which both Democratic and Republican doctors treated patients differently, based on the doctors’ own political ideology, when the patients engaged in somewhat politicised health behaviours (say, gun ownership or marijuana use). In one condition, participants read that the doctors were somewhat aware that they were treating patients differently. In the other condition, participants read a definition of bias as unconscious – the ‘attitudes or stereotypes that affect our understanding, actions, and decisions in ways that we are typically not aware of’ – and that the doctors had no conscious knowledge that they treated their patients differently based on their own political views. Finally, participants completed a questionnaire measuring whether the doctors should be held responsible and whether they deserved to be punished.

What did they find? When the doctors were described as having no conscious knowledge of the unfair treatment, participants rated them as less accountable, and less deserving of punishment, than when the doctors’ behaviour was ascribed to conscious bias. Why the difference? Awareness signifies intentionality, and we generally pardon wrongdoers if their offence was accidental as opposed to intentional. This detail matters. If diversity practitioners perpetuate the notion that unconscious bias underlies daily acts of discrimination, they risk reducing perpetrators’ accountability and forestalling behaviour change.

Even when implicit bias is conscious, it is notoriously hard to change. One study tested nine interventions previously shown to reduce implicit biases, and found that the changes subsided after several hours or, at best, several days. That’s because, while biases might be an individual characteristic (similar to someone’s personality type or temperament), they require people’s social environment – work, family, political and technological circumstances, for instance – to make them accessible, as the social psychologist Keith Payne argues in ‘The Bias of Crowds’ (2017). If the environment does not change, the bias will return.

To support this view, consider the fact that IATs generally measure individuals’ implicit biases unreliably. In other words, the IAT score you receive today can differ from the IAT score you receive tomorrow.

Psychometricians consider IATs ‘noisy’ measures: your scores can fluctuate depending on context – your mental state (tired, anxious), your physical surroundings (with friends, with colleagues), and what you were exposed to before doing the test (if you watched Barbie beforehand, you might be primed to respond more positively to women in a gender-science IAT). So, trying to change individuals in order to shift their biases may be a futile exercise: since our social environment heavily influences our biases, short-term implicit bias interventions can hope to achieve only temporary effects before the environment reinstates our initial biases.
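To see what that noise does to reliability, consider a toy simulation – my own illustration, not drawn from any particular study. It assumes each person has a stable ‘true’ bias but that any single session adds sizeable measurement error, and then checks how well today’s scores predict tomorrow’s:

```python
import random
import statistics

random.seed(42)

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

n = 1000
true_bias = [random.gauss(0.3, 0.2) for _ in range(n)]  # stable trait per person
noise_sd = 0.4  # assumed session-to-session noise, larger than the trait spread

today = [b + random.gauss(0, noise_sd) for b in true_bias]
tomorrow = [b + random.gauss(0, noise_sd) for b in true_bias]

# With these assumed values, r should land near 0.2: the same people,
# one day apart, barely resemble themselves on the measure.
print(f"test-retest r: {pearson_r(today, tomorrow):.2f}")
```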

It’s one thing to know whether the IAT measures implicit biases. But how – if at all – do these biases relate to behaviour? This question has been studied thoroughly, with four meta-analyses (studies that compile and analyse other studies) synthesising the findings of hundreds of studies that largely use the IAT. They converge on a common finding: while implicit biases do demonstrate a reliable correlation with individual behaviour, this correlation is generally weak; that’s why Project Implicit warns participants against using their IAT scores to diagnose anything meaningful about themselves.

Implicit biases at a regional level can be strongly associated with regional-level behavioural outcomes

On the other hand, in line with the ‘Bias of Crowds’ model, aggregating the scores of many people taking the IAT can help us predict behaviour. The IAT poorly predicts the behaviour of one person, but what about taking the average IAT score of an entire city or state, and correlating it with regional outcomes?

One study, by the social psychologist Eric Hehman and his colleagues, provides some insight. They studied the implicit biases of more than 2 million residents across the US within their metro areas, and also drew on metro-area sociodemographic data from crowdsourced and fact-checked databases for measures like overall wealth, unemployment rates and crime levels. They found that, out of 14 variables, only one – greater anti-Black implicit bias among white residents of certain metro areas – significantly correlated with greater use of lethal force against Black people relative to that metro area’s base rates. For instance, metro areas in Wisconsin held higher anti-Black implicit bias on average, which correlated with higher use of lethal violence against Black people there. These findings, in line with the ‘Bias of Crowds’ model, highlight that, whereas implicit biases aren’t strongly associated with individual-level behaviour, implicit biases at a regional level can be strongly associated with regional-level behavioural outcomes, possibly because implicit biases reflect systemic, rather than personal, differences.
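The statistical logic is easy to demonstrate with a toy model – again my own sketch, not Hehman’s analysis. Give each region a systemic bias level, let individual scores and behaviours be noisy reflections of it, and the individual-level correlation stays weak while the regional averages line up almost perfectly:

```python
import random
import statistics

random.seed(1)

def pearson_r(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

n_regions, per_region = 50, 500
indiv_bias, indiv_behav, region_bias, region_behav = [], [], [], []

for _ in range(n_regions):
    context = random.gauss(0.3, 0.15)  # the region's systemic bias level
    scores = [context + random.gauss(0, 0.4) for _ in range(per_region)]
    # Behaviour tracks the shared context far more than any individual score
    acts = [context + 0.1 * s + random.gauss(0, 0.4) for s in scores]
    indiv_bias += scores
    indiv_behav += acts
    region_bias.append(statistics.mean(scores))
    region_behav.append(statistics.mean(acts))

print(f"individual-level r: {pearson_r(indiv_bias, indiv_behav):.2f}")  # weak
print(f"regional-level r:   {pearson_r(region_bias, region_behav):.2f}")  # strong
```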

Note, however, that most studies on the relationship between implicit biases and behaviour, including the study by Hehman and colleagues, are correlational. Even if we could change people’s individual implicit biases, would that lead to a change in levels of discrimination? In other words, let’s say implicit bias training successfully reduced individual police officers’ implicit biases against Black people. Would that reduction in bias translate to them discriminating against a Black person less often?

One meta-analysis looked at 63 randomised experiments that used an IAT and a behavioural measure; randomised experiments, unlike correlational studies, do allow us to infer some causation. Yet it only confirmed what others have found: changes in measures like the IAT – at the individual level – do not relate to changes in individual behaviour toward other groups, demonstrating, again, that changing people’s minds is unlikely to work.

This finding shouldn’t strike us as surprising, given the gap between attitudes and behaviours that has been documented again and again. That gap usually follows a principle of correspondence: the extent to which an attitude predicts behaviour depends on how well the attitude matches the behaviour. For example, attitudes specific to organ donor registration (‘How do you feel about registering yourself as an organ donor?’) are better predictors of registration behaviours than general attitudes about organ donation (‘In general, how do you feel about organ donation?’). IATs usually measure implicit biases toward broad groups, like Black people in comparison with white people, without any information about the specific behaviour or context in question.

Furthermore, attitudes interact with context to predict behaviour. Most of us hold a positive attitude toward exercise, for instance, but that doesn’t mean we’ll go to the gym this weekend: we might not feel motivated, the gym might be closed, or the weather might be rainy. In the same way, someone might show a negative implicit bias toward Asian people, but that doesn’t mean they’ll behave negatively toward an Asian person upon meeting one. A classic study in 1934 by the sociology professor Richard LaPiere at Stanford University illustrates this point: when he drove through the US with a Chinese couple, they stopped at more than 250 restaurants and hotels and were refused service only once. Several months later, LaPiere surveyed the owners on whether they would serve Chinese people, and 92 per cent said they would not.

Given all this, the question that emerges is: what can we really do? Here’s what we don’t need: more implicit bias trainings. In fact, as an implicit bias researcher, I think that organisations should de-emphasise, or do away with, the concept of implicit bias entirely. Implicit bias, as an empirical concept, is interesting and potentially valuable. But as a tool for diversity, equity and inclusion (DEI) pedagogy? It just confuses people and distracts from the actual problem.

These trainings exist because they are cheap, easily scalable solutions that, from an optics standpoint, allow organisations to project an image of caring about DEI, even when the actions accompanying those professed values are vacuous. It’s ironic, isn’t it: the very notion of implicit bias stands on the discrepancy between values and actions, yet the concept just perpetuates this problem. Before organisations preach the dangers of implicit biases, they should look at their own hiring systems, policies and practices that actually discriminate against minorities by putting them at a disadvantage.

Here’s what I think: let’s stop caring so much about how people think, and focus more on how people – and companies – behave. I’m partly inspired by the paper ‘Stuck on Intergroup Attitudes: The Need to Shift Gears to Change Intergroup Behaviors’ (2023) by the psychologist Markus Brauer. It argues that researchers and practitioners, rather than relying on interventions that change people’s attitudes, should focus on interventions that directly target behaviour. For instance, rather than asking a hiring manager to participate in a workshop to change their attitudes toward women applicants, an organisation could instead fix hiring criteria before anyone sees the applicant pool, reducing the biasing effect of applicants’ gender. Research shows that this approach has already been used successfully. Biases don’t come from a vacuum: they’re triggered by cues attached to people – the colour of someone’s skin, their accent, or the clothes they wear. So, if we hide biasing information when it matters, we can mitigate the effects of bias.
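As a concrete sketch of what ‘hiding biasing information’ can look like in practice, here is a hypothetical screening step for a hiring pipeline – the field names, thresholds and criteria are all invented for illustration. The design principle is simply that the criteria are fixed in advance and identity cues never reach the reviewer:

```python
from dataclasses import dataclass

@dataclass
class Application:
    name: str              # cue for ethnicity/gender, hidden from reviewers
    photo_url: str         # visual cues, hidden from reviewers
    years_experience: int
    skills: list[str]
    work_samples: list[str]

# Criteria fixed *before* anyone looks at the applicant pool
REQUIRED_SKILLS = {"sql", "python"}
MIN_EXPERIENCE = 2

def blind_view(app: Application) -> dict:
    """Return only job-relevant fields; identity cues never leave this module."""
    return {
        "years_experience": app.years_experience,
        "skills": app.skills,
        "work_samples": app.work_samples,
    }

def meets_bar(app: Application) -> bool:
    """Screen against the pre-registered criteria, using the blind view only."""
    view = blind_view(app)
    return (view["years_experience"] >= MIN_EXPERIENCE
            and REQUIRED_SKILLS.issubset(s.lower() for s in view["skills"]))

applicant = Application("Jane Doe", "https://example.com/photo.jpg",
                        4, ["SQL", "Python", "dbt"], ["github.com/example"])
print(meets_bar(applicant))  # True
```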

Using hiring criteria is an obvious example, but behavioural science research reveals other creative ways to attenuate discrimination from the top down rather than the bottom up. For instance, besides concealing information, organisations can restructure the way they present choices to employees. In business, one common reason we don’t see as many women becoming leaders is that the leadership selection process requires them to self-promote and self-nominate. Yet women who assert themselves can incur backlash for behaving in this counterstereotypical way, causing them to step back from competition. Here’s where organisations can push back by leveraging a behavioural economics concept known as ‘defaults’: they can make self-nomination the default, so that standing for promotion is something employees must actively opt out of – and anyone who doesn’t opt out gets an opportunity to be promoted.

These trainings should not exist until organisations try doing the structural work first

The management professor Joyce He at the University of California, Los Angeles and her colleagues demonstrated the efficacy of this intervention in a 2021 study. On the recruitment platform Upwork, they recruited 477 freelancers for a data-entry job. At one point, they gave the freelancers (who were unaware of the experiment) the choice between two tasks: a standard data-scraping task, paid at $5 per hour base compensation with a $0.25 bonus commission, or a more advanced, higher-paying task, paid at $7.50 base compensation with a $1 bonus commission. The freelancers had to compete with other workers for the advanced task and, if they didn’t win, they risked not earning any money at all. Here’s where defaults come in: in the opt-in condition, freelancers were by default enrolled in the standard non-competitive task, with the option to opt in to the advanced task, whereas in the opt-out condition, they were by default enrolled in the advanced competitive task, with the option to opt out to the standard task. The researchers found a statistically significant gender gap in the opt-in condition (57 per cent of women versus 72.5 per cent of men chose to compete), whereas the gap was not statistically significant in the opt-out condition.
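For the curious, the reported opt-in gap can be sanity-checked with a back-of-the-envelope two-proportion z-test. The cell sizes below are my assumption purely for illustration (the paper’s 477 freelancers were split across conditions and genders; exact cell counts aren’t given here), so treat the p-value as indicative only:

```python
from math import sqrt, erf

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided z-test for a difference between independent proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 1 - erf(abs(z) / sqrt(2))  # normal approximation, two-sided
    return z, p_value

# Opt-in condition: 72.5% of men vs 57% of women chose the competitive task.
# Cell sizes of ~120 per gender are an assumption, not from the paper.
z, p = two_proportion_z(0.725, 120, 0.57, 120)
print(f"z = {z:.2f}, p = {p:.3f}")  # with these assumptions, p < 0.05
```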

To minimise biases and promote diversity and inclusion, we need to redesign biased processes to include more disadvantaged groups, rather than attempt to change people’s minds.

Still, I have two caveats. One is that structural ‘behavioural interventions’ are relatively low-hanging fruit compared with inclusive policies: policies that mitigate unequal wages between men and women, that increase access to paid parental leave, that reduce racial disparities, and that promote mentorship programmes for minorities tackle the root causes of discrimination rather than its symptoms. The other is that I don’t think implicit bias training is useless: executed correctly (that is, using accurate science and emphasising behavioural strategies), it can be an effective awareness-building tool, and changing individual minds can catalyse structural changes. But I adamantly believe that these trainings should not exist until organisations try doing the structural work first.

And here’s another good thing about changing social structures: it can also shift individuals’ biases, at a large scale. For instance, changing legislation can change biases within a populace. One of my former colleagues at McGill University, the intergroup relations researcher Eugene Ofosu, asked whether the legalisation of same-sex marriage was associated with reduced anti-gay implicit bias across US states. His team studied US IAT scores between 2005 and 2016, and what they found was striking. While implicit anti-gay bias in each state, on average, decreased at a steady rate before legalisation, it decreased at a sharper rate afterwards, even after controlling for demographic variables such as participants’ age and gender, as well as state-level factors such as education and income.
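The analytical idea – a change in slope at the moment of legalisation – can be sketched as a simple segmented regression. This is a minimal illustration with made-up numbers, not Ofosu’s actual model, which controlled for demographic and state-level covariates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical yearly state-average IAT scores, 2005-2016; legalisation in 2011
years = np.arange(2005, 2017)
legal_year = 2011
pre_slope, post_extra = -0.010, -0.015  # assumed: the decline steepens post-law
after = np.clip(years - legal_year, 0, None)  # years elapsed since legalisation
bias = (0.45 + pre_slope * (years - 2005) + post_extra * after
        + rng.normal(0, 0.005, years.size))

# Design matrix: intercept, overall time trend, and post-legalisation slope change
X = np.column_stack([np.ones_like(years), years - 2005, after])
coefs, *_ = np.linalg.lstsq(X, bias, rcond=None)
print(f"pre-legalisation slope: {coefs[1]:+.4f} per year")
print(f"additional post slope:  {coefs[2]:+.4f} per year")
# A clearly negative third coefficient is the signature of a sharper decline
# after legalisation - the pattern Ofosu and colleagues reported.
```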

Legislation and policy don’t just tell us what to do, but what to think: they signal our social norms, the unwritten rules that define what’s acceptable and appropriate, and that undergird our attitudes. Other studies reinforce this point at an organisational level. Women working for companies perceived to have more gender-inclusive policies report more supportive interactions with their male colleagues, lower levels of workplace burnout, and greater commitment to the organisation.

Stop distributing implicit bias training as a cure-all. Stop with the meaningless virtue-signalling. Stop selling these trainings under the guise of research. I get it. Trainings are easy. They’re cost effective. But one-off solutions do not work, and implicit bias is not really the problem. Biased systems and structures that allow for biased behaviour are the problem. Real DEI requires rebuilding biased systems from the ground up. It takes time. It requires top-down, versus bottom-up, change. It requires real accountability and leadership. Don’t ask how people can change their biases to get at diversity, equity and inclusion; ask what organisations and institutions have done – in their hiring systems, their DEI policy, or otherwise – to embody these values and provide every group an equal opportunity at success.