What do they want?

What would happen if the aid industry started collecting data on how the people it serves actually feel about their lives?

by Claire Melamed

A woman holds her child as she waits at a food distribution point in Mogadishu. Photo by Sven Torfinn/Panos

Just like words on a page or paint on a canvas, data illustrate the world. The numbers we use to measure economic output, or school attendance, or births and deaths, are ways of telling stories about what is happening in people’s lives. They paint a picture of what societies look like. And the picture can be surprising.

Finding out about people through words is something the development sector is very good at. Over the past 15 or so years, the sector has developed and refined a number of techniques to talk to (mostly) small groups of people about their lives, their aspirations, and the problems they face. These methods have revealed a lot, and a lot that is now received wisdom in the industry: the centrality of threats of violence to the experience of poverty; the unspoken social norms that lie behind so much of the inequality faced by women; the daily humiliations of not being respected by people with power.

And yet, there is much that is not known, and much that numbers could tell us. Sometimes, for example, the starkness of a number forces people to confront truths that words can conceal.

The World Bank recently did a brave and very revealing piece of research. They asked their own staff to what extent they imagined poorer and richer people in three countries would agree with the statement: ‘What happens to me in the future mostly depends on me.’ Bank staff predicted that around 20 per cent of poor people would agree with the statement.

In fact, more than 80 per cent of poor people felt that what happened to them in the future depended on their own efforts – four times as many as the World Bank staff had predicted, and about the same proportion as richer people. It’s worth letting that sink in. Here we have staff in one of the most powerful development agencies in the world, freely assuming that the people whom they are employed to work with, and for, feel passive and helpless when in fact the opposite is the case.

These perceptions have consequences. Consider the recent vogue for ‘empowerment’ programmes. On the face of it, they make sense only if people don’t already feel like the captains of their own destinies. If, for example, World Bank staff had been right and just 20 per cent of poor people agreed that they had control over their own futures, then aid agencies might have a point in trying to run programmes to get those figures up. If it’s 80 per cent, you can probably skip that step and just get on with the education, the electrification, the provision of running water and so on that actually allow people to express their own fully developed sense of individual agency. Only numbers – and the sense of scale they provide – can tell you which is right.

This kind of quantitative research into people’s attitudes is incredibly sparse among development agencies. There is a huge appetite for data on objective things – on health, wealth and education, for example. But data on what people actually think is lagging far behind. It’s very rare indeed for an aid agency or NGO to run a survey to find out about public perceptions, values and views in the countries where they work. Why might that be?

It’s not a simple fear of numbers. Those same agencies, including some NGOs, do spend their own money on statistical research into the views of people in richer countries, to find out what they want from policymakers and politicians. Many of them commission regular opinion polls among the public in donor countries to learn how to make them happier about giving aid. Until 2010, the UK’s Department for International Development (DFID) ran an annual poll on what the public thought about development spending. Since that was discontinued, at least 10 different opinion polls or other research projects have investigated public opinion about aid and charitable giving in the UK alone.

Despite this enthusiasm for polling in donor countries, the idea of collecting the same kind of data from very poor people has sometimes been treated with suspicion. What if people say they are perfectly happy with their lives, when they are living in the direst poverty? What if cultural or linguistic barriers mean that the interviewer and the respondent have quite different understandings of the question that is being asked? People who are used to the depth and richness of words can be sceptical about the value of quite a different kind of picture – often broader, but shallower – that is revealed by numbers.

It goes without saying that all data should be treated with caution. And, of course, interpretation is key. But the thing is, quantitative data on people’s perceptions is being collected in developing countries, and these notional problems are turning out to be perfectly surmountable. The African-led survey project Afrobarometer does a fantastic job of country-level surveys in many African countries, and Latinobarómetro does the same for Latin America. Their polls have limitations, though: if the purpose is specifically to find out about the very poorest people, for example, the sample sizes are generally too small to be sure of properly representing their views. But they are a start.
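To see why subgroup sample sizes bite, here is a minimal back-of-the-envelope sketch of the standard margin of error for a survey proportion. The sample sizes are illustrative assumptions chosen for the arithmetic, not any actual survey’s design, and real surveys have clustering effects that make the problem worse.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95 per cent margin of error for a proportion p estimated
    from a simple random sample of size n (ignores survey design effects)."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative figures only: an assumed national sample of 1,200 respondents,
# of whom the very poorest make up roughly a tenth.
national_n = 1200
poorest_n = national_n // 10

print(f"Whole sample (n={national_n}): ±{margin_of_error(national_n):.1%}")
print(f"Poorest tenth (n={poorest_n}): ±{margin_of_error(poorest_n):.1%}")
```

A margin of around ±3 points for the whole national sample widens to roughly ±9 points once you look only at the poorest tenth of respondents, which is why national polls struggle to say anything precise about the views of the very poorest.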

More recently, a survey run by the United Nations and my organisation, the Overseas Development Institute, asked 7.5 million people around the world a question about their priorities for themselves and their families. The top five priorities for poor people in poor countries included a lot that you would expect: health and education were up there, for example. But so were ‘an honest and responsive government’, better jobs, and reliable energy supplies. Protection from crime ranked high on the list, reflecting the deep, daily fear of violence and theft that are all too often part of the reality of poverty. And yet, as the British pioneer of people-centric development, Robert Chambers, once said to me, we’ve yet to see the development programme that gives people safes to help them protect their belongings.

It’s clearly possible to run quantitative research to find out what people – including the poorest – think. Some are even doing it. But, strangely, this quantitative research is used very little to drive thinking and practice among official donors or NGOs. A search on DFID’s website finds just one reference to Afrobarometer’s work.

This leaves two questions – why isn’t it having more of an impact, and what might change if it did?

Here’s the conspiracy theory version. Numbers have power, and finding out what people think in a systematic way (or making use of the data that already exist) might lead to some uncomfortable moments for donors and aid agencies.

Agencies operate under certain incentives, and they aren’t always about the poorest people. Think about who the development projects actually have to sell themselves to. Think about the market pressures they face. Commercial companies obviously need to know what their customers want, and how happy they are with the services they receive. Governments in democratic countries also need to know what people think. During elections, when voters, for a brief moment, are sovereign, political parties, the press and lobby groups all commission polls to demonstrate what people care about and why politicians need to listen if they want to be elected – and these polls drive policy, for good and for ill.

These incentives are strikingly absent in the relationship between donors and the people who are the beneficiaries of their programmes. While most individual aid workers do care, very much, about the people they work with and for, the actual structure of the aid business offers few reasons for anyone to worry about what aid recipients think or want. Staff in aid agencies need to think about what their funders want to pay for. For their own performance reviews, they need to think about how to demonstrate that what they are doing is achieving the best possible results with the smallest amount of money. So the incentives for spending money on expensive surveys to find out what representative samples of poor people think of their operations are just not there.

And besides, the information might be a threat. What if it turned out that people feel patronised by aid workers? Or that they would rather their food didn’t arrive with logos announcing their indebtedness to foreign governments? Or that they resent being given a T-shirt when really they would sooner just have the money? What if people don’t really want another agricultural programme, and they’d rather have a bus ticket to the nearest town and somewhere to stay when they get there? These kinds of discoveries could be quite discomfiting for the agencies themselves – though in the long run, they would presumably do a better job.

So much for the conspiracy theory. There is a more practical reason too: even if the incentives changed and organisations did want to use the data that exist, many of the numbers are locked away out of reach. With the honourable exception of the Barometers and the World Values Survey, most opinion polling data is collected and owned by private companies, who need to charge money for it. This means, of course, that most people don’t use it. It’s a strange accident of statistical history that some data – for example on people’s incomes, or how they do in school – is generally publicly owned by governments or multilateral agencies, while other data – like most opinion polling – is owned privately. While official agencies certainly should not have a monopoly on data collection, it seems to me that data should be available for use, whoever has created it.

More systematic use of numbers on what people think could lead to quite radical shifts in development policy and practice. Take one of the most hallowed indicators of development – the extreme poverty line, currently set at $1.25 a day. It turns out that, when people are asked in surveys what level of income they think is needed to get by, they almost always give a figure that is not absolute but relative to the average income wherever they live.

Imagine if this kind of data was used to define poverty. It would mean the end of the absolute poverty cut-off and the beginning of a new era in which global poverty was defined in terms of relative incomes. This might mean that donors would have to give up on their cherished aspiration to ‘end poverty’. But it would be truer to how poor people see their situation and what they mean by poverty.
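As a rough illustration of what that shift would mean, here is a minimal sketch in Python. The incomes are invented, and the relative line of half of median income is an assumption borrowed from the convention used in many rich-country statistics, not a proposal from the survey work described above.

```python
import statistics

def poverty_counts(incomes, absolute_line=1.25, relative_share=0.5):
    """Count people below the $1.25-a-day absolute line and below a relative
    line set at a share of median income."""
    relative_line = relative_share * statistics.median(incomes)
    absolute_poor = sum(1 for x in incomes if x < absolute_line)
    relative_poor = sum(1 for x in incomes if x < relative_line)
    return absolute_poor, relative_poor

# Invented daily incomes, in dollars, for a small illustrative population
incomes = [0.80, 1.00, 1.10, 1.50, 2.00, 2.50, 3.00, 4.00, 6.00, 10.00]

print(poverty_counts(incomes))                   # before growth: (3, 3)
print(poverty_counts([2 * x for x in incomes]))  # after all incomes double: (0, 3)
```

Doubling every income ‘ends’ poverty on the absolute measure but leaves it untouched on the relative one, which is exactly why a relative definition would force donors to let go of the ambition to ‘end poverty’ outright.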

Words have power. But so do numbers, and they are in short supply. We need lots more of them. To tweak Karl Marx’s line, the point might be to change the world, but we should really try to understand it first.