Lying for science

Psychologists used to manipulate and deceive their subjects with impunity. Did the end justify the means?

by Antonio Melechi


Nearly 50 years ago, a 35-year-old bank employee from Madrid named Jordán Peña had a fiendish idea: he would contact fellow UFO-spotters across the city, purporting to be an extraterrestrial – DEI 98, from the planet Ummo. Spinning an audaciously convoluted yarn, Peña would proceed to chronicle the turbulent history of the fictitious planet, drip-feeding his saucer-eyed compadres information on the curious physiology of Ummo’s inhabitants, the intricacies of their language and system of government, and the mind-boggling technologies that they had deployed on recent missions to traverse the 14.6 light years from Ummo to earth.

Several hundred letters later, Peña’s Space Age prank was running amok. Letters from DEI 98 were hot property. Academics were beginning to give serious attention to the Ummite language and its constellation of pseudo-scientific formulas. And a Bolivian spiritualist cult, ‘the daughters of Ummo’, embraced the Ummite teachings with messianic fervour. Have faith, the Ummites are coming.

When Señor Peña eventually stepped forward to reveal himself as the author of the Ummo correspondence, ufologists had good reason to suspect that his prodigiously elaborate hoax was probably a government-sponsored exercise in misinformation. Peña remained poker-faced. He had acted alone. He was testing a pet theory of widespread paranoia, and he was using the tried and tested methodology of every social psychologist. Ummo was not a hoax: it was an experiment.

In many ways, this was a plausible cover story. Peña’s alleged experiment was certainly conceived in an era when all manner of risky stratagems and questionable illusions were deemed fair play within the social sciences, especially in the field of social psychology. A decade or so earlier, to generate evidence for the theory of cognitive dissonance, the American psychologist Leon Festinger had staged a CIA-style undercover operation, infiltrating the Brotherhood of the Seven Rays, a Chicago-based doomsday cult that was nervously awaiting the arrival of an extraterrestrial rescue party, sent to save them from the Great Deluge which was, they believed, about to engulf North America.

Meanwhile, in order to study the whys and wherefores of inter-group conflict, the Turkish-born psychologist Muzafer Sherif donned caretaker’s overalls, spying on and stirring up enmity between 22 boys at a bogus summer camp in Oklahoma. And in the most controversial of all social psychology experiments, Stanley Milgram at Yale had tried to shed light on the kind of unthinking obedience found within the ranks of the Third Reich by way of a fake ‘learning experiment’, in which volunteers were asked to administer electric shocks to fellow subjects.

With the blustering chutzpah of the short-con artist and the slick artistry of the stage magician, Festinger, Sherif and Milgram led the generation of post-war psychologists that contrived to rewrite the rules of laboratory and field research. Whether hiding out in public toilets, staging blood-spattered accidents, feigning madness to gain entry to psychiatric hospitals, or commissioning Hollywood actors to deliver nonsensical lectures on game theory, these tenured tricksters were convinced of one thing: deception was the only reliable way of studying true-to-life behaviour.

When the first psychological laboratories opened in European and American universities in the late 19th century, the likes of Wilhelm Wundt and William James used chronoscopes, tachistoscopes and a range of physiological devices to measure the response to physical and visual stimuli. Investigation of these simple sensory and affective processes would, it was hoped, provide brass-instrument psychologists with the building blocks for a positivist science of the mind. But the methodological rigour of the physical sciences was no more than a pipe dream. More complex individual and collective behaviours were, researchers found, almost impossible to study without exerting some influence on their subjects’ reactions. As the psychologist A H Pierce observed in 1908, the attitude of individuals participating in laboratory experiments was invariably characterised by ‘ready complacency and cheerful willingness to assist the investigator in every possible way… reporting to him those very things which he is most eager to find’.

Because of these pitfalls, psychology was destined to become increasingly reliant on trickery and espionage in both laboratory and field. To overcome what the psychiatrist Martin Orne later dubbed the ‘demand characteristics’ of the human experiment – the spoken or tacit cues that encourage a subject to behave according to the experimenter’s expectations – psychologists and sociologists would routinely misdirect their subjects: staging dummy tasks to keep their true objectives hidden; using knowing ‘confederates’ to covertly record or influence the behaviour of volunteers; or, following Sherif’s lead, contriving some disguise that allowed a group or subculture to be studied from within.

The nexus of deep-pocketed sponsors that rushed to bankroll this smoke-and-mirrors approach to behavioural research, from the US National Science Foundation to the CIA’s Society for the Investigation of Human Ecology, had no objection to bluffs and misdirection in the field or lab. But the public remained largely oblivious to the high-minded gamesmanship of psychologists, sociologists and clinical researchers. Time and again, the promise of knowledge that might benefit the public at large provided experimenters with the moral justification for all kinds of ‘procedural deception’. Half of all laboratory-based psychology experiments conducted in the 1950s and early ’60s involved subjects who were actively misled as to the purpose of the study.

Researchers rarely considered the ethical or methodological impact of this repeated trickery. When W Edgar Vinacke, writing in the pages of the American Psychologist in 1954, raised the question of ‘the proper balance between the interest of science and the thoughtful treatment of the persons who, innocently, supply the data’, his remarks seemed to fall on deaf ears. The American Psychological Association’s Code of Ethics had already granted the freedom to lie, trick and deceive. Provided that subjects were properly ‘dehoaxed’, and that no lasting harm would arise from their participation, the principle of informed consent could be shelved.

Marking a new era in laboratory deception, Milgram’s obedience experiments fully exploited the latitude that psychologists were granted to explore the darker side of human behaviour. Before going to Yale, the 27-year-old assistant professor had been at Princeton with Solomon Asch, whose classic studies into conformity showed how easily an individual’s conviction could be swayed by a dissenting majority. Asch’s experiment was elegant and simple. Using several confederates to dispute a subject’s initial assessment of the length of a line on a card, Asch found that most subjects would toe the group line. Milgram had previously used a modified version of Asch’s group-pressure experiment in his doctoral research. What he set out to demonstrate at Yale was more radical: that ‘destructive behaviour’ also occurred when the individual ‘merges his person into an organisational structure’, becoming ‘freed of humane inhibition’ and ‘mindful only of the sanctions of authority’.

The set-up of Milgram’s ‘memory and learning experiment’ was nothing if not ingenious. Purporting to measure the effects of punishment on learning success, he and his assistants ushered a few hundred paid volunteers, recruited through an advertisement in a local newspaper, into a specially constructed booth with an intercom and a ‘Shock Generator, Type ZL’. Before making their way into the booth, the volunteer ‘teachers’ were made to watch as a confederate ‘learner’ was taken to a separate room, where electrodes were strapped to their wrists. Once installed in the booth, the teacher was instructed to issue the ‘learner’ with an electric shock every time they gave a wrong answer in a memory task. With every mistake, the shock was to be raised by 15 volts. All pleas and protestations were to be ignored. Whenever the volunteers faltered or refused, they were encouraged to proceed with a series of pre-scripted prompts. Please continue. The experiment requires that you continue. It is absolutely essential that you continue. You have no other choice, you must go on.

Before commencing the experiments, Milgram had polled colleagues and students to guess their outcome. ‘With remarkable similarity, they predicted that virtually all subjects would refuse to obey the experimenter… They expected that only 4 per cent would reach 300 volts, and that only a pathological fringe of about one in a thousand would administer the highest shock on the board.’ Over the course of three years, Milgram ran almost 20 versions of the experiment, and found that more than 60 per cent of volunteers were prepared to administer a charge of 450 volts to the supposed ‘learner’. The figure dropped to slightly under 50 per cent when the experiment was transferred from an imposing hall on Yale’s old campus to a nondescript office, and to 30 per cent when the actor-learner was placed in close proximity.

The response to Milgram’s initial findings, published as a short report in the Journal of Abnormal and Social Psychology, was mixed. While one reviewer in The New York Times praised him for his experimental derring-do, other commentators took issue with his stark conclusions, doubting ‘the extreme willingness of adults to go to almost any lengths on the command of an authority’. Within the psychological community, there was a collective sigh of dismay. Having crossed a moral line, Milgram was upbraided for the stress and trauma to which he had exposed his volunteers, and roundly rebuked for casting an ominous shadow over the profession as a whole.

Milgram was unapologetic, defending the ‘technical illusions’ that he had employed to gain an insight into crimes of obedience. Even so, the ethical and methodological problems raised by psychology’s reliance on deception could no longer be ignored. From then on, his experiments would serve as a cautionary tale to psychologists balancing the need for free scientific inquiry against the individual’s right to full disclosure. Although other experimenters and fieldworkers had employed equally dubious gambits, Milgram became the public scapegoat for psychology’s chequered history of dissemblance.

By the time Milgram published his book Obedience to Authority (1974), the climate of public and expert opinion on the ethics of scientific deception was even less forgiving. Public health physicians working on the Tuskegee syphilis experiment in Alabama were found to have duped hundreds of syphilis victims, depriving them of treatment with penicillin in order to observe the progression of the disease. And while another recent study into conformity, Philip Zimbardo’s Stanford prison experiment, had drawn sharp criticism, many of the country’s leading social scientists were about to be exposed as willing assistants in the CIA’s wide-ranging research into behaviour modification. In short, Milgram-style deception found itself lumped together with some decidedly unsavoury company.

Post-Watergate America decided that it could no longer trust its clinicians or behavioural researchers. In light of the new guidelines on human experimentation laid out in the Belmont Report (1978) from the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, the social psychologist was required to navigate a new regulatory landscape. The American Psychological Association now spoke of ‘openness’ and ‘transparency’ as being essential to the relationship between investigator and research participant. Institutional review boards were encouraged to take an especially dim view of deceptions that could potentially expose subjects to even transient physical or psychological harm.

The old guard of experimental psychology and the clinical researchers were not happy. ‘Whole lines of research have been nipped in the bud,’ complained Edward E Jones, a pioneer in the field of attribution theory. Robert Rosenthal, best known for his work on the experimenter effect, in which a researcher’s expectations could be seen to affect the outcome of human experiments, believed that vital research on violence, racism and sexism would be quashed by the embargo. And Stanley Schachter declared the new guidelines a ‘bloody bore and a terrible waste of time’.

Psychology’s prohibition on deception was, however, not quite what it appeared. Experimenters continued to be granted permission to misrepresent their research hypothesis, to spy upon their subjects, and to mask their identities. One in three psychology studies published since the early 1970s has been based on experiments whose true objectives were withheld from participating subjects. In the field of social psychology, deception and misdirection have remained as popular as ever. In 2009, Jerry Burger of Santa Clara University was given permission to repeat Milgram’s obedience experiments. The only notable modifications to the original protocol were that volunteers had to be screened and the fake shock machine stopped at 150 volts.

While historians of psychology overlook the ways in which Festinger and his fieldworkers passively encouraged a UFO cult that was descending into collective madness – and ignore the dubious ethics of Sherif’s summer camp that never was – Milgram’s obedience experiments rarely escape the wagging-finger exposition. In World as Laboratory (2005), a history of 20th‑century behavioural experiments, Rebecca Lemov paints a characteristically grim picture: ‘It was as if an unwitting looker-on stumbled onto a stage to find himself playing a starring role in a drama in which he had never agreed to act… When the house lights came on at last and the subject saw the key part he had played, he blinked and found himself in the throes of a humiliation too great even to articulate to himself.’

In the documentary footage of the original experiments that Milgram included in his film Obedience (1962), volunteers certainly seem to have been completely hoodwinked by the learning experiment. After agonising over the decision to apply the electric shock and loudly protesting to the impervious experimenter, volunteers appear to be more relieved than embarrassed when told the true purpose of the experiment. ‘Well, I’m glad to hear that,’ says one middle-aged man, puffing on his unlit cigarette. ‘I was concerned about the other party… ’cause he was having a heart attack or something.’ But were Milgram’s volunteers completely oblivious to the ‘drama’ that he had orchestrated? Or was something else happening?

The Australian psychologist Gina Perry has recently claimed that Milgram, as well as overlooking the sizeable number of volunteers who refused to comply with the experimenter’s demands, ignored the misgivings voiced by those who were sceptical of his learning experiment. At least one of Milgram’s laboratory volunteers withdrew, suspecting that ‘the whole experiment was designed to see if ordinary Americans would obey immoral orders’. Some participants were nonplussed by the experimenter’s indifference towards the ‘learners’ and unconvinced by the anguished cries that came from a nearby loudspeaker; others had an inchoate sense of something being amiss. Far from providing a reliable proxy of real-life behaviour, Milgram had, at best, staged a bold but confusing charade, a ham-fisted invitation to make-believe.

Truth be told, Milgram and his fellow social psychologists were not oblivious to the fact that the widespread use of laboratory deception might, as the Harvard ethicist Herbert Kelman observed, ‘actually produce an unspecifiable mixture of intended and unintended stimuli that make it difficult to know just what the subject is responding to’. If your subjects half-suspect that you are deceiving them, what are you really measuring? But this was a question that most psychologists simply preferred to ignore.

When, in 1969, the Princeton-based researchers Lawrence Stricker, Samuel Messick and Douglas Jackson sought to assess the efficacy of psychological deception, they had some bad news for the laboratory tricksters. Running a version of the Asch group-pressure study on 200 11th- and 12th-graders, ‘a group presumed to typify the naive subject’, the researchers reported that more than half the boys and almost 40 per cent of the girls were suspicious of the purpose of the study. (As one boy correctly intuited, ‘the experiment was trying to see “how someone’s ideas and answers are influenced… by hearing the answers or opinions of others”.’) The high levels of mistrust, suspicion and second-guessing that Stricker, Messick and Jackson found suggested that the psychological experiment was becoming something akin to an exercise in mutual deception.

Today’s psychologists tend not to dwell on the methodological confusion that their predecessors’ tangled web of deception has created. Calls for the discipline to follow the lead of experimental economics, which maintains a zero-tolerance position on all laboratory subterfuge, receive short shrift. One prominent psychologist has recently argued that ‘the question of whether or not deception should be considered an acceptable element of a research protocol is no longer a legitimate one’, going on to suggest that the profession’s tricks are no different to the ‘white lies’ that abound in everyday life.

But the side-effects of experimental illusions and gamesmanship cannot be so easily brushed aside. Laboratory deception is an open secret that has eroded the validity of the social psychology experiment, exposing its seminal insights into learning, conformity, motivation and attitude change as a farrago of statistical artefacts. For this reason, Jordán Peña’s Ummo hoax, as outlandish as it was, stands to tell us more about the vagaries of human behaviour than any number of carefully planned laboratory escapades.