Machine envy

Giant instruments are giving us a sea of data. Can science find its way without any big ideas at the helm?

by Philip Ball

Impressive hardware at Pacific Biosciences, a genome sequencing company. Photo by Gregg Segal/Gallery Stock

Philip Ball is a science writer. His latest book is Curiosity: How Science Became Interested in Everything.

Whenever I visit scientists to discuss their research, there comes a moment when they say, with barely concealed pride: ‘Do you want a tour of the lab?’ It is invariably slightly touching — like Willy Wonka dying to show off his chocolate factory. I’m glad to accept, knowing what lies in store: shelves lined with bottles of reagents; gleaming, quartz-windowed cryogenic chambers; slabs of perforated steel holding lasers and lenses.

It’s rarely less than impressive. Even if the kit is off-the-shelf, it is wired into a makeshift salmagundi of wires, tubes, cladding, computer-controlled valves and rotors and components with more mysterious functions. Much of the gear, however, is likely to be homemade: custom-built for the research at hand. Whatever else it might accomplish, the typical modern lab set-up is a masterpiece of impromptu engineering — you’d need degrees in electronics and mechanics just to put it all together, never mind making sense of the graphs and numbers it produces. And like the best engineering, these set-ups tend to be kept out of sight. Headlines announcing ‘Scientists have found…’ rarely bother to tell you how the discoveries were made.

Would you care? The tools of science are so specialised that we accept them as a kind of occult machinery for producing knowledge. We figure that the scientists themselves must know how it all works. Likewise, histories of science focus on ideas rather than methods — for the most part, readers just want to know what the discoveries were. Even so, most historians these days recognise that the relationship between scientists and their instruments is an essential part of the story. It isn’t simply that the science is dependent on the devices; the devices actually determine what is known. You explore the things that you have the means to explore, planning your questions accordingly.

When a new instrument comes along, new vistas open up. The telescope and microscope, for example, stimulated discovery by superpowering human perception. Such developments prompt scientists to look at their own machines with fresh eyes. It’s not fanciful to see some of the same anxieties that are found in human relations. Can you be trusted? What are you trying to tell me? You’ve changed my life! Look, isn’t she beautiful? I’m bored with you, you don’t tell me anything new any more. Sorry, I’m swapping you for a newer model… We might even speak of interactions between scientists and their instruments that are healthy or dysfunctional. But how do we tell one from the other?

It seems to me that the most effective (not to mention elegant) scientific instruments serve not only as superpowers for the senses but as prostheses for the mind. They are the physical embodiments of particular thoughts. Take the work of the New Zealand physicist Ernest Rutherford, perhaps the finest experimental scientist of the 20th century. It was at a humble benchtop with cheap, improvised equipment that he discovered the structure of the atom, then proceeded to split it. Rather than being limited by someone else’s view of what one needed to know, Rutherford devised an apparatus to tell him precisely what he wanted to find out. His experiments emerged organically from his ideas: they almost seem like theories constructed out of glass and metal foil.

In one of his finest moments, at Manchester University in 1908, Rutherford and his colleagues figured out that the alpha particles spewed out during radioactive decay were the nuclei of helium atoms. The natural way to test the hypothesis was to collect the particles and see if they behaved like helium. Rutherford ordered his glassblower, Otto Baumbach, to make a glass capillary tube with extraordinarily thin walls such that the alpha particles emitted from radium could pass right through. Once the particles had accumulated in an outer chamber, Rutherford connected up the apparatus to become a gas-discharge tube. As electrodes converted the atoms in the gas into charged ions, they would emit light at a wavelength that depended on their chemical identity. Thus he revealed the trapped alpha particles to be helium, disclosed by the signature wavelength of their glow. It was an exceedingly rare example of a piece of apparatus that answers a well-defined question — are alpha particles helium? — with a simple yes/no answer, almost literally by whether a light switches on or not.

A more recent example is the scanning tunnelling microscope, invented by the late Heinrich Rohrer and Gerd Binnig at IBM’s Zurich research lab in 1981. Thanks to a quantum-mechanical effect called tunnelling, the researchers knew that electrons within the surface of an electrically conducting sample should be able to cross a tiny gap to reach another electrode held just above the surface. Because tunnelling is acutely sensitive to the width of the gap, a metal needle moving across the sample at a constant height, just out of contact, could trace out the sample’s topography because of surges in the tunnelling current as the tip passed over bumps. If the movement was fine enough, the map might even show the bulges produced by individual atoms and molecules. And so it did. Between the basic idea and a working device, however, lay an incredible amount of practical expertise — sheer craft — allied to rigorous thought. They were often told the instrument ‘should not work’ in principle. Nevertheless, Rohrer and Binnig got it going, inventing perhaps the central tool of nanotechnology and winning a Nobel Prize in 1986 for their efforts.
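
To make the mechanism concrete, here is a minimal numerical sketch of the principle rather than of Binnig and Rohrer's actual instrument: because the tunnelling current decays exponentially with the tip-sample gap, a tip scanned at constant height registers sharp current surges over atomic-scale bumps. The toy surface, decay constant and tip height below are invented, purely illustrative values.

```python
import numpy as np

kappa = 1.0        # inverse decay length, per angstrom (a typical order of magnitude)
tip_height = 5.0   # constant tip height above the mean surface plane, in angstrom

x = np.linspace(0.0, 20.0, 401)                 # lateral tip position, angstrom
surface = 0.5 * np.cos(2.0 * np.pi * x / 4.0)   # toy corrugation: one 'atom' every 4 angstrom
gap = tip_height - surface                      # local tip-sample separation
current = np.exp(-2.0 * kappa * gap)            # tunnelling current ~ exp(-2 * kappa * gap)

# The current surges wherever the gap narrows, i.e. directly over the bumps:
over_atoms = x[current > 0.9 * current.max()]
print(np.round(over_atoms, 1))
```

Small changes in gap produce outsize changes in current (here, a one-angstrom corrugation modulates the current roughly sevenfold), which is what makes atomic resolution possible and the engineering so unforgiving.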

So that’s when it goes right. What about when it doesn’t? Scientific instruments have always been devices of power: those who have the best ones can find out the most. Galileo knew this — he kept up a cordial correspondence with his fellow astronomer Johannes Kepler in Prague, but when Kepler requested the loan of a telescope, the Italian found excuses. Galileo saw that, with one of these devices, Kepler would become a more serious rival. Instruments, he understood, confer authority.

Today, however, they have become symbols of prestige as never before. I have several times been invited to admire the most state-of-the-art device in a laboratory purely for its own sake, as though I was being shown a Lamborghini. Stuart Blume, a historian of medical technology at the University of Amsterdam, has argued that the latest equipment serves as a token of institutional might, a piece of window-dressing to enhance one’s competitive position in the quasi-marketplace of scientific ideas. I recently interviewed several chemists about their use of second-hand equipment, often acquired from the scientific equivalents of eBay. Strangely, they all asked to remain anonymous, as though their thrift would mark them out as second-rate scientists.

One of the dysfunctional consequences of this sort of attitude is that the machine becomes its own justification, its own measure of worth. Results seem ‘important’ not because of what they tell us but because of how they were obtained. Despite its initial myopia, the Hubble Space Telescope is one of the most glorious instruments ever made, a genuinely new window on the universe. Even so, when it first began to send back images of the cosmos in the mid-1990s, Nature was plagued with content-free submissions reporting the first ‘Hubble image’ of this or that astrophysical object. Authors were often affronted to hear that the journal wanted, not the latest pretty picture, but some insight into the process it was depicting.

At least this kind of instrument-worship is relatively harmless in the long run. More problematic is the notion of an instrument as a ‘knowledge machine’, a contraption that will churn out new understanding as long as you keep cranking the handle. Robert Aymar, former director-general of CERN, was flirting with this idea when he called the Large Hadron Collider a ‘discovery machine’. He was harking back (possibly without knowing it) to a tradition begun by Francis Bacon in his Novum Organum (1620). Bacon’s ‘organon’ was a proposed method for analysing facts, a systematic procedure (today we would call it an algorithm) for distilling observations of the world into underlying causes and mechanisms. It was, in effect, a gigantic logic engine, accepting facts at one end and ejecting theorems at the other.

Except, as it turned out, the system was so complex and intricate that Bacon never even finished describing it, let alone putting it into practice. Even if he had, it would have been to no avail: it is now generally agreed among philosophers and historians of science that this is not how knowledge comes about. The early experimental scientists, such as those who formed the Royal Society, professed a preference for piling up facts in a Baconian manner while postponing indefinitely the framing of hypotheses to explain them; that approach gets you nowhere. (It’s precisely because they couldn’t in fact restrain their impulse to interpret that men such as Isaac Newton and Robert Boyle made any progress.) Unless you begin with some hypothesis, you don’t know which facts you are looking for, and you’re liable to end up with a welter of data, mostly irrelevant and certainly incomprehensible.

This seems obvious, and most scientists would agree. But the Baconian impulse is alive and well elsewhere, driven by the allure of ‘knowledge machines’. The ability to sequence genomes quickly and cheaply will undoubtedly prove valuable for medicine and fundamental genetics, yet these experimental techniques have already far outstripped not only our understanding of how genomes operate but our ability to formulate questions about them. Many gene-sequencing projects seem to depend on the hope that, if you have enough data, understanding will somehow fall out of the bottom of the pile. As the biologist Robert Weinberg of the Massachusetts Institute of Technology has said: ‘the dominant position of hypothesis-driven research is under threat’.

And not just in genomics. The United States and European Union have recently announced two immense projects, costing hundreds of millions of dollars each, to map out the human brain, using the latest imaging techniques to trace every last one of the billions of neural connections. Some neuroscientists are drooling at the thought of all that data. ‘Think about it,’ one told Nature in July 2013. ‘The human brain produces in 30 seconds as much data as the Hubble Space Telescope has produced in its lifetime.’

If, however, we wanted to know how cities function, creating a map of every last brick and kerb would be an odd way to go about it. Quite how these brain projects will turn all their data into understanding remains a mystery. One researcher in the EU-funded project, simply called the Human Brain Project and based in Switzerland, inadvertently revealed the paucity of theory within this information glut: ‘It is a chicken and egg situation. Once we know how the brain works, we'll know how to look at the data.’ Of course, the Human Brain Project isn’t quite that clueless, but that hardly excuses so flippant a statement. Science has never worked by shooting first and asking questions later, and it never will.

The faddish notion that science will soon be a matter of mining Big Data for correlations, driven in part by the belief that data is worth collecting simply because you have the instruments to do so, has been rightly dismissed as ludicrous. It fails on technical grounds alone: data sets of any complexity will always contain spurious correlations between one variable and another. But it also fails to acknowledge that science is driven by ideas, not numbers or measurements — and ideas only arise by people thinking about causative mechanisms and using them to frame good questions. The instruments should then reflect the hypotheses, collecting precisely the data that will test them.
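
The technical point about spurious correlations is easy to demonstrate for oneself. A minimal sketch (the variable counts and random seed are arbitrary choices of mine, not anything from the text): generate a couple of hundred series of pure noise and you will still find pairs that appear convincingly correlated.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n_vars, n_obs = 200, 50
data = rng.normal(size=(n_vars, n_obs))   # 200 mutually independent noise series

corr = np.corrcoef(data)                  # all pairwise Pearson correlations
np.fill_diagonal(corr, 0.0)               # discard the trivial self-correlations
i, j = np.unravel_index(np.abs(corr).argmax(), corr.shape)
print(f"noise variables {i} and {j} 'correlate' at r = {corr[i, j]:+.2f}")
```

With 200 variables there are nearly 20,000 pairs to test, so chance alone will produce correlations strong enough to look like discoveries; more data multiplies the pairs faster than it disciplines them.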

Biology, in which the profusion of evolutionary contingencies makes the formulation of broad hypotheses particularly hard, has long felt the danger of a Baconian retreat to pure data-gathering. The Austrian biochemist Erwin Chargaff, whose work helped elucidate how DNA stores genetic information, commented on this tendency as early as 1974:
Now when I go through a laboratory… there they all sit before the same high-speed centrifuges or scintillation counters, producing the same superposable graphs. There has been very little room left for the all important play of scientific imagination.

Thanks to this, Chargaff said, ‘a pall of monotony has descended on what used to be the liveliest and most attractive of all scientific professions’. Like Chargaff, the pioneer of molecular biology Walter Gilbert saw an encroachment of corporate strategies in the repetition ad nauseam of standardised instrumental procedures. The business of science was becoming an industrial process, manufacturing data on the production line: data produced, like consumer goods, because we have the instrumental means to do so, not because anyone knows what to do with it all.

High-energy physics works on a similar industrial scale, with big machines at the centre, but at least it doesn’t suffer from a paucity of hypotheses. Indeed, it faces the opposite problem: a consensus around a single idea, into which legions of workers burrow single-mindedly. Donald Glaser, the inventor of the bubble chamber, saw this happening in the immediate postwar period, once the Manhattan Project had provided the template. He confessed: ‘I didn’t want to join an army of people working at big machines.’ For him, the machines were taking over. Only by getting out of that racket did he devise his Nobel-prize-winning technique for spotting new particles.

To investigate the next layer of reality’s onion, there’s no getting away from the need for big particle colliders to reach the incredible energies required. But physics will be in trouble if, instead of celebrating its smartest marriages of ideas and instruments, it becomes a cult of its biggest machine.

The challenge for the scientist, particularly in the era of Big Science, is to keep the instrument in its place. The best scientific kit comes from thinking about how to solve a problem. But once it becomes a part of the standard repertoire and acquires a lumbering momentum of its own, it might start to constrain thinking more than it assists it. As the historians of science Albert van Helden and Thomas Hankins said in 1994: ‘Because instruments determine what can be done, they also determine to some extent what can be thought.’

Comments

  • Preston Garrison

    Before you quote old Chargaff as a sage, you should note that he completely misjudged Watson and Crick, who really were idea people, as just a bad comedy team, and he implacably opposed DNA cloning, which has been necessary for all the amazing progress that molecular biology has made in the last few decades. Chargaff was a better than average writer, but as a scientist he missed the big things that were right in front of him. By 1974 he was a curmudgeon, bitter that the world of molecular biology was leaving him behind.

  • Ken Weiss

    There will hopefully always be those who actually have innovative imagination. But the system for careers and funding that we have now is not tolerant of much deviation from the safe (and big-scale) inductive path, couched of course in language that makes it sound Important and Innovative. This is reinforced by the fact that, after the fact, one can always point to what worked and claim the approach was worth it, rather than asking what more might have been learned, for the same or less cost, with a more focused approach.

    There will always be advances and perhaps in all human affairs most of us are drones and only a few are really innovative and have new ideas. That may be the price for the middle class to have been able to get into science in the first place.

  • mcal

    see Latour's "Visualization and Cognition: Thinking with Eyes and Hands" (1986)

  • http://thewayitis.info/ Derek Roche

    The larger question raised by this story could be whether mechanical instruments of any and every type, including computers, don't carry with them the false presumption that the systems under study are themselves mechanical. What if they're not? What if even the physical world is a complex adaptive system?

    • http://www.aeonmagazine.com/ Ed Lake

      Lee Smolin's recent book Time Reborn pursues this thought in a number of interesting ways.

  • Agga

    This type of obsession with the tool rather than the real world is abundant. Take for example the media's obsession with "social media". Several recent news articles about the Arab Spring, and other topics, have been along the lines of "The Twitter Revolution". And I find news stories all the time that focus not on what happened, but on what technology made it happen or allowed us to see it.

    In the same vein, there is a facebook group called something like "Science is awesome" and it features awesome things like a picture of an animal and some interesting facts about that animal. Well, science is awesome, but it had little to do with why that Mimic octopus is awesome. To give science credit is like smelling a rose and exclaiming "my nose is awesome". It is true but it misses the point.

    Except that I suspect that "the point" is changing. We are all up our own noses nowadays. We prefer to marvel at the things we have made, rather than marvel at the much more imperfect and uncontrollable world around us. Hence, in for example Japan, young people choose to be single and just watch porn. Because humans are "disappointing" and messy and inconvenient. Better to get a bot.

    I suspect a feedback loop where we make an artificial thing, and it fascinates us for a while. Then, when we need new stimuli, we try to make new artificial things instead of turning to the things already out there. We get less able to appreciate those things with time, and so we are driven to create further things, and so on and so forth.

    That isn't a healthy state of affairs, which is obvious for those of us looking in from outside it.

    • DJ

      Given that both the photo of the Mimic octopus and the interesting facts about the octopus derive from scientists doing science, exactly who or what do you give credit to?
      "Science is awesome" is simply a page that shows some interesting, thought-provoking stuff about the world that we wouldn't know about without the process of science.
      Your nose analogy is what misses the point: everybody's got a nose, everybody knows how to appreciate the products of your nose (nice smells), noses don't get vilified and denigrated by right-wing politicians, noses aren't under constant threat of crippling funding cuts, and therefore there's no need for a facebook page to promote the idea that noses are useful and awesome. This is not true for science, and if you don't believe me, look up how many Americans believe that the planet is 6000 years old.

      • anonlol

        Please don't bring politics into this. Left and right both stifle science plenty.

        He was right about the octopus thing, and the general growing fascination with certain elements of science, which I'll call "popsci". The Mimic octopus isn't science; it is a natural phenomenon. Science is the process of understanding and categorizing the outside world, not simply the depiction of the world itself.

        That said, it's just a facebook group, and the octopus is fascinating to watch, so whatever. But the trend remains the same; many self-proclaimed "science lovers" don't love science, but only the most immediately and sensually observable results of science. Science itself is a tedious, often fairly boring process.

        As for the creationist bit, all I could find was a Gallup poll saying that 46% of Americans didn't believe in evolution. I also found these:

        http://www.dailykos.com/story/2012/08/18/1121497/-Why-I-remove-Gallup-and-Rasmussen-from-the-Abbreviated-Pundit-Round-up-charts#

        http://triblive.com/usworld/nation/4139574-74/gallup-review-election#axzz2psSQigxq

        It would not surprise me if the percentage of measured creationists was somewhat overinflated. I'll concede that a significant portion of Americans are creationists, but they're entitled to hold whatever beliefs they want.

        Furthermore, I'll argue that quite a lot of those people who "accept evolution" have done little research or thinking into the matter themselves, and simply accept the "final word" from mainstream scientific authorities, or perhaps popsci sites. I say this because I have observed many people, particularly on popsci-type sites, who completely misunderstand the nature of evolution. Now, this is certainly better than religious creationism, but I don't find it particularly significant if they arrive at their beliefs through blind faith, the same way creationists arrive at theirs.

  • Adam

    A bit of an overstatement. I think of Tycho Brahe's lifetime of measurement without theory leading to Kepler and Newton. Not everyone has the inclination to be a theorist and very few have the ability/timing to transform the theoretical landscape.

  • A Burton

    Makes me think of Niels Bohr's pet reproof, "You're not thinking, you're just being logical." Machines can extract details and shuffle data hugely faster than humans, but that doesn't make them intelligent. When computers outplay chess masters, it's not because they're clever; they simply do things so fast it's like giving them a million years against the human master's forty minutes. Given any program one must, as Kurt Gödel demonstrated, find a point outside the program to stand in judgment on its output.

    The sooner we make our culture aware of this generally, the more easily we'll maintain a proper perspective on what human thinking is, and how it differs from machine capabilities, however impressive.

  • selimibn

    What we can conclude from this article is that Philip Ball doesn't seem to visit any theoretical scientists, and that he has some ideas about things that might be happening but in fact are not. What are his examples of tool-worshiping in science? Some scientists seemed to him as though they didn't want to accept that they use second-hand equipment (maybe they ran afoul of some institutional regulation?), the Hubble Telescope took too many pretty pictures (and I agree to some extent, but it also gave more accurate measurements of the age of the universe and the Hubble constant that contributed to the ΛCDM picture of the universe), and the LHC is like Francis Bacon's “organon”. (It isn't, and the author stops well short of outright naming anyone who thinks it is.)

    The historical cases he holds up as exemplary are exactly the way science works today: instruments are designed to answer specific questions, not built as all-purpose data-collection tools. The LHC is, indeed, a piece of apparatus that answers a “well-defined question” with a yes/no answer. For example: Is there supersymmetry at the sub-TeV scale? Is the Higgs mechanism responsible for the spontaneous symmetry breaking of the weak gauge symmetry? I can't see why it is different from Rutherford's apparatus; is it because it is bigger and more expensive? The questions it is answering are farther from the scales of human life; there really is no reason why they absolutely have to be accessible to table-top experiments.

    The truism “you explore the things you have the means to explore” will mislead us if we do not adequately consider the fact that those things comprise a growing set, and one of the motors of its growth is precisely humanity designing and building means to explore those things it couldn't before. Why didn't Rutherford discover the top quark with thin glass instruments? Well, he clearly explored the things he had the means to explore.

    I do not know anyone who thinks that the data from gene-sequencing projects is the scientific result, but maybe the author can provide some quotes. And while it is true that “physics will be in trouble if (...) it becomes a cult of its biggest machine”, I see no sign that this is happening.

    • DJ

      See my post above. I totally agree. I also don't know anyone who thinks that the data from gene-sequencing projects is the scientific result, and I work with people doing sequencing every day.

  • DJ

    Oh please. Strawman.

    I work in a DNA sequencing lab, and the first question we always ask any researcher is "what is your scientific question?" And we usually don't even need to ask, because they are already well aware that masses of data do not make a good scientific study without some thinking and defining of "what question am I trying to answer?"

    "Many gene-sequencing projects seem to depend on the hope that, if you
    have enough data, understanding will somehow fall out of the bottom of
    the pile."

    Really? Name them.

    "The faddish notion that science will soon be a matter of mining Big Data for correlations".. if it's a fad, who's in this fad? I don't know anyone who believes this. If it's so faddish, where's the crowd following the fad?

    Another example of a criticism of things that would be bad if they were actually happening but on closer examination are really not happening to any degree. This article lacks something that's as important to journalism as it is to science: actual data.

    Aeon magazine seems to make a habit of these types of articles. A bad habit. Aren't there any real, important issues to cover?

    • Bob

      This is spot on. What is to be gained by focusing on scientists' fetishization of their equipment? It smacks of a ridiculously anti-science agenda. They may also show off equipment to journalists because they know simpletons like shiny things, which doesn't make them arrogant unless the journalists don't bite.

  • http://www.bobvanvliet.nl/ Bob van Vliet

    Good read!

    Philip, are you familiar with 'Thing Knowledge: A Philosophy of Scientific Instruments' by Davis Baird? It traces almost exactly the outline of your article, but comes to a slightly different conclusion.

    Baird discusses how instruments, in some cases, can be seen as *material theories* themselves, instead of just expressions of conceptual theories. He takes a more generous view of what it means to keep them "in their place", discussing for instance the role models and instruments played in Watson and Crick's discovery of the double helix, exactly in the area of producing "the idea".

    If this is an area that continues to interest you, I think you'll find Baird's work novel and quite interesting.

  • Philip Ball

    Bob van Vliet,
    Thank you so much - I ought to have known about Baird but did not, and will certainly look that up. I felt pretty sure someone would have said this before, and probably better.
    DJ,
    It's good to hear that you see that emphasis on scientific questions first (although of course what really matters is the nature of the answers you're given - a "scientific question" is not necessarily the same as a hypothesis).
    But you're being a little disingenuous to suggest that this isn't an issue. Look at what Robert Weinberg, a leading specialist in cancer genetics at MIT, has said about the matter in Nature 464, 679 (2010). Not everyone will agree with Weinberg, but I guess we have his word against your (anonymous) one?
    On mining Big Data for correlations: take a look at what Chris Anderson says at http://www.wired.com/science/discoveries/magazine/16-07/pb_theory. You might think he's talking nonsense, but for a lot of folks he is a guru. And he's not alone.
    selimibn,
    Sure, I visit theorists. Not sure why that is relevant, though... The LHC is a great bit of kit, but it is also fetishized - take a look at the Science Museum's new exhibition "Collider" (which I enjoyed all the same). Likewise, I'm a fan of the HST, but as I say, in its early days it was brandished totemically by some of those who used it. Blume gives a good account of such tendencies - you might disagree, but you might at least take a look first.

    • selimibn

      I just meant that, presumably, theorists don't say: ‘Do you want a tour of the lab?’ Rather, they probably want to talk about ideas, of which they have plenty (I know I don't say it, at least). I just came from a high-energy physics meeting and I can happily report that scientists still exhibit critical thinking skills when discussing big experiments, that they are very careful to examine their limits and assumptions, and that the foremost thing is still the physics you can do with them. They don't sit around “fetishizing” the LHC; they care whether the branching ratio of Higgs to two photons has been consistently measured in the independent CMS and ATLAS experiments, whether the backgrounds are right, whether the next-to-next-to-leading-order calculations can help discard some beyond-the-standard-model constructions, etcetera.

  • Scientismist

    As someone who works with a lot of grad students in the sciences, I see a trend toward unsophisticated epistemologies at least abetted by technology. Extreme competition in grantsmanship and tenure also contributes, as does increased specialization (if not fractionation) of disciplines, and perhaps even the size of, and division of labor in, top-notch labs. What I see is more and more mere "technicians" getting PhDs and fewer and fewer who understand the epistemology of their own endeavors. But this is not news to philosophers and sociologists of science, who do not simply rely on the stories scientists tell each other (and the public) but empirically investigate the processes of science, something those same scientists do not do.
