Things genes can’t do

Researchers working on cloning projects at the Beijing Genomics Institute in Shenzhen, China, April 23, 2012. Photo by Tyrone Siu/Reuters

Simplistic ideas of how genes ‘cause’ traits are no longer viable: life is an orderly collection of uncertainties

Kenneth Weiss is professor of anthropology at Penn State University. He is the co-author of The Mermaid’s Tale: Four Billion Years of Co-operation in the Making of Living Things (2009).

Anne Buchanan is a research associate in the anthropology department at Penn State University, and author of The Mermaid’s Tale: Four Billion Years of Co-operation in the Making of Living Things (2009).

DNA is a metaphor for our age. It conveys the powerful idea that our identity is scientifically reducible to an unambiguous, determinative code. We hear this idea expressed all the time. The car company Bentley advertises for employees saying: ‘Hard work is in our DNA.’ The footballer David Beckham says: ‘Football is in England’s DNA.’ And a toll-collector for the Golden Gate Bridge in San Francisco says: ‘Our DNA is embedded in this bridge.’

Everyone knows these statements aren’t literally true, but although we might understand their figurative meaning, they continue to reflect, and influence, how we think. Even biologists, being quite human, too often think metaphorically and assign properties to genes that genes don’t have. The metaphor works because our society has a deeply embedded belief in genes as clearly identifiable material things which explain our individual natures, making them inherent from the moment of our conception and thus predictable. If hydrogen and oxygen are the causal atoms of water, genes are the causal atoms of our existence.

And we’re surrounded. News stories appear every week announcing the discovery of a gene ‘for’ this trait or that. Direct-to-consumer (DTC) genetic testing and ancestry determination companies are thriving, because consumers believe their genes will tell them more about their ancestry than family stories can. They also want to know whether they are fated to suffer from particular diseases, and they believe that this too is written in their genes. Sperm banks suggest that prospective parents consider a potential donor’s hobbies, the languages he speaks, his favourite foods, or his educational attainment, as though these traits were written in his sperm.

But try or wish as we might, the idea that everything about us is reducible to genes is not supported by real-world observations. Indeed, a simplistic picture of genes as individual causal things with straightforward effects is out of date in many ways. For starters, we now know that no gene acts alone. Complex traits — such as the diseases that most of us will eventually get — result from the interactions among multiple genes and/or environmental factors. Predicting disease depends not just on identifying our genotype, the particular, unique set of DNA sequence variants we inherited, but also on predicting our future environments — what we’ll eat, drink, or breathe, the medications we’ll take, and so on — which neither DTC companies nor anyone else, no matter how ‘expert’, can do.

Because environments vary and every genome is unique, multiple studies of a given trait or disease will generally yield different results. DTC estimates of disease risk are inherently probabilistic, not fixed. The same applies to choosing a sperm donor based on behavioural traits — of which any genetic component would likely be swamped by cultural and environmental factors, such as the food the donor was exposed to when growing up, or whether he could afford to go to university.

The metaphor that corporations and nations have their own DNA, and the belief that genes have straightforwardly determinative effects, might provide a comfortable, tempting image of simple cause and effect. But it’s akin to replacing the religious concept of ‘soul’ with the modern, scientific one of ‘gene’, and that’s very misleading. It tends to assign a kind of fixed metaphysical essence, analogous to Calvinism’s predestination, and drastically simplifies what are actually complex phenomena (dogmatic beliefs are like that). And there are consequences.

Genes are certainly real, so it’s important to understand what they can tell us about ourselves. You might be told that, based on your genotype, you have a (let’s say) 15 per cent chance of heart disease. This is a risk, or probability, not a certainty, nor anything like it. Probabilities are not the same as ‘causes’, and they can be extremely difficult to grasp. For example, even in the simplest situation, such as when we flip a coin to see who pays for the drinks, we might say our thumb is the cause of the flip itself, but we tend to think of the actual result — the ‘heads’ or ‘tails’ — as down to ‘chance’.

But what do we mean by chance? It is easy to wrap our heads around ideas such as coin-flipping by assuming that every flip has a 50-50 chance of a heads or tails result. Sounds simple enough, but what if we need to predict the specific outcome of a large number of such flips, somewhat like the challenge we face in predicting, from a person’s genotype, the risk of life events such as a heart attack or diabetes? Each of us has a unique set of variants in perhaps hundreds of different genes that separately contribute to the probability of disease. What will each one do? Will they flip as ‘illness’ or ‘health’? Is that even a realistic question?

Unlike coin-flipping, disease prediction depends on knowing, assuming or guessing the underlying risk associated with each individual genetic variant, risks that differ from gene to gene, and that do not work like the simple heads and tails of a coin (which, if the coin is fair, will always carry the same risk). What kinds of ‘probabilities’ are they when it comes to understanding what can be predicted from an individual’s genes and the major life decisions that might follow? Just as a coin heavily biased towards heads can come up tails on a given flip, a person inheriting a genotype that raises the risk of diabetes might not in fact get the disease. And risks can easily be perceived as more serious than they actually are, even if we assume the risk estimate is solid. If the risk of a given disease is, say, 2 per cent in the general population, and our best guess is that your genotype raises that by a whopping 25 per cent, that still only changes your actual risk to 2.5 per cent.
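
The distinction between relative and absolute risk in that example can be checked in a couple of lines. A minimal sketch in Python, using only the figures from the paragraph above:

```python
# Baseline lifetime risk of the disease in the general population: 2 per cent
baseline_risk = 0.02

# A genotype said to raise risk 'by 25 per cent' is a relative increase,
# not an absolute one
relative_increase = 0.25

# The absolute risk is the baseline scaled up by the relative increase
absolute_risk = baseline_risk * (1 + relative_increase)

print(f"absolute risk: {absolute_risk:.1%}")  # prints "absolute risk: 2.5%"
```

A 25 per cent relative increase sounds alarming, but on a 2 per cent baseline it shifts the absolute risk by only half a percentage point.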

So far, we’ve considered purely physical traits. What role do genes play in non-physical traits such as behaviour, or even the ultimate questions of consciousness and free will? Here, the metaphoric replacement of ‘soul’ by ‘gene’ works in a different way. How much of our feelings, thoughts and behaviour is actually determined from the moment we are conceived, and could in principle be read like a computer program from our genome?

The extent to which we have free will is a fundamental aspect of how we view our ‘selves’, and for many religions, relates to whether we can be held responsible for our moral behaviour. The scientific view, on the other hand, goes something like this: we live in a totally material world made of matter, energy, and the forces that connect them. Since genes are the fundamental causal elements of life, it would seem inevitable that, if we knew enough, we could predict everything about all of us — our health, our behaviour, and our ideas. The alternative would seem to be mysticism — invoking some sort of immaterial something-or-other that we can’t measure but that affects who and what we are. But if genetic prediction is so unreliable and complex, how did our view of ourselves get so entangled in genetic determinism in the first place, and what might all this tell us about not just physical traits, but such elusive ideas as free will?

Today’s gene metaphor is a fabric woven of two threads from the 19th century. In 1858, Alfred Russel Wallace and Charles Darwin proposed a stunning new framework for understanding life in a way that was entirely materialistic and freed from mysticism. The diversity of life, they said, is due to the historical process of evolutionary divergence from common ancestry, in which present-day traits and functions are an outcome of natural selection. Darwin and Wallace developed their theory in the Newtonian era, when the aspiration of science was to understand existence in terms of ‘laws of nature’. Darwin viewed natural selection as, like gravity, a ubiquitous, essentially deterministic causal force in a relentlessly competitive world, a view he expressed in the foundation text of evolutionary biology, On the Origin of Species (1859).

Evolutionary determinism was the first thread of the gene metaphor. Natural selection preserves only what is inherited from the successful organisms in the past. The second thread comes from Darwin’s contemporary, Gregor Mendel, who conducted his studies of peas in order to understand the nature of inheritance. His findings also fit the Newtonian worldview perfectly. If natural selection was a law of nature, like gravity, then Mendel’s laws of inheritance promised to identify the fundamental building blocks of biological causation. By choosing specific traits that he knew bred true, Mendel identified a pattern of inheritance that provided perhaps the most powerful tool for research design in the history of science. The genetic research that followed eventually led to the identification of the nature of DNA, the locations and structure of genes in DNA, and the understanding of how they code for proteins. But that same Mendelian thinking made us conceptual prisoners of the deterministic, law-like interpretation of genetic function that leads us to think of traits themselves, not just genes, as discretely packaged units, produced by discretely packaged genes. That suggests that a pea seed already contains mini-green peas, or that a fertilised human egg contains a tiny human: a kind of genetic superstition.

Mendel showed that inheritance was probabilistic in the same sense as coin-flipping. Each parent carries two copies of every gene, and they each transmit, at random, one of those copies to each of their offspring. But once the particular randomly transmitted copies are inherited, their effects in the offspring follow causally deterministic principles: the resulting peas were either green or yellow, wrinkled or smooth.
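
Mendel’s two-step pattern (random transmission of one of each parent’s two copies, then a deterministic effect) is simple enough to simulate. A minimal illustration in Python, using the classic cross of two heterozygous plants in which the yellow seed allele is dominant over the green:

```python
import random

def transmitted_allele(parent, rng):
    # Mendel's law of segregation: each parent passes on one of its
    # two gene copies, chosen at random
    return rng.choice(parent)

def seed_colour(genotype):
    # Once inherited, the effect is deterministic: the yellow allele 'Y'
    # is dominant over the green allele 'y'
    return "yellow" if "Y" in genotype else "green"

rng = random.Random(0)
parent = ("Y", "y")  # a heterozygous plant, as in Mendel's crosses

offspring = [
    seed_colour((transmitted_allele(parent, rng), transmitted_allele(parent, rng)))
    for _ in range(10_000)
]

yellow_fraction = offspring.count("yellow") / len(offspring)
print(round(yellow_fraction, 2))  # close to 0.75: Mendel's 3:1 ratio
```

The transmission step is a fair coin flip, but the colour that results from any given genotype is fixed, which is exactly the split between probabilistic inheritance and deterministic effect described above.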

There are plenty of instances in which genes do seem to be determinative, and work as they did for Mendel’s peas. Hundreds of known diseases, for example, appear to be caused by one or just a few genetic changes that disrupt or destroy a gene in some major way. Examples include cystic fibrosis, muscular dystrophy, and diseases of the nervous system such as Rett syndrome or Tay-Sachs disease. But as a rule, these ‘Mendelian’ diseases are a minority of rare traits that appear early in life, regardless of lifestyle exposures. The success of medical genetics in picking this easy-to-find ‘low-hanging fruit’ hasn’t given us a way to harvest the rest.

This isn’t for want of trying. Billions of dollars have been spent on searching for ‘the’ genes ‘for’ such common diseases as obesity, heart disease, type 2 diabetes, stroke, hypertension, cancers, asthma, and countless other afflictions. There have been few notable successes. The frustration is great because for most traits, including most diseases, members of an affected person’s family tend to have increased risk of the same trait or disease, in ways that can’t entirely be blamed on shared environment. This strongly reinforces the DNA metaphor by suggesting that genes must be contributing to risk in important ways. But if so, how can they be as slippery as eels when we try to find them? The reason is that the fabric of genetic causation is probabilistic both in terms of the inheritance of genes, and their effects.

The standard ‘scientific method’ we were all taught in school was based on stating, and then testing, a specific hypothesis about what causes some outcome; for example, that mutations in the LDL receptor gene (which affects cholesterol levels) can cause heart disease. However, most studies of such specific hypotheses have come up empty. The growing availability of wholesale DNA sequencing technology, largely initiated by the completion of the human genome project in 2003, led to the widespread abandonment of standard hypothesis-based genetics, to be replaced by what is called ‘hypothesis-free’ genomics.

In keeping with the DNA metaphor, the idea of the genomic approach is to assume that genes simply must be causing a trait of interest, and to look across the entire genome to find variants that are more common in individuals with the trait than in those without it. The hope was that we would soon eliminate the debilitating or fatal diseases to which most of us now fall victim, once we had exhaustive knowledge of genome-wide variation.

Genomic studies searching for causal genes have grown ever larger and more expensive, but commensurately important results have yet to roll in. Most of the estimated overall genetic influence on the traits or diseases of interest is still unidentified. What we’re finding instead is ‘polygenic’ causation, that is, that many different parts of the genome contribute mainly trivial individual effects.

A typical well-studied example is Crohn’s disease, an inflammatory bowel disease that runs in families, and thus would seem to have a major genetic component. However, the most recent study, by Heather Elding and colleagues at University College London, published in The American Journal of Human Genetics, estimates that around 200 genes are associated with the disease, most with very small effects, which together explain only a small part of its genetic background. To liken this again to coin-flipping, variants at each ‘causal’ gene affect risk in some probabilistic way, usually very small — far from 50-50 — and with no guarantee whatever that the same variant confers the same risk in different people who carry it, or in different populations, or in men or women, or at different ages. It’s as though each coin keeps changing its probability of coming up heads. Thus, the predictive power of this type of ‘personalised genomic medicine’ is generally very weak, like trying to predict the outcome of hundreds of individual-specific coin-flips. That’s why, with some fortunate exceptions, the clinical or therapeutic value of all these genetic studies has so far been slight.
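
The shifting-coins analogy can be made concrete with a toy simulation. This is purely illustrative, not a model of Crohn’s disease: the variant count echoes the roughly 200 regions mentioned above, but the baseline risk, carrier frequency, and effect sizes are arbitrary assumptions, not estimates from any study:

```python
import random

N_VARIANTS = 200  # echoes the ~200 regions implicated in the study above

def personal_risk(rng):
    # Each person carries a different subset of many weak risk variants;
    # each carried variant nudges a small baseline risk upward by a tiny
    # amount, and the same variant's effect can even differ between carriers
    risk = 0.005  # arbitrary baseline risk (an assumption)
    for _ in range(N_VARIANTS):
        carried = rng.random() < 0.3       # carrier frequency (assumed)
        effect = rng.uniform(0.0, 0.0005)  # tiny, shifting effect (assumed)
        if carried:
            risk += effect
    return risk

rng = random.Random(1)
risks = [personal_risk(rng) for _ in range(1_000)]

# Risks differ from person to person, yet all remain small, and the set of
# variants behind any given risk is different for every individual
print(min(risks), max(risks))
```

Even with the full ‘genotype’ in hand, no single variant tells you much: each flip is weak, and everyone is flipping a different set of coins.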

It’s a similar story for normal traits as it is for disease. Height is an easily measured trait that clearly runs in families, and many studies have been done looking for genes for this trait. More than 400 contributing genetic regions, from an estimated 700 or so, have been found but, again, none with very large effects. In fact, to date, only 10 per cent or so of the variation in height has been explained, as a study from Exeter University published in Nature in October 2010 demonstrated. Many more genes will be found to contribute, but environmental factors such as diet or illness will as well.

Height and Crohn’s disease are just two of many instances of this same basic pattern. Behavioral and psychiatric traits are proving to be just as intractable, and the story is similar with the same kinds of studies in other species, as varied as yeast, insects, and plants. What is being documented is the blunt reality of the state of nature. No matter how unwelcome it might be for those who still hope for simple deterministic-like genetic causation, complex traits are affected (one should perhaps no longer say ‘caused’) by multiple genes with individually small and typically fickle effects. In addition, nobody disputes that there is usually a hefty, indeed often predominant, environmental component to the risk of disease, although it’s typically not very seriously considered by geneticists. These environmental factors are themselves quite complex and elusive to assess, or even identify.

Perhaps the most important single fact lurking in all of this is that when numerous genes contribute to a trait, the specific set of contributing variants is different for every individual. This is a many-to-many causal relationship: there are many genetic paths to a single height, blood pressure, triglyceride, or cholesterol level. Equally, a given genotype is consistent with many different trait values. Each genetic variant is a very weak ‘coin flip’ with unstable probabilities, and everyone is flipping a different set of coins. So, even if we identify the genotype of an individual, we can’t as a rule accurately predict its effects, even though this is just what ‘personalised genomic medicine’ has promised to do.

This makes another aspect of the DNA metaphor problematic. Instead of the widespread view of life as raw, relentless Darwinian competition leading to a single ‘fittest’ way to be, a far better way to see it is in terms of cooperation. By cooperation we do not necessarily mean the social, emotional variety. Cooperation describes the way in which a trait is produced by many factors, the countless genes and lifestyle aspects that contribute to the trait. If these factors do not work adequately together, the trait will not successfully be built into an embryo in the first place. Extensive webs of cooperation within us — genes with genes, organelles with organelles, cells with cells, tissues with tissues, and so on — mean that except for the rare disastrous instances, individual contributing genes neither spell doom nor success on their own. If there are many ways to fail — as the rare, serious genetic mutations show — there are a great many more ways to succeed.

Another way to view cooperation among genes is that evolution has provided a kind of redundancy that protects individuals from harmful mutations and overly harsh screening by natural selection. If each gene is, in itself, not a deterministic cause of some useful trait, then the organism can often do just fine with modification of or even loss of that gene, because other contributing genes cover for it, or any one modification has only a trivial effect. We know, for example, that many well-known variants that are clearly associated with very serious human disease are the normal state in other species. Indeed, whole-genome sequence studies have consistently shown that all of us carry a significant number of defunct or seriously disrupted genes, and this can include genes whose mutations are clearly implicated in some disease contexts, even if we ourselves are healthy.

All this might seem confusing: genes are molecules and hence fundamental causal agents of life, yet their effects are highly probabilistic and very hard to pin down or predict. As we have tried to explain, although genetics and evolutionary research are often very technical, the issues are actually reasonably simple. That’s fortunate, because an understanding of how life and evolution work as an orderly collection of uncertainties can lead us to a better sense of what is ‘inherent’ in our nature, and why.

In this light, we can return to the intriguing topic of behavior and, particularly, of free will. What we know about life undermines the explanatory power of molecular reductionism: that is, the attempt to use genetic variants to predict not only physical traits but also higher-level phenomena – such as the ability to do calculus, or write poetry – which seem to ‘emerge’ magically out of nowhere. For scientists attempting to understand life’s complexity, this might be the winter of our discontent, but Richard III’s soliloquy was written in Shakespeare’s hand — not his genome.

Complex organisation arises from webs of interaction among causal factors. Even if individual factors cannot be held responsible for particular developments, complex phenomena such as people, skills, skulls, languages, and even football teams clearly do exist, and have a material rather than any mystical or immaterial basis. In fact, emergent complexity takes essentially the same form, and presents the same challenge, in the very different contexts of biology, ecology, anthropology, sport — and free will.

But here’s the conundrum we mentioned earlier: if science says that the world is an entirely material phenomenon following universal laws of causation, then even the idea that we are responsible for our thoughts and actions comes under siege. Personality? Intelligence? Criminality? Political preference? You name it — even our moral decisions must, in principle, be predictable from our inherent, inherited genome. Yet, our thoughts and actions seem to be even farther beyond the reach of gene-based prediction than physical traits such as diabetes or height, which we’ve seen to be extremely complex in their causation. Is this just a temporary limit in scientific knowledge, or is something more profound going on here?

The question is more than incidental, because it raises the rigid idea of mind/body dualism. Dualism asserts that mind and consciousness, whatever they are, are free from the usual material constraints. In other words, we have free will simply because we feel that we do. Free will is at the heart of assumptions that we are morally responsible for our actions, which in turn affects social and legal policy as well as religious notions of earned salvation. Clearly, if individuals are just the product of their genes, then they can’t be held responsible. Yet, how can they not be the product of their genes?

An answer might lie in the understanding of complex causation that we have presented here. We aren’t qualified to deal with religious issues about moral responsibility, but from a scientific point of view there is no mind/body dualism. Mind, wondrous though it might be, is in fact the product of molecular forces, including genes. Yet the mind seems fundamentally unpredictable from genes. The reason is that the brain and its activities are the result of countless billions, if not trillions (or more) of ordinary molecular and cellular interactions of all sorts, each of them probabilistic, from gene usage to the formation of neural connections, beginning before birth and extending over our lifetime’s experiences. During development, our brains are programmed to ‘wire’ up in a very general way, but the details in each individual are the result of experience, and our individual behaviours are the result of our brains responding to our unique set of experiences.

We should not be at all surprised that, just like most other traits, behaviour is not specifically predictable from genes. The massive web of probabilism makes such prediction weak at best, just as we’ve seen for physical traits. Our mental activities feel as if they are free, and their unpredictability supports that feeling. But the reason is that the causation involved is so complex and deeply probabilistic that it is, in effect, unpredictable even if we were to try to enumerate all the contributing factors. In that sense, for all practical purposes, we are indeed free.

It is sobering to point out that none of these issues about determinism, probability, complex causation — and even their implications for free will — are new. They can be traced back to the classical philosophers, were vigorously debated alongside the development of probability and statistics in the 18th and 19th centuries, and were then reinforced by discoveries in sub-atomic physics in the 20th century. The significance and challenge of probabilistic multifactorial causation have long been recognised. What is new is that we now have much better documentation of this problem from a genetic point of view. But, conceptually, we have not advanced very much in our understanding of what are deeply puzzling aspects of the way the cosmos — including life — works.

Human beings don’t like things that are unexplained. We want the comfort and sense of safety that comes from predictability. Perhaps as we are evolved biological organisms, uncertainty is unsettling to us. And, in the scientific era, we assume a material understanding of causation. That’s what the idea of determinism represents in a simple, easy-to-grasp way. We want to be in control, to be able to manipulate nature to alleviate the problems that we face in a finite life in a finite world. We want our causes to be simple, real causes, and that is perhaps why the metaphor of the gene as the atom of causation in life is so easy to absorb, and its subtleties so easy to overlook. We are made very uneasy by things that are only probabilistic unless, as in coin-flipping, we can sense what’s going on. When we can’t see it, and causation is many-to-many, that is far too much for our minds to deal with easily. Yet that seems to be the reality of the world.

  • kranthi askani

    awesome...thank u for the article

  • Gyrus

    Great article, thanks!

    One thing:

    Human beings don’t like things that are unexplained. We want the comfort and sense of safety that comes from predictability. Perhaps as we are evolved biological organisms, uncertainty is unsettling to us.

    This seems to be over-generalized, projecting perhaps a quite recent trait across the whole species. It made me think of a quote from a book by Mathias Guenther on the religion of southern African Bushmen:

    The Bushmen ... appear to have no lack of tolerance for the ambiguity inherent in their belief system. They seem untroubled mentally and emotionally by such cosmological and logical incongruities as humans merging identities with the animals of myth and veld, or god being both creative and destructive - the source of disease and death, but also healing, along with a physiological-mystical bodily potency. ... The contradictions in their "religious attitude," their theology, and their cosmology - of which they may be made aware only by the probing questions of resident anthropologists - do not cause them intellectual unease. They seem not only unperturbed by the great variation in beliefs and myths, as well as the narrative accounts thereof which they hear from other people, but actually seem aesthetically to cherish the interpersonal idiosyncrasies of ideas. (Tricksters and Trancers, p. 227)

    As we know, the figure of the trickster in mythology became marginalized as religions became more civilized and organized, and this figure contributed significantly to the image of the Devil in Christianity. Science, I believe, inherited much from Christianity even as it claimed to render it obsolete.

    I wonder if our aversion to uncertainty is less to do with being "evolved biological organisms", and more to do with being advanced technological organisms, whose intuitive understanding of nature is clouded by the membrane of mechanism that stands between us and the organic world?

    I'm not wanting to set up a simple dualism here. Obviously Bushmen would still have some aversion to uncertainty, and there is no absolute line between nature and artifice. But since relatively predictable mechanisms, relying on simple cause-and-effect, have permeated our world, they also seem to have conditioned our sense of the world. Perhaps the recent rise of often infuriatingly unpredictable technologies (e.g. complex software) is beginning to soften this industrial-age simple-mindedness?

    • Ken

      We were just noting the possibility (Just-So story?) that animals, including our forebears, are made uneasy by uncertainty: we want to know where the lion is, where our food is, and where our rivals may be. That was our surmise, and we know that modern life may be very different from (though usually less immediately dangerous than) that of the San ('Bushmen'). Too many choices probably explains why everybody seems to need a personal trainer and a therapist!

    • beachcomber

      My understanding of the Bushmen (and other first peoples) is that their sense of survival in a constantly challenging environment doesn't allow much time for the luxury of contemplation of self and its relation to the cosmos.

      There are of course well-established mythological elements which perpetuate their world view (documented by Lucy Lloyd and Wilhelm Bleek in the 1860s in Cape Town).

      It would seem that their closeness to the earth, their intense awareness of the physical elements (fauna, flora, seasons, weather), their impact on their lives and consequently a corresponding understanding that they are a very small part of an enormous space and are only able to affect their immediate surroundings to a limited extent, has given them this equanimity, enabling them to accept their lot and also to be humble enough not to think that they can influence or comment critically on someone else's destiny.

      • Gyrus

        I see where you're coming from, but your basic idea of foragers not having enough time to contemplate self and cosmos is problematic, if not flat out wrong. Marshall Sahlins' work is famous for his claim that, contrary to the wisdom received from Hobbes et al, foragers actually spent a relatively small amount of time on survival needs (about 4 hrs / day is the usual figure). Naturally his work has been challenged - you can rely on any findings that challenge deeply-seated prejudices of a society to be shot down, whether they're right or wrong. I've not followed the recent debate but it seems complex, and Sahlins is far from having been unseated. Certainly we should be cautious about positing a simple laid-back Eden, but we can safely assume most foragers aren't engaged in a non-stop struggle to make ends meet (unlike the lower tiers of industrial societies).

        We should also bear in mind that foragers being studied recently have usually been, by definition, living in marginal environments into which they've been forced by expanding agricultural and industrial societies. Things were probably even easier for them when they were free to occupy more fertile regions.

        I think their "contemplation of self and cosmos" simply isn't that obvious to us, not only because they lack writing and formal institutions, but because this "thinking" is intimately interwoven with their "mundane" activities. It's embedded in their foraging and hunting skills, their daily gossip and bickering, their leisure and festivals. With agricultural and industrial societies, subsistence work becomes divided from "contemplation", which is reserved for the leisured elite.

        I think it's the closeness of their contemplative and cultural activities to their basic subsistence activities that results in them having attitudes more aligned with the complexity and unpredictability of nature.

        • beachcomber

          Yep - I agree with most of what you say: perhaps I didn't articulate that combination of foraging/contemplation well enough. As a gardener I'm aware of the process.
          I'm also aware of the ±4-hour theory, but I think we must remember that the pre-contact Bushmen were constantly on the move, as natural resources in semi-arid desert are scarce. So it wasn't as if they were in one place for weeks on end where they foraged for about 4 hours and then spent the other 20 lolling around.
          Perhaps what I'm trying to get at is that because of this lifestyle they were constantly aware of the earth/cosmos but had no real need to intellectualize it; for them it simply was part of their being.
          And sadly, yes, their traditional lifestyle was dramatically changed by the movement of European and Bantu settlers into their foraging space.

  • ramesh rghuvanshi

    Considering that everything depends on genes is a fad. Every society spreads this kind of fad; people become intoxicated with it. A fad becomes fashion, and if we don't join in we may be isolated: if everybody is running, why shouldn't I? This kind of herd mentality spreads through a country or the world. People want change in their lives, some kind of sensation to overcome the boredom. It lasts a while, then fades slowly; wise men do not give much importance to this kind of fad. Unfortunately there are more fools in the world, herd mentality dominates, and con men take advantage of the credulous mob. There is a very beautiful Urdu proverb: "There is no scarcity of fools in this world; search for one and a thousand appear before you."

  • Jason Moore

    Great stuff! Must read for students.

  • In Tempore

    It's not so much uncertainty per se - it's that we want to conquer it, and seem to have the tools to do so, but can't. Moreover, it's irrational to try, since the inherently probabilistic nature of life is how we derive meaning. It's either nihilism or striving for what cannot be attained.

  • Flyers4n

    Excellent article!

  • Rosalind80

    Unfortunately, this article seems to be mostly the author's misconceptions about genetics being portrayed as what scientists believe, followed by an attempt to debunk those "misconceptions." Scientists actually do understand the complexity of this topic - many have devoted their lives to studying it. The article also seems to advance the erroneous idea that just because something is not 100% deterministic, we can't understand its fundamental workings or develop technology around it.

    • Gyrus

      Seemed pretty clear from the first few paragraphs that the author was talking about the general scientific discourse, science as understood through mass media - rather than science per se.

      That's not to say there's no overlap between the two, i.e. the "popular science" genre. Dawkins' The Selfish Gene is a pretty apposite case in point. Considering that, the author seems restrained for not having had a go at the legacy of this deeply suspect metaphor.

      • Rosalind80

        Perhaps, but I found the tone of the article to be rather annoying - essentially "look, everyone, it turns out that science is complicated!" The problem with implying that even scientists don't appreciate the complexity of their subject (for example, the "Even biologists..." line in the first paragraph) is that this perception can be used to argue against genetic research because scientists just don't know for sure what will happen, and therefore it's too dangerous! Scientists can of course be mistaken and hold incorrect beliefs, but peer review, regulation, and the iterative process of the scientific method are powerful balancing factors that keep us moving in the right direction.

        • Gyrus

          Even biologists, being quite human, too often think metaphorically and assign properties to genes that genes don’t have.

          To me that's implicitly acknowledging the sophistication of actual scientists, but acknowledging that even they are human and sometimes fall prey to false metaphors. Not quite how you're taking it I think. More a call for obviously necessary humility rather than a blanket put-down of science or scientists.

    • Gyrus

      Maybe a fairer point about developing technologies around non-deterministic phenomena. Heart transplants, bypasses, etc. were all developed pragmatically without a full understanding of that organ's non-linear dynamics. I wonder if the point with genetics is that pragmatic experimentation may be a much bigger and more complex can of worms, in terms of ethics and unforeseen consequences?

      • Ken

        To respond to the above series of comments, it is obviously true that we as a scientific community know that things are complex, and we know, in a sense, in what ways that is so. The problem to us is that the public, the media, and many scientists (by the way they act and talk, regardless of what they 'know') are promising miracles based on strong assumptions of genetic causation. We tried to be clear that when many genes contribute to a trait, and they all vary, and we don't have a good handle on the probabilities associated with most of them, and we know we're missing many that weren't able to generate statistically significant evidence, then individualized genomic prediction is being over-sold. There is disagreement, as these comments show, about what to do about it, or whether things are just fine as they are. We don't think the latter is the case, and we think a more sober view of the predictive power of individual genomics is in order, as is a more serious effort to think beyond enumeration. That is, of course, just our view, and clearly it's a minority view.
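        To make the many-small-genes point concrete, here is a toy Python simulation (every number in it - the count of variants, the effect sizes, the environmental noise - is invented for illustration, not real GWAS data). Even with a person's complete genetic score in hand, unmeasured environment caps how much of the trait can be predicted.

```python
import random

random.seed(1)

N_LOCI = 1000    # hypothetical number of small-effect variants
N_PEOPLE = 5000

# Each locus gets a tiny, random effect size (an illustrative assumption,
# not a real GWAS estimate).
effects = [random.gauss(0, 0.05) for _ in range(N_LOCI)]

def simulate_person():
    # Genotype: 0, 1, or 2 copies of each variant.
    genotype = [random.randint(0, 2) for _ in range(N_LOCI)]
    genetic_score = sum(g * e for g, e in zip(genotype, effects))
    environment = random.gauss(0, 2.0)  # unmeasured, non-genetic influences
    return genetic_score, genetic_score + environment

pairs = [simulate_person() for _ in range(N_PEOPLE)]
scores = [p[0] for p in pairs]
traits = [p[1] for p in pairs]

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

r = correlation(scores, traits)
print(f"variance in trait explained by full genetic score: {r**2:.2f}")
```

        With these made-up parameters, the complete genetic score typically explains only around a third of the trait's variance; shrink the effect sizes or grow the environmental noise and the ceiling drops further.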

  • cuzzin

    Interesting article, although I disagree slightly. From what I understand, the point of the article is that humans cannot use the genome as an effective way to predict anything but the most simple conclusions about our future selves, due to the near-infinite probabilities involved, not only from the environment but from our own complex genetic networks - thus concluding that because we can't predict our own future, we therefore have free will.

    My argument concerns your definition of probability, because if we take a molecular-reductionist viewpoint, probability does not exist in reality. Since the Big Bang, the course of all particles in the universe has been a result of cause and effect, which means that although we might see something as random, it is in fact pre-determined. The only way I can see humans as having free will is still through mysticism: if our consciousness/soul/will is not formed of matter from this universe, only then can we have the power to force particles out of their perpetual cycles of cause and effect.

    • ken

      I really have to disagree, at least in part. We're not physicists, but our understanding is that physicists (and this applies to colleagues here with whom we've talked about this) hold that some 'causes' are truly probabilistic (God does play dice, to contradict Einstein's quip). If that is so, then specific individual prediction is not possible--in principle. Alternatively, if many things merely behave as if they're probabilistic because our knowledge is too fragmentary at present, then we'd argue that in practice, if not in principle, things are unpredictable in the way we tried to discuss.

      Even if every cause were Newtonian billiard-ball in nature--and I think nobody, including philosophers, really claims to know what 'cause' means (Newton famously eschewed trying to explain it)--we simply have no prospect of enough information to identify, much less enumerate, each individual's countless (or nearly so!) causal elements, so our predictive power will remain probabilistic.

      There's another aspect of this. From an entropy point of view, an explanation needs to be more complicated than what it is trying to explain--you have to have more 'information' (lower entropy) in your quiver than the negentropy of what you are trying to account for. If, say, there are trillions of neuronal connections and googols of stochastic molecular interactions, and these are always changing, then on this view we simply cannot account for all of it accurately enough to make deterministic sense or predictions out of it. And every molecule is changing every microsecond, etc. So even if it were all purely mechanical and clockwork-like, free will and so on might be illusory, but not provably so.

      We hope we were not seen as arguing for a mystic interpretation of free will. If that is somehow the case, it is far beyond science to show it, and basically we haven't the evidence to suggest that things are other than material. On the other hand, nobody seriously claims to know what 'consciousness' is, and there is always the possibility (probability?) that some other 'force' or 'matter' (like, say, 'dark matter' or 'dark energy') also affects the causal elements.

      If I had to conclude with any bottom line, for me it would be that we need a better way to understand, and use, complexity than the Enlightenment-based, reductionistic, inductive way we approach things now. And a better understanding of what truly probabilistic causation could even mean.

      • cuzzin

        I feel your article debunks a kind of genetic "pre-determinism lite", which I agree with, but I'm more interested in the infinite cosmic form of pre-determinism. I'd be interested in a link to some more information on these truly probabilistic events, if that's okay (I'm no physicist either...). Thanks for the reply.

        • ken

          Well, cuzzin, I'm no physicist, but I think physicists have come to accept that we are not in a purely Newtonian world with deterministic causation in the billiard-ball sense. Probabilistic causation--very counter-intuitive to animals like us, for some reason--may be real. That would mean that, at the time of the Big Bang, our exchange here on Aeon would not have been predictable at all. In a purely classical, Newtonian world of causation, everything would have been predetermined, but not predictable--because no one could have known enough about everything to make the prediction: to know that much, you'd have to be greater than what you were trying to predict, but everything of the universe would have been _in_ the universe, a subset rather than a superset.

          Then, there is the multiverse theory, in which each of infinitely many universes is exactly predictable and deterministic, but since you don't know which universe you're in you can't do better than probabilities.

          This may not answer your question, but after all, I haven't got the kind of crystal ball that would be required. We simply can't do better than probabilistic prediction with a coin or dice, even though those may be truly deterministic phenomena if we could measure enough aspects of them.

          • cuzzin

            Putting prediction to one side...

            "I think physicists have come to accept that we are not in a purely
            Newtonian world with deterministic causation in the billiard-ball sense."

            How? On what evidence? If, as all current scientific evidence suggests, everything in the universe is material, I struggle to see how you can logically believe in free will.

          • Dan

            There is a huge amount of evidence pointing to a probabilistic universe, at the very least on the quantum level. Ask Schrödinger's cat...


          • cuzzin

            I don't see how this is relevant. If you were to literally carry out the experiment, the radioactive decay would just be a function of cosmic cause and effect. Whether we observe the cat as dead or alive is irrelevant to the fact that the cat's fate was pre-determined in the first place.

          • Dan

            No, that's not the case. Current physics indicates that, even with perfect information about the starting state of a system, some facts about particles cannot be known beyond a percentage chance. That's what both Ken and I are trying to point out. Both the Copenhagen and many-worlds interpretations of quantum physics (the two most commonly accepted among physicists) indicate that there are predictions that can only be made in probabilistic terms, again even with absolutely perfect information about the starting state of the system.
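            A minimal Python sketch of that point (the prepared state and angle are invented for illustration; only the Born rule itself is standard physics): we know the starting state exactly, yet individual outcomes can only be predicted as frequencies.

```python
import math
import random

random.seed(0)

# Prepare the *same*, perfectly known qubit state every run:
# cos(theta)|0> + sin(theta)|1>  (theta chosen arbitrarily for illustration).
theta = math.pi / 6
p_zero = math.cos(theta) ** 2   # Born rule: probability of measuring |0>

def measure():
    # Complete knowledge of the state still yields only a probability.
    return 0 if random.random() < p_zero else 1

outcomes = [measure() for _ in range(100_000)]
frequency_zero = outcomes.count(0) / len(outcomes)

print(f"Born-rule prediction for |0>: {p_zero:.3f}")
print(f"Observed frequency of |0>:    {frequency_zero:.3f}")
```

            The ensemble frequency converges on the Born-rule value, but no amount of extra information about the prepared state tells you what any single measurement will give.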

  • Matt Baen

    In addition, most genetic evolution - whether competition, mutualism, or parasitism - is about forces at the genomic level, not the phenotypic level. Evolution driven by the phenotype accounts for only a minority of genetic change, whether or not those genes are neutral! Meiotic drive, selfish DNA, etc. involve no phenotype at all. We can therefore expect much genetic variation to have negligible effect on function and selection, even if the trait difference is measurable or even obvious.