Tuesday, September 29, 2009

Come Together … Right Now … Over Teeth

I dare say that all paleontologists, myself included, drool over papers like this one recently published in PLoS Genetics, because, as the authors write, it “provides manifest evidence for the predictive power of Darwin’s theory.”

Instead of looking at the genes behind new or modified existing traits, the authors looked at the genes for traits that have disappeared. Most lost or vanishing traits that come to mind are composed of soft tissues that do not preserve well in the fossil record (e.g. cave fish eyes), but thanks to the steadfast properties of enamel, tooth loss and enamel loss can be examined both genetically and in the fossil record. That means the evolutionary scenarios for enamel loss and tooth loss can be rendered in much higher resolution, if you will, than those for soft tissue traits.

Tubulidentata (aardvarks), Pholidota (pangolins), Cetacea (whales, porpoises, and dolphins), and Xenarthra (armadillos, sloths and anteaters) are four groups of mammals with toothless and/or enamelless taxa. They also have pretty decent fossil records, especially when it comes to teeth. What’s more, mutations in mammalian genes involved in enamel formation (e.g. ENAM, the gene for enamelin) are known to – wait for it – cause defects in enamel.

This is a perfect opportunity to bring fossils and DNA together.

Do the mammals in question show degeneration at ENAM, such as pseudogenization? Yes, of various kinds.

Does the nature of those changes support the phylogenetic hypotheses made by comparative anatomy and the fossil record? Yes, mostly (e.g. enamel may have been lost independently in different armadillo lineages).
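For a feel for what that kind of molecular decay looks like, here is a minimal sketch (in Python) of how one might flag two classic signatures of pseudogenization in a coding sequence: frameshift-causing indels and premature stop codons. The function and the toy sequences are made up for illustration; they are not taken from the ENAM paper or its data.

# Minimal sketch: flag two hallmarks of pseudogene decay in a coding sequence,
# premature stop codons and frameshift-causing indels. Toy sequences only;
# these are not actual ENAM data.

STOP_CODONS = {"TAA", "TAG", "TGA"}

def decay_signatures(cds):
    """Report frameshifts (length not a multiple of 3) and premature stops."""
    signatures = []
    if len(cds) % 3 != 0:
        signatures.append("frameshift: length %d is not a multiple of 3" % len(cds))
    codons = [cds[i:i + 3] for i in range(0, len(cds) - len(cds) % 3, 3)]
    for pos, codon in enumerate(codons[:-1]):      # ignore the normal terminal stop
        if codon in STOP_CODONS:
            signatures.append("premature stop codon %s at codon %d" % (codon, pos + 1))
    return signatures

intact  = "ATGGCTGCAGGTCCTTTGAAGTAA"   # toy 'functional' sequence
decayed = "ATGGCTTAAGGTCCTTTGAAGTA"    # toy sequence with a premature stop and an indel

print("intact: ", decay_signatures(intact))    # -> []
print("decayed:", decay_signatures(decayed))   # -> frameshift + premature stop at codon 3

The actual analysis in the paper is, of course, far more involved than this toy check.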

See for yourself here:

Molecular Decay of the Tooth Gene Enamelin (ENAM) Mirrors the Loss of Enamel in the Fossil Record of Placental Mammals

Sunday, September 27, 2009

Adam’s rib and the sanctity of Knowledge

Anne and Ken are still away, but they’ll be back very soon. When they are, I hope they’ll write about this new Nature article.

In the meantime…

********

Hi. My name is Holly Dunsworth. I’m a professor. And I thought that men had fewer ribs than women until halfway through college.

HI HOLLY!


Fantasizing about group therapy sessions may be blowing things out of proportion. But I wonder if there are any serious ramifications to learning the story of Adam’s rib from the Book of Genesis.

What’s the big deal about thinking men have fewer ribs than women?

I guess I can think of a few medical situations that could go horribly wrong if the doctor miscounted the ribs - mainly if she was using a particular rib to guide an incision or to locate an organ. But doctors learn basic anatomy (e.g. that men and women have the same number of ribs) before they are allowed near patients. I’m guessing that a disastrous rib incident is more likely to be found in the pages of The Onion than in The Times.

However, from a larger philosophical perspective, I think it is a big deal.

First of all, if you think the little story of Adam’s rib has gone away, you’re going to be surprised when you click here and here.

The question is definitely out there and most answers are truthful, but many perpetuate the myth, some quite elaborately.

The story of Adam’s rib symbolizes a puzzling phenomenon in America. We systematically teach myth-information, and then, later, we may or may not replace it with long-known facts. I wish I could say this was only something we do to children, but it’s also one of pop culture’s favorite ways to “educate” adults.

For example, The History Channel - which is widely assumed to be educational - bookends shows about Lincoln’s assassination with ones on Nostradamus and the Loch Ness monster. Even when these shows include the skeptical side of the story, they still validate the pseudo-scientific point-of-view, as if each holds equal footing in some sort of “debate”.

We love to spread myths that have been overturned by facts. What ridiculous behavior for any species, let alone for upright, intelligent apes who should know better! For thousands of years we have been systematically replacing fiction with fact, unknown with known, fake with real. This is a triumph of humanity. It’s a feat that the perpetuators of Adam’s rib don’t appreciate.

I’m not saying that myths and folk knowledge are unimportant. They’re culturally relevant and crucial to our history. They are sources of beautiful songs, poetry, and literature. We are a story-telling species. But when those myths can be made into scientific hypotheses, which can be tested, and when they’re tested, and when they fail to be supported by the evidence… that knowledge should be replaced by the new knowledge that better explains the world around us.

There is no room for debate on this. Whenever they are discovered, real facts should replace false ideas.

What’s more, we shouldn’t expect each human to relive that entire process of discovery and falsification throughout her lifetime. Each new person deserves to start life standing on the shoulders of her predecessors so that she can leap off and fly above and beyond them.

We’ve known for a while now that developing embryos and fetuses don’t go through all the developmental stages of their ancestors before they’re born. So since somatic ontogeny doesn’t recapitulate phylogeny, why must we insist that intellectual ontogeny does?

Sure, you can’t learn calculus without first mastering arithmetic and then algebra, but no one’s asking kindergartners to design their own numbering system before they get started. Scientists and medical professionals may need to explore human anatomy first-hand, as if it were unknown territory; everyone else, including my elementary school self, deserves to learn the truth about their bodies second-hand, down to something as simple as rib number.

To me, the story of Adam’s rib is illustrative of the rising anti-intellectual epidemic which causes everyone, not just the afflicted, to suffer. That we continue to teach myth-information to children today, after we have accumulated so much Knowledge, is largely because of the disregard and negligence bred by anti-intellectualism.

Although it may be tempting to blame anti-intellectualism on religion, we should not boil it down to that, especially since so much Knowledge throughout history has been, and continues to be, accumulated by religious folks and institutions. Recently Gregory Rodriguez, citing Alexis de Tocqueville, argued in the L.A. Times that our shared American sense of equality is at fault.

Knowledge is not a democracy; it is a meritocracy. Good ideas hold. Bad and outdated ideas are kicked out. And all ideas, bad and good, beget exponentially more of them, so as the information available at our fingertips balloons, we have to be even more careful with Knowledge.

Yesterday a student asked me something that I didn’t know the answer to, so I said “I think it’s probably [a, b, and c], but you’d have to check [x, y, and z]."

To that she replied, “You could have just said [a, b, and c] like it was the answer and I’d have believed you.”

I told her that was nice but that I lost my right to bulls_ _ _ when I became a professor, that now I can only speak in facts and hypotheses and I have to be perfectly honest when I don’t know something.

One thing I know for sure, however, is that men and women have the same number of ribs.


- Holly Dunsworth, guest blogger

Tuesday, September 22, 2009

My brain is no bigger than a caveman’s

Many people, including myself, consider Richard Dawkins to be well above average when it comes to intelligence.


So is the size of his brain above average too?


Not necessarily.


Nobody has discovered a way to use a person’s intelligence to predict their brain size and vice versa.


What’s more, all of our brains are no larger on average than those of half-million-year-old “archaic” humans - the kind of people who hunkered down in caves to rest between hunting expeditions or to hide from hungry saber-toothed cats.


In spite of these issues - in spite of it being impossible to use someone’s head size to predict their IQ score or even to predict whether they are simply above or below average in the intelligence department – so many of us mistakenly cling to the notion that “smart” people have bigger brains than “stupid” people.


It’s partly evolution’s fault.


Evidence from the fossil record and from the comparative anatomy of living species makes this clear. Along with body size, an increase in brain size is a common trend in many evolutionary lineages. Over the last two million years our lineage experienced an extreme increase in brain size. And although it has not been as pronounced in other lineages, encephalization has also occurred in apes, monkeys, elephants, whales, carnivores, birds, cephalopods, etc. Mother Nature certainly likes big brains, but she would never ramp up the growth of something so metabolically and developmentally expensive if there weren’t a payoff. We assume this has something to do with brain function, or intelligence.
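As a rough illustration of how that trend is usually quantified (a back-of-the-envelope sketch, not anything from the studies discussed here), researchers often use the encephalization quotient: observed brain mass divided by the brain mass expected for a mammal of that body size. The 0.12 * body_mass**(2/3) expectation and the example masses below are approximate textbook values.

# Rough sketch of Jerison-style encephalization quotients (EQ). The expected
# brain mass for a mammal of a given body mass is approximated by
# 0.12 * body_mass**(2/3) (masses in grams); all numbers are illustrative.

def encephalization_quotient(brain_g, body_g):
    expected_brain_g = 0.12 * body_g ** (2.0 / 3.0)
    return brain_g / expected_brain_g

examples = {                              # (brain mass g, body mass g), approximate
    "human":              (1350,  65_000),
    "chimpanzee":         ( 400,  45_000),
    "bottlenose dolphin": (1600, 160_000),
}

for species, (brain, body) in examples.items():
    print(f"{species:>20}: EQ ~ {encephalization_quotient(brain, body):.1f}")

By this crude measure humans come out far above the mammalian expectation for our body size, which is the across-species pattern described above - and which, as the rest of this post argues, tells us nothing about differences among individual modern humans.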


Even those who know little or nothing about evolution (or deny it happens altogether) can make the connection. After all, our brain is where our intelligence lives and our brains are conspicuously large. We can do all sorts of wonderful things that other animals cannot, so of course our large brains play a role in that.


It is hard not to apply this logic to the variation that we see within our species. But we shouldn’t.


And neither should Richard Dawkins as seen in this recent interview…[start at minute 4]

What’s the big deal? What’s wrong with what he said? It sounds pretty reasonable. Aren’t I just reacting too sensitively to his use of the fact that less educated people have more children than highly educated ones? They do. He’s right. We shouldn't have to be politically correct about facts. I sound like a knee-jerk liberal. Okay okay.


The point here is not to bark about Dawkins potentially misspeaking. He may wish he had said things differently here, and Darwin knows that I wish that very thing after most teaching bouts. It’s just that his hypothetical future evolution scenario was supposed to clarify evolution for the public, but it only raised questions and further supported racist beliefs. Ambassadors of Evolution should be more careful.


Looking around the animal kingdom, it is clear that brain size is correlated with intelligence. Those animals with big brains are the most intelligent. Our common mistake lies in applying that observation to modern humans, to explain our current variation in brain size and intelligence.


Whatever drove human brains to achieve modern size about 500,000 years ago is something that unites us all. This is true regardless of our current variation. And this was a type of intelligence, which we all carry with us, that laid the groundwork for all the cognitive and cultural development that has occurred since.


So the development of art, farming, calculus, plastics, microchips, neurosurgery, crossword puzzles, etc… all that stuff (all of which is a big part of intelligence estimations and measures) has nothing to do with why our brains got big in the first place.


Of course intelligence varies between people. But if brain size and intelligence were linked in our species, wouldn’t we be able to spot an intelligent person just by looking at the size of their head? Wouldn’t NASA and Harvard measure heads just to keep their applicant pools in check? Wouldn’t women give up trying to compete with men, who have bigger brains than we do? Wouldn’t people who wear small hats give up their Jeopardy! or architecture school ambitions or just never dream them up in the first place? Most of us already know, whether we realize it or not, that brain size and intelligence are not linked anymore in the hominin lineage.


Once we get past that, then we can ask a couple of really interesting questions.


What did Mother Nature find so fascinating about our brains 500,000 years ago? What kind of function was our brain providing in the middle Pleistocene that required it to be so big back then?


Stone tool technology - one of the few things preserved from this time period - steadily advanced and became more and more elaborate and complex during this phase of our evolution. So invention and technological intelligence, which go along with physical intelligence like manual dexterity, are one good explanation for our encephalization. Another strong hypothesis suggests that social networking was so utterly important to our survival and reproduction that only with a larger cortex could a person maneuver and compete within a large society full of other intelligent creatures. Political games, power struggles, relationship forming, relationship maintenance, and resource acquisition (e.g. cooperative foraging and hunting) may have all relied on a big social brain. Language was another likely brain size booster.


Given that the trend for increasing brain size began 2 million years ago and lasted for about 1.5 million years, why did it just stop in the Middle Pleistocene?


Maybe we had all the brain we needed. Look how far we’ve come with caveman-sized brains!


It’s also possible that this is as big as it gets: Metabolic and developmental constraints may prevent our brains from getting any bigger.


Okay, so Dawkins jumbled up the story of human brain size evolution and intelligence. Still, what’s the big deal? What part of it flirts with racist beliefs out there? If you look back into the history of science and pseudo-science, there is a long tradition of measuring heads in different human populations and a long history, which continues today, of concluding that some “races” have smaller brains than others. There is also a long-held belief that some races (which are historically grouped by geographic origin, skin color, language, and other cultural traits) are more intelligent than others. In both brain size and intelligence, guess who gets ranked lowest most often in these "studies"? Africans and people with African ancestry. By that logic, to determine whether someone is intelligent, we would take into account skin color, nose shape, eye shape, and cultural factors like language and body adornment. Dawkins's misstep adds unfortunate credence to this pseudo-scientific nonsense.


One big confounder is that we know very little about what causes variation in human intelligence. We know that intelligence is not determined by genes alone, but that genes do play a role since they build the brain. Natural Selection could act on these genes and drive evolution, as Dawkins said. However, intelligence is much more than “good genes”. Something as simple (but often so hard to obtain in a crowded world) as good nutrition contributes greatly to neurological and cognitive development. Environment is another key factor. Stimulation, practice, learning, and discovery, along with a healthy diet, help children become mental gymnasts who can grow up to qualify for the intellectual Olympics. It seems to me that if we put the need for nutrition and education programs in terms like, “Granting all Americans the opportunity to be intelligent citizens,” there may be more taxpayer support.


As Ambassadors of Evolution, it is our duty to clarify for others what we know and what we don’t know about the evolution of the human brain and intelligence. Even more than the “aquatic ape” hypothesis, the evolution of intelligence is consistently the most popular topic in public and classroom discussions of human evolution, and it is also the most dangerous given our sordid history. Still, having this discussion holds the potential to improve the human experience.

- Holly Dunsworth, guest blogger

Further Reading:

Race is a Four-Letter Word: The genesis of the concept by C. Loring Brace (2005)


Saturday, September 19, 2009

T. wrecks T.rex

A new fossil discovery has hit the press, as reported in Science online and many other places: a 150-million-year-old, HO-gauge Tyrannosaurus rex. This 'small' (human-sized) creature looks so much like the famous Hollywood star (and actual fossil) that it has caught paleontologists by surprise. It is making good news fodder (and will surely be the basis of a Hollywood sequel).

The new find from northeastern China, named Raptorex kriegsteini, is about 35 Myr older than the famous movie star that draws countless tourists to natural history museums, and much, much smaller, perhaps 150 pounds, but with the same short arms, large head and jaw, and long tail. The prevailing theory of the selective forces that molded the giant version of this animal - the big-tailed, huge-thighed, tiny-armed, upright, vicious predator - was that to be as big as an American car it had to have this shape.

The powerful carnivore, so the story went, evolved from a very different ancestor, developing its oddities because it became so large. It needed a large head, powerful jaws and sharp teeth to capture its prey, and strong legs for running, but, it was said, with long forelimbs, it would have been top-heavy and its running ability hampered, so its arms shrank into the short limbs we all readily identify with T. rex.

Paleontologists can discuss the morphological aspects of this fossil in a meaningful way that we can't, but the relevant issue for us, and this blog, is the nature of explanations in evolutionary science, and the confidence with which we tend to hawk them to the public and, probably worse, to ourselves.

Clearly, this new miniature replica shows that the original explanation was overstated if not completely wrong. It may be that the same shape considerations would have worked on a smaller scale, if T. wrecks was subsisting in such a relatively miniaturized environment, in the way that toy trains work just like real trains, but on small tracks. But what would the proof of that small-scale argument be, given the clearly false original argument?

It is easier to reconstruct evolutionary history than evolutionary scenarios. The former describes the temporal biogeography of past life, based on actual evidence. Incomplete though it be, the evidence is at least a partial picture of things from the actual past. The problem of reconstructing evolutionary scenarios is that here we are trying to bring specific processes and events back to life. They can't be observed directly.

It is perfectly natural for us to try to explain form in terms of our ideas about process, and of course this is done in a (usually rigid) Darwinian way, in terms of the selective forces that 'must' have been operating. But when one new find can undermine or overturn such scenarios, how confident can we be in the ones that haven't been overturned (yet)?

The same comments probably apply widely in paleontology, even to species so utterly boring or remote that they have no cinematic interest. It certainly has been a plague in anthropology, where finds like a new finger-bone are touted as revolutionizing our understanding of human evolution. When one new knuckle can do that, we should knuckle down and keep our interpretations well within the sanity zone, but that seems difficult to do, given the hunger of the media (as well as attention-hungry scientific journals like Nature).

It's for reasons like these that evolutionary biology has long, and often rightly, been accused of conjuring up "Just-so" stories, the phrase from Rudyard Kipling's children's tales. We explain our empirical findings with stories of processes and events that we are inventing, or guessing at. What we are actually saying is not what did happen, but that what we see appears as if such-and-such was going on. In turn, that provides plenty of fodder for those who use our tall tales to argue that, to the contrary, evolution didn't happen at all.

Making up stories and calling it Science is only good at the box office. While we can and should try to guesstimate the way life was in the past, we need to be a lot more modest in doing so. How many times does a T.rex have to be ruined by a T.wrecks before that lesson is learned? Or when will funders penalize over-zealous claims, to bring science back into a less hubristic mode of operation?

Probably, this won't happen as long as there are television, movies, authors, and journals who have something to sell. And that's our "Will-be-so" story of the day.

Friday, September 18, 2009

On the road again

Ken and I will soon be on the road for 10 days or so, back Oct 1. Ken will be giving two talks, one at the meeting of the Italian Federation of the Life Sciences in Riva del Garda, Italy, that he's calling "Genetic causation: a Fermi problem", and the other for the Department of Genetics, Universitat Pompeu Fabra in Barcelona, with the title "Darwin's 'most imperfect' sketch: how does it look after 150 years?" We don't know how much we'll be able to blog while we're away, but we hope to at least be able to check in from time to time, at least with details about the meeting.

But, we won't be going completely dark. We're pleased that a guest blogger has agreed to fill in for us. Holly Dunsworth is a paleontologist who graduated from our department a few years ago as a student of Alan Walker. She did a few years of post-doctoral work in genetics, and in primate paleontology, and is now teaching at Northeastern Illinois University in Chicago. She has worked in Kenya for a number of years on fossil apes and humans that are well-preserved there. You may have heard her on NPR's This I Believe last year, talking about evolution as the foundation of her work. We're very happy that she has agreed to tend our blog while we're away, and we look forward to reading her posts.

Evolutionary significance of color vision gene therapy

An aspect of the color blindness story that we didn't mention in our earlier post, but should have, is the evolutionary implication. Like humans, the monkeys in the experiment have three types of cone cells: L cells, sensitive to light of long wavelengths; M cells, sensitive to medium wavelengths; and S cells, sensitive to short (blue) wavelengths. Color blindness occurs in primates missing the L- or M-sensitive photopigment. In this experiment, the investigators introduced, into animals that were red-green color blind, a photopigment gene that was expressed preferentially in M cone cells.

We quote here from the paper (Gene therapy for red-green colour blindness in adult primates, Mancuso et al., Nature, advance online publication 16 September 2009).

Classic experiments in which visual deprivation of one eye during development caused permanent vision loss led to the idea that inputs must be present during development for the formation of circuits to process them. From the clear change in behaviour associated with treatment, compared both between and within subjects, we conclude that adult monkeys gained new colour vision capacities because of gene therapy. These startling empirical results provide insight into the evolutionary question of what changes in the visual system are required for adding a new dimension of colour vision. Previously, it seemed possible that a transformation from dichromacy to trichromacy [from seeing 2 colors to seeing 3, which, in combination, allows us to see the full spectrum of color that we do] would require evolutionary/developmental changes, in addition to acquiring a third cone type. For example, L- and M-opsin-specific genetic regulatory elements might have been required to direct the opsins into distinct cone types that would be recognized by L- and M-cone-specific retinal circuitry, and to account for cortical processing, multi-stage circuitry might have evolved specifically for the purpose of trichromacy. However, our results demonstrate that trichromatic colour vision behaviour requires nothing more than a third cone type. As an alternative to the idea that the new dimension of colour vision arose by acquisition of a new L versus M pathway, it is possible that it exploited the pre-existing blue-yellow circuitry. For example, if the addition of the third cone class split the formerly S versus M receptive fields into two types with differing spectral sensitivities, this would obviate the need for neural rewiring as part of the process of adopting new colour vision.

So, in these monkeys, M cone cells that were previously not sending signal to the brain began to do so after a functional photopigment gene was introduced and activated in the retina. Apparently no new brain circuitry was required for these monkeys to begin seeing color, because they began to do so at the same time that high levels of the expressed transgene were detectable. Thus, the investigators suggest this experiment is a reprise of the evolution of color vision, and that it didn't require new cortical function or circuitry but only the addition of a third cone type.

The latter conjecture is worth thinking about, but the basic color vision system is much older, and many studies have been done on the gene arrangement, spectral sensitivity, and adaptive aspects of the system. It is not a simple evolution, much less a story of novel progress from simple to complex, not even in primates. But if it can help us understand how eye-to-brain wiring and perception works, it will be a step forward.

Thursday, September 17, 2009

Color blindness a disease?

Two color blind adult monkeys have been cured of their 'disease' with gene therapy, as described in an online Nature story. The story is picked up by the BBC and Science Daily, among others. One of the investigators is quoted in Science Daily as follows,
"We've added red sensitivity to cone cells in animals that are born with a condition that is exactly like human color blindness," said William W. Hauswirth, Ph.D., a professor of ophthalmic molecular genetics at the UF College of Medicine and a member of the UF Genetics Institute and the Powell Gene Therapy Center. "Although color blindness is only moderately life-altering, we've shown we can cure a cone disease in a primate, and that it can be done very safely. That's extremely encouraging for the development of therapies for human cone diseases that really are blinding."
The researchers introduced genes that produce a protein called long-wavelength opsin into the monkeys' retinal cells via an adeno-associated virus (AAV) delivery system. The virus gets into the cell and the opsin gene it has been engineered to contain is expressed as a protein, which the cell then processes properly enough that it responds to the right wavelengths of light. Apparently the monkeys began to see color for the first time about 20 weeks after the injection of the genes. Here's a video showing how the monkeys' newfound ability to see color was tested.

Color blindness is a nuisance, though perhaps a non-trivial one when it comes to seeing traffic lights or other alerts, but many people live very successful lives without being able to see red or green. But, if gene therapy can cure other forms of blindness associated with cone cells, as Dr Hauswirth suggests, this is good news indeed.

However, the story in Science Daily goes on to say that

[t]he finding is ... likely to intrigue millions of people around the world who are colorblind, including about 3.5 million people in the United States, more than 13 million in India and more than 16 million in China. The problem mostly affects men, leaving about 8 percent of Caucasian men in the United States incapable of discerning red and green hues that are important for everyday things like recognizing traffic lights.

The reason that more men than women are color blind is that these color-sensing genes are on the X chromosome, and males only have one X, so that if they have a defective gene, all their cone cells will bear the defect. Women have two X's (that's what makes them female), and though each retinal cell randomly picks only one of the two to use, a woman carrying a mutant opsin gene will have roughly half her retinal cells using the normal, functioning gene (it's rare for both copies to carry a mutation, since the mutations are fairly uncommon).
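To put rough numbers on that X-linked pattern, here is a back-of-the-envelope sketch under simple Hardy-Weinberg assumptions. Because an affected man carries only one X, the 8 percent figure quoted above is roughly the frequency of the defective allele itself; all of the numbers are illustrative, not taken from any particular study.

# Back-of-the-envelope sketch of why X-linkage makes red-green color blindness
# far more common in men, under simple Hardy-Weinberg assumptions.
# q is the assumed frequency of a defective red/green opsin allele on the X.

q = 0.08

affected_males   = q                  # one X: a single defective allele is enough
affected_females = q ** 2             # two X's: both copies must be defective
carrier_females  = 2 * q * (1 - q)    # one working copy covers half the cone cells

print(f"affected males:   {affected_males:.1%}")    # ~8.0%
print(f"affected females: {affected_females:.2%}")  # ~0.64%
print(f"carrier females:  {carrier_females:.1%}")   # ~14.7%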

But, color blindness is part of the natural spectrum of color-sensitivity variation. It isn't a 'disease', and in a tight-budgeted time for health care, suggesting that millions of people might be interested in a 'cure' might seem an almost cynical disregard of priorities. Given that the number of people who are color blind is much greater than the number of people with true blindness, are the researchers emphasizing color blindness in an attempt to interest pharmaceutical companies in this new technology, or has this slant been introduced by the journalists? Hard to tell. This could be another Viagra, a blockbuster cure for something that hasn't needed pharmacologicals before now. If this is what happens, it would show (or confirm) how utterly socially irresponsible the pharmas are, and the governments who allow this to be done.

On the other hand, if the technique works and could be used to treat real genetic retinal disease, it will be a boon to society and an excellent example of the fact that, for things that truly are genetic and relatively simple, genetic engineering should work. Curing any genetic form of blindness would be a major advance.

Tuesday, September 15, 2009

Out of the running?

As the whole world knows, the young female South African middle-distance runner, Caster Semenya, who has been outracing female peers from all over the world, has now been found to be "technically a hermaphrodite." She has both male and female sex organs, and the question of whether she will be allowed to continue to compete with women is sending athletic ruling bodies into a frenzy.

"This is a medical issue and not a doping issue where she was deliberately cheating," IAAF [International Association of Athletics Federations] spokesman Nick Davies was quoted as saying.

"These tests do not suggest any suspicion of deliberate misconduct but seek to assess the possibility of a potential medical condition which would give Semenya an unfair advantage over her competitors. There is no automatic disqualification of results in a case like this."

So, it's a question of 'unfair advantage', not cheating--and would be so even if she had known of her unusual state before the race in Berlin that caused such an uproar. Apparently Ms Semenya runs faster than other fast women because she makes more testosterone. But what's 'unfair' about it?

The question assumes that all women have an equal chance of becoming world-class runners. But world-class athletes are an elite group, presumably not only because of how hard they train, but at least in part because of the genetic make-up of their muscles and how they work. Even Ms Semenya's competitors run faster than women with shorter legs, or heavier builds, or muscles that don't fire as quickly, or less efficient oxygen usage, and their advantages are not considered to be unfair. Ms Semenya just happens to be the elite of the elite.

Copy number variation, in which we each have different numbers of copies of parts of our genome, occurs in all of us. Some CNVs are associated with disease, or with traits like the sensitivity of our sense of smell, but most have no known function as yet. Suppose, though, that someone comes along who has, say, 3 rather than 2 copies of the erythropoietin gene, which helps make the red blood cells that carry oxygen and hence relate to endurance. Is that natural CNV any different, in terms of 'unfair advantage', than Epo doping, which is currently very much against the rules of sport?

Certainly as the relevant genotypes become known, there will be designer babies and they will be at a metabolic advantage over the normal population, for whatever reason. They will be so not because of doping but because of their inherited genotype. Of course, modifying the genotype for in vitro fertilization could be viewed as a kind of gene doping if it's designed to alter the future individual in some relevant direction, but in fact the intelligently designed would be functionally no different from someone who has the same genotype just by luck of the draw.

Much may have to do with the degree to which different genotypes actually improve performance, relative to the modifiable factors that currently affect who stands on the Olympic podium while their national anthem is played. If genotypic effects seem to be minimal relative to diet, weight training, skill training, practice, and wearing the best Adidas shoes, then there will be no big issues. After all, we are all genetically different already.

If it turns out that genotype can make a difference, naturally or because training can be tailored to the competitor's genotype, then we may reach a decision point. We could create new genotype-based competition subdivisions, much as there are weight classes in wrestling, or sex classes in most sports. Or, we could just decide that each athlete plays the cards dealt, and some have an advantage. Only those lucky ones will show up at the Olympics.

What counts as fair and what as cheating is culture-laden, potentially contentious, and not always an easy call. But whatever the definition, we can be sure that lots of people will try to use it to their advantage, and they will still be the ones breaking the tape.

Sunday, September 13, 2009

The importance of cooperation in life: second installment

Back in June, we posted what we called a first installment on the importance of cooperation in life. Distracted by other issues and subjects, we never got around to posting a second installment. Here it is, and rather timely at that, as cooperation seems to be having its day, with entomologists and primatologists, sociologists and political scientists, among many others, now addressing this often slighted aspect of life. We, too, consider cooperation to be fundamental, even describing it as a principle of life in our book.

An essay in a recent Science by science writer Elizabeth Pennisi takes up the subject (On the origin of cooperation, Science, 4 September 2009: Vol. 325. no. 5945, pp. 1196 - 1199). Pennisi reminds us that Charles Darwin was perplexed by the existence of altruism--why would an individual help another at cost to him or herself?
Cooperation has created a conundrum for generations of evolutionary scientists. If natural selection among individuals favors the survival of the fittest, why would one individual help another at a cost to itself? Charles Darwin himself noted the difficulty of explaining why a worker bee would labor for the good of the colony, because its efforts do not lead to its own reproduction. The social insects are "one special difficulty, which first appeared to me insuperable, and actually fatal to my theory," he wrote in On the Origin of Species.
And, biologists have been perplexed by this ever since, because it doesn't fit easily within the prevailing evolutionary framework.
And yet, [Pennisi continues] cooperation and sacrifice are rampant in nature. Humans working together have transformed the planet to meet the needs of billions of people. Countless examples of cooperation exist between species: Cleaner fish pick parasites off larger fish, and nitrogen-fixing bacteria team up with plants, to name just a few.
The usual discussions about cooperation, as above, are about social cooperation, among individuals in a population. Widespread as such examples are, they don't even hint at the extent of the cooperative nature of life, which is true at all levels, as our book is largely about. Genes cooperate with other genes, organelles with each other inside cells, receptors on and in cells cooperate with their ligands, cells with cells, tissues with tissues and organs with organs. Organisms cooperate with others of their own species (sexual reproduction being the quintessence of co-operation), and members of different species with each other. So, if cooperation is so all-pervasive, why has it been so consistently overlooked in favor of competition and selfishness?

The word cooperation may be denigrated from a fundamentalist Darwinian point of view as soft-headed goody-goody thinking. 'Cooperation' is indeed a culturally loaded word. But it is no more so than 'competition'! A 'selfish' gene is not competing in the same aware sense that a marathon runner is. Neither are two molecules aware of cooperating in the way members of a soccer team are.

We mean co-operation literally, that is, operating at the same time and place and in appropriate amounts and ways. That includes social cooperation. It might be better to call this 'interaction', as another way to stress that the elements of life don't act alone. But we want an antidote to the very loaded term 'competition', until the mainstream of biology changes that term to something like, say, 'differential proliferation'.

We aren't the first to point out that the idea that life is all about competition fits neatly with the history and politics of the culture within which evolutionary theory developed and grew. Historiographic context analysis is often written as if it shows the falseness of the idea being discussed. That doesn't necessarily follow, but it does seem correct that the words used and the approaches taken reflect social context whenever human affairs are the subject. In this case, our contention is that a cultural obsession with individual-based competition, which was shared by Darwin (but less so by Wallace), affects what we see and focus on and how we interpret it.

The focus on competition is one of the consequences of viewing life on the compressed evolutionary scale. Darwin himself understood that it was impossible for us to understand the immensity of time over which life evolved. Thinking in evolutionary terms makes it easy to forget that life is actually lived from moment to moment, and needs to be understood on that scale as well. When seen at the level of daily life, cooperation is omnipresent, and far more important than competition.

Nor are we the first to point out that even when cooperation is undeniable, it's often quickly redefined as competition--people are only altruistic because they get something out of it, or to help their kin, and so on. Why people help non-kin is easy to explain when you acknowledge the role of culture in what we do. If you filter everything through a strictly Darwinian lens, where reproductive fitness is the ultimate measure of success, and we're all in competition with each other, driven by natural selection, it is indeed impossible to understand why people would jump off a bridge to save a drowning stranger, or choose to limit their number of offspring (even if you explain this with r and K strategies, this only kicks the question back a step), or invent the concept of socialism, or, the ultimate inexplicable action in Darwinian terms, detonate a suicide belt in the service of religious conviction.

But, if you allow that culture can drive what we do, in perhaps biologically inexplicable ways, and not simply our sex drive, or the fact that helping our cousin favors some of our own genes, these actions don't then have to be explained in terms of competition or survival of the fittest or optimal energy expenditure or whatever. A Darwinian purist's post hoc explanations are, for example, to invoke 'reciprocal altruism'. In the moment of truth before you swim to the drowning stranger's aid, somewhere deeply in your reptilian brain is the little message "Do it, because if they survive they may save you some day!"

Baloney! One of us has had this exact experience, and there was no little Devil on the shoulder whispering in the ear.

The fundamentalist view of social behavior rests on the important, automatic, but erroneous assumptions that cooperation always involves a cost of sufficient magnitude to be detected by selection, that the act has to be seen in terms of selection, and, latently, that the mechanism must be related to altruism itself. That's an industrial-age argument for 'efficiency' as the Law of Life, the kind that justifies harshness towards workers in manufacturing companies.

We suggest to the contrary that cooperation is so fundamental to life that it needs to be accepted on its own terms, and need not even be specifically 'programmed' (or such program specifically reinforced by selection). If anything, for many species the cost is for not cooperating, and translating this into Darwinian terms only distracts from what is important.

What we see and how we view it have implications for what we don't do or don't see in science, even if the latter is there unmistakably. We think that, regardless of the aspects of differential proliferation that were involved, the nature and extent of cooperation in its many forms is of equal grandeur to anything Darwin ever remarked on, and would be a healthy thing for biology to concentrate on.

Friday, September 11, 2009

Apology to Turing rightly done

Thanks to R Weiss for alerting us to this follow-up to our post of 8/20 about the petition drive in the UK asking the government to issue an apology to Alan Turing for the persecution he suffered because he was homosexual. We said on 8/20 that we agreed that Turing deserved an apology, but not any more so than all others who were equally persecuted for their sexuality. We're pleased to see that that's exactly the apology Gordon Brown offered.
Alan Turing was a quite brilliant mathematician, most famous for his work on breaking the German Enigma codes. It is no exaggeration to say that, without his outstanding contribution, the history of the Second World War could well have been very different. He truly was one of those individuals we can point to whose unique contribution helped to turn the tide of war.

The debt of gratitude he is owed makes it all the more horrifying, therefore, that he was treated so inhumanely. In 1952, he was convicted of “gross indecency” – in effect, tried for being gay. His sentence – and he was faced with the miserable choice of this or prison – was chemical castration by a series of injections of female hormones. He took his own life just two years later.

I am pleased to have the chance to say how deeply sorry I and we all are for what happened to him. Alan and the many thousands of other gay men who were convicted as he was convicted, under homophobic laws, were treated terribly.

Wednesday, September 9, 2009

Letting a thousand flowers bloom, or ten thousand, whether you like it or not

Genetics is flourishing to an extent that those of us who have been around for a long time can hardly believe. The proliferation of journals is daunting: each issue of some, such as Genetics, resembles the proverbial Manhattan phone book. And we're now seeing the proliferation of online journals, which seems to be totally out of control in a competitive gold rush.

It's not just primary journals, all purportedly peer reviewed, that are proliferating. Review journals are sprouting like dandelions (Sense About Science, a British charitable trust dedicated to correcting misrepresentations about science, estimates that 1.3 million peer reviewed articles are published a year, and growing). Pretty soon we'll have journals that just report the contents of review journals. In addition, we see the same papers two or more times: accepted manuscripts are posted online before the final version, and then the paper is e-published online before ink is put to any actual paper. Our inboxes are being filled with Urgent! emails listing stunningly important new papers. Humility is not part of the mix!

Indeed, a recent paper in Science (Strategic Reading, Ontologies, and the future of Scientific Publishing, Renear and Palmer, Science 325:828-833) shows that we read more but it takes us less time than in the past--as depicted in the graph. And soon computers will be mining papers for us, extracting the pithy parts, so that we can read even less. Does anybody really believe we have much recall of all this even now?

Reading is hasty, done mainly to cite something later in a 'literature review' in one's own papers or grant proposals. Peer review is clearly more hasty than ever before, and those of us on editorial boards know that a substantial fraction of those who are asked to review a paper decline.

This is out of control but clearly serves many interests. Not the least is that we're each trying to make our mark on tenure committees, grant reviewers--and posterity. Now. Patience is not called for! Of course, as always, most of what's being published will be chaff and relatively little grain, or weeds among the thousands of flowers that are blooming.

This does not necessarily mean that the papers being published are poor papers. Whether the system is good or bad depends on many factors and probably differs for each person depending on their age, seniority, grant funding, and personality. Some naturally are happy buried in minutiae, focused on one particular problem, such as the genetics of some specific disease, or a specific protein.

But others of us are not at ease with the current system, not just because we grew into our profession in calmer times, but because we think the important long-term aspect of science is synthesis, rather than reductionist partitioning and division down to the smallest detail.

Since its 'invention' in the 17th and 18th centuries, science has become a search for generalizations, not just particulars. Its long-term flow is driven by synthesis, which we often call 'theory'. In a field with thousands of technical particulars there is a kind of stream-like momentum. Implicitly, at least, people follow each other in terms of methods and kinds of analysis. Frantically trying to keep up or get ahead or stay afloat, de facto consensus forms--genomewide association studies, e.g., will explain complex diseases, so everyone jumps on the bandwagon. Epigenetics. Copy number variation. Whatever is the latest hot new thing, until something new comes along. Is this good in the long run, or is it ephemeral and herd-like?

The answer is probably a bit of both. Human genetics, plant genetics, Drosophila or mouse or zebrafish or nematode genetics are all providing similar explanations for how genes work and how development happens. Microbes, though single-celled, tell a similar story. In that sense, we really do get an overall picture of how life is from a genetic point of view.

At the same time, until there is a manifest feeling of failure or barrier, the flow won't shift and the de facto consensus can become a theory that is accepted without necessarily being very critically examined. We write a lot about the issues we think are being short-changed or mis-stated in this process. It is probably true that most people are happily ensconced in the details of their chosen subject and don't care about the big picture--we accept the current synthetic view without thinking too hard about it. That takes time or may slow down our next publication! But we're satisfied that the sea of particulars represents dramatic progress and are happy to carry breathlessly along.

We may drain ourselves to the point of exhaustion this way, or this may be how progress is made--it is our descendants who, a century from now, in calmer times, or at least in retrospect, will have the vantage point from which to look back and see what we have wrought. Our pointillist sea may, upon the distance of time, be a coherent picture after all.

What problem was Darwin trying to solve....and did he actually solve it?

We properly honor Darwin on the 150th anniversary of his Origin of Species, though more proper would be to have honored both Darwin and Wallace last year, when their ideas were jointly presented to the Linnaean Society. Indeed, their ideas actually rest on the cell theory, which was presented by Virchow in the same year (1858).

At the meeting Ken attended in Brazil last week, he got involved in a discussion with the distinguished ecologist Doug Futuyma of SUNY/Stony Brook. Ken had asserted that despite the title of his book, Darwin had not, in fact, solved the 'species' problem. First, beyond individuals, species are the nearest we have to objective categories in nature. Usually, we define species as populations whose members cannot interbreed with members of other populations to produce fertile offspring. But even there our definitions are often vague or imprecise.

Variation, even genomewide variation, can exist without speciation (it does among individuals within every species!). Widespread adaptive variation can exist without leading to speciation (humans are variable worldwide for presumably adaptive reasons--e.g., skin color, yet we're one species). And mating barriers can arise without adaptation in the usual sense (e.g., hybrid sterility genes).

In that sense Darwin did not solve the species problem he named his book after. Doug Futuyma suggested, however, that Darwin's main objective was not speciation per se, but the process that leads to it. Indeed, Darwin wanted 'natural selection' in the title of his book, because that was the process he was invoking as an extension of artificial selection by breeders, to explain long-term biological change and the origin of adaptive structures.

But was 'species' an incidental interest or a primary one? We think the answer is that species was indeed a central objective, and yet it is not separable today, nor in Darwin's mind, from the ultimate result of the process which is speciation. This seems clear in the way Darwin's book was written, in the materials presented to the Linnaean Society, and also in letters he wrote around the time of the book and earlier, around 1844, when he drafted a private sketch of his ideas.

The process was an extension of agricultural and hobby breeding, that clearly led to variation. But Darwin was also determined to show that species--natural 'types'--were not the result of specific acts of creation. The nature of 'transmutation' as it was often called at the time, was hotly debated and of course then, as now, centrally involved religious explanations of the world. Darwin was convinced that 'varieties' and 'species' arose gradually through natural processes.

So, while he did not solve the species problem per se (which is not a 'neat' problem in any case), he provided brilliant insight as to the nature of the processes that, in various ways, are involved in natural divergence that leads to the origin of species.

Monday, September 7, 2009

Fundamentalism and anthropological naivety

We were channel hopping this weekend, looking for the broadcast of Penn State's first football game of the season (well, Ken was looking; Anne wasn't). That led us to pass through several religious channels that our cable service provides.

It is in a sense an incredible phenomenon. Someone, usually manifestly poorly educated, in a robe of some sort, spouting off patent nonsense and opinion, in a rhetorical and tonal style meant to appeal to unquestioning emotion, and audiences (sometimes very large audiences) nodding unquestioningly (often in tears).

It is incredible that in our supposed age of science, this can still occur. It is an unsavory dose of reality, that wealth, education, and comfort do not actually educate people (unless by some weird chance these preachers are right and the entire empirical world an illusion). This is culture in action. We scientists and intellectuals flatter ourselves that we're the enlighteners of a benighted world, but it's not really true. People are surrounded by science, including evolution, and still it doesn't sink in.

In fact, we probably err in bemoaning the degenerating world that these religious hawkers are selling (and selling is an appropriate word for much of it, of course). We are perhaps the most formally educated population in human history: by far more people with more years of school, more credentialist degrees, technical training, and access to knowledge. Nonetheless, today as ever before, it has only been a small elite that is really 'educated' in the sense that applies here. High levels of this kind of knowledge have never been the daily bread of the majority of people, and they aren't now either.

The arguments produced in favor of sacred-text religion are specious and in the US often culpably misrepresent the claims of science, and we have every duty to try to correct them. But the real issue is not the physical facts of evolution vs theology. It's a deeper cultural fact about people, and symbolic battles for power and feelings of importance.

People generally like simple answers that explain everything. And they need opiates to calm their unease about the harsh realities they know are part of life. Mysteries can be appealing but also frightening, especially to those who know they're mortal. Science is no exception--simple answers are very appealing--which is why we do our best to resist unquestioned genetic determinism or darwinism. There should be no Gods, or gods (whether the latter be Marx or Darwin, Michael Jackson, or $$). Gods are dangerous. People are still willing, sometimes eager, to die for them.

We saw a comment in the news this weekend to the effect that fundamentalism will be the downfall of civilization. The rise of fundamentalism, according to this British scientist, means that global problems, such as climate change and population growth, won't be addressed with the kinds of worldwide cooperation needed to solve them. We realize that this was meant as a bemoaning of the human strife caused, justified, or motivated by fundamentalist belief, and we share that view. But it's important to understand that, as stated, it's totally wrong. First, it's probably fair to say that economic differences rather than religious ones are preventing global agreement on climate change. And secondly, if anything, the most advanced civilizations have thrived on marauding justified by religion (theological or, as in Marxism, secular). Mass-scale malevolent treatment of groups of people by other groups of people is perhaps one of the characteristics of large-scale complex societies.

It may also be natural for there to be people of good will who resist these dire aspects of human culture. There may be no precedent for it, but we hope they will eventually prevail.

Friday, September 4, 2009

Genetics in Brazil

I (Ken) was away this week, to give a talk at the 55th Congress of the Brazilian Genetics Society. The meeting was held in a relatively isolated hot-spring resort town, Aguas de Lindoia, north of Sao Paulo (the picture is of capybara, the world's largest rodent, in a lake near the meeting place). Around 2500 people were there, representing the wide array of genetics research, from ecology to experimental to biomedical.

Most of the attendees by far were young, enthusiastic students. Their posters and presentations (those I could understand despite my lack of Portuguese) were first-rate, and showed a high degree of development of genetics research in this huge and burgeoning country. Several outsiders, like me, were privileged to be invited to talk to those attending.

The purpose of this brief post is just to pay a tribute to these achievements. With its huge resources for studying both the academic and practical sides of ecology, ecological change, and human impact, Brazil is a fascinating place that will be important in applied, evolutionary, and population ecological genetics. My guess is that the Brazilians, naturally friendly and open people, will continue to be receptive to potential collaborators who have good ideas. But they will be full collaborators, not just investigators in need of outside expertise or resources.

This meeting, like so many, was themed to honor the 150th anniversary of the publication of Darwin's Origin of Species. My only issue with this, and one I raised in my own presentation, is that the focus on Darwin and 1859 is somewhat misplaced and unfair. The reason is that it denies credit to Alfred Russel Wallace who, with Darwin but in the year before (1858), independently developed a theory of evolution. Wallace's theory was somewhat different from Darwin's, focusing more on group or species competition with the environment than on competition among individuals, and in some other respects Wallace's views were somewhat more accurate, relative to current knowledge, than Darwin's.

Also, 1858 was the year in which Rudolf Virchow published his cell theory, that all life is cellular and all cells descend from other cells. That is an understanding upon which modern biology (evolutionary as well as functional) is entirely based.

So perhaps we missed doing our proper duty, and should have been celebrating last year. Wallace and Virchow might have gotten more notice, but of course, Darwin would still have been at the heart of the festivities!

Thursday, September 3, 2009

More on honey bees in the NYT

The New York Times today has an update on honey bees and colony collapse disorder ("Saving Bees: What We Know Now"), including an interview with Dr May Berenbaum, the senior author of the CCD study in PNAS, which we wrote about on Tuesday.

Not-so-random gene expression

One of the more perplexing questions about development is how a system rife with randomness--in the timing of gene expression, in whether genes in specific cells actually get turned on when instructed, in genetic variation itself, and so on--so predictably builds a recognizable replica of the organisms that donated their genetic material to the effort. The replica isn't exact, to be sure, as the genetic material comes from two parents with their own unique genomes, and mutations happen, but it's exact enough: a whale won't give birth to an elephant, nor a rabbit to a mouse. Randomness may be built in, but so is stability.

A recent paper in Science (Synchronous and Stochastic Patterns of Gene Activation in the Drosophila Embryo, Boettiger and Levine, July 24, 2009, Vol. 325. no. 5939, pp. 471 - 473) describes a mechanism that may explain some of that stability. Development is a time of rapid and contingent gene expression, demanding that at least a critical mass of cells in a developing tissue respond to signals in the same way, so that the next stage of growth can proceed. But not all cells that receive the same signal respond in the same way, such as by expressing a given gene at a specified time.

A note in the September Nature Reviews Genetics (Polymerase stalling gets genes in sync, p. 590) asks:
How is this variability dealt with in situations in which precise patterns of gene activation are important? A recent study [Boettiger and Levine] suggests a mechanism that can reduce variability in the onset of transcriptional activation in the Drosophila melanogaster embryo and may contribute to the precision of the developmental programme.
One of the initial steps in gene transcription is the recruitment and assembly of the RNA polymerase II complex, which then synthesizes the messenger RNA that will be translated into protein. If that complex isn't ready and waiting when a cell receives a signal to turn on a gene, the cell may not respond in a timely way, and the gene won't be turned on when needed.

Boettiger and Levine describe a series of elegant experiments looking at the timing of expression of a number of important 'control genes' in hundreds of fruit fly embryos.
These studies revealed two distinct patterns of gene activation: synchronous and stochastic [meaning random]. Synchronous genes display essentially uniform expression of nascent transcripts in all cells of an embryonic tissue, whereas stochastic genes display erratic patterns of de novo activation. RNA polymerase II is "pre-loaded" (stalled) in the promoter regions of synchronous genes, but not stochastic genes. Transcriptional synchrony might ensure the orderly deployment of the complex gene regulatory networks that control embryogenesis.

The timing differences are significant: synchronous genes turn on in different cells within about 2 minutes of each other, while stochastic genes vary by as much as 20 minutes. Boettiger and Levine suggest that this may indicate two classes of genes, those for which the timing of expression is crucial and those for which it is less so. What controls the pre-loading of the RNA polymerase is not clear, nor is it clear how much play remains in the process; previous experiments have shown considerable variability in expression of the same gene in different cells, including non-expression, so the Boettiger/Levine classification scheme is clearly not exhaustive.
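
As a purely illustrative aside (not from the paper), the difference between the two patterns can be pictured with a toy simulation: onset times for a "synchronous" gene cluster within a couple of minutes, while onset times for a "stochastic" gene straggle over tens of minutes. All of the numbers and distributions below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nuclei = 500  # hypothetical number of nuclei in a patch of embryonic tissue

# Synchronous gene: Pol II already stalled at the promoter, so onset times
# cluster tightly (roughly a 2-minute spread after the activating signal).
synchronous = rng.uniform(10.0, 12.0, n_nuclei)  # minutes after the signal

# Stochastic gene: Pol II must be recruited de novo, so onset times straggle
# (roughly a 20-minute spread).
stochastic = rng.uniform(10.0, 30.0, n_nuclei)

for name, onsets in [("synchronous", synchronous), ("stochastic", stochastic)]:
    print(f"{name:12s} mean onset {onsets.mean():5.1f} min, "
          f"spread {onsets.max() - onsets.min():4.1f} min")
```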

In many ways, randomness is crucial to evolution, but too much randomness during development can be lethal. As the Boettiger and Levine experiments show, evolution has produced ways to rein it in.

Tuesday, September 1, 2009

Colony collapse disorder solved?

Investigators may be on the verge of explaining colony collapse disorder (CCD), or the to-date unexplained demise of one third of the honey bee hives in the US and other countries. Previous explanations have included the proliferation of cell phone towers, pesticides, antibiotics, pathogens, sheer exhaustion in bees asked to work too hard, and many others, plausible and not. Earlier studies have found evidence of infection from a variety of viruses, suggesting a general immune disorder of some sort. However, Dr May Berenbaum and her team report in the current Proceedings of the National Academy of Sciences (in a paper called Changes in transcript abundance relating to colony collapse disorder in honey bees (Apis mellifera), Reed et al., published online Aug 24) that they may now actually be closing in on a convincing explanation of why so many bees are dying.

The investigators used microarray technology to compare the genes being expressed in the guts of sick bees vs. healthy bees, on the east and west coasts of the US, searching for a genetic footprint that might lead them to the cause of the disorder. If they found immune genes differentially expressed in sick bees, they could conclude that the bees were fighting an infection (however ineffectively). If instead detoxification genes involved in the response to pesticides were differentially expressed, that would suggest a man-made cause. And so on. They also put pathogen DNA on the microarrays to see whether they could identify pathogens that were more abundant in sick bees.
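
For readers curious what such a screen looks like in outline, here is a minimal sketch of a per-gene comparison of expression levels between sick and healthy samples. It is not the authors' actual pipeline; the sample sizes, the simulated data, and the choice of a simple t-test with a Bonferroni cutoff are all assumptions made for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n_genes, n_sick, n_healthy = 1000, 15, 15  # hypothetical array size and sample counts

# Simulated log-expression values: rows are genes (array probes), columns are bees.
healthy = rng.normal(0.0, 1.0, size=(n_genes, n_healthy))
sick = rng.normal(0.0, 1.0, size=(n_genes, n_sick))
sick[:50] += 2.0  # pretend the first 50 genes are strongly up-regulated in sick bees

# Per-gene t-test: is mean expression different between sick and healthy bees?
t_stat, p_val = stats.ttest_ind(sick, healthy, axis=1)

# Crude multiple-testing guard: Bonferroni threshold across all genes tested.
hits = np.where(p_val < 0.05 / n_genes)[0]
print(f"{len(hits)} genes look differentially expressed after correction")
```

A real analysis would of course involve normalization across arrays, more sophisticated test statistics, and annotation of the hits, but the basic logic is the same: compare each gene's expression across the two groups and flag the outliers.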

They found a lot of variation in gene expression between east and west coast bees, but overall 65 genes seemed to be more frequently expressed in sick bees than in healthy ones. These did not include elevated levels of pesticide-response genes or immune-response genes, suggesting that these two oft-suggested insults were not the answer. To the investigators' surprise, however, and rather by accident, they found broken fragments of ribosomes, the protein-manufacturing 'factories' inside every cell, in bees suffering from CCD. They also found evidence of picorna-like viruses (pico = small, picorna = small RNA), which hijack ribosomes by insinuating their RNA into the bees' protein-making machinery and taking over control of which proteins the ribosome synthesizes; the ribosomal fragments suggest that infection can degrade these molecules. A hijacked ribosome ends up translating the virus's RNA (or nothing at all) rather than the bee's own messages, so the bees are unable to make the proteins they need to fight infection. Thus, Berenbaum et al. suspect that once the ribosomes have been attacked, any and every insult, including pesticide exposure, exhaustion, and other infectious agents, could precipitate the death of the colony.

If this really does explain CCD, does this mean it's treatable or preventable? No, at least not yet. This study represents a great use of a high tech method to tell a story, but, as in human genetic diseases, even when a causative gene is identified, this rarely points the way to a cure. For now, if confirmed, these results can be diagnostic, meaning that hives on the verge of collapse can now be identified. Whether collapse can then be prevented is not clear, at least to us.

Berenbaum went on to suggest, as a guest on the BBC program Material World on August 27, that working honey bees in the US and Europe represent only a "tiny slice of honey bee genetic diversity". There are more than 20 races of honey bee, she says, and she trusts that the amount of genetic diversity maintained by these races will be sufficient to save the bees. It's early days in our understanding of the honey bee genome, she said, but characterizing the function of more bee genes might help. It's not clear whether she's envisioning genetically modified bees, or artificial selection to increase the bees' resistance to the viruses now infecting them, or some other preventive measure, but she's hopeful. We hope she's right to be.