Friday, January 30, 2015

Marshawn Lynch's extra placenta feeds the curiosity

The most important sporting event in the universe will take place Sunday night and that's partly why one of the players, Marshawn Lynch, has been in the spotlight. But it's not entirely why.
Marshawn Lynch, running back for the Seattle Seahawks.
He's an interesting guy, and in more ways than his mesmerizing moves, broken tackles, and bizarre locker room interviews, as Andrew Sharp writes:
After his mother gave birth to him 28 years ago, the midwife told her that she may have had a twin that died in utero. “They just knew that Marshawn was living off two placentas,” his mother told USA Today. “She told me that with that, he may be an amazingly strong child.”
We all have origin stories, some more elaborate or more epic than others. And when someone is notable or special or revered or freakish, we certainly pile it on. You can't just be born if you're somebody. You have to emerge under a full moon at the very least, but even better under a meteor shower, and best while Halley's comet's in sight.

Siddhartha was talking and walking right out of the birth canal. And you know who he became? The Buddha.

Marshawn Lynch was feasting from not one but two placentas. And you know who he became? Beast Mode.

Right. But if you've ever read this blog before you're probably wondering the same thing I'm wondering: Is it even possible for a fetus to be nourished from two placentas? Like with two umbilical cords or a bifurcated one?

I'm going to offer that yes, it's possible; to say otherwise is as arrogant as that semicolon. But hell no I don't know of an instance to share with you. And hell yes I think it's highly unlikely that Lynch was living off two placentas. What's more, even if he was, it wouldn't necessarily have anything biologically to do with Beast Mode. I just can't see how. But I'm not exactly the Beast Mode of placentology so take that for what you will.

Lately, however, I am the Beast Mode of new motherhood so I adore how the midwife and the family interpreted the two placenta event. This is what makes humans a riot. This is why they're my new favorite species. Spin a tale about the start of a precious life, the kind of tale that sets dreams afloat... the sky's the limit for this kid and nobody better get in the way of that.

But here comes science. The parsimonious placental tale would be that Beast Mode's twin died in utero, but the placenta stayed around (boring!). I gather from a brief Google search that this happens occasionally, or that this is how the presence of two placentas (or one placenta that looks like it has two parts) is normally explained.

But see, now we've done it. The science killjoys have ruined another beautiful legend. If the Lynch family has already been fed that boring scientific explanation, or if they read it here just now, and feel like their story might no longer have legs, we can help. Let's put some stardust back into it, but with different science.

If there was a twin, then the fact that Marshawn Lynch survived the same experience that his twin couldn't, well, that might speak volumes about his strength. 

There. It's not a far-fetched story as it stands. And that's lovely. But we can do better.

If Marshawn Lynch was born with teeth--which the Buddha probably had too if he could speak so well at birth--then maybe Lynch didn't merely out-survive his twin. Maybe he went full Beast Mode in utero, just like the sand tiger shark, where one vicious shark blasts through the other eleven. No kidding. This shark has to prey upon its siblings in the womb if it is going to be born, and the one born is not only strong but stronger for it.

Feeding from two placentas? Never heard of it.

But feeding off your embryonic kin? Yeah.

Thursday, January 29, 2015

The Fountain of Youth, or, Monkey Glands Revisited?

Remember Serge Voronoff?  No?  He was the guy who wanted to use surgery to mimic a slurp from the Fountain of Youth.  Around the turn of the 20th century, the idea became very popular among socialites with resources to burn and not enough religion to really believe they were going somewhere after death (or, speaking of burning, perhaps suspecting, given their wealth, that that 'somewhere' might be rather warm): the idea that you could extend youthful life by receiving a transplant of 'glands', that is, testicles, from some other animal.  This would give you a hormonal boost that would put hair back on your chest (I don't know whether women were ever privileged to have xenografts of, say, goat or chimp ovaries).

Voronoff (1866-1951)  Source: Wikipedia

Voronoff tried such xenografts among various domestic animal species, and for humans it was 'monkey glands' that were in demand by those in their idle dotage, or who wanted to avoid such dotage.  Shown here is a before-and-after (taken from Voronoff's book How to restore youth and live longer, published in 1928).  Note the difference?  It seems less than miraculous to us, not that different from the fake youthfulizing miracle creams or treatments advertised in Parade Magazine.  But never mind.

Before and after a 'glandsplant', from Voronoff's 1928 book


I don't happen to know whether Voronoff had himself rejuvenated by this means, or not, but it's beside our point here.  In the end, the idea proved rather useless, unless there was a placebo effect, and today we know about such things as tissue rejection and that this isn't the kind of transplant likely to work without a lot of remodeling of the recipient's immune system.  But the idea of transplanting healthy tissue to replace diseased tissue is very much alive and well.  Fortunately, the problems have to a great extent since been worked out, as many thousands of people with kidney, heart, bone marrow, or other organ transplants have lived to be able to attest.

In a sense, putting a younger, healthy heart or kidney into you to replace your worn-out one is a clever, focussed and almost miraculously wonderful way to rejuvenate.  It doesn't make you younger overall, the way the monkey gland craze promised, but it gives you longer, stronger life.  That's very good, and the transplant industry seems to be thriving as a result of understanding many different aspects of the body's responses to non-self tissue.  And there is no reason this success can't be extended in open-ended ways.

Naturally, the dream of staying frisky past sixty attracted a lot of attention in Voronoff's day, when the objective was not to replace a single decaying organ but to reverse the aging process more generally.  Just as naturally, it didn't work, but could more generic rejuvenation now be in the offing?

Modern 'parabiosis'
A recent Nature review describes the history of studies of pairs of mice, one young, one old, that have had a bloodline connection made between them (see figure).  This is known as parabiosis, and is like reverse-engineering conjoined twins!  The young mouse's blood goes into the older mouse, and then the scientists check what they wish to look at in the recipient's tissues.  The article summarizes findings suggesting that many different tissues appear in a sense to be 'younger' than the recipient's calendar age.


The method summarized in a figure (with credits) in the Nature commentary
The effects were measured on trait after trait and in various organ systems, with the basically consistent finding that the youthful blood made the tissues more youthful-appearing.  We say 'appearing' because the authors of these various experiments don't claim that there will be an overall longevity increase, though that would not be surprising.

The investigators are not claiming an actual fountain of youth, but make the reasonable inference that constituents of the blood change in a senescing way with age. They have identified a few components and clearly there are more to be found.

There may be many people with specific debilities for which such blood-borne treatment would work, though doing it by getting Sonny to be grafted onto Granny may be a tad of a problem (especially if Granny doesn't like to watch football games every weekend or Sonny doesn't like to watch Lawrence Welk).

However, instead of that rather draconian treatment, and it's not clear whether Sonny or Granny would resent it more, one can imagine extended sorts of allografts from at least compatible people, or auto-grafts (storing blood when young for use later on oneself... somewhat like cryopreservation!), or more likely the purifying or synthesizing of individual factors found differently in younger vs older blood.  Again, this is the appropriate sort of thing that is being investigated.

Those who themselves or whose relatives are 'losing it' with age will likely welcome some such therapies.  But one will have to be careful.  Allografting tissue to ameliorate mental deterioration, one clear objective in principle, might not be so unquestionably good if the mentally rejuvenated recipient still can't walk, or hear, or see, etc., or if the treatment just staves off death long enough for cancer, stroke, or renal failure to arise.

An ethical aside
At this juncture we can't refrain from remarking that we find the approval of these Frankenstein experiments on living mice thoroughly immoral; they should not have been allowed.  The history of animal research reveals some of the gruesome things IRBs will sign off on.  The investigators apparently reassured their doubting IRBs that they would carefully 'reduce animal discomfort', the euphemistic language usually used to avoid naming, honestly, the victims' terror and agony.  Of course, as the article says, the investigators held 'long discussions with our animal-care committee', quoting one.  The animals remained joined for many months.  This seems a horrible thing to do to the animals.

One can argue about who should make the call about what constitutes 'torture', but we think surely there must be other ways to do blood or blood-extract transfusions to achieve similar ends.  Of course investigators always defend what they want to do as 'necessary'.  Were such studies ever really needed or justified?  Those who did them will of course vehemently say yes, essentially invoking human exceptionalism and dismissing the quality of life of their laboratory victims.  Things will have to rest at that.

We must acknowledge that we did IRB-approved research on normal and abnormal mouse development for 25 years, and that, while we did not knowingly torture them, we did kill them.  The whole subject of animal research, and what IRBs will sign off on, is not an easy one.

The current situation: plenty of positive possibilities
Anyway, if we can somehow put these ethics issues aside, it seems that many investigators who are following up these basic findings are, as the story reports, taking the more focused and specific modern approaches one would expect: identifying compounds that circulate more in young than in old blood from isogenic individuals (e.g., inbred lab strains), finding various plausible signaling and other similar factors, and then attempting ingredient-specific infusion therapies.  The paper reports a number of these attempts, and the specific factors (growth factors, and the like) found in younger blood that could be useful in addressing human senescent diseases.  At this stage, with the factors identified, one doesn't even need isogenic individuals, and transfers of younger blood products into older humans are in trials in the university and private sectors.

In principle, given these kinds of circulating factors, one might not need to match the genotypes of the young donor and the old host, so long as there is no reason to think that the major aspects of the effect, or the detailed molecular structure of the youthful donor's circulating factors themselves, are genotype-specific.

One might also wonder why these factors' concentrations in blood diminish with age in the first place, especially because the complex mix and relative proportions of circulating factors may be genotype-specific (particularly if the widely claimed importance of genotype-based causation is accurate).  Thus, a good scientific question might be how to reverse the normal decline of factor concentrations with age, so that an individual can resume making his or her own mix, rather than assuming that single-factor transfusion is the best approach.  On the other hand, if the decline in natural production is due to somatic mutation or local epigenetic changes, re-stimulating the person's own production might be worse than using exogenous, purified, single-factor therapy, because the result might reflect damaged molecular structures or altered relative concentrations.  These things would be worth knowing, both for basic biology and to understand aging better.

Additional, more likely and important, cautions exist because, as the Nature review notes, by stimulating cell division in the elderly one might generate cancers that would otherwise arise much later or not at all.  The same would be true of stimulating endogenous production of the same cytokines.  That is because, if the understanding of cancer as largely a somatic mutational problem (see Tuesday's post) is correct, the cells of elderly recipients already carry many changes acquired by somatic mutation during life, making them closer to final transformation.  Stimulating a quiescent carcinogenic cell, or many cells that are but one mutational step away from transformation, could very quickly lead to a tumor.

It is relevant, we think, that the natural increase in age-specific cancer incidence generally tapers off in the elderly.  The presumed reason is simply that cell division slows down as tissues restore themselves more slowly.  So, making elderly cells divide could be playing with dynamite.

Nonetheless, identifying what is in young blood but diminishes with age is a useful idea with potentially major importance.  Some of this applied work, according to the Nature story, is already in the private sector.  Of course, with a cynical gleam in our eye, we can imagine that company offices are surely being set up somewhere in Florida, where the hordes of well-heeled snowbird customers are a ready-made sitting-duck clientele.  If they go for the skin creams and so on, think how they'll go for the new advanced rejuvenation therapy!

One could set up shop anywhere near Miami, say, but any shopping mall in the Sunshine State will probably do.  Fortunately, with these new tools it will not be necessary to find an actual fountain from which the miracles of youth can be dispensed.  As with the old-fashioned monkey gland treatments, who that is nearing his Maker, and can afford the treatment, would have the courage to say 'no'?  Still, if you want this new-style rejuvenation regimen, you will have to stand in line.  Ponce de Leon and his crew got there first, and they've been waiting for a very long time.

Ponce de Leon makes a point (from Wikipedia images. public domain)

By the end of his life, Serge Voronoff had generally been judged to have been a quack.  We can't say if that's fair or not.  There are always snake oil salesmen around, including 'respectable' scientists enthused by their own idea, who often believe their over-stated or premature claims and hastily take a research finding to the private world.  Brand it, say, 23andAgain!

At least, Voronoff's general idea wasn't completely whacko, even if the technology and its knowledge-underpinning were simply far from adequate for the task. The rich no longer demand monkey glands, but the basic yearning for extended youth is understandable, and isn't likely to go away, given the alternative.

But, while reveling in the many exciting, positive things that may be available in the near future, spare a thought for the helpless and hapless mice whose slaveowners decided that doing this to them was an acceptable thing.

Tuesday, January 27, 2015

Somatic mutation: does it cut both ways?

I've written journal articles, as well as blog posts here at MT, about the known and potential importance of somatic mutation (SoMu) as a cause of disease.  I referred to this in our post on 'precision' medicine yesterday, saying I'd write about it today.  So here goes: an attempt to show why SoMu may be an important causal phenomenon, one I called 'Cryptic causation' in a paper a few years ago in Trends in Genetics.

SoMu's are DNA changes that occur in dividing cells after the egg is fertilized.  Mutations arise every time cells divide, throughout life.  Each time a cell divides, the mutations that arose when it was formed are transmitted to its daughter cells, and this continues for the rest of the organism's life (unless the same site experiences another mutation at some point in that cell lineage).  The distinction between somatic mutations and germ line mutations goes back to Weismann's demonstration of the separation of the 'soma' and the 'germ line', the germ line being a developmental clade of cells leading to sperm and egg cells, and the soma being all the cells that are not part of it.  A change from parent to offspring that reflects mutation arising in the germ line is the usual referent of the word 'mutation'.  Wherever such mutations arose in the embryogenesis of the gonads, they are treated as if they occurred right at the time of meiosis.  That isn't a real problem, but it is fundamentally distinct from SoMu, because the latter are inherited in the somatic (body) tissue lineage in which they arose, but are not transmitted to offspring.
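
To make the lineage logic concrete, here is a minimal simulation sketch (ours, in Python, not from any paper) of somatic mutations accumulating along a tree of cell divisions; the per-division mutation count and the number of generations are arbitrary illustrative parameters.

    import itertools

    _mut_id = itertools.count()  # unique label for each new somatic mutation

    def divide(cell_mutations, generation, max_generations, new_per_division=2):
        """Recursively divide a cell; each daughter inherits the parent's somatic
        mutations plus a few new ones of its own (counts are illustrative only)."""
        if generation == max_generations:
            return [cell_mutations]
        leaves = []
        for _ in range(2):  # two daughter cells per division
            new = frozenset(next(_mut_id) for _ in range(new_per_division))
            leaves += divide(cell_mutations | new, generation + 1, max_generations)
        return leaves

    # The fertilized egg carries only the inherited (constitutive) genome: no somatic mutations yet.
    final_cells = divide(frozenset(), 0, max_generations=10)
    print(len(final_cells), "descendant cells")                # 1024
    print("somatic mutations per cell:", len(final_cells[0]))  # 2 per division x 10 divisions = 20
    print("mutations shared by ALL cells:",
          len(set.intersection(*map(set, final_cells))))
    # 0 here: even a first-division mutation reaches only half the descendants,
    # which is why one tissue sample need not reveal the mutations carried by others.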

Normally, we would dismiss somatic mutation as just one of those trivial details that has little to do with the nature of each organism--its traits.  At any given genome location, most of the cells have 'the' genome that was initially inherited.  If a SoMu breaks something in a single cell in some tissue, making that cell not behave properly, so what?  Mostly the cell will die or just while away its life not cooperating, its diffidence swamped out by the millions of neighboring cells in the same organ that are performing their proper duties.  It will have no effect on the organism as a whole.

But that is not always so!  In some unfortunate cell, a combination of inherited and somatic variants may lead that individual cell to be hyperviable in the sense of not following the local tissue's restrictions on its growth and behavior.  It can then grow, differentiate, grow more, again and again. We have a name for this: it's called cancer.

Somatic changes may mean that different parts of a given organ have somewhat different genotypes. Some fraction of, say, a lung or stomach, may work more or less efficiently than others.  If the composite works basically well, it won't even be noticed (unless, for example, the somatically mutant clones cause differences, like local spots, in skin or hair pigment).  But when a change in one cell is early enough in embryogenesis, or there is some other sort of phenotype amplification, by which a single mutant cell can cause major effects at the organismal level, the SoMu is very important indeed.

It isn't just cancer that may result from somatic mutation.  Epilepsy is a possible example, where mutant neurons may mis-fire, entraining nearby otherwise-normal neurons to fire as well, and producing a local seizure.  I suggested this possibility a few years ago in the Trends in Genetics paper, though the subject is so difficult to test that, although it is a plausible way to account for the locality of seizures, the idea has been conveniently ignored.

There are theories that mitochondria, of which cells contain hundreds or thousands, may mutate relatively rapidly and function badly.  They are an important way the cell obtains energy, and mitochondrial DNA is not in the nucleus and is not prowled by mutation-repair mechanisms the way chromosomes are.  Some have suggested that SoMu's accumulate in neurons in the brain, and since the neurons don't replicate much if at all, they can gradually become damaged.  It's been suggested that this may account for some senile dementia or other aging-related traits.

Beware, million genome project!
What has this got to do with the million genome project?  An important fact is that SoMu's are in body tissues but are not part of the constitutive (inherited) genome that is routinely sampled from, say, a cheek swab or blood sample.  The idea underlying the massive attempts at genomewide mapping of complex traits, and the new, culpably wasteful 'million genomes' project by which NIH is about to fleece the public and ensure that even fewer researchers get grants (because the money's all been soaked up by DNA-sequencing, Big Data induction labs), is that we'll be able to predict disease precisely from whole genome sequence, that is, from the constitutive genome sequences of hordes of people.  We discussed this yesterday, perhaps to excess.  Increasing sample size, one might reason, will reduce measurement error and make estimates of causation and risk 'precise'.  That is in general a bogus, self-promoting ploy, among other reasons because rare variants, measurement error, and sampling issues may not yield a cooperating signal-to-noise ratio.

So I think that the idea of wholesale, mindless genome sequencing will yield some results, but far fewer than promised, and the main really predictable result, indeed the precisely predictable result, is more waste thrown at mega-labs, to keep them in business.

Anyway, we're pretty consistent in our skepticism, nay, cynicism, about such Big Data fads as being mainly grabs, in tight times, for funding that's too long-lasting or too big to kill, regardless of whether it's generating anything really useful.

One reason for this is that SoMu cannot be detected in the kind of whole genome sequences being ground out by the machinery of this big industry.  If you have SoMu's in vulnerable tissues, say lung or stomach or muscle, you may be at quite substantial increased risk for some nasty disease, but that will be entirely unpredictable from your constitutive genome because the mutation isn't to be found in your blood cells.  Now, thinking about that, sequencing is not so precise after all, is it?

I've tried to point these things out for many years, but except for cancer biologists the potential problem is hardly even investigated (except, in a different sort of fad, by epigeneticists looking for DNA marking that affects gene expression in body cells but that, also, cannot be detected by whole genome sequencing).

In fact, epigenetics is a similar though perhaps in some ways tougher problem.  DNA marking affects gene expression by changing it in local tissues, reflecting cellularly local environmental events, and hence constitutive genomics can't evaluate it directly.  On the other hand, epigenetic marking of functional elements can easily and systematically be reversed, enzymatically, in response to specific environmental changes at the cell level.  These are somatic changes in DNA dynamics, but at least SoMu, if detected, basically doesn't get reversed within the same organism and is 'permanent' in that sense, and hence easier to interpret.

But--the mistake may go in the opposite direction!
But I've myself neglected another potentially quite serious problem.  SoMu's arise in the embryonic development of the very tissues we use to get constitutive genome sequences.  The lineage leading to blood and other tissues diverges from other lineages reasonably early in development.  The genome sequenced in blood is not in fact your constitutive genome!  Information found there may not be in others of your tissues, and hence not informative about your risks for traits involving gene expression.

The push for precision based on genomewide sequencing is misguided in this sense too, and it is the mirror image of the non-detectability of SoMu's in blood samples: what is found in 'constitutive' genomes from blood samples may actually not be found in the rest of the body, and may not have been in your inherited genome at all!

This may not be all that easy to check.  First, comparing parent to offspring, one should see a difference, that is, non-transmitted alleles in both parties.  But since neither the parent's blood nor the offspring's blood entirely reflects their 'constitutive' genomes, it may be difficult to know just what was inherited.  Even if most sites don't change and follow parent-offspring patterns, it doesn't take that many changes to cause disease-related traits (if it did, why would so much funding be going to 'Mendelian', that is, single-gene, usually single-mutation, traits?).

One could check sequences in individuals' tissues that are not in the same embryonic fate-map segment as blood, or compare cheek cells and blood, or other things of that nature.  In my understanding at least, lineages leading to cheek cells (ectodermal origin) and blood cells (mesodermal origin) separate quite early in development.  So comparing the two (being careful only to sample white cells and epithelial cells) could reveal the extent of the problem.
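
As a purely illustrative sketch of that comparison, one could line up the variant calls from the two tissues and flag the discordant ones as candidate early-embryonic somatic mutations (or as sequencing artifacts, which is why depth and error filtering matter in practice); the variants below are made up, and a real pipeline would work from quality-filtered VCFs.

    # Hypothetical variant calls (chromosome, position, alt allele) from two tissues of one person.
    blood_variants = {("chr1", 12345, "A"), ("chr2", 67890, "T"), ("chr7", 555, "G")}
    cheek_variants = {("chr1", 12345, "A"), ("chr2", 67890, "T"), ("chr9", 4242, "C")}

    shared     = blood_variants & cheek_variants  # plausibly inherited (constitutive)
    blood_only = blood_variants - cheek_variants  # candidate somatic in the blood (mesodermal) lineage, or artifact
    cheek_only = cheek_variants - blood_variants  # candidate somatic in the cheek (ectodermal) lineage, or artifact

    print("shared:", len(shared), "blood-only:", len(blood_only), "cheek-only:", len(cheek_only))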

It might comfortingly show that little is at issue, but that should be checked.  However, of course, that would be costly and would slow down the train to get that Big Funding out of Congress and to keep the Big Labs and their sequencers in their constituencies in operation.

Still, if we are being fed promises that are more than just ploys for mega-funding in tight times, or more than the playing out of the belief system that inherited genome sequence is simply all there is to life, or all we need to know about, then we need to become able to look where genetic variation manifests its effects: at the local cell level.  Even for a true-believer in DNA as everything, a blood-based sequence can only tell us so much--and that may not include the variation that exists in the person's other tissues.

Well, one might wish to defend the Infinite Genomes Project by saying that at least constitutive genome sequences from blood samples get most, or the main, signal by which genetic variation affects risk of traits like disease.  But is that even true?

First, huge genomewide mapping studies routinely, one might say notoriously relative to the genome faith, account for only a fraction, usually a small fraction, of the overall genetic contribution as estimated by measures like heritability.  Predictive power is quite limited (and here we're not even considering environments, which cloud the picture greatly).

But second, risk from constitutive genome sequence, as a rule and especially for the complex or late-onset traits that are so important to our health and longevity, accounts for only a fraction of overall risk.  That is, heritability is far below 100%.  So the bulk of risk is not to be found in such sequence data.  And while 'environment' is clearly of major importance, SoMu appears as 'environment' in genomic studies, because the variants are not in constitutive sequences and are not shared between parents and offspring in family studies.  This may be especially important for traits that really do seem to involve genes in the cellular mechanism, as so clearly shown by cancers.
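
A bit of arithmetic, with made-up but not atypical numbers, shows how little of total risk variation a blood-based 'constitutive' sequence can claim even on its own terms; both figures below are illustrative assumptions, not estimates for any particular trait.

    # Illustrative only: suppose 40% of risk variation is attributable to inherited genotype
    # (heritability), and mapped variants capture a quarter of that heritability.
    heritability = 0.40
    mapped_share_of_heritability = 0.25

    explained = heritability * mapped_share_of_heritability
    print(f"variance explained by mapped inherited variants: {explained:.0%}")                        # 10%
    print(f"left to environment, somatic mutation, chance, and unmapped variants: {1 - explained:.0%}")  # 90%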

Thus, it is not accurate to say that we at least get the bulk of genetic (meaning inherited) risk accounted for by pie-in-the-sky exhaustive genome sequencing.  Yet testing for SoMu is not even on the agenda of the Big Data advocates.

How much more one would get from a serious approach to SoMu--which would require some serious innovative thinking--remains untested.  It's not on the agenda, not because we know it's relatively unimportant, but because it's hard to test, and in that sense hard to use to grease the wheels of current projects for which an excuse to keep funding flowing is what is really being sought by the Big Data advocates.  It's safer, even if we know it's got its limits and we don't really know what those limits are.

A real 'genomic' approach should include checking for the problems caused by SoMu--in both directions!

Monday, January 26, 2015

What's 'precise' about 'precision' medicine (besides desperate spin)?

Not very long ago we were promised 'personalized genomic medicine'. Surely you remember.   It was a slogan like any advertising slogan and if you read the fine print (the caveats, if you can find them) you'll see various safety valves.  Medicine has always been 'personalized' but the implication was that we'd be treating everyone in ways that are specifically dictated by what we find in their genome (whole DNA sequence of their 'constitutive' or inherited genome).

The idea was an advertising or promotional way of lobbying for funds for the belief system that genomes cause everything (except, perhaps, the final Super Bowl score...but even that, well, if the quarterbacks' and receivers' skills are--as surely they must be--genetically determined, maybe we can even predict that!).  Of course, when someone carries a particular genotype at some locus with a strong effect, and many of those are known, a clinician should, indeed must, take that into account.

But that is nothing new; it has long been the business of the profession of genetic counselors and so on (not necessarily of online DNA businesses one might name).  That sort of personalized genomic medicine is no more novel than 'evidence-based' medicine, which is another slogan, this time perhaps for the for-profit HMO businesses that want to dictate how the doctors in their stable treat their patients.  Its nominal objective is to eliminate poor doctoring, which is good, but anyone not thinking it's basically about profits is willfully naive.

Anyway, now we're seeing a new slogan, so mustn't that mean we have successfully achieved the goal of 'personalized genomic medicine'?  Obviously, if we have any accountability left in government spending, even under Republicans, then if we didn't achieve the previous objective, what's the justification for a new one, much less for expecting Congress to allocate the funding for it?  A cynic (but not us!) might say that the lobbying aspect of biomedical research is now onto the next slogan, changing the packaging even if the product's not really changed.  One has to keep changing slogans if one wants to keep customers' (and Congress') attention so they'll keep giving you money.

The new slogan is 'precision medicine'.  Sara Reardon says this about that in Nature:
The agency seems to have been planning the effort for some time, listing 'precision medicine' as one of its four priorities in its 2015 budget proposal; another was 'big data'. Other government agencies are also expected to participate, as may some private companies. There is no word on how much the initiative will cost, but details are likely to trickle out as Obama prepares his budget request for fiscal year 2016, which is due to be released on 2 February.
Wow!  This will replace, one has to assume, all that 'sloppy' medicine of the past, just as 'evidence-based' medicine presumably replaced 'lack-of-evidence-based medicine'.  Now, docs will know precisely what ails you and precisely what to do about it, and this will precisely involve genetic approaches since that's all NIH's leadership seems to understand.

OK, OK, so we're (again) being snide.  But not just snide.  You ask yourself what one might mean by 'precision' medicine?  Does it mean 100% accurate, no mistakes?  Of course not, so then, what?  If it means the doc does his/her best, is that anything new or something to write home about?  Surely Dr Collins isn't suggesting that he's been imprecise with his promises about genomics.  Surely not!  Instead, the claim is that we'll be able to look at your genome and hence know precisely what is in your future and what to do for (or to) you as a patient.   Anyone who knows anything about genetics knows that, with some clear-cut but generally rare exceptions, that's bollocks!

What does 'precision' mean (if anything)?
The goal of precision genomic medicine sounds laudable, though if it just means that everything will now be precisely based on genes, that would be the only thing that's new.  In some venues, at least, the idea has instantly received the ridicule it precisely deserves.  Conscientious doctors have always done precisely as well as they could, given their knowledge at the time.  So what must be meant is that now genetics will let us treat patients in a way that is, finally, really precise!

But wait: what does the word 'precise' actually mean?  Well, look it up.  It means 'marked by exactness or accuracy'.  OK again, that sounds great... or does it?  Does even Francis Collins, your genome's best friend, really believe that we'll have exact diagnosis or treatment based on genome sequences?  Or what about 'accurate'?  That doesn't necessarily mean correct, or appropriate -- just targeted at a given spot.

'Precision' can refer to perfection or exactness.  Or it can refer to something less: some kind of error known within limits.  Proper science always deals in precision, but properly, by specifying the degree of accuracy.  One says an estimate is precise to within x percent, based on the current data.  But a risk of, say, 5% can be precise to within, say, one percentage point, or a tenth of one, or more, or less.  It's not all that reassuring that your genome gives your doc something like that unless the range is narrow and reliable.  In general that is the kind of meaning we can assign to 'precision', when done honestly and honorably, but it's far from the scientific reality in this field.
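
To put a number on 'precise to within x percent', here is a minimal sketch (ours, not from the post) of the standard binomial confidence-interval arithmetic: the same estimated 5% risk is far more or less 'precise' depending on how many people it was estimated from.

    import math

    def risk_margin(p_hat, n, z=1.96):
        """Approximate 95% margin of error for a risk estimated as p_hat from n people."""
        return z * math.sqrt(p_hat * (1 - p_hat) / n)

    for n in (100, 1000, 100000):
        m = risk_margin(0.05, n)
        print(f"n={n:>6}: a 5% risk is 'precise' to within about {m * 100:.2f} percentage points")
    # n=   100: ~4.3 points (i.e., roughly 1% to 9%)
    # n=  1000: ~1.4 points
    # n=100000: ~0.14 points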

Indeed, when the precision of estimates is presented properly, one is in essence expressing inexactness--of a machine or instrument or estimate.  To claim that we're going to be using NIH research funds to generate precision medicine is saying that we're going to do the best we can.  One can only hope so!

Precision and accuracy are terms that do not in themselves mean anything until the degree is stated, along with how that degree is known and by what criterion.  Does it relate to replication of a process and similarity of results?  Does it mean what will happen, or what might happen with some specifiable accuracy or probability?  If you don't state that, you're just playing empty word games.

Worse than that, honest research and clinical treatment have always been 'the best we can do' at any given time.  Likewise, genome-based prediction, usually very imprecise both in the sense of being inaccurate and in the sense that its degree of inaccuracy is very poorly known, is nothing new.  It's nobody's fault, because the genome and its interactions with the environment are complex.  One hopes that whatever real genomic information on risk, response to treatment, diagnosis etc. we have will be as precise as possible, in the proper sense of the term, and that new knowledge from research will increase that precision.

But it is dishonorable to imply that this is something new and different, or to suggest, even implicitly, that genome sequencing and the like are leading us to anything close to what most people think of when they hear the word 'precision', or that such precision is generally even in the cards.  Especially following on the previously largely vacuous promise of 'personalized' genomic medicine.

It is very misleading to suggest otherwise.  It takes guts and ruthless lobbying.  'Precision' is literally almost meaningless in this context!

The million genomes project
In the same breath, we're hearing that we'll be funding a million genomes project.  The implication is that if we have a million whole genome sequences, we will have 'precision medicine' (personalized, too!).  But is that a serious claim or is it a laugh?

A million is a large number, but if most variation in gene-based risk is due, as mountains of evidence show, to countless very rare variants, many of them essentially new, and perhaps hordes of them per person, then even a million genome sequences will not be nearly enough to yield much of what is being promised by the term 'precision'!  We'd need to sequence everybody (I'm sure Dr Collins has that in mind as the next Major Slogan, and I know other countries are talking that way).
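
Some back-of-the-envelope arithmetic (ours, with illustrative allele frequencies) shows why even a million genomes runs out of steam for very rare variants: the expected number of carriers quickly drops to a handful, far too few to estimate any effect on risk with the kind of 'precision' being advertised.

    n_people = 1_000_000
    for allele_freq in (1e-3, 1e-4, 1e-5, 1e-6):
        # Each person carries two copies of an autosomal site, so the expected number
        # of carriers of a rare variant is roughly 2 * n * f (ignoring rare homozygotes).
        expected_carriers = 2 * n_people * allele_freq
        print(f"allele frequency {allele_freq:.0e}: ~{expected_carriers:,.0f} expected carriers")
    # 1e-3: ~2,000    1e-4: ~200    1e-5: ~20    1e-6: ~2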

Don't be naive enough to take this for something other than what it really is: (1) a ploy to secure continued funding, predicated on his Genome Dream, in the absence of new ideas and in the presence of promises any preacher would be proud of, and of results that so far clearly belie them; (2) a way to protect influential NIH clients with major projects that no longer really merit continued protection, but which will be folded into this one; and (3) a way to guarantee congressional support from representatives who really don't know enough to see through it, or who simply believe, or just want cover for, the idea that these sorts of things (add Defense contracting and NASA mega-projects as other instances) are simply good for local business and sound good to campaign on.

Yes, Francis Collins is born-again with perhaps a simplistic one-cause worldview to go with that.  He certainly knows what he's doing when it comes to marketing based on genetic promises of salvation.  This idea is going to be very good for a whole entrenched segment of the research business, because he's clever enough to say that it will not just be one 'project' but is apparently going to have genome sequencing done on an olio of existing projects.  Rationales for this sort of 'project' are that long-standing, or perhaps long-limping, projects will be salvaged because they can 'inexpensively' be added to this new effort.  That's justified because then we don't have to collect all that valuable data over again.

But if you think about what we already know about genome sequences and their evolution, and about what's been found with cruder data, from those very projects to be incorporated among others, a million genome sequences will not generate anything like what we usually understand the generic term 'precision' to mean.  Cruder data?  Yes: for example, the kinds of data we have on many of these ongoing studies, based on inheritance, on epidemiological risk assessment, or on other huge genomewide mapping efforts, have consistently shown that there is scant serious new information to be found by simply sequencing between mapping-marker sites.  The argument that significance will rise when we test the actual site doesn't mean the signal will be strong enough to change the general picture.  That picture is that there simply are not major risk factors except, certainly, some rare strong ones hiding in the sequence leaf-litter of rare or functionless variants.

Of course, there will be exceptions, and they'll be trumpeted to the news media from the mountain top.  But they are exceptions, and finding them is not the same as a proper cost-benefit assessment of research priorities. If we have paid for so many mega-GWAS studies to learn something about genomic causation, then we should heed the lessons we ourselves have learned.

Secondly, the data collected or measures taken decades ago in these huge long-term studies are often no longer state of the art, and many people followed for decades are now pushing up daisies, and can't be followed up.

Thirdly, there is the fact that the epidemiological (e.g., lifestyle, environment...) data have clearly been shown largely to yield findings that get reversed by the next study down the pike.  That's the daily news: the latest study has now shown that all previous studies had it wrong; factor X isn't a risk factor after all.  Again, major single-factor causation is already elusive, so just pouring funds into detailed sequencing will mainly be finding reasons for existing programs to buy more gear to milk cows that are already drying up.

Fourth, many if not most of the major traits whose importance has justified mega-epidemiological long-term follow-up studies have failed to yield consistent risk factors to begin with.  Yet for many of these traits, the risk (incidence) has risen faster than the typical response to artificial selection.  In that case, if genomic causation were tractably simple, such strong 'selection' should point to the few genes whose variants respond to the changed environmental circumstances.  But these are the same traits (obesity, stature, diabetes, autism, ...) for which mapping shows that single, simple genetic causation does not obtain (and, again, that assumes the environmental risk factors purportedly responsible have even been identified, and the yes-no results just mentioned show otherwise).

Worse than this, what about the microbiome or the epigenome, which are supposedly so important?  Genome sequencing, a convenient way to carry on just as before, simply cannot work miracles in those areas, because they require other kinds of data (data not available from current sequencing samples, nor, of course, from deceased subjects, even if we had stored their blood samples).

And these data will be almost completely blind to another potentially very important genetic causal process, that of somatic mutation.  Tomorrow we'll discuss that issue.

Of course, there are many truly convincing genetic factors that are clearly relevant to know and to use in diagnosis or treatment decisions.  Those are precisely the factors to test for, investigate, intervene with and so on.  NIH should be closed down entirely if it is aiming at anything different (and no one suggests that, in general, it is).  Even if they are rare, or 'orphan', disorders, they are the ones at which to target engineered preventive or curative approaches; if science is good at anything, it is engineering.

Precisely what will the current proposed work yield, then?  It will yield an even better picture of how wasteful this sort of perpetuate-every-big-study-you-can-identify project will have been.  That is, at least, one precise prediction!

Caveat emptor!  The Ayn Rand factor: don't mistake it for science
Everybody naturally wants precision-based medicine, using genes in those areas in which that is wholly appropriate.  But what we should expect is that NIH puts its resources precisely where they will do the most good for the investment.  Those, including Congress, who think that NIH has been doing that in the recent past are precisely those who should start paying closer attention.

If you are among those who have paid attention to the miracles NIH officials, Dr Collins in particular, have been promising and the language they have used, now for over 20 years, then you should suggest that they leave NIH and get new jobs in a place where truthfulness is not part of the deal: Madison Avenue. Unless, of course, simply finding a rationale for keeping the funding tap open to existing clients is the real underlying objective.

Some readers may say that enough whinging about this is enough!  After all, genes are important, so this will be good science even if the promises are exaggerated.  There will be good science, certainly, but this HyperProject will undermine the idea of nimble science, driven by ideas rather than empires.

Science does involve money and so is never too far from underlying politics.  In a largely Republican environment, dancing to the ghost of Ayn Rand, it will be interesting to see if those now in power keep indulging the 'haves' in science.  One might expect them to, but at the same time this is, when you look closely, largely a welfare project for the science in-groups, to keep life in large, long-standing, tired projects well past their point of diminishing returns: like bailing out rusting industries, these long-term projects once found useful things but have now clearly slipped from good cost-benefit profiles.  So one can ask whether even the Republicans are paying attention.

A Big Data bailout will, of course, preserve jobs and career status for the main recipients.  The Old Boy networks in science were to some extent dismantled by the democratization of funding that began at NIH and NSF around the 1980s.  Peer review came to include peers of both genders and of varied racial backgrounds, and funding became more geographically dispersed.  It was never perfectly equitable, but it was much more open and democratic than the back-room system that seemed to have prevailed before.

But the new Old Boys (and now Girls, too, just as acquisitive as the Boys) are back with a vengeance!  As always, those at the top, with the proposed bail-out project extension, will hire many minions of workers, including technicians, post-docs and other well-trained scientists.  Those are jobs, certainly, but as before they're largely located in the elite universities that have long had a big grasp on funds (and have munificent private sources they could use instead of feeding so hungrily at the public trough).  You can voice your own view about whether this elitism is what's best for science or not.  But no matter, welfare for scientists and technicians, controlled by the Big Lab aristocracy, is largely what's afoot--again.

Aristocracies maintain themselves by making enough people dependent on them--the research cogs.
If you have a liberal bent of mind, as we tend to, you can't object to the use of public sector resources so people have jobs, though inside tracks for science-related people aren't exactly democratic.  But what the welfare-for-tired-projects will do, as adverted to above, is deprive even these clever inside people of chances to be innovative, and deprive science of the chance to be more nimble.  That's because the army of scientists and staff, those on the hundreds-of-author papers, are forced by this system to be cogs in the Mega-wheel of Big Data projects.  That's not good for science.

For, not against
Our arguments are not against science, but for it.  They are for nimble, fleet-footed science with a fair, idea-driven marketplace rather than an institutionally inertial one.  And, also as noted earlier, a million sounds good but isn't nearly enough for some of the implied success, and seems more likely to be intentionally setting the table for the future of this welfare system--whole-population sequencing!  Why not?  It would be rather surprising if the Director, given his track record, hasn't got this in his back pocket for when the current catch-phrase has worn out.

An irony is that our comments here might be interpreted as dismissing the causal importance of genetics in the nature of organisms and their evolution.  In a sense, our message is the opposite: it's that by building genetics into the sociopolitical institutional structure of science, and hence its particular welfare or self-maintenance system, we routinize what isn't yet well enough understood to be routinized.  We trivialize genetics in that way, the opposite of what should be done.  We benumb minds that should be sharpened by facing an open, rather than channeled frontier.

One thing, though: this is not a shell game!  It's all being done in plain sight.  You and everyone who thinks about it know what this is.  We personally are newly retired and have no dog in the fight.  But one would expect that sooner or later a wide community of scientists will tire of Dr Collins' continual feeding of his narrow ideology (or of his dependents, view it how you wish), for lack of better scientific ideas.  If the victims whose careers and ideas are not being protected by this welfare system don't care enough, don't act, or can't find a way to resist by credible challenges to the status quo, the status quo will remain.  It's that simple.

Friday, January 23, 2015

What is 'inappropriate' use of baby aspirin? The risk of estimating risk

Something like a third of the American population* takes a baby aspirin every day to prevent cardiovascular disease (CVD).  But a new study ("Frequency and Practice-Level Variation in Inappropriate Aspirin Use for the Primary Prevention of Cardiovascular Disease : Insights From the National Cardiovascular Disease Registry’s Practice Innovation and Clinical Excellence Registry", Hira et al., J Amer Coll Cardiol) suggests that more than 1 in 10 of these people are taking it 'inappropriately.'



Aspirin slows blood clotting, and blood coagulation plays a role in vascular disease, so the thinking is that some heart attacks and strokes can be prevented with regular use of aspirin, and indeed there is empirical support for this.  As with many drug therapies, it was the side effects of aspirin use for something else, in this case rheumatoid arthritis (RA), that first suggested it could play a role in CVD prevention -- a 1978 study reported that aspirin use lowered the risk of myocardial infarction, angina pectoris, sudden death, and cerebral infarction in RA patients (study cited in an editorial by Freek Verheugt accompanying the Hira paper), a result that kick-started its use for CVD prevention.

The new Hira et al. study included about 68,000 patients in 119 different practices taking aspirin for prevention of a first heart attack or stroke, not recurrence.  The authors looked at clinical records in a network of cardiology practices to assess the proportion of patients in each practice that was taking aspirin, and whether they met the 10-year risk criteria for 'appropriate use' as determined by the Framingham risk calculator.  The calculator uses an algorithm based on age, sex, total cholesterol, HDL cholesterol, smoking status, blood pressure and whether the patient is taking medication to control blood pressure.
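
For readers curious what such a calculator actually does under the hood, here is a schematic sketch of the usual survival-model form, risk = 1 - S0 ** exp(score - mean score); the coefficients, baseline survival, and mean score below are placeholders we made up for illustration, not the published Framingham values, which also differ by sex and by version of the score.

    import math

    # Placeholder coefficients for illustration only -- NOT the published Framingham values.
    BETAS = {"log_age": 2.0, "log_total_chol": 1.0, "log_hdl": -0.9,
             "log_sbp_treated": 1.8, "log_sbp_untreated": 1.6, "smoker": 0.6}
    BASELINE_SURVIVAL = 0.95  # hypothetical 10-year event-free probability at the average score
    MEAN_SCORE = 18.0         # hypothetical population-average linear score

    def ten_year_risk(age, total_chol, hdl, sbp, treated_bp, smoker):
        """Schematic 10-year CVD risk of the form 1 - S0 ** exp(score - mean score)."""
        score = (BETAS["log_age"] * math.log(age)
                 + BETAS["log_total_chol"] * math.log(total_chol)
                 + BETAS["log_hdl"] * math.log(hdl)
                 + (BETAS["log_sbp_treated"] if treated_bp else BETAS["log_sbp_untreated"]) * math.log(sbp)
                 + BETAS["smoker"] * (1 if smoker else 0))
        return 1 - BASELINE_SURVIVAL ** math.exp(score - MEAN_SCORE)

    # A 55-year-old smoker with untreated systolic pressure of 130 comes out near 7% with these made-up numbers.
    print(f"{ten_year_risk(age=55, total_chol=210, hdl=45, sbp=130, treated_bp=False, smoker=True):.1%}")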

Appropriate use, according to Hira et al., is a 10-year risk of greater than 6%.  According to the calculator itself, 6% risk means that 6 of 100 people with whichever set of factors yields this risk will have a heart attack within the next 10 years.  The reason this even has to be thought about is that there is some risk to taking aspirin: it's an anticoagulant and can cause major bleeding.  So weighing the benefit against the cost, preventing CVD without causing major bleeds, is what's at issue here.  If the benefit is a long shot because an aspirin user isn't likely to have CVD anyway, the potential cost can outweigh the pluses.

As Verheugt explains:
Major coronary events (coronary heart disease mortality and nonfatal MI) are reduced by 18% with aspirin but at the cost of an increase of 54% in major extracranial bleeding. For every 2 major coronary events shown to be prevented by prophylactic aspirin, they occur at the cost of 1 major extracranial bleed. Primary prevention with aspirin is widely applied, however. This regimen is used not only because of its cardioprotection but also because there is increasing evidence of chemoprotection of aspirin against cancer.
Hira et al. found that 11.6% of the population of patients visiting a cardiology practice were taking aspirin inappropriately, having a risk less than 6% as calculated by the Framingham calculator.  That is, their risk of bleeding outweighs the potential preventive effect of aspirin.
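
To see how that trade-off plays out in absolute terms, here is a rough per-1,000-people sketch using the 18% and 54% relative figures quoted above; the baseline 10-year CVD risks and the baseline bleeding risk are illustrative assumptions, not numbers from the paper.

    RELATIVE_RISK_REDUCTION = 0.18  # 18% fewer major coronary events, from the quote above
    BLEED_RISK_INCREASE = 0.54      # 54% more major extracranial bleeds, from the quote above
    BASELINE_BLEED_RISK = 0.01      # hypothetical 10-year major-bleed risk without aspirin

    # The extra bleeds don't depend on how high your CVD risk is, so they are the same for everyone.
    extra_bleeds_per_1000 = 1000 * BASELINE_BLEED_RISK * BLEED_RISK_INCREASE  # ~5.4

    for cvd_risk in (0.03, 0.06, 0.12):  # hypothetical 10-year CVD risks
        prevented_per_1000 = 1000 * cvd_risk * RELATIVE_RISK_REDUCTION
        print(f"10-yr CVD risk {cvd_risk:.0%}: ~{prevented_per_1000:.1f} events prevented "
              f"vs ~{extra_bleeds_per_1000:.1f} extra major bleeds per 1,000 people")
    # With these made-up baselines the benefit only clearly outruns the harm as CVD risk rises,
    # which is the logic behind setting a risk threshold for 'appropriate' use.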

But, about this 6% risk.  Does it sound high to you?  Would you change your behavior based on a 6% risk, or would you figure the risk is low enough that you can continue to eat those cheese steaks?  Or maybe you'd just start popping aspirin, figuring that made it really safe to continue to eat those cheese steaks?

And why the 6% threshold?  So precise.  Indeed, a 2011 study suggested different risk thresholds for different age categories, increasing with age.  And different calculators (such as this one from the University of Edinburgh) return different risk estimates, varying by several percentage points given the same data; so much for precision.

Risk is, of course, estimated from population data, based on the many studies that have found an association between cholesterol, blood pressure, smoking status, and heart attack, particularly in older men.  A distribution of risk factors and outcomes would thus show that, for a given set of cholesterol and blood pressure values, on average x% will have a heart attack or stroke.  These are group averages, and using them to make predictions for individuals cannot be done with any precision we can verify.  Indeed, one of the strongest risk factors known to epidemiology, smoking, causes lung cancer in 'only' about 10% of smokers, and it's impossible to predict in whom.  But that's why these CVD risk calculators never estimate 100% risk.  The highest risk I could force them to estimate was "greater than 30%".

Hard to know what that actually means for any individual.  At least, I have a hard time knowing what to make of these figures.  If 6 of 100 people in the threshold risk category will have an MI in the next 10 years, that means that 94 will not.  So another way to think about this is that the risk for those 94 people is in fact 0, while the risk for the unlucky 6 is 100%.  For everyone over the 6% threshold, the cost (a possible major bleed) is assumed to be outweighed by the benefit (prevention of MI), even though that's in fact only true for 6 out of 100 people in this particular risk category.  But since it's impossible to predict which 6 are at 100% risk, the whole group is treated as though it's at 100% risk, and put on preventive baby aspirin, and perhaps statins as well, and counseled on lifestyle changes and so on, all of which can greatly affect the outcome, and alter our understanding of risk factors -- or of the effectiveness of preventive aspirin.  And what if it's true that a drink a day lowers heart failure risk?  How do we factor that in?

Further, a lot of more or less well-established risk factors for CVD are not included in the calculation.  After decades of cardiovascular disease research, it seems to be well established that obesity is a risk factor, as well as diabetes, and certainly family history.  Why aren't these pieces of information included?  Tens if not hundreds of genes have been identified as having at least a weak effect on risk (and even these account for only a fraction of the genetic risk as estimated from heritability studies), and they aren't included in the calculation either.  And we all know people who seemed totally fit who had a heart attack on the running trail, or the bike trail, so at least some people are in fact at risk even with none of the accepted risk factors.

So, 11.6% of baby aspirin takers shouldn't be taking aspirin.  But, when risk estimation is as imprecise as it is, and as hard to understand, this seems like a number that we should be taking with a grain of salt, if not a baby aspirin.  Well, except that salt is a risk factor for hypertension which is a risk factor for heart disease....or is it?


------------------
*Or something like that.  It turns out that the Hira paper cited a 2007 paper, which cited a 2006 paper, which cited the Behavioral Risk Factor Surveillance System 2003 estimate of 36% of the American population taking a baby aspirin a day.  But this is a 12-year-old figure, and I couldn't find anything more recent.

Thursday, January 22, 2015

Your money at work...er, waste: the million genomes project

Bulletin from the Boondoggle Department

We read that, in desperate need of a huge new mega-project to lock up even more NIH funds before the Republicans (or other research projects that are actually focused on a real problem) take them away, or before individual investigators who actually have some scientific ideas to test can get at them, Francis Collins has apparently persuaded someone who's not paying attention to fund the genome sequencing of a million people!  Well, why not?  First we had the (one) human genome project.  Then, after a couple of iterations, the 1000 genomes project, then the hundred-thousand genomes 'project'.  So, what next?  Can't just go up by dribs and drabs, can we?  This is America, after all!  So let's open the bank for a cool million.  Dr Collins has, apparently, never met a genome he didn't like or want to peer into.  It's not lascivious exactly, but the emotion that is felt must be somewhat similar.

We now know enough to know just what we're (not) getting from all of this sequencing, but what we are getting (or at least some people are getting) is a lot of funds sequestered for a few in-groups or, more dispassionately perhaps, for a belief system, the belief that constitutive genome sequence is the way to conquer every disease known to mankind.  Why, this is better than what you get by going to communion every week, because it'll make you immortal so you don't have to worry that perhaps there isn't any heaven to go to after all.

Anyway, why not?  The genomes are there, their bearers will agree, and they've got the blood to give for the cause.  Big cheers from the huge labs, the equipment manufacturers, and those eyeing the Europeans and the Chinese to make sure we don't fall behind anyone (and knowing they're eyeing us for the very same reason).  And this is also good for the million-author papers that are sure to come.  And that's good for the journals, because they can fill many pages with author lists, rather than substance.

Of course, we're just being snide (though, being retired, not jealous!).  But whether in fact this is good science or just ideology and momentum at work is debatable but won't be debated in our jealous me-too or me-first environment.

Is there any slowing down the largely pointless clamor for more......?

We've written enough over the past few years not to have to repeat it here, and we are by no means the only ones to have seen through the curtain and identified who the Wiz really is.  If this latest stunt doesn't look like a masterful, professionally skilled boondoggle to you, then you're seeing something very different from what we see.  One of us needs to get his glasses cleaned.  But for us it's moot, of course, since we don't control any of the funds.

Wednesday, January 21, 2015

Dragonfly the hunter

For vertebrates and invertebrates alike, hunting is a complex behavior.  Even if it seems to involve just a simple flick of the tongue, the hunter must first note the presence of its prey, and then successfully capture it, even when the prey makes unpredictable moves. Vertebrates hunt by predicting and planning, relying on what philosophers of mind call 'internal models' that allow them to anticipate the movement of their prey and respond accordingly, but whether invertebrates do the same has not been known.  The typical human-centered reflex is to dismiss insects as mere genetic robots, mechanically linking sensory input to automatic, hard-wired action.

But that may be far too egocentric, because a new paper in the January 15 issue of Nature ("Internal models direct dragonfly interception steering," Mischiati et al.) describes the hunting behavior of dragonflies, and suggests that dragonflies have internal models as well.
Prediction and planning, essential to the high-performance control of behaviour, require internal models. Decades of work in humans and non-human primates have provided evidence for three types of internal models that are fundamental to sensorimotor control: physical models to predict properties of the world; inverse models to generate the motor commands needed to attain desired sensory states; and forward models to predict the sensory consequences of self-movement
Dragonflies generally don't hunt indoors, so Mischiati et al. decked out a laboratory to look like familiar hunting grounds, brought some dragonfly fodder indoors, and videotaped and otherwise tracked the dragonflies in pursuit of their next meals, to determine what they were looking at and how their bodies moved as they closed in on their prey.  These measurements suggested that the dragonflies' heads were moving in sync with their prey, meaning that they were anticipating rather than reacting to the prey's flight.

Anisoptera (Dragonfly), Pachydiplax longipennis (Blue Dasher), female, photographed in the Town of Skaneateles, Onondaga County, New York. Creative Commons

And this in turn suggests that, like vertebrates, dragonflies have internal models that facilitate their hunting. Rather than dashing after insects after they've already moved, dragonflies are able to predict their movements, and successfully capture their prey 90-95% of the time.  Compared with, say, echolocating bats, this is a remarkable success rate -- e.g., estimates of the success rate of Eptesicus nilssonii, a Eurasian bat, range from 36% for moths to 100% for the slow-moving dung beetle (Rydell, 1992). And it's an even more remarkable success rate compared with Pennsylvania deer hunters -- for every 3 or 4 hunting licenses sold, 1 deer was killed in 2012-13, which means that if, like dragonflies or bats, people had to rely on venison for their survival, they'd be in deep trouble.



But apparently humans, bats, and dragonflies use essentially the same kind of internal model to hunt, a model that allows them to anticipate the future and take action accordingly.  More specifically, the model is a 'forward model', which has been thought to be the foundation of cognition in vertebrates, and is at least the basis of motor control (as described here and here).  You can dismissively call it just 'computing' or you can acknowledge it as 'intelligent', but it is clearly more than simple hard-wired reflex: it involves judgment.
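For the curious, here's a toy sketch in Python of what a 'forward model' buys a pursuer.  This is my own cartoon, not the model in the Mischiati et al. paper: the prey flies in a straight line, and the pursuer either chases the prey's current position (reactive) or aims at where a constant-velocity extrapolation says the prey will be (predictive).

```python
import numpy as np

def simulate(predictive, pursuer_speed=1.2, steps=600, dt=0.1):
    """Toy chase: the prey flies in a straight line at constant velocity; the
    pursuer either heads at the prey's current position (reactive) or at a
    predicted interception point (a crude 'forward model').  Returns the step
    at which capture occurs, or None if it never does."""
    prey = np.array([10.0, 0.0])
    prey_v = np.array([0.0, 1.0])       # prey speed 1.0, straight line
    pursuer = np.array([0.0, 0.0])
    for t in range(steps):
        if predictive:
            # Lead the target: aim where the prey will be after roughly the
            # time it takes to cover the current separation at pursuer_speed.
            time_to_reach = np.linalg.norm(prey - pursuer) / pursuer_speed
            target = prey + prey_v * time_to_reach
        else:
            target = prey               # chase the prey's current position
        heading = target - pursuer
        pursuer += pursuer_speed * dt * heading / np.linalg.norm(heading)
        prey += prey_v * dt
        if np.linalg.norm(prey - pursuer) < 0.2:
            return t
    return None

print("reactive capture step:  ", simulate(predictive=False))
print("predictive capture step:", simulate(predictive=True))
```

With the same speed advantage, the predictive pursuer intercepts much sooner; that, in cartoon form, is the difference between reacting to where the prey was and planning for where it will be.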

This is interesting and relevant, because if all that's required is the ability to predict and plan accordingly, why is there so much variation in the success rate of the hunt, even within a given species?  Clearly other factors and abilities are required -- other aspects of the nervous system, for example, or speed relative to prey, and the population density of predator and prey.  Indeed, insects would be expected to vary in their 'intelligence' the way people do, with most of them nevertheless able to succeed.

It seems that the study of insect behavior is building a more and more complex model of how insects do what they do.  The view of the insect brain is broadening into one that allows for much more complexity than robotic hard-wired behavior, or simple motor responses to sensory input.  A few months ago, we blogged about bee intelligence, writing about a PNAS paper that described how bees find their way home, plausibly by using a cognitive map.

The author of a recent paper in Trends in Neurosciences ("Cognition with few neurons: higher-order learning in insects," Martin Giurfa, 2013) speculated about unexpected insect cognitive abilities, welcoming an approach to understanding plastic insect behavior that allows for the possibility of complex, sophisticated learning rather than mere associative learning.  But Giurfa cautions that there are many reasons why we don't yet understand insect behavior, including our tendency to anthropomorphize -- using words for insect behavior derived from what we know about human abilities that, when applied to insects, imply more complexity than is warranted -- or to interpret experimental results as though they represent all that insects can do, rather than all that they were asked to do in the study.

On the other hand, many of the genes insects use for their sensory and neural functions are evolutionarily related to the genes mammals, including humans, use. So we likely share many similar genetically based mechanisms.

From the outside of this field looking in, it seems as though it's early days in understanding invertebrate brains.  And it seems to me that this is largely because observational studies are difficult to do on insects, must be interpreted because insects can't talk, and our interpretations are necessarily built on our assumptions about insect behavior, which in turn seem to follow trends in what people are currently thinking about cognition.  Until recently, researchers have assumed that insects, with far fewer neurons than we have, are pretty dumb.  The dragonfly hunter's success rate alone should be humbling enough to challenge this assumption.

In this sense, it's wrong to think simply that size matters.  Maybe it's organization that matters more.

Monday, January 19, 2015

We can see the beast....but it's been us!

The unfathomable horrors of what the 'Islamists' are doing these days can hardly be exaggerated.  It is completely legitimate, from the usual mainstream perspective at least, to denigrate the perpetrators in the clearest possible way, as simply absolute evil.  But a deeper understanding raises sobering questions.

It's 'us' pointing at 'them' at the moment, and some aspects of what's going on reflect religious beliefs: Islam vs Christianity, Judaism, or the secular western 'faith'.  If we could really believe that we were fundamentally better than they are, we could feel justified in denigrating their wholly misguided beliefs, and try to persuade them to come over to our True beliefs about morally, or even theologically, acceptable behavior.

Unfortunately, the truth is not so simple.  Nor is it about what 'God' wants.  The scientific atheists (Marxists) slaughtered their dissenters or sent them to freeze in labor camps by the multiple millions.  It was the nominally Christian (and even Socialist) Nazis who gassed their targets by the millions.  And guess who's bombing schools in Palestine these days?

Can we in the US feel superior?  Well, we have the highest per capita jailed population, and what about slavery and structural racism?  Well, what about the Asians?  Let's see: the Rape of Nanking, Mao's Cultural Revolution, the rapacious Huns.....

Charlie Hebdo is just a current example that draws sympathy, enrages, and makes one wonder about humans.  Haven't we learned?  I'd turn it around and ask: has anything even really changed?

Christians have made each other victims, of course.  Read John Foxe's Book of Martyrs, from England in the 1500s (or read about the better-known Inquisition).  But humans are equal opportunity slaughterers.  Think of the Crusades and the back-and-forth Islamic-Christian marauding episodes.  Or the Church's early systematic 'caretaking' of the Native Americans almost from the day Columbus first got his sneakers wet in the New World, not to mention its finding justification for slavery (an idea going back to those wonderful classical Greeks, and of course earlier in history).  Well, you know the story.

Depiction of Spanish atrocities committed in the conquest of Cuba, in Bartolomé de Las Casas's "Brevisima relación de la destrucción de las Indias", 1552.  The rendering is by the Flemish Protestant artist Theodor de Bry.  Public Domain.

But this post was triggered not just by the smoking headlines of the day, but because I was reading about that often idealized gentle, meditative Marcus Aurelius, the Roman Emperor in the second century AD.  In one instance, some--guess who?--Christians had been captured by the Romans and were being tortured: if they didn't renounce their faith, they were beheaded (sound familiar?) or fed to the animals in a colosseum.  And this was unrelated to the routine slavery of the time. Hmmm...I'd have to think about whether anyone could conceive of a reason that, say, lynching was better than beheading.

It is disheartening, even in our rightful outrage at the daily news from the black-flag front, to see that contemporary horrors are not just awful, they're not even new!  And, indeed, part of our own Western heritage.

Is there any science here?  If not, why not?
We try to run an interesting, variable blog, mainly about science and also its role in society.  So the horrors on the Daily Blat are not as irrelevant as they might seem:  If we give so much credence, and resources, to science, supposedly to make life better, less stressful, healthier and longer, why haven't we moved off the dime in so many of these fundamental areas that one could call simple decency--areas that don't even need much scientific investment to document?

Physics, chemistry and math are the queens of science.  Biology may be catching up, but today that seems to be mainly to the extent that we are applying molecular reductionism (explaining everything in terms of DNA, etc.).  That may be physics worship or it may be good; time will tell, though of course applied biology can claim many major successes.  The reductionism of these fields gives them a kind of objective, or formalistic, rigor.  Controlled samples and studies, with powerful or even precise instrumentation, make it possible to measure and evaluate data, and to form credible, testable theory about the material world.

But a lot of important things in life seem so indirect, relative to molecules, that one would think there could also be, at least in principle, comparably effective social and behavioral sciences that did more than lust after expensive, flashy reductionist equipment (DNA sequencing, fMRI imaging, super-computing, and the like).  Imaging and other technologies certainly have made much of the physical sciences possible, by enabling us to 'see' things our organic powers -- our eyes, nose, ears, and so on -- could not detect.  But the social sciences?  How effective or relevant is that lust to the problems being addressed?

The cycling and recycling of social science problems seems striking.  We have plentiful explanations for things behavioral and cultural, and many of them sound so plausible.  We have formal theories structured as if they were like physics and chemistry: Marxism and related purportedly materialist theories of economics, cultural evolution, and behavior, and 'theories' of education, which are legion even as actual results have been sliding for decades.  We have libraries full of less quantitatively or testably rigorous, more word-waving 'theories' by psychologists, anthropologists, sociologists, economists and the like.  But the flow of history and, one might say, its repeated disasters show, to me, that we don't in fact yet have anything very rigorous, despite a legacy going back to Plato and the Greek philosophers.

We spend a lot of money on the behavioral and social sciences, with 'success' ranging from very good for very focal types of traits, to none at all when it comes to the major sociocultural phenomena like war, equity, and many others.  We have journal after journal, shelves full of books of social 'theory', including some (going back at least to Herbert Spencer) that purport to tie physical theory to biology to society, and Marx and Darwin are often invoked, along with ideas like the second law of thermodynamics and so on.  Marx wanted a social theory as rigorous as physics, and materialist, too, but one in which there would be an inevitable, equitable end to the process.  Spencer had an end in mind, too, but one with a stable inequality of elites and the rest.  Not exactly compatible!

And this doesn't include social theories derived from this or that world religion.  Likewise, of course, we go through psychological and economic theories as fast as our cats go through kibbles, and we've got rather little to show for it that could seriously claim respect as science in the sense of real understanding of the phenomena.  When everyone needs a therapist, and therapists are life-long commitments, something's missing.





Karl Marx and Herbert Spencer, condemned to face each other for eternity at Highgate Cemetery in London (photos: A Buchanan)

Either that, or these higher-level organized traits simply don't follow 'laws' the way physical phenomena do.  But that seems implausible, since we're made of physical stuff, and such a view would take us back to the age-old mind-matter duality, and the endless debate about free will, consciousness, the soul, and all the rest back through the ages.  And while this itemization is limited to western culture, there isn't anything more clearly 'true' in the modern East, nor in cultures elsewhere or before ours.

Those with vested interests in their fMRI machines, super-computer modeling, or therapy practices will likely howl 'Foul!'  It's hard not to believe that in the past a far smaller percentage of people had behavioral problems needing chemical suppression or endless 'therapy' than today.  But if that's so, and things are indeed changing for the worse, this only further makes the point.  Why aren't mental health problems declining, after so much research?

You can defend the social sciences if you want, but in my personal view their System is, like the biomedical one, a large vested interest that keeps students off the street for a few years, provides comfy lives for professors, fodder for the news media and lots of jobs in the therapy and self-help industries (including think-tanks for economics and politics).....but has not turned daily life, even in the more privileged societies, into Nirvana.

One can say that those interests just like things to stay the way they are, or argue that while their particular perspective can't predict every specific any more than a physicist can predict every molecule's position, generic, say, Darwinian competition-is-everything views are simply true. Such assertions--axioms, really--are then just accepted and treated as if they're 'explanations'. If you take such a view, then we actually do understand everything!  But even if these axioms--Darwinian competition, e.g.--were true, they have become such platitudes that they haven't proven themselves in any serious sense, because if they had we would not have multiple competing views on the same subjects.  Despite debates on the margins, there is, after all, only one real chemistry, or physics, even if there are unsolved aspects of those fields.

The more serious point is this: we have institutionalized research in the 'soft' as well as the 'hard' sciences.  But a cold look at much of what we fund, year after year without demanding actual major results, suggests that the lack of real results is itself perhaps the more real, or at least more societally important, problem these fields should be addressing -- with the threat of less or no future funding if something profoundly better doesn't result.  In a sense, engineering works in the physical sciences because we can build bridges without knowing all the factors involved in precise detail.  But social engineering doesn't work that way.

After all, if we are going to spend lots of money on minorities (like professors, for example), we would do better to take an engineering approach to problems like 'orphan' (rare) diseases, which are focused and in a sense molecular, and where actual results could be hoped for.  The point would be to shift funds from wasteful, stodgy areas that aren't going very far.  Even if working on topics like orphan diseases is costly, there is no path to the required knowledge other than research with documentable results.  Shifting funding in that direction would temporarily upset various interests, but it would provide employment dollars to areas and people who could make a real difference, and hence would not undermine the economy overall.

At the same time, what would it take for there to be a better kind of social science, the product of which would make a difference to human society, so we no longer had to read about murders and beheadings?

Thursday, January 15, 2015

When the cat brings home a mouse

To our daughter's distress, she needs to find a new home for her beloved cats, so overnight we've gone from no cats to three cats, while we try to find them someplace new.  I haven't lived with cats since I was a kid really, because I was always allergic.  When I visited my daughter, I'd get hives if Max, her old black cat, sadly now gone, rubbed against my legs, and I always at least sneezed even when untouched by felines.  But now with three cats in the house, I'm allergy-free and Ken, never allergic to cats before, is starting to sneeze -- loudly.


Old Max

Casey


Oliver upside-down


But the mystery of the immune system is just one of the mysteries we're confronting -- or that's confronting us -- this week.  Here's another.  The other day my daughter brought over a large bag of dry cat food.  I put it in a closet, but the cats could smell it, and it drove them nuts, so I moved it into the garage.  A few days later I noticed that the cats were all making it clear that they really, really wanted to go into the garage, but we were discouraging that given the dangers of spending time in a location with vehicles that come and go unpredictably. I just assumed they could smell the kibbles, or were bored and wanted to explore new horizons.

But two nights ago I went out to the garage myself to get pellets for our pellet stove, and Mu managed to squeeze out ahead of me.  He made a mad dash for the kibbles.  Oliver was desperate to follow, but I squeezed out past him and quickly closed the door.  At which point, Mu came prancing back, squeaking.  Oh wait, he wasn't squeaking, it was the mouse he was carrying in his mouth that was squeaking!  He was now just as eager to get back in the house as he'd been to get out.  After a few minutes he realized that wasn't going to happen, so he dropped the now defunct mouse, and I let him back in.

Mu, the Hunter
So, that 'tear' in the kibbles bag that I'd noticed a few days before?  Clearly made by a gnawing mouse (mice?).  And the cats obviously had known about this long before I did.  But how did Mu know exactly where to make a beeline to catch the mouse?  He'd never seen where I put the bag, nor the mouse nibbling at it!  And I have to assume the other cats would have been equally able hunters had they been given the chance.

Amazing.  A whole undercurrent of sensory awareness and activity going on right at our feet, and we hadn't clued in on any of it.  I'd made unwarranted assumptions about holes in the bag, but the cats knew better.  Yes, I could have looked more closely at the kibble that had spilled out of the bag and noticed the mouse droppings.  But I didn't, because, well, because it didn't occur to me.

Though, now that I'm clued in, I believe we've got another mouse...


Mu and Ollie at the door to the garage yesterday afternoon


And?
I might even have been able to detect the mouse without seeing any of the evidence, just like the cats, if I'd tuned in more attentively, but I'm pretty sure it would have required better hearing.  In any case, other bits of evidence more suited to my perceptive powers were available, but I didn't notice.  I take this as yet another cautionary tale about how we know what we know, and I will claim it applies as well to politics, economics, psychology, forensics, religion, science, and more.  We build our case on preconceived notions, beliefs, assumptions, what we think is true, rarely re-evaluating those beliefs -- unless we're forced to, when, say, Helicobacter pylori is found to cause stomach ulcers, or our college roommate challenges our belief in God, or economic austerity does more harm than good.

As Holly often says, scientists shouldn't fall in love with their hypotheses.  Hypotheses are made to be tested: stretched, pounded, dropped on the floor and kicked, and afterwards, and continually, examined from every possible angle, not defended to the death.  But we often get too attached, and don't notice when the cat brings home a mouse.

An illustrative blog post in The Guardian by Alberto Nardelli and George Arnett last October tells a similar tale (h/t Amos Zeeberg on Twitter).  "Today’s key fact: you are probably wrong about almost everything."  Based on a survey by Ipsos Mori, Nardelli and Arnett report disconnects between what people around the world believe is true about the demographics of their country, and what's actually true.

So, people in the US overestimate the percentage of Muslims in the country, thinking it's 15% when it's actually 1%.  Japanese think the percentage of Muslims is 4% when it's actually 0.4%, and the French think it's 31% while it's actually 8%.

In the US, we think immigrants make up 32% of the population, but in fact they are 13%.  And so on.  We think we know, but very often we're wrong.  We're uninformed, ill-informed, or underinformed, even while we think we're perfectly well informed.

Source: The Guardian
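Just to line the gaps up, here's a quick sketch in Python using the figures quoted above, computing how many-fold each perception overshoots the reality.

```python
# Perceived vs actual percentages, as quoted above from the Ipsos Mori survey.
figures = {
    "US, Muslim population":     (15, 1),
    "Japan, Muslim population":  (4, 0.4),
    "France, Muslim population": (31, 8),
    "US, immigrant population":  (32, 13),
}

for item, (perceived, actual) in figures.items():
    print(f"{item}: perceived {perceived}% vs actual {actual}% "
          f"(about {perceived / actual:.1f}x too high)")
```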

The Guardian piece oozes political overtones, sure.  But I think it is still a good example of how we go about our days, thinking we're making informed decisions, based on facts, but it's not always so.  A minority of Americans accept evolution, despite the evidence; you made up your mind about whether Adnan is guilty or innocent if you listened to Serial, even though you weren't a witness to the murder, and the evidence is largely circumstantial.  And so on.  And this all has consequences.

In a sense, even if we are right about what we think, or its consequences, based on what we know, it's hard to know whether we are missing relevant points because we simply don't have the data, or haven't thought to evaluate it correctly -- as in my case with Mu and the mouse.  We have little choice but to act on what we know, but we do have a choice about how much confidence, or hubris, we attach to what we know, and whether to consider that what we know may not be all there is to know.

This is sobering when it comes to science, because the evidence for a novel or alternative interpretation might be there to be seen in our data, but our brains aren't making the connections, because we're not primed to or because we're unaware of aspects of the data.  We think we know what we're seeing, and it's hard to draw different conclusions.

Fortunately, occasionally an Einstein or a Darwin or some other grand synthesizer comes along and looks at the evidence in a different way, and pushes us forward.  Until then, it's science as usual: incremental gains based on accepted wisdom.  Indeed, even when such a great synthesizer provides us with dramatically better explanations of things, there is a tendency to assume that now, finally, we know what's up, and to place too much stock in the new theory......repeating the same cycle again.