Thursday, April 26, 2018

Gene mapping: More Monty Python than Monty Python

The gene for ...... (Monty Python)
Here's a link to a famous John Cleese (of Monty Python fame) sketch on gene mapping.  We ask you to decide whether this is funnier than the daily blast of GWAS reports and their proclaimed transformative findings: which is more Monty than the full Monty.

Why we keep spending money on papers that keep showing how Monty Pythonish genomewide association with complex traits is, is itself a valid question.  To say, with a straight face, that we now know of hundreds, if not thousands, of genomewide sites that affect some trait--in some particular sample of humans, with much or most of the estimated heritability still unaccounted for--without saying that enough is enough, is almost a comedy routine in itself.

We have absolutely no reason--or, at least, no need--to criticize anything about individual mapping papers.  Surely there are false findings, misused statistical tests, and so on, but that is part of normal life in science, because we don't know everything and have to make assumptions.  Some of the findings will be ephemeral, sample-specific, and so on; that doesn't make them wrong.  Instead, the critique should be aimed at authors who present such work with a straight face as if it is (1) important and (2) genuinely novel, while (3) never acknowledging that, with so many qualitatively similar results by now, the work itself shows why we should stop public funding of this sort of study.  We should move on to more cogent science that reflects, but doesn't just repeat, the discovery of genomic causal (or, at least, associational) complexity.

The bottom line
What these studies show, and there is no reason to challenge the results per se, is that complex traits are not to be explained by simple, much less merely additive, genetic models.  There is massive causal redundancy: similar traits arise from dissimilar genotypes.  But this shouldn't be a surprise.  Indeed, we can easily account for it in terms of evolutionary phenomena, related both to processes like gene duplication and to the survival protection that alternative pathways provide.
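
For readers who like to see the logic made concrete, here is a minimal toy sketch of our own, not from any GWAS analysis, of what such redundancy does to the genotype-phenotype map.  Assume, purely for illustration, that a trait is intact if any one of several alternative pathways is untouched by disruptive variants; the pathway count, locus count, and variant frequency below are made-up parameters.

```python
# Hypothetical toy model of causal redundancy (illustrative assumptions only):
# the trait is intact if ANY one of several alternative pathways works, and a
# pathway works only if none of its loci carries a disruptive variant.
import random

random.seed(1)
N_PATHWAYS, LOCI_PER_PATHWAY, FREQ, N_PEOPLE = 4, 5, 0.3, 10_000

def genotype():
    # one 0/1 per locus: 1 = disruptive variant present
    return tuple(tuple(int(random.random() < FREQ) for _ in range(LOCI_PER_PATHWAY))
                 for _ in range(N_PATHWAYS))

def trait_intact(g):
    # at least one pathway has no disrupted locus
    return any(sum(pathway) == 0 for pathway in g)

people = [genotype() for _ in range(N_PEOPLE)]
intact = [g for g in people if trait_intact(g)]
print(f"{len(intact)} of {N_PEOPLE} people have the intact trait,")
print(f"carried by {len(set(intact))} distinct genotypes")  # same trait, dissimilar genotypes

# The marginal, 'GWAS-style' view of a single locus: difference in trait
# frequency between non-carriers and carriers of the variant at that locus.
carriers = [trait_intact(g) for g in people if g[0][0] == 1]
others   = [trait_intact(g) for g in people if g[0][0] == 0]
print(f"marginal effect of one locus: {sum(others)/len(others) - sum(carriers)/len(carriers):.3f}")
```

The point is not the particular numbers, which depend entirely on the assumed parameters, but that thousands of distinct genotypes yield the same phenotype, while each locus viewed one at a time shows only a modest marginal effect.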

Even if each GWAS 'hit' is correct and not some sort of artifact, it is unclear what the message is.  To us, who have no vested interest in continuing, open-ended GWAS efforts with ever-larger samples, the bottom line is that this is not the way to understand biological causation.

We reach that view on genomic considerations alone, without even considering the environmental and somatic-mutation components of phenotype generation, though these are often obviously determinative (as secular trends in risk clearly show).  We reach this view without worrying about the likelihood that many or perhaps even most of these 'hits' are some sort of statistical, sampling, analytic, or other artifact, or are so indirectly related to the measured trait, or so environment-dependent, as to be virtually worthless in any practical sense.

What GWAS ignore
There are also three clear facts that are swept under the rug, or just ignored, in this sort of work.  One is somatic mutation, which is not detected in constitutive genomewide studies but could be very important (e.g., in cancer).  The second is that DNA is inert and does something only in interaction with other molecules, many of which relate to environmental and lifestyle exposures that candid investigators know are usually dreadfully inaccurately measured.  The third is that future mutations, not to mention future environments, are unpredictable, even in principle.  Yet the repeatedly stressed objective of GWAS is 'precision' predictive medicine.  It sounds like a noble objective, but it's not so noble given the known and knowable reasons these promises can't be met.

So, if biological causation is complex, as these studies and diverse other sorts of direct and indirect evidence clearly show, then why can't we pull the plug on these sorts of studies and instead invest in some other mode of thinking--some way to do focused studies where genetic causation is clear and real--rather than continuing to feed the welfare state of GWAS?

We're held back by inertia and the lack of better ideas, but another important, if not defining, constraint is that investigator careers depend on external funding, and that leads to safe, me-too proposals.  We should stop imitating Monty Python and recognize that, if the gene-causation question even makes sense, some new way of thinking about it is needed.

Wednesday, April 25, 2018

Improving access to healthcare can usually make malaria go away

Drug-resistant malaria has emerged in Southeast Asia several times in history and subsequently spread globally. When there are no other antimalarials to use, this has led to public health and humanitarian disasters, especially in high-transmission settings (parts of sub-Saharan Africa).

Currently there is a single effective antimalarial left: artemisinin. But malaria parasites in Southeast Asia are already developing resistance to it, leading many in the malaria research community and in public health to worry that we will soon be left with untreatable malaria.

One proposed solution to this problem has been to attempt to eliminate the parasite from regions where drug resistance consistently emerges. The proposed strategy uses a combination of increasing access to health care (so that ill people can be quickly diagnosed and treated, thereby reducing transmission) and targeting asymptomatic reservoirs by asking everyone who lives in a community with a large reservoir to take antimalarials, regardless of whether or not they feel ill (mass drug administration).

In Southeast Asia, malaria largely persists in remote, difficult-to-access areas. The parasite thrives in conflict zones and on the fringes of society. These are areas that frequently don’t have strong healthcare or surveillance systems, and some have even argued that control or elimination would be impossible there because of these difficulties.

Today, on World Malaria Day, my colleagues and I published results from three years of an elimination campaign in Karen State, Myanmar.  The job is not complete. But this work has shown that it is feasible to set up a health care system, even in remote and difficult-to-access areas, and that most villages can achieve elimination through beefing up the health care system alone. In places where a high proportion of people carry asymptomatic malaria, access to health care alone doesn’t suffice and malaria persists for longer. With high participation in mass drug administration, which requires a great deal of community engagement, these communities are able to quickly eliminate the parasites as well. We are hopeful that similar programs will be expanded throughout Southeast Asia, regardless of the geographic and political characteristics of the regions, so that elimination can be achieved and sustained.

Malaria (P. falciparum) incidence in the target area over three years. The project expanded over the three years, and overall incidence has decreased.

Link to the main paper:
Effect of generalised access to early diagnosis and treatment and targeted mass drug administration on Plasmodium falciparum malaria in Eastern Myanmar: an observational study of a regional elimination programme

Link to a detailed description of the setup of the project:

Tuesday, April 24, 2018

Throw 'em down the stairs! (making grant review fair)

When I was active in the grant process, including my duty to serve as a panelist for NIH and NSF, I realized that the work overload, and the somewhat arbitrary sense that if any reviewer spoke up against a proposal it got conveniently rejected without much if any discussion, meant that reviews were usually scanty at best.  Applications are assigned to several reviewers to evaluate thoroughly, so the entire panel doesn't have to read every proposal in depth, yet each member must vote on each proposal.  Even with this underwhelming level of consideration, the panel members simply cannot carefully evaluate the boxes full of applications for which they are responsible.  In my experience, once we got down to business, there would be some discussion of those proposals not immediately NRF'ed (not recommended for funding); but even then, with still tens of applications to evaluate, most panelists hadn't read the proposal under discussion, and it seemed that even some of the secondary or tertiary assignees had only scanned it.  The rest of the panel usually sat quietly and then voted as the assigned readers, who had purportedly read the proposal thoroughly, recommended.  Obviously (sssh!), much of the final ranking rested on superficial consideration.

When a panel has a heavy overload of proposals it is hard for things to be otherwise, and one at least hoped that the worst proposals got rejected, that those with fixable issues were given some thoughtful suggestions about improvement and resubmission, and that the best ones were funded.

But there was always the nagging question as to how true that hopeful view was.  We used to joke that a better, fairer reviewing system was to put the proposals to the Stairway Test: throw them down the stairs and the ones that landed closest to the bottom would be funded!

Well, that was a joke about the apparent fickleness (or, shall we say randomness?) of the funding process, especially when busy people had to read and evaluate far, far too many proposals in our heavily overloaded begging system, in which not just science but careers depend on the one thing that counts: bringing in the bucks.
The Stairway Test (technical criteria)

Or was it a joke?  A recent analysis in PNAS showed that randomness is perhaps the best way to characterize the reviewing process.  One can hope that the really worst proposals are rejected, but as for the rest.....the evidence suggests that the Stairway Test would be much fairer.
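
For the skeptical, here is a hedged little simulation of our own, not the PNAS analysis itself, and with every number below a made-up assumption, showing why noisy scores plus a sharp payline behave like a lottery: give each proposal a 'true' quality, let reviewers score it with some noise, and see how much two independent panels, or a panel and the truth, agree about which proposals to fund.

```python
# Toy simulation (illustrative assumptions only): noisy review scores vs. a payline.
import random
import statistics

random.seed(2018)
N_PROPOSALS, N_REVIEWERS, PAYLINE, NOISE_SD = 200, 3, 20, 1.0

quality = [random.gauss(0, 1) for _ in range(N_PROPOSALS)]  # hypothetical 'true' merit

def funded_by_one_panel():
    # each reviewer sees true quality plus independent noise; the panel averages the scores
    scores = [statistics.mean(q + random.gauss(0, NOISE_SD) for _ in range(N_REVIEWERS))
              for q in quality]
    ranked = sorted(range(N_PROPOSALS), key=lambda i: scores[i], reverse=True)
    return set(ranked[:PAYLINE])

panel_a = funded_by_one_panel()
panel_b = funded_by_one_panel()  # a second, equally conscientious, independent panel
truly_best = set(sorted(range(N_PROPOSALS), key=lambda i: quality[i], reverse=True)[:PAYLINE])

print("proposals funded by both independent panels:", len(panel_a & panel_b), "of", PAYLINE)
print("funded proposals among the truly best:", len(panel_a & truly_best), "of", PAYLINE)
```

With per-reviewer noise comparable to the real quality differences among competitive proposals, the two panels disagree on a substantial share of the awards, and the disagreement grows as the noise grows or the payline tightens; for that contested share, the outcome is, in effect, the Stairway Test.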

I'm serious!  Many faculty members' careers literally depend on the grant system.  Those whose grants don't get funded are judged to be doing less worthy work, and loss of a job can be the direct consequence, since many positions, especially in biomedical schools, depend on bringing in money (in my opinion, a deep sin, but in the context of our venal science-support system, one that can't be avoided).

The Stairway Test would allow those who did not get funding to say, quite correctly, that their 'failure' was not one of quality but of luck.  Deans and Chairs would, properly, be less able to terminate jobs because of failure to secure funding, if they could not claim that the victim did inferior work.  The PNAS paper shows that the real review system is in fact not different from the Stairway Test.

So let's be fair to scientists, and to the public, and acknowledge honestly the way the system works.  Either reform the system from the ground up, to make it work honorably and in the best interest of science, or adopt a formal recognition of its broken nature: the Stairway Test.