Friday, May 5, 2017

Eugenics, such an old-fashioned idea

It's the age of genetics.  Billions of dollars have been spent on identifying genes for important traits like diseases, fun traits like ear wax type and hair color, politically-charged traits like who we vote for and whether we're criminals, and much more.  "Precision Medicine," the idea that with bigger and better genetic data we'll be able to predict future diseases and then, presumably, prevent them, is au courant, and well-funded.

The assumption that genes determine not only our disease futures but our personalities, our preferences, and our behavior appeals to a lot of people; some of us are naturally good and some of us are naturally bad.  And this has led many of us to worry about the return of eugenics, the Darwinian idea that populations can be improved by controlled reproduction.  That is, that we control their reproduction.  Those of us who are naturally bad just shouldn't be allowed to reproduce.  This was an idea that early 20th century America translated into the forced sterilization of the intellectually or socially inferior other, and that the Nazis translated (in many ways copying our lead) into mass murder of anyone they didn't like.

It turns out, though, that the worry about eugenics is now out-of-date.  It's too finely-honed a tool. The Republican majority in the US House of Representatives, with the enthusiastic support of our 45th president, has just passed a bill to repeal and replace the Affordable Care Act (ACA), President Obama's signature program to expand affordable access to medical care to 45 million people who had no health insurance, and to those for whom it was prohibitively expensive.

One of the most humane, important, and best-liked provisions of the ACA was that it did not allow insurers to discriminate against people who had "pre-existing conditions", illnesses that preceded their insurance coverage.  Insurance companies don't like to have to cover sick people because they cost money.  Fair enough, I suppose, given that insurers are businesses, not philanthropies, and have to make a profit (unlike a civilized country's national healthcare system, which is by and for the people rather than the plutocrats).  But this is how all insurance works, car, home, flood and otherwise -- we all pay in, some of us cost more than we pay in, and some of us cost less.  If only sick people, or bad drivers, or people in hurricane zones bought insurance, insurance companies would all quickly be out of business, which of course is why we all are required to buy car and home insurance.
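The pooling arithmetic here can be sketched in a few lines.  This is a toy illustration only, with invented numbers (the costs and the 10% sick fraction are assumptions, not figures from any actuarial source), but it shows why a pool of everyone yields an affordable premium while a pool of only the sick does not:

```python
# Toy illustration of risk pooling vs. adverse selection (all numbers invented).

HEALTHY_COST = 1_000   # assumed expected annual claims for a healthy enrollee
SICK_COST = 20_000     # assumed expected annual claims for a sick enrollee
SICK_FRACTION = 0.10   # assume 10% of the population is sick

def break_even_premium(sick_fraction):
    """Premium at which total premiums exactly cover total expected claims."""
    return sick_fraction * SICK_COST + (1 - sick_fraction) * HEALTHY_COST

# Everyone in the pool (a mandate): costs are spread across all enrollees.
print(break_even_premium(SICK_FRACTION))   # 2900.0

# Only sick people buy in (no mandate): the premium must cover their full cost.
print(break_even_premium(1.0))             # 20000.0
```

The second number is the "death spiral" in miniature: as healthy people drop out, the break-even premium climbs toward the cost of insuring only the sick.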

But it turns out that there are good moral reasons to discriminate against people with pre-existing conditions -- according to a member of the Republican, white, male "Freedom Caucus", the extreme, and let's be honest, extremely ill-informed right-wingers in the House, pre-existing conditions don't happen to people who live good lives.  (Funny how their new list of pre-existing conditions includes pregnancy, rape, sexual harassment, breast cancer, among many other things, but not erectile dysfunction or prostate cancer. Nice discussion of this topic here.)

To ensure that covering actual sick people was going to be affordable, the ACA mandated that everyone have health insurance.  The political right never liked this provision of the law -- depending on your reading, this was due to the libertarian view that governments shouldn't be able to require that we do anything, or because they didn't want their money covering other people, or perhaps a toxic mix of both -- and they've been fighting it ever since.  It's long been clear that they have no idea why a mandate was essential.  Because, who knew that health care was so complicated?

As is well known, the Republicans voted at least a zillion times to repeal the ACA while Obama was president.  Finally, yesterday, under the caring leadership of our current president, the Republican-led House passed a repeal-and-replace bill that would essentially eliminate protection for people with pre-existing conditions, as well as the requirement that healthy people purchase insurance.  And, in an ugly and cynical move that makes abundantly clear the racist and other lies behind this bill, they voted to exempt themselves from its new constraints (of course, because they're the good guys!).

This bill is bad medicine.  But that's irrelevant to Republicans and their supporters.  It's not meant to be much more than a tax cut for the rich (protecting wealth being the only core tenet of that party), and a thumb in the eye of anyone who benefitted from the ACA: the poor, the sick, and the Democrats.  It will definitely be a money saver, when 24 million people lose coverage, and then die of things that those with money don't have to die of, as Jimmy Kimmel noted in his emotional defense of insurance for all.

And this is what brings us back to eugenics.  Who needs the kind of very expensive, targeted precision promised by knowledge of genes to cherrypick those who should live and those who should die?  Let's just take away access to medical care from all of Them.  And make our country great for the oligarchs again.

Thursday, April 27, 2017

The Law of No Restraint

There's a new law of science reporting or, perhaps more accurately put, of the science jungle.  The law is to feed any story, no matter how fantastic, to science journalists (including your university's PR spinners), and they will pick up whatever can be spun into a Big Story, and feed it to the eager mainstream media.  Caveats may appear somewhere in the stories, but not the headlines so that, however weak or tentative or incredible, the story gets its exposure anyway.  Then on to tomorrow's over-sell.

One rationale for this is that unexpected findings--typically presented breathlessly as 'discoveries'--sell: they rate the headline. The caveats and doubts that might un-headline the story may be reported as well, but often buried in minimal terms late in the report.  Even if the report balances skeptics and claimants, simply publishing the story is enough to give at least some credence to the discovery.

The science journalism industry is heavily inflated in our commercial, 24/7 news environment. It would be better for science, if not for sales, if all these hyped papers, rather than being publicized at the time the paper is published, first appeared in musty journals for specialists to argue over, and in the pop-sci news only after some mature judgments are made about them.  Of course, that's not good for commercial or academic business.

We have just seen a piece reporting that humans were in California something like 135,000 years ago, rather than the well-established continental dates of about 12,000 years ago.  The report, which I won't grace by citing here, and you've probably seen it anyway, then went on to speculate about what 'species' of our ancestors these early guys might have been.

Why is this so questionable?  If it were a finding on its own, it might seem credible, but given the plethora of skeletal and cultural archeological findings, up and down the Americas, such an ancient habitation seems a stretch.  There is no comparable trail of earlier settlements in northeast Asia or Alaska that might suggest it, and there are lots of animal and human archeological remains--all basically consistent with each other, so why has no earlier finding yet been made?  It is of course possible that this is the first and is a correct one, but it is far too soon for this to merit a headline story, even with caveats.

Another piece we saw today reported that a new analysis casts doubt on whether diets high in saturated fat are bad for you.  This was a meta-analysis of various other studies that have been done, and got some headline treatment because the authors report that, contrary to many findings over many years, saturated fats don't clog arteries. Instead, they say, coronary heart disease is a chronic inflammatory condition.  Naturally, the study's basic data are being challenged, as reflected in this story's discussion, by critiques of its data and method.  These get into details we're not qualified to judge, and we can't comment on the relative merits of the case.
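For readers unfamiliar with what a meta-analysis actually computes, here is a minimal sketch of the classic fixed-effect, inverse-variance approach.  The numbers are entirely invented, and this is not a reconstruction of the paper's method (real meta-analyses also model between-study heterogeneity); it only shows how several studies' effect estimates get pooled into one number:

```python
# Toy fixed-effect meta-analysis: inverse-variance weighted pooling.
# All effect sizes and standard errors below are invented for illustration.

def pooled_effect(effects, std_errors):
    """Inverse-variance weighted average of per-study effect estimates."""
    weights = [1 / se**2 for se in std_errors]          # precise studies weigh more
    weighted_sum = sum(w * e for w, e in zip(weights, effects))
    return weighted_sum / sum(weights)

# Three hypothetical studies of a diet-disease association:
estimate = pooled_effect([0.20, 0.05, 0.12], [0.10, 0.05, 0.08])
print(round(estimate, 4))
```

The pooled estimate is dominated by the most precise (smallest standard error) study, which is one reason adding or dropping a single large study can flip a meta-analysis's headline conclusion.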

However, one thing we can note is that with respect to coronary heart disease, study after study has reported more or less the same, or at least consistent findings about the correlation between saturated fats and risk. Still, despite so very much careful science, including physiological studies as well as statistical analysis of population samples, can we still apparently not be sure about a dietary component that we've been told for years should play a much reduced role in what we eat?  How on earth could we possibly still not know about saturated fat diets and disease risk?

If this very basic issue is unresolved after so long, and the story is similar for risk factors for many complex diseases, then what is all this promise of 'precise' medicine all about?  Causal explanations are still fundamentally unclear for many cancers, dementias, psychiatric disorders, heart disease, and so on.  So why isn't the most serious conclusion that our methods and approaches themselves are for some reason simply not adequate to answer such seemingly simple questions as 'is saturated fat bad for you?'  Were the plethora of previous studies all flawed in some way?  Is the current study?  Does the publicizing of the studies themselves change behaviors in ways that affect future studies?

There may be no better explanation than that diets and physiology are hard to measure and are complex, and that no simple answer is true.  We may all differ for genetic and other reasons to such an extent that population averages are untrustworthy, or our habits may change enough that studies don't get consistent answers.  Or asking about one such risk factor when diets and lifestyles are complex is a science modus operandi that developed for studying simpler things (like exposure to toxins or bacteria, the basis of classical epidemiology), and we simply need a better gestalt from which to work.

Clearly a contributory sociological factor is that the science industry has simply been cruising down the same rails, despite constant popping of promise bubbles, for decades now.  It's always more money for more and bigger studies.  It's rarely let's stop and take a deep breath and think of some better way to understand (in this case) dietary relationships to physical traits.  In times past, at least, most stories like the ancient-Californians claim didn't get ink so widely and rapidly.  But if I'm running a journal, or a media network, or am a journalist needing to earn my living, and I need to turn a buck, naturally I need to write about things that aren't yet understood.

Unfortunately, as we've noted before, the science industry is a hungry beast that needs its continual feeding, and (like our 3 cats) always demands more, more, and more.  There are ways we could reform things, at least up to a point.  We'll never end the fact that some scientists will claim almost anything to get attention, and we'll always be faced with data that suggest one thing that doesn't turn out that way.  But we should be able to temper the level of BS and get back more to sober science rather than sausage factory 'productivity'.  And educate the public that some questions can't be answered the way we'd like, or aren't being asked in the right way.  But that is something science might address effectively, if it weren't so rushed and pressured to 'produce'.

Thursday, April 20, 2017

Some genetic non-sense about nonsense genes

The April 12 issue of Nature has a research report and a main article about what is basically presented as the discovery that people typically carry doubly knocked-out genes, but show no effect.  The editorial (p 171) notes that the report (p 235) uses an inbred population to isolate double-knockout genes (that is, recessive homozygous null mutations) and look at their effects.  The population sampled, from Pakistan, has high levels of consanguineous marriages.  The criteria for calling a mutation a knockout were based on the protein-coding sequence.

We have no reason to question the technical accuracy of the papers, nor their relevance to biomedical and other genetics, but there are reasons to assert that this is nothing newly discovered, and that the story misses the really central point that should, I think, be undermining the expensive Big Data/GWAS approach to biological causation.

First, for some years now there have been reports of samples of individual humans (perhaps also of yeast, but I can't recall specifically) in which both copies of a gene appear to be inactivated.  The criteria for saying so are generally indirect, based on nonsense, frameshift, or splice-site mutations in the protein code.  That is, there are other aspects of coding regions that may be relevant to whether this is a truly thorough search to see that whatever is coded really is non-functional.  The authors mention some of these.  But, basically, costly as it is, this is science on the cheap because it clearly only addresses some aspects of gene functionality.  It would obviously be almost impossible to show either that the gene was never expressed or never worked. For our purposes here, we need not question the finding itself.  The fact that this is not a first discovery does raise the question why a journal like Nature is so desperate for Dramatic Finding stories, since this one really should be instead a report in one of many specialty human genetics journals.

Secondly, there are causes other than coding mutations for gene inactivation.  They have to do with regulatory sequences, and inactivating mutations in that part of a gene's functional structure are much more difficult, if not impossible, to detect with any completeness.  A gene's coding sequence itself may seem fine, but its regulatory sequences may simply not enable it to be expressed.  Gene regulation depends on epigenetic DNA modification, on multiple transcription factor binding sites, on the functional aspects of the many proteins required to activate a gene, and on other aspects of the local DNA environment (such as RNA editing or RNA interference).  The point here is that there are likely to be many other instances of people with complete or effectively complete double knockouts of genes.

Thirdly, the assertion that these double KOs have no effect depends on various assumptions.  Mainly, it assumes that the sampled individuals will not, in the future, experience the otherwise-expected phenotypic effects of their defunct genes.  Effects may depend on age, sex, and environmental effects rather than necessarily being a congenital yes/no functional effect.

Fourthly, there may be many coding mutations that make the protein non-functional, but these are ignored by this sort of study because they aren't clear knockout mutations, yet they are in whatever data are used for comparison of phenotypic outcomes.  There are post-translational modifications, RNA editing, RNA modification, and other aspects of a 'gene' that this approach is not picking up.

Fifthly, and by far most important, I think, is that this is the tip of the iceberg of redundancy in genetic functions.  In that sense, the current paper is a kind of factoid that reflects what GWAS has been showing in great, if implicit, detail for a long time: there is great complexity and redundancy in biological functions.  Individual mapped genes typically affect trait values or disease risks only slightly.  Different combinations of variants at tens, hundreds, or even thousands of genome sites can yield essentially the same phenotype (and here we ignore the environment which makes things even more causally blurred).
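The combinatorics behind "different combinations of variants can yield the same phenotype" are easy to make concrete.  In this deliberately cartoonish model (an assumption for illustration, not any real genetic architecture), a trait depends additively on 20 biallelic sites, each contributing 0 or 1 to the trait value; the number of distinct genotypes producing a given trait value is then just a binomial coefficient:

```python
# Toy additive model: 20 hypothetical sites, each contributing 0 or 1 to a trait.
# The count of distinct genotypes giving trait value k is C(20, k).
from math import comb

N_SITES = 20

for trait_value in (5, 10, 15):
    n_genotypes = comb(N_SITES, trait_value)
    print(f"trait value {trait_value}: {n_genotypes} distinct genotypes")
```

Even this tiny model gives 184,756 different genotypes for the middle trait value; real traits, with hundreds or thousands of contributing sites of unequal effect plus environment, are vastly more redundant, which is why knocking out any one gene so often changes nothing detectable.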

Sixthly, other samples and certainly other populations, as well as individuals within the Pakistani database, surely carry various aspects of redundant pathways, from plenty of them to none.  Indeed, the inbreeding that was used in this study obviously affects the rest of the genome, and there's no particular way to know in what way, or more importantly, in which individuals.  The authors found a number of basically trivial or no-effect results as it is, even after their hunt across the genome.  Whether some individuals had an attributable effect of a particular double knockout is problematic at best.  Every sample, even of the same population, and certainly of other populations, will have different background genotypes (homozygous or not), so this is largely a fishing expedition in a particular pond that cannot seriously be extrapolated to other samples.

Finally, this study cannot address the effect of somatic mutation on phenotypes and their risk of occurrence.  Who knows how many local tissues have experienced double-knockout mutations and produced (or not produced) some disease or other phenotype outcome.  Constitutive genome sequencing cannot detect this.  Surely we should know this very inconvenient fact by now!

Given the well-documented and pervasive biological redundancy, it is not any sort of surprise that some genes can be non-functional and the individual phenotypically within a viable, normal range. Not only is this not a surprise, especially by now in the history of genetics, but its most important implication is that our Big Data genetic reductionistic experiment has been very successful!  It has, or should have, shown us that we are not going to be getting our money's worth from that approach.  It will yield some predictions in the sense of retrospective data fitting to case-control or other GWAS-like samples, and it will be trumpeted as a Big Success, but such findings, even if wholly correct, cannot yield reliable true predictions of future risk.

Does environment, by any chance, affect the studied traits?  We have, in principle, no way to know what environmental exposures (or somatic mutations) will be like.  The by now very well documented leaf-litter of rare and/or small-effect variants plagues GWAS for practical statistical reasons (and is why usually only a fraction of heritability is accounted for).  Naturally, finding a single doubly inactivated gene may, but by no means need, yield reliable trait predictions.

By now, we know of many individual genes whose coded function is so proximate or central to some trait that mutations in such genes can have predictable effects.  This is the case with many of the classical 'Mendelian' disorders and traits that we've known for decades.  Molecular methods have admirably identified the gene and mutations in it whose effects are understandable in functional terms (for example, because the mutation destroys a key aspect of a coded protein's function).  Examples are Huntington's disease, PKU, cystic fibrosis, and many others.

However, these are at best the exceptions that lured us to think that even more complex, often late-onset traits would be mappable so that we could parlay massive investment in computerized data sets into solid predictions and identify the druggable 'genes-for' that Big Pharma could target.  This was predictably an illusion, as some of us were saying long ago and for the right reasons.  Everyone should know better now, and this paper just reinforces the point, to the extent that one can assert that it's the political economic aspects of science funding, science careers, and hungry publications, and not the science itself, that leads to the persistence of drives to continue or expand the same methods anyway.  Naturally (or should one say reflexively?), the authors advocate a huge Human Knockout Project to study every gene--today's reflex Big Data proposal.**

Instead, it's clearly time to recognize the relative futility of this, and change gears to more focused problems that might actually punch their weight in real genetic solutions!

** [NOTE added in a revision.  We should have a wealth of data by now, from many different inbred mouse and other animal strains, and from specific knockout experiments in such animals, to know that the findings of the Pakistani family paper are to be expected.  About 1/4 to 1/3 of knockout experiments in mice have no effect, or not the same effect as in humans, or a different effect (or none) in other inbred mouse strains.  How many times do we have to learn the same lesson?  Indeed, with existing genomewide sequence databases from many species, one can search for 2KO'ed genes.  We don't really need a new megaproject to have lots of comparable data.]

Wednesday, April 12, 2017

Reforming research funding and universities

Any aspect of society needs to be examined on a continual basis to see how it could be improved.  University research, such as that which depends on grants from the National Institutes of Health, is one area that needs reform. It has gradually become an enormous, money-directed, and largely self-serving industry, and its need for external grant funding turns science into a factory-like industry, which undermines what science should be about, advancing knowledge for the benefit of society.  

The Trump policy, if there is one, is unclear, as with much of what he says on the spur of the moment. He's threatened to reduce the NIH budget, but he's also said to favor an increase, so it's hard to know whether this represents whims du jour or policy.  But regardless of what comes from on high, it is clear to many of us with experience in the system that health and other science research has become very costly relative to its promise and too largely mechanical rather than inspired.

For these reasons, it is worth considering what reforms could be taken--knowing that changing the direction of a dependency behemoth like NIH research funding has to be slow because too many people's self-interests will be threatened--if we were to deliver in a more targeted and cost-efficient way on what researchers promise.  Here's a list of some changes that are long overdue.  In what follows, I have a few FYI asides for readers who are unfamiliar with the issues.

1.  Reduce grant overhead amounts
[FYI:  Federal grants come with direct and indirect costs.  Direct costs pay the research staff, the supplies and equipment, travel and collecting data and so on.  Indirect costs are worked out for each university, and are awarded on top of the direct costs--and given to the university administrators.  If I get $100,000 on a grant, my university will get $50,000 or more, sometimes even more than $100K.  Their claim to this money is that they have to provide the labs, libraries, electricity, water, administrative support and so on, for the project, and that without the project they'd not have these expenses.  Indeed, an indicator of the fat that is in overhead is that as an 'incentive' or 'reward', some overhead is returned as extra cash to the investigator who generated it.]
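The arithmetic of the aside above is simple but worth making explicit.  A sketch, with the negotiated rates as illustrative assumptions (actual rates vary by institution and are set in each university's federal rate agreement):

```python
# Rough sketch of how indirect costs (overhead) scale a grant award.
# Rates here are illustrative, not any particular university's negotiated rate.

def total_award(direct_costs, indirect_rate):
    """Direct costs plus overhead computed at the given rate on directs."""
    return direct_costs + direct_costs * indirect_rate

print(total_award(100_000, 0.50))  # 150000.0 -- $50K overhead on $100K direct
print(total_award(100_000, 1.05))  # 205000.0 -- overhead can exceed the directs
```

At a rate above 100%, as in the second case, the administration collects more from a grant than the researcher has to spend on the research itself.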

University administrations have notoriously been ballooning.  Administrators and their often fancy offices depend on individual grant overhead, which naturally puts intense pressure on faculty members to 'deliver'.  Educational institutions should be lean and efficient. Universities should pay for their own buildings and libraries and pare back bureaucracy. Some combination of state support, donations, and bloc grants could be developed to cover infrastructure, if not tied to individual projects or investigators' grants. 

2.  No faculty salaries on grants
[FYI:  Federal grants, from NIH at least, allow faculty investigators' salaries to be paid from grant funds.  That means that in many health-science universities, the university itself is paying only a fraction, often tiny and perhaps sometimes none, of their faculty's salaries.  Faculty without salary-paying grants will be paid some fraction of their purported salaries and often for a limited time only.  And salaries generate overhead, so they're now well paid: higher pay, higher overhead for administrators!  Duh, a no-brainer!]

Universities should pay their faculty's salaries from their own resources.  Originally, as I understand it, faculty investigators' salaries were paid on grants so that the university could hire temporary faculty to cover the PI's teaching and administrative obligations while s/he was doing the research.  Otherwise, if they're already paid to do research, what's the need?  Faculty salaries paid on grants should only be allowed to be used in this way, not just as a source of cash.  Faculty should not be paid on soft money, because the need to hustle one's salary steadily is an obvious corrupting force on scientific originality and creativity.

3.  Limit on how much external funding any faculty member or lab could have
There is far too much reward for empire-builders. Some do, or at least started out doing, really good work, but that's not always the case and diminishing returns for expanding cost is typical.  One consequence is that new faculty are getting reduced teaching and administrative duties so they can (must!) write grant applications. Research empires are typically too large to be effective and often have absentee PIs off hustling, and are under pressure to keep the factory running.  That understandably generates intense pressure to play it safe (though claiming to be innovative); but good science is not a predictable factory product. 

4.  A unified national health database
We need health care reform, and if we had a single national health database it would reduce medical costs and could be anonymized so research could be done, by any qualified person, without additional grants.  One can question the research value of such huge databases, as is true even of the current ad hoc database systems we pay for, but they would at least be cost-effective.

5. Temper the growth ethic 
We are over-producing PhDs, and this is largely to satisfy the game of the current faculty by which status is gained by large labs.  There are too many graduate students and post-docs for the long-term job market.  This is taking a heavy personal toll on aspiring scientists.  Meanwhile, there is inertia at the top, where we have been prevented from imposing mandatory retirement ages.  Amicably changing this system will be hard and will require creative thinking; but it won't be as cruel as the system we have now.

6. An end to deceptive publication characteristics  
We routinely see papers listing more authors than there are residents in the NY phone book.  This is pure careerism in our factory-production mode.  As once was the standard, every author should in principle be able to explain his/her paper on short notice.  I've heard 15 minutes. Those who helped on a paper such as by providing some DNA samples, should be acknowledged, but not listed as authors. Dividing papers into least-publishable-units isn't new, but with the proliferation of journals, it's out of hand.  Limiting CV lengths (and not including grants on them) when it comes to promotion and tenure could focus researchers' attention on doing what's really important rather than chaff-building.  Chairs and Deans would have to recognize this, and move away from safe but gameable bean-counting.  

[FYI: We've moved towards judging people internally, and sometimes externally in grant applications, on the quantity of their publications rather than the quality, or on supposedly 'objective' (computer-tallied) citation counts.  This is play-it-safe bureaucracy and obviously encourages CV padding, which is reinforced by the proliferation of for-profit publishing.  Of course some people are both highly successful in the real scientific sense of making a major discovery, as well as in publishing their work.  But it is naive not to realize that many, often the big players grant-wise, manipulate any counting-based system.  For example, they can cite their own work in ways that increase the 'citation count' that Deans see.  Papers with very many authors also lead to credit-claiming that is highly exaggerated relative to the actual scientific contribution.  Scientists quickly learn how to manipulate such 'objective' evaluation systems.]

7.  No more too-big-and-too-long-to-kill projects
The Manhattan Project and many others taught us that if we propose huge, open-ended projects we can have funding for life.  That's what the 'omics era and other epidemiological projects reflect today.  But projects that are so big they become politically invulnerable rarely continue to deliver the goods.  Of course, the PIs, the founders and subsequent generations, naturally cry that stopping their important project after having invested so much money will be wasteful!  But it's not as wasteful as continuing to invest in diminishing returns.  Project duration should be limited and known to all from the beginning.

8.  A re-recognition that science addressing focal questions is the best science
Really good science is risky because serious new findings can't be ordered up like hamburgers at McD's.  We have to allow scientists to try things.  Most ideas won't go anywhere.  But we don't have to allow open-ended 'projects' to scale up interminably as has been the case in the 'Big Data' era, where despite often-forced claims and PR spin, most of those projects don't go very far, either, though by their size alone they generate a blizzard of results. 

9. Stopping rules need to be in place  
For many multi-year or large-scale projects, an honest assessment part-way through would show that the original question or hypothesis was wrong or won't be answered.  Such a project (and its funds) should have to be ended when it is clear that its promise will not be met.  It should be a credit to an investigator who acknowledges that an idea just isn't working out, and those who don't should be barred for some years from further federal funding.  This is not a radical new idea: it is precedented in the drug trial area, and we should do the same in research.  

It should be routine for universities to provide continuity funding for productive investigators so they don't have to cling to go-nowhere projects.  Faculty investigators should always have an operating budget so that they can do research without an active external grant.  Right now, they have to piggy-back their next idea by using funds in their current grant, and without internal continuity funding, this naturally leads to safe 'fundable' projects, rather than really innovative ones.  The reality is that truly innovative projects typically are not funded, because it's easy for grant review panels to fault-find and move on to the safer proposals.

10. Research funding should not be a university welfare program
Universities are important to society and need support.  Universities as well as scientists become entrenched.  It's natural.  But society deserves something for its funding generosity, and one of the facts of funding life could be that funds move.  Scientists shouldn't have a lock on funding any more than anybody else. Universities should be structured so they are not addicted to external funding on grants. Will this threaten jobs?  Most people in society have to deal with that, and scientists are generally very skilled people, so if one area of research shrinks others will expand.

11.  Rein in costly science publishing
Science publishing has become what one might call a greedy racket.  There are far too many journals, rushing out half-way reviewed papers for pay-as-you-go authors.  Papers are typically paid for on grant budgets (though one can ask how often young investigators shell out their own personal money to keep their careers going).  Profiteering journals are proliferating to serve the CV-padding, hyper-hasty, bean-counting science industry that we have established.  Yet the vast majority of papers have basically no impact.  That money should go to actual research.

12.  Other ways to trim budgets without harming the science 
Budgets could be trimmed in many other ways, too: no buying journal subscriptions on a grant (universities have subscriptions), less travel to meetings (we have Skype and Hangouts!), shared costly equipment rather than a sequencer in every lab.  Grants should be smaller but of longer duration, so investigators can spend their time on research rather than hustling new grants.  Junk the use of 'impact' factors and other bean-counting ways of judging faculty.  The practice had a point once--to reduce discrimination and be more objective--but it has long been gamed and manipulated, substituting quantity for quality.  Better means of evaluation are needed.

These suggestions are perhaps rather radical, but to the extent that they can somehow be implemented, it would have to be done humanely.  After all, people playing the game today are only doing what they were taught they must do.  Real reform is hard because science is now an entrenched part of society.  Nonetheless, a fair-minded (but determined!) phase-out of the abuses that have gradually developed would be good for science, and hence for the society that pays for it.

***NOTES:  As this was being edited, NY state has apparently just made its universities tuition-free for those whose families are not wealthy.  If true, what a step back towards sanity and the public good!  The more states can get off the hooks of grants and strings-attached private donations, the more independent they should be able to be.

Also, the Apr 12 Wall St Journal has a story (paywall, unless you search for it on Twitter) showing the faults of an over-stressed health research system, including some of the points made here.  The article points out problems of non-replicability and other technical mistakes that are characteristic of our heavily over-burdened system.  But it doesn't go after the System as such, the bureaucracy and wastefulness and the pressure for 'big data' studies rather than focused research, and the need to be hasty and 'productive' in order to survive.

Wednesday, March 29, 2017

The (bad) luck of the draw; more evidence

A while back, Vogelstein and Tomasetti (V-T) published a paper in Science in which they argued that most cancers cannot be attributed to known environmental factors, but are due simply to the errors in DNA replication that occur throughout life when cells divide.  See our earlier 2-part series on this.

Essentially the argument is that knowledge of the approximate number of at-risk cell divisions per unit of age could account for the age-related pattern of increase in cancers of different organs, if one ignored some obviously environmental causes like smoking.  Cigarette smoke is a mutagen and if cancer is a mutagenic disease, as it certainly largely is, then that will account for the dose-related pattern of lung and oral cancers.

This got enraged responses from environmental epidemiologists whose careers are vested in the idea that if people would avoid carcinogens they'd reduce their cancer risk.  Of course, this is partly just the environmental epidemiologists' natural reaction to their ox being gored--threats to their grant largesse and so on.  But it is also true that environmental factors of various kinds, in addition to smoking, have been associated with cancer; some dietary components, viruses, sunlight, even diagnostic x-rays if done early and often enough, and other factors.

Most associated risks from agents like these are small compared to smoking, but not zero, and at least a legitimate objection to V-T's paper might be that it suggests environmental pollution, dietary excess, and so on don't matter when it comes to cancer.  I think V-T are saying no such thing.  Clearly some environmental exposures are mutagens, and it would take a really hard-core reactionary to claim that mutations are unrelated to cancer.  Other external or lifestyle agents are mitogens; they stimulate cell division, and it would be silly not to think they could have a role in cancer.  If and when they do, it is not by causing mutations per se.  Instead, mitogenic exposures in themselves just stimulate cell division, which is dangerous if the cell is already transformed into a cancer cell.  But it is also a way to increase cancer risk by just what V-T stress: the natural occurrence of mutations when cells divide.

There are a few who argue that cancer is due not to mutations but to transposable elements moving around and/or inserting into the genome where they can cause cells to misbehave, or to other, perhaps unknown, factors such as tissue organization.

These alternatives are currently considered a rather minor cause of cancer.  In response to their critics, V-T have just published a new multi-national analysis that they argue supports their theory: attempting to correct for the number of at-risk cells and so on, they found a convincing pattern that supports the intrinsic-mutation viewpoint.

This is at least in part an unnecessary food-fight.  When cells divide, DNA replication errors occur.  This seems well-documented; indeed, Vogelstein did work years ago showing evidence of somatic mutation (that is, DNA changes that are not inherited) in the genomes of cancer cells compared to normal cells of the same individual, and for decades this has been known in various levels of detail.  Of course, showing that this is causal rather than coincidental is a separate problem, because the fact that mutations occur during cell division doesn't necessarily mean that those mutations are causal.  However, for several cancers the repeated involvement of specific genes, the demonstration of mutations in the same gene or genes in many different individuals, the same effects in experimental mice, and so on, are persuasive evidence that mutational change is important in cancer.

The specifics of that importance are in a sense separate from the assertion that environmental epidemiologists are complaining about.  Unfortunately, to a great extent this is a silly debate.  In essence, professional pride and careerism aside, the debate should not be about whether mutations are involved in cancer causation but whether specific environmental sources of mutation are identifiable and individually strong enough, as x-rays and tobacco smoke are, to be identified and avoided.  Smoking targets particular cells in the oral cavity and lungs.  But more generic exposures, individually rare, not tied to a specific habit like smoking, and unavoidable, might raise the rate of somatic mutation generally.  Just having a body temperature may be one such factor, for example.

I would say that we are inevitably exposed to chemicals and so on that will potentially damage cells, mutation being one such effect.  V-T are substantially correct, from what the data look like, in saying that (in our words) namable, specific, and avoidable environmental mutagens are not the major systematic, organ-targeting cause of cancer.  Vague and/or generic exposure to mutagens will lead to mutations more or less randomly among our cells (though, depending on the agent, perhaps differently depending on how deep in our bodies the cells are relative to the outside world or other routes of exposure).  The more at-risk cells, and the longer they're at risk, the greater the chance that some cell will experience a transforming set of changes.

Most of us probably inherit mutations in some of these genes from conception, and have to await other events (whether these are mutational or of another nature, as mentioned above).  The age patterns of cancers seem very convincingly to show that.  The real key factor here is the degree to which specific, identifiable, avoidable mutational agents can be identified.  Resisting that idea seems silly or, perhaps as likely, mere professional jealousy.

These statements apply even if cancers are not all, or not entirely, due to mutational effects.  And, remember, not all of the mutations required to transform a cell need be of somatic origin.  Since cancer is mostly, and obviously, a multi-factor disease genetically (not a single mutation as a rule), we should not have our hackles raised if we find what seems obvious, that mutations are part of cell division, part of life.

There are curious things about cancer, such as our large body size but delayed onset ages relative to the occurrence of cancer in smaller, shorter-lived animals like mice.  And animals of different lifespans and body sizes, even different rodents, have different lifetime cancer risks (some of which may be the result of details of their inbreeding history, or of inbreeding itself).  Mouse cancer rates increase with age, and hence with the number of at-risk cell divisions, but their substantial risk at very young ages, despite many fewer cell divisions (yet similar genome sizes), shows that even the spontaneous-mutation idea of V-T has problems.  After all, elephants are huge and live very long lives; why don't they get cancer much earlier?

Overall, if correct, V-T's view should not give too much comfort to our 'Precision' genomic medicine sloganeers, another aspect of budget protection, because the bad-luck mutations are generally somatic, not germline, and hence not susceptible to Big Data epidemiology, genetic or otherwise, that depends on germ-line variation as the predictor.

Related to this are the numerous reports of changes in life expectancy among various segments of society based on behaviors; most recently, for example, the opioid epidemic among whites in depressed areas of the US.  Such environmental changes are not predictable specifically, not even in principle, and can't be built into genome-based Big Data, or the budget-promoting promises coming out of NIH about such 'precision'.  Even estimated lifetime cancer risks associated with mutations in clear-cut risk-affecting genes, such as BRCA1 in breast cancer, vary greatly from population to population and study to study.  The V-T debate, and their obviously valid point, regardless of the details, is only part of the lifetime cancer risk story.

ADDENDUM 1
Just after posting this, I learned of a new story on this 'controversy' in The Atlantic.  It is really a silly debate, as noted in my original version.  It tacitly makes many different assumptions about whether this or that tinkering with our lifestyles will add to or reduce the risk of cancer and hence support the anti-V-T lobby.  If we're going to get into the nitty-gritty and typically very minor details about, for example, whether the statistical colon-cancer-protective effect of aspirin shows that V-T were wrong, then this really does smell of academic territory defense.

Why do I say that?  Because if we go down that road, we'll have to say that statins are cancer-causing, and so is exercise, and kidney transplants and who knows what else.  They cause cancer by allowing people to live longer, and accumulate more mutational damage to their cells.  And the supposedly serious opioid epidemic among Trump supporters actually is protective, because those people are dying earlier and not getting cancer!

The main point is that mutations are clearly involved in carcinogenesis, cell division life-history is clearly involved in carcinogenesis, environmental mutagens are clearly involved in carcinogenesis, and inherited mutations are clearly contributory to the additional effects of life-history events.  The silly extremism to which the objectors to V-T would take us would be to say that, obviously, if we avoided any interaction whatsoever with our environment, we'd never get cancer.  Of course, we'd all be so demented and immobilized with diverse organ-system failures that we wouldn't realize our good fortune in not getting cancer.

The story and much of the discussion on all sides is also rather naive even about the nature of cancer (and how many or of which mutations etc it takes to get cancer); but that's for another post sometime.

ADDENDUM 2
I'll add another new bit to my post, that I hadn't thought of when I wrote the original.  We have many ways to estimate mutation rates, in nature and in the laboratory.  They include parent-offspring comparisons in genomewide sequencing samples, and there have been sperm-to-sperm comparisons.  I'm sure there are many other sets of data (see Michael Lynch in Trends in Genetics 2010 Aug; 26(8): 345–352).  These give a consistent picture, and one can say, if one wants to, that the inherent mutation rate is due to identifiable environmental factors, but given the breadth of the data that's not much different from saying that mutations are 'in the air'.  There are even sex-specific differences.

The numerous mutation detection and repair mechanisms built into genomes add to the idea that mutations are part of life; they are not, for example, an artifact of modern human lifestyles.  Of course, evolution depends on mutation, so mutation cannot be, and never has been, reduced to zero--a species that couldn't change doesn't last.  Mutations occur in plants and animals and prokaryotes, in all environments, and, I believe, generally at rather similar species-specific rates.

If you want to argue that every mutation has an external (environmental) cause rather than an internal molecular one, that is merely saying there's no randomness in life or imperfection in molecular processes.  That is as much a philosophical as an empirical assertion (as perhaps any quantum physicist can tell you!).  The key, as asserted in the post here, is that for the environmentalists' claim to make sense--for a factor to be a mutational cause in any meaningful sense--the force or factor must be systematic, identifiable, and tissue-specific, and it must be shown how it gets to the internal tissue in question and not to other tissues on the way in, etc.

Given how difficult it has been to chase down most environmental carcinogenic factors to which exposure is more than very rare, that the search has been going on for a very long time, and that only a few have been found that are, in themselves, clearly causal (ultraviolet radiation, Human Papilloma Virus, ionizing radiation, the ones mentioned in the post), whatever is left over must be very weak, non-tissue-specific, rare, and the like.  Even radiation-induced lung cancer in uranium miners has been challenging to prove (for example, because miners also largely were smokers).

It is not much of a stretch to say that even if, in principle, all mutations in our body's lifetime were due to external exposures, and the relevant mutagens could be identified and shown in some convincing way to be specifically carcinogenic in specific tissues, in practice the aggregate exposures to such mutagens are unavoidable and epistemically random with respect to tissue and gene.  That, I would say, is the essence of the V-T finding.

Quibbling about that aspect of carcinogenesis is for those who have already determined how many angels dance on the head of a pin.

Friday, March 24, 2017

Paid To Prey (PTP) journals

In the bad old days if you as a scientist had something worth saying, a journal would (after vetting through a mainly fair confidential review system) publish it.  If you had good things to say, whether or not you had grants, your ideas were heard, and you could make a career on the basis of the depth of your thought, your careful results, and so on.

If you needed funds to do your research, such as to travel or run a laboratory, well, you needed a grant to do your work.  This was the system we all knew.  You had to have funding, but you couldn't just pay your way through to publishing.  Also, if you were junior, start-up funds were typically made available if you needed them, to give you a leg up and a chance to get your career going.

Publishing has always had costs, of course, but the journals survived by library and personal subscriptions, often based on professional society memberships, where the fees were modest, especially for the most junior members.

Now what we have is a large pay-to-play (PTP) industry.  Pay-to-play journals are almost synonymous with corruption.  The mass of nearly-criminal ones prey on the career fears of desperate students, post-docs, and faculty (especially junior faculty, perhaps).  Even the honest PTP journals, of which there are many, essentially prey on investigators and taxpayers, but the horde of dishonorable ones are no better than highwaymen, robbing the most vulnerable.  A story in the NY Times exposes some of the schemes and scams of the dishonorable PTPers.  But it doesn't go nearly far enough.

How cruel is this rat race?  Where does the PTP money come from?
We have every moral as well as fiscal right to ask where the PTP payments are coming from.  Are low-paid, struggling post-docs, students, and junior or even more senior faculty members using their own personal funds to stay in the publication score-counting game?  How much taxpayer money goes, even via legitimate grants, to these open-access publishers rather than to the research costs for which the grants were intended?  In the past, you might have had to pay for color figures, or for reprints, and these costs did generally come from grant funds, but they were not very expensive.  And of course grants often pay for faculty salaries (a major corruption of the system that nobody seems able to fix, and on which too many depend to criticize).

The idea of open-access journals sounded good, and not like a private-profiteering scam.  But too many have turned out to be the latter, geese laying golden eggs even for the better journals, when there is profit to be made.  The original, or at least more publicly proclaimed, open-access idea was that even if you couldn't afford a subscription or didn't have access to a university library--especially, for example, if you were in a country with a paucity of science resources--you would have access to the world's top science anyway.  But even if the best of the open-access organizations are non-profit, non-predatory PTP operations (and how would we know?), we are clearly preying on the fears of those desperate for careers in a heavily oversubscribed, Malthusian-overpopulated science industry.

There is no secret about that, but too many depend on the growth model for there to be an easy fix, except the painful one of budget cuts.  The system is overloaded and overworked and that suggests that even if everyone were doing his/her best, sloppy or even corrupt work would make it through the minimal PTP quality control sieve.  And that makes it easy to see why many may be paying with personal funds or submitting sloppy (or worse) work--and too much of it, too fast.

There isn't any obvious solution in an overheated, hyper-competitive system.  We do have the web, however, and one might suggest shutting down the PTP industry, or at least somehow closing its predatory members, and using the web to publicize new findings.  Perhaps some of the open review sources, like arXiv, can deal with some of the peer-reviewing issues to maintain a quality standard.

Of course, Deans and Chairs would have to actually do the work of evaluating the quality of their faculty members' works (beyond 'impact factors', grant totals, paper counts, and so on) to reward quality of thought rather than any quantity-based measures.  That would require the administrators to actually think, know their fields, and take the time to do their jobs.  Perhaps that's too much to ask of a system now sometimes proudly proclaiming it's on the 'business model'.

But what we're seeing is what we deserve because we've let it happen.

Thursday, March 16, 2017

Higher resolution discrimination: The GOP wants to allow employers to require genetic testing

This morning, Ed Yong published an article that takes on issues that we at The Mermaid's Tale care very deeply about.
Link to article
The consequences for important medical research are not going to be pretty.

And I can't help but be angry about this for threatening to take away the fun of genetics too. If we can't have some control over our genetic testing, we can't do it for fun, for education, for finding out more about ourselves, for the awe of it, for innerspace exploration in the technology age. They're taking that away from us by eroding GINA.

I have lots of other thoughts... like about how this fits in so nicely with (not all of) the right's racist/eugenics inclinations.

And juxtapose this view from the political right where there is full-on acceptance of actually-more-than-genetics-can-even-deliver against their anti-science politics and policy...

It's like science is totally fine for Republicans as long as Mother Nature is a dictator.

If it's more complicated than that, then deny it, defund it, bulldoze it.  The reality is, genetics is largely probabilistic; it is not a dictatorship.  It's just so hard to convince people of that.  The ideological drive to justify behavioral differences and socioeconomic inequality with Nature above all is just too strong.  If it's Nature, then we don't have to do the hard work of addressing the problems, because Nature is Nature is Nature.  This is really old thinking that really new knowledge (through lots of science, lots of lived experience, lots of humanities, and lots of art) has overturned, though the overturning hasn't managed to catch on all that well.  And alongside that new knowledge comes just enough familiarity with genetics that these ancient beliefs can be spouted by politicians in new-fangled science jargon.

This is really hard to write about today as all the stories about the proposed (and highly probable) budget cuts to science and the arts are blasting through my newsfeeds. It's overwhelming me today. I'm feeling hopeless and angry on behalf of science, art, knowledge, medicine, humanity, humans, children, teenagers, grown-ups, geezers. It's too much today.

But, back to Ed's article, I do need to put this here because it mentions that I have taught with 23andMe and longtime readers of the MT might know about that:

I don't teach with 23andMe anymore. I was doing it for as long as my university would pay for the kits. It was totally voluntary and students had to read Misha Angrist's book and endure long discussions and pass a quiz before deciding whether to go through with the testing. It was so powerful for teaching evolution, genetics, anthropology, etc... and we critiqued the hell out of it. My university said I needed to pay for the kits through course fees from now on. Before any of these threats to GINA, I decided not to do that and to stop using 23andMe. Now, even if my university reconsidered and funded the kits, I still wouldn't take it up again as a teaching tool.