In cancer science, many 'discoveries' don't hold up
NEW YORK - A former researcher at Amgen Inc has found that many basic studies on cancer -- a high proportion of them from university labs -- are unreliable, with grim consequences for producing new medicines in the future.
During a decade as head of global cancer research at Amgen, C. Glenn Begley identified 53 "landmark" publications -- papers in top journals, from reputable labs -- for his team to reproduce. Begley sought to double-check the findings before trying to build on them for drug development.
Result: 47 of the 53 could not be replicated. He described his findings in a commentary piece published on Wednesday in the journal Nature.
"It was shocking," said Begley, now senior vice president ofprivately held biotechnology company TetraLogic, which developscancer drugs. "These are the studies the pharmaceutical industryrelies on to identify new targets for drug development. But ifyou're going to place a $1 million or $2 million or $5 millionbet on an observation, you need to be sure it's true. As wetried to reproduce these papers we became convinced you can'ttake anything at face value."
The failure to win "the war on cancer" has been blamed onmany factors, from the use of mouse models that are irrelevantto human cancers to risk-averse funding agencies. But recently anew culprit has emerged: too many basic scientific discoveries,done in animals or cells growing in lab dishes and meant to showthe way to a new drug, are wrong.
Begley's experience echoes a report from scientists at Bayer AG last year. Neither group of researchers alleges fraud, nor would they identify the research they had tried to replicate.
But they and others fear the phenomenon is the product of a skewed system of incentives that has academics cutting corners to further their careers.
George Robertson of Dalhousie University in Nova Scotia previously worked at Merck on neurodegenerative diseases such as Parkinson's. While at Merck, he also found many academic studies that did not hold up.
"It drives people in industry crazy. Why are we seeing acollapse of the pharma and biotech industries? One possibilityis that academia is not providing accurate findings," he said.
Believe it or not
Over the last two decades, the most promising route to new cancer drugs has been one pioneered by the discoverers of Gleevec, the Novartis drug that targets a form of leukemia, and Herceptin, Genentech's breast-cancer drug. In each case, scientists discovered a genetic change that turned a normal cell into a malignant one. Those findings allowed them to develop a molecule that blocks the cancer-producing process.
This approach led to an explosion of claims of other potential "druggable" targets. Amgen tried to replicate the new papers before launching its own drug-discovery projects.
Scientists at Bayer did not have much more success. In a 2011 paper titled "Believe it or not," they analyzed in-house projects that built on "exciting published data" from basic science studies. "Often, key data could not be reproduced," wrote Khusru Asadullah, vice president and head of target discovery at Bayer HealthCare in Berlin, and colleagues.
Of 47 cancer projects at Bayer during 2011, less than one-quarter could reproduce previously reported findings, despite the efforts of three or four scientists working full time for up to a year. Bayer dropped the projects.
Bayer and Amgen found that the prestige of a journal was no guarantee a paper would be solid. "The scientific community assumes that the claims in a preclinical study can be taken at face value," Begley and Lee Ellis of MD Anderson Cancer Center wrote in Nature. It assumes, too, that "the main message of the paper can be relied on ... Unfortunately, this is not always the case."
When the Amgen replication team of about 100 scientists could not confirm reported results, they contacted the authors. Those who cooperated discussed what might account for the inability of Amgen to confirm the results. Some let Amgen borrow antibodies and other materials used in the original study or even repeat experiments under the original authors' direction.
Some authors required the Amgen scientists to sign a confidentiality agreement barring them from disclosing data at odds with the original findings. "The world will never know" which 47 studies -- many of them highly cited -- are apparently wrong, Begley said.
The most common response by the challenged scientists was: "you didn't do it right." Indeed, cancer biology is fiendishly complex, noted Phil Sharp, a cancer biologist and Nobel laureate at the Massachusetts Institute of Technology.
Even in the most rigorous studies, the results might be reproducible only in very specific conditions, Sharp explained: "A cancer cell might respond one way in one set of conditions and another way in different conditions. I think a lot of the variability can come from that."
The best story
Other scientists worry that something less innocuous explains the lack of reproducibility.
Part way through his project to reproduce promising studies, Begley met for breakfast at a cancer conference with the lead scientist of one of the problematic studies.
"We went through the paper line by line, figure by figure,"said Begley. "I explained that we re-did their experiment 50times and never got their result. He said they'd done it sixtimes and got this result once, but put it in the paper becauseit made the best story. It's very disillusioning."
Such selective publication is just one reason the scientific literature is peppered with incorrect results.
For one thing, basic science studies are rarely "blinded"the way clinical trials are. That is, researchers know whichcell line or mouse got a treatment or had cancer. That can be aproblem when data are subject to interpretation, as a researcherwho is intellectually invested in a theory is more likely tointerpret ambiguous evidence in its favor.
The problem goes beyond cancer.
On Tuesday, a committee of the National Academy of Sciences heard testimony that the number of scientific papers that had to be retracted increased more than tenfold over the last decade; the number of journal articles published rose only 44 percent.
Ferric Fang of the University of Washington, speaking to the panel, said he blamed a hypercompetitive academic environment that fosters poor science and even fraud, as too many researchers compete for diminishing funding.
"The surest ticket to getting a grant or job is gettingpublished in a high-profile journal," said Fang. "This is anunhealthy belief that can lead a scientist to engage insensationalism and sometimes even dishonest behavior."
The academic reward system discourages efforts to ensure a finding was not a fluke. Nor is there an incentive to verify someone else's discovery. As recently as the late 1990s, most potential cancer-drug targets were backed by 100 to 200 publications. Now each may have fewer than half a dozen.
"If you can write it up and get it published you're not eventhinking of reproducibility," said Ken Kaitin, director of theTufts Center for the Study of Drug Development. "You make anobservation and move on. There is no incentive to find out itwas wrong."