Updated by Julia Belluz on
July 28, 2016
The road to getting a new pharmaceutical drug on the market is long and arduous.
It begins with a novel compound that must first be tested in
cells and then animals. This "pre-clinical" phase
of drug development, which can last for several years, allows researchers to
understand how potential therapies might work on different diseases and
whether the drugs are likely to be safe or toxic in people.
But for every
5,000 compounds assessed at this stage, only about five are promising
enough to even try in humans. And after the clinical trials in humans, only
one will actually reach pharmacy shelves.
This overwhelming rate of
failure is often attributed to the fact that mice and cells are poor
substitutes for people.
But it increasingly looks like that gap may be
caused by something else entirely: the quality of animal and cell studies.
"There is this idea that drugs seem to work in animals, and then when
you test the same drugs in humans, they fail," said Emily
Sena, a researcher at the University of Edinburgh who has been
dissecting the world of animal studies. Instead, some drugs may fail, Sena
argues, because some animal studies are just so poorly designed.
Sena became attuned to the bad animal science problem about a decade ago, when
she and her colleagues began
to investigate why NXY-059, a highly touted drug for stroke patients,
failed in human studies after extremely promising results in animals.
They quickly realized the animal research on the drug was flawed because the
researchers had not taken basic measures to reduce the risk of bias.
NXY-059 turned out to be just one case study in a sea of examples. Many
basic studies, Sena and other researchers discovered, are too small, riddled
with flaws, or so contaminated as to be useless for testing whether drugs will work.
This epidemic of bad basic science means breakthroughs are lost in
translation, and potentially lifesaving therapies are unnecessarily delayed on
their way to helping people who need them. It also means animal lives are
being wasted on sloppy studies that can't really tell us anything about the treatments being tested.
Animal studies are often so small they're irrelevant
With any new drug, there are myriad questions about how the human body will absorb it
and respond to it. But testing new drugs in people is risky and expensive.
So researchers heavily rely on animals like mice that are biologically very
similar to humans (we share 95 percent of the same genes, and get many of
the same diseases) to answer some of the most basic questions about a
potential new medicine.
Over the years, animal rights activists and
ethicists have pointed out that tests on animals can be unnecessarily cruel,
painful, and wasteful of animal lives.
This awareness has led to mandates
like the UK's 3Rs principles: Any
animal experiment application seeking ethical approval should demonstrate
that it has considered how to reduce the number of animals used in research,
replace the use of animals with other models, and prevent unnecessary
suffering by refining the ways scientists care for animals. In the US, the Animal
Welfare Act of 1966 led to similar protections.
But researchers like Sena argue that there's been an unintended consequence of the push to reduce
the number of animals used in studies: Too many animal studies are now so
small as to be meaningless.
Sena leads the Collaborative Approach to Meta-Analysis
and Review of Animal Data in Experimental Studies (or CAMARADES),
an international group that's dedicated to systematically analyzing animal
data across a range of different conditions. (They're basically the Cochrane
Collaboration of animal research on diseases.) In one study, Sena and her
colleagues found that fewer than half of the animal experiments they looked at
included a large enough sample size (i.e., enough animals) to be adequately powered.
In another, they analyzed more than 2,600
UK studies that used animal models and found that only 1 percent
bothered to report a sample size calculation. In an ideal world, researchers
should publish how they determined the minimum number of animals required to
make sure they could answer the question they set out to answer. But in
animal studies, that step overwhelmingly doesn't happen.
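To make that concrete, here's a minimal sketch of what such a calculation can look like, in Python using the statsmodels library. The effect size, significance level, and power target below are illustrative assumptions, not values drawn from any study Sena's team reviewed.

```python
# Hypothetical sample size calculation for a two-group animal experiment.
# All parameter values are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.8,         # assumed standardized group difference (Cohen's d)
    alpha=0.05,              # accepted false-positive rate
    power=0.8,               # desired chance of detecting a real effect
    alternative="two-sided",
)
print(f"Minimum animals per group: {n_per_group:.0f}")  # about 26
```

Publishing a calculation like this is what lets readers judge whether a study was big enough to answer its question.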
Sena thinks this
is because researchers have misinterpreted the push to reduce the number of
animals in studies. "'Use the fewest number of animals to answer your
research question' has turned into 'use the fewest animals,'" she said. And
this means, too often, animal studies are so small as to be irrelevant.
"I don't think it's ethical to do an experiment with five animals in each
group when that's [underpowered]," she says. That's a nuanced message that
hasn't been easy to get across, she added. "I have a few folks who
misinterpret my stance. They think it's, ‘Don't do animal studies.' I'm not
saying that at all. I'm saying do them properly, and you probably need to do
Small studies also undercut the validity of the findings
and the goal of using fewer animals in the long run.
"[Small studies] also give larger effect sizes," said University of Edinburgh's Malcolm
Macleod, a pioneer in this field who helped establish CAMARADES. "They
create red herrings that other people have to come along and fix later. And
it always takes longer to fix something … so you end up using more animals."
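Macleod's point lends itself to a quick simulation. The sketch below is illustrative only (the true effect, group size, and number of simulated studies are arbitrary assumptions): with five animals per group, only experiments that happen to observe an unusually large difference clear the p < 0.05 bar, so the "significant" ones systematically exaggerate the true effect.

```python
# Illustrative simulation: underpowered studies that reach significance
# overestimate the effect (the "winner's curse"). All parameters are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.5               # assumed true standardized effect
n_per_group, n_studies = 5, 20_000

significant = []
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_effect, 1.0, n_per_group)
    _, p = stats.ttest_ind(treated, control)
    if p < 0.05:
        significant.append(treated.mean() - control.mean())

print(f"True effect: {true_effect}")
print(f"Mean effect in 'significant' studies: {np.mean(significant):.2f}")
# Typically prints around 1.4 -- nearly triple the true effect.
```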
In one study, Macleod and his colleagues looked at the use of p-values, those
tests of statistical significance that are now commonly perceived as a
signal of a study's worth, in animal studies of neurological disorders. They
wanted to test whether there were too many studies with "positive," or
statistically significant, results. Of the 4,445 studies they looked at,
1,719 boasted a "positive" result -- nearly double the number they calculated
was plausible given the studies' statistical power.
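The logic of that check can be sketched in a couple of lines. The 20 percent mean power used here is a purely hypothetical assumption for illustration; the actual analysis estimated power study by study from the data.

```python
# Back-of-envelope "excess significance" check. The assumed mean power is
# hypothetical; only the study counts come from the text above.
n_studies = 4445
observed_positives = 1719
assumed_mean_power = 0.20   # hypothetical average chance a study detects its effect

expected_positives = n_studies * assumed_mean_power
print(f"Expected positives: {expected_positives:.0f}")                       # 889
print(f"Observed / expected: {observed_positives / expected_positives:.1f}") # ~1.9
```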
Animal studies aren't just too small -- they're
also rife with biases
"Publication bias" is
a big problem in the world of research: Not all studies that are conducted
actually get published in journals, and the ones that do tend to have
positive and dramatic conclusions, leaving a skewed impression of the evidence.
Estimates suggest the findings from half of all clinical
trials that are conducted in humans are never published. It turns out the
problem in basic research is not much better.
In one analysis of systematic reviews of animal stroke studies, the CAMARADES
team estimated 20 percent of animal studies were unpublished, which leads to
an overestimate of the effects of treatments. And the problem likely extends
beyond just stroke research. "It is probable that publication bias has an
important impact in other animal disease models, and more broadly in the
life sciences," the researchers wrote.
So not only are researchers
failing to publish all their work on clinical trials in humans, but the
publication track record of pre-human findings looks similarly lackluster.
The CAMARADES group also found that researchers conducting animal studies
often don't take the simple steps to reduce bias: randomizing which animals
get the drug and which get the placebo or control, and being transparent about
potential conflicts of interest.
Looking at a sample of 2,600 animal
studies of drugs, the CAMARADES
team found only 622 (or 23 percent) used randomization. Meanwhile, 308
(or 11.5 percent) included a statement on potential conflicts of interest.
For high quality studies, Sena said, "You want animals to be randomized,
and you want conflicts of interest to be declared -- all that [should be]
upfront so you can interpret that." But again, it's more the exception than
the rule in the world of animal research.
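The fix here is not technically hard. A minimal sketch of randomized allocation, with made-up animal IDs and group sizes, might look like this:

```python
# Minimal sketch of randomized group allocation (hypothetical IDs and sizes).
import random

random.seed(42)                        # fixed seed so the allocation is auditable
animals = [f"mouse_{i:02d}" for i in range(1, 21)]
random.shuffle(animals)                # assignment by lot, not by convenience
treatment, control = animals[:10], animals[10:]
print("Treatment group:", treatment)
print("Control group:  ", control)
```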
Up to 36 percent of cell
lines are misidentified or contaminated
Along with animal research, early testing in
cells is another common pre-clinical step in drug development. Researchers
use cell lines, which have been grown in controlled conditions, to better
understand how diseases work. They can use them to see how, for example,
malignant or healthy tissues might respond to potential cancer drugs or other treatments.
At first glance, this may seem like a purer endeavor
compared with messy human experimentation. But meta-researchers who study
cell research say that couldn't be further from the truth. Over the years, they've
found that a lot of basic research on cells isn't reproducible because
it's also plagued by flaws and biases.
In a new study
published Tuesday in the journal Scientific
Reports, researchers found that breast cancer tumor cells from the same
cell bank responded differently to the same chemical treatments. The cells
were supposed to be clones -- they had been vetted for quality before the
experiment, and they came from one of the world's best cell banks. They
should have had the exact same responses to the chemicals.
"There were dramatic [genetic] differences from one vial of these cells to
another," said study leader Thomas Hartung, a professor in environmental
health sciences and molecular microbiology and immunology at Johns Hopkins.
He and his co-authors decided to investigate why, and discovered that the
cells had undergone genetic drift: as cell lines divide over many generations in
culture, mutations accumulate, so supposedly identical vials gradually diverge.
"THEY COULD BE DOING REALLY GOOD
SCIENCE BUT WITH A FUNDAMENTALLY FLAWED SYSTEM, WHICH MEANS WHATEVER THEY
FIND IS WRONG"
Since the cell line in question is very common -- there
have been about 23,000 articles written about MCF-7 cells -- Hartung said
he's concerned about what his findings mean for other research involving them.
Troubles with cell quality aren't exclusive to genetic
variation, said Andy
Bradford, a University of Colorado researcher. Researchers are supposed
to validate that they're working on the correct cell model, and that
their cells haven't been contaminated, before starting a study. In other words,
they should make sure the breast cancer cells they think they're working on
are indeed breast cancer cells and that those breast cancer cells haven't
been mixed up with lymphoma cells, for example. But all too often, Bradford
said, that doesn't happen.
And that's a big problem, because
researchers have found that up
to 36 percent of cell lines are misidentified or contaminated. In one
instance, Bradford explained, a researcher thought she was working on a
thyroid cell line, testing a potential therapeutic, when in fact she had
been working on melanoma cells.
"That misidentified cell line led
investigators to pursue a misdirection," Bradford said. "They could be doing
really good science but with a fundamentally flawed system, which means
whatever they find is wrong."
To guard against the problem, some
journals (including Nature) and funders (like the National Institutes of
Health) are requiring researchers to confirm they've validated their cells
to make sure they're the correct type before starting an experiment. But as
Ivan Oransky and Adam Marcus pointed out at Stat
News, progress has been slow. Researchers have known about the problem for more
than 50 years, and yet the vast majority of journals still require no
such validation step.
The problems that plague animal and cell
research affect the rest of science
We know that perverse incentives are
behind a lot of bad research. That's what we found when we asked 270
scientists about all the ways
research can go wrong. In order to keep their jobs and continue doing
science, researchers told us they are pressured to publish and attract lots
of funding, and that pressure too often leads to exaggerated and unreliable findings.
Macleod said that's just as true in the world of
animal research. "Academic institutions have become factories," he said.
"They now have a business model that requires people to get grants."
That pressure can lead to corner cutting in basic research -- which sets
researchers on wrong and misleading paths long before the messy and
expensive science of testing in humans.
Still, bad science surely is
not the only reason so many drug trials in humans fail. Sena, Macleod, and
other researchers have produced good
evidence that animals just aren't great models for some human diseases.
But reducing the amount of bad animal science out there will help the
situation. For their part, the CAMARADES team helped set up Multi-PART,
a European funding project to enable multiple institutions to collaborate
around higher-quality animal studies. Some of their objectives include
increasing randomization and blinding in basic research.
The group also advocates for funders to withhold a portion of research grant funding until
researchers have published their work. With more formal acknowledgements of
these problems, we may start to see some change.