Evidence underpinning approval of new cancer drugs raises questions

The findings of a new analysis of the trials supporting European cancer drug approvals add weight to existing research raising serious concerns about the low standards of evidence behind new cancer drugs, and highlight the need to improve the design, conduct, analysis, and reporting of cancer drug trials.

In the European Union, the European Medicines Agency (EMA) is responsible for evaluating the clinical effectiveness and safety of new medicines.

In 2017, more than a quarter (24 of 92) of EMA approvals were for cancer drugs, most of which were based on evidence from randomised controlled trials, considered to be the “gold standard” for evaluating treatment effectiveness.

However, flaws in the design, conduct, analysis, or reporting of randomised controlled trials can distort estimates of treatment effect, potentially jeopardising the validity of their findings.

To evaluate these flaws in more detail, a team of international researchers examined the design, risk of bias, and reporting of randomised controlled trials that supported European approvals of cancer drugs from 2014 to 2016.

During this period, the EMA approved 32 new cancer drugs on the basis of 54 studies. Of these, 41 (76%) were randomised controlled trials; 39 had available publications and were therefore included in the study.

Only 10 trials (26%) measured overall survival as a main (primary) endpoint. The remaining 29 trials (74%) evaluated indirect (surrogate) measures of clinical benefit, which do not always reliably predict whether a patient will live longer or have a better quality of life.

Overall, 19 trials (49%) were judged to be at high risk of bias because of deficits in their design, conduct, or analysis. Trials that evaluated overall survival were at lower risk of bias than those that evaluated surrogate measures of clinical benefit.

Regulators identified additional problems with 10 of the 32 new drugs (31%) approved over this period, but these concerns rarely surfaced in the scientific literature.

The researchers point to several limitations. For example, they did not include clinical study reports, which contain detailed information about trial methods and results, and they focused only on cancer drug trials, so findings may not apply to trials in other therapeutic areas.

In addition, they evaluated the “risk” of bias rather than bias itself: it remains possible that the methodological deficits the authors identified did not actually lead to biased findings.

They also acknowledge that some of the bias might be unavoidable because of the complexity of cancer trials, but say their findings should prompt policymakers, investigators, and clinicians “to carefully consider risk of bias in pivotal trials that support regulatory decisions, and the extent to which new cancer therapies offer meaningful benefit to patients.”

“Scientific publications and regulatory documents should make it easy for patients and clinicians to understand how well a study is conducted,” they add.

In a linked editorial, Australian-based researchers argue that uncertainty and exaggeration of the evidence that supports approval of cancer drugs “causes direct harm if patients risk severe or fatal adverse effects without likely benefit, or forgo more effective and safer treatments.”

Inaccurate evidence also leads to intangible harms, encouraging false hope and distracting from needed palliative care, they add.

This study shows that trial evidence alone is not enough, they write. Quality assessment of that evidence is also needed to ensure that these trials accurately estimate treatment effects.
