In randomized clinical trials, commercial sponsorship influences how studies are designed and how their results are reported, often in ways that benefit the study's sponsor, Weill Cornell Medicine and NewYork-Presbyterian investigators report in a new study. The findings underscore the need to improve study design, reporting and guidelines to avoid bias in these trials, the authors say.
The study, published June 1 in JAMA Internal Medicine, focuses on coronary, vascular and structural interventional cardiology, and vascular and cardiac surgeries because of the enormous burden cardiovascular disease places on public health. In the United States, it accounts for approximately 800,000 deaths per year and 6 percent of total dollars spent on healthcare. A rigorous approach to evaluating new interventions for heart disease is critical.
“In medicine in general, but in particular cardiovascular medicine, we see randomized clinical trials as the best form of evidence,” said lead author Dr. Mario Gaudino, a professor in cardiothoracic surgery and director of translational and clinical research in cardiothoracic surgery at Weill Cornell Medicine, and a cardiovascular surgeon at NewYork-Presbyterian/Weill Cornell Medical Center. “Our practice is very heavily influenced by the results of randomized clinical trials. If those trials are not properly performed and reported, there’s a risk that we use the wrong strategy and don’t treat patients in the best possible way.”
The investigators analyzed data from 216 randomized clinical trials published between 2008 and mid-2019 that involved invasive cardiovascular treatments. Using a variety of quantitative measures, they assessed each trial's characteristics, including its design, outcomes and reporting, and whether the study was funded by a commercial sponsor.
They were surprised to find that relatively few trials (fewer than 20 per year, on average) had been conducted, the majority of which were small and sponsored by industry. Most of the trials had limited statistical power to detect large treatment effects and followed patients for only a limited period of time. The industry-sponsored trials were less likely to examine the most clinically important outcomes and more likely to focus on outcomes with little relevance for patients. Industry-sponsored trials were also more likely to report results that favored the sponsor's product. Moreover, in industry-sponsored trials that found no difference between groups, the researchers found evidence of interpretation bias favoring the sponsor.
The investigators also used a metric called the Fragility Index to assess the strength of the trial results. Clinical trials track the number of patients who experience adverse events, such as stroke or heart attack, and record the outcomes of patients who do not as "nonevents." The Fragility Index of a clinical trial is the minimum number of patients whose outcomes, if switched from a nonevent to an event, would change the trial's result from statistically significant to nonsignificant. Lower values indicate less robust results, and the investigators found the overall Fragility Index to be quite low. They also found that commercially sponsored trials were more robust and less fragile.
“We need to do a better job in designing trials that are important for patients—in the end, the call of clinical research is to provide information that’s relevant to the patient; it isn’t to sell a device,” said Dr. Gaudino, adding that the role of sponsors needs to be more clearly defined. “Industry is important because it provides support to clinical research, but it must be clearly regulated, there must be transparency, and every conflict of interest needs to be fully declared.”