Real-World Evidence: Understanding Sources of Variability Through Empirical Analysis
Abstract
In this issue of Value in Health, Thompson1 reviews the merits and potential pitfalls of efforts to replicate randomized controlled trials (RCTs) using real-world data (RWD). We agree with much of the article but disagree with the author’s primary conclusion that emulation activities offer only downside because failures to emulate will undermine confidence in RWD studies. On the contrary, the primary objectives of emulation studies are to identify the sources of variability in RWD, to understand its limitations and advantages, and thereby to appreciate its appropriate utility and value.
There are many reasons why RCTs might generate different treatment effect estimates than observational studies. In fact, it is somewhat surprising that observational studies would ever be able to account for sufficient real-world confounders to emulate the results of an RCT. But this is an empirical question, and it is central to a number of RCT emulations currently underway. Randomization controls for bias from both observed and unobserved confounders. With the exception of instrumental variables techniques, most statistical methods used in observational studies control only for observed confounders. Thus, the efficacy–effectiveness gap largely turns on whether the important confounders are known and can be measured in available databases; this is likely to vary by data source, disease state, eligibility criteria, outcome measures, and other factors. In a cross-sectional review of 220 clinical trials published in high-impact journals in 2017, Bartlett et al2 found that only 15% could potentially be emulated using data available from administrative claims or electronic health records (EHRs).
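The distinction between randomization and statistical adjustment can be made concrete with a minimal simulation sketch (not from the article; all parameter values are illustrative assumptions). An unobserved confounder U raises both the probability of treatment and the outcome; a naive comparison of treated and untreated patients is then biased in the observational arm, while randomization balances U across groups even though U is never measured:

```python
import math
import random

random.seed(0)

def simulate(n=20000, true_effect=1.0, randomized=False):
    """Return the naive difference in mean outcomes (treated - control).

    U is an unobserved confounder: it increases both the chance of
    receiving treatment (in the observational design) and the outcome.
    """
    treated_y, control_y = [], []
    for _ in range(n):
        u = random.gauss(0, 1)                       # unobserved confounder
        if randomized:
            t = random.random() < 0.5                # assignment ignores U
        else:
            t = random.random() < 1 / (1 + math.exp(-u))  # U drives treatment
        y = true_effect * t + 2.0 * u + random.gauss(0, 1)
        (treated_y if t else control_y).append(y)
    return sum(treated_y) / len(treated_y) - sum(control_y) / len(control_y)

rct_est = simulate(randomized=True)    # close to the true effect of 1.0
obs_est = simulate(randomized=False)   # biased upward by the unmeasured U
```

If U were recorded in the database, regression adjustment would remove the bias; the efficacy-effectiveness gap discussed above is precisely the question of whether such confounders are observed at all.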
Authors
William H. Crown and Barbara E. Bierer