POLICY ANALYSIS
Meta-Analysis of Complex Data via Mixed Models: Tools for Synthesizing Evidence-Based Medicine
Mireya Diaz PhD, Department of Epidemiology and Biostatistics, Case Western Reserve University, Cleveland, OH, USA
Evidence-Based Medicine and Meta-Analysis
Evidence-based medicine (EBM) is the process of generalized medical decision-making based on the systematic and critical evaluation of the existing medical evidence about the care of individuals. That is, it extends the realm of individual medical care to the health care policy arena by summarizing outcomes from a hierarchy of study designs, of which the systematic overview of randomized trials occupies the pinnacle. This summarization process ultimately aims to reduce disparities in practice to a minimum. The use of EBM has increased more than 100-fold over the last 10 years, from only 20 publications listing EBM as a subject heading in 1995 [1]. Meta-analysis forms the methodological pillar of EBM, providing the quantitative summarization of the reviewed evidence. Its use has also increased since 1995, but at a slower rate, given that by then it had already gained substantial acceptance within the medical literature. The merits of meta-analysis lie in its ability to: 1) evaluate objectively the efficacy of interventions; 2) combine the existing evidence in order to resolve issues with high uncertainty; 3) explore and explain differences among results from distinct studies; and 4) foster the design and execution of new studies.
Statistical Models in Meta-Analysis
First attempts to provide a quantitative summarization of different studies were based on the well-known fixed effects model. This statistical model treats the sample of studies obtained as the entire population of studies of interest. It assumes that the effect estimate is the same across studies; that is, it is homogeneous. However, studies are rarely that similar, and there is variation among studies beyond that expected from the imprecision of their results. This variation is known as heterogeneity. Heterogeneity can arise from different sources: statistical (outcome definition, parameter estimates), methodological (designs, protocols), or clinical (patient populations, practitioners). Heterogeneity can also manifest in two ways, depending on whether it is observed across studies for a given treatment (i.e., additive heterogeneity) or with regard to the relative performance of two interventions within a study (i.e., interactive heterogeneity) [2]. In the presence of heterogeneity the simple fixed effects model fails to incorporate the additional variability and therefore overstates the precision of the overall effect (its confidence interval is too narrow). To its rescue comes the random effects model [3], which, by treating the studies being examined as a sample from a larger population of studies, acknowledges the potential existence of sources of variation beyond those observed. When the effect estimate is in fact similar across studies, fixed and random effects estimates are alike.
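To make the contrast concrete, the sketch below pools a handful of effect estimates with the inverse-variance fixed effects estimator and with the DerSimonian-Laird moment estimator for the random effects model; the study values are hypothetical illustrations, not data from any cited study.

```python
# Minimal sketch: fixed vs random effects pooling (DerSimonian-Laird).
# The effect estimates y and within-study variances v are hypothetical.
import math

y = [0.30, 0.10, 0.45, 0.22]   # hypothetical study effects (e.g., log odds ratios)
v = [0.04, 0.02, 0.09, 0.05]   # hypothetical within-study variances

k = len(y)
w = [1.0 / vi for vi in v]                                 # inverse-variance weights
theta_f = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)    # fixed effects estimate

# Cochran's Q and the DerSimonian-Laird moment estimator of tau^2
q = sum(wi * (yi - theta_f) ** 2 for wi, yi in zip(w, y))
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (k - 1)) / c)                         # between-study variance

w_star = [1.0 / (vi + tau2) for vi in v]                   # weights inflated by heterogeneity
theta_r = sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)  # random effects estimate

se_f = math.sqrt(1.0 / sum(w))        # fixed effects standard error
se_r = math.sqrt(1.0 / sum(w_star))   # wider whenever tau^2 > 0

print(f"fixed: {theta_f:.3f} (SE {se_f:.3f}); "
      f"random: {theta_r:.3f} (SE {se_r:.3f}); tau^2 = {tau2:.3f}")
```

When tau-squared is estimated as zero, the two estimates and their standard errors coincide, mirroring the closing observation of the paragraph above.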
Having achieved this level of statistical sophistication, it is nowadays possible to combine more complex outcomes and/or study designs within a single meta-analysis. We will therefore see more meta-analyses dealing with: 1) outcomes other than the standard means or proportions; 2) multiplicities of different natures (several strata, several points in time, several outcomes); and 3) mixed comparisons.
Mixed Comparisons
The term mixed comparison denotes the use of both direct and indirect comparisons of effects across studies. That is, the effects of two or more interventions for the same condition are either compared directly within the same study, or examined in different studies and then compared indirectly. The concept can be extended to a broader context, such as the comparison of strata rather than interventions. The use of indirect comparisons has proved advantageous where no direct estimates are available; where the power provided by direct comparisons is small but a body of evidence exists regarding the individual treatment effects or their comparisons with other interventions; or where interest lies in comparisons across a multitude of interventions [6, 7]. However, indirect comparisons cannot be used indiscriminately. Their advantages materialize only under conditions such as similarity of the patient samples, study designs, and implementation of the interventions. In such scenarios, empirical studies have shown that the two types of comparisons generally agree [6]. A simple form of indirect comparison is sketched below.
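As an illustration of the mechanics, the following sketch carries out an adjusted indirect comparison through a common comparator (the approach popularized by Bucher and colleagues, named here as a well-known technique rather than one drawn from this article). All effect estimates and standard errors are hypothetical.

```python
# Minimal sketch of an adjusted indirect comparison via a common comparator.
# Suppose trials compare A vs B and C vs B, but never A vs C directly.
# All numbers below are hypothetical illustrations.
import math

d_ab, se_ab = -0.50, 0.15   # pooled effect of A vs B (e.g., log odds ratio) and its SE
d_cb, se_cb = -0.20, 0.12   # pooled effect of C vs B and its SE

# Indirect estimate of A vs C through the common comparator B
d_ac = d_ab - d_cb
se_ac = math.sqrt(se_ab ** 2 + se_cb ** 2)   # variances add, so precision is lower

lo, hi = d_ac - 1.96 * se_ac, d_ac + 1.96 * se_ac
print(f"indirect A vs C: {d_ac:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

The added variance in the indirect route is exactly why such comparisons pay off only when the constituent trials are similar enough for the subtraction to be meaningful.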
More recently, Lumley [8] proposed a measure to assess the level of consistency, or "coherence," of treatment comparisons across studies when there is redundancy of information with respect to particular interventions. This measure is used not to select which comparisons should be incorporated into the meta-analysis, but rather to account for the additional uncertainty introduced by the indirect comparisons in the confidence measures of the effect estimate (i.e., confidence intervals). He proposed the coherence measure as a partial solution to the problem of the reliability of cross-study comparisons.
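The stylized sketch below conveys the general idea only: it assumes, purely for illustration, that an estimated incoherence standard deviation omega adds directly to the variance of the network estimate. Lumley's actual proposal embeds incoherence as a random effect within a linear mixed model, so this simple inflation rule is an assumption of this sketch, not his formula.

```python
# Stylized illustration only: widening a confidence interval to reflect
# incoherence across a network of comparisons. The additive-variance rule
# below is a simplifying assumption, not Lumley's exact model.
import math

d_net, se_net = -0.30, 0.10   # hypothetical network estimate and its SE
omega = 0.08                  # hypothetical estimated incoherence SD

se_total = math.sqrt(se_net ** 2 + omega ** 2)   # extra cross-study uncertainty
lo, hi = d_net - 1.96 * se_total, d_net + 1.96 * se_total
print(f"estimate {d_net:.2f}, 95% CI {lo:.2f} to {hi:.2f} "
      f"(vs half-width {1.96 * se_net:.2f} ignoring incoherence)")
```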
Future Developments
Much has been done to provide practitioners with tools so that more and more complex meta-analyses of the clinical evidence can be performed. However, much still has to be done within statistical methodology in relation to the use of mixed estimates. Most of the work done so far, which is somewhat limited in itself, relates to paired comparisons. The body of work so far does not assess the robustness of methods for incorporating single-arm studies into paired comparisons in order to increase overall power. The utility of the coherence measure should be assessed from this perspective. This goes hand in hand with the future research endeavors proposed by Lumley [8] for network meta-analysis in the context of multi-armed trials. Two other issues within the framework of mixed comparisons highlighted by other authors [7, 8] that deserve further consideration are the covariance structures and the assessment of the assumption of additive intervention effects in these networks of treatment comparisons.
References
1. Zou KH, Fielding JR, Ondategui-Parra S. What is evidence-based medicine? Acad Radiol 2004;11:127-33.
2. Berry SM. Understanding and testing for heterogeneity across 2x2 tables: application to meta-analysis. Statist Med 1998;17:2353-69.
3. Berkey C, Hoaglin D, Mosteller F, Colditz G. A random effects regression model for meta-analysis. Statist Med 1995;14:395-411.
4. Efron B. Empirical Bayes methods for combining likelihoods. JASA 1996;91:538-50.
5. Morris CN. Parametric empirical Bayes inference: theory and applications. JASA 1983;78:47-55.
6. Song F, Altman DG, Glenny A-M, Deeks JJ. Validity of indirect comparison for estimating efficacy of competing interventions: empirical evidence from published meta-analyses. BMJ 2003;326:1-5.
7. Lu G, Ades AE. Combination of direct and indirect evidence in mixed treatment comparisons. Statist Med 2004;23:3105-24.
8. Lumley T. Network meta-analysis for indirect treatment comparisons. Statist Med 2002;21:2313-24.