Guiding Principles for Using Clinical Outcomes Assessments in Real-World Studies: What to Do When There Is No Regulatory Guidance
Angela Rylands, CPsychol, PhD, BSc, International Outcomes Research, Kyowa Kirin, Marlow, England, UK; Ana Maria Rodriguez, PhD, MSc, BSc, PT, IQVIA, Madrid, Spain and McGill University, Montreal, Quebec, Canada; Elizabeth Molsen-David, RN, ISPOR, Lawrenceville, NJ, USA on behalf of the ISPOR Clinical Outcomes Assessment Special Interest Group
Introduction
Real-world data (RWD) and real-world evidence (RWE) are increasingly important in healthcare decision making. Medical product developers generate RWD and RWE to support and add value to their randomized clinical trial (RCT) findings (eg, via data captured in electronic health records, patient-generated data from surveys, wearables, mobile devices, etc, and from healthcare claims and disease registries).
Real-world studies can be used to generate insights that contextualize and generalize the findings from clinical trials, where participants are selected by strict inclusion/exclusion criteria. Regulatory agencies and reimbursement authorities also use RWD and RWE to monitor the efficacy, safety, and cost-effectiveness of novel products. Well-designed real-world studies can provide additional evidence for clinical effectiveness and safety for patients under an array of heterogeneous conditions, as well as demonstrate the patient-relevant value of new products to end users (ie, patients, carers, physicians, and payers).
A clinical outcome assessment (COA) is a clinical evaluation instrument used to measure patient outcomes in a clinical trial. There are 4 types of COAs: patient-reported outcomes, clinician-reported outcomes, observer-reported outcomes, and performance-based outcome assessments. However, there are limitations to incorporating the patient voice captured in RWE into clinical and regulatory decision making because of the lack of standardization among real-world studies. Prospective real-world studies are designed to reflect clinical experience across a broader and more diverse distribution of patients than an RCT, and they often use the same COAs developed for trials because many of these studies seek to provide a line of evidence complementary to that of an RCT.
Nonetheless, this approach can become problematic because current practices surrounding the implementation and interpretation of COA data are variable, particularly in real-world practice, where data are collected outside the constraints of an RCT. While some studies are designed robustly, with clear study hypotheses and research objectives and validated COAs used to derive data, other studies are carried out without validated or reliable COAs, leading to questionable and ambiguous findings. When COAs are used in real-world studies to measure patient-reported endpoints, a robust study design is critical to ensure the appropriate use and application of COAs and patient-relevant data analysis, and thereby high-quality real-world study findings.
"Well-designed real-world studies can provide additional evidence for clinical effectiveness and safety for patients under an array of heterogenous conditions, as well as demonstrate the patient-relevant value of new products to end users (ie, patients, carers, physicians, and payers)."
To date, there is no regulatory or health technology appraisal guidance, nor are there publications, pertaining to the standardization of COA use in real-world studies. This differs from clinical trials. The US Food and Drug Administration (FDA) has produced guidance on patient-focused drug development1,2 and patient-reported outcomes in the seminal 2009 Guidance for Industry Patient-Reported Outcome Measures: Use in Medical Product Development to Support Labeling Claims.3 ISPOR has supplemented the 2009 guidance by publishing 12 ISPOR Good Practices Reports (Table 1) that provide additional detail on trial conduct for medical label claims.4-15
Table 1. ISPOR’s 12 PRO and COA Good Practices Reports
In addition, the European Medicines Agency has published guidance on the incorporation of COAs (in this case, again focusing on patient-reported outcomes16) as a measure of treatment efficacy in clinical trials. Finally, other organizations, such as the CONSORT group, are working on the standardization of COA use in clinical trials.17
While the existing guidance sets a high standard for study design, these documents were written in the context of regulatory approvals or specifically for RCTs. They may therefore fail to address the nuances that arise in studies carried out in the real-world setting. This is problematic because, if we are to truly capture the patient voice using COAs in real-world studies, guidance needs to address the particularities of COA data collected outside RCTs, including heterogeneous patient samples, biases potentially created by open-label use, data collection practices that do not reflect common clinical practice, and the impact of different study settings.
The ISPOR COA SIG’s Member Engagement Working Group undertook an ISPOR-wide survey project to determine the importance of guidance on incorporating COAs into real-world studies. The survey’s primary objective was to gauge interest in (1) best practices for the design, use, and analysis of COA data in real-world studies, (2) methods for operationalizing COAs in real-world studies, and (3) regulatory guidance for the use of COAs in real-world studies.
Drawing on the survey’s findings, the working group developed a thought-provoking roundtable discussion, “Guiding Principles for Using COAs in Real-World Studies,” to discuss the challenges in conducting these studies and potential solutions. The panelists discussed 4 primary concerns with the current use of COAs in real-world studies, particularly in comparison with the use of COAs in clinical trials, and proposed corresponding solutions (Table 2).
Table 2. Identified concerns with clinical outcome assessment use in real-world studies
The first concern was a lack of transparency about study design in real-world studies compared with the transparency expected in clinical trials. For instance, many real-world studies develop stand-alone questions for use in a single study rather than searching for and using existing, validated COAs.
A potential solution proposed was the creation of decision panels for specific therapeutic areas charged with recommending appropriate validated COAs for each context of use. This could generate a known set of COAs to be used across studies, allowing for greater consistency in study design and, in turn, facilitating comparison between studies. Emphasis should be placed on interdisciplinary collaboration, with patient-centricity/COA specialists involved in outcomes decision making, analyses, interpretation, and education throughout the product evidence life cycle.
The second issue raised concerned the analysis of COA data in real-world studies, specifically a lack of a priori planning. Panelists emphasized the need to justify the selection of the COA at the beginning of the study and then to prespecify the endpoints to be analyzed, especially given that many COAs can generate multiple endpoints. This justification should include the rationale for the scoring algorithm, covering both the domain scores and the total score of each COA.
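As a purely illustrative sketch of what a priori specification of a scoring algorithm might look like, the short example below defines domain-to-item mappings and a rescaling rule up front and applies them to item-level responses. The instrument, domain names, items, and 0-100 rescaling are invented for this illustration; they do not correspond to any real COA or to a method described in this article.

```python
# Purely hypothetical sketch: a prespecified scoring algorithm for an invented COA.
# Domain names, item names, and the 0-100 rescaling rule are illustrative only.

import pandas as pd

# A priori scoring specification: which items feed each domain, and the item range.
SCORING_SPEC = {
    "physical_function": ["item_1", "item_2", "item_3"],
    "fatigue": ["item_4", "item_5"],
}
ITEM_RANGE = (1, 5)  # assumed 5-point response scale


def score_domain(responses: pd.DataFrame, items: list[str]) -> pd.Series:
    """Mean of a domain's items per respondent, rescaled to 0-100 (no imputation here)."""
    lo, hi = ITEM_RANGE
    raw = responses[items].mean(axis=1, skipna=False)
    return (raw - lo) / (hi - lo) * 100


def score_coa(responses: pd.DataFrame) -> pd.DataFrame:
    """Apply the prespecified algorithm: one score per domain plus a total score."""
    scores = pd.DataFrame(index=responses.index)
    for domain, items in SCORING_SPEC.items():
        scores[domain] = score_domain(responses, items)
    scores["total"] = scores[list(SCORING_SPEC)].mean(axis=1)
    return scores


if __name__ == "__main__":
    data = pd.DataFrame(
        {"item_1": [5, 3], "item_2": [4, 2], "item_3": [5, 3],
         "item_4": [2, 4], "item_5": [1, 5]}
    )
    print(score_coa(data))
```

Writing such a specification into the protocol or statistical analysis plan before data collection makes clear which domain and total scores were defined a priori rather than selected after the results were seen.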
"There was consensus that there needs to be more rigor in the selection, implementation, and analysis of COAs in real-world studies."
A third concern was mitigating missing data, which is a common problem in real-world studies because study visits or data capture cannot be imposed when they are not routine practice, resulting in less monitoring of data completeness. It was proposed that, at the outset, attention be paid to questionnaire length and the order of questions: because missing data often occur on the last few questions, responses that derive a primary or key secondary endpoint should be queried first. A further suggestion to minimize missing data was the use of electronic data capture whenever possible.
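As a minimal, hypothetical illustration of how item-level missingness by question position could be monitored, the sketch below computes the proportion of missing responses for each item in presentation order. The data and column names are invented; this is not a method described in the article.

```python
# Hypothetical sketch: proportion of missing responses per item, in presentation order.
# Data and column names are invented for illustration.

import pandas as pd


def missingness_by_position(responses: pd.DataFrame, item_order: list[str]) -> pd.Series:
    """Share of respondents missing each item, listed in the order items were presented."""
    return responses[item_order].isna().mean()


if __name__ == "__main__":
    responses = pd.DataFrame(
        {"item_1": [5, 3, 4], "item_2": [4, None, 2],
         "item_3": [None, None, 3], "item_4": [None, None, None]}
    )
    print(missingness_by_position(responses, ["item_1", "item_2", "item_3", "item_4"]))
    # Rising missingness toward later items would support asking endpoint-deriving
    # items first and keeping the questionnaire short.
```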
The final problem discussed was that current guidelines (eg, the Framework for FDA’s Real-World Evidence Program) do not sufficiently cover the use of COAs in a real-world context. For instance, patients are unblinded to treatment in real-world studies, which raises the concern of bias in their answers. Furthermore, real-world studies can report data from multiple stakeholders and often involve international collaboration, which can raise issues of data governance.
Generally, it was agreed that real-world studies would benefit from applying the existing FDA guidance documents on the use of COAs in clinical trials.1-3 Additionally, it was suggested that COAs developed in accordance with the best practices outlined in the 2009 FDA guidance should be considered during the study design phase for use in real-world studies, although adaptations may be needed. There was consensus that there needs to be more rigor in the selection, implementation, and analysis of COAs in real-world studies.
The findings from the survey and roundtable discussion demonstrate the need for guidance to standardize the current variable approaches. The development of emerging good practices for COAs in real-world studies, like the previously mentioned ISPOR Good Practices Reports, would be a step in the right direction. The refinement and standardization of current practices will ultimately lead to more robust, patient-relevant data generated from real-world studies; such data are invaluable to the multiple stakeholders involved in healthcare decision making.
Acknowledgment: The authors gratefully acknowledge the following COA SIG members who contributed to the development of the survey and roundtable: Katja Rudell, Parexel; Laurie Batchelder, IQVIA; Martha Bayliss, Optum; Laurie Burke, LORA Group; David Churchman, University of Oxford; Helen Doll, Clinical Outcomes Solutions; Coleen McHorney, Evidera; Sara Nazha, McGill University; Hye Jin Park, Johnson & Johnson; Vanessa Patel, Covance; Jiat Ling Poon, Eli Lilly; Ana Popielnicki, TransPerfect; Justin Raymer, University of Oxford; Tara Symonds, Clinical Outcomes Solutions; Michelle Tarver, US Food & Drug Administration; Robyn von Maltzahn, GSK; and Paul Williams, IQVIA.
References
1. US Food and Drug Administration. Patient-Focused Drug Development: Collecting Comprehensive and Representative Input. Rockville, MD: Food and Drug Administration, US Dept of Health and Human Services; 2018. Accessed July 12, 2021. https://www.fda.gov/regulatory-information/search-fda-guidance-documents/patient-focused-drug-development-collecting-comprehensive-and-representative-input
2. US Food and Drug Administration. Patient-Focused Drug Development: Methods to Identify What Is Important to Patients Guidance for Industry, Food and Drug Administration Staff, and Other Stakeholders. Rockville, MD: Food and Drug Administration, US Dept of Health and Human Services. Published February 2022. Accessed October 3, 2022. https://www.fda.gov/regulatory-information/search-fda-guidance-documents/patient-focused-drug-development-methods-identify-what-important-patients
3. US Food and Drug Administration. Guidance for Industry Patient-Reported Outcome Measures: Use in Medical Product Development to Support Labeling Claims. Rockville, MD: Food and Drug Administration, US Dept of Health and Human Services; 2009. Accessed July 1, 2021. https://www.fda.gov/regulatory-information/search-fda-guidance-documents/patient-reported-outcome-measures-use-medical-product-development-support-labeling-claims
4. Wild D, Grove A, Martin M, et al. Principles of good practice for the translation and cultural adaptation process for patient-reported outcomes (PRO) measures: report of the ISPOR Task Force for Translation and Cultural Adaptation. Value Health. 2005;8(2):94-104.
5. Wild D, Eremenco S, Mear I, et al. Multinational trials—recommendations on the translations required, approaches to using the same language in different countries, and the approaches to support pooling the data: the ISPOR Patient-Reported Outcomes Translation and Linguistic Validation Good Research Practices Task Force Report. Value Health. 2009;12(4):430-440.
6. Coons SJ, Gwaltney CJ, Hays RD, et al. Recommendations on evidence needed to support measurement equivalence between electronic and paper-based patient-reported outcome (PRO) measures: ISPOR ePRO Good Research Practices Task Force Report. Value Health. 2009;12(4):419-429.
7. Rothman M, Burke L, Erickson P, et al. Use of existing patient-reported outcome (PRO) instruments and their modification: the ISPOR Good Research Practices for Evaluating and Documenting Content Validity for the Use of Existing Instruments and Their Modification PRO Task Force Report. Value Health. 2009;12(8):1075-1083.
8. Patrick DL, Burke LB, Gwaltney CJ, et al. Content validity—establishing and reporting the evidence in newly developed patient-reported outcomes (PRO) instruments for medical product evaluation: ISPOR PRO Good Research Practices Task Force Report: part 1—eliciting concepts for a new PRO instrument. Value Health. 2011;14(8):967-977.
9. Patrick DL, Burke LB, Gwaltney CJ, et al. Content validity—establishing and reporting the evidence in newly-developed patient-reported outcomes (PRO) instruments for medical product evaluation: ISPOR PRO Good Research Practices Task Force Report: part 2—assessing respondent understanding. Value Health. 2011;14(8):978-988.
10. Zbrozek A, Hebert J, Gogates G, et al. Validation of electronic systems to collect patient-reported outcome (PRO) data - recommendations for clinical trial teams: report of the ISPOR ePRO Systems Validation Good Research Practices Task Force. Value Health. 2013;16(4):480-489.
11. Matza LS, Patrick D, Riley AW, et al. Pediatric patient-reported outcome instruments for research to support medical product labeling: report of the ISPOR PRO Good Research Practices for the Assessment of Children and Adolescents Task Force. Value Health. 2013;16(4):461-479.
12. Eremenco S, Coons SJ, Paty J, et al. PRO data collection in clinical trials using mixed modes: report of the ISPOR PRO Mixed Modes Good Research Practices Task Force. Value Health. 2014;17:501-516.
13. Walton MK, Powers JH III, Hobart J, et al. Clinical outcome assessments: a conceptual foundation—report of the ISPOR Clinical Outcomes Assessment Emerging Good Practices Task Force. Value Health. 2015;18(6):741-752.
14. Powers JH III, Patrick DL, Walton MK, et al. Clinician-reported outcome (ClinRO) assessments of treatment benefit: report of the ISPOR Clinical Outcome Assessment Emerging Good Practices Task Force. Value Health. 2017;20(1):2-14.
15. Benjamin K, Vernon MK, Patrick DL, Perfetto E, Nestler-Parr S, Burke L. Patient-reported outcome and observer-reported outcome assessment in rare disease clinical trials—an ISPOR COA Emerging Good Practices Task Force Report. Value Health. 2017;20(7):838-855.
16. European Medicines Agency. Committee for Medicinal Products for Human Use (CHMP). Appendix 2 to the guideline on the evaluation of anticancer medicinal products in man: the use of patient-reported outcome (PRO) measures in oncology studies. 2016. Accessed October 27, 2021. https://www.ema.europa.eu/en/documents/other/appendix-2-guideline-evaluation-anticancer-medicinal-products-man_en.pdf
17. Calvert M, Blazeby J, Altman DG, et al. Reporting of patient-reported outcomes in randomized trials: the CONSORT PRO extension. JAMA. 2013;309(8):814-822.