Classifying Study Designs in HTA: A New Tool to Assist in the Identification of Study Designs for the Purposes of HTA
Speaker(s)
Ferrante di Ruffano L1, Bishop E2, Reddish K2, Watkins D2, Edwards M3, McCool R2
1York Health Economics Consortium, York, UK, 2York Health Economics Consortium, York, UK, 3York Health Economics Consortium, York, UK
OBJECTIVES: Randomised controlled trials (RCTs) are the gold standard for assessing efficacy and safety. Increasingly, however, health technology assessment (HTA) considers evidence from non-randomised studies. Guidance recommends synthesising different designs separately because of their different inherent biases and limitations. If reviewers misclassify studies, this can affect which studies are included, potentially impacting review findings and the robustness of the evidence available to decision-makers and patients. This research aims to develop a clear study design classification system based on ROBINS-I terminology, for use by reviewers of any experience level when performing HTA of pharmaceutical interventions.
METHODS: We performed a pragmatic web-based search for existing tools and appraised them to inform the development of a clear classification algorithm. The tool's utility, consistency and user experience were first assessed via a web-based survey of a small internal sample of reviewers, each of whom independently used the system to categorise 18 published studies. Following improvements, the updated version was tested in a larger group of reviewers from multiple commercial and public organisations.
RESULTS: We present a graphic tool for identifying study designs when performing HTA of pharmaceuticals. In piloting, a median of 7 reviewers (range 4-8) categorised each study. Inter-rater agreement varied widely: reviewers agreed unanimously on the designs of 3/18 studies (17%), and ≥75% of reviewers agreed on one design for 9/18 studies (50%). The most common sources of disagreement were between different types of cohort studies, and between case series and controlled cohort studies, largely due to inconsistent reporting in the source publications. Results from testing the revised tool in the larger sample will be available shortly.
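The agreement figures above are proportions of raters selecting the same design per study. As a hypothetical sketch of how such per-study agreement can be computed (this is not the authors' analysis code, and the study labels below are illustrative only), one can take the share of raters choosing the modal category:

```python
from collections import Counter

def modal_agreement(ratings):
    """Proportion of raters who chose the most common (modal) design label."""
    counts = Counter(ratings)
    return counts.most_common(1)[0][1] / len(ratings)

# Hypothetical ratings from four reviewers for three studies
studies = {
    "study_A": ["RCT", "RCT", "RCT", "RCT"],                         # 100% agreement
    "study_B": ["cohort", "cohort", "cohort", "case series"],        # 75% agreement
    "study_C": ["cohort", "case series", "case-control", "cohort"],  # 50% agreement
}

for name, ratings in studies.items():
    print(name, modal_agreement(ratings))
```

A study counts toward the "≥75% agreement" figure when `modal_agreement` returns at least 0.75 for its ratings.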
CONCLUSIONS: The pilot tool produced too much variation in study design categorisation to be useful. Consequently, we present a revised version evaluated across a larger sample of reviewers. Further research will also investigate whether using the tool would change the results of systematic reviews, using a sample of published reviews.
Code
SA114
Topic
Study Approaches
Topic Subcategory
Literature Review & Synthesis
Disease
No Additional Disease & Conditions/Specialized Treatment Areas