Can We Speak to GPT to Inform Health Technology Assessments?
Author(s)
Srivastava T
ConnectHEOR, London, LON, UK
OBJECTIVES:
Health technology assessments (HTAs) play a pivotal role in informing healthcare decision-making. The emergence of advanced generative artificial intelligence large language models (LLMs), such as OpenAI's GPT-4 and Google's PaLM 2, presents an opportunity to revolutionize HTAs. This abstract examines the potential of LLMs in HTAs.
METHODS:
LLMs can comprehend and generate human-like text, enabling automated analysis of vast volumes of data from diverse sources. Incorporating LLMs into HTAs can streamline and accelerate evidence generation, including literature review, evidence synthesis, economic modeling, and real-world evidence (RWE) analysis.
RESULTS:
A key application of LLMs in HTAs is automating the systematic literature review process: LLMs can rapidly screen studies and extract relevant information from a multitude of records, saving significant time and resources. Second, LLMs can aid in identifying and assessing comparators for novel treatments, especially in single-arm trials, by analyzing various data sources. Third, LLMs can assist in synthesizing and analyzing RWE to evaluate the long-term effectiveness and safety of healthcare interventions, identifying trends, patterns, and safety signals that may not be easily detected through traditional methods. Finally, LLMs can facilitate the conceptualization of health economic models by enabling testing of alternative model structures and clinical pathways, exploring different modeling approaches, and refining the model to represent the complexities of the healthcare system accurately. By iteratively refining models with the assistance of LLMs, analysts can enhance validity, improve predictive capability, reduce uncertainty, and strengthen suitability for HTAs in ways that may be challenging to achieve through human effort alone.
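As an illustration of the screening application described above, the sketch below shows how an LLM might be prompted to make include/exclude recommendations during title and abstract screening. It is a minimal, hypothetical example: it assumes the OpenAI Python client, and the records, inclusion criteria, and prompt wording are invented for illustration; it is not the workflow described in the abstract, and any model recommendation would still require verification by human reviewers.

```python
# Hypothetical sketch: LLM-assisted title/abstract screening for a systematic
# literature review. Assumes the OpenAI Python client (pip install openai) and
# an OPENAI_API_KEY in the environment; records and criteria are invented.
from openai import OpenAI

client = OpenAI()

INCLUSION_CRITERIA = (
    "Include randomized or observational studies of adults with type 2 diabetes "
    "that report HbA1c or cardiovascular outcomes; exclude animal studies, "
    "editorials, and conference abstracts."
)

records = [
    {"id": "rec001", "title": "GLP-1 receptor agonists and HbA1c: an RCT",
     "abstract": "We randomized 400 adults with type 2 diabetes ..."},
    {"id": "rec002", "title": "Murine models of insulin resistance",
     "abstract": "We studied pancreatic beta-cell function in mice ..."},
]

def screen_record(record: dict) -> str:
    """Ask the model for an INCLUDE/EXCLUDE/UNSURE decision with a brief reason."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # deterministic-leaning output for reproducibility
        messages=[
            {"role": "system",
             "content": "You are screening titles and abstracts for a systematic "
                        "review. Answer with INCLUDE, EXCLUDE, or UNSURE, then a "
                        "one-sentence justification."},
            {"role": "user",
             "content": f"Criteria: {INCLUSION_CRITERIA}\n\n"
                        f"Title: {record['title']}\nAbstract: {record['abstract']}"},
        ],
    )
    return response.choices[0].message.content

for rec in records:
    # Each model decision would still be checked by a human reviewer.
    print(rec["id"], "->", screen_record(rec))
```

Logging the model's decision and justification alongside independent human screening decisions would allow agreement to be audited before any reliance is placed on the automated step.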
CONCLUSIONS:
Several challenges must be addressed to ensure the successful integration of LLMs in HTAs, including algorithmic transparency, interpretability, bias, and data quality. Ethical considerations, such as privacy protection and accountability, must also be addressed carefully to build trust and ensure the responsible use of LLMs.
Conference/Value in Health Info
2023-11, ISPOR Europe 2023, Copenhagen, Denmark
Value in Health, Volume 26, Issue 11, S2 (December 2023)
Code
HTA207
Topic
Health Technology Assessment, Methodological & Statistical Research, Study Approaches
Topic Subcategory
Artificial Intelligence, Machine Learning, Predictive Analytics, Decision & Deliberative Processes, Literature Review & Synthesis
Disease
No Additional Disease & Conditions/Specialized Treatment Areas