Abstract: Large Language Models as Data Interfaces for Health Applications
Large Language Models (LLMs) can solve a variety of problems with little to no supervision and have the potential to automate increasingly complex cognitive tasks. However, LLMs still have fundamental limitations: generated outputs can contain harmful biases and hallucinations that are subtle but consequential, particularly in high-stakes domains. How can LLMs be used, not to automate decisions, but to empower experts to make more informed decisions?
In this talk, Amir will propose enlisting LLMs as data interfaces to extract key pieces of information and retrieve evidence from large collections of unstructured data. He will first discuss some of the tradeoffs of framing information extraction and retrieval tasks as conditional language generation. He will then focus on health applications and present three use cases of LLMs as data interfaces to:
– Extract medical claims from social media and retrieve trustworthy evidence to support/refute said claims — this can help social media content moderators combat medical mis- and disinformation, and public health experts monitor the needs and concerns of different populations.
– Extract key medical concepts and evidence regarding the efficacy of different interventions from abstracts describing randomized controlled trials to inform evidence-based medicine practitioners.
– Retrieve evidence from the clinical notes of a patient’s Electronic Health Records to help physicians either confirm a potential diagnosis or identify alternative diagnoses based on the evidence.
Biography
Silvio Amir is a core faculty member at EAI and assistant professor in the Khoury College of Computer Sciences. His research develops Natural Language Processing and Machine Learning methods for personal and user-generated text, such as social media and clinical notes from Electronic Health Records. Amir is primarily interested in methods for tasks involving subjective, personalized, or user-level inferences (e.g., opinion mining and digital phenotyping). In particular, his work aims to improve the reliability, interpretability, and fairness of predictive models and analytics derived from personal and user-generated data. His research is part of ongoing efforts to develop Human-centered AI (i.e., to empower rather than replace humans) and AI for Social Good (i.e., to tackle meaningful social, societal, and humanitarian challenges). To achieve these goals, he often collaborates with domain experts in multidisciplinary projects to address real-world problems in the social sciences, medicine, and epidemiology.
Amir earned his doctorate from the University of Lisbon, conducting part of his doctoral research as a visiting researcher at the University of Texas at Austin and at Northeastern University in Boston. He then moved to Johns Hopkins University, where he completed his postdoctoral research at the Center for Language and Speech Processing and served as a lecturer at the Whiting School of Engineering.