New Institute Collaboration Aims to Prepare Public Health Officials for the Next Pandemic

The Institute for Experiential AI (EAI) at Northeastern University is pleased to announce an innovative new collaboration with the Center for Advanced Preparedness and Threat Response Simulation (CAPTRS) to improve public health officials’ response to the next pandemic.

Under the partnership, the institute is developing a large language model (LLM) that will simulate emerging global health crises and evaluate user responses. The program is designed to show decision makers where gaps lie in their thinking and challenge their assumptions.

For EAI Director of AI + Life Sciences Sam Scarpino, who is leading the initiative with Northeastern Professor of the Practice Kenneth Church, the project exemplifies the unique strengths of the institute.

“What’s special about EAI—and what will make this collaboration successful—is that we have the right combination of subject matter expertise and AI expertise under one roof,” Scarpino says. “That’s all combined with a focus on experiential, human-in-the-loop AI and a delivery model that works for organizations with specific objectives and very tight timelines. It’s hard to imagine a better example of the kinds of work we want to do than this project.”

CAPTRS is a nonprofit organization that seeks to improve the ability of decision makers to address societal threats through simulation gaming. The collaboration will contribute to CAPTRS’ “Universe of Threats,” a catalog of human- and AI-generated scenarios designed to improve preparedness for threats such as pandemics and natural disasters, which will be used in the games CAPTRS is creating.

“These game simulations serve kind of like a flight simulation for a pilot,” CAPTRS co-founder Phil Siegel explains. “So if you’re in FEMA, the CDC, or a state public health organization, you’re practicing your craft and getting better at it so you don’t crash the plane.”

To create the first-of-its-kind system, institute researchers compiled publicly available data on thousands of global outbreaks from the past 20 years. They then trained a large language model on the data and evaluated its ability to create hyper-realistic scenarios by comparing its output both to that of generic models like ChatGPT and to real-world outbreaks. Users type in responses and engage conversationally with the program, which gauges their performance.
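The institute’s actual evaluation method is quantitative and will be detailed in the forthcoming paper. Purely as a loose, hypothetical illustration of the general idea (not the researchers’ method), one crude proxy for “realism” is vocabulary overlap: a fine-tuned model’s alert should share more language with a corpus of real alerts than a generic model’s does. All text below is invented for the example:

```python
# Hypothetical sketch: score how closely a generated outbreak alert
# resembles real alerts, using vocabulary overlap as a crude proxy
# for the kind of quantitative comparison the article describes.

def tokens(text):
    """Lowercase word set for a quick-and-dirty comparison."""
    return set(text.lower().split())

def jaccard(a, b):
    """Jaccard similarity between two token sets."""
    union = len(a | b)
    return len(a & b) / union if union else 0.0

def realism_score(candidate, real_alerts):
    """Mean similarity of a candidate alert to a corpus of real alerts."""
    cand = tokens(candidate)
    return sum(jaccard(cand, tokens(r)) for r in real_alerts) / len(real_alerts)

# Toy, invented stand-ins for real surveillance alerts.
real = [
    "health officials report a cluster of febrile respiratory illness in the region",
    "officials report rising hospitalizations linked to a novel respiratory pathogen",
]
fine_tuned = "officials report a growing cluster of respiratory illness and hospitalizations"
generic = "a mysterious disease is spreading rapidly across the world"

# The fine-tuned alert scores closer to the real corpus than the generic one.
print(realism_score(fine_tuned, real) > realism_score(generic, real))
```

A real evaluation would of course use far richer features (epidemiological parameters, symptom descriptions, messaging style) rather than word overlap; the sketch only shows the shape of the comparison.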

The researchers found their specialized model produced extremely realistic outbreaks that mimicked the characteristics of real diseases, patient symptoms, doctor responses, and messaging from local health organizations. Results will be published in a forthcoming paper.

“The conclusion is that the fine-tuned models generate outbreak alerts that look and feel like real alerts,” Scarpino says. “When you evaluate them quantitatively, they are clearly different from what ChatGPT would generate and much more similar to—in fact, in many cases indistinguishable from—the characteristics of the actual alerts.”

The realism of the simulations is important because it helps evaluate how people will actually behave in a real disease outbreak. It will also allow the team to assess public health organizations’ methods for monitoring health crises more broadly.

“We expect decision makers to learn where their own gaps are in terms of their response to disease outbreaks, but we’ll also learn what our surveillance systems can tell us about what is making people ill—if you remember it was weeks before we knew what was making people ill in 2020,” Scarpino says. “It will both expand people’s repertoire to include more of these plausible pathogen threats and also identify opportunities for improved surveillance, response plans, and more.”

The project brings together experts in a broad range of fields. Scarpino has deep expertise in infectious disease modeling, Church is a leader in computational linguistics and large language models, and CAPTRS’ founders spent the Covid-19 pandemic working with governments on models to improve their response. CAPTRS’ Chief Scientist for Gaming has experience in wargaming for organizations including NATO.

“At the Institute for Experiential AI, we’re committed to leveraging trans-disciplinary teams to tackle big problems at the interface between humans and machine intelligence,” Scarpino says.

CAPTRS has already worked with public health officials in a large city and said the experience underscored the need for these simulations.

“The first time people play it’s a disaster,” Siegel says. “That’s why you have to do it over and over again.”

Now the goal is to expand the collaboration to generate scenarios around natural disasters like forest fires and attacks on critical infrastructure.

“We looked at other organizations and companies, but the thing about EAI that was intriguing is that its AI group is structured not just around artificial intelligence, but also verticals like health care and finance,” Siegel says. “That’s important when you’re creating these targeted applications. I’m not sure a generic AI expert would know how to find this data and use it so effectively.”

Details on the expanded collaboration are expected in coming months.

Learn more about Scarpino’s vision for the institute’s AI + Life Sciences research focus area here.
