Is Statistical Modeling Superior to Wisdom of Crowds?
By Byron Spice
Computer scientists and statisticians at Carnegie Mellon University are using both statistical modeling and the wisdom of crowds to guide their efforts at forecasting 2016-17 flu activity. Past experience suggests it remains an open question which approach is better at predicting the disease's spread week by week.
The Delphi research group, uniting faculty and students from CMU’s Machine Learning, Statistics, Computer Science and Computational Biology Departments, is part of a research initiative with the U.S. Centers for Disease Control and Prevention (CDC) to develop methods of accurately forecasting flu activity.
In the previous flu season, 2015-16, three forecasting systems developed by CMU proved to be the most accurate, besting the 11 competing systems fielded by 10 external groups participating in the initiative that season.
“Our predictions last season proved to be reasonable, but the truth is that when it comes to forecasting epidemics, whether it be for the flu or for other diseases, we’re just getting our feet wet,” said Roni Rosenfeld, professor in the School of Computer Science’s Machine Learning Department and Language Technologies Institute.
Last season’s predictions by the top-ranked CMU forecast system, for instance, were within 25 percent of the CDC’s best estimate of flu activity just 75 percent of the time, said Ryan Tibshirani, associate professor of statistics and machine learning. Though the forecasts are made week by week during the flu season, the CDC’s best estimate is not available until June, well after the flu season has ended.
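Accuracy figures like "within 25 percent of the CDC's best estimate 75 percent of the time" can only be computed retrospectively, once the final estimates arrive in June. As a rough illustration (with hypothetical numbers, not the Delphi group's actual data or scoring method), such a figure amounts to counting how often a forecast's relative error stays under the tolerance:

```python
# Illustrative sketch only -- hypothetical data, not the Delphi group's
# actual forecasts or the CDC's actual estimates.

def fraction_within(forecasts, finals, tolerance=0.25):
    """Fraction of weekly forecasts whose relative error vs. the final
    estimate is within `tolerance` (e.g. 0.25 = within 25 percent)."""
    hits = sum(
        abs(forecast - actual) <= tolerance * actual
        for forecast, actual in zip(forecasts, finals)
    )
    return hits / len(forecasts)

# Hypothetical weekly forecasts vs. the season's eventual best estimates
# (percent of doctor visits showing flu-like illness):
forecasts = [1.5, 2.0, 3.1, 4.5, 3.6, 2.4, 1.8, 1.2]
finals    = [1.4, 2.3, 3.0, 3.5, 3.9, 2.5, 1.5, 1.3]
print(fraction_within(forecasts, finals))  # -> 0.875 (7 of 8 weeks)
```

The lag Tibshirani describes is why such scoring is only possible after the season: the `finals` column does not exist until well after the weekly forecasts were issued.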
Making those predictions more reliable on a weekly basis would no doubt be necessary before such forecasts might be used for deciding when to launch flu information and vaccination campaigns or for making staffing and scheduling decisions within the healthcare industry, he added.
“We’re still trying to squeeze everything we can from these models,” Tibshirani said.
Much epidemiological forecasting has been based on mechanistic models that consider how diseases spread and who is susceptible to them. But the Delphi group's top-ranked system was a non-mechanistic model that uses a type of statistical modeling called machine learning to make predictions based on past patterns and on input from the CDC's domestic influenza surveillance system. The surveillance system includes reports from doctors' offices and clinics regarding the prevalence of flu-like symptoms.
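In the simplest terms, a non-mechanistic forecaster learns a mapping from recent surveillance data to next week's activity, with no model of transmission or susceptibility. A minimal sketch of the idea (hypothetical data; the Delphi group's actual system is far more sophisticated) is an autoregressive fit, predicting next week's flu-like-illness rate from this week's:

```python
# Illustrative sketch only -- a toy non-mechanistic forecaster, not the
# Delphi group's actual model. It learns x[t+1] = a * x[t] + b from a
# past season by least squares, then extrapolates one week ahead.

def fit_ar1(series):
    """Least-squares fit of next-week rate as a linear function of
    this-week rate over a historical series."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

def forecast_next_week(history):
    """One-week-ahead forecast from the most recent observation."""
    a, b = fit_ar1(history)
    return a * history[-1] + b

# Hypothetical weekly flu-like-illness percentages from a past season:
past_season = [1.2, 1.4, 1.9, 2.6, 3.5, 4.1, 3.8, 3.0, 2.2, 1.6]
print(round(forecast_next_week(past_season), 2))
```

The appeal of this family of methods is exactly what the article describes: no mechanistic assumptions are needed, only enough historical surveillance data to learn the patterns.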
But Delphi's second-ranked system takes a very different approach: it aggregates weekly predictions by humans that, together, reflect the wisdom of crowds. This human system was the top-ranked forecasting system for the 2014-15 flu season, Rosenfeld said.
“It’s humbling,” Rosenfeld said, from a computer scientist’s point of view, that the human system has been neck-and-neck with the statistical, machine-learning system. “Any one human did not do better than the statistical system – they did worse. But in the aggregate, the human system did better that season.”
“The human system is more robust to unusual circumstances,” Rosenfeld explained, so may do well when flu activity falls outside normal bounds. “Humans are very good at improvising when they encounter novel circumstances.”
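Rosenfeld's observation that no single human beat the statistical system, yet the aggregate did, is the core of the wisdom-of-crowds effect: combining many individual estimates cancels out individual over- and under-shoots. One common way to aggregate such a crowd (the Delphi system's exact method may differ) is to take the median of the submitted forecasts:

```python
# Illustrative sketch only -- a common crowd-aggregation rule, not
# necessarily the one the Delphi "epicast" system uses.
from statistics import median

def crowd_forecast(individual_predictions):
    """Combine individual weekly forecasts into one crowd forecast.
    The median resists outliers better than the mean."""
    return median(individual_predictions)

# Hypothetical flu-activity predictions (percent) from five participants:
predictions = [2.1, 3.4, 2.8, 2.5, 4.0]
print(crowd_forecast(predictions))  # -> 2.8
```

The median's robustness to outliers mirrors Rosenfeld's point: even if one participant's guess is far off, the aggregate stays sensible.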
It is far too early to say how the systems are doing this season, he emphasized. Forecasts begin in October and continue through May, with forecasts issued each week for flu activity nationally and for each of 10 regions within the United States. But because of lags in reporting, the true flu activity levels will not be known until the season is over.
“Obtaining high quality data is critical for epidemiological forecasting, but it’s hard to get,” he explained.
Flu is useful for developing forecasting systems because data are plentiful. The data are notoriously "noisy," however, because they usually reflect symptoms, not tests for the flu viruses themselves, Rosenfeld said.
“You begin with the flu because that’s where you have the most data,” he said. But the Delphi group also is developing forecasting systems for dengue fever, which sickens about 100 million people worldwide each year, killing thousands. And the group would like to apply forecasting tools to such diseases and conditions as HIV, drug resistance, Ebola, Zika and Chikungunya.
Epidemiological modeling and forecasting is a highly interdisciplinary endeavor. In addition to Rosenfeld and Tibshirani, the Delphi group that worked on the CDC flu challenges included David Farrow, a recently graduated Ph.D. student in computational biology; Logan Brooks, a Ph.D. student in computer science; and Justin Hyun, a Ph.D. student in statistics.
“We are trying to create a broader technology for epidemics,” Rosenfeld said.
In the meantime, people can help the Delphi group’s efforts by joining its “wisdom of crowds” forecasting system. Anyone can participate by registering at http://epicast.org/.
CMU's Delphi group belongs to a University of Pittsburgh-based MIDAS National Center of Excellence, a National Institutes of Health-funded network of researchers developing computational models to guide responses to disease outbreaks.