Why I Reject the Comparison of Metaculus to Prediction Markets
Metaculus is not a prediction market. Metaculus and prediction markets both aggregate users’ forecasts, and both reward users for accurately anticipating the future. But the mechanism for doing so says a lot about the values of each. The purpose of markets is to determine price, and participants try to maximize profits. The purpose of science is to improve our understanding of the world, and participants are motivated to find the truth. Prediction markets conflate the two missions and motivations, whereas Metaculus is a scientific platform specifically designed to maximize epistemic value, not monetary value.
In prediction markets, participants buy and sell contracts that pay out based on the outcome of a future event. They place bets, just as you would at a casino, and the rewards are financial and zero-sum. You win money that someone else loses. The most generous interpretation of this arrangement is that bettors produce more accurate forecasts because they have “skin in the game.” But research shows that this is not necessarily true: forecasting platforms like Metaculus can outperform prediction markets. What’s more, when the rewards are big enough, prediction markets become unstable. Participants are incentivized to manipulate rather than predict the outcome, which is one reason regulators restrict their activity.
In contrast, at Metaculus we reward forecasters over time for their accuracy and calibration across many predictions. The platform’s architecture is inspired by Bayesian statistics, ensemble modeling, time series analysis, machine learning, and related techniques. Metaculus is creating an entire forecasting ecosystem that enables collective intelligence to be harnessed in the service of accurately anticipating the future. We offer newcomers a chance to learn and practice the skill of forecasting in a rich and rigorous intellectual community, and we offer advanced forecasters the opportunity to work on undeniably interesting and important questions, investigating such topics as the course of the war in Ukraine, the impact of artificial intelligence, and our ability to respond to future pandemics.
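To make the contrast with zero-sum betting concrete, here is a toy illustration of how a proper scoring rule can reward accuracy and calibration across many predictions. This is only a sketch, not Metaculus's actual scoring system: a logarithmic score measured against a 50/50 baseline is positive when a forecast beats chance, and it accumulates over a track record rather than transferring money from one bettor to another.

```python
import math

def log_score(p: float, outcome: bool) -> float:
    """Log score against a 50/50 baseline: positive when the forecast
    beats an uninformed coin flip, negative when it was confidently
    wrong. (Illustrative only; platform scoring is more elaborate.)"""
    p_outcome = p if outcome else 1.0 - p
    return math.log2(p_outcome) - math.log2(0.5)

# A forecaster's standing reflects average performance over many
# questions, so consistent calibration matters more than one lucky bet.
forecasts = [(0.9, True), (0.7, True), (0.2, False), (0.6, False)]
track_record = sum(log_score(p, o) for p, o in forecasts) / len(forecasts)
```

Because the score is proper, the best strategy is simply to report one's honest probability: there is no payoff for manipulating the outcome, only for predicting it well over time.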
We do financially reward forecasters who perform exceptionally well over a set of predictions–e.g., we offer cash prizes to top performers in many of our tournaments–and we pay top forecasters for their contributions to our policy work, compensating them for their time just as you would anyone with a valuable skill. But the dynamic is primarily cooperative, not competitive. What draws forecasters to Metaculus is the community we have built–a community of practitioners who care about the respect of their peers and value the impact they can have through our policy programs.
As I’ve said before, Metaculus represents a novel experiment in epistemic infrastructure. Our forecasting platform, organization, community, and core values function well together because we have tightly integrated forecasting research, infrastructure, community building, talent identification, professionalization, and policy programs. The different parts of the ecosystem provide the forecasting community with intellectual challenges, learning opportunities, social engagement, financial rewards, a path to prestige, the satisfaction of public-spirited impact, and, yes, a chance to have fun.
Even though the platform will mark its 8th anniversary later this year, I believe we are still very much at the beginning of our journey. (Given the number of Metaculus forecasts that resolve in 2050, 2100, and beyond, we need to have a long-term plan.) There’s still a lot of learning, iteration, and innovation ahead of us, but based on what has worked thus far, here is the synthesis of what we do:
1. Model parameterization
We collaborate with modelers and researchers to generate hybrid human-ML models that match or outperform state-of-the-art forecasting systems. Our longstanding collaborations with researchers at the University of Virginia’s Biocomplexity Institute, as well as Tom McAndrew’s biostatistics lab at Lehigh University, have led to a number of promising experiments in using human judgmental forecasting to parameterize computational models.
- Aggregating probabilistic predictions of the safety, efficacy, and timing of a COVID-19 vaccine, June 2021
- Chimeric forecasting: combining probabilistic predictions from computational models and human judgment, Feb 2022
- Early human judgment forecasts of human monkeypox, May 2022, July 2022
- Utility of human judgment ensembles during times of pandemic uncertainty: A case study during the COVID-19 Omicron BA.1 wave in the USA, October 2022
This line of research connecting forecasting and modeling is ongoing, and continues to generate extremely promising results.
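As a minimal sketch of the idea behind this line of research, and assuming nothing about the methods the papers above actually use, one simple way to combine human judgment with a computational model is a linear opinion pool: weight each component's probability distribution and blend them. The weights and numbers here are purely illustrative.

```python
def chimeric_forecast(human_probs, model_probs, human_weight=0.5):
    """Linear opinion pool over a shared set of outcome bins.
    (Hypothetical sketch, not the published ensemble method.)"""
    w = human_weight
    return [w * h + (1 - w) * m for h, m in zip(human_probs, model_probs)]

# Probabilities over, say, three case-count bins for an upcoming week.
human = [0.2, 0.5, 0.3]   # aggregated human-judgment forecast
model = [0.1, 0.3, 0.6]   # computational model output
blend = chimeric_forecast(human, model, human_weight=0.6)
```

In practice the weights would themselves be informed by each component's recent accuracy, which is one way human forecasts can parameterize, or be pooled with, a computational model.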
Researchers partnering with us on model parameterization benefit from expanded access to the Metaculus API.
2. Scenario <> Indicator <> Forecasting Workshops
We convene multi-stakeholder workshops connecting scenarios, indicators, and forecasts, coordinating the work of experts, forecasters, and policymakers. These workshops enable decision processes that apply the basic methodology and cultural norms of science in a structured, repeatable way. They typically serve as modeling charrettes whose outputs become precursors for future model parameterization work, and they generate intense focus, and a lot of fun, through the hands-on participation and creativity of the entire group.
3. Public forecasting platform: Metaculus.com
We develop a core forecasting platform that enables the aggregation of knowledge and collective learning, with incentives for forecasters that reward accuracy and calibration. With this platform, we foster a community of practitioners, build excellence in forecasting, and identify and recruit the best forecasting talent. This is currently the most visible part of what we do. Metaculus.com serves as the convening space for:
- Talent identification via tournaments and essay competitions. Our on-platform contests focus forecaster and analyst attention on important topics and serve as a critical tool for talent identification. Employers are increasingly taking note of forecasting experience on Metaculus: a talented Metaculus forecaster with over 11,000 public predictions under their belt was recently hired by the CDC to work on disease modeling.
- Community building. Metaculus was the first major forecasting platform to enable our community members to write questions, and we recently released a feature that enables them to co-author questions. We also host events such as journal clubs and forecasting hackathons, with participants from around the world.
- Long-term horizon scanning. The Metaculus platform can serve as a global sensor to help responders and humanitarian agencies identify potential conflict zones and crises around the world, and to help policymakers prepare for significant changes. It has successfully captured early signals on pandemics, wars, and technological progress. As researcher Daniel Eth has noted: “In January 2020, when few people were concerned about COVID, Metaculus predicted that more than 100,000 people would become infected with the disease. Metaculus anticipated a breakthrough in the computational biology technique of protein structure prediction, before DeepMind’s AI AlphaFold astounded scientists with its performance in this task.”
- Rapid community and programmatic response to crises. In many crises, hours and days matter. Metaculus.com and our programs team are set up to rapidly respond, and to do so in a fairly decentralized way that leverages our global forecasting network. Once an early signal has turned into a true cause for alarm, or an outright disaster, Metaculus can serve as a central hub for rapid information aggregation and response. Already, our team and forecasting community have quickly and efficiently responded to the COVID-19 pandemic, the invasion of Ukraine, the Mpox outbreak of 2022, and the current H5N1 moment of early 2023.
As we continue to add new platform capabilities over the next several years, I suspect that the comparison to prediction markets will occur less frequently. Earlier this week, we launched Conditional Pairs, a feature that lets forecasters explore dependencies and relationships between events, a step toward the explanatory power of models. Models not only help us predict what will happen; they also explain why. And, crucially, they enable us to simulate different scenarios, providing critical information to decision-makers. By expanding these capabilities, we are augmenting our capacity to empower better policy through the rigorous process of science.
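To illustrate the kind of reasoning a conditional pair enables (the numbers and the helper function here are hypothetical, not the platform's internals), the law of total probability lets forecasts on "B if A happens" and "B if A does not happen" be combined with a forecast on A itself to recover an unconditional probability for B:

```python
def marginal_from_conditional_pair(p_a, p_b_given_a, p_b_given_not_a):
    """Law of total probability: P(B) = P(B|A)P(A) + P(B|not A)P(not A).
    (Illustrative helper; not a Metaculus API function.)"""
    return p_a * p_b_given_a + (1.0 - p_a) * p_b_given_not_a

# Toy numbers: the parent event has probability 0.3, and the child
# event is far more likely if the parent occurs (0.8) than not (0.1).
p_b = marginal_from_conditional_pair(0.3, 0.8, 0.1)
```

The same decomposition is what makes conditional forecasts useful for scenario analysis: changing the assumed probability of the parent event immediately shows how the downstream forecast shifts.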
Thank you to Anthony Aguirre and J. Peter Scoblic for their helpful comments and suggestions, which undoubtedly improved this piece. Any errors are my own.