This study examines the use of policy modelling and simulation.
It includes five case studies and a meta-review of models used to predict the COVID-19 epidemic. See the report and the annex containing the in-depth case studies.
The key recommendations are:
- Timely collection and transparency of data. It is crucial that the data collected are kept up to date and gathered at regular, timely intervals: for policies to remain relevant, they should build on timely analysis and results. It is also important to provide specific and complete information about the methodology and procedures used for data collection, so that users of the models are informed of its caveats and shortcomings. Finally, stakeholders should be given access to the results and outputs used to develop the different scenarios, in order to ensure comparability.
- Transparency and openness of assumptions and models. Trust in the results stemming from a model is increased if all the assumptions made by the modellers are transparent and available for other experts to scrutinize and criticize. Openness of assumptions and modelling structure also improves the comparability of the analyses and projections produced by different organizations using different models. Analyses often differ in a number of important methodological aspects, and without a way to clearly compare one analysis and set of results to another, decision-makers may not understand the range of possibilities envisioned by different short-, medium- and long-term projections, or the assumptions that underpin those projections. In that regard, transparency in the modelling methodology helps ensure transparency and trust in the resulting policy-making process.
- Use and re-use of data and software modules. Beyond transparency of data, it is also important to make databases as open as possible, both to allow other researchers to replicate the results of the analysis carried out and to enable the data to be used for other research purposes: such modelling endeavours produce a wealth of data that should not be wasted. This is clearly linked to the issue of transparency, as the availability of metadata helps researchers understand the weaknesses of the data produced and therefore choose suitable methodologies of analysis. By the same token, models should be built in modules and made available to researchers for re-use and recombination. This allows researchers and practitioners to download, re-adapt and re-use the modules for their own analyses, thereby enabling new applications.
- Perform validation and sensitivity analysis exercises. As we have seen, the results of many modelling exercises have been deeply influenced by the modelling and estimation techniques used. In this respect, a core activity for ensuring the robustness of a modelling exercise consists in applying different modelling and estimation techniques to the same set of data, as well as varying the values of the input and internal parameters of a model to determine their effect on the model output. Related to this is the need to validate models by employing them on comparable but different data sources to see how the results change, and to keep them open to scrutiny and criticism by other researchers. Last but not least, keeping data open allows different researchers to apply different modelling and estimation techniques to the same data.
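As an illustration of one-at-a-time sensitivity analysis, the sketch below perturbs a single input parameter of a toy epidemic model and reports how much a headline output (the epidemic peak) moves. The model, the parameter values and the ±10% perturbation range are all illustrative assumptions, not taken from the report.

```python
def peak_infected(beta, gamma=0.1, i0=0.01, days=200, dt=0.1):
    """Peak infected fraction from a simple Euler-stepped SIR run.

    beta: transmission rate, gamma: recovery rate, i0: initially infected.
    All quantities are population fractions; values here are illustrative.
    """
    s, i = 1.0 - i0, i0
    peak = i
    for _ in range(int(days / dt)):
        new_infections = beta * s * i * dt
        s -= new_infections
        i += new_infections - gamma * i * dt
        peak = max(peak, i)
    return peak

# One-at-a-time sensitivity: perturb the transmission rate by +/-10%
# and record the change in the model output relative to the baseline.
baseline = peak_infected(beta=0.3)
for pct in (-10, 10):
    shifted = peak_infected(beta=0.3 * (1 + pct / 100))
    print(f"beta {pct:+d}% -> peak changes by {shifted - baseline:+.3f}")
```

The same loop generalizes to any input or internal parameter; a large swing in the output for a small perturbation flags a parameter whose estimation deserves extra care.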
- Generate collaborative model simulations and scenarios. The collaboration of several individuals in simulation and scenario generation allows policies, and their impact, to be better understood by non-specialists and even by citizens, ensuring higher acceptance and take-up. Modelling co-creation has other advantages as well: no single person typically understands all the requirements, and understanding tends to be distributed across a number of individuals; a group is better able to point out shortcomings than an individual; and individuals who participate during analysis and design are more likely to cooperate during implementation. In the case at hand, the joint elaboration of simulations and scenarios by policy makers and scientists helps produce models that are refined to address the containment policies adopted.
- Develop easy-to-use visualizations. Several data aggregators visualize the data coming from the field every day and improve the situational awareness of policy makers. Further, an interesting feature of many models developed and used by policy makers to tackle the COVID-19 pandemic is the use of visualization tools depicting the results of the underlying simulation models. In this regard, policy makers should be able to independently visualize the results of analyses, make sense of the data and interact with them. This will help policy makers and citizens understand the impact of containment policies: interactive visualization is instrumental in making the evaluation of policy impact more effective.
- Consider carefully the sources of uncertainty in the model. Like other simulation models, those used to tackle the COVID-19 pandemic suffer from several sources of uncertainty. Such uncertainty can be purely statistical (e.g. confidence intervals), related to parameters in the model that are difficult to estimate (e.g. the rate of transmission), related to the data used (e.g. data on the fatality rate may not be precisely measured), or conceptual (e.g. assuming a representative agent).
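Parameter uncertainty of this kind can be propagated through a model by Monte Carlo sampling: instead of a single point estimate, the modeller reports an interval of outcomes. The sketch below does this for a toy SIR model whose transmission rate is drawn from an assumed plausible range; all numbers are illustrative, not calibrated to any real outbreak.

```python
import random

def final_epidemic_size(beta, gamma=0.1, i0=0.01, days=300, dt=0.1):
    """Fraction of the population ever infected in a simple Euler SIR run."""
    s, i, r = 1.0 - i0, i0, 0.0
    for _ in range(int(days / dt)):
        new_infections = beta * s * i * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
    return i + r

# Propagate uncertainty in a hard-to-estimate parameter (the transmission
# rate) by sampling it from a plausible range and summarizing the spread
# of outcomes as an empirical interval rather than a single number.
random.seed(0)
sizes = sorted(final_epidemic_size(random.uniform(0.25, 0.35))
               for _ in range(200))
low, high = sizes[4], sizes[194]  # empirical ~95% interval
```

Reporting `low` and `high` alongside any central estimate makes the statistical and parameter-related uncertainty visible to decision-makers instead of hiding it behind a point forecast.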
- Tailor the model to the specific questions you are trying to address. Specific modelling strategies (and levels of complexity) should be used to address specific research questions. For instance, the simplest structure of predictive simulation is given by classic SIR models, which use few data inputs and can be useful for assessing an epidemic outbreak in the short term; such models cannot be used to depict uncertainty, complexity or behavioural change. Another class of models is forecasting models, which use existing data to project conclusions over the medium term. Finally, strategic models that encompass multiple scenarios assessing the impact of different interventions are able to capture some of the uncertainty underlying the epidemic outbreak and the behaviour of the population, and are the foundation for policy-making activity.
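To make the "few data inputs" point concrete, a minimal discrete-time SIR model fits in a dozen lines of Python; it needs only a transmission rate, a recovery rate and an initial infected fraction. The parameter values below are illustrative, not calibrated to any real outbreak.

```python
def simulate_sir(beta, gamma, i0, days, dt=0.1):
    """Euler-stepped SIR model on population fractions.

    beta: transmission rate, gamma: recovery rate, i0: initially infected.
    Returns the (susceptible, infected, recovered) trajectory per step.
    """
    s, i, r = 1.0 - i0, i0, 0.0
    history = [(s, i, r)]
    for _ in range(int(days / dt)):
        new_infections = beta * s * i * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

# Illustrative run: basic reproduction number beta/gamma = 3,
# 1% of the population initially infected.
trajectory = simulate_sir(beta=0.3, gamma=0.1, i0=0.01, days=160)
peak_infected = max(i for _, i, _ in trajectory)
```

Its three parameters make it quick to fit and explain for short-term assessment, but, as noted above, nothing in this structure represents uncertainty, heterogeneity or behavioural change; those questions call for the richer forecasting and strategic model classes.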
- Use models properly. Models are not a commodity that produces a number which policy makers then use to take decisions. There needs to be a full understanding of the subtleties involved, the levels of uncertainty and the risk factors. In other words, data and model literacy must be embedded in-house in the policy-making process; it cannot be outsourced. Indeed, a recent report for the US highlighted the limitations of a process that involved experts on an ad hoc, on-demand basis, leaving much arbitrariness to the process: “Expert surge capacity exists in academia but leveraging those resources during times of crisis relies primarily on personal relationships rather than a formal mechanism.” In a similar vein, a recent article in the UK pointed out that the experts involved in SAGE were too "narrowly drawn as scientists from a few institutions". By the same token, there was insufficient in-house capacity to manage this input: in the US, “there is currently limited formal capacity within the federal government”, while in the UK, “the criticism levelled at the prime minister may be that, rather than ignoring the advice of his scientific advisers, he failed to question their assumptions”.
- Model integration. Finally, a flexible modelling framework is needed for the comprehensive assessment of major challenges in the analysed domain, to be used in conjunction with other models in order to address major global challenges in a holistic way. In this respect, the integration of sectoral models is key to assessing important interrelations and feedbacks. More generally, models should be developed in modules and in a flexible way so as to allow integration with other models.
Do you agree with these recommendations? Why?
Do you have specific comments or questions on the report?