Models – what are they good for?

Models are everywhere at the moment! Everyone in Australia will have heard of the Doherty model, which has helped set Australia's path out of the pandemic. Modelling from the Burnet Institute is helping to steer both New South Wales and Victoria out of their lockdowns.

But what are scientific models, and why are they useful? Answering these questions is not easy. There are various answers, but they are not always easy to communicate, and they depend on the purpose of the model. While models are used for a range of reasons, including synthesis, explanation, estimation and experimental design, I will focus here on models that are used for prediction.

I teach Environmental Modelling to graduate students at The University of Melbourne. The subject introduces students to a wide range of models used in environmental management – the topics covered include noise propagation, hydrology, climate, species distributions, and population dynamics. The population dynamics ones are particularly relevant when thinking about epidemiological models – epi models are almost identical mathematically to predator-prey models.
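
To make that claim about the mathematics concrete, here is a minimal sketch of my own (not taken from the subject) comparing the equations of a basic SIR epidemic model with a Lotka-Volterra predator-prey model. The parameter values are arbitrary and only there to make the code runnable.

```python
# A minimal sketch comparing an SIR epidemic model with a Lotka-Volterra
# predator-prey model. S, I, R and prey, pred are proportions/densities;
# all parameter values are arbitrary illustrations.

def sir_rates(S, I, R, beta=0.3, gamma=0.1):
    """Rates of change for a basic SIR model (S, I, R as proportions)."""
    dS = -beta * S * I              # infecteds "consume" susceptibles
    dI = beta * S * I - gamma * I   # new infections minus recoveries
    dR = gamma * I                  # recoveries
    return dS, dI, dR

def lotka_volterra_rates(prey, pred, r=1.0, a=0.1, e=0.02, m=0.4):
    """Rates of change for a Lotka-Volterra predator-prey model."""
    d_prey = r * prey - a * prey * pred   # prey growth minus predation
    d_pred = e * prey * pred - m * pred   # predator growth minus mortality
    return d_prey, d_pred

# The key interaction term is a product of the two groups in both models
# (beta*S*I versus a*prey*pred), which is why the mathematics is so similar.
```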

I have a couple of major aims when teaching this subject. Firstly, I want students to become less intimidated by models. Secondly, I want students to better understand the steps of modelling so that they are better placed to use and critique models.

[Image caption] I aim for my students to be less intimidated by models. Some models might be hard to understand, but in the end they are something that people created, so it is possible to understand them with sufficient effort. (Image from despair.com.)

One of the most persistent, yet naïve, critiques of models is that they are not sufficiently realistic. Let me say up front – models are meant to be imperfect descriptions of reality. That is, arguably, the whole point of using models instead of reality. The key, to paraphrase Einstein, is to make the models as simple as possible, but no simpler. That is easy to say, but it is perhaps the most challenging thing to deliver.

So why do we want models to be imperfect? Because we need a simplification to make sense of complicated systems. Essentially, models are useful when reality is complicated. Models help to describe the system we are studying in simpler terms, so that we can make predictions within a reasonable time scale and better understand the key processes.

Models are meant to be imperfect descriptions of reality – that is their entire point. Models encapsulate all the good, bad and ugly assumptions that are thought to be true. And then they predict the logical consequence of those assumptions.

So why should we trust a model’s predictions? Well, should we trust them? Lots of lives and livelihoods currently depend on the predictions of epidemiological models. Perhaps in answering that question of trust, we can first consider what the predictions represent. I think the simplest way to think about them is that model predictions are the logical consequences of a set of assumptions. The model encapsulates a set of assumptions, and the model then simply tells us the consequence of those assumptions.

For example, build a model of COVID transmission among people that describes: the rate of transmission under different scenarios of public health orders; the effect and uptake of vaccines; the rate at which people enter hospital and/or die; how vaccination influences those rates; the effectiveness of contact tracing to identify cases and reduce transmission; etc. Each of those components will have their own details. It gets complicated quickly. And that is without considering every nuance of human behaviour. But once the model is built, we can then ask, “How many deaths and hospitalisations should we expect as a logical consequence of these assumptions?” The model provides a precise answer to that question for a given set of assumptions.
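
As a rough illustration of that logic (and emphatically not the Doherty or Burnet models), here is a deliberately stylised sketch in which every parameter is an assumption I have made up. Once the assumptions are written down, the predicted hospitalisations and deaths are simply the consequence of running them forward.

```python
# A stylised, hypothetical model: every parameter value is an assumption
# invented for illustration. The predictions are just the logical
# consequence of those assumptions.

def run_epidemic(days=180, population=1_000_000, initial_infected=100,
                 beta=0.25,                        # assumed transmission rate under current orders
                 gamma=0.1,                        # assumed recovery rate (1 / infectious period)
                 vaccine_coverage=0.8,             # assumed uptake
                 vaccine_transmission_effect=0.6,  # assumed reduction in transmission
                 hosp_fraction=0.05,               # assumed fraction of cases hospitalised
                 death_fraction=0.01,              # assumed fraction of cases that die
                 vaccine_severity_effect=0.9):     # assumed reduction in severe outcomes
    """Run a crude discrete-time SIR model; return total hospitalisations and deaths."""
    S = population - initial_infected
    I = float(initial_infected)
    hospitalisations = deaths = 0.0

    # Vaccination scales down both transmission and severity (a big simplification).
    effective_beta = beta * (1 - vaccine_coverage * vaccine_transmission_effect)
    severity_scale = 1 - vaccine_coverage * vaccine_severity_effect

    for _ in range(days):
        new_infections = effective_beta * S * I / population
        recoveries = gamma * I
        S -= new_infections
        I += new_infections - recoveries
        hospitalisations += new_infections * hosp_fraction * severity_scale
        deaths += new_infections * death_fraction * severity_scale

    return hospitalisations, deaths

hosp, dead = run_epidemic()
print(f"Expected hospitalisations: {hosp:,.0f}, expected deaths: {dead:,.0f}")
```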

We can then ask how sensitive the predictions are to changes in the assumptions. Change one or more assumptions, and we get a different answer. This sensitivity analysis is valuable, because it tells us where we might want to focus policy interventions, and also where we might want to get better data.
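
Continuing the hypothetical sketch above, a sensitivity analysis can be as simple as re-running the model while varying one assumption at a time (here, vaccine uptake in the made-up run_epidemic function):

```python
# Hold everything else fixed and vary one assumption to see how much
# the predictions move.
for coverage in (0.6, 0.7, 0.8, 0.9):
    hosp, dead = run_epidemic(vaccine_coverage=coverage)
    print(f"coverage={coverage:.0%}: hospitalisations={hosp:,.0f}, deaths={dead:,.0f}")

# If the predictions swing wildly with a parameter, that is where better data
# (or a policy lever) matters most; if they barely move, that assumption is
# less critical.
```
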
What would be the alternative to using models? An obvious alternative is to let people make their own judgements with the same information as used in the model. That would certainly be simpler. But the drawbacks of this approach are many. The logic of such subjective decisions is opaque. You might counter that models are also opaque, but try getting your mind around the thinking of a decision maker whose assumptions are not spelled out in black and white.

Subjective decisions are also prone to a wide range of biases. And I'm not just talking about the biases that might arise from the influence of lobby groups. Even well-intentioned decision makers are prone to them.

Perhaps the biggest benefit of using models to support decisions is that their predictions are transparent. If the predictions are wrong, it can tell us that there were one or more errors in the set of assumptions that underpinned the model. Perhaps the model omitted an important detail. Or one or more of the model’s parameters were astray. Regardless, errors in predictions challenge the assumptions that underpinned the model and allow us to refine our understanding.

So, what are models good for? They allow us to predict the logical consequences of what we believe to be true, and test the degree to which the outcomes depend on those assumptions. In short, it seems wise to use a model to test a policy with far-reaching implications for lives and livelihoods before taking that policy into the real world.

About Michael McCarthy

I conduct research on environmental decision making and quantitative ecology. My teaching is mainly at post-grad level at The University of Melbourne.