COVID hotel quarantine failures – a hierarchical model

With another lockdown in Victoria, the media has been asking why Victoria has had a worse COVID experience than other Australian states. Much can be explained by the challenges of the second wave and an overwhelmed contact tracing system. Contact tracing is now much improved, with tens of thousands of contacts being identified and managed quickly during the current outbreak.

Another media focus has been rates of escape from hotel quarantine, which are now the main source of outbreaks in Australia. If you look at the raw numbers, it seems that some states might have better systems than others, especially when considering the number of travellers who have been quarantined. But how sure can we be of that? Can some states feel smug superiority, or have they just been lucky?

Leah Grout and colleagues examined the rate of failure of quarantine facilities in Australia and New Zealand, reporting the number of failures, the number of quarantined travellers, and (importantly) the number of those travellers who were COVID positive (as of early 2021). This latter number is one of the keys – you could have the worst quarantine system imaginable but you won’t have any failures if none of the travellers are COVID positive.

So what do the numbers tell us? New South Wales has processed the most COVID positive travellers of the jurisdictions in the database (1581) but it has also had the most identified failures (5 to that point). Victoria has had almost as many identified breaches (4) but many fewer positive cases in their quarantine system (462).

But when looking at these data, you might notice that the number of failures is low. The small numbers mean that any estimates of the rate of failure of hotel quarantine will be uncertain. How uncertain? Very uncertain!

Even in states/territories with zero breaches so far, the rate of failure might be no lower than in the jurisdictions that appear to be performing worst. Tasmania and the ACT are cases in point – both have had no recorded breaches of hotel quarantine but they have also processed very few COVID positive travellers (21 and 25 respectively).

Even the Northern Territory, with its much vaunted Howard Springs facility, had only 88 COVID cases in Grout et al.’s data. Consequently, the uncertainty around an estimate of the rate of failure of hotel quarantine is large.

To estimate the rate of failure, let’s first treat each of the COVID positive cases as a simple Bernoulli trial, with the probability of failure being the same for each person within the jurisdiction. You can see the estimates below. Do you notice the large intervals around the estimates for the NT, ACT and Tasmania? Using only the observed rates of failure, we can’t really be sure that those jurisdictions are better than any other.

Estimated probability of failure of COVID hotel quarantine for each infected traveller, based on the failures reported in Grout et al. (in review). The dots are the point estimates (failures divided by cases), and the bars are 95% credible intervals, assuming a uniform prior for the probability of failure.
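For those who want to check these first estimates themselves, here is a minimal BUGS sketch of that independent, per-jurisdiction model (this code is not from Grout et al. or the original analysis; it assumes the same cases[] and fails[] data listed in the notes below, with New Zealand in position 1):

model
{
    for (i in 1:8)  # each jurisdiction estimated independently of the others
    {
        p[i] ~ dunif(0, 1)               # uniform prior for the probability of failure
        fails[i] ~ dbin(p[i], cases[i])  # failures among the COVID-positive travellers
    }
}

Monitoring p[] gives posteriors that are simply Beta(fails+1, cases−fails+1) distributions, so the 2.5% and 97.5% quantiles of those distributions give equal-tailed 95% credible intervals like those in the figure above.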

The cabins at Howard Springs have the advantage of almost eliminating aerosol transmission between rooms, which appears to be a problem in the hotel systems that operate elsewhere. But we know, intuitively, that the hotel quarantine facilities in places like the ACT and Tasmania are likely to face much the same risks as comparable systems in other jurisdictions. Given that, the risks are unlikely to be as high as indicated by the upper bounds of the intervals above. But likewise, the probability of quarantine failure is unlikely to be zero.

How can we “borrow” information about rates of failure in other jurisdictions while at the same time allowing for some differences in rates of failure between jurisdictions? Well, let’s say hello to a hierarchical statistical model!

For our hierarchical statistical model of failure rates from COVID quarantine hotels, we assume that the rate of failure of each jurisdiction is drawn from a common pool of possible failure rates. This pool of possible failure rates is defined by a probability distribution – hierarchical modelling estimates this distribution. Each jurisdiction has its own particular failure rate – if the rates differ a lot between jurisdictions, then the distribution that defines the pool of possible rates will be wide. If the rates are similar to each other, then the distribution will be narrow.

You can think about what such a hierarchical model might mean when estimating the failure rate in places like Tasmania and the ACT where data are scarce. If we look at the jurisdictions with more data, the rate of failure is unlikely to be larger than 0.02 or so (the upper bounds of the 95% intervals in the figure above). So, these more precisely estimated rates will tend to constrain the variation in the pool of possible rates.

To formalise this idea, we need to define a model for that pool of possible failure rates. When dealing with probabilities, we know that the values are constrained to be between zero and one. However, probabilities can also be expressed as odds. The odds that an event will happen is the probability of it happening divided by the probability of it not happening. So if p is a probability of an event, then p/(1-p) is the odds of the event*. Odds are constrained to be between 0 (when p = 0) and infinity (when p = 1).

Now, if we take the logarithm of the odds, then the resulting value is the log-odds, and this number can take any real value between minus infinity (when p = 0) and plus infinity (when p = 1)**. This transformation is useful, because it is straightforward to define a distribution on this interval – we can use a normal distribution. Now our pool of possible failure rates can be defined by a normal distribution (with the values drawn from this distribution being back-transformed to become probabilities). The statistical model*** then simply needs to estimate the mean and standard deviation of this underlying normal distribution.
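Written out (this is just my algebraic restatement of the BUGS model given in the notes below), the model for jurisdiction i is:

\[
\operatorname{logit}(p_i) = \log\!\left(\frac{p_i}{1 - p_i}\right) = \operatorname{logit}(p_{\mathrm{av}}) + \varepsilon_i,
\qquad \varepsilon_i \sim \operatorname{Normal}(0, \sigma^2),
\qquad \mathrm{fails}_i \sim \operatorname{Binomial}(\mathrm{cases}_i,\, p_i),
\]

where the back-transformation \(p = e^{x}/(1 + e^{x})\) turns a log-odds value \(x\) back into a probability, and the model estimates \(p_{\mathrm{av}}\) and \(\sigma\).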

So what do the results of the hierarchical model indicate? Well, that model suggests that differences between jurisdictions in the rate of hotel quarantine failure might be as large as an order of magnitude or so (e.g., compare the extreme upper limit of one jurisdiction to the lower limits of others). But equally, there is also not compelling evidence that the risks differ much at all (e.g., note the large overlap of the 95% intervals). So when I hear pundits declaring that one state’s quarantine system is better than another’s based on the rate of failure, I just roll my eyes – they don’t quite understand how variable chance can be, especially when estimating rates of rare events.

Estimated probability of failure of COVID hotel quarantine for each infected traveller, based on a hierarchical model of the failures reported in Grout et al. (in review). The dots are means of posterior distributions, and the bars are 95% credible intervals. The right hand value is the average over the Australian states and territories.

Since the preparation of Grout et al.’s paper, we’ve seen further escapes from hotel quarantine – Victoria’s current outbreak being a case in point. And there is a second possible escape too, although sequencing so far has been unable to pinpoint the source of the delta variant in Victoria. But Grout et al.’s data suggest that an escape will be identified for every 200 or so COVID-positive cases in hotel quarantine. With COVID cases continuing to be common around the world, we’ll see more COVID cases in hotel quarantine, and therefore more outbreaks should be expected.

Notes

* You will have seen odds in horse racing. These define the payout from the bookmaker. They are essentially the odds of the horse not winning (while also factoring in a small margin to pay for the bookmaker’s investment portfolio and collection of fancy cars).

** This is the logit transformation and is the basis of logistic regression.

*** For those interested in the details, here is the BUGS code for the hierarchical model:

model
{
    for (i in 1:8)  # for each of the 8 jurisdictions.
    {
        re[i] ~ dnorm(0, tau)              # how the probability of failure for each jurisdiction varies
        logit(p[i]) <- logit(pav) + re[i]  # the prob of failure for each jurisdiction
        fails[i] ~ dbin(p[i], cases[i])    # failures treated as a set of Bernoulli events
    }

    OzHQ <- mean(p[2:8])  # average probability of failure for Aust hotels

    pav ~ dunif(0, 1)   # the prior for the average probability of failure
    tau <- 1 / (s * s)  # BUGS defines variation of dnorm by precision = 1/variance
    s ~ dunif(0, 100)   # the prior sd
}


And here’s the data used (NT is excluded given it includes the non-hotel site of Howard Springs; the order is: NZ, ACT, NSW, Qld, SA, Tas, Vic, WA):

cases[] fails[]
758 10
25 0
1581 5
543 3
230 1
21 0
462 4
450 1
END
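
To run this in WinBUGS or OpenBUGS, you also need starting values for the unobserved quantities. The values below are just one plausible set (my choice, not from the original analysis); after loading them, monitor p[] and OzHQ to reproduce the figures above.

list(pav = 0.005, s = 1,
     re = c(0, 0, 0, 0, 0, 0, 0, 0))  # starting values: average failure rate, sd, and jurisdiction effects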
