This is a preliminary sensitivity analysis to test data, emulators and techniques, and shouldn’t be used to draw conclusions. There appears to be at least one design point where MetaWards “breaks”, for example infecting more people by lockdown than there are in the population.
We use a Gaussian process emulator from package DiceKriging to model the variation of cumulative infections at lockdown (23rd March 2020) with parameters. We then run a simple sensitivity analysis, and an initial history-matching type exercise, comparing the model with external estimates of cumulative cases from Jit et al. (2020).
Load packages and helper functions.
library(DiceKriging)
library(sensitivity)
## Registered S3 method overwritten by 'sensitivity':
##   method    from
##   print.src dplyr
library(RColorBrewer)
source("https://raw.githubusercontent.com/dougmcneall/packages-git/master/emtools.R")
source("https://raw.githubusercontent.com/dougmcneall/packages-git/master/imptools.R")
source("https://raw.githubusercontent.com/dougmcneall/packages-git/master/vistools.R")
Load the cumulative infections by day 79 (23rd March 2020) in local authorities. These data include the parameters, but not the run number or a unique ID.
The first 5 columns of the data are the design, normalised to [-1, 1].
# This data from the initial ensemble is available at
dat <- read.csv("https://github.com/UQ4covid/data/raw/master/metawards/initial_ensemble/data/pre_lockdown_1/lads_by_day_cumulative_79.csv")
# This is the design in the original parameter space. It does not have the
# repeat rows (it has a "repeats" column instead)
uq3a <- read.csv("https://raw.githubusercontent.com/UQ4covid/data/master/metawards/initial_ensemble/inputs/uq3a_out.csv")
design <- read.csv("https://raw.githubusercontent.com/UQ4covid/data/master/metawards/initial_ensemble/inputs/design.csv")
# This provides an index back to each raw data folder
uq4 <- read.csv("https://raw.githubusercontent.com/UQ4covid/data/master/metawards/initial_ensemble/data/uq4.csv")
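As a quick structural check (a sketch, not part of the original workflow), we can confirm that the first five columns of dat are the design, and lie within [-1, 1]:
# Sketch: the first 5 columns should be the design, each within [-1, 1]
stopifnot(ncol(dat) > 5,
          all(apply(dat[, 1:5], 2, min) >= -1),
          all(apply(dat[, 1:5], 2, max) <= 1))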
Plot up the cumulative infections in (e.g.) Exeter vs the parameters.
The “post-lockdown” parameters shouldn’t have any effect here, which provides a useful test for the sensitivity analysis: we expect it to show zero impact of the lockdown parameters on infections before lockdown. Any other result tells us that the emulator is overfitting, or that the sensitivity analysis is misinterpreting noise as influence.
par(mfrow = c(2, 3))
for(i in 1:5){
  plot(dat[, i], dat$Exeter,
       xlab = colnames(dat)[i], ylab = 'Cumulative infections at lockdown in Exeter',
       pch = 19)
}
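To put rough numbers on these scatter plots, a quick sketch (not part of the original analysis) computes the rank correlation between each parameter and the Exeter output:
# Sketch: Spearman rank correlation of each input with Exeter infections
cor.exeter <- sapply(1:5, function(i){
  cor(dat[, i], dat$Exeter, method = 'spearman')
})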
Have a look at some aggregated data to see if everything is OK. It looks like we have a couple of outliers with more infections than the population of the UK! We’ll remove the outliers so as not to corrupt the emulator.
Y.raw <- dat[, 6:ncol(dat)]
y.national.raw <- apply(Y.raw, 1, sum)
hist(y.national.raw, col = 'lightgrey')
high.ix <- which(y.national.raw > 7e+7)
# Looks like they're all repeats of a single design point (rows 46-50)
dat[high.ix, 1:5]
##    incubation_time infectious_time    r_zero lock_1_restrict lock_2_release
## 46      -0.8391755      -0.6039355 0.9800747       0.1609147      0.6335409
## 47      -0.8391755      -0.6039355 0.9800747       0.1609147      0.6335409
## 48      -0.8391755      -0.6039355 0.9800747       0.1609147      0.6335409
## 50      -0.8391755      -0.6039355 0.9800747       0.1609147      0.6335409
# Let's remove all five runs with those input parameters (rows 46-50), to be safe.
rm.ix <- 46:50
We can see where the outliers are in the design (red points).
colvec <- rep('black', nrow(dat))
colvec[rm.ix] <- 'red'
pairs(dat[, 1:5], col = colvec, pch = 19)
# Remove the design points where the data is obviously in error: the whole
# block of repeats at the design point where more people are infected than
# the entire UK population.
# Now the first 120 rows are 24 design points with 5 repeats each.
dat.clean <- dat[-rm.ix, ]
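A quick sanity check on that repeat structure (a sketch, assuming the block layout described above):
# Sketch: the first 120 rows of dat.clean should contain 24 unique design
# points, each repeated 5 times
stopifnot(nrow(unique(dat.clean[1:120, 1:5])) == 24)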
I’m worried that having repeat runs might break a DiceKriging emulator, so I’ll further pre-process the design and output. For now, we simply take the first run from each of the 24 remaining repeated design points.
dix <- c(seq(from = 1, to = 120, by = 5), 121:195)
X <- dat.clean[dix, 1:5]
X.norm <- normalize(X) # necessary to make e.g. sensitivity analysis code work
X.unnorm <- unnormalize(X.norm,
                        un.mins = apply(uq3a[-rm.ix, 1:5], 2, FUN = min),
                        un.maxes = apply(uq3a[-rm.ix, 1:5], 2, FUN = max))
Y <- dat.clean[dix, 6:ncol(dat.clean)]
Find the national level cumulative infections.
y.national <- apply(Y, 1, sum)
hist(y.national, main = 'National cumulative infections at lockdown', col = 'lightgrey')
Plot how the cumulative infections vary marginally with each parameter.
par(mfrow = c(2, 3))
for(i in 1:5){
  plot(X.unnorm[, i], y.national,
       xlab = colnames(X)[i], ylab = 'National cumulative infections at lockdown',
       pch = 19)
}
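# Estimate run-to-run (stochastic) variability: split the national totals
# for the first 120 rows into 24 blocks of 5 repeats, then take the
# variance of each block.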
y.all <- apply(dat.clean[, 6:ncol(dat.clean)], 1, sum)
y.reps <- y.all[1:120]
reps.list <- split(y.reps, ceiling(seq_along(y.reps)/5))
reps.var <- sapply(reps.list, var)
reps.sd <- sqrt(reps.var)
mean.reps.var <- mean(reps.var)
hist(reps.sd, main = "standard deviations of national infections at lockdown")
Fit the emulator, including a homogeneous nugget set to the mean variance across the 5 repeats of each of the first 24 design points (this fixed value appears as the nugget in the km output below).
fit.national <- km(~., design = X.norm, response = y.national, nugget = mean.reps.var)
##
## optimisation start
## ------------------
## * estimation method : MLE
## * optimisation method : BFGS
## * analytical gradient : used
## * trend model : ~incubation_time + infectious_time + r_zero + lock_1_restrict +
## lock_2_release
## * covariance model :
## - type : matern5_2
## - nugget : 1.288347e+13
## - parameters lower bounds : 1e-10 1e-10 1e-10 1e-10 1e-10
## - parameters upper bounds : 2 2 2 2 2
## - variance bounds : 4.014027e+12 4.276305e+14
## - best initial criterion value(s) : -1668.201
##
## N = 6, M = 5 machine precision = 2.22045e-16
## At X0, 0 variables are exactly at the bounds
## At iterate 0 f= 1668.2 |proj g|= 0.73176
## At iterate 1 f = 1666.6 |proj g|= 0.67339
## At iterate 2 f = 1664.9 |proj g|= 1.694
## At iterate 3 f = 1664.8 |proj g|= 0.34744
## At iterate 4 f = 1664.8 |proj g|= 0.26335
## At iterate 5 f = 1664.8 |proj g|= 1.0633
## At iterate 6 f = 1664.7 |proj g|= 1.6758
## At iterate 7 f = 1664.6 |proj g|= 1.6814
## At iterate 8 f = 1664.6 |proj g|= 1.6292
## At iterate 9 f = 1664.6 |proj g|= 0.36861
## At iterate 10 f = 1664.6 |proj g|= 0.070628
## At iterate 11 f = 1664.6 |proj g|= 0.28186
## At iterate 12 f = 1664.6 |proj g|= 0.54538
## At iterate 13 f = 1664.6 |proj g|= 0.54208
## At iterate 14 f = 1664.5 |proj g|= 0.033698
## At iterate 15 f = 1664.5 |proj g|= 0.0076427
##
## iterations 15
## function evaluations 18
## segments explored during Cauchy searches 20
## BFGS updates skipped 0
## active bounds at final generalized Cauchy point 2
## norm of the final projected gradient 0.00764271
## final function value 1664.55
##
## F = 1664.55
## final value 1664.549034
## converged
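As an aside, DiceKriging can also treat replicated runs as noisy observations directly, via the noise.var argument of km(), rather than discarding repeats and fixing a homogeneous nugget. A minimal sketch (not run here), assuming the first 24 rows of X.norm correspond to the repeated design points in the same order as reps.list:
# Alternative (sketch, not run): fit to the 24 replicate means, passing the
# variance of each mean (replicate variance / 5) as per-observation noise
# y.rep.mean <- sapply(reps.list, mean)
# fit.alt <- km(~., design = X.norm[1:24, ], response = y.rep.mean,
#               noise.var = reps.var / 5)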
Quickly check the emulator using leave-one-out cross-validation. The emulator appears to perform fairly well, in both the mean and the uncertainty estimates. Uncertainty bounds are quite wide, although this is to be expected with a stochastic simulator. More verification of the emulator will certainly be necessary.
loo = leaveOneOut.km(fit.national, type = 'UK', trend.reestim = TRUE)
loo.mae = mean(abs(loo$mean - y.national))
ylim = range(loo$mean - (2*loo$sd),loo$mean + (2*loo$sd) )
plot(y.national, loo$mean,
xlab = 'cumulative infections at lockdown', ylab = 'emulator prediction',
main = 'leave-one-out cross validation',
pch = 19,
ylim = ylim)
segments(x0 = y.national, y0 = loo$mean - (2*loo$sd), x1 = y.national, y1 = loo$mean + (2*loo$sd))
legend('topleft', legend = "bars indicate \u00B12 sd", bty = 'n')
abline(0,1)
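A complementary numerical check (a sketch, using the objects above) is the coverage of the ±2 standard deviation intervals; for well-calibrated uncertainty this should be roughly 95%.
# Sketch: fraction of LOO +/-2 sd intervals that cover the true values
loo.cover <- mean(abs(loo$mean - y.national) < 2 * loo$sd)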
It seems that there is a relatively high level of interaction between the parameters. Note that the sensitivity analysis attributes a small but non-zero effect to the “lockdown” parameters, which should have no influence at all on this output. This suggests the emulator is overfitting to the noise (or that the sensitivity analysis is interpreting noise as influence).
# Generate a design for the FAST99 analysis
X.fast <- fast99(model = NULL, factors = colnames(X.norm), n = 3000,
q = "qunif", q.arg = list(min = 0, max = 1))
# Predict the response at the FAST99 design points using the emulator
pred.fast = predict(fit.national, newdata = X.fast$X, type = 'UK')
# Calculate the sensitivity indices
fast.tell <- tell(X.fast, pred.fast$mean)
bp.convert <- function(fastmodel){
  # get the FAST summary into an easier format for barplot
  fast.summ <- print(fastmodel)
  fast.diff <- fast.summ[, 2] - fast.summ[, 1]
  fast.bp <- t(cbind(fast.summ[, 1], fast.diff))
  fast.bp
}
par(las = 2, mar = c(9,5,3,2))
barplot(bp.convert(fast.tell), col = c('skyblue', 'grey'), ylab = 'relative sensitivity', main = 'FAST99 Sensitivity')
##
## Call:
## fast99(model = NULL, factors = colnames(X.norm), n = 3000, q = "qunif", q.arg = list(min = 0, max = 1))
##
## Model runs: 15000
##
## Estimations of the indices:
## first order total order
## incubation_time 0.1364502123 0.238729844
## infectious_time 0.1468757360 0.261254135
## r_zero 0.5108515425 0.697192252
## lock_1_restrict 0.0004424436 0.002211755
## lock_2_release 0.0002582133 0.002137150
legend('topleft',legend = c('Main effect', 'Interactions'), fill = c('skyblue', 'grey') )
Parameters are swept across their range one at a time, with the remaining parameters held at central values.
n.oat <- 21
X.oat <- oaat.design(X.norm, n = n.oat, hold = rep(0.5,5))
# use the same (cleaned) ranges as for X.unnorm above
X.oat.un <- unnormalize(X.oat,
                        un.mins = apply(uq3a[-rm.ix, 1:5], 2, FUN = min),
                        un.maxes = apply(uq3a[-rm.ix, 1:5], 2, FUN = max))
parnames <- colnames(X)
colnames(X.oat) <- parnames
pred.oat <- predict(fit.national, newdata = X.oat, type = 'UK')
col.transp <- adjustcolor('grey', alpha = 0.5)
par(mfrow = c(2, 3), oma = c(0.1, 0.1, 3, 0.1))
for(i in 1:5){
  ix <- seq(from = (i*n.oat) - (n.oat - 1), to = i*n.oat, by = 1)
  plot(X.oat.un[ix, i], pred.oat$mean[ix],
       ylim = range(pred.oat$mean),
       xlab = parnames[i], ylab = 'cumulative infections at lockdown',
       type = 'n')
  polygon(x = c(X.oat.un[ix, i], rev(X.oat.un[ix, i])),
          y = c(pred.oat$mean[ix] - (2*pred.oat$sd[ix]),
                rev(pred.oat$mean[ix] + (2*pred.oat$sd[ix]))),
          col = col.transp, border = col.transp)
  lines(X.oat.un[ix, i], pred.oat$mean[ix], lty = 'solid')
}
mtext('One-at-a-time sensitivity', side = 3, outer = TRUE, cex = 1.5)
What input space is consistent with the estimated cumulative infections from Jit et al.? We’ll use simple rejection sampling from uniform distributions across input space. We know that the lockdown parameters have no impact on this data (they have not yet been applied by the end of the pre-lockdown period), so we’ll exclude them from the analysis.
# Have a very quick look at how much input space is removed by a comparison with data
# Useful data for comparisons later
# https://www.ons.gov.uk/peoplepopulationandcommunity/healthandsocialcare/conditionsanddiseases/bulletins/coronaviruscovid19infectionsurveypilot/england14may2020
# Jit et al estimate the number of infections at lockdown
# https://www.eurosurveillance.org/content/10.2807/1560-7917.ES.2020.25.18.2000632
#
# We estimated that each COVID-19 case admitted to CC reported in FF100 and
# CHESS corresponded to a median of 124 (95% credible interval (CrI):
# 81–11,500) and 120 (95% CrI: 76–46,600) infected individuals in the
# population, respectively, based on Chinese and US severity data [5,6].
# The Figure shows the number of incident cases estimated on each day between
# 16 February and 23 March. On 23 March, 114,000 (95% CrI: 78,000–173,000)
# new cases and 258 (95% CrI: 220–319) CC reports are estimated to have
# occurred, with 527,000 (95% CrI: 362,000–797,000) cumulative cases since
# 16 February. The best fitting exponential growth rates were consistent with
# an epidemic doubling time of 2.8 days (95% CrI: 2.6–3.0). Assuming an
# exponentially distributed serial interval of 4 days [8] gave an
# (approximate) reproduction number of 2.0 (95% CrI: 1.9–2.1). If we assume a
# longer serial interval of 6 days that may be expected at the start of an
# epidemic, the reproduction number could be 2.5 (95% CrI: 2.4–2.6).
# Samples from a uniform distribution across all of input space
nsamp.unif <- 500000
X.unif = samp.unif(nsamp.unif, mins = rep(0, ncol(X)), maxes = rep(1, ncol(X)))
colnames(X.unif) <- colnames(X)
pred.unif <- predict(fit.national, newdata = X.unif, type = 'UK')
# What part of parameter space is implied by Jit et al. (within the 95% CrI)?
ix.kept <- which(pred.unif$mean > 362000 & pred.unif$mean < 797000)
# Un-normalize and plot
X.unif.kept <- X.unif[ix.kept, ]
colnames(X.unif.kept) <- colnames(X)
X.unif.kept.unnorm <- unnormalize(X.unif.kept, un.mins = apply(uq3a[-rm.ix, 1:5],2,FUN = min), un.maxes = apply(uq3a[-rm.ix,1:5],2, FUN = max))
Darker shading indicates regions where more samples from the emulator fall within the estimated ranges of cumulative infections. The shapes in parameter space show how the parameters can offset each other.
blues = brewer.pal(9, 'Blues')
pairs(X.unif.kept.unnorm[, 1:3],
gap = 0,
panel = dfunc.up,
dfunc.col = blues
)
## Loading required package: MASS
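A more formal history match would use an implausibility measure, combining the emulator variance with an observation uncertainty, rather than simple rejection on the emulator mean. A sketch, assuming the Jit et al. central estimate of 527,000 cumulative cases as the observation and (crudely) treating the 95% CrI as roughly ±2 standard deviations:
# Implausibility sketch: |emulator mean - obs| / sqrt(emulator var + obs var).
# The choice of central estimate and the +/-2 sd reading of the (asymmetric)
# CrI are assumptions, not part of the original analysis.
z.obs <- 527000
sd.obs <- (797000 - 362000) / 4
impl <- abs(pred.unif$mean - z.obs) / sqrt(pred.unif$sd^2 + sd.obs^2)
ix.nroy <- which(impl < 3) # conventional implausibility cutoff of 3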
This is a preliminary analysis, and our understanding will benefit from a more formal history-matching exercise. Further comparisons with data may well reduce the “not implausible” parameter space. However, a Gaussian process emulator built with DiceKriging appears to be a viable analysis technique.