This vignette describes “Ensemble Zero”, our first scaled ensemble of Metawards for UQ. We explain the experimental protocol and the design, describe the model data we are publishing here, and give instructions for obtaining more output. We also provide tools for converting the model runs into quantities that are directly comparable with the data we have for hospital trusts from Leon. An Exeter NDA means we cannot host the hospital trust data here. If you are Exeter staff or a student, we can get you a copy. Otherwise, we will look for a publicly available version from the ONS (which may lag behind, or differ slightly from, what we will use) to host here.
We are using our own version of Metawards, described here. Metawards is a framework for simulating spatial infectious diseases and is currently set up for COVID-19. Our version includes demographics for hospitals, intensive care units, deaths from both of these, and the general population. We also have demographics for asymptomatics. Our choice of demographics is motivated by the need to calibrate to data, the most reliable of which is coming out of hospital trusts.
Ensemble Zero is run over England and Wales at ward level. It will be the last ensemble we run with this protocol. Our next version will run over the UK using LSOAs (Lower Layer Super Output Areas), which are nested, directly compatible with hospital trusts (for which we have data) and, perhaps most importantly, include Scotland. We have already run Metawards in this mode, but are missing a lookup table that would enable us to use the data (Leon has contacted the ONS for this; hopefully it is coming soon).
We have 16 parameters, varied over the ranges given in the model description. The model is run on HPC at Bristol using 512 cores; the ensemble we describe below took 8 hours. Using Chris Fenton’s compressor, we can store all of the output compressed in 89GB, and we have been donated an AWS cloud node with 16GB RAM, a 200GB hard disk and 4 cores, where the output is stored and where we have sudo access for post-processing. We must thank Christopher Woods, the HPE Catalyst program and the University of Bristol Advanced Computing Research Centre for their support in running and storing ensembles for us.
Our design vignette describes k-extended Latin Hypercubes for designing experiments with repeats, and we use them again here. Motivated by quantile emulation ideas (see here, here and here), we use \(20\) repeats and ensure we use repeats for all ensemble members.
The code for generating the design is here, for reproducibility. We choose a k-extended LHC with \(200\) members made up of \(5\) sub-LHCs of size \(40\): that is, a \(200\)-member LHC composed of \(5\) \(40\)-member LHCs, with the sub-LHCs and the larger hypercube optimally space filling and orthogonal with respect to each other, as per (Williamson 2015). The key benefit of this approach is that \(5\) different \(160\)-member LHCs can be used as training data (following the usual “10x” rule), with well-designed validation sets of size \(40\) to assess them. This opens up very robust diagnostic checking, which should be taken advantage of when an analysis might inform SPI-M. You can obtain the design via
design <- readRDS("data/EnsembleZero/inputs/design.rds")
parRanges <- readRDS("data/EnsembleZero/inputs/parRanges.rds")
For emulation purposes, you will want the design tibble.
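If you need design points on the model’s natural scale, you can rescale with parRanges. The sketch below is hedged: it assumes the design columns are scaled to \([0,1]\) and that parRanges stores one row per parameter with min and max columns. Both are assumptions, so check str(parRanges) first.

# Sketch: map a [0,1]-scaled design onto native parameter ranges.
# ASSUMPTIONS: design columns lie in [0,1]; parRanges has columns
# parameter, min and max. Verify with str(parRanges) before use.
designNative <- design
for (p in intersect(names(design), parRanges$parameter)) {
  rng <- parRanges[parRanges$parameter == p, ]
  designNative[[p]] <- rng$min + design[[p]] * (rng$max - rng$min)
}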
The design above generates \(4000\) runs of Metawards, takes 8 hours to complete on 512 cores and, even when compressed to within an inch of its life, is still too big for us to host here. As such, we have written extractors to obtain outputs from the data in the cloud that are around 120MB each, which can be used for UQ, or even stitched together if needed. Even extracting 3 outputs, once per week for each ward and only after lockdown, takes around 3 hours, and our AWS server runs out of memory during construction of this data (TJ has worked hard to introduce multiple scripts to capture and then combine the data). As such, we have been selective in what we are making available. However, if on reading the model description you would like an output not in the set here, we can obtain it for you. We are also working on an SQL solution that can be accessed by dplyr without needing to store the data in RAM.
The ensemble is post-processed using an R script you can look at in full here. I will reproduce some important parts of that script to highlight what we’ve done and why, and to give you an idea of what you need to change if you want something else.
First we set up some dates, days and weeks. Our idea is to return weekly averages for prevalence variables and total counts on the Sunday of the named week (when needed).
library(lubridate)

# Day 1 of the model is 1st January 2020; the ensemble runs for 178 days
startdate <- dmy("01/01/2020")
dates <- startdate + 0:177

# Intervention dates: lockdown and the first release
lockdownDate1 <- dmy("21/03/2020")
lockdownDate2 <- dmy("13/05/2020")

# Week number of each day; keep all weeks from one week before lockdown onwards
tweeks <- week(dates)
WEEKS <- unique(tweeks[dates >= (lockdownDate1 - 7)])
Our extraction then needs to loop over all run IDs, and then over all repeats per ID, unzipping the database, pulling the variables we need, and post-processing these before we delete the unzipped data to save space. Here is what we have done, with a line-by-line explanation, to introduce the data we have.
Here compact is the name of the database we have connected to, and we are extracting day, ward, H (the number in hospital), C (the number in intensive care), DH (total hospital deaths) and DC (total intensive care deaths). There are many more variables, described here.
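For concreteness, here is a minimal sketch of what that extraction might look like for a single unzipped run. It is not the production script (linked above): the SQLite backend, file name and table name are all assumptions.

library(DBI)
library(dplyr)

# Connect to the unzipped per-run database ("compact"); the SQLite backend,
# file name and table name are assumptions -- see the real script linked above
con <- DBI::dbConnect(RSQLite::SQLite(), "compact.db")
output <- tbl(con, "compact") %>%
  select(day, ward, H, C, DH, DC) %>%  # the six variables described above
  collect()                            # pull into RAM for the post-processing below
DBI::dbDisconnect(con)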
Our data has daily hospital prevalence, ICU prevalence and deaths in hospital per trust (not ward, but we’ll get there). Addressing deaths first: we cannot separate hospital and ICU deaths with what we have, so it makes sense to sum DH and DC and create a new variable called Deaths (DC and DH are already cumulative, so we take the maximum value each week). H and C are directly comparable to the data, but the data is very noisy. Considering H (in the data, not the ensemble), daily fluctuations represent patients being discharged, new COVID patients, patients being moved to ICU, and deaths, and as such can be quite noisy; similarly for ICU. We therefore take weekly averages of both in the ensemble to ensure we have something more comparable with the data. The code is
output <- mutate(output, week = week(dates[day])) %>%   # tag each day with its week number
  filter(week %in% WEEKS) %>%                           # keep weeks from just before lockdown
  group_by(ward, week) %>%
  summarise(Hprev = mean(H), Cprev = mean(C),           # weekly average prevalences
            Deaths = max(DC) + max(DH)) %>%             # cumulative counts: the weekly max is the total so far
  ungroup() %>%
  complete(ward = 1:8588, week = WEEKS,                 # reinstate the implicit zeros
           fill = list(Hprev = 0, Cprev = 0, Deaths = 0))
The last line ensures that values that should be 0, which Chris does not store (to save space in the database), are added back so that UQ will work. If you want something different (say, something from the asymptomatics class), you will need to provide code like this to us.
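For instance, a request for a weekly asymptomatic prevalence might look like the sketch below. A is a placeholder column name (consult the model description for the real variable), so treat this as a template rather than working code.

# Hypothetical: weekly mean asymptomatic prevalence per ward.
# "A" is a placeholder for the asymptomatics column; check the
# model description for the actual variable name.
outputA <- mutate(output, week = week(dates[day])) %>%
  filter(week %in% WEEKS) %>%
  group_by(ward, week) %>%
  summarise(Aprev = mean(A)) %>%
  ungroup() %>%
  complete(ward = 1:8588, week = WEEKS, fill = list(Aprev = 0))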
Because the data is so big, even for a single output at a time, such as Hprev as defined above, we cannot store the tibble in RAM for every week, even just the weeks since lockdown. Instead, we have produced weekly datasets for each of the 3 outputs described above. Each data file contains the value of a particular output across the ensemble and for all wards for a given week. We will perform our examples below with the hospital prevalences in week 12 (hprev_12.rds). This particular file is on the git repository. The rest of the data can be downloaded from here. Note, this is a temporary solution that doesn’t work for non-Exeter users, and is not fast enough even for those who are at Exeter. I am looking for a more general solution. In the meantime, if you are not at Exeter and reading this, please let me know and I will share a dropbox folder with you.
To load in one piece of the data (assuming you have downloaded Hprev_12.rds and placed it in the directory below):
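A minimal load might look like this; the path is an assumption, mirroring the inputs directory used earlier.

# Assumed location; adjust to wherever you placed the downloaded file
hprev12 <- readRDS("data/EnsembleZero/outputs/Hprev_12.rds")
hprev12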
## # A tibble: 34,352,000 x 4
## output ward replicate Hprev
## <chr> <int> <dbl> <dbl>
## 1 Ens0000 1 1 0
## 2 Ens0000 2 1 0
## 3 Ens0000 3 1 0.429
## 4 Ens0000 4 1 0.714
## 5 Ens0000 5 1 0
## 6 Ens0000 6 1 0.857
## 7 Ens0000 7 1 0
## 8 Ens0000 8 1 2.29
## 9 Ens0000 9 1 0.857
## 10 Ens0000 10 1 0.429
## # … with 34,351,990 more rows
Note that even this subset has over 34 million entries, and only contains the week 12 data.
In this section we provide examples of how to convert the tibble we have here into useful data frames for UQ. The ensemble data is provided in tidy form, so those proficient with tidy data analysis using dplyr etc. could ignore all of this. However, I will describe obtaining ward names, deriving local authorities and matching to hospital trusts, so at least a skim will be worthwhile.
We must thank Rob Challen for this work: he has mapped wards to hospital trusts. This mapping cannot be exact because wards straddle trusts, which is the main reason the next ensemble will move to LSOAs, where we don’t have this issue. A particular problem is that wards on or encompassing islands are not aligned to trusts and so are dropped by these steps. Note this is not a problem for history matching to single trusts, but of course could be for probabilistic calibration or for basis methods.
The lookup for hospital trusts can be read in:
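Something like the following should work; the object name matches the join code below, while the path is an assumption mirroring the inputs directory used earlier.

# Assumed location of the ward-to-trust lookup
WD11ToAcuteTrustIncWalesHB <- readRDS("data/EnsembleZero/inputs/WD11ToAcuteTrustIncWalesHB.rds")
WD11ToAcuteTrustIncWalesHB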
## # A tibble: 8,532 x 4
## WD11CD WD11NM trustId trustName
## <chr> <chr> <chr> <chr>
## 1 E050006… Harper Green RMC Bolton NHS Foundation Trust
## 2 E050031… Hawcoat RTX University Hospitals Of Morecambe Bay N…
## 3 E050068… Yeovil Central RA4 Yeovil District Hospital NHS Foundation…
## 4 E050008… Wigan Central RRF Wrightington, Wigan and Leigh NHS Found…
## 5 E050015… Hardwick RVW North Tees and Hartlepool NHS Foundatio…
## 6 E050063… Woodlands RCB York Teaching Hospital NHS Foundation T…
## 7 E050076… Central RYR Western Sussex Hospitals NHS Foundation…
## 8 E050003… Brunel RAS The Hillingdon Hospitals NHS Foundation…
## 9 E050000… Farringdon Wi… R1H Barts Health NHS Trust
## 10 E050019… Horfield RVJ North Bristol NHS Trust
## # … with 8,522 more rows
The codes and names can be used to bind to the ensemble (here NewOut, the week-12 data with the ward codes WD11CD and WD11NM attached) via
NewWithHospital <- inner_join(WD11ToAcuteTrustIncWalesHB, NewOut, by = c("WD11CD", "WD11NM"))
NewWithHospital
## # A tibble: 34,120,000 x 10
## WD11CD WD11NM trustId trustName LAD11CD LAD11NM ward output replicate
## <chr> <chr> <chr> <chr> <chr> <chr> <dbl> <chr> <dbl>
## 1 E0500… Harpe… RMC Bolton N… E08000… Bolton 590 Ens00… 1
## 2 E0500… Harpe… RMC Bolton N… E08000… Bolton 590 Ens00… 2
## 3 E0500… Harpe… RMC Bolton N… E08000… Bolton 590 Ens00… 3
## 4 E0500… Harpe… RMC Bolton N… E08000… Bolton 590 Ens00… 4
## 5 E0500… Harpe… RMC Bolton N… E08000… Bolton 590 Ens00… 5
## 6 E0500… Harpe… RMC Bolton N… E08000… Bolton 590 Ens00… 6
## 7 E0500… Harpe… RMC Bolton N… E08000… Bolton 590 Ens00… 7
## 8 E0500… Harpe… RMC Bolton N… E08000… Bolton 590 Ens00… 8
## 9 E0500… Harpe… RMC Bolton N… E08000… Bolton 590 Ens00… 9
## 10 E0500… Harpe… RMC Bolton N… E08000… Bolton 590 Ens00… 10
## # … with 34,119,990 more rows, and 1 more variable: Hprev <dbl>
and combining the wards into trusts
TrustData <- group_by(NewWithHospital, trustId, trustName, output, replicate) %>%
  summarise(Hmean = mean(Hprev)) %>%   # average hospital prevalence over each trust's wards
  ungroup()
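As a quick usage check (the trust and run IDs are taken from the printouts above), you can look at the spread of Hmean over the 20 repeats for one trust and one design point:

# Bolton NHS Foundation Trust (RMC) under the first design point
filter(TrustData, trustId == "RMC", output == "Ens0000")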
Finally, to prepare the data for emulation, we need to attach the design points and ensure that the spatial units we want are in the columns. Suppose we want to emulate hospital prevalence in week 12 by hospital trust:
library(tidyr)
EmulateOut <- dplyr::select(TrustData, output, trustId, replicate, Hmean) %>%
  pivot_wider(names_from = trustId, values_from = Hmean) %>%   # one column per trust
  dplyr::select(-replicate)
ToEmulate <- inner_join(design, EmulateOut, by="output")
head(ToEmulate)
## # A tibble: 6 x 152
## r_zero incubation_time infectious_time hospital_time critical_time
## <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 0.801 0.746 0.952 0.574 0.940
## 2 0.801 0.746 0.952 0.574 0.940
## 3 0.801 0.746 0.952 0.574 0.940
## 4 0.801 0.746 0.952 0.574 0.940
## 5 0.801 0.746 0.952 0.574 0.940
## 6 0.801 0.746 0.952 0.574 0.940
## # … with 147 more variables: lock_1_restrict <dbl>, lock_2_release <dbl>,
## # pEA <dbl>, pIH <dbl>, pIRprime <dbl>, pHC <dbl>, pHRprime <dbl>,
## # pCR <dbl>, GP_A <dbl>, GP_H <dbl>, GP_C <dbl>, output <chr>,
## # repeats <dbl>, `7A1` <dbl>, `7A2` <dbl>, `7A3` <dbl>, `7A4` <dbl>,
## # `7A5` <dbl>, `7A6` <dbl>, R0B <dbl>, R1F <dbl>, R1H <dbl>, RA2 <dbl>,
## # RA3 <dbl>, RA4 <dbl>, RA7 <dbl>, RA9 <dbl>, RAE <dbl>, RAJ <dbl>,
## # RAL <dbl>, RAP <dbl>, RAS <dbl>, RAX <dbl>, RBD <dbl>, RBK <dbl>,
## # RBL <dbl>, RBN <dbl>, RBT <dbl>, RBZ <dbl>, RC1 <dbl>, RC9 <dbl>,
## # RCB <dbl>, RCD <dbl>, RCF <dbl>, RCU <dbl>, RCX <dbl>, RD1 <dbl>,
## # RD3 <dbl>, RD8 <dbl>, RDD <dbl>, RDE <dbl>, RDU <dbl>, RDZ <dbl>,
## # REF <dbl>, REM <dbl>, RF4 <dbl>, RFF <dbl>, RFR <dbl>, RFS <dbl>,
## # RGN <dbl>, RGP <dbl>, RGR <dbl>, RGT <dbl>, RH5 <dbl>, RH8 <dbl>,
## # RHM <dbl>, RHQ <dbl>, RHU <dbl>, RHW <dbl>, RJ1 <dbl>, RJ2 <dbl>,
## # RJ6 <dbl>, RJ7 <dbl>, RJC <dbl>, RJE <dbl>, RJL <dbl>, RJN <dbl>,
## # RJR <dbl>, RJZ <dbl>, RK5 <dbl>, RK9 <dbl>, RKB <dbl>, RKE <dbl>,
## # RL4 <dbl>, RLQ <dbl>, RLT <dbl>, RM1 <dbl>, RM3 <dbl>, RMC <dbl>,
## # RMP <dbl>, RN3 <dbl>, RN5 <dbl>, RN7 <dbl>, RNA <dbl>, RNN <dbl>,
## # RNQ <dbl>, RNS <dbl>, RNZ <dbl>, RP5 <dbl>, RPA <dbl>, …
From here you can use any of the emulation methods tried on our site so far. Please explore some of the vignettes!
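For example, here is a minimal Gaussian process sketch for a single trust using DiceKriging, purely as an illustration rather than our recommended emulator. It averages the repeats so that the design rows are unique (the quantile emulation vignettes use the repeats more cleverly), and uses R1H (Barts Health NHS Trust), one of the trust columns in ToEmulate.

library(DiceKriging)
library(dplyr)

# The 16 Metawards parameters (the first columns of ToEmulate)
pars <- c("r_zero", "incubation_time", "infectious_time", "hospital_time",
          "critical_time", "lock_1_restrict", "lock_2_release", "pEA", "pIH",
          "pIRprime", "pHC", "pHRprime", "pCR", "GP_A", "GP_H", "GP_C")

# Average the 20 repeats so each design point appears once
meanDat <- ToEmulate %>%
  group_by(across(all_of(pars))) %>%
  summarise(R1H = mean(R1H), .groups = "drop")

# GP emulator for week-12 hospital prevalence at Barts Health (R1H);
# a nugget soaks up the residual stochastic variation
fit <- km(design = as.matrix(meanDat[, pars]), response = meanDat$R1H,
          covtype = "matern5_2", nugget.estim = TRUE)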
If you are an Exeter researcher, please get in touch and I will send you the data on hospital deaths, admissions and ICU by hospital trust. If not, contact me anyway and we will explore whether your university has a similar agreement.
Firstly, this is the real thing. Please help us emulate and calibrate the model! Please post all vignettes, whether EDA, emulation, sensitivity analysis, calibration or whatever, with a title beginning “Ensemble Zero: …”, e.g. “Ensemble Zero: Solving everything, so you don’t have to”.
Williamson, Daniel. 2015. “Exploratory Designs for Computer Experiments Using K-Extended Latin Hypercubes.” Environmetrics 26 (4): 268–83. https://doi.org/10.1002/env.2335.