Licensed under the Creative Commons attribution-noncommercial license, http://creativecommons.org/licenses/by-nc/3.0/. Please share and remix noncommercially, mentioning its origin.
Important Note: These materials have been updated for use with pomp version 2.8. As of version 2, pomp syntax has changed substantially. These changes are documented on the pomp website.
The models we’ve worked on so far have all been deterministic. It’s important to realize that determinism is an unrealistic assumption: stochasticity, both environmental and demographic, is an ever-present feature of real systems. Here we expand our modeling toolkit to include some methods for studying stochastic versions of compartmental models. In particular, we introduce the concept of birth-death processes and Gillespie’s (1977) direct method for simulating such processes.
First, we aim to understand the theory of birth-death processes in general. In the section above, we studied a model that was deterministic, continuous in time, and continuous in the state variables \(S\), \(I\), and \(R\). In this section, we retain the biologically realistic assumption of continuous time, but also require that the numbers of susceptible, infected, and recovered individuals be integers (we are no longer modeling proportions). Of course, if the equation for \(dI/dt\) specified by the model should require that 1.8 individuals be infected in some interval of time, the only way to interpret such a model is that this number is an average. And if it is an average, then we are implicitly assuming that the process is probabilistic and evolves stochastically. This is, in fact, a perfectly reasonable assumption, since not all infected individuals will give rise to exactly the same number of secondary infections, nor recover after exactly the same interval of time. Since the random variation considered in this example is variation among individuals, such a scenario is referred to as demographic stochasticity.
At this point, there are still a number of different directions we could go. We will make two assumptions that, together with the assumption of demographic stochasticity, uniquely determine a whole class of models. First, we assume that the epidemic is a Markov chain. A Markov chain is a stochastic process with the property that the future state of the system depends only on the present state and is conditionally independent of all past states. This is known as the memoryless property. Second, we assume that changes in the state variables (increments and decrements) occur one at a time. That is, we cannot have two individuals simultaneously undergoing a “transition”, where “transition” refers to any change in the state variables (birth, death, conversion between classes, etc.). Any time an individual undergoes such a transition, we will call it an “event”. Accordingly, this kind of modeling is sometimes referred to as “event-driven simulation”. For historical reasons, a continuous-time Markov chain with increments and decrements of one is known as a birth-death process. (In general, a Markov chain with integer-valued increments and decrements is known as a jump process.)
[Note: An aside about birth-death processes is that notation varies considerably from author to author, even though the authors are referring to exactly the same stochastic process. In particular, the computational literature uses a notation borrowed from chemistry, e.g., \(S \stackrel{\mu S}{\longrightarrow} {S-1}\), probably because the algorithms that are commonly used to simulate birth-death processes were developed in the context of chemical kinetics. Probabilists, by contrast, often use generating-function notation, and scientists who come to birth-death processes from a background in statistical mechanics represent the process using the forward Kolmogorov equation or Fokker-Planck equation (a partial differential equation). Textbooks in epidemiology differ, too. The point is that once you learn to “read” the different notations, they are all saying the same thing, i.e., that changes in the state variables occur according to such and such rates. It’s these rates that are the basis of simulation modeling.]
To illustrate the approach, we’ll start with the simple closed stochastic SI epidemic. Because the population is closed (no births, deaths, or migration), we represent the total population size as a constant \(N\). Because we are considering abundances, not proportions, we adopt the convention used in the lecture portion of representing susceptibles, infecteds, and recovereds by \(X\), \(Y\), and \(Z\), and accordingly denote the initial number of infected individuals by \(Y_0\). This gives the initial number of susceptible individuals \(X_0=N-Y_0\). By analogy to our deterministic model, we want the average rate at which susceptibles individually become infectious (the force of infection) to be \(\beta\,\frac{Y}{N}\) and the average rate at which the population as a whole converts from susceptible to infectious to be \(\beta\,\frac{Y}{N}\,X\). That is, at average rate \(\beta\,\frac{Y}{N}\,X\), the value of \(X\) is decremented by one and the value of \(Y\) is incremented by one. But when do these increments and decrements occur? To answer this, we turn to our assumption that the epidemic process is Markovian.
If we can determine what the sequence of “inter-event times” is, then we have fully specified the trajectory of the epidemic, for we know that at each of those times the number of susceptibles decreases by one and the number of infecteds increases by one. So, what are the inter-event times? The memoryless property of the continuous time Markov chain requires that the time between events is independent of the time between any other set of events, and, moreover, that if we were to investigate the process at any point in time between events the time to the next event would be independent of the time elapsed since the previous event. This defines the birth-death process as a kind of Poisson process. There is only one distribution for the inter-event times that has this property, the exponential distribution. Since we know how to simulate exponentially distributed random variables, we just simulate the sequence of event times and make our increments and decrements accordingly. This approach is known as Gillespie’s direct method.
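As a quick aside (this is not part of the epidemic code that follows), we can check the memoryless property numerically by simulation. The rate used here is arbitrary; by memorylessness, the conditional probability \(\mathrm{P}(T>1.5 \mid T>0.5)\) should match the unconditional probability \(\mathrm{P}(T>1)\).
set.seed(12345)
tt <- rexp(n=100000,rate=1)  # a large sample of exponential waiting times
mean(tt[tt>0.5]>1.5)         # estimate of P(T > 1.5 | T > 0.5)
mean(tt>1)                   # estimate of P(T > 1); should be nearly identical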
Let’s implement Gillespie’s direct method for an SI epidemic with demographic stochasticity. It’s sometimes convenient to express a model such as this using chemical equations. In this case, it’s just one “reaction”. \[\mathrm{S} + \mathrm{I} \stackrel{\frac{\beta}{N}}{\longrightarrow} 2\,\mathrm{I}\]
Here’s a function implementing Gillespie’s direct method. Note that it just iterates the process one event at a time, relying on another function that actually computes each step.
SI.simul <- function (x, params, nstep) {
  ## set up an array to store results
  output <- array(dim=c(nstep+1,3))
  ## name the variables in the array
  colnames(output) <- c("time","X","Y")
  output[1,] <- x # initial condition
  ## iterate the model for nstep events
  for (k in 1:nstep) {
    ## update x and store result
    output[k+1,] <- x <- SI.onestep(x,params)
  }
  as.data.frame(output)
}
Here’s the function that takes the step.
SI.onestep <- function (x, params) {
  ## the second element of x is number of susceptibles X
  X <- x[2]
  ## the third element of x is number of infecteds Y
  Y <- x[3]
  ## the total event rate is beta*X*Y/N, where N = X+Y for this closed SI model
  event.rate <- params["beta"]*X*Y/(X+Y)
  ## how much time do we wait for this event?
  tau <- rexp(n=1,rate=event.rate)
  ## the event converts one susceptible into an infective
  c(tau=x[1]+tau,X=X-1,Y=Y+1)
}
Using the same parameters as before, we run some simulations and plot.
Note that because time is continuous and the process is stochastic, the number of events that occur in a specified amount of time will vary. Instead of running the simulations for a fixed length of time, therefore, we simulate a pre-set number of “events”.
set.seed(38499583) # make results repeatable
nsims <- 10 # number of simulations to run
pop.size <- 200 # size of the population
Y0 <- 2 # initial number infected
nstep <- pop.size-Y0 # run until everyone infected
xstart <- c(time=0,X=(pop.size-Y0),Y=Y0) # initial conditions
params <- c(beta=60,gamma=365/13) # parameters (R0=2.1)
simdat <- vector(mode='list',length=nsims) # to store simulated data
for (k in 1:nsims) {
simdat[[k]] <- SI.simul(xstart,params,nstep)
}
trange <- range(sapply(simdat,function(x)range(x$time)))
yrange <- range(sapply(simdat,function(x)range(x$Y)))
plot(trange,yrange,type='n',xlab='time',ylab="Y",bty='l')
for (k in 1:nsims)
lines(Y~time,data=simdat[[k]],col=k,type='o',pch=16)
Simulate the stochastic SI model using Gillespie’s direct method. Experiment with the initial number of infecteds (\(Y_0\)) and with the total population size (\(N\)). What effects do these have on the predictability of the epidemic? What effects do these have on the variability of the final outbreak size?
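One possible way to get started on this exercise is sketched below. The values of pop.size and Y0 are arbitrary choices, and the time taken for the epidemic to run its course is just one convenient summary of each realization; the sketch reuses nsims and params from above.
pop.size <- 1000                        # a larger population (arbitrary choice)
Y0 <- 1                                 # a single initial infective (arbitrary choice)
nstep <- pop.size-Y0                    # run until everyone is infected
xstart <- c(time=0,X=pop.size-Y0,Y=Y0)  # initial conditions
simdat <- vector(mode='list',length=nsims)
for (k in 1:nsims) {
  simdat[[k]] <- SI.simul(xstart,params,nstep)
}
sapply(simdat,function(x)max(x$time))   # time taken by each epidemic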
The techniques we’ll take up next will allow us to go beyond the simple SI model to models with an arbitrary number of compartments. For concreteness, let’s look at a stochastic SIR epidemic with demography. In particular, we have \[\mathrm{S} + \mathrm{I} \stackrel{\frac{\beta}{N}}{\longrightarrow} 2\,\mathrm{I} \qquad \mathrm{I} \stackrel{\gamma}{\longrightarrow} \mathrm{R}\] \[\mathrm{S} \stackrel{\mu}{\longrightarrow} \emptyset \qquad \mathrm{I} \stackrel{\mu}{\longrightarrow} \emptyset \qquad \mathrm{R} \stackrel{\mu}{\longrightarrow} \emptyset\] \[\mathrm{S} \stackrel{\mu}{\longrightarrow} 2\,\mathrm{S} \qquad \mathrm{I} \stackrel{\mu}{\longrightarrow} \mathrm{I} + \mathrm{S} \qquad \mathrm{R} \stackrel{\mu}{\longrightarrow} \mathrm{R} + \mathrm{S}\] For present purposes, the main difference between the SI model and the SIR model is that in the latter, more than one kind of event can occur. This means we need to account for two things: when the next event occurs, and which kind of event it is.
Since the transition processes are independent, we can calculate a “total event rate” as the sum of the individual rates. That is, the waiting time to the next event, regardless of what kind it is, is exponentially distributed, with rate equal to the sum of the rates of the individual processes. (In the implementation below, the three birth reactions are lumped into a single event with rate \(\mu\,N\), so there are six distinct rates.) Once we know when an event will occur, the next step is to determine which event it is. This is easy, since each event must occur with probability proportional to its rate. This means that we can randomly choose which event occurs, but we must do so in such a way that each transition is selected in proportion to its contribution to the total rate.
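As a small illustration (separate from the model code that follows), we can check that drawing events with probabilities proportional to their rates reproduces those rates as long-run frequencies. The rates here are made up purely for the example.
toy.rates <- c(a=2,b=1,c=1)  # made-up rates for three hypothetical events
events <- sample.int(n=3,size=10000,replace=TRUE,prob=toy.rates)
table(events)/10000          # frequencies should be close to 0.5, 0.25, 0.25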
Let’s see how to implement this in R. As before, we first define a one-step function.
SIR.onestep <- function (x, params) {
  X <- x[2]
  Y <- x[3]
  Z <- x[4]
  N <- X+Y+Z
  beta <- params["beta"]
  mu <- params["mu"]
  gamma <- params["gamma"]
  ## each individual rate
  rates <- c(
    birth=mu*N,
    infection=beta*X*Y/N,
    recovery=gamma*Y,
    sdeath=mu*X,
    ideath=mu*Y,
    rdeath=mu*Z
  )
  ## what changes with each event?
  transitions <- list(
    birth=c(1,0,0),
    infection=c(-1,1,0),
    recovery=c(0,-1,1),
    sdeath=c(-1,0,0),
    ideath=c(0,-1,0),
    rdeath=c(0,0,-1)
  )
  ## total event rate
  total.rate <- sum(rates)
  ## waiting time
  if (total.rate==0)
    tau <- Inf
  else
    tau <- rexp(n=1,rate=total.rate)
  ## which event occurs?
  event <- sample.int(n=6,size=1,prob=rates/total.rate)
  x+c(tau,transitions[[event]])
}
Also as before, we set parameters and loop through the process. We do things slightly differently here: we’ll stop the simulation as soon as there are no more infectives. For safety’s sake, however, we’ll put an upper bound on the amount of work we’ll do.
SIR.simul <- function (x, params, maxstep = 10000) {
  output <- array(dim=c(maxstep+1,4))
  colnames(output) <- names(x)
  output[1,] <- x
  k <- 1
  ## loop until either k > maxstep or
  ## there are no more infectives
  while ((k <= maxstep) && (x["Y"] > 0)) {
    k <- k+1
    output[k,] <- x <- SIR.onestep(x,params)
  }
  as.data.frame(output[1:k,])
}
Now let’s repeat for 10 runs and plot.
set.seed(56856583)
nsims <- 10
xstart <- c(time=0,X=392,Y=8,Z=0) #initial conditions
params <- c(mu=0.02,beta=60,gamma=365/13) #parameters
require(plyr)
simdat <- rdply(
nsims,
SIR.simul(xstart,params)
)
head(simdat)
## .n time X Y Z
## 1 1 0.000000000 392 8 0
## 2 1 0.002728883 391 9 0
## 3 1 0.007003707 390 10 0
## 4 1 0.008856696 390 9 1
## 5 1 0.010046872 389 10 1
## 6 1 0.010658383 389 9 2
plot(Y~time,data=simdat,type='n')
d_ply(simdat,".n",function(x)lines(Y~time,data=x,col=.n))
Check out the simdat data frame created by the above code. Use class, head, tail, str, and plot to examine it.
Simulate the stochastic SIR model using Gillespie’s direct method. As before, experiment with the initial number of infecteds (\(Y_0\)) and with the total population size (\(N\)). What effects do these have on the predictability of the epidemic?
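One way to begin is to summarize each realization in simdat by its final outbreak size; here we take this to be the number recovered, Z, at the end of each run (with mortality this slightly undercounts the number ever infected, but it serves for a rough comparison). This uses the plyr functions loaded above.
finals <- ddply(simdat,".n",summarize,final.size=tail(Z,1))  # one row per realization
hist(finals$final.size,xlab="final size",main="")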
We can achieve much faster speeds using the implementation of Gillespie’s algorithm in pomp.
The following codes implement the SIR simulation above in pomp.
library(pomp)
pomp(
data=data.frame(
time=seq(0.01,0.5,by=0.01),
reports=NA
),
times="time",t0=0,
rprocess=gillespie_hl(
birth=list("rate = mu*N;",c(N=1,X=1,Y=0,Z=0,cases=0)),
deathS=list("rate=mu*X;",c(N=-1,X=-1,Y=0,Z=0,cases=0)),
deathI=list("rate=mu*Y;",c(N=-1,X=0,Y=-1,Z=0,cases=0)),
deathR=list("rate=mu*Z;",c(N=-1,X=0,Y=0,Z=-1,cases=0)),
infection=list("rate=Beta*X*Y/N;",c(N=0,X=-1,Y=1,Z=0,cases=1)),
recovery=list("rate=gamma*Y;",c(N=0,X=0,Y=-1,Z=1,cases=0)),
hmax=0.01
),
rmeasure=Csnippet("reports=rbinom(cases,rho);"),
paramnames=c("rho","mu","Beta","gamma"),
statenames=c("N","X","Y","Z","cases"),
accumvars=c("cases"),
params=c(X.0=392,Y.0=8,Z.0=0,N.0=400,cases.0=0,
mu=0.02,Beta=60,gamma=365/13,rho=0.5)
) -> sir
simulate(sir,nsim=10,format="data.frame") -> sims
library(ggplot2)
library(reshape2)
ggplot(melt(sims,id=c("time",".id")),
aes(x=time,y=value,group=.id,color=.id))+
geom_line()+
guides(color=FALSE)+
facet_grid(variable~.,scales="free_y")
From the deterministic models, we know that \(R_0=1\) is a critical or threshold value. When \(R_0>1\), the deterministic models always predict an outbreak; when \(R_0<1\), they predict that the disease will die out. What happens when we start to account for stochasticity?
The following codes perform many simulations at each of several values of \(R_0\). These simulations can be used to determine the shape of the distribution of epidemic final sizes.
R0vals <- c(0.5,3) # R0 values to explore
params <- c(mu=0.02,Beta=NA,gamma=365/13,rho=0.5,
X.0=392,Y.0=8,Z.0=0,
N.0=400,cases.0=0)
library(foreach)
library(doParallel)
registerDoParallel()
foreach (R0=R0vals) %dopar% {
params["Beta"] <- R0*365/13
simulate(sir,params=params,nsim=100,format="data.frame",times=0.5) -> sims
cbind(sims,R0=R0)
} -> simdat
library(plyr)
ldply(simdat) -> simdat
subset(simdat,time==max(time),select=c(Z,R0)) -> simfs
We can plot these distributions using histograms.
ggplot(simfs,aes(x=Z,fill=R0,group=R0))+
geom_histogram(binwidth=10)+
labs(x="final size")
Use the codes above to explore the final size in a neighborhood of the critical threshold. How do the results from the deterministic and stochastic models differ?
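A possible starting point is to rerun the block above with a denser grid of \(R_0\) values straddling 1; the particular values below are arbitrary, and facet_wrap is used so that each \(R_0\) gets its own histogram panel.
R0vals <- seq(from=0.6,to=1.6,by=0.2)  # arbitrary grid straddling R0 = 1
foreach (R0=R0vals) %dopar% {
  params["Beta"] <- R0*365/13
  simulate(sir,params=params,nsim=100,format="data.frame",times=0.5) -> sims
  cbind(sims,R0=R0)
} -> simdat
ldply(simdat) -> simdat
subset(simdat,time==max(time),select=c(Z,R0)) -> simfs
ggplot(simfs,aes(x=Z,group=R0))+
  geom_histogram(binwidth=10)+
  facet_wrap(~R0)+
  labs(x="final size")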
Use the stochastic simulator to generate some “data” for known parameter values: \(R_0=2\), infectious period of 13 da, host lifespan of 50 yr. Estimate \(R_0\) for each simulated epidemic using the invasion-phase growth rate method. Estimate \(R_0\) and \(\gamma\) using the trajectory-matching approach. Comment on the agreement between the true parameters and your estimates.
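The estimation methods mentioned here are covered elsewhere in the course, so the sketch below shows only the data-generation step, using the pomp object sir constructed above. Here gamma = 365/13 corresponds to a 13-day infectious period and mu = 1/50 to a 50-year lifespan; following the convention in the codes above, Beta is set via R0*gamma, neglecting the small contribution of mortality.
params <- c(mu=1/50,Beta=2*365/13,gamma=365/13,rho=0.5,
            X.0=392,Y.0=8,Z.0=0,N.0=400,cases.0=0)
simulate(sir,params=params,nsim=5,format="data.frame") -> fake.data
head(fake.data)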
The Gillespie algorithm gives us a way to generate exact realizations when stochasticity is purely demographic. Very frequently, we wish to consider models with environmental stochasticity or we are thinking about large populations, for which the Gillespie algorithm is impractically slow. The stochastic differential equation (SDE) formalism is a convenient approximation under these circumstances. It represents a more coarse-grained approach to the epidemiological dynamics that can nevertheless capture many important features at relatively modest computational cost.
Here, we’ll examine an SDE formulation of the SIR model with demography and implement some codes that allow us to simulate realizations of this model. When working with compartmental models, the mathematics becomes more convenient if we think first in terms of the fluxes between the compartments. A diagram of the SIR model is shown below, with emphasis on the fluxes. Associated with each flux is a counting process, i.e., a nondecreasing, integer-valued stochastic process. In particular, for any pair of compartments X, Y, let \(N_{XY}\) be the cumulative number of individuals that have passed directly from X to Y since the arbitrary time origin \(t_0\); \(N_{XY}\) is a counting process.
\(N_{XY}\) represents the cumulative number of individuals that have moved between compartments X and Y.
To formulate the SDE version of the SIR model, we write stochastic differential equations for the fluxes. For example, we might propose the following: \[dN_{{\emptyset}S} = \mu\,(X+Y+Z)\,dt + \sqrt{\mu\,(X+Y+Z)}\,dW_{{\emptyset}S}\] \[dN_{SI} = \frac{\beta\,X\,Y}{X+Y+Z}\,dt + \sqrt{\frac{\beta\,X\,Y}{X+Y+Z}}\,dW_{SI}\] \[dN_{IR} = \gamma\,Y\,dt + \sqrt{\gamma\,Y}\,dW_{IR}\] \[dN_{S{\emptyset}} = \mu\,X\,dt + \sqrt{\mu\,X}\,dW_{S{\emptyset}}\] \[dN_{I{\emptyset}} = \mu\,Y\,dt + \sqrt{\mu\,Y}\,dW_{I{\emptyset}}\] \[dN_{R{\emptyset}} = \mu\,Z\,dt + \sqrt{\mu\,Z}\,dW_{R{\emptyset}}\] As usual, we think of the \(dW\) terms as independent \(\mathrm{normal}(0,\sqrt{dt})\) random variables. We complete the model by writing SDEs for the state variables \(X\), \(Y\), and \(Z\): \[dX = dN_{{\emptyset}S}-dN_{SI}-dN_{S{\emptyset}}\] \[dY = dN_{SI}-dN_{IR}-dN_{I{\emptyset}}\] \[dZ = dN_{IR}-dN_{R{\emptyset}}\] The easiest way to simulate realizations of an SDE model such as this is to use the Euler-Maruyama method. The following code implements Euler-Maruyama for an arbitrary system of SDE.
eulmar <- function (func, xstart, times, dt, ...) {
  ## array to hold the state at each of the requested times
  out <- array(
    dim=c(length(times),length(xstart)),
    dimnames=list(NULL,names(xstart))
  )
  out[1,] <- x <- xstart
  t <- times[1]
  for (k in seq.int(from=2,to=length(times))) {
    ## take Euler-Maruyama steps of size dt until we reach times[k]
    while (t < times[k]) {
      dx <- func(t,x,dt,...)
      x <- x+dx
      t <- t+dt
    }
    out[k,] <- x
  }
  as.data.frame(cbind(time=times,out))
}
In the eulmar function, it is assumed that func is a function that takes one Euler-Maruyama step of size dt. Here’s an example of such a function for the SIR model.
sir.eulerstep <- function (t, x, dt, mu, Beta, gamma) {
  N <- sum(x)
  ## mean increments of the six counting processes over a time step dt:
  ## birth, infection, recovery, and death from each of S, I, R
  means <- c(
    mu*N,
    Beta*x[1]*x[2]/N,
    gamma*x[2],
    mu*x
  )*dt
  ## Gaussian increments with variance equal to the mean
  dn <- rnorm(n=6,mean=means,sd=sqrt(means))
  ## resulting changes in X, Y, Z
  c(
    dn[1]-dn[2]-dn[4],
    dn[2]-dn[3]-dn[5],
    dn[3]-dn[6]
  )
}
set.seed(238785319)
xstart <- c(X=392,Y=8,Z=0) #initial conditions
times <- seq(from=0,to=0.5,by=1/365)
x <- eulmar(func=sir.eulerstep,
xstart=xstart,
mu=0.02, Beta=50, gamma=365/13,
times=times,
dt=0.001)
plot(Y~time,data=x,type='o')
In this case, it will be useful to transform the state variables (one advantage of working on the log scale is that the state variables, recovered by exponentiation, necessarily remain positive). To do this, we must use the Itô formula: \[df(X) = f'(X)\,dX+\tfrac{1}{2}\,f''(X)\,dX^2\] together with the rules \[dt^2 = 0 \qquad dW\,dt = 0 \qquad dW^2 = dt.\] Transforming the state variables via \(x=\log{X}\), \(y=\log{Y}\), and \(z=\log{Z}\), we get \[dx = \frac{1}{X}\,\left(dN_{{\emptyset}S}-dN_{SI}-dN_{S{\emptyset}}\right)-\frac{1}{2\,X^2}\,\left(\mu\,(X+Y+Z)+\mu\,X+\frac{\beta\,X\,Y}{X+Y+Z}\right)\,dt\] \[dy = \frac{1}{Y}\,\left(dN_{SI}-dN_{IR}-dN_{I{\emptyset}}\right)-\frac{1}{2\,Y^2}\,\left(\frac{\beta\,X\,Y}{X+Y+Z}+\mu\,Y+\gamma\,Y\right)\,dt\] \[dz = \frac{1}{Z}\,\left(dN_{IR}-dN_{R{\emptyset}}\right)-\frac{1}{2\,Z^2}\,\left(\gamma\,Y+\mu\,Z\right)\,dt\]
log.sir.eulerstep <- function (t, x, dt, mu, Beta, gamma) {
  ## x arrives on the log scale; back-transform to compute the rates
  x <- exp(x)
  N <- sum(x)
  means <- c(
    mu*N,
    Beta*x[1]*x[2]/N,
    gamma*x[2],
    mu*x
  )*dt
  ## v holds the infinitesimal variances needed for the Ito correction
  v <- means
  dn <- rnorm(n=6,mean=means,sd=sqrt(means))
  ## changes in log(X), log(Y), log(Z), including the Ito correction terms
  c(
    (dn[1]-dn[2]-dn[4])/x[1]-sum(v[c(1,2,4)])/2/x[1]/x[1],
    (dn[2]-dn[3]-dn[5])/x[2]-sum(v[c(2,3,5)])/2/x[2]/x[2],
    (dn[3]-dn[6])/x[3]-sum(v[c(3,6)])/2/x[3]/x[3]
  )
}
set.seed(23785319)
xstart <- c(X=392,Y=8,Z=1) # initial conditions (note Z > 0 so that log(Z) is finite)
times <- seq(from=0,to=0.5,by=1/365)
x <- eulmar(func=log.sir.eulerstep,
xstart=log(xstart),
mu=0.02, Beta=50, gamma=365/13,
times=times,
dt=0.001)
within(
  x,
  {
    X <- exp(X)
    Y <- exp(Y)
    Z <- exp(Z)
  }
) -> x
plot(Y~time,data=x,type='o')
require(plyr)
simdat <- rdply(
  20,
  within(
    eulmar(func=log.sir.eulerstep,
           xstart=log(xstart),
           mu=0.02, Beta=50, gamma=365/13,
           times=times,
           dt=0.001),
    {
      X <- exp(X)
      Y <- exp(Y)
      Z <- exp(Z)
    }
  )
)
plot(Y~time,data=simdat,type='n')
d_ply(simdat,".n",function(x)lines(Y~time,data=x,col=.n))
Gardiner, C. W. 2009. Stochastic methods: A handbook for the natural and social sciences. 4th edition. Springer, Berlin.
Gillespie, D. T. 1977. Exact stochastic simulation of coupled chemical reactions. Journal of Physical Chemistry 81:2340–2361.