Important Note: These materials have been superseded by materials provided as part of the SISMID short course on Simulation-based Inference.
Licensed under the Creative Commons Attribution-NonCommercial license. Please share and remix noncommercially, mentioning its origin.
This document has its origins in the SISMID short course on Simulation-based Inference given by Aaron King and Edward Ionides.
Produced with R version 3.3.1 and pomp version 1.7.5.1.
Students completing this lesson will:
- understand the likelihood function and its central role in inference for POMP models;
- be able to estimate the likelihood of a POMP model using the particle filter, as implemented in pomp's pfilter;
- be able to visualize likelihood slices and surfaces and use them, together with standard optimization routines, to obtain rough parameter estimates.
Data are a sequence of $N$ observations, denoted $y^*_{1:N}$. A statistical model is a density function $f(y_{1:N};\theta)$ which defines a probability distribution for each value of a parameter vector $\theta$. To perform statistical inference, we must decide, among other things, for which (if any) values of $\theta$ it is reasonable to model $y^*_{1:N}$ as a random draw from $f(y_{1:N};\theta)$.
The likelihood function is the density function evaluated at the data. It is usually convenient to work with the log likelihood function, $$\ell(\theta)=\log f(y^*_{1:N};\theta).$$
Recall that the probability distribution $f(y_{1:N};\theta)$ defines a random variable $Y_{1:N}$ for which probabilities can be computed as integrals of $f(y_{1:N};\theta)$. Specifically, for any event $E$ describing a set of possible outcomes of $Y_{1:N}$, $$P[Y_{1:N}\in E]=\int_E f(y_{1:N};\theta)\,dy_{1:N}.$$ If the model corresponds to a discrete distribution, then the integral is replaced by a sum and the probability density function is called a probability mass function. The definition of the likelihood function remains unchanged. We will use the notation of continuous random variables, but all the methods apply also to discrete models.
The following schematic shows dependence among variables in a POMP model. Measurements, $Y_n$, at time $t_n$ depend on the state, $X_n$, at that time. State variables depend on state variables at the previous timestep. To be more precise, the distribution of the state $X_{n+1}$, conditional on $X_n$, is independent of the values of $X_k$, $k<n$, and $Y_k$, $k\le n$. Moreover, the distribution of the measurement $Y_n$, conditional on $X_n$, is independent of all other variables.
Let's begin with a special case. Suppose that the unobserved state process is deterministic. That is, $X_n=x_n(\theta)$ is a known function of $\theta$ for each $n$. What is the likelihood?
Since the probability of the observation, $Y_n$, depends only on $X_n$ and $\theta$, and since, in particular, $Y_m$ and $Y_n$ are independent given $X_m$ and $X_n$, we have $$L(\theta)=\prod_n f_{Y_n|X_n}(y^*_n;x_n(\theta),\theta)$$ or $$\ell(\theta)=\log L(\theta)=\sum_n \log f_{Y_n|X_n}(y^*_n;x_n(\theta),\theta).$$ The following diagram illustrates this.
In this diagram, $\hat{y}_n$ refers to the model prediction ($\hat{y}_n=E[Y_n|X_n=x_n(\theta)]$) and $y^*_n$ to the data.
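As a concrete, entirely hypothetical illustration of this special case, suppose the deterministic trajectory were $x_n(\theta)=\theta\,n$ and the measurements were Poisson with mean $x_n(\theta)$; the log likelihood is then just a sum of Poisson log densities along the trajectory. A minimal sketch in R (the trajectory, data, and parameter values here are made up for illustration):
loglik.det <- function (theta, y) {
  xn <- theta*seq_along(y)               ## hypothetical deterministic state trajectory
  sum(dpois(y,lambda=xn,log=TRUE))       ## sum of measurement log densities
}
y.star <- c(3,7,8,14,17)                 ## made-up data
sapply(c(2,3,4),loglik.det,y=y.star)     ## log likelihood at a few values of theta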
For a POMP model, the likelihood takes the form of an integral: $$L(\theta)=P[y^*_{1:N}|\theta]=\int \prod_{n=1}^{N} P[y^*_n|x_n,\theta]\,P[x_n|x_{n-1},\theta]\,dx_1\cdots dx_N.$$ This integral is high dimensional and, except for the simplest cases, cannot be reduced analytically. A general approach to computing such integrals, which will be useful here, is Monte Carlo integration.
[See here for a brief introduction to Monte Carlo methods.]
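The essential idea is the Monte Carlo theorem: an expectation $E[g(X)]$ can be approximated by averaging $g$ over random draws of $X$. A one-line, throwaway illustration in R (not part of the case study):
mean(rnorm(10000)^2)    ## approximates E[X^2] for X ~ Normal(0,1); the true value is 1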
Let us investigate a Monte Carlo integration scheme for the POMP likelihood. In particular, if we propose trajectories of the unobserved state process according to some probabilistic rule, we can generate a large number of these and approximate $L(\theta)$ by its Monte Carlo estimate. Specifically, let us randomly generate $J$ trajectories of length $N$, $x_{n,j}$, $j=1,\dots,J$, $n=1,\dots,N$. Let $w_j$ denote the probability that we propose trajectory $j$. We compute the likelihood of each trajectory, $$L_j(\theta)=\prod_{n=1}^{N} P[y^*_n|x_{n,j},\theta]\,P[x_{n,j}|x_{n-1,j},\theta].$$ Then, by the Monte Carlo theorem, we have $$L(\theta)\approx \frac{1}{J}\sum_{j=1}^{J}\frac{L_j(\theta)}{w_j}.$$
How shall we choose our trajectories? One idea would be to choose them so as to simplify the computation. If we choose them such that $$w_j=\prod_{n=1}^{N} P[x_{n,j}|x_{n-1,j},\theta],$$ then we have $$L(\theta)\approx\frac{1}{J}\sum_{j=1}^{J}\frac{L_j(\theta)}{w_j}=\frac{1}{J}\sum_{j=1}^{J}\frac{\prod_{n=1}^{N} P[y^*_n|x_{n,j},\theta]\,P[x_{n,j}|x_{n-1,j},\theta]}{\prod_{n=1}^{N} P[x_{n,j}|x_{n-1,j},\theta]}=\frac{1}{J}\sum_{j=1}^{J}\prod_{n=1}^{N} P[y^*_n|x_{n,j},\theta].$$
This implies that if we generate trajectories by simulating from the process model, all we have to do is compute the likelihood of the data given each trajectory and then average.
Let’s go back to the boarding school influenza outbreak to see what this looks like in practice. Let’s reconstruct the toy SIR model we were working with.
read.table("http://kingaa.github.io/short-course/stochsim/bsflu_data.txt") -> bsflu
rproc <- Csnippet("
double N = 763;                               // total population size
double t1 = rbinom(S,1-exp(-Beta*I/N*dt));    // new infections (S to I)
double t2 = rbinom(I,1-exp(-mu_I*dt));        // transitions from I to R1
double t3 = rbinom(R1,1-exp(-mu_R1*dt));      // transitions from R1 to R2
double t4 = rbinom(R2,1-exp(-mu_R2*dt));      // exits from R2
S -= t1;
I += t1 - t2;
R1 += t2 - t3;
R2 += t3 - t4;
")
init <- Csnippet("
S = 762;
I = 1;
R1 = 0;
R2 = 0;
")
dmeas <- Csnippet("
lik = dpois(B,rho*R1+1e-6,give_log);   // B ~ Poisson(rho*R1); the 1e-6 guards against a zero mean
")
rmeas <- Csnippet("
B = rpois(rho*R1+1e-6);
")
pomp(subset(bsflu,select=-C),
times="day",t0=0,
rprocess=euler.sim(rproc,delta.t=1/5),
initializer=init,rmeasure=rmeas,dmeasure=dmeas,
statenames=c("S","I","R1","R2"),
paramnames=c("Beta","mu_I","mu_R1","mu_R2","rho")) -> flu
Let’s generate a large number of simulated trajectories at some particular point in parameter space.
simulate(flu,params=c(Beta=3,mu_I=1/2,mu_R1=1/4,mu_R2=1/1.8,rho=0.9),
nsim=5000,states=TRUE) -> x
matplot(time(flu),t(x["R1",1:50,]),type='l',lty=1,
xlab="time",ylab=expression(R[1]),bty='l',col='blue')
lines(time(flu),obs(flu,"B"),lwd=2,col='black')
We can use the function dmeasure to evaluate the log likelihood of the data given the states, the model, and the parameters:
ell <- dmeasure(flu,y=obs(flu),x=x,times=time(flu),log=TRUE,
params=c(Beta=3,mu_I=1/2,mu_R1=1/4,mu_R2=1/1.8,rho=0.9))
dim(ell)
## [1] 5000 14
According to the equation above, we should sum up the log likelihoods across time:
ell <- apply(ell,1,sum)
summary(exp(ell))
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.000e+00 0.000e+00 0.000e+00 2.899e-136 0.000e+00 1.448e-132
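For reference, the naive Monte Carlo estimate of the likelihood is just the average of these per-trajectory likelihoods. A sketch of its log, together with a crude delta-method standard error on the log scale (not part of the original code):
lik.hat <- mean(exp(ell))                  ## naive Monte Carlo likelihood estimate
log(lik.hat)                               ## estimated log likelihood
sd(exp(ell))/sqrt(length(ell))/lik.hat     ## rough standard error on the log scale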
The variability in the individual likelihoods is high and therefore the likelihood estimate is imprecise. We will need many simulations to get an estimate of the likelihood sufficiently precise to be of any use in parameter estimation or model selection.
What is the problem? Essentially, very few of the trajectories pass anywhere near the data and therefore almost all have extremely bad likelihoods. Moreover, once a trajectory diverges from the data, it almost never comes back. While the calculation is “correct” in that it will converge to the true likelihood as the number of simulations tends to ∞, we waste a lot of effort investigating trajectories of very low likelihood. This is a consequence of the fact that we are proposing trajectories in a way that is completely unconditional on the data. The problem will get much worse with longer data sets.
We arrive at a more efficient algorithm by factorizing the likelihood in a different way: $$L(\theta)=P[y^*_{1:N}|\theta]=\prod_n P[y^*_n|y^*_{1:n-1},\theta]=\prod_n \int P[y^*_n|x_n,\theta]\,P[x_n|y^*_{1:n-1},\theta]\,dx_n.$$ Now, the Markov property gives us the Chapman-Kolmogorov equation, $$P[x_n|y^*_{1:n-1},\theta]=\int P[x_n|x_{n-1},\theta]\,P[x_{n-1}|y^*_{1:n-1},\theta]\,dx_{n-1},$$ and Bayes' theorem tells us that $$P[x_n|y^*_{1:n},\theta]=P[x_n|y^*_n,y^*_{1:n-1},\theta]=\frac{P[y^*_n|x_n,\theta]\,P[x_n|y^*_{1:n-1},\theta]}{\int P[y^*_n|x_n,\theta]\,P[x_n|y^*_{1:n-1},\theta]\,dx_n}.$$
This suggests that we keep track of two key distributions. We'll refer to the distribution of $X_n|y^*_{1:n-1}$ as the prediction distribution at time $t_n$ and the distribution of $X_n|y^*_{1:n}$ as the filtering distribution at time $t_n$.
Let's use Monte Carlo techniques to estimate these integrals. Suppose $\{x^F_{n-1,j}\}_{j=1}^{J}$ is a set of points drawn from the filtering distribution at time $t_{n-1}$. The Chapman-Kolmogorov equation above tells us that we obtain a sample $\{x^P_{n,j}\}$ of points drawn from the prediction distribution at time $t_n$ by simply simulating the process model: $$X^P_{n,j}\sim \mathrm{process}(x^F_{n-1,j},\theta),\qquad j=1,\dots,J.$$ Having obtained $\{x^P_{n,j}\}$, we obtain a sample of points from the filtering distribution at time $t_n$ by resampling from $\{x^P_{n,j}\}$ with weights proportional to $P[y^*_n|x_n,\theta]$. The Monte Carlo theorem tells us, too, that the conditional likelihood $$L_n(\theta)=P[y^*_n|y^*_{1:n-1},\theta]=\int P[y^*_n|x_n,\theta]\,P[x_n|y^*_{1:n-1},\theta]\,dx_n \approx \frac{1}{J}\sum_j P[y^*_n|x^P_{n,j},\theta].$$ We can iterate this procedure through the data, one step at a time, alternately simulating and resampling, until we reach $n=N$. The full log likelihood is then approximately $$\ell(\theta)=\log L(\theta)\approx\sum_n \log L_n(\theta).$$ It can be shown that this estimate of the likelihood is unbiased.
This is known as the sequential Monte Carlo algorithm or the particle filter. Key references include Kitagawa (1987), Arulampalam et al. (2002), and the book by Doucet et al. (2001). Pseudocode for the above is provided by King et al. (2016).
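To make the recursion concrete, here is a minimal, hand-rolled bootstrap particle filter for a hypothetical toy model (a Gaussian AR(1) state process observed with Gaussian measurement noise). It is a sketch of the logic above, not the pomp implementation:
set.seed(1)
N <- 50; J <- 1000
x <- numeric(N); xprev <- rnorm(1)            ## simulate a state trajectory from t0
for (n in 1:N) { x[n] <- 0.9*xprev + rnorm(1); xprev <- x[n] }
y <- rnorm(N,mean=x,sd=1)                     ## simulate data from the states
xF <- rnorm(J)                                ## particles at t0 (filtering sample)
loglik <- 0
for (n in 1:N) {
  xP <- 0.9*xF + rnorm(J)                     ## prediction: simulate the process model
  w <- dnorm(y[n],mean=xP,sd=1)               ## weights: measurement density at the data
  loglik <- loglik + log(mean(w))             ## accumulate the conditional log likelihood
  xF <- sample(xP,size=J,replace=TRUE,prob=w) ## filtering: resample with these weights
}
loglik                                        ## Monte Carlo estimate of the log likelihood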
Here, we’ll get some practical experience with the particle filter, and the likelihood function, in the context of our influenza-outbreak case study.
In pomp, the basic particle filter is implemented in the command pfilter. We must choose the number of particles to use by setting the Np argument.
pf <- pfilter(flu,Np=5000,params=c(Beta=3,mu_I=1/2,mu_R1=1/4,mu_R2=1/1.8,rho=0.9))
logLik(pf)
## [1] -217.6103
We can run a few particle filters to get an estimate of the Monte Carlo variability:
pf <- replicate(10,
pfilter(flu,Np=5000,
params=c(Beta=3,mu_I=1/2,mu_R1=1/4,mu_R2=1/1.8,rho=0.9)))
ll <- sapply(pf,logLik)
logmeanexp(ll,se=TRUE)
## se
## -218.609270 1.132332
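Note that logmeanexp averages the replicated estimates on the natural (likelihood) scale, since each pfilter run gives an unbiased estimate of the likelihood itself, not of the log likelihood. Ignoring the jackknife standard error, a numerically stable sketch of the computation it performs:
log_mean_exp <- function (x) {
  m <- max(x)
  m + log(mean(exp(x-m)))     ## subtract the max to avoid overflow/underflow
}
log_mean_exp(ll)              ## should agree with the first element of logmeanexp(ll,se=TRUE)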
It is extremely useful to visualize the geometric surface defined by the likelihood function.
To get an idea of what the likelihood surface looks like in the neighborhood of a point in parameter space, we can construct some likelihood slices. We'll make slices in the $\beta$ and $\mu_I$ directions. Both slices will pass through the central point.
sliceDesign(
center=c(Beta=2,mu_I=1,mu_R1=1/4,mu_R2=1/1.8,rho=0.9),
Beta=rep(seq(from=0.5,to=4,length=40),each=3),
mu_I=rep(seq(from=0.5,to=2,length=40),each=3)) -> p
library(foreach)
library(doParallel)
registerDoParallel()
set.seed(108028909,kind="L'Ecuyer")
foreach (theta=iter(p,"row"),.combine=rbind,
.inorder=FALSE,
.options.multicore=list(set.seed=TRUE)
) %dopar% {
library(pomp)
pfilter(flu,params=unlist(theta),Np=5000) -> pf
theta$loglik <- logLik(pf)
theta
} -> p
Note that we've used the foreach package with the parallel backend (doParallel) to parallelize these computations. To ensure that we have high-quality random numbers in each parallel R session, we use a parallel random number generator (kind="L'Ecuyer", .options.multicore=list(set.seed=TRUE)).
Add likelihood slices along the ρ direction.
Slices offer a very limited perspective on the geometry of the likelihood surface. With just two parameters, we can evaluate the likelihood at a grid of points and visualize the surface directly.
bake(file="flu-grid1.rds",seed=421776444,kind="L'Ecuyer",{
expand.grid(Beta=seq(from=1.5,to=5,length=50),
mu_I=seq(from=0.7,to=4,length=50),
mu_R1=1/4,mu_R2=1/1.8,
rho=0.9) -> p
library(foreach)
library(doParallel)
registerDoParallel()
## Now we do the computation
foreach (theta=iter(p,"row"),.combine=rbind,.inorder=FALSE,
.options.multicore=list(set.seed=TRUE)
) %dopar%
{
library(pomp)
pfilter(flu,params=unlist(theta),Np=5000) -> pf
theta$loglik <- logLik(pf)
theta
}
})-> p
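The surface plot itself is not shown here; a sketch of one way to produce it, assuming ggplot2 is available (log likelihoods far below the maximum are set to NA, which ggplot2 renders in grey by default):
library(ggplot2)
pp <- p
pp$loglik[pp$loglik < max(pp$loglik,na.rm=TRUE)-50] <- NA   ## grey out very low values
ggplot(pp,aes(x=Beta,y=mu_I,fill=loglik))+
  geom_tile()+
  labs(x=expression(beta),y=expression(mu[I]),fill="log lik")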
In the resulting plot, all points with log likelihoods more than 50 units below the maximum are shown in grey.
Compute a slice of the likelihood in the $\beta$-$\rho$ plane.
Call the whole parameter space $\Theta$. Let $\Theta^*$ be a subset of $\Theta$, constraining parameters according to a scientific hypothesis of interest. For example, in a disease transmission model, $\Theta^*$ could assert that the probability of a case being reported is $\rho=0.8$.
Determine the size of AIC as a hypothesis test for nested hypotheses with $d=1$ in a regular parametric situation.
We define maximum likelihood estimates (MLEs) $\hat\theta$ and $\hat\theta^*$, over $\Theta$ and $\Theta^*$ respectively, such that $\ell(\hat\theta)=\ell_\mathrm{max}$ and $\ell(\hat\theta^*)=\ell_\mathrm{max}^*$.
The likelihood ratio test with $d=1$ gives a good way to construct confidence intervals. Suppose we are interested in a specific parameter, $\theta_k$, and we want to consider whether the data support the possibility that $\theta_k=\theta_k^*$ in the absence of assumptions on the other parameters. We can then take $\Theta^*$ to be the subset of $\Theta$ satisfying $\theta_k=\theta_k^*$. Using the chi-square approximation to the likelihood ratio statistic, a 95% confidence interval for $\theta_k$ consists of all the values $\theta_k^*$ for which $\ell_\mathrm{max}-\ell_\mathrm{max}^* < 1.92$.
A way to visualize the information about a specific parameter $\theta_k$ is via the profile likelihood function, defined as $$\ell_\mathrm{profile}(\theta_k^*) = \max\{\ell(\theta): \theta_k=\theta_k^*\}.$$ We then plot $\ell_\mathrm{profile}(\theta_k)$ against $\theta_k$.
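As a rough illustration using the grid of likelihood evaluations computed above (maximizing over $\mu_I$ only, with the other parameters held fixed, so this is only a caricature of a genuine profile), one might sketch a profile for $\beta$ and apply the 1.92-unit cutoff as follows:
prof <- aggregate(loglik~Beta,data=p,FUN=max)           ## maximize over mu_I at each Beta
plot(loglik~Beta,data=prof,type='l',ylab="profile log likelihood")
abline(h=max(prof$loglik)-1.92,lty=2)                   ## likelihood-ratio cutoff
range(prof$Beta[prof$loglik > max(prof$loglik)-1.92])   ## approximate 95% CI for Beta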
In the toy example we've been working with, the default parameter set is not particularly close to the MLE. One way to find the MLE is to try optimizing the estimated likelihood directly. There are of course many standard optimization algorithms we might use for this. However, three issues arise immediately:
1. The particle filter gives us only a noisy, Monte Carlo estimate of the likelihood.
2. Because the estimated likelihood surface is rough, optimizers that rely on derivatives are problematic.
3. Most of the parameters are constrained (the rates must be positive and $\rho$ must lie in $[0,1]$), whereas many standard optimizers work on unconstrained spaces.
Here, let's opt for deterministic optimization of a rough function. We'll try using optim's default method, Nelder-Mead, fixing the random-number generator seed to make the likelihood calculation deterministic. Since Nelder-Mead is an unconstrained optimizer, we must transform the parameters. The following Csnippets encode an appropriate transformation and its inverse, and introduce them into the pomp object.
toEst <- Csnippet("
TBeta = log(Beta);
Tmu_R1 = log(mu_R1);
Tmu_I = log(mu_I);
Trho = logit(rho);
")
fromEst <- Csnippet("
TBeta = exp(Beta);
Tmu_I = exp(mu_I);
Tmu_R1 = exp(mu_R1);
Trho = expit(rho);
")
pomp(flu,toEstimationScale=toEst,
fromEstimationScale=fromEst,
paramnames=c("Beta","mu_I","mu_R1","rho")) -> flu
Let's fix a reference point in parameter space and insert these parameters into the pomp object:
coef(flu) <- c(Beta=2,mu_I=1,mu_R1=512/sum(bsflu$B),mu_R2=512/sum(bsflu$C),rho=0.9)
The following constructs a function returning the negative log likelihood of the data at a given point in parameter space. The parameters to be estimated are named in the est argument. Note how the freeze function is used to fix the seed of the RNG. Note, too, how this function returns a large (and therefore bad) value when the particle filter encounters an error. This behavior makes the objective function more robust.
neg.ll <- function (par, est) {
  ## start from the current parameter vector, on the estimation scale
  allpars <- coef(flu,transform=TRUE)
  allpars[est] <- par
  try(
    freeze(
      pfilter(flu,params=partrans(flu,allpars,dir="fromEst"),
              Np=2000),
      seed=915909831         ## fixed RNG seed makes the likelihood evaluation deterministic
    )
  ) -> pf
  ## return a large penalty if pfilter fails; otherwise the negative log likelihood
  if (inherits(pf,"try-error")) 1e10 else -logLik(pf)
}
Now we call optim to minimize this function:
## use Nelder-Mead with fixed RNG seed
fit <- optim(
par=c(log(2), log(1), log(0.9/(1-0.9))),
est=c("Beta","mu_I","rho"),
fn=neg.ll,
method="Nelder-Mead",
control=list(maxit=400,trace=0)
)
mle <- flu
coef(mle,c("Beta","mu_I","rho"),transform=TRUE) <- fit$par
coef(mle)
## Beta mu_I mu_R1 mu_R2 rho
## 3.5599549 1.7619645 0.3324675 0.5541126 0.8841778
fit$val
## [1] 72.23701
lls <- replicate(n=5,logLik(pfilter(mle,Np=20000)))
ll <- logmeanexp(lls,se=TRUE); ll
## se
## -75.3770485 0.5121744
We plot some simulations at these parameters.
simulate(mle,nsim=10,as.data.frame=TRUE,include.data=TRUE) -> sims
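The plotting command is not shown here; a sketch in base graphics, assuming the time and sim columns that simulate(...,as.data.frame=TRUE,include.data=TRUE) returns (sim takes the value "data" for the data rows):
dat <- subset(sims,sim=="data")
plot(B~time,data=dat,type='l',lwd=2,col='blue',bty='l',
     xlab="day",ylab="B",ylim=range(sims$B,na.rm=TRUE))
for (s in setdiff(unique(sims$sim),"data"))
  lines(B~time,data=subset(sims,sim==s),col='red')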
The data are shown in blue. The 10 simulations are shown in red.
Construct likelihood slices through the MLE we just found.
Evaluate the likelihood at points on a grid lying in a 2D slice through the MLE we found above. Each group should choose a different slice. Afterward, we’ll compare results across groups.
The search of parameter space we conducted above was local. It is possible that we found a local maximum, but that other maxima exist with higher likelihoods. Conduct a more thorough search by initializing the Nelder-Mead starting points across a wider region of parameter space. Do you find any other local maxima?
The Poisson measurement model used here may not seem easy to interpret. Formulate an alternative measurement model, maximize the likelihood under it, and compare the results with those obtained under the Poisson model.
Try to estimate $\beta$, $\mu_I$, $\rho$, and $\mu_{R1}$ simultaneously. Does your estimate of $\mu_{R1}$ differ from the value we computed from the raw data? How do you interpret the agreement or lack thereof?
When we write down a mechanistic model for an epidemiological system, we have some idea of what we intend the parameters to mean: a reporting rate, a contact rate between individuals, an immigration rate, a duration of immunity, etc.
Arulampalam, M. S., S. Maskell, N. Gordon, and T. Clapp. 2002. A tutorial on particle filters for online nonlinear, non-Gaussian Bayesian tracking. IEEE Transactions on Signal Processing 50:174–188.
Doucet, A., N. de Freitas, and N. Gordon, editors. 2001. Sequential Monte Carlo Methods in Practice. Springer-Verlag, New York.
Fisher, R. A. 1922. On the mathematical foundations of theoretical statistics. Philosophical Transactions of the Royal Society of London, Series A 222:309–368.
King, A. A., D. Nguyen, and E. L. Ionides. 2016. Statistical inference for partially observed Markov processes via the R package pomp. Journal of Statistical Software 69:1–43.
Kitagawa, G. 1987. Non-Gaussian state-space modeling of nonstationary time series. Journal of the American Statistical Association 82:1032–1041.
Pawitan, Y. 2001. In All Likelihood: Statistical Modelling and Inference Using Likelihood. Clarendon Press, Oxford.