AML 612 Spring 2019: List of modules

Computational and statistical methods for mathematical biologists and epidemiologists.

Objectives:

This course is meant to provide students in applied mathematics with the broad skill set needed to fit the parameters of dynamical mathematical models to relevant biological or epidemiological data. The course will be almost entirely based on material posted on this website.

Student's t- and z-tests of sample means, and ANOVA to compare multiple means

In this module we will discuss how to conduct one-sample and two-sample Student's t-tests of sample means when the sample variance is unknown, how to use ANOVA to test the equality of the means of several samples, and how to conduct z-tests of sample means when the variance is known.
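As a preview, each of these tests is essentially a one-liner in base R. Here is a minimal sketch using simulated data (the group labels, sample sizes, and parameter values are purely illustrative):

set.seed(42)
a <- rnorm(30, mean = 5.0, sd = 1)   # sample A
b <- rnorm(30, mean = 5.5, sd = 1)   # sample B
d <- rnorm(30, mean = 6.0, sd = 1)   # sample C

t.test(a, mu = 5)               # one-sample Student's t-test of H0: mean = 5
t.test(a, b, var.equal = TRUE)  # two-sample t-test with pooled variance

# ANOVA to test the equality of the three group means
dat <- data.frame(y = c(a, b, d),
                  grp = factor(rep(c("A", "B", "C"), each = 30)))
summary(aov(y ~ grp, data = dat))

# z-test of the mean of sample A, when the population sd (here, 1) is known
z <- (mean(a) - 5)/(1/sqrt(length(a)))
2*pnorm(-abs(z))                # two-sided p-value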


Introduction to hypothesis testing

In this course we will be learning how to formulate figure-of-merit statistics that can help to answer research questions like “Is quantity A significantly greater/less than quantity B?”, or “Does quantity X appear to be significantly related to quantity Y?”. As we are about to discuss, statistics that can be used to answer these types of questions do so using the underlying probability distribution of the statistic. Every statistic used for hypothesis testing has an underlying probability distribution.
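For example, for a statistic whose distribution under the null hypothesis is the standard Normal (a z statistic), the p-value calculation in R is simply:

z <- 2.1           # say we observed z = 2.1
2*pnorm(-abs(z))   # two-sided p-value, approximately 0.036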


AML 612 Spring 2018: list of modules

Statistical Methods for Students in the Life and Social Sciences

(aka: How to be a Data Boss)

Objectives:

This course is meant to introduce students in the life and social sciences to the skill set needed to do well-executed and well-explicated statistical analyses.  The course is aimed at students with little prior experience in statistical analyses, but prior exposure to “stats 101”-type courses is helpful.  The course will be almost entirely based on material posted on this website.  The course syllabus can be found here. There is no textbook for this course, but recommended reading is How to Lie with Statistics by Darrell Huff, Statistical Data Analysis by Glen Cowan, and Applied Linear Statistical Models by Kutner et al. (it doesn’t really matter which edition).

Sunspot activity and influenza pandemics: a statistical assessment of the purported association

The R code associated with the analysis presented in the paper Sunspot activity and influenza pandemics: a statistical assessment of the purported association is in sunspot_analysis.R

The file reads in the data files summary_pandemic_data.txt and sunspot_wolf_and_group_1700_to_2014.txt

The R script produces the following plot, shown in the paper:

[Figure: sunspot_pandemic_analysis]

Combining statistical and mathematical modelling for fun (and occasional profit)

In this talk, I’ll discuss several past analyses I’ve done with collaborators where we have combined statistical and mathematical modelling methods to explore some interesting research questions.

I’m a statistician, and I also have a PhD in experimental particle physics.  Research in experimental particle physics can involve complex models of observable physical processes, and fitting those models to experimental data is a common task in that field.  Like the field of applied mathematics in the life and social sciences (AMLSS), the models being fit often have no analytic solution, and must be solved numerically using specialized methods.  When I entered the field of AMLSS back in 2009, I had a lot to learn about the models and methodologies commonly used in this field, but I already had a solid tool box of specialized skills for connecting mathematical models to data.  Those skills have turned out to be remarkably useful for exploring a wide range of research questions in the life and social sciences that I find interesting, and I also apply them in consulting projects I do.

First off: what is the difference between statistical and mathematical modelling, anyway?

The difference between statistical and mathematical models is oftentimes confusing to people.  In this past module on this site, I discuss the differences, using an analysis of seasonal and pandemic influenza as an example.

Example of an analysis combining statistical and mathematical modelling: Mathematical and statistical modelling of the contagious spread of panic in a population

During the 2014 Ebola outbreak, there were a total of five cases that were ultimately identified in America, compared to tens of thousands of cases in West Africa.  Even though the “outbreak” in America was essentially non-existent, once the first case was identified in the US in autumn 2014, the media shifted into 24/7 coverage of the supposed dire threat Ebola presented to Americans, complete with scary imagery.

[Image: ebola_here]

In autumn 2014, I was teaching a course in the ASU AMLSS graduate program on statistical methods for fitting the parameters of mathematical models to data.  Each year when I teach AML classes, I usually try to have a “class publication project” that encompasses the methodology I teach in the class.  In this case, I thought it might be interesting to try to model the spread of Ebola-related panic in the US population, as expressed on social media, and explore the role the news media might play in that.

The class did the analysis as a homework assignment, and we wrote the paper together, which was published in 2015.  The paper received national media attention when it came out.

First: why was this analysis important?  Well, it has been suggested in the past that social media chatter about a particular disease might be used as a real-time means to assess the temporal and geospatial spread of the disease in the population, rather than relying on slower traditional surveillance methods, which can suffer from backlogs in laboratory testing.  For instance, tracking influenza, or cholera:

[Figure: comparison_twitter_cholera_data]

However, up until the US Ebola “outbreak”, the problem was that it was impossible to say whether people were discussing a disease on social media merely because they were worried about it, rather than because they actually had it.  During the Ebola outbreak, pretty much no one in the US actually had it, so everyone who was talking about it was doing so because they were concerned about it.  This gave us the perfect opportunity to gauge what kind of temporal patterns we might see in social media chatter due simply to panic or concern about a disease!

The data we used in the study were the daily numbers of news stories about Ebola from large national news outlets.  We also obtained Twitter data related to Ebola, and US Google search data with Ebola-related search terms, including “do I have Ebola?”, from Google Trends. Here is what the data looked like:

[Figure: plos_ebola_analysis_data]

We came up with a model that related the number of news videos, V, to the number of people, I, “infected” with the idea to tweet about Ebola, or do a Google search related to Ebola:

[Figure: plos_ebola_media_model]
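As a rough illustration of the structure described just below (an SIR-like model in which the news volume V(t) drives the “infection”), here is a sketch in R.  This is my schematic reading of the model, not necessarily the exact equations in the paper, and the parameter values and news time series are made up:

library(deSolve)

# S = susceptible to the idea, I = "infected" with the idea to tweet,
# R = bored ("recovered and immune"); V(t) = news stories at time t
media_model <- function(t, y, parms) {
  with(as.list(c(y, parms)), {
    V  <- news(t)
    dS <- -beta*V*S
    dI <-  beta*V*S - gamma*I
    dR <-  gamma*I
    list(c(dS, dI, dR))
  })
}

# hypothetical news time series, interpolated to a function of time
news <- approxfun(0:60, 50*exp(-(0:60 - 10)^2/50), rule = 2)

out <- ode(y = c(S = 1e6, I = 0, R = 0), times = 0:60, func = media_model,
           parms = c(beta = 5e-6, gamma = 1/3))  # illustrative values only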

The parameter beta is a measure of how many tweets (or Google searches) per person per unit time one news story would inspire, and gamma parameterizes the “boredom” effect, through which people eventually move to a “recovered and immune” class, after which they never tweet again about Ebola no matter how many Ebola-related news stories they are exposed to.  Using the statistical methodology taught in the AML course, the students fit the parameters of that model to data, and obtained the following model predictions, shown in red:

[Figure: plos_ebola_results]

The blue lines on the plot represent a plain statistical model that simply regresses the Twitter and Google search data on the news media data, without taking into account the “boredom” effect.  Can you see that the regression fits are systematically too high early on, and systematically too low later for all the plots, but the same is not true of our mathematical model?  That tells us that our mathematical model that includes boredom really does do a better job of describing the dynamics of people’s Ebola-related social media behaviours!
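For reference, the blue lines are just a simple linear regression, which, with hypothetical daily count vectors tweets and news_stories, would look something like the following; plotting the residuals over time makes a systematic early-high/late-low pattern easy to spot:

fit <- lm(tweets ~ news_stories)  # regress tweet counts on news story counts
plot(residuals(fit), type = "b", xlab = "day", ylab = "residual")
abline(h = 0, lty = 2)  # runs above then below zero indicate missing dynamics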

We found that each Ebola-related news story inspired on average thousands of tweets and Google searches. Also, on average, we found people were only interested enough to tweet or do a Google search for a few days after seeing a news story about Ebola, before they became bored with the topic:

[Figure: fitted parameter estimates]

We couldn’t have done this analysis without both mathematical modelling and statistical methods; it was a nice “bringing together” of the methodologies to explore an interesting research question.

Another example of an analysis that involved mathematical and statistical modelling methods: contagion in mass killings and school shootings

In January 2014, there was a shooting at Purdue University, where one student entered a classroom and shot another student dead, then walked out and waited for police to arrest him.

[Image: purdue_shooting]

At the time, it struck me that it was the third school shooting I had heard about in an approximately 10-day period.  Even for the United States, which has a serious problem with firearm violence compared to other first-world countries, this seemed like an unusual number in such a short period of time.

It led me to wonder if perhaps contagion was playing a role in these dynamics.  Certainly, it had been noted in the past that suicide appears to be contagious; for example, in high schools where there is one suicide, an ensuing cluster of suicides is statistically more likely.  And the “copycat” effect in mass killings has long been suspected.  I wondered if a mathematical model of contagion might be used to help quantify whether or not mass killings and school shootings are contagious.  So, I talked with some colleagues:

[Image: plos_mass_killing_analysis_team]

And we decided to use a mathematical model known as a Hawkes point process “self-excitation” model to simulate the potential dynamics of contagion in mass killings.  The idea behind the model is quite simple: there is a baseline probability (which may or may not depend on time) that a mass killing occurs by mere random chance (the dotted line, below).  But if a mass killing does occur, contagion temporarily raises the probability that a similar event will occur in the near future.  That excess probability decays exponentially:

[Figure: hawkes_point_process]

Contagion would thus manifest itself in data as an unusual “bunching together in time” of events, compared to what you would expect from the baseline probability alone.

Here’s our (blurry) model:

[Figure: plos_mass_killing_model]

The parameters of the model were Texcite, the average length of the excitation period, and Nsecondary, the average number of new mass killings inspired by one mass killing.  N_0(t) was the baseline probability for mass killings to occur.  We used statistical modelling methods to estimate N_0(t).
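To make the self-excitation idea concrete, here is a rough R sketch of a Hawkes intensity with an exponentially decaying excitation kernel, using the parameter names above.  For simplicity the baseline N_0 is taken to be constant here, and the parameter values are illustrative, not our fitted values:

# intensity (expected events per unit time) at time t, given past events
hawkes_intensity <- function(t, event_times, N0, Nsecondary, Texcite) {
  past <- event_times[event_times < t]
  N0 + sum((Nsecondary/Texcite)*exp(-(t - past)/Texcite))
}

# simulate event times by thinning; the bound is valid because the
# intensity only decays between events
simulate_hawkes <- function(Tmax, N0, Nsecondary, Texcite) {
  events <- numeric(0)
  t <- 0
  while (t < Tmax) {
    lambda_bar <- hawkes_intensity(t, events, N0, Nsecondary, Texcite) +
                  Nsecondary/Texcite
    t <- t + rexp(1, rate = lambda_bar)
    if (t < Tmax &&
        runif(1) < hawkes_intensity(t, events, N0, Nsecondary, Texcite)/lambda_bar)
      events <- c(events, t)
  }
  events
}

set.seed(1)
ev <- simulate_hawkes(Tmax = 365, N0 = 0.05, Nsecondary = 0.3, Texcite = 14)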

We needed data in order to fit the parameters of our model. From USA Today we obtained data on mass killings (four or more people killed), and from the Brady Campaign to Prevent Gun Violence, we obtained data on school shootings, and data on mass shootings (three or more people shot, not necessarily killed).  Mass shootings happen very frequently in the US!

[Figure: plos_mass_killing_usa_today_data]

[Figure: plos_mass_killing_brady_campaign_data]

We compared how well the Hawkes model fit the data relative to a model that didn’t include self-excitation.  If contagion is present, the former will fit the data significantly better.
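Schematically, the comparison can be done with a likelihood ratio.  Here is a sketch reusing hawkes_intensity() and the simulated events ev from above; in the real analysis the parameters of both models are of course fitted to the data rather than plugged in by hand:

# Hawkes log-likelihood: sum of log-intensities at the events, minus the
# integral of the intensity (which has a closed form for this kernel)
hawkes_loglike <- function(event_times, Tmax, N0, Nsecondary, Texcite) {
  log_lam  <- sapply(event_times, function(t)
    log(hawkes_intensity(t, event_times, N0, Nsecondary, Texcite)))
  integral <- N0*Tmax + Nsecondary*sum(1 - exp(-(Tmax - event_times)/Texcite))
  sum(log_lam) - integral
}

# Nsecondary = 0 reduces the model to a plain Poisson process (no contagion)
ll_null   <- hawkes_loglike(ev, 365, N0 = length(ev)/365,
                            Nsecondary = 0, Texcite = 14)
ll_hawkes <- hawkes_loglike(ev, 365, N0 = 0.05,
                            Nsecondary = 0.3, Texcite = 14)
2*(ll_hawkes - ll_null)   # large values favour the self-excitation model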

The fit results were:

[Figure: plos_mass_killings_results]

Both mass killings and school shootings appear to be significantly contagious!  And the length of the contagion period is on average around two weeks for both.

Mass shootings with three or more people shot, but fewer than four people killed, were not contagious, though.

Why?  Well, mass shootings with low death counts happen on average more than once a week in the US.  They happen so often that they rarely make it past the local news.  In contrast, mass shootings with high death counts, and school shootings, usually get national and even international media attention.  It may well be that widespread media attention is the vector for the contagion.

Again, this was an analysis that was made possible through the marriage of mathematical and statistical modelling methods.

Statistical and mathematical modelling skills on the job market

Quantitative and predictive analytics is a field that is growing very quickly.  Statistical methods and data mining (“big data”) play a large role in predictive analytics, but the power of mathematical models is more and more being recognized as having some advantages over statistical models alone, because mathematical models do not simply assume an “X causes Y” relationship, but instead can describe the complex dynamics of interacting systems.  Having a tool box of skills that includes expertise in both mathematical and statistical modelling can lead to many interesting career opportunities, including consulting.


Incorporating prior belief of parameter estimates into model fits to data

Incorporating parameter prior information into the fit

For many models, information about the parameters and/or initial conditions can be obtained from other studies. For instance, let’s examine the seasonal influenza SIR model we have used as an example in several other modules.  Our data were influenza incidence data from an influenza epidemic in the Midwest, and we fit the transmission rate, beta (or alternatively, R0 = beta/gamma), of an SIR model to these data, using, for example, the R script fit_midwest_negbinom_gamma_fixed.R

The script performs a negative binomial likelihood fit to the influenza data, assuming that the average recovery period, 1/gamma, for flu is fixed at 4.8 days.  The script produces the following plot (recall that alpha is the over-dispersion parameter for the negative binomial likelihood, and t0 is the time of introduction of the virus to the population):

[Figure: fit_flu_fixed_gamma_nb]
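This is not the script itself, but a condensed sketch of the kind of calculation it performs: solve the SIR model, then score the predicted incidence against the observed counts with a negative binomial likelihood whose over-dispersion parameter is alpha (variance = mu + alpha*mu^2).  The population size and the way the incidence is extracted are illustrative assumptions:

library(deSolve)

sir <- function(t, y, parms) {
  with(as.list(c(y, parms)), {
    dS <- -beta*S*I/N
    dI <-  beta*S*I/N - gamma*I
    dR <-  gamma*I
    list(c(dS, dI, dR))
  })
}

# obs = observed incidence at the observation times; assumes t0 < times[1]
negloglike_sir <- function(obs, times, R0, t0, alpha, gamma = 1/4.8, N = 1e6) {
  beta <- R0*gamma
  out  <- ode(y = c(S = N - 1, I = 1, R = 0), times = c(t0, times),
              func = sir, parms = c(beta = beta, gamma = gamma, N = N))
  mu   <- pmax(-diff(out[, "S"]), 1e-10)  # predicted incidence between times
  -sum(dnbinom(obs, size = 1/alpha, mu = mu, log = TRUE))
}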

The script gives the best-fit estimate using the graphical Monte Carlo fmin+1/2 method, and also the weighted mean method.  Note that the plots should be much better populated in order to really get trustworthy estimates from the fmin+1/2 method.
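For concreteness, here is a sketch of what the graphical Monte Carlo and weighted mean calculations might look like, reusing negloglike_sir() from the sketch above (obs and times are your data, and the sampling ranges are illustrative):

nsamp <- 10000   # in practice use far more, to populate the plots well
R0    <- runif(nsamp, 1.0, 3.0)
t0    <- runif(nsamp, -30, 0)
alpha <- runif(nsamp, 0.0001, 0.5)
nll   <- mapply(function(r, t, a) negloglike_sir(obs, times, r, t, a),
                R0, t0, alpha)

# fmin+1/2: the one-sigma range of R0 is spanned by the points whose
# negative log-likelihood lies within 1/2 of the minimum
range(R0[nll <= min(nll) + 0.5])

# weighted mean method: weight each sampled point by its likelihood
w   <- exp(-(nll - min(nll)))
wm  <- weighted.mean(R0, w)
wsd <- sqrt(sum(w*(R0 - wm)^2)/sum(w))   # estimate and its uncertainty

# the same weights also give the covariance, and hence correlation, matrix
V <- cov.wt(cbind(R0, t0, alpha), wt = w)$cov
cov2cor(V)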

In reality, most parameters that we obtain from prior studies aren’t known to perfect precision

In our script above, we assumed that 1/gamma was 4.8 days, based on a prior study in the literature. However, this was estimated from observational studies of sick people, and, in reality, there are statistical uncertainties associated with that estimate.  In the paper describing the studies, they state that their central estimate and 95% confidence interval on 1/gamma was 4.80 [4.31, 5.29] days.  Unless told otherwise in the paper from which you get an estimate, you can assume that the uncertainty on the parameter is Normally distributed.  Because the 95% CI spans +/-1.96*sigma about the mean, this implies that the standard deviation of the Normal distribution is sigma = (4.80-4.31)/1.96 ~ 0.25 days

Thus, our probability distribution for x=1/gamma is

P(x|mu,sigma)~exp(-0.5(x-mu)^2/sigma^2)

with mu=4.8 days, and sigma = 0.25 days, in this case.
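Worked through in R:

sigma <- (4.80 - 4.31)/1.96     # ~0.25 days
prior <- function(x, mu = 4.80, sigma = 0.25)
  exp(-0.5*(x - mu)^2/sigma^2)  # unnormalised Normal prior for x = 1/gamma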


Uncertainty on “known” parameters affects the uncertainties on the other model parameters you estimate from your fit to data!

The uncertainty on 1/gamma will affect the uncertainty on our parameter estimates.  For instance, it should be clear that if all we knew about 1/gamma was that it was somewhere between 1 and 50 days, it would be much harder to pin down our transmission rate (i.e., if we had no idea what 1/gamma was, we would have to fit for gamma, as well as R0, t0, and alpha).  The script fit_midwest_negbinom_gamma_unconstrained.R does this, and produces the following plot:

[Figure: fit_flu_unconstrained_gamma_nb]

You can see that the influenza data we have perhaps give us a little bit of sensitivity to the parameter gamma, but not much (basically, the fit just tells us 1/gamma is somewhere between 2 and 6 days, with 95% confidence).  The uncertainties on our estimates of R0 and t0 have gone way up, compared to the first fit where we assumed 1/gamma was fixed at 4.8 days!  Also, when you are using the weighted mean method to estimate parameters and the parameter uncertainties, you can get the covariance matrix for your parameter estimates.  The correlation matrix, derived from the covariance matrix, for this fit looks like this:

[Figure: fit_flu_unconstrained_gamma_covariance_matrix]

Notice that our estimates of R0 and 1/gamma are almost 100% correlated (this means that as 1/gamma goes up, R0 also has to go up to achieve a good fit to the data).  You never want to see parameters so highly correlated in fits you do… it means that your best-fit parameters likely won’t give you a model with good predictive power for a different, equivalent data set (say, influenza data for the same region for the next flu season).

So, even though we seem to have a little bit of sensitivity to the value of 1/gamma in our fit, having that estimate 100% correlated with our estimate of R0 is not good, and is a sign that you shouldn’t trust the results of the fit.

Incorporating uncertainties on “known” parameters from the literature in our fit likelihoods

In order to take into account the uncertainties on our “known” parameter, x, you simply modify your fit likelihood to include the likelihood coming from the probability distribution for that parameter.  Thus, the negative log likelihood is modified like so:

negloglike = negloglike + 0.5*(x-mu)^2/sigma^2

Then, in the fit, you do Monte Carlo sampling not only of all your other unknown parameters (like R0, t0, and alpha in this case), but you also uniformly randomly sample the parameter x over a range of approximately mu-4*sigma to mu+4*sigma.
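In code, reusing negloglike_sir() from the earlier sketch, the modification is one extra term and one extra sampled parameter:

# penalised negative log-likelihood, with x = 1/gamma as a fit parameter
negloglike_constrained <- function(obs, times, R0, t0, alpha, x,
                                   mu = 4.80, sigma = 0.25) {
  negloglike_sir(obs, times, R0, t0, alpha, gamma = 1/x) +
    0.5*(x - mu)^2/sigma^2
}

x <- runif(nsamp, 4.80 - 4*0.25, 4.80 + 4*0.25)  # sample 1/gamma too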

For 1/gamma, we know that mu=4.8 days, and sigma is 0.25 days.  The R script fit_midwest_negbinom_gamma_constrained.R modifies the likelihood to take into account our probability distribution for our prior estimate of 1/gamma from the literature.  The script produces the following plot:

[Figure: fit_flu_constrained_gamma_nb]

(again, for the fmin+1/2 method, we’d like to see these plots much better populated!).  Note that now our uncertainties on R0 and t0 from the weighted mean method are much smaller than they were when 1/gamma was completely unconstrained, but larger than they were when 1/gamma was fixed to 4.8 days.  By modifying the likelihood to take into account the probability distribution of our prior belief for 1/gamma, we now have a fit that properly feeds that uncertainty into our uncertainty on R0 and t0.

When publishing analyses that involve fits like these, it is important to take into account your prior belief probability distributions for the parameter estimates you take from the literature.  In some cases, your fit might be quite sensitive to the assumed values of those parameters; if the literature estimates are a bit off from what your data would “like” them to be to obtain a good fit, and you just assume a fixed central value for the parameter, sometimes you just won’t be able to get a good fit to your data.

When you include the parameter in your fit with a likelihood modified to take into account its prior belief probability distribution, the estimate you get from the fit to your data is known as the “posterior” estimate.  Note that the posterior estimate and uncertainty on 1/gamma that we obtained from fit_midwest_negbinom_gamma_constrained.R is 4.798+/-0.247, which is pretty darn close to our prior belief estimate of 4.8+/-0.25.  If our data were sensitive to the value of 1/gamma, our posterior estimate would have a smaller uncertainty than the prior belief estimate, and likely a different central value too.