How to make mathematical models (even at home)

As a WordPress blogger, I get a handy list of search terms that have led people to my blog. A particularly memorable search term that showed up on my feed was ‘how to make mathematical models at home’. What I liked about this query was that it suggests mathematical modelling as a recreational hobby: at home, in one’s spare time; just for fun. This speaks to an under-appreciated quality of mathematical modelling – that it’s really quite accessible once the core principles have been mastered.

To get started, I would suggest any of the following textbooks*:

Now, I know, you want to make your own mathematical model, not just read about other people’s mathematical models in a textbook. To start down this road, I think you should pay attention to two things:

  • How to make a diagram that represents your understanding of how the quantities you want to model change and interact; and
  • How to develop a basic knowledge of the classic models in ecology, evolution and epidemiology, including an understanding of what these models assume.

This would correspond to reading Chapters 2 and 3 of A Biologist’s Guide to Mathematical Modeling.

A good way to start developing your own model is to identify the ‘classic model’ that is closest to the particular problem you want to look at. If you’re interested in predator-prey interactions, this is the Lotka-Volterra model; if you’re asking a question about disease spread, then you need to read about Kermack and McKendrick and the SIR model. Whatever your question, it should fall within one of the basic types of biological interactions, and the corresponding classic model is then the starting point for developing your mathematical model. From there, the next step is to think about how the classic model you’ve chosen should be made more complicated (but not too complicated!) so that your extended model best captures the nuances of your particular question.
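For example, here is a minimal sketch of the Kermack-McKendrick SIR model solved numerically in Python (the parameter values and time span are made up purely for illustration):

    # Minimal SIR sketch (illustrative parameter values only).
    # S: susceptible, I: infectious, R: recovered (fractions of the population);
    # beta: transmission rate, gamma: recovery rate.
    import numpy as np
    from scipy.integrate import solve_ivp

    def sir(t, y, beta, gamma):
        S, I, R = y
        return [-beta * S * I,             # new infections (mass-action)
                beta * S * I - gamma * I,  # infections minus recoveries
                gamma * I]                 # recoveries

    beta, gamma = 0.5, 0.1                 # made-up values
    sol = solve_ivp(sir, [0, 160], [0.99, 0.01, 0.0],
                    args=(beta, gamma), dense_output=True)
    t = np.linspace(0, 160, 400)
    S, I, R = sol.sol(t)
    print("Peak prevalence:", I.max())

Extending this toward your own question then amounts to adding, carefully and one at a time, the terms or compartments that your diagram says are missing.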

Remember that the classic model usually represents the simplest model that will be appropriate, and only in rare circumstances might you be able to justify using a simpler one. For example, if the level of predation or disease spread in your population of interest is very low, then you might be able to use a model of single-species population growth (exponential/logistic/Ricker) instead of the Lotka-Volterra or SIR models. However, if predation or disease spread is negligible, then it arguably wasn’t appropriate to call your problem ‘predator-prey’ or ‘disease spread’ in the first place. Almost by definition, it’s usually not possible to go much simpler than the dynamics represented by the appropriate classic model.
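For a sense of what that ‘simpler’ end of the spectrum looks like, here is a sketch of the three single-species growth models mentioned above, again with placeholder parameter values:

    # Single-species growth models (placeholder parameter values).
    import numpy as np

    r, K, N0, T = 0.2, 100.0, 5.0, 50      # growth rate, carrying capacity,
                                           # initial size, number of time steps
    t = np.arange(T)

    # Exponential and logistic growth evaluated at integer times.
    exponential = N0 * np.exp(r * t)
    logistic = K / (1 + (K / N0 - 1) * np.exp(-r * t))

    # Ricker map: discrete-time growth with overcompensation.
    ricker = np.empty(T)
    ricker[0] = N0
    for i in range(1, T):
        ricker[i] = ricker[i - 1] * np.exp(r * (1 - ricker[i - 1] / K))

    print(exponential[-1], logistic[-1], ricker[-1])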

That should get you started. You can do this at the university library. You can do this for a project for a class. And, yes, you can even do this at home!

Footnotes:

*For someone with a background in mathematics, some excellent textbooks are:

but while the above textbooks will give you a better understanding of how to perform model analysis, the ‘For Biologists’ textbooks listed in this post are still the recommended reading for learning about model derivation and interpretation.

Testing mass-action

UPDATE: I wrote this post discussing that I don’t really know the justification for the law of mass action; however, comments from Martin and Helen suggest that a derivation is possible using moment closure/mean-field methods. I recently found this article:

Use, misuse and extensions of “ideal gas” models of animal encounter. JM Hutchinson, PM Waser. 2007. Biological Reviews. 82:335-359.

I haven’t had time to read it yet, but from the title it certainly sounds like it answers some of my questions.

——————–

Yesterday, I came across this paper from PNAS: Parameter-free model discrimination criterion based on steady-state coplanarity by Heather A. Harrington, Kenneth L. Ho, Thomas Thorne and Michael P.H. Stumpf.

The paper outlines a method for testing the mass-action assumption of a model without non-linear fitting or parameter estimation. Instead, the method constructs a transformation of the model variables so that all the steady-state solutions lie on a common plane irrespective of the parameter values. The method then describes how to test if empirical data satisfies this relationship so as to reject (or fail to reject) the mass-action assumption. Sounds awesome!
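I haven’t worked through the details of their transformation, but the flavour of the test is a coplanarity check: collect steady-state measurements and ask whether they lie (approximately) on a common plane. A generic version of such a check – my own sketch using the smallest singular value of the centred data matrix, not the authors’ algorithm – might look like this:

    # Generic coplanarity check (my own sketch, not the authors' method).
    # Steady-state observations are rows of X; if the points lie on a common
    # plane, the smallest singular value of the centred matrix is ~0.
    import numpy as np

    def coplanarity_score(X):
        Xc = X - X.mean(axis=0)                 # centre the observations
        s = np.linalg.svd(Xc, compute_uv=False)
        return s[-1] / s[0]                     # small value => nearly coplanar

    rng = np.random.default_rng(0)
    planar = rng.normal(size=(20, 2)) @ rng.normal(size=(2, 3))  # points on a plane
    print(coplanarity_score(planar))                    # ~0
    print(coplanarity_score(rng.normal(size=(20, 3))))  # not small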

One of the reasons I like this contribution is that I’ve always found mass-action to be a bit confusing, and consequently, I think developing simple methods to test the validity of this assumption is a step in the right direction. Thinking about how to properly represent interacting types of individuals in a model is hard because there are lots of different factors at play (see below). For me, mass-action has always seemed a bit like a magic rabbit pulled out of a hat: just multiply the variables; don’t sweat the details of how the lion stalks its prey; just sit back and enjoy the show.

Figure 1. c \times (1 Lion \times 1 Eland) = 1 predation event per unit time, where c is a constant.

Before getting too far along, let’s state the law:

Defn. Let x_1 be the density of species 1, let x_2 be the density of species 2, and let f be the number of interactions that occur between individuals of the different species per unit time. Then, the law of mass-action states that f \propto x_1 \times x_2.

In understanding models, I find it much more straightforward to explain processes that involve just one type of individual – be it the logistic growth of a species residing on one patch of a metapopulation, or the constant per capita maturation rate of juveniles to adulthood. It’s much harder for me to think about interactions: infectious individuals that contact susceptibles, who then become infected; predators that catch prey and then eat them. Because in reality:

Person A walks around, sneezes, then touches the door handle that person B later touches; Person C and D sit next to each other on the train, breathing the same air.

There are lots of different transmission routes, but to make progress on understanding mass-action, you want to think about what happens on average, where the average is taken across all the different transmission routes. In reality, also consider that:

Person A was getting a coffee; Person B was going to a meeting; and Persons C and D were going to work.

You want to think about averaging over all of a person’s daily activities, and as such, all the people in the population might be thought of as being uniformly distributed across the entire domain. Then, the rate at which susceptibles find themselves in the same little \Delta x as an infectious person is probably proportional to S(t) \times I(t), giving a transmission rate of \beta S(t) I(t).

Part of my confusion is that I don’t think I understand how I am supposed to conceptualize the movement of individuals in such a population. Individuals are going to move around, but at every point in time the density of the S’s and the I’s still needs to be uniform. Let’s call this the uniformity requirement. I’ve always heard that a corollary of the assumption of mass-action is an assumption that individuals move randomly. I can believe that this type of movement rule might be sufficient to satisfy the uniformity requirement, but I can’t really believe that people move randomly, or, for that matter, that lions and gazelles do either. I think I’d be more willing to understand the uniformity requirement as being met by any kind of movement where the net result of all the movements of the S’s, and of the I’s, is no net change in the density of S(t) and I(t) over the domain.
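One way I’ve tried to build some intuition is by brute force: scatter S susceptible and I infectious individuals uniformly over a domain, count how many pairs fall within some small contact radius, and check whether that count scales like S \times I. A rough sketch (every number in it is arbitrary):

    # Rough simulation check of mass-action (arbitrary numbers throughout).
    # Place S susceptible and I infectious individuals uniformly at random on
    # the unit square and count pairs closer than a contact radius.
    import numpy as np

    def encounters(S, I, radius=0.01, reps=200, rng=np.random.default_rng(1)):
        total = 0
        for _ in range(reps):
            s_pos = rng.random((S, 2))
            i_pos = rng.random((I, 2))
            d = np.linalg.norm(s_pos[:, None, :] - i_pos[None, :, :], axis=2)
            total += np.count_nonzero(d < radius)
        return total / reps                      # mean encounters per snapshot

    for S, I in [(100, 50), (200, 50), (200, 100)]:
        print(S, I, encounters(S, I) / (S * I))  # ratio is roughly constant

The uniformity requirement is baked in here by construction; the interesting (and harder) question is which movement rules keep it true over time.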

That’s why I find mass-action a bit confusing. With that as a lead in:

How do you interpret the mass-action assumption? Do you have a simple and satisfying way of thinking about it?

________________________________

Related reading

This paper is relevant since the authors derive a mechanistic movement model and determine the corresponding functional response:

How linear features alter predator movement and the functional response by Hannah McKenzie, Evelyn Merrill, Raymond Spiteri and Mark Lewis.

On the art of ecological modelling

An article by Ian Boyd in this week’s Science argues that there is a systemic problem of researchers under-acknowledging the limits of model prediction:

Many of today’s ecological models have excessively optimistic ambitions to predict ecosystem and population dynamics.

And:

The main models are general population models (16) and data-driven, heuristic, ecosystem models (17,18), which are rarely validated and often overparameterized.

The good news is that:

Some recent studies (3-5)—including that by Mougi and Kondoh (6) on page 349 of this issue—help to specify where the limits of prediction may lie.

Initially, I thought that these articles (3-6) might be the answer that I’ve been searching for, but it seems that Dai et al. (2012), Allesina and Tang (2012; see also here) and Mougi and Kondoh (2012) are examples of well-derived models for specific questions, not rules for deciding how complex is too complex for a general range of ecological questions. Liu et al. (2011) is a general result, but it asks a different question, speaking to the difficulty of controlling real complex systems.

Ian Boyd’s article raises more questions than answers, but it draws attention to an important question, which is highly worthwhile in and of itself.

Do we need to derive mechanistic error distributions for deterministic models?

In fitting mathematical models to empirical data, one challenge is that deterministic models make exact predictions, and empirical observations usually do not match them perfectly. Changing a parameter value may reduce the distance between the model predictions and some of the data points, but increase the distance to others. To estimate the best-fit model parameters, it is necessary to assign a probability to deviations of a given magnitude. The function that assigns these probabilities is called the error distribution. In this post, I ask:

Do mechanistic, deterministic mathematical models necessarily have to have error distributions that are mechanistically derived?

One of the simplest approaches to model fitting is to use a probability distribution, such as the normal or Poisson distribution, as the error distribution, and to use a numerical maximization algorithm to identify the best-fit parameters. The more parameters there are to estimate, the more time-consuming this numerical search becomes, but in most cases this approach to parameter estimation is successful.
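Concretely, the simple approach might look something like the following sketch, where a logistic growth curve is fit to invented noisy counts by minimizing the normal negative log-likelihood:

    # Sketch: fit a deterministic model by maximum likelihood, assuming normal
    # measurement error (invented data; illustrative values only).
    import numpy as np
    from scipy.optimize import minimize

    def logistic(t, r, K, N0):
        return K / (1 + (K / N0 - 1) * np.exp(-r * t))

    rng = np.random.default_rng(2)
    t = np.arange(0, 20)
    y = logistic(t, 0.4, 80.0, 5.0) + rng.normal(0, 4.0, t.size)  # fake data

    def neg_log_lik(params):
        r, K, N0, sigma = params
        resid = y - logistic(t, r, K, N0)
        return 0.5 * np.sum(resid**2 / sigma**2 + np.log(2 * np.pi * sigma**2))

    fit = minimize(neg_log_lik, x0=[0.3, 60.0, 2.0, 1.0], method="Nelder-Mead")
    print(fit.x)   # estimates of r, K, N0 and sigma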

In biology, the processes that give rise to deviations between model predictions and data are measurement error, process error, or both. Some simple definitions are:

  • Measurement error: y(t) = f(x(t), b) + e
  • Process (or demographic) error: y(t) = f(x(t)+e, b)

where x(t) is a variable, such as the population size at time t, b is a vector of parameters, f(x,b) is the solution to the deterministic model, e is the error as generated by a specified probability density function, and y(t) is the model prediction including the error. As examples, counting the number of blue ducks each year might be subject to measurement error if a major source of error is in correctly identifying the colour of the duck, whereas extreme weather events that affect duckling survivorship are a source of process error.
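To make the distinction concrete, here is a sketch that generates ‘duck counts’ from the same underlying growth model under the two error formulations (all numbers are invented):

    # Sketch: the same growth model under measurement vs process error
    # (invented numbers). Process error perturbs the state and so compounds;
    # measurement error is only added to the final prediction.
    import numpy as np

    rng = np.random.default_rng(3)
    r, K, N0, T = 0.3, 100.0, 10.0, 30

    def step(N):                               # one year of logistic growth
        return N + r * N * (1 - N / K)

    x = np.empty(T); x[0] = N0                 # deterministic trajectory f(x(t), b)
    for i in range(1, T):
        x[i] = step(x[i - 1])

    y_meas = x + rng.normal(0, 5.0, T)         # measurement error: y = f + e

    y_proc = np.empty(T); y_proc[0] = N0       # process error: e enters the state
    for i in range(1, T):
        y_proc[i] = step(y_proc[i - 1]) + rng.normal(0, 5.0)

    print(y_meas[-1], y_proc[-1])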

In the simple approach described above, I intended to implement the measurement error formulation of the full model. Under this formulation, many of the probability density functions that might be chosen as the error distribution have a process-based interpretation. For example, the normal distribution arises if (1) there are many different sources of measurement error, (2) these errors arise from the same distribution, and (3) the total measurement error is the sum of all the individual errors. In biological data, all of that might be true to some degree, but in general this explanation is likely incomplete.

A second justification of the simple approach could be that the error distribution is not intended to be mechanistic, and here the normal distribution is simply a function that embodies the necessary characteristics – it’s a decreasing function of the absolute value of the deviation. But if you have derived a mechanistic deterministic model, is it really okay to have an error distribution that isn’t justified on mechanistic grounds? Does such an error distribution undermine the mechanistic model formulation to the point where you might as well have started with a more heuristic formulation of the whole model? Would this be called semi-mechanistic – where the model is mechanistic, but the error distribution is heuristic?

If this all seems like no big deal, consider that measurement error does not compound under the effect of the deterministic model, while process error does. When only measurement error operates, the processes occur as hypothesized and only the measurements are off. When process error occurs – slightly higher duck mortality than average, say – there are fewer breeding ducks in the next year, and this change feeds back into the process, affecting the predictions made for all future years. This makes fitting the model to y(t) quite difficult, because model fitting is easier when the model and the error can be separated so that numerical methods for solving deterministic models can be used. If the error and the model cannot be disentangled, then fitting to y(t) will usually involve solving a stochastic model of some sort, which is more difficult and more time-consuming.

An easier alternative for the process error formulation is to fit using gradient matching. This works because deterministic models are usually differential equations, x'(t) = g(x(t), b). Let z(t) be a transformation of the data such that z(t) = [y(t+Δt) - y(t)]/Δt; then we can fit the model as z(t) = g(y(t), b) + e_1, where the e_1 are deviations between the empirical estimate of the gradient and the gradient as predicted by the model. Deviations from the model-predicted gradient can be viewed as errors in the model formulation, or as error that arises due to variation in the processes described by the model. If we have a mixture of measurement error and process error, then we could do something nice like generalized profiling.
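A bare-bones version of gradient matching, using finite differences of the (invented) data as z(t) and glossing over any smoothing, might look like this:

    # Bare-bones gradient matching sketch (no smoothing; invented data).
    # Estimate r and K of logistic growth by least squares on the
    # finite-difference gradient z(t) = [y(t+dt) - y(t)] / dt.
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(4)
    dt = 1.0
    t = np.arange(0, 30, dt)
    y = 100.0 / (1 + 19 * np.exp(-0.3 * t)) + rng.normal(0, 1.0, t.size)

    z = np.diff(y) / dt                         # empirical gradient estimate
    y_left = y[:-1]                             # state at the left endpoint

    def residuals(params):                      # z(t) - g(y(t), b)
        r, K = params
        return z - r * y_left * (1 - y_left / K)

    fit = least_squares(residuals, x0=[0.1, 50.0])
    print(fit.x)   # estimates of r and K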

Anyway, this has all been my long-winded response to a couple of great posts about error at Theoretical Ecology by Florian Hartig. I wanted to revisit Florian’s question ‘what is error?’ Is error stochasticity? That would mean that e is a random variable, and I have a hard time imagining any good reason why e would not be a random variable. However, I think there are more issues to resolve if we want to understand how to define error. Specifically, how do we decide which processes are subsumed under f(x(t), b) and which go under e? Is this a judgment call, or should all the deterministic processes be part of f(x(t), b) and all the stochastic processes be put into e and therefore be considered error?

Geometry for the Selfish Herd (Hamilton, 1971)

No apology, therefore, need be made even for the rather ridiculous behavior that tends to arise in the later stages of the model process, in which frogs supposedly fly right around the circular rim… The model gives a hint which I wish to develop: …the selfish avoidance of a predator can lead to aggregation.

— W.D. Hamilton (1971)

Sheep cyclone got me thinking about Hamilton’s selfish herd (pdf). It’s been a while since I’ve commented on model derivation with reference to the published literature, and so it seemed like a good idea to re-read Geometry for the Selfish Herd (1971) with the goal of discussing Hamilton’s decision-making process regarding his model assumptions.

To quickly summarize, the model from Section 1 of the Selfish Herd involves a water snake that feeds once a day on frogs. If they stay in the pond, the frogs will certainly be eaten and so they exit the pond and arrange themselves around the rim. The snake will then strike at a random location and eat the nearest frog. A frog’s predation risk is described by its ‘domain of danger’, which is half the distance to the nearest frog on either side (see Figure). Lone frogs have the highest risk of predation, which leads to the formation of frog aggregations (like the one in the lower left corner of the Figure). The model from Section 3 of the Selfish Herd presents the same problem, but in two-dimensional space, and so now the domains of danger are polygons. In relation to Section 3, I think Hamilton foreshadows the sheep cyclone because he considers the case where the predator is concealed in the centre of a herd before pouncing, as well as discussing (the probably more common) predation on the margins (when the predator starts on the outside).
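To play with the Section 1 set-up, here is a crude caricature – the jumping rule is my own stand-in, not Hamilton’s – in which frogs on a circular rim repeatedly hop halfway toward their nearest neighbour, and we track the resulting domains of danger:

    # Crude caricature of the Section 1 set-up (the jumping rule is a stand-in,
    # not Hamilton's): frogs on a rim of circumference 1 repeatedly hop halfway
    # toward their nearest neighbour.
    import numpy as np

    rng = np.random.default_rng(5)
    frogs = np.sort(rng.random(30))             # positions on the rim, in [0, 1)

    def domains_of_danger(pos):
        pos = np.sort(pos)
        gaps = np.diff(np.concatenate([pos, [pos[0] + 1.0]]))  # circular gaps
        return 0.5 * (np.roll(gaps, 1) + gaps)  # half-distance to each neighbour

    for _ in range(50):
        new = frogs.copy()
        for i, p in enumerate(frogs):
            others = np.delete(frogs, i)
            d = (others - p + 0.5) % 1.0 - 0.5  # signed circular distances
            nearest = others[np.argmin(np.abs(d))]
            new[i] = (p + 0.5 * ((nearest - p + 0.5) % 1.0 - 0.5)) % 1.0
        frogs = np.sort(new)

    dod = domains_of_danger(frogs)
    print(np.median(dod), dod.max())            # most domains shrink; frogs on the
                                                # edge of a clump keep large ones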

As you can see from the quote above, Hamilton makes no apologies for unrealistic qualities because his model gives him some helpful insight. This insight is that aggregations could arise from a selfish desire to diffuse predation risk. In terms of the model derivation, a helpful construct is the domain of danger, whereby minimizing the domain of danger likely corresponds to minimizing predation risk, assuming the predator only takes one prey and starts searching in a random location.

Overall, the one-dimensional scenario seems contrived, and I’m not sure I understand how the insight from one dimension carries over into two dimensions. In two dimensions, what incentive is there for the initially-far-away-from-the-lion ungulates to let the initially-near-the-lion ungulates catch up and form a herd? The model in Section 1 is what I would call a ‘toy model’ – it acts as a proof of concept, but is so simple that its value is more as ‘something to play around with’ than as something intended as a legitimate instrument. I also wonder about the relevance of edge effects – in Section 1, the model is not just one-dimensional, but the frogs are limited to the rim of the pond, which is of finite length. The more realistic two-dimensional example of ungulates on the plain should consider, I think, a near-infinite expanse. If the one-dimensional problem were instead ‘frogs on a river bank’, would this all play out the same? Would frogs on the river bank aggregate?

Pre-1970s computing efficiency probably made it quite difficult for Hamilton to investigate this question to the level that we could today, but nonetheless, I’m going to put this model in the ‘Too Simple’ category. For me, this paper never reaches the minimum level of complexity that I need it to – that would be: (a) simultaneous movement of prey; (b) in response to a pursuing predator; and (c) in an expansive two-dimensional space. Aside from this paper, Hamilton sounds like he was an entertaining fellow whose other work made highly substantive contributions.