Testing mass-action

UPDATE: I wrote this post to say that I don’t really know the justification for the law of mass action; however, comments from Martin and Helen suggest that a derivation is possible using moment closure/mean-field methods. I recently found this article:

Use, misuse and extensions of “ideal gas” models of animal encounter. JM Hutchinson, PM Waser. 2007. Biological Reviews. 82:335-359.

I haven’t had time to read it yet, but from the title it certainly sounds like it answers some of my questions.

——————–

Yesterday, I came across this paper from PNAS: Parameter-free model discrimination criterion based on steady-state coplanarity by Heather A. Harrington, Kenneth L. Ho, Thomas Thorne and Michael P.H. Stumpf.

The paper outlines a method for testing the mass-action assumption of a model without non-linear fitting or parameter estimation. Instead, the method constructs a transformation of the model variables so that all the steady-state solutions lie on a common plane irrespective of the parameter values. The method then describes how to test if empirical data satisfies this relationship so as to reject (or fail to reject) the mass-action assumption. Sounds awesome!
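To make the idea concrete, here is a rough sketch (my own illustration, not the algorithm from the paper) of how one might check whether a set of transformed steady-state measurements is consistent with lying on a common plane, using the smallest singular value of the centred data as the out-of-plane residual; the data below are hypothetical.

```python
# A rough sketch (my own illustration, not the algorithm from the paper):
# given several steady-state measurements, quantify how close the points
# come to lying on a common plane via the smallest singular value of the
# centred data matrix. A small residual is consistent with coplanarity.
import numpy as np

def coplanarity_residual(points):
    """points: (n_measurements, n_coordinates) array of transformed
    steady-state observations; returns the out-of-plane spread."""
    X = np.asarray(points, dtype=float)
    Xc = X - X.mean(axis=0)                         # centre the cloud of points
    return np.linalg.svd(Xc, compute_uv=False)[-1]  # ~0 if the points are coplanar

# Hypothetical data: noisy points near the plane z = 1 - x - y
rng = np.random.default_rng(0)
xy = rng.uniform(size=(20, 2))
z = 1 - xy.sum(axis=1) + 0.01 * rng.normal(size=20)
print(coplanarity_residual(np.column_stack([xy, z])))  # small number
```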

One of the reasons I like this contribution is that I’ve always found mass-action to be a bit confusing, and consequently, I think developing simple methods to test the validity of this assumption is a step in the right direction. Thinking about how to properly represent interacting types of individuals in a model is hard because there are lots of different factors at play (see below). For me, mass-action has always seemed a bit like a magic rabbit pulled out of a hat: just multiply the variables; don’t sweat the details of how the lion stalks its prey; just sit back and enjoy the show.

Figure 1. c \times (1 Lion \times 1 Eland) = 1 predation event per unit time, where c is a constant.

Before getting too far along, let’s state the law:

Defn. Let x_1 be the density of species 1, let x_2 be the density of species 2, and let f be the number of interactions that occur between individuals of the different species per unit time. Then, the law of mass-action states that f \propto x_1 \times x_2.

In understanding models, I find it much more straightforward to explain processes that just involve one type of individual – be it the logistic growth of a species residing on one patch of a metapopulation, or the constant per capita maturation rates of juveniles to adulthood. It’s much harder for me to think about interactions: infectious individuals that contact susceptibles, who then become infected, and predators that catch prey, and then eat them. Because in reality:

Person A walks around, sneezes, then touches the door handle that person B later touches; Person C and D sit next to each other on the train, breathing the same air.

There are lots of different transmission routes, but to make progress on understanding mass-action, you want to think about what happens on average, where the average is taken across all the different transmission routes. In reality, also consider that:

Person A was getting a coffee; Person B was going to a meeting; and Persons C and D were going to work.

You want to think about averaging over all of a person’s daily activities, and as such, all the people in the population might be thought of as being uniformly distributed across the entire domain. Then, the number of susceptibles per unit time that find themselves in the same little \Delta x as an infectious person is probably proportional to S(t) \times I(t), which gives the familiar transmission term \beta S(t) I(t).

Part of it is that I don’t think I understand how I am supposed to conceptualize the movement of individuals in such a population. Individuals are going to move around, but at every point in time the density of the S’s and the I’s still needs to be uniform. Let’s call this the uniformity requirement. I’ve always heard that a corollary of the assumption of mass-action is an assumption that individuals move randomly. I can believe that this type of movement rule might be sufficient to satisfy the uniformity requirement; however, I can’t really believe that people move randomly, or for that matter, that lions and gazelles do either. I’d be more willing to understand the uniformity requirement as being met by any kind of movement where the combined effect of all the movements of the S’s, and of the I’s, produces no net change in the density of S(t) and I(t) over the domain.
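Since this is the kind of thing a toy simulation can probe, here is a rough sketch (all parameter values are hypothetical) that scatters two types of individuals uniformly over a periodic domain, lets them take random steps, and counts close encounters each time step; if mass-action is a decent description, the mean encounter count should scale roughly with the product of the two population sizes.

```python
# A toy simulation (all parameter values hypothetical): S-type and I-type
# individuals start uniformly distributed on a periodic unit square, take
# random steps, and we count close encounters each time step. Under
# mass-action the mean encounter count should scale roughly with S * I.
import numpy as np

def mean_encounters(n_S, n_I, steps=200, radius=0.02, step_sd=0.05, seed=1):
    rng = np.random.default_rng(seed)
    S = rng.uniform(size=(n_S, 2))
    I = rng.uniform(size=(n_I, 2))
    counts = []
    for _ in range(steps):
        S = (S + step_sd * rng.normal(size=S.shape)) % 1.0  # random walk on a torus
        I = (I + step_sd * rng.normal(size=I.shape)) % 1.0
        diff = np.abs(S[:, None, :] - I[None, :, :])
        diff = np.minimum(diff, 1.0 - diff)                 # periodic distances
        counts.append(np.sum(np.linalg.norm(diff, axis=-1) < radius))
    return np.mean(counts)

for n_S, n_I in [(50, 50), (100, 50), (100, 100)]:
    # encounters per (S individual x I individual) should be roughly constant
    print(n_S, n_I, mean_encounters(n_S, n_I) / (n_S * n_I))
```

The interesting question, to me, is which non-random movement rules also leave that scaling intact.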

That’s why I find mass-action a bit confusing. With that as a lead in:

How do you interpret the mass-action assumption? Do you have a simple and satisfying way of thinking about it?

________________________________

Related reading

This paper is relevant since the authors derive a mechanistic movement model and determine the corresponding functional response:

How linear features alter predator movement and the functional response by Hannah McKenzie, Evelyn Merrill, Raymond Spiteri and Mark Lewis.


Q1. Define independent parameterization

Mechanistic and phenomenological models

Mechanistic models describe the processes that relate variables to each other, attempting to explain why particular relationships emerge, rather than solely how the variables are related, as a phenomenological model would. Colleagues will ask me ‘is this a mechanistic model?’ and then provide an example. Often, I decide that the model in question is mechanistic, even though the authors of these types of models rarely emphasize it. Otto & Day (2008) wrote that mechanistic and phenomenological are relative model categorizations – suggesting that it is only productive to discuss whether one model is more or less mechanistic than another – and I’ve always thought of this as a nice way of looking at it. This has also led me to think that nearly any model, on some level, can be considered mechanistic.

But, of course, not all models are mechanistic. Here’s the definition that I am going to work from (derived from the Ecological Detective, see here):

Mechanistic models have parameters with biological interpretations, such that these parameters can be estimated with data of a different type than the data of interest.

For example, if we are interested in a question that can be answered by knowing how the size of a population changes over time, then our data of interest is number versus time. A phenomenological model could be parameterized with data describing number versus time taken at a different location. On the other hand, a mechanistic model could be parameterized with data on the number of births versus time and the number of deaths versus time; this is a different type of data, and using it is only possible because the parameters have biological interpretations by virtue of the model being mechanistic.

The essence of a mechanistic model is that it should explain why; however, to do so, it is necessary to give biological interpretations to the parameters. This, then, gives rise to a test of whether a model is mechanistic or not: if it is possible to describe a different type of data that could be used to parameterize the model, then we can designate the model as mechanistic.

Validation

In mathematical modelling we can test our model structure and parameterization by assessing the model’s agreement with empirical observations. The most convincing models are parameterized and formulated completely independently of the validation data. It is possible to validate both mechanistic and phenomenological models. Example 1 is a description of a series of three experiments that I believe would be sufficient to validate the logistic growth model.

Example 1.  The model is \frac{d N}{d t} = r N \left(1-\frac{N}{K}\right) which has the solution N(t) = f(t, r, K, N_0) and where N_0 is the initial condition, N(0).
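For concreteness, the explicit form of this solution (a standard result) is N(t) = f(t, r, K, N_0) = \frac{K N_0 e^{r t}}{K + N_0 \left(e^{r t} - 1\right)}, so once r, K and N_0 are known the whole trajectory is determined.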

Experiment 1 (Parameterization I):

1. Put 6 mice in a cage: 3 males and 3 females, of varied, representative ages. (This is a sexually reproducing species. I want a low density, but not so few that I am worried about inbreeding depression.) A fixed amount of food is put in the cage every day.

2. Every time the mice produce offspring, remove the offspring and put them somewhere else (i.e., keep the number of mice constant at 6 throughout Experiment 1).

3. Run the experiment for a while; record the total time, the no. of offspring, and the no. of the original 6 mice that died.

Experiment 2 (Parameterization II):

4. Put too many mice in the cage, with the same amount of food every day as in Experiment 1. Let the population decline to a constant number. This is K.

5. r is calculated from the results of Experiment 1 and the estimate of K by solving (No. births – No. deaths)/(total time) = 6r(1-6/K) for r (see the sketch following Experiment 3).

Experiment 3 (Validation):

6. Put 6 mice in the cage, with the same amount of food as before. This time keep the offspring in the cage and produce the time series N(t) by recording the number of mice in the cage each day. Compare the empirical observations for N(t) with the now fully parameterized equation for f(t, r, K, N(0)).
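Here is a sketch of the arithmetic in steps 5 and 6; the experiment numbers and the length of the validation time series are hypothetical placeholders, not real data.

```python
# A sketch of the arithmetic in steps 5 and 6; the experiment numbers and
# the length of the validation time series are hypothetical placeholders.
import numpy as np

births, deaths, total_time = 40, 4, 100.0   # hypothetical Experiment 1 results
K = 30.0                                    # hypothetical Experiment 2 plateau

# Step 5: solve (No. births - No. deaths)/(total time) = 6 r (1 - 6/K) for r
r = (births - deaths) / total_time / (6 * (1 - 6 / K))

def logistic(t, r, K, N0):
    """Closed-form solution of dN/dt = r N (1 - N/K), i.e. f(t, r, K, N0)."""
    return K * N0 * np.exp(r * t) / (K + N0 * (np.exp(r * t) - 1))

# Step 6: compare the fully parameterized curve with the daily counts N(t)
# recorded in Experiment 3 (the validation data themselves are not shown here).
t = np.arange(0, 60)
print(np.round(logistic(t, r, K, N0=6), 1))
```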

The Question. I defined that scheme for model parameterization and validation to provide context for the following question:

  • When scientists talk about independent model parameterization and validation – what exactly does that mean? How independent is independent enough? How is independent defined in this context?

If I were asked this, I would say that the parameterization and the validation data should be different. In the logistic growth model example (above), the validation data are taken at different densities and under a different experimental set-up. However, consider this second example.

Example 2. Another way to parameterize and validate a model is to use the same data, but to use only part of the information. As an example consider the parameterization of r (the net reproductive rate) for the equation,

\frac{\partial u}{\partial t} = D\frac{\partial^2 u}{\partial x^2} + r u           (eqn 1)

The solution to Equation (1) is u(x,t), a population density that describes how the population changes in space and time; another result is that the radius of the species range increases at a rate c=\sqrt{4rD}. To validate the model, I will estimate c from species range maps (see Figure 1). To estimate r, I will use data on the change in population density taken from a core area (this approach is suggested in Shigesada and Kawasaki (1997): Biological Invasions, pp. 36-41; see also Figure 1). To estimate D, I will use data on wolf dispersal taken from satellite collars.
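(As an aside, a standard back-of-the-envelope way to see where c=\sqrt{4rD} comes from: at the leading edge of the invasion u is small, so Equation (1) is approximately linear, and substituting a travelling-wave ansatz u \propto e^{-s(x - ct)} gives c = Ds + r/s, which is minimized at s = \sqrt{r/D}, giving c = 2\sqrt{rD} = \sqrt{4rD}.)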

Returning to the question. But, is this data, describing the density of wolves in the core area, independent of the species range maps used for validation? The species range maps, at any point in time, provide information on both the number of individuals and where these individuals are. The table that I used for the model parameterization is recovered from the species range maps by ignoring the spatial component (see Figure 1).

Figure 1. The location of wolves at time 0 (red), time 1 (blue) and time 2 (green). The circles are used to estimate c, the rate of expansion of the radius of the wolves’ range, at t=0,1,2. The population size at t=0,1,2 is provided in the table. The core area is shown as the dashed line. Densities are calculated by dividing the number of wolves by the size of the core area. The reproductive rate is calculated from the slope of a regression of the density of wolves at time t on the density at time t-1. For this example, the above table will only yield two data points, (3,5) and (5,9).
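Here is a small sketch of that parameterization arithmetic (the value of D is a hypothetical placeholder; note that the through-origin regression slope of density at time t on density at time t-1 estimates the per-step growth factor e^{r}, so r is its logarithm):

```python
# A sketch of the Example 2 parameterization; D is a hypothetical placeholder.
# The through-origin regression slope of density(t) on density(t-1) estimates
# the per-step growth factor e^r (time step = 1), so r is its logarithm.
import numpy as np

pairs = np.array([(3.0, 5.0), (5.0, 9.0)])        # (density at t-1, density at t)
slope = np.sum(pairs[:, 0] * pairs[:, 1]) / np.sum(pairs[:, 0] ** 2)
r = np.log(slope)                                 # net reproductive rate

D = 25.0                                          # hypothetical diffusion coefficient
c_predicted = np.sqrt(4 * r * D)                  # predicted rate of range expansion
print(r, c_predicted)  # compare c_predicted with c estimated from the range maps
```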

While the data for the parameterization of r and the validation data used to estimate c seem quite related, the procedure outlined in Example 2 is still a strong test of Equation (1). Equation (1) makes some very strong assumptions, the strongest of which, in my opinion, is that the dispersal distance and the reproductive success of an individual are unrelated. If the assumptions of Equation (1) don’t hold, then there is no guarantee that the model predictions will bear any resemblance to the validation data. Furthermore, the construction of the table makes use of the biological definition of r, in contrast to a fully phenomenological approach to parameterization, which would fit the equation u(x,t) to the data on the locations of the wolves to estimate r and D, and would then prohibit validation with this same data set.

So, what are the requirements for independent model parameterization and validation? Are the expectations different for mechanistic versus phenomenological models?

Hit me with your best shot

Choosing and designating some models as Great has been causing me anxiety and so here’s a model that I’m going to write about just for fun. Thanks to Titia Praamsma for sending me this mathematical model of shot selection in basketball by Dr. Brian Skinner of the University of Minnesota. Skinner even acknowledges the parsimony issue:

While the complex nature of decision-making in basketball makes such a description seem prohibitively difficult, it is nonetheless natural as belonging to the class of “optimal stopping problems”…

Fig. 1. Occasionally, I have wanted to derive a mathematical model for basketball; however, the complex nature of the decision-making process in basketball makes it prohibitively difficult for me.

Now, I follow basketball fairly closely, yet I’ve never been able to come up with any good ideas for applicable mathematical models (see Fig. 1). After reading Skinner’s work I realize that this is because I was framing the question so that I would have to tackle it using complicated and possibly uninformative methodology.*

Let me explain. When I think about basketball, I think that as a coach your job is (partly) to invent set plays that will increase your team’s chance of getting a high quality shot. This problem is spatial, game theoretical (i.e., it depends on whether the other team is playing a zone or a man-to-man defense) and it’s probably necessary to coordinate the movement of all five players (i.e., n=1 or 2 is fundamentally a different problem and so studying these simple cases is likely uninstructive). I know these types of problems are hard and so it seemed like too much work for me to derive a mathematical model for basketball.

The above discussion is meant to illustrate that anything can seem complicated if you choose to look at it that way.**  After acknowledging the complexity of basketball, Skinner goes on to come up with a simple model of shot selection. I think that his conceptual approach is quite clever and I hope this is underscored by my admission that I had (naively) written basketball off as not something that was amenable to simple models.

Fig. 2. Tiffany Hayes (with the ball) of the University of Connecticut: dropping 35 points, shooting 11-for-15, and lending support to the controversial "hot hand" phenomenon (see Skinner (2012)).

Certainly, part of what Skinner does that’s clever is to come up with a good question. The question is this:

Given a shot opportunity of quality q, should a player take the shot?

The answer is that it’s a good idea to shoot when q > f, where f is a threshold quality value that depends on n, the number of shot opportunities remaining in the possession. The article explains how n might depend on the team’s turnover rate, the existence of a shot clock, and how fast the team moves the ball.

Interestingly, the article reports that a team that moves the ball well has a higher n, and therefore a higher f, and would want to execute more passes (until the shot opportunity q > f arises) than a team with poor ball movement (i.e., lower n and lower f). This is counter-intuitive because you’d think that if a team passes the ball quickly, then they can shoot sooner. This is true, but the result isn’t about whether a team with good ball movement can shoot earlier in the possession and win, it’s about what’s optimal: it’s optimal to expect a better shot to arise during a possession if shots are created at a faster rate – and so the team with good ball movement makes more passes and they might even wait longer to shoot.
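To see the logic, here is a minimal sketch of the underlying optimal-stopping recursion (my own illustration, not Skinner’s exact formulation), under the simplifying assumptions that shot quality q is drawn independently from a Uniform(0,1) distribution and that a possession ending without a shot is worth zero:

```python
# A minimal sketch of the optimal-stopping logic behind the threshold f
# (my own illustration, not Skinner's exact formulation). Assumptions for
# illustration: shot quality q is i.i.d. Uniform(0, 1), and a possession
# that ends without a shot is worth zero.
def thresholds(n_max):
    """f[n] = expected value of passing up the current shot when n more
    opportunities remain in the possession; shoot iff q > f[n]."""
    f = [0.0]                          # no opportunities left: take any shot
    for n in range(1, n_max + 1):
        a = f[n - 1]
        f.append((1 + a ** 2) / 2)     # E[max(q, a)] for q ~ Uniform(0, 1)
    return f

print(thresholds(5))   # roughly [0.0, 0.50, 0.63, 0.70, 0.74, 0.78]
```

The threshold rises with n, which is exactly the counter-intuitive result above: a team that generates shot opportunities at a faster rate should be choosier about which ones it takes.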

Let’s revisit the model construction. You might say that if Brittney Griner gets the ball in the post then she should dunk it. That’s a good decision because this would be a high quality shot where q > f, but what I like about the model is that it abstracts away the name of the player, where the player is on the court, and what type of offense was run to generate the shot opportunity, and simply summarizes all this complexity into the one variable, q, shot quality.

Another aspect of this paper that I like is the comparison with data from the NBA. In fact, section 4 of Skinner’s paper is solely dedicated to recasting f in terms of a shooting rate under the assumption that shot opportunities arise at a constant rate τ, so that the number of remaining shot opportunities n at time t is Poisson distributed. This is a nice final step, because how would one ever know if q > f without these additional assumptions? My point is that q (shot quality) is not an especially useful quantity, because how would one measure that? On the other hand, the shot rate can be estimated from a play-by-play box score, which reports when shots were taken during the course of a game.

So that’s basketball. Some time in the future we might talk about soccer or coffee, but I have some other posts to get to before that. There’s also a homework problem if you continue to scroll down.

* I think I’m suggesting that if the only ideas you have involve deriving a complicated model, the solution might be to refine the question.

** I stated that there’s always a complicated way of looking at a problem. What I mean is that you can take a complex phenomenon at face value and then obviously it will appear complex. Today’s philosophical question is:

Given a complex process does there necessarily exist a simple way of looking at it that will yield productive insight?

Ronald Ross’ mosquito theorem

I had promised to write about models that I think succeed on a very high level and so let me start that series of posts by discussing Ronald Ross’ so-called mosquito theorem.

If you go running in Kingston, Ontario, along the west bank of the start of the Rideau Canal, past the dock and towards the Kingston Whig-Standard headquarters, you will come across a Celtic cross that reads:

In memory of an estimated one thousand Irish labourers and their coworkers who died of malaria and by accidents in terrible working conditions while building the Rideau Canal, 1826-1832.

I didn’t think too much of that at the time, but later I was trying to recall the disease that was mentioned. Malaria? In Kingston? In Canada? Could that be right?

Ronald Ross. This picture was taken from Wikipedia and is in the public domain.

And so I read up on the details (see for example here and here) and, sure enough, malaria had been endemic in North America and was eradicated through the use of insecticides and mosquito control.

Ronald Ross identified anopheline mosquitoes as the vector for malaria transmission and developed a mathematical model showing that malaria could be eradicated as long as the number of mosquitoes per human was brought below a threshold value. This was significant because it meant malaria could be eliminated without needing to kill every mosquito. Given that introduction, it’s not hard to see why I might consider this a great mathematical model: (1) malaria was eradicated in North America – an extremely impressive achievement, and (2) the ‘mosquito theorem’ turned out to be correct.
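For readers who want to see the threshold in action, here is a sketch of a textbook Ross-Macdonald-style formulation (not necessarily the exact equations Ross wrote down, and with entirely hypothetical parameter values): x is the fraction of infected humans, y the fraction of infected mosquitoes, and m the number of mosquitoes per human, and infection persists only when m exceeds a critical value set by the biting, transmission, recovery and mosquito mortality rates.

```python
# A sketch of a textbook Ross-Macdonald-style model (not necessarily the exact
# equations Ross wrote down); all parameter values are hypothetical. x is the
# fraction of infected humans, y the fraction of infected mosquitoes, and m the
# number of mosquitoes per human. Infection persists only above a threshold m.
a, b, c, gamma, mu = 0.3, 0.5, 0.5, 0.05, 0.1   # biting rate, transmission probs,
                                                # human recovery, mosquito mortality

def endemic_level(m, steps=200000, dt=0.01):
    x, y = 0.01, 0.01                           # small initial infection
    for _ in range(steps):
        dx = a * b * m * y * (1 - x) - gamma * x
        dy = a * c * x * (1 - y) - mu * y
        x, y = x + dt * dx, y + dt * dy
    return x

m_crit = gamma * mu / (a ** 2 * b * c)          # threshold mosquitoes per human
for m in (0.5 * m_crit, 2.0 * m_crit):
    print(round(m, 3), round(endemic_level(m), 3))  # ~0 below threshold, >0 above
```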

Ross sounds like an amazing guy and I’m not sure if there is a lesson here or if it’s just that he was flat out brilliant. Having said that, I think one of the reasons Ross’ work was so successful was because mathematical modelling was just one facet of his arsenal – he likely derived the model because he needed the result to help design and lobby for prevention strategies. If there is any lesson here for today’s theoretician it might be that Ross’ success underscores the importance of collaboration with empiricists, ecologists, clinicians and other experts.

In 1902, Ross was awarded the Nobel Prize in Physiology or Medicine; however, this was after Ross’ experimental work on malaria transmission and before Ross derived his first epidemiological model in 1908. Yet, despite Ross having made numerous and varied contributions, the Nobel Laureates webpage singles out mathematical modelling as Ross’ greatest contribution:

He made many contributions to the epidemiology of malaria and to methods of its survey and assessment, but perhaps his greatest was the development of mathematical models for the study of its epidemiology, initiated in his report on Mauritius in 1908, elaborated in his Prevention of Malaria in 1911 and further elaborated in a more generalized form in scientific papers published by the Royal Society in 1915 and 1916. These papers represented a profound mathematical interest which was not confined to epidemiology, but led him to make material contributions to both pure and applied mathematics.