Recommended reading list

I’ll update this post with recommended readings as I come across them.

ARTICLES

Model selection in ecology and evolution (2004) by Jerald B. Johnson and Kristian S. Omland. A great starting point for learning about model selection and mechanistic models.

Fitting population dynamic models to time-series data by gradient matching (2002) by Stephen P. Ellner, Yodit Seifu and Robert H. Smith. This quote from the abstract gives a good indication of what the paper is about: “Semimechanistic modeling makes it possible to test assumptions about the mechanisms behind population fluctuations without the results being confounded by possibly arbitrary choices of parametric forms for process-rate equations.” The idea of the paper is that functions and parameters that are well-supported by biological evidence are explicitly specified in the model; functions and parameters that are not well understood are fitted, hence the term ‘semimechanistic’.*

Why do populations cycle? A synthesis of statistical and mechanistic modeling approaches (1999) by Bruce Kendall, Cheryl Briggs, Bill Murdoch, Peter Turchin, Steve Ellner, Ed McCauley, Roger Nisbet and Simon Wood. This paper develops an approach, called probe matching, which can be used for parameter estimation and model fitting. Probe matching combines the best of both worlds – statistical and mechanistic models – and, along the way, the paper highlights the advantages of and the rationale for each approach.

Mathematics, ecology, ornithology (1980) by Simon Levin. A general discussion of mathematical modelling in ecology. I like that this article is concise and well articulated.

BLOG POSTS

Ignorance is bliss by Jeremy Fox. On deterministic versus stochastic model formulation.
Fusing theory and data: a plea for help from Dan Bolnick, by Dan Bolnick. Most of us agree that fusing theory and data is a worthy objective. How exactly to go about doing this is a topic that I wish received a bit more attention.
20 different stability concepts by Jeremy Fox. A compendium of ways to describe the properties of fixed points.
On the use and care of mathematical models by Simon Levin (1975) as quoted by Jeremy Fox.
Charles Elton on A.J. Lotka by Jeremy Fox at Dynamic Ecology.

Footnotes

*Credit to J. Fox and E. Pederson for putting me onto this paper.

Blog challenge: Sheep cyclone

In relation to multiscale modelling, a category of problem that receives a good amount of attention is understanding how collective (group) movement arises from movement rules defined at the individual level. For example, the direction of each individual’s next move might depend on where that individual’s nearest neighbours are. What type of rules are necessary for the group to stay together? If the group stays together, what does the collective movement look like (directed or meandering)?
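To make ‘movement rules defined at the individual level’ concrete, here is a minimal sketch of a Vicsek-style alignment rule (my own toy illustration, with made-up parameter values, not taken from any of the work linked below): each individual moves at constant speed and, at every step, adopts the average heading of its neighbours plus a little noise.

```python
import numpy as np

# Toy Vicsek-style flocking rule (illustrative parameter values only).
rng = np.random.default_rng(0)
N, L, speed, r, eta, steps = 50, 10.0, 0.1, 1.0, 0.2, 500

pos = rng.uniform(0, L, size=(N, 2))        # positions in a periodic box
theta = rng.uniform(-np.pi, np.pi, size=N)  # headings

for _ in range(steps):
    # pairwise displacements with periodic boundaries
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)
    neighbours = np.linalg.norm(d, axis=-1) < r   # includes self
    # vector-average the neighbours' headings (avoids angle wrap-around)
    mean_sin = (neighbours * np.sin(theta)).sum(axis=1)
    mean_cos = (neighbours * np.cos(theta)).sum(axis=1)
    theta = np.arctan2(mean_sin, mean_cos) + eta * rng.uniform(-np.pi, np.pi, size=N)
    pos = (pos + speed * np.column_stack((np.cos(theta), np.sin(theta)))) % L

# Order parameter: near 1 means directed collective motion, near 0 means meandering.
print(np.hypot(np.cos(theta).mean(), np.sin(theta).mean()))
```

With strong alignment (small eta) the group moves in a common direction; with weak alignment it meanders – exactly the individual-to-collective question posed above.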

Some nice examples, and more general background, can be found here, here, here and here. So this is my question:

What type of individual movement rules are needed to produce a Sheep cyclone?

Bonus question: How close together do the parallel walls (relative to the car width) need to be?

Mechanistic models: what is the value of understanding?

My recent thinking has been shaped by my peripheral involvement in discussions between colleagues at the University of Ottawa. What I will discuss now is a confluence of ideas expressed by several people; I say this because these have been stimulating discussions, and I don’t want to appear to take full credit for the ideas by virtue of this solo-author blog post.

———————————–

All other things being equal, mechanistic models are more powerful since they tell you about the underlying processes driving patterns. They are more likely to work correctly when extrapolating beyond the observed conditions.

– Bolker (2008), Ecological Models and Data in R, p. 7.

My question for today is:

Given a mechanistic model and a phenomenological (or statistical) model, if we are trying to determine which model is best, shouldn’t the mechanistic model score some ‘points’ by virtue of it being mechanistic?

Assume a data set that both models are intended to describe. Define mechanistic and phenomenological as follows:

Mechanistic model: a hypothesized relationship between the variables in the data set where the nature of the relationship is specified in terms of the biological processes that are thought to have given rise to the data. The parameters in the mechanistic model all have biological definitions and so they can be measured independently of the data set referenced above.

Phenomenological/Statistical model: a hypothesized relationship between the variables in the data set, where the relationship seeks only to best describe the data.

These definitions are taken from The Ecological Detective by Ray Hilborn and Marc Mangel. Here are some additional comments from Hilborn and Mangel:

A statistical model foregoes any attempt to explain why the variables interact the way they do, and simply attempts to describe the relationship, with the assumption that the relationship extends past the measured values. Regression models are the standard form of such descriptions, and Peters (1991) argued that the only predictive models in ecology should be statistical ones; we consider this an overly narrow viewpoint.

Having defined mechanistic and phenomenological, the final piece of the puzzle is to define ‘best’. Conventional wisdom is that mechanistic models facilitate biological understanding; however, I think that’s only one step removed from prediction – you want to take your newfound understanding and do something with it, specifically make a prediction and test it. Therefore, the goal of both mechanistic and phenomenological models is to predict, and the performance of the models in this respect is referred to as model validation.

But, validation data is not always available. One reason is that if the models predict into the future, we will have to wait until the validation data appears. The other reason is that if we don’t have to wait, it’s a bit tempting to take a sneak peek at the validation data. For both model types, you want to present a model that is good given all the information available – it’s tough to write a paper where your conclusion is that your model is poor when the apparent poorness of the model can be ‘fixed’ by using the validation data to calibrate/parameterize the model (which then leaves no data to validate the model, something that, if anything, is a relief because your previous try at model validation didn’t go so well).

In the absence of any validation data, one way to select the best model is to use the Akaike Information Criterion (AIC) or a similar measure. AIC will choose a model that fits the data well without involving too many parameters, but does AIC tell me which model is best, given my above definition of best, when comparing a mechanistic and a statistical model? Earlier this week, I said that if we wanted to settle this – which is better, mechanistic or phenomenological – then we could settle it in the ring with an AIC battle-to-the-death.
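To make the ‘AIC battle’ concrete, here is a minimal sketch of how such a comparison might be set up. The logistic-growth model stands in for the mechanistic candidate, a quadratic regression stands in for the phenomenological candidate, and the simulated data and Gaussian error assumption are my own illustrative choices, not anything from the references above.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Simulated abundance data (illustrative only): logistic growth plus noise.
rng = np.random.default_rng(1)
t = np.arange(15.0)
N_obs = 80 / (1 + 15 * np.exp(-0.4 * t)) + rng.normal(0, 2, size=t.size)

def neg_loglik(params, mean_fn):
    *theta, sigma = params
    if sigma <= 0:
        return np.inf
    return -norm.logpdf(N_obs, mean_fn(theta, t), sigma).sum()

# Mechanistic candidate: logistic growth; N0, r and K have biological meanings.
def logistic(theta, t):
    N0, r, K = theta
    return K / (1 + ((K - N0) / N0) * np.exp(-r * t))

# Phenomenological candidate: quadratic polynomial in time.
def quadratic(theta, t):
    a, b, c = theta
    return a + b * t + c * t ** 2

def aic(mean_fn, x0):
    fit = minimize(neg_loglik, x0, args=(mean_fn,), method="Nelder-Mead")
    k = len(x0)                      # number of fitted parameters, including sigma
    return 2 * k + 2 * fit.fun       # AIC = 2k - 2 ln L

print("logistic  AIC:", aic(logistic, [5.0, 0.5, 70.0, 1.0]))
print("quadratic AIC:", aic(quadratic, [5.0, 5.0, 0.0, 1.0]))
```

Note that in this toy setup both candidates have the same number of fitted parameters, so AIC judges them purely on goodness of fit – exactly the situation where the mechanistic model gets no extra credit for being mechanistic.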

As the one who was championing the mechanistic approach, I now feel like I didn’t quite think that one through. Of the set of all models that are phenomenological versus the set of all models that are mechanistic (with respect to a particular data set), it’s not rocket science to figure out which set is a subset of the other. If one model is a relationship that comes with a biological explanation too, then you’re getting something extra compared with a model that just describes a relationship. Shouldn’t I get some points for that? Didn’t I earn them when I took the mechanistic approach to modelling, given that my options for candidate models were much more limited?

There is one way that mechanistic models are already getting points from AIC. If I did a good job of parameterizing my mechanistic model there should be few fitted parameters – hopefully even none. But is that enough of an advantage? Exactly what advantage do I want? I think what I am hoping for is related to the span of data sets that the model could then be applied to for prediction or validation. I feel pretty confident taking my mechanistic model off to another setting and testing it out, but if my model was purely statistical I might be less confident in doing so. Possibly because if my mechanistic model failed in the new setting I could say ‘what went wrong?’ (in terms of my process-based assumptions) and I’d have a starting point for revising my model. If my statistical model didn’t do so well in the new setting, I might not have much to go on if I wanted to try and figure out why.

But, if the objective is only to predict then you don’t need to know about mechanisms and so the phenomenological/statistical approach is the most direct and arguably best way of generating a good predictive model. Perhaps, what this issue revolves around is that mechanistic models make general and inaccurate predictions (i.e., the predictions might apply to a number of different settings) and that phenomenological models make accurate, narrow predictions.

Truth be told, this issue is tugging at my faith (in mechanistic models), and I’m not really happy with my answers to some of the fundamental questions about why I favour the mechanistic approach as much as I do. And let me say, too, that I definitely don’t think that mechanistic models are better than phenomenological models; I think that each has its place and I’m just wondering about which places those are.

Survey on mathematical training for ecologists

The International Network of Next-Generation Ecologists is conducting a survey on mathematical training for ecologists. I just filled it out. It was quick and easy, and so if you care about this issue, completing the survey would be time well spent.

HT to the Oikos Blog by Jeremy Fox for making me aware of this. The Oikos blog is fantastic and has some highly relevant posts that I hope to discuss in the near future.

Hit me with your best shot

Choosing and designating some models as Great has been causing me anxiety and so here’s a model that I’m going to write about just for fun. Thanks to Titia Praamsma for sending me this mathematical model of shot selection in basketball by Dr. Brian Skinner of the University of Minnesota. Skinner even acknowledges the parsimony issue:

While the complex nature of decision-making in basketball makes such a description seem prohibitively difficult, it is nonetheless natural as belonging to the class of “optimal stopping problems”…

Fig. 1. Occasionally, I have wanted to derive a mathematical model for basketball; however, the complex nature of the decision-making process in basketball makes it prohibitively difficult for me.

Now, I follow basketball fairly closely, yet I’ve never been able to come up with any good ideas for applicable mathematical models (see Fig. 1). After reading Skinner’s work I realize that this is because I was framing the question so that I would have to tackle it using complicated and possibly uninformative methodology.*

Let me explain. When I think about basketball, I think that as a coach your job is (partly) to invent set plays that will increase your team’s chance of getting a high quality shot. This problem is spatial, game theoretical (i.e., it depends on whether the other team is playing a zone or a man-to-man defense) and it’s probably necessary to coordinate the movement of all five players (i.e., n=1 or 2 is fundamentally a different problem and so studying these simple cases is likely uninstructive). I know these types of problems are hard and so it seemed like too much work for me to derive a mathematical model for basketball.

The above discussion is meant to illustrate that anything can seem complicated if you choose to look at it that way.**  After acknowledging the complexity of basketball, Skinner goes on to come up with a simple model of shot selection. I think that his conceptual approach is quite clever and I hope this is underscored by my admission that I had (naively) written basketball off as not something that was amenable to simple models.

Fig. 2. Tiffany Hayes (with the ball) of the University of Connecticut: dropping 35 points, shooting 11-for-15, and lending support to the controversial "hot hand" phenomenon (see Skinner (2012)).

Certainly, part of what Skinner does that’s clever is to come up with a good question. The question is this:

Given a shot opportunity of quality q, should a player take the shot?

The answer is that it’s a good idea to shoot when q > f, where f is a threshold quality value that depends on n, the number of shot opportunities remaining in the possession. The article explains how n might depend on the team’s turnover rate, the existence of a shot clock, and how fast the team moves the ball.

Interestingly, the article reports that a team that moves the ball well has a higher n, and therefore a higher f, and would want to execute more passes (until a shot opportunity with q > f arises) than a team with poor ball movement (i.e., lower n and lower f). This is counterintuitive because you’d think that if a team passes the ball quickly, then they can shoot sooner. This is true, but the result isn’t about whether a team with good ball movement can shoot earlier in the possession and win; it’s about what’s optimal: it’s optimal to expect a better shot to arise during a possession if shots are created at a faster rate – and so the team with good ball movement makes more passes and may even wait longer to shoot.
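To see why f rises with n, here is a toy calculation (my own sketch, assuming shot quality q is uniform on [0, 1]; Skinner’s paper makes its own, more careful assumptions). With n opportunities still to come, you should shoot the current opportunity only if its quality beats the expected quality you can achieve from those remaining opportunities, f(n), which satisfies f(n) = E[max(q, f(n−1))] with f(0) = 0.

```python
# Toy shot-selection thresholds, assuming q ~ Uniform(0, 1).
# f[n] = expected shot quality achievable with n opportunities still to come;
# shoot the current opportunity whenever its quality q exceeds f[n].

def thresholds(max_n):
    f = [0.0]  # f(0) = 0: no opportunities left, so any shot beats nothing
    for _ in range(max_n):
        c = f[-1]
        f.append((1 + c ** 2) / 2)  # E[max(q, c)] = (1 + c^2) / 2 for q ~ U(0, 1)
    return f

for n, fn in enumerate(thresholds(10)):
    print(f"opportunities remaining n = {n:2d}: shoot if q > {fn:.3f}")
```

The thresholds increase with n, which is the qualitative point above: the more opportunities a team expects to generate, the pickier it can afford to be.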

Let’s revisit the model construction. You might say that if Brittney Griner gets the ball in the post then she should dunk it. That’s a good decision because this would be a high-quality shot where q > f, but what I like about the model is that it abstracts away the name of the player, where the player is on the court, and what type of offense was run to generate the shot opportunity, and summarizes all this complexity in a single variable, q, the shot quality.

Another aspect of this paper that I like is the comparison with data from the NBA. In fact, Section 4 of Skinner’s paper is solely dedicated to recasting f in terms of a shooting rate under the assumption that shot opportunities arise at a constant rate τ, so that the number of remaining shot opportunities n at time t is Poisson distributed. This is a nice final step because how would one ever know whether q > f without these additional assumptions? My point is that q (shot quality) is not an especially useful quantity because how would one measure it? On the other hand, the shot rate can be estimated from a play-by-play box score, which reports when shots were taken during the course of a game.
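For reference, the Poisson assumption described above amounts to the following (my paraphrase, not Skinner’s exact notation), with τ the constant rate at which opportunities arise and t the time remaining in the possession:

\[
\Pr(n \text{ opportunities remaining at time } t) \;=\; \frac{(\tau t)^n}{n!}\, e^{-\tau t}.
\]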

So that’s basketball. Some time in the future we might talk about soccer or coffee, but I have some other posts to get to before that. There’s also a homework problem if you continue to scroll down.

* I think I’m suggesting that if the only ideas you have involve deriving a complicated model, the solution might be to refine the question.

** I stated that there’s always a complicated way of looking at a problem. What I mean is that you can take a complex phenomenon at face value and then obviously it will appear complex. Today’s philosophical question is:

Given a complex process does there necessarily exist a simple way of looking at it that will yield productive insight?

Quote: Neither fearing nor embracing complexity for its own sake

Going forward, I hope to summarize the advice on model derivation provided in some of the popular mathematical modelling textbooks. I knew Fred Adler had a calculus textbook for biologists, and so I visited his website to figure out the exact title of the book. And I found this quote:

The Adler lab group brings together empirical and mathematical approaches to study a wide variety of problems. We neither fear complexity nor embrace it for its own sake, but rather face it with the faith that simplicity and understanding are within reach.

This is another brilliant quote. I like that they don’t fear complexity. Adler concedes that biological problems are complex, that this is the reality, and that it’s a reality to embrace. The end part is inspirational: understanding is within reach – how can you not love mathematical biology after that?

Figure 1. A grad student quantitatively expresses some practical concerns and is hesitant to embrace additional complexity.

Mission drift and black fever

Ever since clicking ‘publish’ on my first post in the Great Models series, I haven’t felt right about it. I like to be original and interesting; however, I chose to write about the ‘mosquito theorem’ because it was a safe choice – because the work was foundational and because Ross’ achievements are highly and widely respected. I had named this blog post series Great Models and, given the name, I felt compelled to choose work that was undeniably significant. In keeping with my blog mission, in the future I want to make the choices of Great Models more with respect to elegant model derivation for a given question, and not with respect to the gravity of the conclusions. The next model that I plan to discuss, I chose just because it’s fun. This will shatter my expectation that I have to choose only work that is already widely acclaimed.

Since that post, I started reading The Malaria Capers by Robert S. Desowitz. The dust jacket of the book states that despite the progress of modern science, malaria is more prevalent in tropical regions now than 50 years ago – and that truly the situation is worse because now a large number of mosquitoes are resistant to insecticides. The first part of the book tells the story of how scientists determined the transmission route of kala azar (visceral leishmaniasis). An interesting aside is that kala azar appears to be a relatively new infectious disease, one that did not cause its first epidemic until 1824 (p. 34).

To understand the spread of kala azar, the first problem for scientists was to identify the infectious agent. Initially, this was thought to be a hookworm because hookworm infections were common and were sometimes found in patients who had died from kala azar. The infectious agent, in fact, was a protozoan first observed by Dr. William Boog Leishman. However, at the time it took quite some work to determine what the specks that Leishman had seen inside macrophages actually were; one early suggestion was that these bodies were trypanosomes.

The so-called Leishman-Donovan bodies were then put in a saline solution. This revealed a flagellated, elongated life stage, and the transformation implied an insect vector in the transmission of the black fever. Bed bugs were very common and unpopular at the time, and so these arthropods were scientists’ first guess. To incriminate the bed bugs it was necessary to show that the protozoan could survive in the insects’ intestines and then move to the salivary glands (where it could be transferred during biting), or that it was defecated and rubbed into the bite wound. This could not be demonstrated for bed bugs, and instead it was determined that the silvery sandfly was the guilty party. The silvery sandfly first became a candidate vector when range maps of the sandfly and of kala azar epidemics were overlaid and found to match suspiciously well.

None of this story has much to do with modelling, but it highlights the challenge of making inferences in science when there are multiple plausible hypotheses and only incomplete knowledge. Would solving the kala azar problem be any simpler today?

With a more advanced taxonomic key for microbes and with modern-day molecular techniques, it would be much easier to determine that the Leishman-Donovan bodies were an organism that had never been seen before, and to slot them into the tree of life as an animal-like protist. However, determining the insect vector might still be roughly as challenging now as it was in the early 1900s, because the knowledge gains made in vector ecology and epidemiology over this period have been less dramatic.

Problems with multiple plausible hypotheses and incomplete information are the type of problem where modelling can make a great contribution. Here, the mathematical model is used to build a bridge between each hypothesis and the available evidence: to understand whether any of the hypotheses are consistent with the limited information on hand and, even more so, to determine what additional characteristics and pieces of evidence must exist if each of these hypotheses is to be consistent. For example, modelling might be used to determine whether the seasonal variation in kala azar is consistent with seasonally driven vector population biology combined with the current best explanation of vector-human epidemiology. In fact, the book mentions that major kala azar epidemics occur on a 15-year cycle and, to me, that sounds like a great modelling problem for someone!
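As a sketch of what that kind of modelling exercise could look like, here is a minimal seasonally forced, Ross-Macdonald-style vector-host model (my own toy formulation with placeholder parameter values, not a published kala azar model). The question one would ask of something like this is whether seasonal forcing of the vector population, by itself, can reproduce the observed seasonal pattern of cases.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy seasonally forced vector-host model; all parameter values are placeholders.
a, b, c = 0.3, 0.5, 0.5        # bite rate; transmission probabilities per bite
gamma, mu = 1 / 180, 1 / 14    # human recovery rate; vector mortality (per day)
m0, eps = 5.0, 0.8             # mean vectors per human; strength of seasonality

def rhs(t, y):
    Ih, Iv = y                                        # infected fractions: humans, vectors
    m = m0 * (1 + eps * np.sin(2 * np.pi * t / 365))  # seasonally varying vector abundance
    dIh = a * b * m * Iv * (1 - Ih) - gamma * Ih
    dIv = a * c * Ih * (1 - Iv) - mu * Iv
    return [dIh, dIv]

sol = solve_ivp(rhs, (0, 10 * 365), [0.01, 0.0], max_step=1.0)
print("human prevalence at the end of the run:", sol.y[0, -1])
```

Comparing the simulated seasonal pattern (and its sensitivity to the vector-biology assumptions) against the case data is the kind of consistency check described above; capturing something like a 15-year epidemic cycle would presumably require additional structure, such as the turnover of host immunity.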