On the art of ecological modelling

An article by Ian Boyd from this week’s Science argues that there is a systemic problem of researchers under-acknowledging the limits of model prediction:

Many of today’s ecological models have excessively optimistic ambitions to predict ecosystem and population dynamics.


The main models are general population models (16) and data-driven, heuristic, ecosystem models (17,18), which are rarely validated and often overparameterized.

The good news is that:

Some recent studies (3–5)—including that by Mougi and Kondoh (6) on page 349 of this issue—help to specify where the limits of prediction may lie.

Initially, I thought that these articles (3–6) might be the answer I’ve been searching for, but it seems that Dai et al. (2012), Allesina and Tang (2012; see also here) and Mougi and Kondoh (2012) are examples of well-derived models for specific questions, not rules for deciding how complex is too complex across a general range of ecological questions. Liu et al. (2011) is a general result, but it asks a different question, speaking to the difficulty of controlling real complex systems.

Ian Boyd’s article raises more questions than it answers, but it draws attention to an important problem, which is worthwhile in and of itself.

Why parsimony?

One question is whether a simple model necessarily exists for a given biological question; another is whether that model is unique. Taking that one step further: given two models that are equal in all regards except that one is more complex, why should we favour the simpler model? This argument, that we should prefer simpler explanations, is Occam’s razor.

William of Ockham.

Here’s the definition of Occam’s razor from Wikipedia:

It is a principle urging one to select, among competing hypotheses, that which makes the fewest assumptions and thereby offers the simplest explanation of the effect.

In fact, the Wikipedia page on Occam’s razor, for me, made for inspiring reading. Here are some of the highlights*:

Justifications for Occam’s razor

  • Aesthetic: nature is simple and simple explanations are more likely to be true.
  • Empirical: You want the signal; you don’t want the noise. A complex model will give you both, e.g. overfitting in statistics.
  • Mathematical: hypotheses with fewer adjustable parameters automatically have an enhanced posterior probability because their predictions are sharper (Jefferys & Berger, 1991).
  • Practical: it is easier to understand simple models.
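The empirical justification is easy to see in a small sketch. Here’s a toy Python example (the straight-line “truth”, the noise level, and the polynomial degrees are all invented for illustration): an over-parameterized model chases the noise along with the signal, which is overfitting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a straight line (the signal) plus noise
x = np.linspace(0, 1, 10)
y = 2 * x + rng.normal(0, 0.3, size=x.size)

# Parsimonious model (degree 1) vs over-parameterized model (degree 9)
simple = np.polynomial.Polynomial.fit(x, y, deg=1)
complex_ = np.polynomial.Polynomial.fit(x, y, deg=9)

# Training error: the complex model "wins" by fitting the noise
def train_mse(model):
    return np.mean((model(x) - y) ** 2)

# Error against the true signal at new points: the simple model wins
x_new = np.linspace(0.05, 0.95, 50)
def signal_mse(model):
    return np.mean((model(x_new) - 2 * x_new) ** 2)
```

The degree-9 fit passes (almost) exactly through all ten noisy points, so its training error is near zero, yet it wiggles away from the underlying line between and beyond the data: you got the signal, but you also got the noise.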

Alternatives to Occam’s razor

  • Popper (1992): For Popper, it can all be cast in the light of falsifiability. We prefer simpler theories “because their empirical content is greater” and because they are better testable.
  • Elliott Sober: simplicity considerations do not count unless they reflect something more fundamental.**

And yet my initial reaction to the definition of Occam’s razor was that it sounded a bit strange: simple explanations and few assumptions? Yikes, I can give you your simple explanation, but it’s going to take a lot of assumptions to get there. I think my confusion comes down to a difference in bookkeeping (and the phrasing ‘simplest explanation of the effect’). In the Occam’s razor definition, you score only the assumptions that contribute to the explanation. In biology, if the true explanation consists of n things-that-matter, the theoretician will say that the observation can be reproduced by considering only k < n of those things. Biologists, however, are used to scoring the number of assumptions as the number of things that are suspected to matter but that are neglected, i.e., n − k. This difference would seem to suggest that, although in biology we do value simplicity, we also value explanations that incorporate known contributing factors over explanations that ignore them. These values are reflected in Elliott Sober’s alternative to Occam’s razor described above.

However, even given that caveat, I think we still often prefer simple models in biology. Why? Here’s Ben Bolker (p. 7)*** with some insight:

By making more assumptions, (mechanistic models) allow you to extract more information from your data – with the risk of making the wrong assumptions.

That does kind of sum it up from the data analysis perspective: simple models make a lot of assumptions, but at the end of it you can conclude something concrete. Complex models still make assumptions, but they are a less restrictive type of assumption (i.e., an assumption about how a factor is included rather than an assumption to ignore it). All this flexibility in complex models means that many different parameter combinations can lead to the same outcome: inference is challenging, and parameters are likely to be unidentifiable. Given Wikipedia’s list of different justifications of Occam’s razor this seems to be an example of ‘using the mathematical justification to practical ends’. That is to say, this argument doesn’t seem to fit well into the list of justifications, but elements of the mathematical and the practical justifications are represented. Or perhaps it fits with Popper’s alternative view?

For the theoretical ecologist, another reason that parsimony is often favoured is certainly the practical justification: because simple models are easier to understand.

What do you think? Is parsimony important in biology? And why?


Jefferys and Berger (1991) Sharpening Ockham’s Razor on a Bayesian Strop. [pdf] Alternatively, if that isn’t satisfying, this might do the trick:

Quine, W (1966) On simple theories in a complex world. In The Ways of Paradox and Other Essays. Harvard University Press.****


*okay, so maybe the actual highlight for me was learning a new expression. The expression is ‘turtles all the way down’ and the best way to explain it is by using it in a sentence. Here goes: sometimes people say ‘yes, but that’s not really a mechanistic model because you could take this small part of it and make that more mechanistic, and then you could take parts of that and make those more mechanistic.’ And to that, I would say ‘yes, but why bother? It’s just going to be turtles all the way down.’

**fundamental = mechanistic, i.e., biological underpinning. This is a quote from Wikipedia and I need to chase down the exact reference for the statement. I have Elliott Sober (2000) Philosophy of Biology, but he doesn’t seem to say anything quite this definitive.

***Ben suggests the references: Levins (1966) The strategy of model building in population biology; Orzack and Sober (1993) A critical assessment of Levins’s The strategy of model building in population biology; and Levins (1993) A response to Orzack and Sober: Formal analysis and the fluidity of science. [I’ll read them and let you know.]

****I haven’t read either, I just list the references in case anyone wants to follow up.

Hit me with your best shot

Choosing and designating some models as Great has been causing me anxiety, and so here’s a model that I’m going to write about just for fun. Thanks to Titia Praamsma for sending me this mathematical model of shot selection in basketball by Dr. Brian Skinner of the University of Minnesota. Skinner even acknowledges the parsimony issue:

While the complex nature of decision-making in basketball makes such a description seem prohibitively difficult, it is nonetheless natural as belonging to the class of “optimal stopping problems”…

Fig. 1. Occasionally, I have wanted to derive a mathematical model for basketball; however, the complex nature of the decision-making process in basketball makes it prohibitively difficult for me.

Now, I follow basketball fairly closely, yet I’ve never been able to come up with any good ideas for applicable mathematical models (see Fig. 1). After reading Skinner’s work I realize that this is because I was framing the question so that I would have to tackle it using complicated and possibly uninformative methodology.*

Let me explain. When I think about basketball, I think that as a coach your job is (partly) to invent set plays that will increase your team’s chance of getting a high quality shot. This problem is spatial, game theoretical (i.e., it depends on whether the other team is playing a zone or a man-to-man defense) and it’s probably necessary to coordinate the movement of all five players (i.e., n=1 or 2 is fundamentally a different problem and so studying these simple cases is likely uninstructive). I know these types of problems are hard and so it seemed like too much work for me to derive a mathematical model for basketball.

The above discussion is meant to illustrate that anything can seem complicated if you choose to look at it that way.**  After acknowledging the complexity of basketball, Skinner goes on to come up with a simple model of shot selection. I think that his conceptual approach is quite clever and I hope this is underscored by my admission that I had (naively) written basketball off as not something that was amenable to simple models.

Fig. 2. Tiffany Hayes (with the ball) of the University of Connecticut: dropping 35 points, shooting 11-for-15, and lending support to the controversial "hot hand" phenomenon (see Skinner (2012)).

Certainly, part of what Skinner does that’s clever is come up with a good question. The question is this:

Given a shot opportunity of quality q, should a player take the shot?

The answer is that it’s a good idea to shoot when q > f, where f is a threshold quality that depends on n, the number of shot opportunities remaining in the possession. The article explains how n might depend on the team’s turnover rate, the existence of a shot clock, and how fast the team moves the ball.

Interestingly, the article reports that a team that moves the ball well has a higher n, and therefore a higher f, and would want to execute more passes (until a shot opportunity with q > f arises) than a team with poor ball movement (i.e., lower n and lower f). This is counter-intuitive, because you’d think that if a team passes the ball quickly, then they can shoot sooner. This is true, but the result isn’t about whether a team with good ball movement can shoot earlier in the possession and win; it’s about what’s optimal. If shots are created at a faster rate, it’s optimal to expect a better shot to arise during the possession – so the team with good ball movement makes more passes, and they might even wait longer to shoot.
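To make the threshold idea concrete, here’s a toy version of the recursion in Python. To be clear, this is the textbook optimal-stopping calculation, not Skinner’s exact model: I’m assuming shot quality q is Uniform(0, 1) and that you should shoot whenever q beats the expected value of continuing the possession.

```python
def shot_threshold(n):
    """Quality threshold f for shooting when n opportunities remain.

    A textbook optimal-stopping recursion (a simplification, not
    Skinner's exact formulation): with q ~ Uniform(0, 1), shoot when
    q exceeds the expected value of passing up the shot. Writing
    c = f(n - 1), the value of continuing satisfies
    f(n) = E[max(q, c)] = (1 + c**2) / 2.
    """
    c = 0.0  # on the last opportunity you shoot regardless: f(1) = 0
    for _ in range(n - 1):
        c = (1 + c * c) / 2
    return c
```

The thresholds rise with n: f = 0, 0.5, 0.625, 0.695… for n = 1, 2, 3, 4. That’s the counter-intuitive result in miniature: the more opportunities a team can still generate, the choosier it should be about the shot it takes.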

Let’s revisit the model construction. You might say that if Brittney Griner gets the ball in the post, then she should dunk it. That’s a good decision because this would be a high-quality shot where q > f, but what I like about the model is that it abstracts away the name of the player, where the player is on the court, and what type of offense was run to generate the shot opportunity, and simply summarizes all this complexity in one variable, q, the shot quality.

Another aspect of this paper that I like is the comparison with data from the NBA. In fact, section 4 of Skinner’s paper is dedicated solely to recasting f in terms of a shooting rate, under the assumption that shot opportunities arise at a constant rate τ, so that the number of remaining shot opportunities n at time t is Poisson distributed. This is a nice final step, because how would one ever know whether q > f without these additional assumptions? My point is that q (shot quality) is not an especially useful quantity, because how would one measure it? The shot rate, on the other hand, can be estimated from a play-by-play box score, which reports when shots were taken during the course of a game.
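As a sketch of why timing is the observable quantity, here’s a toy Monte Carlo in Python. The opportunity rate, the thresholds, and the shot-clock handling are all invented for illustration (this is my simplification, not Skinner’s calculation): opportunities arrive as a Poisson process, a threshold policy decides when to shoot, and the resulting shot times are exactly the kind of thing a box score records, even though q itself never is.

```python
import random

def shot_time(rate=0.5, f=0.6, clock=24.0, rng=random):
    """Time of the shot in one simulated possession, or None on a
    shot-clock turnover. Opportunities arrive as a Poisson process
    (rate per second), each with quality q ~ Uniform(0, 1); the
    player shoots the first opportunity with q > f. All parameter
    values are invented for illustration."""
    t = 0.0
    while True:
        t += rng.expovariate(rate)  # waiting time to next opportunity
        if t > clock:
            return None  # clock expired before a good enough shot arose
        if rng.random() > f:
            return t

rng = random.Random(1)
# A choosier team (higher f) shoots later on average -- and that
# timing, unlike q, is observable in play-by-play data.
early = [s for s in (shot_time(f=0.3, rng=rng) for _ in range(20000))
         if s is not None]
late = [s for s in (shot_time(f=0.8, rng=rng) for _ in range(20000))
        if s is not None]
```

Filtering out the `None` turnovers and comparing the two samples shows the higher-threshold team shooting later in the possession, which is the kind of signature one could actually look for in NBA data.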

So that’s basketball. Some time in the future we might talk about soccer or coffee, but I have some other posts to get to before that. There’s also a homework problem if you continue to scroll down.

* I think I’m suggesting that if the only ideas you have involve deriving a complicated model, the solution might be to refine the question.

** I stated that there’s always a complicated way of looking at a problem. What I mean is that you can take a complex phenomenon at face value and then obviously it will appear complex. Today’s philosophical question is:

Given a complex process does there necessarily exist a simple way of looking at it that will yield productive insight?

Quote: Neither fearing nor embracing complexity for its own sake

Going forward I hope to summarize the advice on model derivation provided in some of the popular mathematical modelling textbooks. I knew Fred Adler had a Calculus textbook for biologists and so I visited his website to figure out the exact title of the book. And I found this quote:

The Adler lab group brings together empirical and mathematical approaches to study a wide variety of problems. We neither fear complexity nor embrace it for its own sake, but rather face it with the faith that simplicity and understanding are within reach.

This is another brilliant quote. I like that they don’t fear complexity. Adler concedes that biological problems are complex, that this is the reality, and that it’s a reality to embrace. The end part is inspirational: understanding is within reach – how can you not love mathematical biology after that?

Figure 1. A grad student quantitatively expresses some practical concerns and is hesitant to embrace additional complexity.

A quote on parsimony

This morning’s reading has led me to a nice quote:

Model building is the art of selecting those aspects of a process that are relevant to the question being asked. – J.H. Holland*

What I like about the quote is that it not only highlights the principle of parsimony (as the Einstein quote did), but also that the question being asked is the element of the scientific problem that should be referenced to determine whether an aspect of the model is kept in or kicked out.

In a world where we might identify ourselves as a landscape ecologist, a toxicologist, or even an expert in neural networks, consider this: there are unlikely to be any discipline-specific guides to parsimonious model building. My reason for wanting to catalog the different types of questions was that this could, possibly, serve as a useful framework, where the same types of questions share the same guiding principles for how best to achieve parsimony.

Holland, JH (1995) Hidden Order. Addison-Wesley, New York, USA.

Deriving models that are simple, but not too simple

This picture of Albert Einstein is from Wikipedia and is in the US public domain

In the first few pages of A Biologist’s Guide to Mathematical Modeling in Ecology and Evolution, Sally Otto and Troy Day (p. 7) paraphrase Albert Einstein, who said:

“Everything should be made as simple as possible, but no simpler”


After I give a talk, I am often asked questions such as: “you assumed that space is homogeneous, but isn’t there a mountain range to the west?” or “could you expand your model to consider the influence of hunting on adult wolf survivorship?” And for a split second this thought races through my head: They’re right. I am wrong. My work is wrong! This is terrible, I must add in hunting to fix it.

It is tempting to think that a more complex model is better. Will other scientists assume that I’m not skilled enough to include hunting in the model? Will they not understand that this was my deliberate choice – the choice not to include it?

As a final comment, if someone asks “could you expand your model to consider the effect of climate change?” at the end of one of my talks, I will return the question by asking what they think would change if I had explicitly included this. The question above, without further elaboration, could imply that I didn’t include climate change because I overlooked it. Returning the question helps to draw attention to the challenges that modellers face and to highlight the types of careful considerations that go into model construction.