Christmas gifts from Just Simple Enough!

And, no, not the gift of homework. The gifts of song and movie!

Song. An original song written by a graduate student about graduate student-supervisor meetings. It’s a catchy tune! Click Here.

Movie. This is a movie that I took in 2009 while attending a Mathematical Biology Summer School in Botswana. Click here. (Unfortunately, I encountered some technical difficulties uploading this to YouTube, but it’s still watchable, albeit in micro-mini).

Happy holidays everyone!

Breakthrough mathematics, fundamental research and ideas

During my time on the train last week, I read some of the book ‘God created the integers: the mathematical breakthroughs that changed history’ by Stephen Hawking and several free hotel newspapers: the Globe and Mail, the Toronto Star and the National Post. This served as a supplement to my general musings on how to be more imaginative in my research and the innovation agenda.

The book title is based on a quote by the nineteenth-century mathematician Leopold Kronecker; in full, ‘God created the integers. All the rest is the work of Man.’ The quote speaks to the fact that modern mathematics is a magnificent outgrowth of the most humble beginnings: the whole numbers. The book starts by quoting Euclid as writing that “The Pythagoreans… having been brought up in the study of mathematics, thought that things are numbers… and the whole cosmos is a scale and a number.” In the first chapter, what caught my interest was the Pythagorean cult and their treatment of mathematical results such as the square root of 2 being irrational:

 The Pythagoreans carefully guarded this great discovery (irrational numbers) because it created a crisis that reached to the very roots of their cosmology. When the Pythagoreans learned that one of their members had divulged the secret to someone outside their circle they quickly made plans to throw the betrayer overboard and drown him while on the high seas. -p3

Next, I read the Intellectual Property supplement to the National Post, and in reading about intellectual property, I noted that priority in developing new technologies such as Google Glasses* is protected by patents, yet throwing people overboard to protect new advances in fundamental research is no longer appropriate. In fact, among scientists, insights and new results are freely shared. Arguably, as a consequence, advances in fundamental research have no market value, since they are keenly given away to anyone, of any company, of any country (or so my reasoning goes).

Back to the book: the next chapters covered Archimedes, Diophantus, Rene Descartes, Isaac Newton and Leonhard Euler. Despite making advances in fundamental research, some of these mathematicians also worked on very applied projects: Archimedes on identifying counterfeit coins, and Euler on numerous projects, including how to set up ship masts, correcting the level of the Finow canal, advising the government on pensions, annuities and insurance, supervising work on a plumbing system, and the Seven Bridges of Konigsberg Problem. With regard to the Seven Bridges of Konigsberg Problem,

Euler quickly realized he could solve the problem of the bridges simply by enumerating all possible walks that crossed bridges no more than once. However, that approach held no interest to him. Instead, he generalized the problem… – p388.

On the shoulders of giants? Perhaps (perhaps necessary, but not sufficient). Irrespective of the boost: uncommonly brilliant and arguably unmatched. (Photo: Andrew Dunn.)

… and perhaps that quote speaks to the tension in advancing applied research at the expense of fundamental research.

In reading the book, so far I’m most impressed by Newton**. How on earth did he think of that? By studying pendulums on earth he arrives at a mechanistic model of planetary motion? Swinging pendulums and falling apples? Swinging and thudding? This doesn’t naturally evoke ideas of elliptical motion for me, let alone that events over such small distances are generalizable to a cosmic scale. Setting that aside, and continuing to generalize: every object I have ever pushed has… stopped. Yet for Newton’s first law, when it comes to objects in motion, earthly observations are the exception to the rule (not generalizable), and it takes an extra twist (external forces) to explain why, on earth, things always stop. Generalize for the universal theory of gravity; don’t generalize for the first law. I find it so not-obvious! And consequently, I’m so very impressed.


*Google is amazing. **And Newton, much more so.

How to not make bad models

Levin (1980)* is a concise and insightful discussion of where mathematical modelling can go wrong. It is quite relevant to my investigation of The Art of Mathematical Modelling and does a nice job of addressing my ‘why make models?’ question.

Vito Volterra is referred to as the father of mathematical biology in Levin (1980).

This paper answered one of the questions that I had long been wondering about: who is considered to be the father of mathematical biology? Levin’s answer is Vito Volterra** – at least for mathematical biologists who come from a mathematical background. Levin then says that modern-day mathematical biologists, as the descendants of Vito Volterra, lack his imagination; too often they investigate special cases or make only small extensions of existing theory. It’s a fair point, but thinking takes time and time is often in short supply. My take on Levin’s comment is ‘aspire to be imaginative, but remember to be productive too’. Furthermore, Levin identifies one of the ingredients that make great models great: imagination – I’m adding that to my notes.

A second piece of advice is that mathematical models that make qualitative predictions are more valuable than those that make quantitative predictions. Levin’s reasoning is that ‘mathematical ecology as a predictive science is weaker than as an explanatory science because ecological principles are usually empirical generalizations that sit uneasily as axioms.’ That is quite eloquent – but is it really quite that simple? For example, if you make a quantitative prediction with a stated level of confidence (e.g., error bars), is that really so much worse than making a qualitative prediction? The sentiment of the quote appears to be not to overstate the exactness of the conclusions, but to me this seems equally applicable to quantitative and qualitative models.

Levin coins the phrase ‘mathematics dressed up as biology’. I have my own version of that: I like to say ‘that’s just math and a story’. Both phrases are for use whenever the links between empirical observations and the model structure are weak.

To conclude, this paper discusses why the different approaches of biologists and mathematicians to problem solving can result in mathematicians who are keen to analyze awkwardly derived models and in biologists who lack an appreciation for the mathematician’s take on a cleanly formulated problem. Rather than discussing what makes great models great, Levin’s paper reads like advice on how not to make bad models, and because it’s so hard to distill the essence of good models, looking at the art of mathematical modelling from that angle is a constructive line of inquiry.


Levin (1980), Mathematics, ecology and ornithology. Auk 97: 422-425.


*Suggested by lowendtheory, see Crowdsourcing from Oikos blog.

**Do you agree? For me, if this is true then the timing is interesting: Vito Volterra (1926), Ronald Ross (1908), Michaelis-Menten (1913), P.F. Verhulst (1838), JBS Haldane (1924) and the Law of Mass Action dates to 1864.

Levin also hits on several items from my ‘why make models’ list and so I have updated that post.

Making the list, checking it twice

The list below matches the goals of mathematical modelling, as described in the books and journal articles given underneath, against my list.* For each reference, when an item from my list is mentioned, I provide the page number, section, or chapter where the mention is made.

1. Quantitative prediction: Ch 4
2. Qualitative prediction: Ch 3
3. Bridge between different scales
4. Parameter estimation: 1.2 (2.)
5. Clarify logic behind a relationship
6. Test hypothetical scenarios: 1.2 (4.)
7. Motivate test/experiment: (6) p38
8. Disentangle multiple causation
9. Make an idea precise, integrate thinking: (4) p38; 1.2 (3.)
10. Inform data needs: 1.2 (1.)
11. Highlight sensitivities to parameters or assumptions: (3) p38
12. Determine the necessary requirements for a given relationship: (2) p38
13. Characterize all theoretically possible outcomes: (1) p38
14. Identify common elements from seemingly disparate situations: (2) p38
15. Detect hidden symmetries and patterns


Levin (1980), Mathematics, ecology and ornithology. Auk 97: 422-425.**

Caswell (1988), Theory and models in ecology: a different perspective. Ecological Modelling 43: 33-44.***

Hilborn and Mangel (1997), The Ecological Detective. Princeton Monographs.

Haefner (1996), Modeling biological systems: principles and applications. Chapman and Hall.

Otto and Day (2006), A biologist’s guide to mathematical modeling in ecology and evolution. Princeton University Press.


*Please feel welcome to suggest references to be added or to disagree with the placement of items in the table.

References suggested by **lowendtheory and ***Pablo Almaraz during comments on the ‘Crowdsourcing’ post at the Oikos blog.

@Levin, though, advocates for the derivation of qualitative models, as these rest on firmer axioms.

Recommended reading list

I’ll update this post with recommended readings as I come across them.


Model selection in ecology and evolution (2004) by Jerald B. Johnson and Kristian S. Omland. A great starting point for learning about model selection and mechanistic models.

Fitting population dynamic models to time-series data by gradient matching (2002) by Stephen P. Ellner, Yodit Seifu and Robert H. Smith. This quote from the abstract gives a good indication of what the paper is about: “Semimechanistic modeling makes it possible to test assumptions about the mechanisms behind population fluctuations without the results being confounded by possibly arbitrary choices of parametric forms for process-rate equations.” The idea of the paper is that functions and parameters that are well-supported by biological evidence are explicitly specified in the model; functions and parameters that are not well understood are fitted, hence the term ‘semimechanistic’.*
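As a rough sketch of the gradient-matching idea (my own toy example, not the paper’s): smooth the time series, estimate derivatives from the smooth, then regress those derivatives on the state to recover the parameters of a hypothesized process model. Here the process model is logistic growth, and all parameter values and the polynomial smoother are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated abundance data from logistic growth dN/dt = r N (1 - N/K)
t = np.linspace(0, 15, 60)
r_true, K_true, N0 = 0.6, 50.0, 2.0
N = K_true / (1 + (K_true / N0 - 1) * np.exp(-r_true * t))
N_obs = N + rng.normal(0, 0.5, t.size)

# Step 1: smooth the series (a polynomial stands in for a spline here)
coef = np.polyfit(t, N_obs, 7)
N_smooth = np.polyval(coef, t)
dNdt = np.polyval(np.polyder(coef), t)

# Step 2: match gradients. For the logistic model,
# (dN/dt)/N = r - (r/K) N is linear in N, so ordinary least squares
# recovers r and K. Trim the ends, where derivative estimates are poor.
i = slice(5, -5)
y = dNdt[i] / N_smooth[i]
slope, intercept = np.polyfit(N_smooth[i], y, 1)
r_hat = intercept
K_hat = -r_hat / slope

print(f"r estimate: {r_hat:.2f} (true {r_true}), K estimate: {K_hat:.1f} (true {K_true})")
```

In the semimechanistic spirit, the smoother is the ‘not well understood’ fitted part, while the logistic form encodes the biology you are willing to specify.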

Why do populations cycle? A synthesis of statistical and mechanistic modeling approaches (1999) by Bruce Kendall, Cheryl Briggs, Bill Murdoch, Peter Turchin, Steve Ellner, Ed McCauley, Roger Nisbet and Simon Wood. This paper develops an approach, called probe matching, which can be used for parameter estimation and model fitting. Probe matching combines the best of both worlds – statistical models and mechanistic models – and this paper, along the way, highlights the advantages and rationale for using each.

Mathematics, ecology, ornithology (1980) by Simon Levin. A general discussion of mathematical modelling in ecology. I like that this article is concise and well articulated. [Read more].


Ignorance is bliss by Jeremy Fox. On deterministic versus stochastic model formulation.
Fusing theory and data: a plea for help from Dan Bolnick, by Dan Bolnick. Most of us agree that fusing theory and data is a worthy objective. How exactly to go about doing this is a topic that I wish received a bit more attention.
20 different stability concepts by Jeremy Fox. A compendium of ways to describe the properties of fixed points.
On the use and care of mathematical models by Simon Levin (1975) as quoted by Jeremy Fox.
Charles Elton on A.J. Lotka by Jeremy Fox at Dynamic Ecology.


*credit to J. Fox and E. Pederson for putting me onto this paper.

Mechanistic models: what is the value of understanding?

My recent thinking has been shaped by my peripheral involvement in discussions between colleagues at the University of Ottawa. What I will discuss now is the confluence of ideas expressed by several people; I mention this because these have been stimulating discussions, and I don’t want to appear to be taking full credit for these ideas by virtue of this solo-author blog post.


All other things being equal, mechanistic models are more powerful since they tell you about the underlying processes driving patterns. They are more likely to work correctly when extrapolating beyond the observed conditions.

-Bolker (2008) Ecological Models and Data in R, p7.

My question for today is:

Given a mechanistic model and a phenomenological (or statistical) model, if we are trying to determine which model is best, shouldn’t the mechanistic model score some ‘points’ by virtue of it being mechanistic?

Assume a data set that both models are intended to describe. Define mechanistic and phenomenological as follows,

Mechanistic model: a hypothesized relationship between the variables in the data set where the nature of the relationship is specified in terms of the biological processes that are thought to have given rise to the data. The parameters in the mechanistic model all have biological definitions and so they can be measured independently of the data set referenced above.

Phenomenological/Statistical model: a hypothesized relationship between the variables in the data set, where the relationship seeks only to best describe the data.

These definitions are taken from the Ecological Detective by Ray Hilborn and Marc Mangel. Here are some additional comments from Hilborn and Mangel:

A statistical model foregoes any attempt to explain why the variables interact the way they do, and simply attempts to describe the relationship, with the assumption that the relationship extends past the measured values. Regression models are the standard form of such descriptions, and Peters (1991) argued that the only predictive models in ecology should be statistical ones; we consider this an overly narrow viewpoint.

Having defined mechanistic and phenomenological, the final piece of the puzzle is to define ‘best’. Conventional wisdom is that mechanistic models facilitate a biological understanding; however, I think that’s only one step removed from prediction – you want to take your newfound understanding and do something with it, specifically make a prediction and test it. Therefore, the goal of both mechanistic and phenomenological models is to predict, and the performance of the models in this respect is referred to as model validation.

But, validation data is not always available. One reason is that if the models predict into the future, we will have to wait until the validation data appears. The other reason is that if we don’t have to wait, it’s a bit tempting to take a sneak peek at the validation data. For both model types, you want to present a model that is good given all the information available – it’s tough to write a paper where your conclusion is that your model is poor when the apparent poorness of the model can be ‘fixed’ by using the validation data to calibrate/parameterize the model (which then leaves no data to validate the model, something that, if anything, is a relief because your previous try at model validation didn’t go so well).

In the absence of any validation data, one way to select the best model is to use the Akaike Information Criterion (AIC) (or a similar criterion). AIC will choose a model that fits the data well without involving too many parameters, but does AIC tell me which model is best, given my above definition of best, when comparing a mechanistic and a statistical model? Earlier this week, I said that if we wanted to settle this – which is better, mechanistic or phenomenological – then we could settle it in the ring with an AIC battle-to-the-death.
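As a concrete sketch of that battle (entirely hypothetical: the logistic ‘mechanism’, the independently measured parameter values, and the cubic-regression challenger are all my own invented example), the AIC bookkeeping might look like this:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical abundance time series: logistic growth plus observation noise
t = np.arange(20.0)
K, r, N0 = 100.0, 0.5, 5.0
N = K / (1 + (K / N0 - 1) * np.exp(-r * t)) + rng.normal(0, 3, t.size)

def aic(residuals, k_fitted):
    """AIC under Gaussian errors: n*log(RSS/n) + 2k, with constant terms
    dropped. The error variance counts as one fitted parameter."""
    n = residuals.size
    rss = np.sum(residuals ** 2)
    return n * np.log(rss / n) + 2 * (k_fitted + 1)

# Mechanistic model: logistic growth with r, K, N0 measured independently
# of this data set, so zero fitted parameters beyond the error variance.
N_mech = K / (1 + (K / N0 - 1) * np.exp(-r * t))
aic_mech = aic(N - N_mech, k_fitted=0)

# Statistical model: a cubic polynomial with four fitted coefficients.
coef = np.polyfit(t, N, 3)
aic_stat = aic(N - np.polyval(coef, t), k_fitted=4)

print(f"AIC mechanistic: {aic_mech:.1f}, AIC statistical: {aic_stat:.1f}")
```

The only ‘points’ the mechanistic model earns here are its fewer fitted parameters; AIC is blind to the fact that one candidate carries a biological explanation and the other does not.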

As the one who was championing the mechanistic approach, I now feel like I didn’t quite think that one through. Of the set of all models that are phenomenological versus the set of all models that are mechanistic (with respect to a particular data set), it’s not rocket science to figure out which set is a subset of the other. If one model is a relationship that comes with a biological explanation too, then you’re getting something extra compared to the model that just describes a relationship. Shouldn’t I get some points for that? Didn’t I earn that when I took the mechanistic approach to modelling, because my options for candidate models are much more limited?

There is one way that mechanistic models are already getting points from AIC. If I did a good job of parameterizing my mechanistic model there should be few fitted parameters – hopefully even none. But is that enough of an advantage? Exactly what advantage do I want? I think what I am hoping for is related to the span of data sets that the model could then be applied to for prediction or validation. I feel pretty confident taking my mechanistic model off to another setting and testing it out, but if my model was purely statistical I might be less confident in doing so. Possibly because if my mechanistic model failed in the new setting I could say ‘what went wrong?’ (in terms of my process-based assumptions) and I’d have a starting point for revising my model. If my statistical model didn’t do so well in the new setting, I might not have much to go on if I wanted to try and figure out why.

But, if the objective is only to predict then you don’t need to know about mechanisms and so the phenomenological/statistical approach is the most direct and arguably best way of generating a good predictive model. Perhaps, what this issue revolves around is that mechanistic models make general and inaccurate predictions (i.e., the predictions might apply to a number of different settings) and that phenomenological models make accurate, narrow predictions.

Truth be known, this issue is tugging at my faith (mechanistic models), and I’m not really happy with my answers to some of the fundamental questions about why I favour the mechanistic approach as I do. And let me say, too, that I definitely don’t think that mechanistic models are better than phenomenological models; I think that each has its place and I’m just wondering which places those are.

Survey on mathematical training for ecologists

The International Network of Next-Generation Ecologists is conducting a survey on mathematical training for ecologists. I just filled it out. It was quick and easy, and so if you care about this issue completing the survey would be time well spent.

HT to the Oikos Blog by Jeremy Fox for making me aware of this. The Oikos blog is fantastic and has some highly relevant posts that I hope to discuss in the near future.