Do we need to derive mechanistic error distributions for deterministic models?

In fitting mathematical models to empirical data, one challenge is that deterministic models make exact predictions, while empirical observations rarely match those predictions perfectly. Changing a parameter value may reduce the distance between the model predictions and some of the data points, but increase the distance to others. To estimate the best-fit model parameters, it is necessary to assign a probability to deviations of a given magnitude. The function that assigns these probabilities is called the error distribution. In this post, I ask:

Do mechanistic, deterministic mathematical models necessarily need error distributions that are mechanistically derived?

One of the simplest approaches to model fitting is to choose a probability density function, such as the normal or Poisson distribution, as the error distribution and to use a numerical maximization algorithm to identify the best-fit parameters. The more parameters there are to estimate, the more time-consuming this numerical search becomes, but in most cases this approach to parameter estimation is successful.
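To make this concrete, here is a minimal sketch of that approach in Python. The model, parameter values, and data are all made up for illustration: a hypothetical exponential growth model with normally distributed measurement error, fit by numerically minimizing the negative log-likelihood with scipy's Nelder-Mead search.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical deterministic model: exponential growth f(t) = x0 * exp(b*t)
def f(t, b, x0=10.0):
    return x0 * np.exp(b * t)

# Synthetic data: the model prediction plus normally distributed measurement error
t = np.arange(10.0)
y = f(t, 0.3) + rng.normal(0.0, 2.0, size=t.size)

# Negative log-likelihood under a normal error distribution; sigma is
# parameterized on the log scale so it stays positive during the search
def nll(params):
    b, log_sigma = params
    sigma = np.exp(log_sigma)
    resid = y - f(t, b)
    return t.size * np.log(sigma) + 0.5 * np.sum(resid**2) / sigma**2

fit = minimize(nll, x0=[0.2, 1.0], method="Nelder-Mead")
b_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print(b_hat, sigma_hat)
```

Note that the error distribution's sigma is estimated alongside the model parameter b, so choosing a richer error distribution adds parameters and slows the numerical search.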

In biology, the processes that give rise to deviations between model predictions and data are measurement error, process error, or both. Some simple definitions are:

  • Measurement error: y(t) = f(x(t), b) + e
  • Process (or demographic) error: y(t) = f(x(t)+e, b)

where x(t) is a variable, such as the population size at time t, b is a vector of parameters, f(x,b) is the solution to the deterministic model, e is the error, drawn from a specified probability distribution, and y(t) is the model prediction including the error. As examples, counting the number of blue ducks each year might be subject to measurement error if a major source of error is in correctly identifying the colour of the duck, whereas extreme weather events that affect duckling survivorship are a source of process error.
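The distinction can be sketched in a few lines of Python. This is a toy example with made-up numbers, using a hypothetical discrete-time model x(t+1) = r·x(t) (geometric growth of the duck population) and normally distributed errors:

```python
import numpy as np

rng = np.random.default_rng(1)
r, x0, n, sd = 1.1, 100.0, 20, 5.0

# Deterministic skeleton f(x(t), b): geometric growth x(t) = x0 * r**t
x = x0 * r ** np.arange(n)

# Measurement error, y(t) = f(x(t), b) + e: noise is added to the
# deterministic prediction and does not feed back into the dynamics
y_meas = x + rng.normal(0.0, sd, n)

# Process error, y(t) = f(x(t) + e, b): noise perturbs the state itself,
# and each perturbation is carried forward by the model
y_proc = np.empty(n)
y_proc[0] = x0
for t in range(1, n):
    y_proc[t] = r * y_proc[t - 1] + rng.normal(0.0, sd)
```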

In the simple approach described above, I intended to implement the measurement error formulation of the full model. Under this formulation, many of the probability density functions that might be chosen as the error distribution have a process-based interpretation. For example, the normal distribution arises if (1) there are many different sources of measurement error, (2) these errors are drawn from the same distribution, and (3) the total measurement error is the sum of all the errors. In biological data, all of that might be true, to some degree, but in general this explanation is likely incomplete.
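The central-limit argument behind conditions (1)-(3) is easy to check with a quick simulation: summing many small iid errors (uniform here, a deliberately non-normal choice) yields a total error that is approximately normal.

```python
import numpy as np

rng = np.random.default_rng(2)

# Total measurement error = sum of 50 small iid error sources, for
# 10,000 replicate measurements; each source is uniform on (-1, 1)
total = rng.uniform(-1, 1, size=(10_000, 50)).sum(axis=1)

# By the central limit theorem the total is approximately normal with
# mean 0 and standard deviation sqrt(50 * var(U(-1,1))) = sqrt(50/3)
print(total.mean(), total.std())
```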

A second justification of the simple approach could be that the error distribution is not intended to be mechanistic; here, the normal distribution is simply a function that embodies the necessary characteristics, namely that it is a decreasing function of the absolute value of the deviation. But if you have derived a mechanistic deterministic model, is it really okay to have an error distribution that is not justified on mechanistic grounds? Does such an error distribution undermine the mechanistic model formulation to the point where you might as well have started with a more heuristic formulation of the whole model? Would this be called semi-mechanistic: a mechanistic model with a heuristic error distribution?

If this all seems like no big deal, consider that measurement error does not compound under the effect of the deterministic model, while process error does. When only measurement error operates, the processes occur as hypothesized and only the measurements are off. When process error occurs (say, slightly higher duck mortality than average), there are fewer breeding ducks in the next year, and this change feeds back into the process, affecting the predictions made for all future years. This makes fitting the model to y(t) quite difficult, because model fitting is easiest when the model and the error can be separated, so that numerical methods for solving deterministic models can be used. If the error and the model cannot be disentangled, then fitting to y(t) will usually involve solving a stochastic model of some sort, which is more difficult and more time consuming.
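The compounding can be made visible with a small Monte Carlo experiment (again a sketch with made-up numbers, using geometric growth x(t+1) = r·x(t)): across replicate trajectories, the spread under measurement error stays constant over time, while the spread under process error grows, because each year's perturbation is multiplied forward by the dynamics.

```python
import numpy as np

rng = np.random.default_rng(3)
r, x0, n, sd, reps = 1.1, 100.0, 15, 5.0, 2000

# Deterministic skeleton: geometric growth
x = x0 * r ** np.arange(n)

# 2000 replicate trajectories under each error formulation
meas = x + rng.normal(0.0, sd, size=(reps, n))      # error added after the fact

proc = np.empty((reps, n))
proc[:, 0] = x0
for t in range(1, n):                               # error fed through the model
    proc[:, t] = r * proc[:, t - 1] + rng.normal(0.0, sd, size=reps)

# Spread at the final time: roughly sd under measurement error, but
# several times larger under process error
print(meas[:, -1].std(), proc[:, -1].std())
```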

An easier alternative for the process error formulation is to fit using gradient matching. This works because deterministic models are usually differential equations, x'(t) = g(x(t), b). Let z(t) be a transformation of the data, z(t) = [y(t+Δt) - y(t)]/Δt; then we can fit the model as z(t) = g(x(t), b) + e1, where e1 are the deviations between the empirical estimate of the gradient and the gradient as predicted by the model. Deviations from the model-predicted gradient can be viewed as errors in the model formulation or as error that arises due to variation in the processes described by the model. If we have a mixture of measurement error and process error, then we could do something nice like generalized profiling.
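A minimal gradient-matching sketch, assuming a hypothetical logistic model x'(t) = b·x·(1 - x/K) with K known and with invented parameter values: since g is linear in b here, the fit reduces to a least-squares regression of the finite-difference gradients z(t) on the model gradient.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical model: logistic growth x'(t) = b * x * (1 - x/K), K known
b_true, K, dt = 0.5, 100.0, 0.5
t = np.arange(0.0, 20.0, dt)

# Simulate a noisy trajectory (Euler steps plus small process noise)
y = np.empty(t.size)
y[0] = 5.0
for i in range(1, t.size):
    growth = b_true * y[i - 1] * (1 - y[i - 1] / K)
    y[i] = y[i - 1] + dt * growth + rng.normal(0.0, 0.2)

# Gradient matching: finite-difference estimate of the gradient, z(t),
# regressed on the model gradient g(x, b) = b * x * (1 - x/K)
z = np.diff(y) / dt                    # z(t) = [y(t+dt) - y(t)] / dt
g = y[:-1] * (1 - y[:-1] / K)          # model gradient with b factored out
b_hat = np.sum(g * z) / np.sum(g * g)  # least-squares estimate of b
print(b_hat)
```

No differential equation is solved during the fit, which is what makes gradient matching cheap; the price is that the finite-difference gradients inherit, and amplify, any noise in y.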

Anyway, this has all been my long-winded response to a couple of great posts about error at Theoretical Ecology by Florian Hartig. I wanted to revisit Florian’s question ‘what is error?’ Is error stochasticity? That interpretation would mean that e is a random variable, and I have a hard time imagining any good reason why e would not be a random variable. However, I think there are more issues to resolve if we want to understand how to define error. Specifically, how do we decide which processes are subsumed under f(x(t), b) and which go under e? Is this a judgment call, or should all the deterministic processes be part of f(x(t), b) and all the stochastic processes be put into e and therefore be considered error?

This entry was posted in Model derivation, Questions to readers by Amy Hurford. Bookmark the permalink.

About Amy Hurford

I am a theoretical biologist. I became aware of mathematical biology as an undergraduate when I conducted an internet search to learn about the topic. Now, twelve years later, I want to know, what is it that makes great models great? This blog is the chronology of my thoughts as I explore this topic.

11 thoughts on “Do we need to derive mechanistic error distributions for deterministic models?”

  1. Nice introduction to measurement errors and process errors and how to deal with them. I like the plug for gradient matching as a way to deal with some of these issues. It seems that everyone automatically goes for state space models these days. Not that that’s always, or even usually, a bad thing, but it’s worth remembering that you do have choices.

    Re: the more philosophical issues raised by Florian, I have an old post that’s relevant:

    http://oikosjournal.wordpress.com/2011/05/24/ignorance-is-bliss-sometimes/

  2. Hi Amy,

    as always, a great post and a very readable exposition of the problem.

    I think I have little to add really, except that I wouldn’t confine the problem of mechanistic underpinning to process vs. observation error.

    Both process and observation “errors” require some generating stochastic process. We can model this process phenomenologically, by simply choosing a stochastic process that fits the residuals, or we can model it mechanistically, e.g. by having explicit ideas about how the distribution and/or the variance of this process should look, even before seeing the data. There are mechanistic models for observation stochasticity (I think I remember a paper modeling people in a car, going through the savanna, including their viewing angle, to determine detection rates for animal counts, can’t find the reference though), and phenomenological ones (e.g. the sigma of the normal model is often optimized together with the other parameters during MLE).

    In conclusion, I agree that it’s not always a big deal, and we have to make simplifications wherever we go, but I do think that “mechanistic” explanations for the observed stochasticity carry a little bit more inferential weight than phenomenological ones, in particular when they are derived from the same processes and parameters that describe the “deterministic” part of the model.

    • I agree with your point about measurement and process error not being a fundamental distinction, but it does seem that, for measurement error only, it’s okay to assume that your residuals (deviations) are independent and identically distributed, whereas most stochastic models that represent some kind of process error probably don’t generate predictions with error distributions that have the iid property. That said, not all measurement errors are going to have this property, and not all process errors are going to lack it.

      Anyway, you asked a good question – ‘what is error?’ I don’t know that you want to use ‘unobservable’ as a means to define it, though, because (and this is partly related to Jeremy’s link above) maybe it’s observable and you have a reason for abstracting it. Or maybe then you don’t call it error, you call it the model; perhaps that’s what you were saying. And so I guess what we’re fundamentally left with is: is an error (1) something we didn’t explicitly consider, or (2) something we didn’t/can’t observe?

      Thoughts?

      • I think that the temporal autocorrelation in models with process errors is a result of the position at which the error is placed in the model (within or after the process), rather than an indication of whether one of the errors is more “mechanistically motivated”. Both for process and for observation errors, I can still adjust the generating stochastic process to the residuals if I want to, or I can fix it by specific mechanistic assumptions (only that this gets technically challenging pretty fast; here http://dx.doi.org/10.1111/j.1461-0248.2011.01640.x we explain why).

        I’ll move my response to your second point into a separate post because this is getting too long and it’s a good chance to comment on some related issues – I’ll post a link here as soon as I’m done with that. So far, I only want to say that, by discussing whether to place the stochasticity within or after the process, we are already thinking more thoroughly about the nature of the “error” and how best to represent it in a model, which is a good thing imo.

  3. Pingback: Probabilistic models in statistical analysis – mechanism or phenomenology? « theoretical ecology

      • Okay, I think I’ve got this now. You’ve got to put $.latex \yourlatexcommands $ (but without the ‘.’ between ‘$’ and ‘latex’) and I think that should work. Let’s see \int f(x) dx. Yay!
