Which technique should you use?

See also: The Bootstrap, Classical statistics, Bayesian inference

We have discussed a variety of methods for estimating your uncertainty about a model parameter. An obvious question now is: which one is best? There are some situations where classical statistics offers exact methods for determining a confidence distribution. In such cases, it is sensible to use those methods, and the results are unlikely to be challenged. In situations where the assumptions behind traditional statistical methods are being stretched rather too far for comfort, you will have to use your judgement as to which technique to use. Bootstrapping is a flexible alternative and has the advantage of lying within the field of classical statistics.
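To make the bootstrap idea concrete, here is a minimal sketch of a percentile bootstrap for the mean, assuming NumPy is available; the exponential data and the 10,000-resample count are illustrative choices, not requirements:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=50)  # a clearly non-Normal sample

# Percentile bootstrap: resample with replacement, recompute the statistic
n_boot = 10_000
boot_means = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(n_boot)
])

# 95% bootstrap uncertainty interval for the mean
lo, hi = np.quantile(boot_means, [0.025, 0.975])
print(f"sample mean = {data.mean():.3f}, 95% bootstrap interval = ({lo:.3f}, {hi:.3f})")
```

The same resampling loop works for any statistic (median, standard deviation, a fitted parameter), which is what makes the technique so flexible.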

Bayesian inference techniques require some knowledge of the appropriate probability mathematics, which may be difficult. Bayesian inference also requires a prior, which can be contentious at times, but which has the potential to include knowledge that the other techniques cannot allow for.
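As a sketch of how a prior enters the calculation, the following assumes a binomial process with a uniform Beta(1, 1) prior, which updates conjugately to a Beta posterior; the counts are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Data: s successes in n binomial trials (illustrative numbers)
n, s = 20, 3

# Uniform prior Beta(1, 1); conjugate update gives posterior Beta(1 + s, 1 + n - s)
a_post, b_post = 1 + s, 1 + n - s

# Sample the posterior to get a 95% credible interval for p
p_samples = rng.beta(a_post, b_post, size=100_000)
lo, hi = np.quantile(p_samples, [0.025, 0.975])
post_mean = a_post / (a_post + b_post)
print(f"posterior mean = {post_mean:.3f}, 95% credible interval = ({lo:.3f}, {hi:.3f})")
```

Swapping the uniform prior for, say, Beta(10, 50) is how expert knowledge would be folded in, something the classical techniques have no mechanism for.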

Traditional statisticians will sometimes offer a technique to use on your data that implicitly assumes a random sample from a Normal distribution, even though the parent distribution is clearly not Normal. This usually involves some sort of approximation or a transformation of the data (e.g. by taking logs) to make the data better fit a Normal distribution. If you can find a Bayesian method that does not have this limitation, it is probably preferable.
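The log-transform approach might be sketched as follows, assuming NumPy and an illustrative right-skewed (lognormal) data set; the interval is computed on the log scale, where the Normal assumption is reasonable, and then back-transformed:

```python
import numpy as np

rng = np.random.default_rng(2)

# Right-skewed data: a Normal-theory interval on the raw scale would be dubious
data = rng.lognormal(mean=1.0, sigma=0.8, size=40)

# Work on the log scale, where the sample is approximately Normal
logs = np.log(data)
m = logs.mean()
se = logs.std(ddof=1) / np.sqrt(logs.size)

# Approximate 95% interval on the log scale, back-transformed to the original scale
# (an interval for the median of the parent distribution, not its mean)
lo, hi = np.exp(m - 1.96 * se), np.exp(m + 1.96 * se)
print(f"approximate 95% interval for the median: ({lo:.3f}, {hi:.3f})")
```

Note the limitation the text alludes to: the back-transformed interval refers to the median, not the mean, which is one reason a Bayesian method without the transformation may be preferable.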

We suggest that you compare the results of using two or more different estimating techniques on your model if the parameter you are estimating has a significant influence on the results of the analysis. That will give you greater confidence if there is reasonable agreement between any two methods you might choose. What counts as reasonable will depend on your model and the level of accuracy you need from it, which in turn depends on decision-sensitivity.

If there appears to be material disagreement between two methods that you test, you could try running your model twice, once with each estimate, and see whether the model outputs are significantly different. Finally, if the uncertainty distributions from the two methods are significantly different and you cannot choose between them, you can treat the choice of method as another source of uncertainty and simply combine the two distributions, using a Discrete distribution.
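The Discrete combination amounts to picking one method's distribution at random on each simulation iteration. A minimal sketch, assuming NumPy, equal 50/50 weights, and two invented Normal distributions standing in for the disagreeing estimates:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000  # simulation iterations

# Two disagreeing uncertainty distributions for the same parameter (illustrative)
method_a = rng.normal(10.0, 1.0, size=n)  # e.g. a classical estimate
method_b = rng.normal(11.0, 1.5, size=n)  # e.g. a Bayesian estimate

# Discrete(50/50) combination: each iteration draws from one method at random
pick_a = rng.random(n) < 0.5
combined = np.where(pick_a, method_a, method_b)

print(f"combined mean = {combined.mean():.3f}, sd = {combined.std():.3f}")
```

The combined distribution is wider than either input, which is the point: it carries the extra uncertainty about which method is right.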

Bayesian and frequentist methods asymptotically converge to the same results as more data become available, but will often produce slightly different results when the amount of data is small. If they didn't, there would of course be no debate about which technique to use. Thus, from a decision-maker's perspective, one method may be preferred over another because it produces, for example, a more conservative estimate of the model output.
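This convergence can be checked numerically. The sketch below, assuming NumPy, compares a classical (Wald) interval for a binomial probability with a credible interval from a uniform-prior Beta posterior, at the same observed proportion but two different sample sizes:

```python
import numpy as np

rng = np.random.default_rng(4)

def wald_interval(s, n):
    # Classical (Wald) 95% interval for a binomial probability
    p = s / n
    half = 1.96 * np.sqrt(p * (1 - p) / n)
    return p - half, p + half

def bayes_interval(s, n, draws=200_000):
    # 95% credible interval from a Beta(1 + s, 1 + n - s) posterior (uniform prior)
    samples = rng.beta(1 + s, 1 + n - s, size=draws)
    return tuple(np.quantile(samples, [0.025, 0.975]))

# Same observed proportion (30%), small and large samples
for n, s in [(10, 3), (10_000, 3_000)]:
    print(n, wald_interval(s, n), bayes_interval(s, n))
```

With 10 observations the two intervals differ noticeably; with 10,000 they are practically indistinguishable, matching the asymptotic argument above.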

It is therefore interesting to look at the analysis of the most common model parameters to compare the results they produce. We have done this for you for the most common parameters, comparing classical results with Bayesian results using uninformative priors:

Probability p in a binomial process

Intensity λ in a Poisson process

Mean "time" β in a Poisson process

Estimates of the standard deviation of a Normal distribution (comparison of the mean estimates has been discussed elsewhere)
