Sources of error in subjective estimation

See also: Modeling expert opinion introduction, Using distributions in modeling expert opinion

Before looking at the techniques for eliciting distributions from an expert, it is very useful to understand the biases that commonly occur in subjective estimation. The analyst should bear in mind the following heuristics that the expert may employ when attempting to provide subjective estimates, and which are potential sources of systematic bias and error. These biases are explained in considerably more detail in Hertz & Thomas (1983) and in Morgan & Henrion (1990); the latter includes a very comprehensive list of references. Kahneman and Tversky (2000) offer a large collection of papers that discuss people's attitudes to risk, and Slovic (2000) describes Paul Slovic's research over many years into how people evaluate and respond to risk.

Availability

This is where the expert uses his or her recollection of past occurrences of an event to provide an estimate. The accuracy of the estimate is dictated by the expert's ability to remember past occurrences of the event, or how easily s/he can imagine the event occurring. This may work very well if the event is a regular part of the expert's life, e.g. how much s/he spends on petrol. It also works well if the event is something that sticks in the expert's mind, e.g. the probability of having a flat tyre. On the other hand, it can produce poor estimates if it is difficult for the expert to remember past occurrences of the event: for example, the expert may not be able to confidently estimate the number of people s/he passed in the street that day, since s/he would have no interest in noting each passer-by. Availability can also produce overestimates of frequency if past occurrences had a strong impact on the expert. For example, if a computer manager were asked how often the mainframe had crashed in the last two years, s/he could remember every crash and the crises it caused, but because of the clarity of those recollections ('it seems like only yesterday') s/he might include some crashes that happened well over two years ago and therefore overestimate the frequency.

The availability heuristic is also affected by the degree to which we are exposed to information. For example, one might consider that the chance of dying in a motoring accident is much higher than that of dying from stomach cancer, because car crashes are constantly reported in the media and stomach cancer fatalities are not. On the other hand, an older person may have had several acquaintances who died from stomach cancer and would therefore offer the reverse opinion.

Representativeness

One type of bias is the erroneous belief that the large-scale nature of randomness is reflected in small-scale sampling. For example, in the National Lottery many people would say that one had no chance of winning by selecting the consecutive numbers 16, 17, 18, 19, 20 and 21. Because the lottery numbers are randomly picked each week, people believe that the winning numbers should also exhibit a random-looking pattern, e.g. 3, 11, 15, 21, 29 and 41. Of course, both sets of numbers are equally likely, as the sketch below illustrates.
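A quick calculation makes the point. The sketch below assumes the classic 6-from-49 draw format (an assumption for illustration; the exact format is not stated above): every specific combination of six numbers, however patterned, has exactly the same probability.

    from math import comb

    # Number of equally likely tickets in a 6-from-49 draw
    # (the draw format is an assumption for illustration).
    total_tickets = comb(49, 6)            # 13,983,816

    consecutive = {16, 17, 18, 19, 20, 21}
    random_looking = {3, 11, 15, 21, 29, 41}

    # Each specific set of six numbers has identical probability.
    p = 1 / total_tickets
    print(f"P(consecutive)    = {p:.3e}")  # 7.151e-08
    print(f"P(random-looking) = {p:.3e}")  # exactly the same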

A second type of representativeness bias is where people concentrate on an enticing detail of the problem and forget the overall picture. In a frequently cited paper by Kahneman and Tversky, described in Morgan and Henrion (1990), subjects in an experiment were asked to determine the probability of a person being an engineer based on a written description of that person. If they were given a bland description that gave no clue to the person's profession, the answer given was usually 50:50, despite being told beforehand that, of the 100 described people, 70 were lawyers and 30 engineers. However, when the subjects were asked what probability they would give if they had no description of the person, they said 30%, illustrating that they understood how to use the information but had just ignored it.
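Bayes' theorem makes the normative answer explicit: a description that is equally likely for either profession leaves the posterior probability equal to the 30% base rate. A minimal sketch of that reasoning (the 0.5 likelihood values are illustrative assumptions):

    # Base-rate problem: 30 engineers, 70 lawyers, and a bland
    # description that fits both professions equally well
    # (the 0.5 likelihoods are illustrative assumptions).
    prior_engineer = 0.30
    prior_lawyer = 0.70
    p_desc_given_engineer = 0.5
    p_desc_given_lawyer = 0.5

    evidence = (p_desc_given_engineer * prior_engineer
                + p_desc_given_lawyer * prior_lawyer)
    posterior = p_desc_given_engineer * prior_engineer / evidence
    print(f"{posterior:.0%}")  # 30% -- not the 50:50 subjects typically gave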

Adjustment and Anchoring

This is probably the most important heuristic of the three. An individual will usually begin estimating the uncertainty distribution of a model parameter with a single value (usually the most likely value) and then adjust outwards from that first value to set the minimum and maximum. The problem is that these adjustments are rarely sufficient to encompass the range of values that could actually occur: the estimator appears to be 'anchored' to the first estimated value. This is certainly one source of over-confidence and can have a dramatic impact on the validity of a risk analysis model, as the sketch below illustrates.
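The effect is easy to demonstrate with a small Monte Carlo sketch. All the numbers below are illustrative assumptions: the 'true' uncertainty is taken to be lognormal, and the anchored expert sets a minimum and maximum that do not adjust far enough from the central value.

    import numpy as np

    rng = np.random.default_rng(42)

    # Assumed 'true' uncertainty: lognormal with median 100.
    true_values = rng.lognormal(mean=np.log(100), sigma=0.5, size=100_000)

    # Anchored elicitation: start at a central value of about 100,
    # then adjust insufficiently to reach the minimum and maximum.
    anchored_min, anchored_max = 80, 130

    outside = np.mean((true_values < anchored_min) |
                      (true_values > anchored_max))
    print(f"{outside:.0%} of outcomes fall outside the elicited range")

With these (assumed) numbers, well over half of the probability mass lies outside the expert's stated range, which is why anchoring so often produces over-confident distributions.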

There are other elements that may affect the correct assessment of uncertainty and the analyst should be aware of them in order to avoid unnecessary errors.

Inexpert expert

The person nominated as being able to provide the most knowledgeable opinion will occasionally turn out to have very little real knowledge of the quantity in question.

Culture of the organization

The environment within which people work can bias their estimates. Sales people, for example, will often provide unduly optimistic estimates of future sales because of the optimistic culture in which they work.

Conflicting agendas

Sometimes the expert will have a vested interest in the values that are submitted to a model.

Unwillingness to consider extremes

The expert will frequently find it difficult or be unwilling to envisage circumstances that would cause a variable to be extremely low or high. The analyst will often have to encourage the development of such extreme scenarios in order to elicit an opinion that realistically covers the entire possible range. This can be done by the analyst dreaming up some examples of extreme circumstances and discussing them with the expert.

Eagerness to say the right thing

Occasionally, the interviewee will try to provide the answer s/he thinks the analyst wants to hear. For this reason, it is important not to ask leading questions and never to offer a value for the expert to comment on.

Units used in the estimation

People frequently confuse the magnitudes of different units of measurement. An older (or English) person may be used to thinking of distances in miles and liquid volumes in (UK) gallons and pints. If the model uses SI units, the analyst should let the expert estimate in the units with which s/he is comfortable and convert the figures afterwards, as the sketch below shows.
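A small helper keeps this conversion out of the elicitation session altogether. The sketch below is illustrative (the function and table names are assumptions); the conversion factors themselves are the standard UK definitions.

    # Convert estimates elicited in familiar UK units into SI units
    # after the interview, so the expert never has to convert mentally.
    SI_FACTOR = {
        "mile": 1.609344,       # kilometres per mile (exact)
        "uk_gallon": 4.54609,   # litres per UK gallon (exact)
        "uk_pint": 0.56826125,  # litres per UK pint (gallon / 8)
    }

    def to_si(value, unit):
        """Return the SI equivalent of an estimate given in `unit`."""
        return value * SI_FACTOR[unit]

    print(to_si(30, "mile"))      # ~48.3 km
    print(to_si(5, "uk_gallon"))  # ~22.7 litres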

Expert too busy

People always seem to be busy and under pressure, and a risk analyst coming to ask a lot of difficult questions may not be very welcome. The expert may act brusquely or pay the whole process no more than lip-service. Obvious symptoms are when the expert offers over-simplistic estimates like X +/- Y%, or minimum, most likely and maximum values that are equally spaced for every estimated variable; a simple check for the latter is sketched below.
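The latter symptom is mechanical enough to screen for automatically. A minimal sketch (variable names and values are illustrative assumptions):

    # Flag elicited three-point estimates where (min, most likely, max)
    # are symmetrically spaced for every variable -- a possible sign
    # that the expert gave the exercise only cursory attention.
    estimates = {
        "cost": (80, 100, 120),
        "duration": (10, 12, 14),
        "demand": (450, 500, 550),
    }

    def equally_spaced(lo, mode, hi, tol=1e-9):
        return abs((mode - lo) - (hi - mode)) <= tol

    suspect = [name for name, (lo, mode, hi) in estimates.items()
               if equally_spaced(lo, mode, hi)]
    if len(suspect) == len(estimates):
        print("Every estimate is symmetrically spaced: worth querying.")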

Belief that the expert should be quite certain

It may be perceived by the expert that assigning a large uncertainty to a parameter would indicate a lack of knowledge and thereby undermine his/her reputation. The expert may need to be reassured that this is not the case: a true expert should have a more precise understanding of a parameter's uncertainty than a lay person, and may in fact appreciate that the uncertainty is greater than a lay person would expect.

Read on: Distributions used in modeling expert opinion
