Modeling expert opinion

See also: Distributions used in modeling expert opinion, Incorporating differences in expert opinions, Modeling expert opinion in ModelRisk, Subjective distributions

Risk analysis models almost invariably involve some element of subjective estimation. It is usually impossible to obtain data from which to accurately determine the uncertainty of all of the variables within the model for a number of reasons:

  • The data have simply never been collected in the past.

  • The data are too expensive to obtain.

  • Past data are no longer relevant (new technology, changes in political or commercial environment, etc.).

  • The data are sparse, requiring expert opinion 'to fill in the holes'.

When insufficient data are available to completely specify the uncertainty of a variable, one or more experts will usually be consulted to provide their opinion of the variable's uncertainty. This section offers guidelines for the analyst to model the experts' opinions as accurately as possible.

Expert opinion is thus an important source of information for quantifying model parameters and variables. Expert estimates can produce unrealistic distributions, but they are often the only source of information available to us.

In our experience, following these broad principles will give you the most reliable and unbiased estimate:

  • Select your expert carefully for knowledge and lack of bias. Include, if possible, the expert in the original model design.

  • Collate any relevant information and find a good way of presenting it to help the expert orient him/herself.

  • Explain the reason for requiring the estimate. This will improve cooperation and also help the expert comment on other factors that need consideration (correlations, etc.).

  • You might hold a brainstorming session with several experts. If so, restrict their conversation to discussing information, and avoid actual estimation within the group if possible. Then ask each expert in private for their estimate. This allows you to determine whether the level of information was well understood and resulted in consistent estimates.

  • Allow the expert to describe the reasons for uncertainty about the parameter in his/her own way, and make the model match the expressed opinion. Too often, an expert is asked to confine his/her opinion to a statement of minimum, most likely and maximum. Disaggregation methods are particularly helpful. Make full use of the range of distributions normally used to model expert opinion.

  • Model any correlations that the expert expresses.

  • If the expert's estimate is based on quantifiable data, consider performing a statistical analysis of the data rather than relying on the expert to provide the interpretation. People tend to believe that a small data set tells us more than it actually does.

  • Be aware of sources of bias and error in the estimation process, including the misunderstanding of probabilistic terms.

  • Generate a plot of the modelled opinion and check that it matches the expert's opinion. Fine-tune as necessary. When it does, ask him/her to sign and date it.

  • Offer the possibility of revision if the expert has a rethink.

  • If you have two or more expert estimates, make a combined distribution. If the estimates disagree strongly, check that there has not been a misunderstanding of the information, assumptions (especially for conditional distributions), or the quantity being estimated.

  • Test whether the model output is sensitive to the estimated parameter/variable. If it is, you may consider fine-tuning the estimate.
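
The combination step described above (merging two or more expert estimates) is often done with a simple mixture: on each simulation iteration, pick one expert at random, optionally weighted by credibility, and sample from that expert's distribution. A minimal Python sketch, where the three-point estimates and the equal weights are purely illustrative:

```python
import random

def sample_triangle(low, mode, high):
    """Draw one value from a Triangle(low, mode, high) distribution."""
    return random.triangular(low, high, mode)

def sample_combined(experts, weights):
    """Mixture of expert opinions: choose an expert according to the
    weights, then sample from that expert's triangular estimate."""
    low, mode, high = random.choices(experts, weights=weights, k=1)[0]
    return sample_triangle(low, mode, high)

# Two hypothetical expert estimates of a cost ($k): (min, most likely, max)
experts = [(80, 100, 140), (90, 120, 200)]
samples = [sample_combined(experts, weights=[0.5, 0.5])
           for _ in range(10_000)]
print(min(samples), max(samples))  # spans the widest of the two estimates
```

Strongly diverging estimates would show up here as a bimodal combined distribution, which is itself a useful prompt to go back and check for misunderstandings.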

Disaggregation

A key technique for eliciting distributions of opinion is to disaggregate the problem sufficiently well that the expert can concentrate on estimating something tangible and easy to envisage. For example, it will generally be more useful to ask the expert to break down her company's revenue into logical components (like region, product, subsidiary company, etc.) rather than to estimate the total revenue in one go. Disaggregation allows the expert and analyst to recognise dependencies between components of the total revenue. It also means that the risk analysis result will be less critically dependent on the estimate of each model component. Aggregating the estimates of the various revenue components will show a more complex and accurate distribution than could ever have been achieved by directly estimating the sum. The aggregation will also take care of the effects of the Central Limit Theorem automatically - something that is extremely hard for the expert to do in her head. Another benefit of disaggregation is that the logic of the problem usually becomes more apparent and the model therefore becomes more realistic.
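
The revenue example can be sketched in a few lines of Monte Carlo: sample each component from its own three-point estimate and sum them, letting the simulation do the aggregation. The component figures below are purely illustrative:

```python
import random

# Hypothetical revenue components ($m), each a (min, most likely, max) estimate
components = {
    "Region A": (10, 14, 20),
    "Region B": (5, 8, 12),
    "Region C": (20, 25, 35),
}

def simulate_total(n=10_000):
    """Aggregate the component estimates by simulation."""
    totals = []
    for _ in range(n):
        totals.append(sum(random.triangular(lo, hi, mode)
                          for lo, mode, hi in components.values()))
    return totals

totals = simulate_total()
# The simulated total clusters well inside the theoretical extremes (35..67):
# the Central Limit Theorem narrows the aggregate relative to its range,
# something a direct estimate of the total would be unlikely to capture.
print(sum(totals) / len(totals))
```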

During the disaggregation process, the analyst should be aware of where the key uncertainties lie within his model and therefore where he should place his emphasis. The analyst can check whether an appropriate level of disaggregation has been achieved by running a sensitivity analysis on the model and looking to see whether the Tornado chart is dominated by one or two model inputs.
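
A rough stand-in for the Tornado chart check is to correlate each simulated input with the output: if one input's correlation dominates, that is where estimation effort (and possibly further disaggregation) should go. A toy sketch with two hypothetical inputs, one far more uncertain than the other:

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical model: output = A + B, where A is far more uncertain than B
n = 5_000
a = [random.triangular(0, 100, 50) for _ in range(n)]
b = [random.triangular(40, 60, 50) for _ in range(n)]
out = [x + y for x, y in zip(a, b)]

for name, inp in [("A", a), ("B", b)]:
    print(name, round(pearson(inp, out), 2))
# A dominates the sensitivity ranking, so refining A's estimate matters most.
```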

Using distributions to model expert opinion

The main technique for eliciting expert opinion is to represent it with a distribution.

Once the model has been sufficiently disaggregated, it is usually not necessary to provide very precise estimates of each individual component of the model. In fact, three-point estimates are often quite sufficient: the three points being the minimum, most likely and maximum values the expert believes the value could take. These three values can be used to define either a Triangle distribution or some form of PERT distribution. Our preference is to use a modified PERT, because it has a natural shape that will invariably match the expert's view better than a Triangle distribution would. The analyst should attempt to determine the expert's opinion of the maximum value first and then the minimum, by considering scenarios that could produce such extremes. Then, the expert should be asked for his/her opinion of the most likely value within that range.

Determining the parameters in the order 1. maximum, 2. minimum, 3. most likely will go some way to removing the anchoring bias. An individual will usually begin an estimate of the distribution of uncertainty of a parameter with a single value (usually the most likely value) and then make adjustments for its minimum and maximum from that first value. The problem is that these adjustments are rarely sufficient to encompass the range of values that could actually occur: the estimator appears to be 'anchored' to the first estimated value. This is one source of over-confidence and can have a dramatic impact on the validity of a risk analysis model.
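
For reference, the modified PERT maps the three elicited points onto a scaled Beta distribution, with a shape parameter controlling how peaked it is (a shape of 4 recovers the standard PERT). A minimal sketch, with hypothetical elicited values:

```python
import random

def modified_pert(low, mode, high, shape=4.0):
    """Sample from a modified PERT(low, mode, high) distribution.
    'shape' controls peakedness; shape=4 gives the standard PERT."""
    alpha = 1 + shape * (mode - low) / (high - low)
    beta = 1 + shape * (high - mode) / (high - low)
    return low + random.betavariate(alpha, beta) * (high - low)

# Hypothetical elicited estimate: min 10, most likely 12, max 25 (days)
samples = [modified_pert(10, 12, 25) for _ in range(10_000)]
# Theoretical PERT mean is (low + 4*mode + high)/6 = (10 + 48 + 25)/6 ~ 13.8
print(round(sum(samples) / len(samples), 1))
```

Raising the shape parameter above 4 concentrates more probability around the most likely value, which is one way to fine-tune the fitted distribution against the expert's plotted opinion.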

Occasionally, a model will not disaggregate evenly into sufficiently small components, leaving the model's outputs strongly affected by one or more individual subjective estimates. When this is the case, it is useful to employ a more rigorous approach to eliciting an expert's opinion than a simple three-point estimate. In such cases, the modified PERT distribution is a good starting point, but the Relative distribution offers yet more flexibility.
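
The Relative distribution is essentially a piecewise-linear density drawn through points the expert supplies, each with a relative height. One simple (if not the most efficient) way to sample it is rejection sampling; the elicited points below are hypothetical:

```python
import random

def sample_relative(xs, heights):
    """Rejection-sample from a piecewise-linear 'Relative' density
    defined by points xs and unnormalised relative heights."""
    lo, hi, hmax = xs[0], xs[-1], max(heights)

    def density(x):
        # Linear interpolation between consecutive elicited points
        for (x0, h0), (x1, h1) in zip(zip(xs, heights),
                                      zip(xs[1:], heights[1:])):
            if x0 <= x <= x1:
                return h0 + (h1 - h0) * (x - x0) / (x1 - x0)
        return 0.0

    while True:
        x = random.uniform(lo, hi)
        if random.uniform(0, hmax) <= density(x):
            return x

# Hypothetical elicited shape: a peak near 5 with a long shoulder to 20
xs, heights = [0, 5, 10, 20], [0.0, 1.0, 0.4, 0.0]
samples = [sample_relative(xs, heights) for _ in range(5_000)]
print(round(sum(samples) / len(samples), 1))
```

Because the expert only supplies x-values and relative heights, shapes that no standard three-parameter distribution can reproduce (shoulders, plateaus, mild bimodality) are easy to capture.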

Read on: Sources of error in subjective estimation
