# Operational risk modeling

An example of a Monte Carlo simulation risk analysis model for cost modeling

Minimum software requirements: ModelRisk Complete edition

Technical difficulty: 4

Techniques used: Monte Carlo simulation in Excel

# Model description

Operational risk is defined in the Basel II capital accord as 'The risk of loss resulting from inadequate or failed internal processes, people and systems or from external events'.

Operational risk includes: internal and external fraud; workplace discrimination and health-and-safety failures; antitrust, trading and accounting violations; natural disasters and terrorism; computer systems failure; data errors; failed mandatory reporting (e.g. failing to send out statements or policy documents within a required time); and negligent loss of clients' assets. It excludes strategic risk and reputation risk, although the latter can be affected by the occurrence of a high-visibility operational risk.

Basel II and various corporate scandals have brought operational risk into particular focus in the banking sector, where operational risks must be closely and transparently monitored and reported. Sufficient capital must be held in reserve to cover operational risk at a high level of certainty to achieve the highest rating under Basel II. Under Basel II's 'Advanced Measurement Approach' (AMA), which will usually be the least onerous on a bank provided it has the necessary reporting systems in place, operational risk can be modelled as an aggregate portfolio problem similar to insurance risk.

The model shown below uses an FFT method to calculate the capital required to cover a bank's risks at the 99.9th percentile level. Basel II allows a bank to use Monte Carlo simulation to determine the 99.9th percentile, but FFT methods are preferable to simulation here because such a high percentile of the loss distribution requires a very large number of samples to determine with any precision. The difference between the 99.9th percentile and the expected loss is called the 'unexpected loss' and equates to the capital charge that the bank must set aside to cover operational risk.
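The FFT approach can be sketched in Python/NumPy. This is a minimal illustration, not the ModelRisk implementation: it aggregates a single compound Poisson-Lognormal risk (all parameter values and grid settings below are assumptions chosen for the example), and reads off the 99.9th percentile and unexpected loss.

```python
import numpy as np
from math import erf, sqrt, log

# Illustrative assumptions (not taken from the model workbook)
lam = 20.0              # Poisson frequency: expected loss events per year
mu, sigma = 10.0, 1.2   # Lognormal severity parameters (log scale)

n = 2**16               # grid size (power of two for the FFT)
h = 5_000.0             # discretization span in currency units

# Discretize the Lognormal severity onto the grid (rounding method)
edges = (np.arange(n) + 0.5) * h
cdf = np.array([0.5 * (1.0 + erf((log(x) - mu) / (sigma * sqrt(2.0))))
                for x in edges])
sev = np.diff(np.concatenate(([0.0], cdf)))   # P(severity falls in each cell)

# Compound-Poisson aggregation via the probability generating function:
# transform of aggregate pmf = exp(lambda * (phi_severity - 1))
phi = np.fft.fft(sev)
agg = np.real(np.fft.ifft(np.exp(lam * (phi - 1.0))))
agg = np.clip(agg, 0.0, None)
agg /= agg.sum()

# 99.9th percentile and unexpected loss
cum = np.cumsum(agg)
p999 = np.searchsorted(cum, 0.999) * h
expected = lam * np.exp(mu + sigma**2 / 2)    # mean of the compound distribution
print(f"99.9th percentile:               {p999:,.0f}")
print(f"Unexpected loss (capital charge): {p999 - expected:,.0f}")
```

The key design point is that the full aggregate distribution is obtained in one pass, so the 99.9th percentile is computed to grid precision rather than estimated from a handful of simulated tail observations.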

The model assumes that each risk is independent (making a Poisson distribution appropriate for modelling the frequency, though a Pólya or Delaporte may well be better) and that the impacts all follow a Lognormal distribution. One could instead use fitted distribution objects linked to the available data. The chief difficulty in performing an operational risk calculation is acquiring relevant data from which to determine the parameters of the distributions.
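Under these assumptions, the Monte Carlo alternative is straightforward to sketch (again an illustration, not the workbook itself; the three risk categories and their parameters are invented for the example). Note that with 100,000 iterations only about 100 samples fall beyond the 99.9th percentile, which is why the estimate of that percentile is imprecise.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical independent risk categories (parameters are assumptions)
risks = [
    {"lam": 50.0, "mu": 8.0,  "sigma": 1.0},   # frequent, small losses
    {"lam": 5.0,  "mu": 10.0, "sigma": 1.3},   # moderate
    {"lam": 0.5,  "mu": 12.0, "sigma": 1.5},   # rare, severe
]

n_sims = 100_000
total = np.zeros(n_sims)
for r in risks:
    # Poisson number of events in each simulated year
    counts = rng.poisson(r["lam"], size=n_sims)
    # One Lognormal impact per event, summed back into its year
    losses = rng.lognormal(r["mu"], r["sigma"], size=counts.sum())
    year = np.repeat(np.arange(n_sims), counts)
    total += np.bincount(year, weights=losses, minlength=n_sims)

p999 = np.quantile(total, 0.999)
print(f"Expected loss:   {total.mean():,.0f}")
print(f"99.9th pctile:   {p999:,.0f}")
print(f"Unexpected loss: {p999 - total.mean():,.0f}")
```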

More examples related to Basel II can be found in this Wiki topic.

Operational risks, especially those with a large impact, occur very infrequently, so an individual bank often has no relevant data at all. However, one can base the frequency and severity distributions on general banking industry databases and use credibility theory (for example, the Bühlmann credibility factor; Klugman et al., 1998) to gradually assign more weight over time to the individual bank's own experience relative to the industry as a whole.
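The Bühlmann blending can be sketched as follows (a simplified illustration; the function name and all numbers are assumptions for the example). The credibility factor Z = n / (n + k) shifts weight from the industry prior to the bank's own experience as years of data accumulate.

```python
# Bühlmann credibility: blend a bank's own observed loss frequency with
# an industry-wide estimate. All parameter values are illustrative.

def buhlmann_estimate(bank_mean, industry_mean, n_years, k):
    """k = (expected process variance) / (variance of hypothetical means);
    Z -> 1 as the bank accumulates its own experience."""
    z = n_years / (n_years + k)
    return z, z * bank_mean + (1.0 - z) * industry_mean

industry_mean = 12.0   # industry average events per year (assumed)
bank_mean = 8.0        # bank's own observed average (assumed)
k = 5.0                # assumed credibility constant

for n_years in (1, 5, 20):
    z, est = buhlmann_estimate(bank_mean, industry_mean, n_years, k)
    print(f"{n_years:2d} years: Z = {z:.2f}, blended frequency = {est:.2f}")
```

With these numbers the blended frequency migrates from near the industry value toward the bank's own experience as n_years grows, which is exactly the behaviour described above.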

Credibility theory is often used in the insurance industry when an insurer offers a new policy hoping to attract a particular sector of the population with a known risk level: as a history of claims emerges, one migrates from the expected claim frequency and severity to those actually observed. In fact, the Basel Committee on Banking Supervision has now abandoned the AMA approach in favour of a simpler method, partly because every bank created its own model, which was difficult to validate. As a result, banks, which tended to see AMA modeling as more of a regulatory requirement than a valuable analytic tool, have stopped doing proper quantitative risk analysis on their operational risks.

That’s a shame, because the purpose of AMA was really to get banks to understand what drives the OpRisk so they can manage it and avoid the risk of liquidation. The real value of AMA was, in theory, the requirement to include what Basel II called ‘internal control factors’. But bank risk modelers didn’t know how to apply these since the term, and the guidelines, were very vague.

Banks should instead have focused less on statistical modeling of OpRisk and more on modeling the risks they are exposed to and the effectiveness of the control and mitigation strategies that manage those risks. Vose Software's Pelican and StopRisk products are ideal tools that, together, help a bank to truly evaluate and manage its OpRisk.