
Quantitative Modeling of Operational Risk Losses When Combining Internal and External Data

Journal 35: Zicklin-Capco Institute Paper Series in Applied Finance

Jens Perch Nielsen, Montserrat Guillén, Catalina Bolancé, Jim Gustafsson

We present an overview of methods to estimate risk arising from operational losses. Our approach is based on the study of the statistical severity distribution of a single loss. We analyze the fundamental issues that arise in practice when modeling operational risk data, and we address the statistical problem of estimating an operational risk distribution both in standard, data-abundant situations and when the available data are challenged by the inclusion of external data or by underreporting. Our presentation includes an application showing that failure to account for underreporting may lead to a substantial underestimation of operational risk measures. External data can be incorporated into our modeling approach in a straightforward way.
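As a minimal numerical illustration of the underreporting effect described above (a simulation sketch, not the authors' method): if losses below a reporting threshold never enter the sample, a naive estimate built from reported losses alone understates the aggregate loss. All parameters below are hypothetical.

```python
import random

random.seed(42)

# Hypothetical parameters, for illustration only (not from the paper):
MU, SIGMA = 1.0, 1.5     # lognormal severity of individual losses
THRESHOLD = 5.0          # losses below this level are never reported
N = 100_000              # number of operational loss events

losses = [random.lognormvariate(MU, SIGMA) for _ in range(N)]
reported = [x for x in losses if x >= THRESHOLD]

true_total = sum(losses)
reported_total = sum(reported)

print(f"reporting rate:            {len(reported) / N:.1%}")
print(f"aggregate loss understated by "
      f"{1 - reported_total / true_total:.1%}")
```

Because the small losses are numerous, both the loss frequency and the aggregate loss are biased downward when only the reported sample is used, which is the underestimation of risk measures the article warns about.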

The challenge of operational risk quantification
In the banking sector, as in many other fields, financial transactions are subject to operational errors. Operational risk is therefore a core concern of banking supervision and a key component of insurance supervision as well.

Regulators require the measurement of operational risk as one of the indicators of solvency. Research in this area has grown rapidly in recent years, driven by the Basel accords in the banking industry and Solvency II in the insurance sector, and partly because the credit crunch has sharpened the appetite for better risk management. These two regulatory frameworks have set the path toward international standards of market transparency for financial and insurance service operators [Panjer (2006), Franzetti (2010)].

Measuring operational risk requires both a command of quantitative tools and an understanding of financial activities in a very broad sense, technical as well as commercial. Our presentation offers a practical perspective that combines statistical analysis with a management orientation.

In general, financial institutions do not seem to keep historical records of internal operational risk losses. Models for operational risk quantification therefore have only scarce samples with which to estimate and validate.

Nobody doubts, therefore, that complementing internal data with more abundant external data is desirable [Dahen and Dionne (2007)]. A suitable framework for overcoming the lack of data is to obtain a compatible sample from a consortium: individual firms come together to pool data, under the assumption that the population from which the data are drawn is similar for every member of the group. Consortium data-pooling can be controversial, because not all businesses are the same. One can question whether consortium data can be considered to have been generated by the same process, or whether pooled data accurately reflect the size of each financial institution's transaction volume. We will assume that scaling has already been corrected for, or can be corrected for within our modeling process, in which external data provide prior information for the examination of internal data. We therefore do not dwell on pre-scaling when combining internal and external data sets.
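The idea that external data provide prior information for the analysis of internal data can be sketched, for instance, with a conjugate normal prior on the mean of log-losses under a lognormal severity assumption. This is a simplification for illustration, not the paper's model; all data and parameters below are invented.

```python
import statistics

# Illustrative log-transformed losses (lognormal severity assumption):
internal_log = [2.1, 3.4, 1.8, 4.0, 2.9]                    # scarce internal sample
external_log = [2.5, 3.1, 2.0, 3.8, 2.7, 3.3,
                2.2, 3.6, 2.8, 3.0, 2.4, 3.5]               # abundant consortium sample

# Known-variance normal model for the mean of log-losses keeps the algebra explicit.
sigma2 = 1.0                               # assumed common variance of log-losses
prior_mean = statistics.mean(external_log)  # external data set the prior
prior_var = sigma2 / len(external_log)

n = len(internal_log)
xbar = statistics.mean(internal_log)

# Conjugate normal update: precisions add, means combine with precision weights.
post_var = 1.0 / (1.0 / prior_var + n / sigma2)
post_mean = post_var * (prior_mean / prior_var + n * xbar / sigma2)

print(f"prior mean (external):    {prior_mean:.3f}")
print(f"internal sample mean:     {xbar:.3f}")
print(f"posterior mean (blended): {post_mean:.3f}")
```

The posterior mean is a credibility-style weighted average: it lies between the consortium prior and the internal sample mean, with the weight on internal data growing as the internal sample accumulates.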

