
Kernel Alternatives to Approximate Operational Severity Distribution: An Empirical Application

Journal 35: Zicklin-Capco Institute Paper Series in Applied Finance

Filippo di Pietro, María Dolores Oliver Alfonso, Ana I. Irimia Diéguez

One of the main topics in operational risk is the estimation of the loss severity distribution. Numerous parametric estimations have been suggested, although very few work well for both high-frequency small losses and low-frequency large losses. In this paper, kernel alternatives to the most widely used parametric models are explored to approximate the operational severity distribution. The good performance of the double transformation kernel estimation in the context of operational risk severity is worthy of special mention. This method, based on the work of Bolancé and Guillén (2009), was initially proposed in the context of insurance claim costs, and it represents an advance in operational risk research.

The revised Basel Capital Accord requires banks to meet a capital requirement for operational risk as part of an overall risk-based capital framework. With regard to definitional aspects, the Risk Management Group (RMG) of the Basel Committee and industry representatives have agreed on a standardized definition of operational risk, i.e., “the risk of loss resulting from inadequate or failed internal processes, people and systems or from external events” [BIS (2001)]. The proposed discipline establishes various schemes for calculating the operational risk charge, of increasing sophistication and risk sensitivity. The most sophisticated method is the Advanced Measurement Approaches (AMA), based on the adoption of banks' internal models.

Concerning the measurement issue, a growing number of articles, research papers and books have addressed the topic from a theoretical point of view. In practice, this objective is complicated by the relatively short period over which operational risk data have been gathered by banks. Obviously, the greatest difficulty is in collecting information on infrequent but large losses that contribute the most to the capital requirement. The need to evaluate the exposure to potentially severe tail events is one of the reasons why the new capital framework requires banks to supplement internal data with further sources (i.e., external data and scenario analysis) in order to compute their operational risk capital requirement.

Recently, the measurement of operational risk has moved towards the data-driven Loss Distribution Approach (LDA), and many financial institutions have therefore begun collecting operational loss data in order to take advantage of it. The LDA requires the aggregation of the severity and frequency distributions in order to obtain the aggregated loss distribution.
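To make the aggregation concrete, the following is a minimal Monte Carlo sketch of the LDA, assuming a Poisson frequency and a lognormal severity with purely illustrative parameters (the paper's data and fitted distributions are not reproduced here):

```python
# Minimal Monte Carlo sketch of the Loss Distribution Approach (LDA).
# Assumptions (illustrative, not from the paper): annual loss counts follow
# a Poisson distribution and individual loss severities are lognormal.
import numpy as np

rng = np.random.default_rng(seed=42)

lam = 25.0            # hypothetical mean number of losses per year
mu, sigma = 8.0, 2.0  # hypothetical lognormal severity parameters
n_years = 100_000     # number of simulated years

# For each simulated year: draw a loss count, then sum that many severities.
counts = rng.poisson(lam, size=n_years)
aggregate = np.array([rng.lognormal(mu, sigma, size=n).sum() for n in counts])

# Operational VaR at the 99.9% level is the corresponding quantile of the
# simulated aggregate annual loss distribution.
var_999 = np.quantile(aggregate, 0.999)
print(f"Simulated 99.9% operational VaR: {var_999:,.0f}")
```

Any severity model discussed below (parametric, EVT-based, or kernel-based) can be slotted into the severity-drawing step of such a simulation.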

The estimation of the severity loss distribution is probably one of the most significant phases, and the one that involves the greatest number of complications in the estimation of the operational risk capital requirement. An incorrect estimation of the distribution severely distorts the model and leads to an underestimate or overestimate of the regulatory capital for operational risk. This could have a large impact on the Basel II economic capital requirement. Recent literature on operational risk has focused attention on the use of a parametric estimation of the loss distribution. This is the simplest method to follow, since it attempts to fit analytical distributions with certain properties. The aim of this approach is to find a distribution that fits the severity distribution of the losses in the available sample. Another technique commonly applied in operational risk is Extreme Value Theory (EVT), which is a good methodology in cases where the main focus is the tail of the distribution.
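As a sketch of the parametric route, one fits a candidate distribution by maximum likelihood and then checks the fit. The example below assumes a lognormal candidate and a stand-in simulated loss sample; the paper itself compares several candidate distributions on real data:

```python
# Sketch of a parametric severity fit, assuming a lognormal candidate
# for illustration; the loss sample here is simulated, not the paper's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
losses = rng.lognormal(mean=8.0, sigma=2.0, size=500)  # stand-in loss sample

# Maximum-likelihood fit; floc=0 pins the location parameter at zero,
# as is usual for strictly positive loss data.
shape, loc, scale = stats.lognorm.fit(losses, floc=0)

# A Kolmogorov-Smirnov test gives a rough goodness-of-fit check (only
# approximate here, since the parameters were estimated from the sample);
# a low p-value signals that the parametric candidate misfits the data.
ks_stat, p_value = stats.kstest(losses, 'lognorm', args=(shape, loc, scale))
print(f"KS statistic: {ks_stat:.4f}, p-value: {p_value:.4f}")
```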

We take an alternative non-parametric approach, which permits the quantification of operational loss severity by fitting the whole distribution and which does not require the specification of a parametric form. With this in mind, the kernel estimation is taken as the starting point. This has been improved with the parametric transformation approach given in Wand, Marron and Ruppert (1991) and recently considered in Bolancé, Guillén and Nielsen (2008) and Buch-Larsen, Guillén, Nielsen and Bolancé (2005). The analysis is completed by using the latest methodology developed by Bolancé and Guillén (2009), based on a double transformation, which in our opinion can notably improve the estimation of operational loss severity. In order to explore all possible methods, a sample of operational risk losses from a medium-sized Spanish savings bank is utilized.
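The mechanics of transformation kernel estimation can be conveyed with a simplified sketch: transform the heavy-tailed losses to a scale where a standard kernel estimator behaves well, estimate the density there, and back-transform with the Jacobian. Note that this sketch uses a plain log transform purely for illustration; the methods studied in the paper rely on the modified Champernowne transformation and, in the double transformation case, a further transformation step:

```python
# Simplified sketch of transformation kernel density estimation.
# A plain log transform stands in for the paper's modified Champernowne
# (and double) transformation; the loss sample is simulated.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(seed=1)
losses = rng.lognormal(mean=8.0, sigma=2.0, size=500)  # stand-in loss sample

# 1) Transform the heavy-tailed losses to a more symmetric scale.
z = np.log(losses)

# 2) Estimate the density on the transformed scale with a standard kernel.
kde = gaussian_kde(z)

# 3) Back-transform: f_X(x) = f_Z(log x) * |d(log x)/dx| = f_Z(log x) / x.
def severity_density(x):
    x = np.asarray(x, dtype=float)
    return kde(np.log(x)) / x

grid = np.linspace(losses.min(), losses.max(), 5)
print(severity_density(grid))
```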

In this article, we attempt to find which estimation methodology (parametric or non-parametric) yields the most appropriate measure of operational risk severity, with a particular focus on the case of savings banks. We demonstrate that non-parametric estimation with this improvement (the double transformation kernel estimation) is a good alternative methodology for the approximation of the loss severity distribution, since it performs much better than parametric estimation. These methodologies render the estimation of a threshold unnecessary. Another good property of the non-parametric estimation with respect to EVT is that it does not appear to overestimate the capital requirement. The double transformation kernel estimation was first applied in the insurance claims field to approximate loss distributions; we believe that this methodology can also represent an advance in operational loss severity estimation.

The remainder of the paper is organized as follows. The second section describes the parametric and non-parametric methodologies. The third section presents the characteristics of the data and an exploratory analysis. The fourth section compares the distributions obtained with parametric and non-parametric estimations in approximating the severity loss distribution. The fifth section estimates an operational VaR by aggregating the different severity distributions with a Poisson frequency distribution. The final section concludes.
