Tuesday, July 18, 2017

Model Risk Management



Model risk is the potential for adverse consequences from decisions based on incorrect or misused model outputs and reports.
In other words, the outcome of the model is not the same as expected.

-         Model Failure Examples-


1-     Long Term Capital Management (LTCM) – The fund followed an arbitrage investment strategy on bonds, involving hedging against a range of volatility in foreign currencies and bonds, based on complex models.
Arbitrage margins are small, so the fund took on highly leveraged positions. When the Russian crisis kicked off in 1998, European and US markets fell drastically and LTCM was badly hit by market losses.

2-     CDO / MBS – 2007 subprime mortgage crisis – Between 2002 and 2007, mortgage underwriting standards deteriorated significantly. Nevertheless, those loans were bundled into MBS and CDOs carrying high ratings, believed to be justified by credit-enhancement techniques. Investors relied on rating agencies, blindly in many cases. A significant portion of AAA CDO and MBS tranches were ultimately downgraded to junk in 2007 and early 2008, once the housing bubble burst in 2006.

-         Market risk regulatory pre-crisis models-


-         The VaR metrics used before the outburst of the financial crisis did not adequately capture tail-risk events, credit risk events or market illiquidity.
-         When the financial crisis arose, essentially driven by credit risk events, a large number of banks posted daily trading losses many times greater than their VaR estimates.
-         Model risk – tail credit risk events were not adequately modelled, hence possible losses in stressed conditions were underestimated.
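A minimal sketch of that failure mode, with purely hypothetical numbers: a 99% historical-simulation VaR calibrated on a calm period says nothing about losses outside that sample, so every stressed day shows up as an exception.

```python
# Illustrative sketch only: 99% historical-simulation VaR estimated on
# calm-period P&L, then compared against hypothetical stressed-period losses.
calm_pnl = [-1.2, 0.4, -0.8, 1.1, -0.3, 0.9, -1.5, 0.2, -0.6, 0.7] * 25  # 250 days

def historical_var(pnl, level=0.99):
    """VaR as the loss quantile of the empirical P&L distribution."""
    losses = sorted(-p for p in pnl)          # losses as positive numbers, ascending
    idx = int(level * len(losses)) - 1        # simple empirical-quantile index
    return losses[idx]

var_99 = historical_var(calm_pnl)             # worst calm-period loss band

# Stressed days: losses far beyond anything in the calm estimation sample,
# so every one of them breaches the calm-period VaR.
stressed_pnl = [-4.0, -6.5, -3.2, -5.1]
exceptions = sum(1 for p in stressed_pnl if -p > var_99)
```

The point of the sketch: the model is internally consistent yet systematically underestimates stressed losses, because the estimation window contains no tail events.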

1-  Types Of Models- 


2-  Elements of Model Risk Management Framework-


2.1  Model Lifecycle Management-

Model development, documentation, classification and follow up-
-         Models are classified according to the level of risk.
-         The documentation should include description, key variables, assumptions and algorithms.

2.2   Model Risk Quantification-

-         Data, sensitivity to errors or absence of variables;
-         Estimates, sensitivity of estimates (maximum impact, alternative models);
-         Uses, predictive power evolution, impact of erroneous use, etc.

2.3   Model Control Framework-

-         Models assigned the highest level of risk are subject to continuous assessment.
-         In addition to the above, all models should be re-evaluated by Validation:
o   Annually.
o   If they undergo material changes.
-         Before they are deployed to production, they should have been approved.

3-  Model Risk Assessment Framework-


1)     Aspects – identifying issues.
2)     Impact – the consequences of the issues.
3)     Probability of occurrence.
4)     Risk Score = Probability * Impact.
5)     Model Risk Control – action to eliminate or mitigate the issue.
6)     Residual Risk Score = Risk Score – Risk Score considering the model risk control.
7)     Ranking – sorting by Residual Risk Score to identify the issues that need priority.
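The scoring steps above can be sketched as follows; the issue names, probabilities and impact values are all hypothetical.

```python
# Hypothetical model-risk issues: probability/impact before and after
# applying a control (all numbers invented for illustration).
issues = [
    {"name": "stale input data",  "p": 0.4, "impact": 5, "p_ctrl": 0.1, "impact_ctrl": 5},
    {"name": "unstable estimate", "p": 0.2, "impact": 8, "p_ctrl": 0.2, "impact_ctrl": 3},
]

def risk_score(probability, impact):
    return probability * impact

for issue in issues:
    inherent = risk_score(issue["p"], issue["impact"])
    controlled = risk_score(issue["p_ctrl"], issue["impact_ctrl"])
    # Residual Risk Score as defined in the framework above:
    # inherent score minus the score considering the control.
    issue["residual"] = inherent - controlled

# Ranking: sort by residual score to see which issues need priority.
ranking = sorted(issues, key=lambda i: i["residual"], reverse=True)
```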



4-  Model Validation-

The set of processes and activities intended to verify that models are performing as expected.

4.1       Model Validation Metrics-

a)   Performance Metrics-

1-     Coefficient of Correlation.
2-     Root Mean Square Error (RMSE).
3-     Signal to Noise Ratio.
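Minimal stdlib-only sketches of the three metrics on toy data (the signal-to-noise formulation below, variance of the actuals over the mean squared error, is one common convention among several):

```python
import math

# Toy actual vs. predicted values (equal-length lists assumed).
actual    = [3.0, 5.0, 2.0, 7.0]
predicted = [2.5, 5.5, 2.0, 6.0]

def rmse(y, yhat):
    """Root Mean Square Error."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

def correlation(y, yhat):
    """Pearson coefficient of correlation."""
    n = len(y)
    my, mp = sum(y) / n, sum(yhat) / n
    cov = sum((a - my) * (b - mp) for a, b in zip(y, yhat))
    sy = math.sqrt(sum((a - my) ** 2 for a in y))
    sp = math.sqrt(sum((b - mp) ** 2 for b in yhat))
    return cov / (sy * sp)

def signal_to_noise(y, yhat):
    """Variance of the actuals divided by the error variance."""
    n = len(y)
    my = sum(y) / n
    signal = sum((a - my) ** 2 for a in y) / n
    noise = sum((a - b) ** 2 for a, b in zip(y, yhat)) / n
    return signal / noise
```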

b)  Cross Validation-

To evaluate a model by partitioning the original sample into a training set, used to train the model, and a test set, used to evaluate it.
b.1- Leave-k-Out Cross-Validation – k observations are left out in each iteration.
b.2- K-fold Cross-Validation – the original sample is partitioned into k sub-samples.
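A minimal stdlib-only sketch of k-fold cross-validation, using a trivial "predict the training mean" model as a stand-in (a real model would replace the fit/predict steps):

```python
# Toy data; each fold in turn serves as the test set.
data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]

def k_fold_splits(n, k):
    """Yield (train_indices, test_indices) pairs partitioning range(n) into k folds."""
    fold_size = n // k
    for i in range(k):
        test = list(range(i * fold_size, (i + 1) * fold_size))
        train = [j for j in range(n) if j not in test]
        yield train, test

def cv_mse(data, k):
    """Mean squared error of the mean-predictor, averaged over all k folds."""
    errors = []
    for train, test in k_fold_splits(len(data), k):
        mean = sum(data[j] for j in train) / len(train)   # "fit" on the training set
        errors += [(data[j] - mean) ** 2 for j in test]   # "evaluate" on the test set
    return sum(errors) / len(errors)
```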

c)   KS Test-

The maximum difference between the cumulative % of events and the cumulative % of non-events across the sorted score bands.
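The statistic can be sketched as below; the score bands and counts are invented for illustration.

```python
# Each tuple: (score_band, events_in_band, non_events_in_band) — toy counts,
# bands already sorted from riskiest to safest.
bands = [(1, 50, 10), (2, 30, 20), (3, 15, 30), (4, 5, 40)]

def ks_statistic(bands):
    """Maximum gap between the cumulative event and non-event distributions."""
    total_events = sum(e for _, e, _ in bands)
    total_non = sum(n for _, _, n in bands)
    cum_e = cum_n = 0.0
    best = 0.0
    for _, e, n in bands:
        cum_e += e / total_events          # cumulative % of events
        cum_n += n / total_non             # cumulative % of non-events
        best = max(best, abs(cum_e - cum_n))
    return best
```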

d)  ROC curve for Logistic Regression-

Receiver Operating Characteristic (ROC).
Find the area under the curve, with sensitivity on the Y-axis and 1 – specificity on the X-axis.
Sensitivity (Y-axis) – the proportion of actual positives (Y = 1) that the model predicts as positive.
Specificity – the proportion of actual negatives (Y = 0) that the model predicts as negative.
Both are determined by a cutoff (c) on the predicted probability:
P > c is classified as positive;
P ≤ c is classified as negative.
As c increases from 0 to 1, each cutoff yields a point (1 – Specificity, Sensitivity) on the curve.
Result –
If the area under the curve is between 0.90 and 1.00 – excellent.
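The sweep over cutoffs can be sketched as follows, with the area computed by the trapezoidal rule (scores and labels are toy values):

```python
# Toy predicted probabilities and true labels (1 = event, 0 = non-event).
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   1,   0,   1,    0,   1,   0,   0,   0]

def roc_auc(scores, labels):
    """Trace the ROC curve over every cutoff and integrate with trapezoids."""
    pos = sum(labels)
    neg = len(labels) - pos
    points = [(0.0, 0.0)]                  # cutoff above all scores: nothing positive
    for c in sorted(set(scores), reverse=True):
        # Classify score >= c as positive at this cutoff.
        tp = sum(1 for s, y in zip(scores, labels) if s >= c and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= c and y == 0)
        points.append((fp / neg, tp / pos))  # (1 - specificity, sensitivity)
    auc = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        auc += (x1 - x0) * (y0 + y1) / 2.0   # trapezoid between adjacent points
    return auc

auc = roc_auc(scores, labels)
```

An AUC of 0.5 corresponds to random ranking, 1.0 to perfect separation of events from non-events.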

e)   Hosmer and Lemeshow (Chi-Square) Test-

Five groups (here, rating classes) were formed.
For every group, the average estimated default probability is calculated and used to derive the expected number of defaults in that group.
Next, this number is compared with the number of realized defaults in the respective group.
The resulting test statistic over the groups is approximately chi-square distributed, which in turn yields a p-value for the rating model.
Calculated as:
T = Σ (Di – ni·pi)² / (ni·pi·(1 – pi)),  summed over i = 1 … k
o k = number of rating classes, ni = number of companies in rating class i, Di = number of defaulted obligors in class i, pi = forecasted probability of default for rating class i.
o There is no critical value of p that could be used to determine whether the estimated PDs are correct or not.
o The closer the p-value is to zero, the worse the estimation is.
o First – all else equal, the greater the chi-square statistic, the stronger the relationship between the dependent and independent variables.
o Second – the lower the p-value associated with a chi-square statistic, the stronger the relationship between the dependent and independent variables.
o Third – if the p-value is .05 or less, you can generalize from a random sample to the population and claim the two variables are associated in the population.
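The test statistic can be sketched directly from those definitions; the rating classes, counts and PDs below are hypothetical (in practice the chi-square p-value would come from a statistics library such as scipy.stats.chi2):

```python
# Hypothetical rating classes: n = obligors, D = realized defaults,
# p = forecasted probability of default for the class.
classes = [
    {"n": 100, "D": 2,  "p": 0.01},
    {"n": 100, "D": 5,  "p": 0.04},
    {"n": 100, "D": 12, "p": 0.10},
]

def hl_statistic(classes):
    """T = sum over classes of (D_i - n_i*p_i)^2 / (n_i*p_i*(1 - p_i))."""
    t = 0.0
    for c in classes:
        expected = c["n"] * c["p"]                     # expected defaults in class
        t += (c["D"] - expected) ** 2 / (expected * (1 - c["p"]))
    return t
```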

4.2       Other Model Validation Approaches-

-         Stress testing – analysis of the impact of single extreme events.
-         Scenario testing – a scenario is a probable future environment, either at a point in time or over a period.
-         Sensitivity testing – a sensitivity is the effect of a set of alternative assumptions regarding a future environment.
-         Reverse stress testing
-         Back testing
-         Simulation/convergence test
-         Profit and loss attribution
-         Challenger/benchmark models
-         Replication
-         Boundary test


Example- Credit Risk Modeling-

Models are typically statistical in nature and the full suite of traditional model validation techniques is applicable. Some examples:
-         Results benchmarking – the process considers the model’s applications/uses to inform meaningful analysis. Benchmark both Expected Loss and Capital using various model settings and assumptions.
-         Sensitivity analysis – considers sensitivity to a variety of inputs and assumptions to provide effective challenge across the modeling process.
-         Back-testing – comparing model predictions with realized outcomes.
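A minimal sketch of sensitivity analysis on the standard Expected Loss decomposition EL = PD × LGD × EAD, shocking one input at a time while holding the others fixed (all numbers hypothetical):

```python
# Hypothetical base-case credit risk parameters.
base = {"PD": 0.02, "LGD": 0.45, "EAD": 1_000_000}

def expected_loss(PD, LGD, EAD):
    """Expected Loss = probability of default * loss given default * exposure."""
    return PD * LGD * EAD

base_el = expected_loss(**base)

# Apply a +10% shock to each input in turn and record the change in EL.
sensitivities = {}
for key in base:
    shocked = dict(base)
    shocked[key] *= 1.10
    sensitivities[key] = expected_loss(**shocked) - base_el
```

Because EL is multiplicative, a 10% shock to any single input raises EL by the same 10%; with nonlinear models the per-input sensitivities would differ and would be the interesting output.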

