
Friday, December 15, 2023

Modeling Low Default Portfolio

The BCR Approach (Modeling Low Default Portfolio):


Benjamin, Cathcart and Ryan (BCR) proposed adjustments to the Pluto & Tasche approach (also called the Confidence-Based Approach), which is widely applied by banks.

While Pluto & Tasche propose a methodology for generating grade-level PDs by the most prudent estimation, the BCR approach concentrates on estimating a portfolio-wide PD that is then applied to the grade-level estimates, which in turn feed the capital requirements.

- Independent case: The conservative portfolio-wide estimate in the BCR setting is given by the solution of equation (1).


- Dependent case: Assume there is a single normally distributed risk factor Y to which all assets are correlated, and that each asset has correlation √ρ with the risk factor Y.

For a given value of the common factor Y, the conditional probability of default given a realization of the systematic factor is given by equation (2). The probability of default is found as the value p for which this relation holds.


- Multi-period case:

The conditional probability of default given a realization of the systematic factor over t years is given, as in the Vasicek model, by equation (3) (a sketch of this conditional PD is given below).
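Equations (2) and (3) are not reproduced in this post. As a hedged sketch of the standard Vasicek single-factor form they refer to (the function name and parameter values below are illustrative, not from the post), the conditional PD given a factor realization y can be computed as:

import numpy as np
from scipy.stats import norm

def conditional_pd(p, rho, y):
    # Vasicek conditional PD given systematic factor realization y:
    # p(y) = Phi( (Phi^-1(p) - sqrt(rho) * y) / sqrt(1 - rho) )
    return norm.cdf((norm.ppf(p) - np.sqrt(rho) * y) / np.sqrt(1 - rho))

print(conditional_pd(p=0.01, rho=0.12, y=-1.5))  # illustrative: conditional PD in a downturn state

Over t years, conditional on a path of factor realizations (y_1, ..., y_t), the no-default probability would be the product of the per-year (1 − p(y_t)) terms; this is the kind of quantity simulated in the estimation steps below.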


Estimation Method:

Steps of Execution:

1-     Draw N samples from N(λ, Σ), where λ is a zero vector with the same length as the number of time periods and Σ is the correlation matrix as in the Vasicek model.

2-     Evaluate f(p) as in equation (4).



3-     Find p such that f(p) is close to zero using the following iteration:

-         Set the number of iterations:

n = log2((p_high − p_low) / δ), where [p_low, p_high] is the interval in which p is believed to lie and δ is the accepted error.

-         For n iterations, the midpoint p_mid of the interval is calculated. It is then checked whether f(p_mid) > 0 or f(p_mid) < 0.

In the first case the lower bound is set equal to the midpoint; in the second case the upper bound is set equal to the midpoint.

-         When the n iterations are done, the estimated probability of default is set to the final midpoint (see the sketch below).
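Equation (4) is not shown above. As a hedged, illustrative sketch of the full procedure (assuming, as one common formulation, that f(p) is the simulated probability of observing zero defaults minus (1 − the confidence level); all names and parameter values are illustrative, not from the post):

import numpy as np
from scipy.stats import norm

def estimate_pd(n_obligors, factor_corr, rho, alpha=0.9,
                n_sims=100_000, p_low=1e-6, p_high=0.5, delta=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    T = factor_corr.shape[0]
    # Step 1: draw N samples from N(0, Sigma), one factor value per time period
    y = rng.multivariate_normal(np.zeros(T), factor_corr, size=n_sims)

    def f(p):
        # Step 2 (assumed form of equation (4)): Vasicek conditional PD per period,
        # average no-default probability across simulated factor paths, minus (1 - alpha)
        cond_pd = norm.cdf((norm.ppf(p) - np.sqrt(rho) * y) / np.sqrt(1 - rho))
        no_default = np.prod((1 - cond_pd) ** n_obligors, axis=1).mean()
        return no_default - (1 - alpha)

    # Step 3: bisection with n = log2((p_high - p_low) / delta) iterations
    n_iter = int(np.ceil(np.log2((p_high - p_low) / delta)))
    for _ in range(n_iter):
        p_mid = 0.5 * (p_low + p_high)
        if f(p_mid) > 0:
            p_low = p_mid    # f > 0: the PD estimate is still too low
        else:
            p_high = p_mid   # f < 0: the PD estimate is too high
    return 0.5 * (p_low + p_high)

# Example call with illustrative inputs: 3 annual periods, asset correlation 0.12
Sigma = np.array([[1.0, 0.3, 0.1],
                  [0.3, 1.0, 0.3],
                  [0.1, 0.3, 1.0]])
print(estimate_pd(n_obligors=100, factor_corr=Sigma, rho=0.12))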



 

Sunday, August 27, 2023

Python Generic Code (Probability of Default Model Development, Validation and Testing):

In the link below:

1- PD_Factors.csv: CSV with factors and required data
2- PD_Model_Generic_Python (Doc and PDF): Python generic code (as in steps)
3- PD_Estimate_Steps_Python: Steps in a Word document, as in the screenshot below





Thursday, August 19, 2021

Asymptotic Single Risk Factor Model (ASRF):

 Key assumptions: asymptoticity, a single risk factor, and normality.

PD assumptions and violations:

The probability of observing D defaults over N independent random draws (N being the total number of exposures in the credit portfolio) follows a Binomial distribution.
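As a quick illustration (the portfolio size and default probability below are hypothetical), the Binomial assumption lets the default-count distribution be evaluated directly:

from scipy.stats import binom

N, p = 100, 0.02               # illustrative portfolio size and default probability
print(binom.pmf(5, N, p))      # probability of observing exactly 5 defaults
print(binom.sf(5, N, p))       # probability of observing more than 5 defaults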
PD ASRF model Binomial Distribution assumptions:
  i.  Each asset in the rating grade has default probability P.
 ii.  Each pair of assets has default correlation ρ
iii.  The conditional correlation between any two assets is constant even if the number of defaults increases.
iv.   Normal distribution assumption for the systematic factor
 
ASRF model assumptions may be violated:

Assumptions (i) and (ii):
Let x1, ..., xn be random indicator variables representing the default behavior of the assets, where xj = 1 indicates the default of asset j. Define Pj as the probability of default of asset j given that assets 1 to j−1 are known to have defaulted.
Assumptions (i) and (ii) imply that:
P1 = P and P2 = P + (1 - P )*ρ
When assets are independent (ρ = 0), these assumptions lead to the Binomial distribution with Pj = P.
However, if ρ > 0, then x1, ..., xn are not independent and the Binomial distribution assumption is violated.
Assumption (iii):
If the conditional correlation between any two assets increases as the number of defaults increases, the conditional default probability also increases.
The increasing default probability given other defaults results in fatter tails of the Correlated Binomial distribution (a simulation sketch is given at the end of this post). Contrast assumption (iii) with the Binomial distribution, where the independence assumption implies that ρj = ρ for all j = 1, ..., n assets.
 
Assumption (iv):
The systematic factor may follow an autoregressive process.
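A minimal simulation sketch of the fatter-tails point above (all parameter values are illustrative, not from the post): introducing a single common factor fattens the tail of the default-count distribution relative to the independent Binomial.

import numpy as np
from scipy.stats import norm, binom

rng = np.random.default_rng(1)
N, p, rho, n_sims = 1000, 0.02, 0.15, 50_000

# One-factor model: asset value a_j = sqrt(rho)*Y + sqrt(1-rho)*eps_j, default if a_j < Phi^-1(p)
Y = rng.standard_normal(n_sims)
cond_pd = norm.cdf((norm.ppf(p) - np.sqrt(rho) * Y) / np.sqrt(1 - rho))
defaults = rng.binomial(N, cond_pd)      # default counts with a common factor

threshold = 2 * N * p                    # twice the expected number of defaults (= 40)
print("P(defaults > 40), correlated  :", (defaults > threshold).mean())
print("P(defaults > 40), independent :", binom.sf(threshold, N, p))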

Thursday, August 5, 2021

Modeling Low Default Portfolio Dependent Case:



VASICEK MODEL: Dependence between the defaults is explained by the Vasicek model.

By using the conditional probability from the Vasicek model, in the case where there are no defaults the probability of default is the solution of the equations below:
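The equations themselves are not reproduced in this post. As a hedged sketch under a standard single-period Vasicek formulation (function name and input values are illustrative): with zero observed defaults and confidence level γ, the PD solves E_Y[(1 − p(Y))^n] = 1 − γ, which can be solved numerically, e.g.:

import numpy as np
from scipy.stats import norm
from scipy.integrate import quad
from scipy.optimize import brentq

def zero_default_pd(n, rho, gamma):
    # Upper confidence bound PD with no observed defaults and dependence
    # driven by a single Vasicek factor Y ~ N(0, 1)
    def no_default_prob(p):
        integrand = lambda y: (1 - norm.cdf((norm.ppf(p) - np.sqrt(rho) * y)
                                            / np.sqrt(1 - rho))) ** n * norm.pdf(y)
        return quad(integrand, -8, 8)[0]          # E_Y[(1 - p(Y))^n]
    return brentq(lambda p: no_default_prob(p) - (1 - gamma), 1e-10, 0.999)

print(zero_default_pd(n=250, rho=0.12, gamma=0.9))  # illustrative inputs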




Thursday, July 22, 2021

Low Default Portfolio (PD)

 Modeling Low Default Portfolio (Independent Default Events):

The Pluto and Tasche method calculates the probability of default for portfolios with no or very few observed defaults.
It uses a one-sided upper confidence bound as the estimator of the PD.

Assumptions:
- n >0 borrowers in the portfolio.
- At the end of the observation period 0≤ d < n defaults are observed among the n borrowers.
- Default events are independent, hence the number of defaults in a portfolio is binomially distributed:
nCr * p^r * (1-p)^(n-r)
n is the total number of borrowers, r is the total number of defaults and p is the probability of default.
For the PD estimates to be logical, they should have the following characteristic:
p1 <= p2 <= p3 <= p4 <= ...
The most prudent estimation then assumes p1 = p2 = p3 = p4 = p5 = ...; in this scenario, all the 500 borrowers belong to the same risk characteristic, i.e. homogeneous borrowers.
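A minimal sketch of the resulting estimator (the helper name and values are illustrative): with d = 0 defaults among n borrowers, the one-sided upper bound at confidence level γ solves (1 − p)^n = 1 − γ; for d > 0 it can be obtained by inverting the Binomial tail (Clopper-Pearson style).

from scipy.stats import beta

def upper_bound_pd(n, d, gamma):
    # One-sided upper confidence bound for PD with d defaults among n independent borrowers
    if d == 0:
        return 1 - (1 - gamma) ** (1 / n)
    return beta.ppf(gamma, d + 1, n - d)

print(upper_bound_pd(n=500, d=0, gamma=0.9))  # ~0.0046 for the 500-borrower case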

E.g: 
https://drive.google.com/file/d/1OmGmQV-AsYPsfdRYowSy1bkZArgEFMvN/view?usp=sharing




Tuesday, July 18, 2017

Model Risk Management



Model risk is the potential for adverse consequences from decisions based on incorrect or misused model outputs and reports.
Or:
The outcome of the model is not the same as expected.

-         Model Failure Examples-


1-     Long Term Capital Management (LTCM) – The fund followed an arbitrage investment strategy on bonds, involving hedging against a range of volatility in foreign currencies and bonds, based on complex models.
Arbitrage margins are small, so the fund took on leveraged positions. When the Russian crisis kicked off in 1998, European and US markets fell drastically and LTCM was badly hit through market losses.

2-     CDO / MBS – 2007 subprime mortgage crisis – Between 2002 and 2007, mortgage underwriting standards deteriorated significantly. However, those loans were bundled into MBS and CDOs with high ratings, which were believed to be justified by credit enhancement techniques. Investors relied on rating agencies, blindly in many cases. A significant portion of AAA CDO and MBS tranches were finally downgraded to junk in 2007 and early 2008, once the housing bubble burst in 2006.

-         Market risk regulatory pre-crisis models-


-         The VaR metrics used before the outburst of the financial crisis did not adequately capture tail-risk events, credit risk events as well as market illiquidity.
-         When the financial crisis arose, essentially driven by credit risk events, a large number of banks posted daily trading losses many times greater than their VaR estimates.
-         Model risk- tail credit risk events were not adequately modelled, hence underestimating possible losses in stressed conditions.

1-  Types Of Models- 


2-  Elements of Model Risk Management Framework-


2.1  Model Lifecycle Management-

Model development, documentation, classification and follow up-
-         Models are classified according to the level of risk.
-         The documentation should include description, key variables, assumptions and algorithms.

2.2   Model Risk Quantification-

-         Data, sensitivity to errors or absence of variables;
-         Estimates, sensitivity of estimates (maximum impact, alternative models);
-         Uses, predictive power evolution, impact of erroneous use, etc.

2.3   Model Control Framework-

-         Models assigned the highest level of risk are subject to continuous assessment.
-         In addition to the above, all models should be re-evaluated by Validation:
o   Annually.
o   If they undergo material changes.
-         Before they are deployed to production, they should have been approved.

3-  Model Risk Assessment Framework-


1)     Aspects - Identifying issues.
2)     Impact - What are the consequences of the issues?
3)     Probability of occurrence.
4)     Risk Score = Probability * Consequences.
5)     Model Risk Control - Action to eliminate the issue.
6)     Residual Risk Score = Risk Score – Risk Score considering model risk control.
7)     Ranking - Sorting Residual Risk Scores to identify the issues that need priority.



4-  Model validation-

The set of processes and activities intended to verify that models are performing as expected.

4.1       Model Validation Metrics-

a)   Performance Metrics-

1-     Coefficient of Correlation
2-     Root Mean Square Error (RMSE)
3-     Signal-to-Noise Ratio

b)  Cross Validation-

Evaluates the model by partitioning the original sample into a training set used to fit the model and a test set used to evaluate it (see the sketch below).
b.1- Leave-k-Out Cross Validation - k observations are left out in each iteration.
b.2- K-fold Cross Validation – The original sample is partitioned into k sub-samples.
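A minimal k-fold cross validation sketch (assuming scikit-learn; X and y are hypothetical factor and default-flag arrays):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((1000, 5))                    # hypothetical factor matrix
y = (rng.random(1000) < 0.05).astype(int)    # hypothetical default flags

# 5-fold cross validation of a PD-style logistic regression, scored by ROC AUC
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5, scoring="roc_auc")
print(scores.mean(), scores.std())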

c)   KS Test-

Maximum difference between the cumulative % of events and the cumulative % of non-events.
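A minimal sketch of the KS statistic (the score and default-flag arrays are hypothetical):

import numpy as np

def ks_statistic(scores, defaults):
    # Maximum gap between the cumulative % of events (defaults) and
    # the cumulative % of non-events, with observations ordered by model score
    order = np.argsort(scores)
    d = np.asarray(defaults)[order]
    cum_event = np.cumsum(d) / d.sum()
    cum_non_event = np.cumsum(1 - d) / (1 - d).sum()
    return np.max(np.abs(cum_event - cum_non_event))

print(ks_statistic(scores=[0.1, 0.4, 0.35, 0.8], defaults=[0, 0, 1, 1]))  # illustrative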

d)  ROC curve for Logistic Regression-

Receiver Operating Characteristic (ROC).
Finding the area under the curve, with sensitivity and (1 − specificity) as the axes.
Sensitivity (Y-axis) – the proportion of actual positives (Y = 1) that the model predicts as positive.
Specificity – the proportion of actual negatives (Y = 0) that the model predicts as negative; the X-axis plots 1 − Specificity.
Both are defined by a cutoff c:
P > c is classified as positive (counts toward sensitivity)
P <= c is classified as negative (counts toward specificity)
As c increases from 0 to 1, each cutoff gives one point (1 − Specificity, Sensitivity) on the curve.
Result –
If the area under the curve is between 0.90 and 1.00, the model is considered excellent.
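A minimal sketch using scikit-learn (y_true and p_hat are hypothetical observed default flags and predicted PDs):

import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1])                  # hypothetical default flags
p_hat = np.array([0.1, 0.2, 0.8, 0.3, 0.6, 0.2, 0.4, 0.7])   # hypothetical predicted PDs

auc = roc_auc_score(y_true, p_hat)
fpr, tpr, cutoffs = roc_curve(y_true, p_hat)   # fpr = 1 - specificity, tpr = sensitivity
print("AUC:", auc)                             # 0.90-1.00 would be rated excellent above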

e)   Hosmer and Lemeshow or Chi- Square Test-

Five groups were formed.
For every group, the average estimated default Probability is calculated and used to derive the expected number of defaults per group.
Next, this number is compared with the amount of realized defaults in the respective group.
The test statistic over the groups of the estimation sample is chi-square distributed, which in turn yields a p-value for the rating model.
Calculated as: χ² = Σ_{i=1..k} (n_i·p_i − D_i)² / (n_i·p_i·(1 − p_i))
o k = number of rating classes, n_i = number of companies in rating class i, D_i = number of defaulted obligors in class i, p_i = forecasted probability of default for rating class i
o Compare with the p-value: there is no critical value of p that could be used to determine whether the estimated PDs are correct or not; the closer the p-value is to zero, the worse the estimation is.
o First – all else equal, the greater the chi-square number, the stronger the relationship between the dependent and independent variable.
o Second – the lower the probability associated with a chi-square statistic, the stronger the relationship between the dependent and independent variable.
o Third – if the probability is .05 or less, then you can generalize from a random sample to a population and claim the two variables are associated in the population.
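A minimal sketch of the statistic (the group-level inputs are hypothetical; the degrees-of-freedom convention is an assumption, with k − 2 a common choice):

import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(n_i, D_i, p_i):
    # chi2 = sum_i (n_i*p_i - D_i)^2 / (n_i*p_i*(1 - p_i)) over the k rating classes
    n_i, D_i, p_i = map(np.asarray, (n_i, D_i, p_i))
    stat = float(np.sum((n_i * p_i - D_i) ** 2 / (n_i * p_i * (1 - p_i))))
    p_value = chi2.sf(stat, df=len(n_i) - 2)   # assumed k - 2 degrees of freedom
    return stat, p_value

# Illustrative: five rating classes
print(hosmer_lemeshow(n_i=[200, 300, 250, 150, 100],
                      D_i=[1, 4, 8, 9, 12],
                      p_i=[0.005, 0.01, 0.03, 0.06, 0.12]))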

4.2       Other Model Validation Approaches-

-         Stress testing-  Analysis of the impact of single extreme events
-         Scenario testing- A scenario is a probable future environment, either at a point in time or over a period.
-         Sensitivity testing- A sensitivity is the effect of a set of alternative assumptions regarding a future environment.
-         Reverse stress testing
-         Back testing
-         Simulation/convergence test
-         Profit and loss attribution
-         Challenger/benchmark models
-         Replication
-         Boundary test


Example- Credit Risk Modeling-

Models are typically statistical in nature and the full suite of traditional model validation techniques is applicable. Some examples:
-         Results benchmarking - the process considers the model’s applications/uses to inform meaningful analysis. Benchmark both Expected Loss and Capital using various model settings and assumptions.
-         Sensitivity analysis - considers sensitivity to a variety of inputs and assumptions to provide effective challenge across the modeling process.
-         Back-testing-

