Friday, June 14, 2024

Change Point Detection in Time Series

 

Change Point Detection Methods

Kernel Change Point Detection:

The kernel change point detection method detects changes in the distribution of the data, not just changes in the mean or variance.

The kernel method maps the data into a high-dimensional feature space, where changes are easier to detect. This approach uses the Maximum Mean Discrepancy (MMD) to measure the difference between the distributions of segments of the time series.

Steps:

1-      Data and Kernel Function: Consider a univariate time series {x1, x2, …, xn}. We start by choosing a kernel function k(x, y) to measure the similarity between points.

2-      Construction of the Kernel Matrix: The kernel matrix K is constructed, where each element Kij = k(xi, xj).

For the linear kernel, this is Kij = xi·xj (i.e., K = XᵀX).

 

3-      Maximum Mean Discrepancy (MMD):

 

MMD measures how different two groups of data are: it compares the average of all pairwise similarities within each group and between the groups, and so tells us whether the two distributions differ.

MMD is used to measure the difference between the distributions before and after a candidate change point t.

For each candidate change point t, the squared MMD between the segment before t and the segment after t is

MMD²(t) = (1/t²) Σ_{i≤t, j≤t} Kij + (1/(n−t)²) Σ_{i>t, j>t} Kij − (2/(t(n−t))) Σ_{i≤t, j>t} Kij

In the above equation:

-          The first term measures the similarity within the first segment.

-          The second term measures the similarity within the second segment.

-          The third term measures the similarity between the two segments.

 

4-      To detect the change point, the MMD values are computed for all possible change points t, and the one that maximizes the MMD value is chosen:

t̂ = argmax_t MMD²(t)
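A minimal Python sketch of these steps, assuming a linear kernel and the biased MMD estimator above (function and variable names are illustrative):

```python
import numpy as np

def mmd_change_point(x, min_seg=2):
    """Return the candidate change point t that maximizes MMD^2(t)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Linear kernel matrix: K_ij = x_i * x_j (outer product for univariate data).
    K = np.outer(x, x)

    best_t, best_mmd = None, -np.inf
    for t in range(min_seg, n - min_seg + 1):
        A = K[:t, :t]    # within the first segment
        B = K[t:, t:]    # within the second segment
        C = K[:t, t:]    # between the two segments
        mmd2 = A.mean() + B.mean() - 2.0 * C.mean()
        if mmd2 > best_mmd:
            best_t, best_mmd = t, mmd2
    return best_t, best_mmd

# Toy example: mean shift halfway through the series.
rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(0, 1, 50), rng.normal(3, 1, 50)])
t_hat, mmd2 = mmd_change_point(series)
print(t_hat, round(mmd2, 3))
```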


Excel Example: https://docs.google.com/spreadsheets/d/1IdC-ss1VjaL2QVQdABNwuIPfRphDtlZi/edit?usp=sharing&ouid=115594792889982302405&rtpof=true&sd=true

Friday, December 15, 2023

Modeling Low Default Portfolio

The BCR Approach (Modeling Low Default Portfolio):


Benjamin, Cathcart and Ryan (BCR) proposed adjustments to the Pluto & Tasche methodology, also called the confidence-based approach, which is widely applied by banks.

Pluto & Tasche propose a methodology for generating grade-level PDs by the most prudent estimation; the BCR approach instead concentrates on estimating a portfolio-wide PD that is then applied to grade-level estimates, resulting in capital requirements.

- Independent case: The conservative portfolio-wide estimate in the BCR setting is given by the solution of (1) (see the reconstructed forms after this list).


- Dependent case: It is assumed that there is a single normally distributed risk factor Y to which all assets are correlated, and that each asset has correlation √ρ with the risk factor Y.

For a given value y of the common factor Y, the conditional probability of default given a realization of the systematic factor is given by (2). Estimating the probability of default is equivalent to finding p such that (2) holds.


- Multi-period case: The conditional probability of default given a realization of the systematic factor for t years is as in the Vasicek model (3).
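Equations (1)-(3) appeared as images in the original post. As a hedged reconstruction, assuming the standard confidence-based setup (N obligors, D observed defaults, confidence level γ) and the Vasicek single-factor model, (1) and (2) would take roughly these forms:

(1)   Σ_{k=0..D} C(N, k) · p^k · (1 − p)^(N−k) = 1 − γ,   solved for the portfolio-wide PD p

(2)   P(default | Y = y) = Φ( (Φ⁻¹(p) − √ρ · y) / √(1 − ρ) )

with (3) applying the conditional form in (2) period by period across the t years.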


Estimation Method:

Steps of Execution:

1-     Draw N samples from N(λ, Σ), where λ is a zero vector with length equal to the number of time periods and Σ is the correlation matrix as in the Vasicek model.

2-     Compute f(p) as defined in equation (4).



3-     Find p such that f(p) is close to zero using the following iteration:

-         Set the number of iterations:

n = log2((p_high − p_low) / δ), where [p_low, p_high] is the interval p is believed to lie in and δ is the accepted error.

-         For each of the n iterations, the midpoint p_mid of the interval is calculated, and the sign of f(p_mid) is checked.

If f(p_mid) > 0, the lower bound is set equal to the midpoint; if f(p_mid) < 0, the upper bound is set equal to the midpoint.

-         When the n iterations are done, the estimated probability of default is set to the final midpoint.
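A minimal Python sketch of this bisection, assuming a calibration function f as in equation (4) (not reproduced here, so the example f below is purely hypothetical):

```python
import math

def bisect_pd(f, p_low, p_high, delta=1e-6):
    """Bisection search for p with f(p) close to zero, following the steps above.

    The sign convention assumed is f(p) > 0 below the root and
    f(p) < 0 above it, as implied by the update rule in the text.
    """
    # Number of iterations needed to reach the accepted error delta.
    n = math.ceil(math.log2((p_high - p_low) / delta))
    for _ in range(n):
        p_mid = 0.5 * (p_low + p_high)
        if f(p_mid) > 0:
            p_low = p_mid    # root lies above the midpoint
        else:
            p_high = p_mid   # root lies below the midpoint
    return 0.5 * (p_low + p_high)

# Hypothetical example: f decreasing in p, with root at p = 0.02.
print(round(bisect_pd(lambda p: 0.02 - p, 0.0, 1.0), 6))
```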



 

Saturday, November 4, 2023

PCA: Eigenvalues & Eigenvectors

 

Excel (Macro enable workbook) and VBA code:

https://drive.google.com/drive/folders/18tIKLLg8MfJ2MYDjLPAWHspEbApVIpGz?usp=sharing


PCA: Eigenvalues & Eigenvectors:



Eigenvalues & Eigenvectors:

Eigenvalue: a scalar associated with a given linear transformation of a vector space, having the property that there is some nonzero vector which, when multiplied by the scalar, equals the vector obtained by letting the transformation operate on the vector; especially, a root of the characteristic equation of a matrix.

An eigenvector (or characteristic vector) of a linear transformation is a non-zero vector whose direction does not change when that linear transformation is applied to it.

Let A be an n×n matrix. The number x is an eigenvalue of A if there exists a non-zero vector v such that

Av = xv

In this case, the vector v is called an eigenvector of A corresponding to the eigenvalue x.

Rewrite the condition Av = xv as

                                                                (A − xI)v = 0                  (E.1)

where I is the n×n identity matrix.

For a non-zero vector v to satisfy this equation, A − xI must not be invertible. If it were invertible, then v = 0.

The characteristic polynomial is

                      p(x) = det(A − xI)                                                (E.2)

The roots of p(x) = 0 give us the eigenvalues x.

Substituting each eigenvalue x into (E.1) and solving for v gives us the corresponding eigenvectors.
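As a quick worked example (with an illustrative matrix): take A = [[2, 1], [1, 2]]. Then p(x) = det(A − xI) = (2 − x)² − 1 = 0, giving eigenvalues x = 3 and x = 1. Substituting x = 3 into (E.1) yields v = (1, 1)ᵀ (up to scale), and x = 1 yields v = (1, −1)ᵀ.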

-        Eigenvalues represent the variance of the data along the eigenvector directions, whereas the variance components of the covariance matrix represent the spread along the axes.

-        All the eigenvectors of a symmetric matrix (such as a covariance matrix) are perpendicular, i.e. at right angles to each other.

-        The eigenvector of the covariance matrix with the highest eigenvalue is the first principal component of the data set.

 

-        The largest eigenvector of the covariance matrix always points into the direction of the largest variance of the data, and the magnitude of this vector equals the corresponding eigenvalue.

-        The second largest eigenvector is always orthogonal to the largest eigenvector, and points into the direction of the second largest spread of the data.

-        If the covariance matrix of our data is a diagonal matrix, such that the covariances are zero, then the variances are equal to the eigenvalues.
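A minimal NumPy sketch of these points (the data below is illustrative): compute the covariance matrix, extract its eigenvalues and eigenvectors, and confirm that the variances of the projected data match the eigenvalues:

```python
import numpy as np

# Toy 2-D data with most variance along the line y ~ 2x.
rng = np.random.default_rng(0)
t = rng.normal(0, 1, 500)
data = np.column_stack([t, 2 * t + rng.normal(0, 0.3, 500)])

# Covariance matrix of the data.
cov = np.cov(data, rowvar=False)

# eigh is appropriate for symmetric matrices; eigenvalues come back ascending,
# so reorder them largest-first.
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

print("eigenvalues (variance along each eigenvector):", eigvals)
print("first principal component:", eigvecs[:, 0])

# Projecting the centered data onto the eigenvectors gives the principal scores;
# their variances reproduce the eigenvalues.
scores = (data - data.mean(axis=0)) @ eigvecs
print("variance of scores:", scores.var(axis=0, ddof=1))
```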

Thursday, September 14, 2023

Ratio Outlier (Medcouple)

Refer to the link below for details:

https://drive.google.com/drive/folders/19IEg3V008AkBwT5UGrsXCL8N6It7BGc_?usp=sharing



Sunday, August 27, 2023

Python Generic Code (Probability of Default Model Development, Validation and Testing):

In the link below:

1- PD_Factors.csv: CSV with factors and required data
2- PD_Model_Generic_Python (Doc and PDF): Python generic code (as in steps)
3- PD_Estimate_Steps_Python: Steps in a Word document, as in the screenshot below





Saturday, August 13, 2022

Bias & Variance Tradeoff:

The errors of a model can be decomposed into noise, bias, and variance.


There is a tradeoff between a model's ability to minimize bias and variance.

Overfitting: variance is high; the model performs well on the training set but not on the testing data set. A low training error does not imply good expected performance: this is overfitting.
Underfitting: bias is high; the model performs well neither on the training nor on the testing data.
Noise: the model is neither overfitting nor underfitting, and the high MSE is simply due to the amount of noise in the dataset.

Error due to bias: actual value − average(predicted value), i.e. how far the average prediction (over repeated model builds) is from the true value.
A high bias model characteristic:
1.   High training error.
2.   Validation error is similar in magnitude to the training error.
Error due to variance:
The variability of a model's prediction for a given data point. Imagine repeating the entire model-building process multiple times; the variance is how much the predictions for a given point vary between different realizations of the model.
A high variance model characteristic:
1.   Low training error
2.   Very high Validation error
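A minimal Python sketch of this decomposition by simulation, assuming a toy setup (the true function, noise level, and polynomial degrees below are all illustrative): repeating the entire model-building process many times shows the bias and variance behavior described above.

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.sin                       # "true" function generating the data
x_train = np.linspace(0, 3, 20)
x_test = np.linspace(0, 3, 50)

def bias_variance(degree, n_runs=200, sigma=0.3):
    """Repeat the whole model-building process n_runs times."""
    preds = np.empty((n_runs, len(x_test)))
    for r in range(n_runs):
        # Fresh noisy training sample each run.
        y = f(x_train) + rng.normal(0, sigma, len(x_train))
        coefs = np.polyfit(x_train, y, degree)
        preds[r] = np.polyval(coefs, x_test)
    # Bias: actual value minus the average prediction (squared, then averaged).
    bias2 = np.mean((preds.mean(axis=0) - f(x_test)) ** 2)
    # Variance: how much predictions vary between realizations of the model.
    variance = np.mean(preds.var(axis=0))
    return bias2, variance

for degree in (1, 4, 9):   # underfit, reasonable fit, overfit
    b2, v = bias_variance(degree)
    print(f"degree {degree}: bias^2 = {b2:.4f}, variance = {v:.4f}")
```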

Sunday, July 3, 2022

Bayesian Inference


Bayesian inference updates probabilities with new information (data).

It combines two distributions (the likelihood and the prior) into the posterior.

The posterior is used to find the “best” parameters in terms of maximizing the posterior probability.

Steps:

i.            Prior: Choose a PDF to model the parameter, i.e. the prior distribution P(θ).

ii.            Likelihood: Choose a PDF for P(X|θ), describing how the data X will look given the parameter θ.

iii.            Posterior: Calculate the posterior distribution P(θ|X) and pick the θ that has the highest P(θ|X).

 

Calculate P(θ) and P(X|θ) for a specific θ and multiply them together. Pick the highest P(θ) · P(X|θ) among the different θ’s.

The posterior becomes the new prior. Repeat step iii as you get more data.
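A minimal Python sketch of these steps, assuming a hypothetical coin-flip example with a Beta prior and a binomial likelihood over a grid of candidate θ’s:

```python
import numpy as np
from scipy.stats import beta, binom

thetas = np.linspace(0.001, 0.999, 999)       # grid of candidate parameters

prior = beta.pdf(thetas, 2, 2)                # i.   prior P(theta): Beta(2, 2)
heads, flips = 7, 10
likelihood = binom.pmf(heads, flips, thetas)  # ii.  likelihood P(X | theta)

posterior = prior * likelihood                # iii. P(theta) * P(X | theta)
# Normalization does not change the argmax, so it is skipped here.
print("MAP estimate:", thetas[np.argmax(posterior)])

# The posterior becomes the new prior as more data arrives.
new_posterior = posterior * binom.pmf(3, 5, thetas)
print("updated MAP:", thetas[np.argmax(new_posterior)])
```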


