
Friday, June 14, 2024

Change Point Detection in Time Series

 

Change Point Detection Methods

Kernel Change Point Detection:

The kernel change point detection method detects changes in the distribution of the data, not just changes in the mean or variance.

A kernel is used to map the data into a high-dimensional feature space, where changes are more easily detectable. This approach uses the Maximum Mean Discrepancy (MMD) to measure the difference between the distributions of segments of the time series.

Steps:

1-      Data and Kernel Function: Consider a univariate time series {x_1, x_2, …, x_n}. We start by choosing a kernel function k(x, y) to measure the similarity between points.

2-      Construction of the Kernel Matrix: a kernel matrix K is constructed, where each element K_ij = k(x_i, x_j).

For the linear kernel, this is K_ij = x_i · x_j (i.e., K = XᵀX).

 

3-      Maximum Mean Discrepancy (MMD):

 

MMD measures how different two groups of data are by comparing the average of all pairwise similarities within each group with the average similarity between the groups; in effect, it compares two distributions to see whether they differ.

MMD is used to measure the difference between the distributions before and after a candidate change point t.

For each candidate change point t, the squared MMD between the segment before t and the segment after t is

MMD²(t) = (1/t²) · Σ_{i ≤ t} Σ_{j ≤ t} K_ij + (1/(n − t)²) · Σ_{i > t} Σ_{j > t} K_ij − (2/(t(n − t))) · Σ_{i ≤ t} Σ_{j > t} K_ij

In the above equation:

-          The first term measures the similarity within the first segment.

-          The second term measures the similarity within the second segment.

-          The third term measures the similarity between the two segments.

 

4-      To detect the change point, the MMD values are computed for all candidate change points t, and the one that maximizes the MMD value is chosen:

t* = argmax_t MMD²(t)
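Below is a minimal NumPy sketch of steps 1-4, assuming a Gaussian (RBF) kernel and a single change point; the function name mmd_change_point and the bandwidth sigma are illustrative choices, not taken from the linked workbook.

```python
import numpy as np

def mmd_change_point(x, sigma=1.0):
    """Return the index t maximizing the squared MMD between x[:t] and x[t:].

    Assumes a Gaussian kernel k(a, b) = exp(-(a - b)^2 / (2 * sigma^2)).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Step 2: kernel matrix K with K[i, j] = k(x_i, x_j)
    K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2.0 * sigma**2))

    best_t, best_mmd = None, -np.inf
    for t in range(2, n - 1):               # keep at least 2 points per segment
        within1 = K[:t, :t].mean()          # similarity within the first segment
        within2 = K[t:, t:].mean()          # similarity within the second segment
        between = K[:t, t:].mean()          # similarity between the two segments
        mmd2 = within1 + within2 - 2.0 * between   # step 3: squared MMD
        if mmd2 > best_mmd:                 # step 4: keep the maximizer
            best_t, best_mmd = t, mmd2
    return best_t, best_mmd

# Toy example: a mean shift at index 50 should be detected near t = 50
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 50), rng.normal(3, 1, 50)])
print(mmd_change_point(x))
```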


Excel Example: https://docs.google.com/spreadsheets/d/1IdC-ss1VjaL2QVQdABNwuIPfRphDtlZi/edit?usp=sharing&ouid=115594792889982302405&rtpof=true&sd=true

Saturday, November 4, 2023

PCA: Eigenvalues & Eigenvectors

 

Excel (macro-enabled workbook) and VBA code:

https://drive.google.com/drive/folders/18tIKLLg8MfJ2MYDjLPAWHspEbApVIpGz?usp=sharing


PCA: Eigenvalues & Eigenvectors:



Eigenvalues & Eigenvectors-

Eigenvalue- a scalar associated with a given linear transformation of a vector space, having the property that there is some nonzero vector which, when multiplied by the scalar, equals the vector obtained by letting the transformation operate on that vector; in particular, a root of the characteristic equation of a matrix.

An eigenvector (or characteristic vector) of a linear transformation is a non-zero vector whose direction does not change when that linear transformation is applied to it.

Let A be an n×n matrix. The number x is an eigenvalue of A if there exists a non-zero vector v such that

Av = xv

In this case, the vector v is called an eigenvector of A corresponding to the eigenvalue x.

Rewrite the condition Av = xv as

                                                                (A − xI)v = 0                  (E.1)

where I is the n×n identity matrix.

For a non-zero vector v to satisfy this equation, A − xI must not be invertible; if it were invertible, the only solution would be v = 0.

The characteristic polynomial is

                      p(x) = det(A − xI)                                                (E.2)

The roots of p(x) = 0 give the eigenvalues x. Substituting each eigenvalue back into E.1, i.e., solving (A − xI)v = 0, gives the corresponding eigenvectors.
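As a small worked example (not from the linked workbook): for A = [[2, 1], [1, 2]], p(x) = det(A − xI) = (2 − x)² − 1, whose roots are x = 1 and x = 3. Substituting x = 3 into E.1 gives −v₁ + v₂ = 0, so v = (1, 1)ᵀ is an eigenvector for x = 3; similarly, v = (1, −1)ᵀ is an eigenvector for x = 1.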

-        Eigenvalues represent the variance of the data along the eigenvector directions, whereas the variance components of the covariance matrix represent the spread along the original axes.

-        All the eigenvectors of a symmetric matrix (such as a covariance matrix) are perpendicular, i.e., at right angles to each other.

-        The eigenvector of the covariance matrix with the highest eigenvalue is the principal component of the data set.

-        The largest eigenvector of the covariance matrix always points in the direction of the largest variance of the data, and its eigenvalue equals the variance of the data along that direction.

-        The second largest eigenvector is always orthogonal to the largest eigenvector and points in the direction of the second largest spread of the data.

-        If the covariance matrix of the data is a diagonal matrix, so that the covariances are zero, then the variances must be equal to the eigenvalues x.
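The linked VBA code is not reproduced here; the following NumPy sketch shows the same ideas on simulated data (the covariance values and seed are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated correlated 2-D data
X = rng.multivariate_normal(mean=[0, 0], cov=[[3, 1], [1, 2]], size=500)

C = np.cov(X, rowvar=False)            # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)   # eigh: for symmetric matrices like C

order = np.argsort(eigvals)[::-1]      # sort descending by eigenvalue
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

print("eigenvalues (variance along each direction):", eigvals)
print("principal component (largest eigenvector):", eigvecs[:, 0])
print("orthogonality check (should be ~0):", eigvecs[:, 0] @ eigvecs[:, 1])

# Projecting the centered data onto the eigenvectors gives the PCA scores
scores = (X - X.mean(axis=0)) @ eigvecs
```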

Thursday, September 14, 2023

Ratio Outlier (Medcouple)

Refer to the workbook for details:

https://drive.google.com/drive/folders/19IEg3V008AkBwT5UGrsXCL8N6It7BGc_?usp=sharing
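The details are in the linked workbook; as a rough sketch, the medcouple-based (adjusted boxplot) outlier rule of Hubert and Vandervieren can be written as below, using the medcouple function from statsmodels (the simulated ratio data is an illustrative assumption).

```python
import numpy as np
from statsmodels.stats.stattools import medcouple

rng = np.random.default_rng(0)
ratios = rng.lognormal(mean=0.0, sigma=0.5, size=500)   # right-skewed ratio data

q1, q3 = np.percentile(ratios, [25, 75])
iqr = q3 - q1
mc = medcouple(ratios)          # robust skewness measure in [-1, 1]

# Adjusted boxplot fences (Hubert & Vandervieren, 2008)
if mc >= 0:
    lo = q1 - 1.5 * np.exp(-4 * mc) * iqr
    hi = q3 + 1.5 * np.exp(3 * mc) * iqr
else:
    lo = q1 - 1.5 * np.exp(-3 * mc) * iqr
    hi = q3 + 1.5 * np.exp(4 * mc) * iqr

outliers = ratios[(ratios < lo) | (ratios > hi)]
print(f"fences: [{lo:.3f}, {hi:.3f}]  outliers flagged: {len(outliers)}")
```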



Saturday, August 13, 2022

Bias & Variance Trade-off

The error of a model can be decomposed into noise, bias, and variance.


There is a tradeoff between a model's ability to minimize bias and variance.

Overfitting: high variance; the model performs well on the training set but not on the test set. A low training error does not imply good expected performance: over-fitting.
Underfitting: high bias; the model performs well on neither the training data nor the test data.
Noise: the model is neither overfitting nor underfitting, and the high MSE is simply due to the amount of noise in the dataset.

Error due to Bias: actual value − average (predicted value).
A high-bias model's characteristics:
1.   High training error.
2.   Validation error similar in magnitude to the training error.
Error due to Variance:
The variability of a model's prediction for a given data point when the entire model-building process is repeated multiple times; the variance is how much the predictions for a given point vary between different realizations of the model.
A high-variance model's characteristics:
1.   Low training error.
2.   Very high validation error.
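A small simulation sketch of this decomposition (the sine target, sample sizes, and polynomial degrees are illustrative assumptions): repeating the whole model-building process many times lets us measure bias and variance at a single test point.

```python
import numpy as np

rng = np.random.default_rng(0)
true_f = lambda x: np.sin(2 * np.pi * x)
x_test, n, runs, noise = 0.3, 30, 200, 0.3

def bias_variance(degree):
    preds = []
    for _ in range(runs):                  # repeat the whole model-building process
        x = rng.uniform(0, 1, n)
        y = true_f(x) + rng.normal(0, noise, n)
        coefs = np.polyfit(x, y, degree)   # fit a polynomial of the given degree
        preds.append(np.polyval(coefs, x_test))
    preds = np.array(preds)
    bias = true_f(x_test) - preds.mean()   # error due to bias
    return bias**2, preds.var()            # error due to variance

for degree in (1, 3, 9):
    b2, v = bias_variance(degree)
    print(f"degree {degree}: bias^2 = {b2:.4f}, variance = {v:.4f}")
# Low degrees underfit (high bias); high degrees overfit (high variance).
```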

Sunday, July 3, 2022

Bayesian Inference


Update probabilities with new information (data).

Combine two distributions (likelihood and prior) into the posterior.

The posterior is used to find the “best” parameters, in the sense of maximizing the posterior probability.

Steps:

i.            Prior: choose a PDF to model the parameter, i.e., the prior distribution P(θ).

ii.            Likelihood: choose a PDF for P(X|θ): how the data X will look given the parameter θ.

iii.            Posterior: calculate the posterior distribution P(θ|X) and pick the θ that has the highest P(θ|X).

 

Calculate P(θ) and P(X|θ) for a specific θ and multiply them together. Pick the highest P(θ) · P(X|θ) among the different θ's.

The posterior becomes the new prior; repeat step iii as you get more data.
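A minimal grid-search sketch of these steps for a coin-flipping example, assuming a Beta(2, 2) prior and a binomial likelihood (the data and prior here are illustrative, not from the post):

```python
import numpy as np
from scipy.stats import beta, binom

k, n = 7, 10                               # data: 7 successes in 10 trials
thetas = np.linspace(0.001, 0.999, 999)    # grid of candidate thetas

prior = beta.pdf(thetas, 2, 2)             # step i:   prior P(theta)
likelihood = binom.pmf(k, n, thetas)       # step ii:  likelihood P(X | theta)
posterior = prior * likelihood             # step iii: posterior, up to a constant
                                           # (the constant does not move the argmax)

theta_map = thetas[np.argmax(posterior)]   # theta with the highest P(theta | X)
print(f"MAP estimate: {theta_map:.3f}")    # analytic mode of Beta(9, 5) is 8/12
```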



Wednesday, June 15, 2022

Beta Distribution

Beta Distribution: treats the probability of success on any single trial as the random variable, and the number of trials n and the total number of successes in n trials as constants.

For the binomial distribution, the number of successes X is the random variable, and the number of trials n and the probability of success p on any single trial are parameters (i.e., constants).
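A short scipy.stats illustration of this role reversal (the numbers are made up):

```python
from scipy.stats import beta, binom

n, k, p = 10, 7, 0.7

# Binomial: the number of successes X is random; n and p are constants
print(binom.pmf(k, n, p))               # P(X = 7 | n = 10, p = 0.7)

# Beta: the success probability itself is the random variable; with k
# successes in n trials its density is Beta(k + 1, n - k + 1)
print(beta.pdf(p, k + 1, n - k + 1))    # density of p = 0.7 under that Beta
```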



Friday, March 4, 2022

Identifying Outliers in Ratios for Skewed Distributions

https://docs.google.com/spreadsheets/d/1gBjuMYN_pRLu2MfNU0WW9zzsq5EMIN6d/edit?usp=sharing&ouid=115594792889982302405&rtpof=true&sd=true




 

 


Saturday, December 12, 2020

Functions, Processes & Transforms

Characteristic Function- the inverse Fourier transform of the characteristic function recovers the PDF.


Moment-generating function- encodes measures of central tendency and dispersion. The 0th moment is the total probability (i.e., one), the 1st raw moment is the mean µ, the 2nd central moment is the variance σ², and the standardized 3rd and 4th moments give the skewness and kurtosis.


Laplace Transform- converts integral and differential equations into algebraic equations, moving from the time domain to the frequency domain; it expresses a function as a superposition of moments.
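A small SymPy sketch of reading moments off an MGF, using the normal distribution's MGF as an illustrative example:

```python
import sympy as sp

t, mu, sigma = sp.symbols('t mu sigma', positive=True)
M = sp.exp(mu * t + sigma**2 * t**2 / 2)   # MGF of X ~ N(mu, sigma^2)

# The n-th raw moment E[X^n] is the n-th derivative of the MGF at t = 0
m1 = sp.diff(M, t, 1).subs(t, 0)           # -> mu (the mean)
m2 = sp.diff(M, t, 2).subs(t, 0)           # -> mu**2 + sigma**2
print(m1, sp.simplify(m2 - m1**2))         # variance = m2 - m1^2 -> sigma**2
```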








Tuesday, December 8, 2020

Distributions and Inequalities

Distributions-

o   Binomial distribution- probabilities of the number of successes over a given number of trials: P(X = k) = C(n, k) · p^k · (1 − p)^(n−k).



o   Normal Distribution- X is a normal random variable (mean = µ, variance = σ²). The PDF for the normal distribution is f(x) = (1/(σ√(2π))) · e^(−(x−µ)²/(2σ²)).


o   Poisson Distribution- average number of successes = λ; the Poisson probability of k successes is P(X = k) = e^(−λ) · λ^k / k!.


o   Geometric distribution- the number of trials X it takes to get the first success has a geometric distribution: P(X = k) = (1 − p)^(k−1) · p.


o   Chi-Square Distribution- used for goodness-of-fit tests and in confidence-interval estimation for a population standard deviation (σ) from a sample standard deviation (s).

o   Cauchy distribution- the mean µ and variance σ² are undefined; the mode and median are defined and equal to x₀. The location parameter can be a mean, median, or mode, and the scale parameter can be a variance, standard deviation, etc.

o   Weibull Distribution- like the lognormal distribution, a change in its shape parameter changes the shape of the density and, in turn, its skewness.

o   Joint Distribution Function- the joint CDF of two variables, F(x, y) = P(X ≤ x, Y ≤ y).


o   Marginal distributions- the distribution of each variable separately. If X and Y are independent, then p_XY(s, t) = p_X(s) · p_Y(t).


o   Conditional Distribution and Expectation- p(s | t) = p_XY(s, t) / p_Y(t), with conditional expectation E[X | Y = t] = Σ_s s · p(s | t).

o   Inequalities-

a)      Jensen's Inequality- for a convex function f, f(E[X]) ≤ E[f(X)].

b)      Markov’s inequality- gives an upper bound on the probability that a non-negative random variable is at least a positive constant: P(X ≥ a) ≤ E[X] / a.


c)      Tchebychev’s (Chebyshev’s) inequality- gives an upper bound on the probability that a random variable deviates from its mean by at least k standard deviations: P(|X − µ| ≥ kσ) ≤ 1/k².
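A quick simulation check of both bounds (the exponential sample is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=100_000)   # non-negative, mean 1

# Markov: P(X >= a) <= E[X] / a for a > 0
a = 3.0
print((x >= a).mean(), "<=", x.mean() / a)

# Chebyshev: P(|X - mu| >= k*sigma) <= 1 / k^2
mu, sigma, k = x.mean(), x.std(), 2.0
print((np.abs(x - mu) >= k * sigma).mean(), "<=", 1 / k**2)
```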







