Appendix H : Eigenvectors and Eigenvalues
Definition: This appendix explains the concept of eigenvectors and eigenvalues in
linear algebra. Eigenvectors and eigenvalues are the mathematical foundation of PCA
(Principal Component Analysis), a dimensionality reduction technique.
- Eigenvector: Imagine a "main direction" that the data generally moves in. For
example, when analyzing the prices of many stocks, there may be a "common
direction" where most of the stocks move up or down together. The eigenvector will
indicate the extent to which each variable (e.g., individual stock prices) contributes to
that "main direction."
- Eigenvalue: Indicates the "strength" or "importance" of that "main direction." A large
eigenvalue means that the change in that direction accounts for a large portion of the
overall movement in the data.
Application: Helps identify the main drivers of financial market movement, such as
in yield curve analysis (Appendix I).
The appendix presents the basic equation:
Ax = λx
(where A is a square matrix, x is a non-zero vector, and λ is a scalar)
The equation can be written:
(A - λI)x = 0
The values of λ that satisfy this equation are called the eigenvalues of the matrix A,
and the corresponding x vectors are called the eigenvectors.
- This appendix describes how to find the eigenvalues by solving the determinant
equation det(A - λI) = 0, where I is the identity matrix. This equation is a
polynomial of degree n in λ, and its solutions are the eigenvalues.
- After the eigenvalues are found, the corresponding eigenvectors can be found by
substituting each eigenvalue back into the equation (A - λI)x = 0 and solving for x.
- The appendix also notes that for large matrices, it is often necessary to use
numerical procedures to determine the eigenvectors and eigenvalues.
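As an illustration, the same calculation can be carried out numerically. The sketch below uses numpy on an arbitrary 2x2 matrix chosen purely for this example:

```python
import numpy as np

# Arbitrary symmetric 2x2 matrix, chosen only for illustration
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# numpy solves det(A - lambda*I) = 0 and (A - lambda*I)x = 0 numerically
eigenvalues, eigenvectors = np.linalg.eig(A)

for lam, x in zip(eigenvalues, eigenvectors.T):
    # Check the defining relation A x = lambda x (residual should be ~0)
    residual = A @ x - lam * x
    print(f"lambda = {lam:.4f}, eigenvector = {x}, residual = {residual}")
```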
Appendix I : Principal Components Analysis
Definition: PCA is a statistical technique that uses eigenvectors and eigenvalues
to reduce data complexity by identifying a few "hidden factors" (principal
components) that explain most of the variation in the original data; geometrically,
the leading components span a low-dimensional linear subspace of the data. This helps
in market risk management, where a portfolio may be affected by hundreds or thousands
of risk factors (e.g., individual stock prices).
- The first step in PCA is to calculate the covariance matrix of the data. This is an
n x n matrix whose element (i, j) is the covariance between variables i and j; the
diagonal elements are the variances of each variable.
- The next step is to calculate the eigenvalues and eigenvectors of the covariance
matrix (as explained in Appendix H). The eigenvectors are chosen to have length 1.
The eigenvector corresponding to the highest eigenvalue is the first principal
component, the eigenvector corresponding to the second highest eigenvalue is the
second principal component, and so on.
- The eigenvalue of the ith principal component, expressed as a percentage of the
sum of all eigenvalues, indicates the proportion of the overall variance explained by
the ith principal component. The square root of the ith eigenvalue is the standard
deviation of the ith factor score.
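A minimal sketch of these steps in Python, assuming the data are stored as a matrix of observations by variables (the data below are random and purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative data: 250 observations of 10 variables (e.g., daily returns)
X = rng.normal(size=(250, 10))

# Step 1: covariance matrix (n x n)
cov = np.cov(X, rowvar=False)

# Step 2: eigenvalues and unit-length eigenvectors (eigh suits symmetric matrices)
eigenvalues, eigenvectors = np.linalg.eigh(cov)

# Order components from largest to smallest eigenvalue
order = np.argsort(eigenvalues)[::-1]
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

# Proportion of overall variance explained by each principal component
explained = eigenvalues / eigenvalues.sum()
print("Variance explained:", np.round(explained, 3))

# Standard deviation of the ith factor score = sqrt of the ith eigenvalue
print("Factor-score std devs:", np.round(np.sqrt(eigenvalues), 3))
```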
Example: You have information on 100 different commodities, but there may be only 3-4
key macroeconomic factors (e.g., global economic growth, inflation) that actually affect
the prices of most of these commodities. PCA helps to find these “key” factors.
Applications:
- Simplify risk management: Instead of tracking risk across hundreds of assets,
managers can focus on the risk from the few key components that matter most.
- Yield curve analysis: Identify the key types of yield curve shift (e.g., parallel
shifts, changes in slope, changes in curvature) and their relative importance. The
eigenvectors indicate how sensitive yields at different maturities are to each type of
shift, and the eigenvalues indicate how important each type of shift is to overall
yield curve movement (see the sketch after this list).
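As a hedged illustration of how the components can be used, the sketch below projects a portfolio's per-maturity rate sensitivities onto the eigenvectors to get its exposure to each yield curve factor. The loadings and exposures are invented for the example and only approximate the typical level/slope/curvature shapes:

```python
import numpy as np

# Invented factor loadings (eigenvectors) at five maturities: 1y, 2y, 5y, 10y, 30y.
# Column 1 ~ parallel shift, column 2 ~ slope change, column 3 ~ curvature change.
loadings = np.array([
    [0.45,  0.60,  0.55],
    [0.46,  0.35, -0.20],
    [0.45,  0.00, -0.58],
    [0.44, -0.35, -0.25],
    [0.43, -0.62,  0.50],
])

# Invented per-maturity sensitivities of a portfolio (e.g., DV01s in $ thousands)
exposures = np.array([10.0, -5.0, 20.0, -15.0, 3.0])

# Exposure to each principal component = projection of sensitivities onto its eigenvector
factor_exposures = loadings.T @ exposures
for i, e in enumerate(factor_exposures, start=1):
    print(f"Exposure to component {i}: {e:+.2f}")
```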
Appendix J : Manipulation of Credit Transition Matrices
Definition: A credit transition matrix is a table that shows the probability that an
entity with a current credit rating will move to another rating (or default) within a given
time period.
- Typically, this matrix is a square matrix, with rows and columns representing the
different credit ratings.
- Each element (i, j) in the matrix represents the probability that an institution with
credit rating i at the beginning of the period will have credit rating j at the end of the
period.
Purpose of Manipulating the Transition Matrix
- Transition matrices are typically given for a specific time period (e.g., one year).
However, in practice, we may need to estimate transition probabilities for other
horizons, such as months, several years, or even continuous time.
- Estimating transition probabilities for horizons longer than the stated period (e.g.,
the two-year matrix is the one-year matrix multiplied by itself).
- Estimating transition probabilities for horizons shorter than the stated period, for
example by finding a generator matrix Γ such that the transition matrix over a short
interval Δt is approximately I + ΓΔt, where I is the identity matrix (see the sketch
after this list).
- Modeling credit rating changes over time.
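A minimal sketch of both manipulations, assuming a simple three-state rating system (A, B, Default) with a one-year matrix whose values are invented for the example:

```python
import numpy as np
from scipy.linalg import logm, expm

# Illustrative one-year transition matrix for states (A, B, Default)
P1 = np.array([[0.90, 0.08, 0.02],
               [0.10, 0.80, 0.10],
               [0.00, 0.00, 1.00]])   # default is an absorbing state

# Longer horizon: the two-year matrix is the one-year matrix squared
P2 = np.linalg.matrix_power(P1, 2)

# Shorter horizon: find a generator matrix G with P(t) = expm(G*t), so the
# one-month matrix is expm(G/12), approximately I + G*(1/12).
# (The matrix logarithm of a real transition matrix is not always a valid
# generator; this is a simplified sketch.)
G = logm(P1).real
P_month = expm(G / 12)
P_month_approx = np.eye(3) + G / 12

print("Two-year matrix:\n", np.round(P2, 4))
print("One-month matrix:\n", np.round(P_month, 4))
print("First-order approximation I + G*dt:\n", np.round(P_month_approx, 4))
```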
Practical Applications:
Credit Risk Modeling:
- Estimate the probability of default and credit rating changes of counterparties in
credit risk models such as CreditMetrics.
- Assess credit portfolio risk by simulating different rating transition scenarios over
time.
- Calculate Credit Value at Risk (Credit VaR) to measure the potential credit loss of the
portfolio.
Debt Instrument Valuation:
- Transition and default probabilities are used to adjust the expected cash flows of
debt instruments (e.g., corporate bonds) and calculate their present value, taking into
account credit risk.
Counterparty Credit Risk Analysis:
- In calculating Credit Value Adjustment (CVA), the transition matrix helps estimate the
probability of default of a counterparty at various points in the future.
Portfolio Management:
- Portfolio managers use the transition matrix to monitor and manage the credit risk of
investments in various debt instruments.
Appendix K: Valuation of Credit Default Swaps
Credit Default Swaps (CDS) are contracts where a protection buyer pays periodic premiums
to a protection seller in exchange for compensation if a credit event occurs. The valuation of
a CDS involves calculating the present value of two payment streams: the premium leg and
the protection leg.
The premium leg represents the series of payments from the buyer and is valued as:
PV(Premium Leg) = s × ∑[Δᵢ × v(tᵢ) × q(tᵢ)]
Where s is the CDS spread, Δᵢ is the time period between payments, v(tᵢ) is the discount
factor, and q(tᵢ) is the survival probability to time tᵢ.
The protection leg represents the contingent payment upon default and is valued as:
PV(Protection Leg) = (1-R) × ∑[v(t̄ᵢ) × (q(tᵢ₋₁) - q(tᵢ))]
Where (1-R) is the loss given default, and (q(tᵢ₋₁) - q(tᵢ)) represents the probability of default
during the period.
The fair CDS spread equates these values:
s = (1-R) × ∑[v(t̄ᵢ) × (q(tᵢ₋₁) - q(tᵢ))] / ∑[Δᵢ × v(tᵢ) × q(tᵢ)]
Under a constant hazard rate (λ) assumption, survival probability simplifies to q(t) = e^(-λt),
and λ can be approximated as s/(1-R), providing a direct link between CDS spreads and
implied default probabilities.
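A minimal sketch of these formulas, assuming quarterly premium payments, a flat hazard rate, a flat risk-free rate, and defaults valued at mid-period; all parameter values below are illustrative:

```python
import numpy as np

def fair_cds_spread(hazard_rate, recovery, risk_free, maturity, freq=4):
    """Fair CDS spread under a constant hazard rate and a flat discount curve."""
    dt = 1.0 / freq
    times = np.arange(dt, maturity + 1e-9, dt)            # payment dates t_i
    survival = np.exp(-hazard_rate * times)               # q(t_i) = e^(-lambda*t_i)
    survival_prev = np.exp(-hazard_rate * (times - dt))   # q(t_{i-1})
    discount = np.exp(-risk_free * times)                 # v(t_i)
    discount_mid = np.exp(-risk_free * (times - dt / 2))  # v at mid-period default times

    # Premium leg per unit of spread: sum of dt * v(t_i) * q(t_i)
    premium_leg = np.sum(dt * discount * survival)
    # Protection leg: (1-R) * sum of v(t_mid) * (q(t_{i-1}) - q(t_i))
    protection_leg = (1 - recovery) * np.sum(discount_mid * (survival_prev - survival))
    return protection_leg / premium_leg

s = fair_cds_spread(hazard_rate=0.02, recovery=0.4, risk_free=0.03, maturity=5)
print(f"Fair spread: {s * 1e4:.1f} bp")   # close to lambda*(1-R) = 120 bp
```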
Appendix L: Valuation of Synthetic CDOs
A synthetic CDO is a structure where Party A agrees to pay Party B the losses on a portfolio
of debt instruments between an attachment point, αL, and a detachment point, αH. In return,
Party B makes periodic payments to Party A. For example, with αL = 8% and αH = 18%,
Party A covers all losses between 8% and 18% of the portfolio principal. Party B's payments
are determined by applying a spread, s, to the outstanding principal in the tranche.
The standard valuation model is the one-factor Gaussian copula model. We assume all
instruments in the portfolio have the same default probability distribution. The probability of
default by time t is Q(t) = 1 - e^(-λt), where λ is the hazard rate calculable from CDS spreads.
The one-factor Gaussian copula model gives the conditional default probability as:
Q(t|F) = N[(N^-1[Q(t)] - √ρF)/√(1-ρ)]
where N is the cumulative normal distribution function and ρ is the correlation parameter.
For a portfolio of n debt instruments with 1/n principal each, the probability of exactly k
defaults by time t conditional on F follows the binomial probability:
P(k,t|F) = n!/(k!(n-k)!)·[Q(t|F)]^k·[1-Q(t|F)]^(n-k)
With recovery rate R, we define:
nL = αL·n/(1-R) and nH = αH·n/(1-R)
Let m(x) be the smallest integer greater than x. When k < m(nL), there is no loss to the
tranche. When k ≥ m(nH), the tranche is wiped out. When m(nL) ≤ k < m(nH), the
proportional principal remaining is:
(αH - k(1-R)/n)/(αH - αL)
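A minimal sketch of the loss calculation just described, computing the expected proportional tranche principal at a single time t by integrating the conditional binomial distribution over the common factor F; the portfolio size, hazard rate, correlation, and recovery values below are illustrative:

```python
import numpy as np
from math import comb
from scipy.stats import norm

def expected_tranche_principal(t, n=100, lam=0.02, rho=0.3, R=0.4,
                               alpha_L=0.08, alpha_H=0.18):
    """Expected proportional tranche principal at time t, one-factor Gaussian copula."""
    Q_t = 1 - np.exp(-lam * t)                      # unconditional default probability
    n_L, n_H = alpha_L * n / (1 - R), alpha_H * n / (1 - R)
    m_L, m_H = int(np.floor(n_L)) + 1, int(np.floor(n_H)) + 1  # m(x): smallest int > x

    # Integrate over F ~ N(0,1) with Gauss-Hermite (probabilists') quadrature
    nodes, weights = np.polynomial.hermite_e.hermegauss(40)
    weights = weights / np.sqrt(2 * np.pi)

    expected = 0.0
    for F, w in zip(nodes, weights):
        # Conditional default probability Q(t|F)
        QF = norm.cdf((norm.ppf(Q_t) - np.sqrt(rho) * F) / np.sqrt(1 - rho))
        for k in range(n + 1):
            pk = comb(n, k) * QF**k * (1 - QF)**(n - k)   # P(k defaults | F)
            if k < m_L:
                principal = 1.0                            # no loss to the tranche
            elif k >= m_H:
                principal = 0.0                            # tranche wiped out
            else:
                principal = (alpha_H - k * (1 - R) / n) / (alpha_H - alpha_L)
            expected += w * pk * principal
    return expected

print(f"Expected tranche principal at t = 5: {expected_tranche_principal(5.0):.4f}")
```

In a full valuation, this expected principal would be evaluated at each payment date and discounted to build the legs listed below.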
Valuation involves calculating:
● Present value of expected spread payments received by Party A
● Present value of expected default payments made by Party A
● Present value of expected accrual payments
In practice, dealers quote implied correlations rather than spreads. Compound correlation is
the ρ value consistent with a tranche's market value. Base correlation is the ρ value consistent
with a tranche having zero attachment point and x% detachment point. Similar to volatility
smiles for options, CDOs exhibit correlation smiles across different tranches of the same
portfolio.