# Collateralized Debt Obligation

The Collateralized Debt Obligation (CDO) was one of the "chief culprits" behind the 2007 subprime crisis. Years later, most people who follow current events and have some curiosity have probably worked out roughly how a CDO contract is defined and how it operates. Still, the CDO is one of the more complex credit derivatives, so the purpose of this review is to briefly summarize what a CDO is and how it is priced mathematically, in order to form a reasonably clear picture.

$R_j$ is the recovery rate of the $j$-th obligor, i.e. the fraction of its notional $N_j$ that the CDO investor recovers upon default;

$\tau_j$ denotes the default time, and $H^j_t=1_{\{\tau_j\leq t\}}$ indicates that the $j$-th obligor has defaulted by time $t$. The cumulative loss $L(t)$ can then be written as

$L(t)=\sum^n_{j=1}(1-R_j)N_j1_{\tau_j\leq t}.$

For a tranche with attachment point $A$ and detachment point $B$, the tranche loss is

$M_t=(L(t)-A)\,1_{\{A\leq L(t)\leq B\}}+(B-A)\,1_{\{L(t)>B\}}.$

The protection leg pays out the tranche losses as they occur; with discount factor $\beta_t$,

$\text{Protection Leg}=\int^T_0\beta_t\,dM_t,$

while the premium leg pays the spread $\kappa$ on the outstanding tranche notional at the payment dates $t_1<\cdots<t_J=T$:

$\text{Premium Leg}=\kappa\sum^J_{j=1}\beta_{t_j}(t_j-t_{j-1})\left(\max\{L(t_j),B\}-\max\{A,L(t_j)\}\right),$

where $\max\{L(t_j),B\}-\max\{A,L(t_j)\}=(B-A)-M_{t_j}$ is the outstanding tranche notional at $t_j$.

Pricing the tranche then amounts to computing the risk-neutral expectations of the two legs,

$\mathbb{E}\left(\int^T_0\beta_t\,dM_t\right)$ and
$\mathbb{E}\left(\kappa\sum^J_{j=1}\beta_{t_j}(t_j-t_{j-1})\left(\max\{L(t_j),B\}-\max\{A,L(t_j)\}\right)\right),$

and the fair spread $\kappa$ is the value that makes the two expectations equal.
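The two legs above can be estimated by simple Monte Carlo. The sketch below assumes independent exponential default times with a common hazard rate, a flat discount rate, equal notionals, and quarterly premium dates; all of these parameter choices are illustrative, not part of any standard contract.

```python
import numpy as np

rng = np.random.default_rng(0)

n_names, notional, recovery = 100, 1.0, 0.4
h, disc, T = 0.02, 0.03, 5.0        # hazard rate, flat discount rate, maturity (assumed)
A, B = 3.0, 7.0                     # attachment / detachment points in loss units
pay_dates = np.arange(0.25, T + 1e-9, 0.25)
dt = 0.25
n_paths = 5000

loss_unit = (1 - recovery) * notional
tau = rng.exponential(1.0 / h, size=(n_paths, n_names))  # default times

def tranche_loss(L):
    """M_t = (L - A) 1_{A <= L <= B} + (B - A) 1_{L > B}."""
    return np.clip(L - A, 0.0, B - A)

# Protection leg: discounted tranche-loss increments at each default time.
prot = np.zeros(n_paths)
for k in range(n_names):
    t_k = tau[:, k]
    # portfolio loss just before name k defaults (names defaulting earlier)
    L_before = loss_unit * (tau < t_k[:, None]).sum(axis=1)
    dM = tranche_loss(L_before + loss_unit) - tranche_loss(L_before)
    prot += (t_k <= T) * np.exp(-disc * t_k) * dM

# Premium leg per unit spread: discounted outstanding notional at each date.
prem = np.zeros(n_paths)
for t in pay_dates:
    L_t = loss_unit * (tau <= t).sum(axis=1)
    prem += np.exp(-disc * t) * dt * ((B - A) - tranche_loss(L_t))

fair_spread = prot.mean() / prem.mean()
```

The fair spread is simply the ratio of the two estimated expectations; in practice the default times would be correlated (e.g. through a copula), which this independent-defaults sketch deliberately ignores.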

# Fixed Income Modeling Review 8

The Feynman-Kac formula, named after Richard Feynman and Mark Kac, establishes a link between parabolic partial differential equations (PDEs) and stochastic processes. Suppose $S(t)$ follows the stochastic process

$dS(t)=\mu(t,\omega)dt+\sigma(t,\omega)dW(t)$

and if $V$ is defined as

$V(t,S(t))=E^Q_t[H(T,S(T))]$

then $V$ satisfies

$\frac{\partial V}{\partial t}+\mu\frac{\partial V}{\partial S}+\frac{1}{2}\sigma^2\frac{\partial^2 V}{\partial S^2}=0$, $V(T,S(T))=H(T,S(T))$.

The theorem also holds in the converse direction. Thus the Feynman-Kac formula can be applied to interest rate models; for example, under the Hull-White model, the zero coupon bond price $P(t,T)$ satisfies

$\frac{\partial P}{\partial t}+(\theta(t)-ar(t))\frac{\partial P}{\partial r}+\frac{1}{2}\sigma^2\frac{\partial^2 P}{\partial r^2}=r(t)P$, $P(T,T)=1$

we can deduce the analytic solution for $P(t,T)$ by using the theorem described above.
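For reference, solving this PDE (for instance via an affine exponential ansatz) yields the standard Hull-White bond price formula:

```latex
P(t,T) = A(t,T)\, e^{-B(t,T)\, r(t)}, \qquad
B(t,T) = \frac{1 - e^{-a(T-t)}}{a},
```

with $A(t,T)=\frac{P(0,T)}{P(0,t)}\exp\left(B(t,T)f(0,t)-\frac{\sigma^2}{4a}\left(1-e^{-2at}\right)B(t,T)^2\right)$, where $f(0,t)$ is the instantaneous forward rate implied by the initial curve.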

Another application of the Feynman-Kac formula in finance is the numerical solution of the pricing PDE by finite-difference discretization, using explicit or implicit schemes. For the explicit scheme, we use the following discretization:

$\left(\frac{\partial S}{\partial r}\right)_{i,j}\approx\frac{S_{i,j+1}-S_{i,j-1}}{2\Delta r}$,
$\left(\frac{\partial^2 S}{\partial r^2}\right)_{i,j}\approx\frac{S_{i,j+1}-2S_{i,j}+S_{i,j-1}}{(\Delta r)^2}$,
$\left(\frac{\partial S}{\partial t}\right)_{i,j}\approx\frac{S_{i,j}-S_{i-1,j}}{\Delta t}$

Substituting these into the PDE, the equation can be written as

$S_{i-1,j}=A_{i,j}S_{i,j+1}+B_{i,j}S_{i,j}+C_{i,j}S_{i,j-1}$

This scheme is equivalent to trinomial tree valuation:

$S_{i-1,j}=p_uS_{i,j+1}+p_mS_{i,j}+p_dS_{i,j-1}-r_j\Delta tS_{i,j}$

and setting the rate step to $\Delta r=\sigma\sqrt{3\Delta t}$ gives the most efficient approximation, under which the scheme becomes fourth-order accurate with respect to $r$.

When applying the explicit finite difference method, boundary conditions have very little effect on the result, but we need the stability condition

$\sigma^2_{i,j}\frac{\Delta t}{(\Delta r)^2}\leq\frac{1}{2}$

to hold, because otherwise small numerical errors can grow uncontrollably.
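The explicit backward recursion can be sketched as follows. The drift $a(b-r)$, the grid sizes, and the linear-extrapolation boundary handling are all illustrative assumptions (a Vasicek-style model rather than any particular calibrated one); the time step is chosen to respect the stability condition $\sigma^2\Delta t/(\Delta r)^2\leq 1/2$.

```python
import numpy as np

# Explicit finite differences for the bond PDE
#   dP/dt + a(b - r) dP/dr + 0.5 sigma^2 d2P/dr2 = r P,  P(T,T) = 1.
a, b, sigma, T = 0.1, 0.05, 0.01, 1.0   # illustrative Vasicek-style parameters
M = 100
r = np.linspace(0.0, 0.15, M + 1)
dr = r[1] - r[0]
dt = 0.4 * dr**2 / sigma**2             # respects sigma^2 dt / dr^2 <= 1/2
n_steps = int(np.ceil(T / dt))
dt = T / n_steps

mu = a * (b - r)
P = np.ones(M + 1)                      # terminal condition P(T,T) = 1
for _ in range(n_steps):
    # S_{i-1,j} = A S_{i,j+1} + B S_{i,j} + C S_{i,j-1}
    A_ = dt * (mu[1:-1] / (2 * dr) + sigma**2 / (2 * dr**2))
    B_ = 1 - dt * (r[1:-1] + sigma**2 / dr**2)
    C_ = dt * (-mu[1:-1] / (2 * dr) + sigma**2 / (2 * dr**2))
    new = np.empty_like(P)
    new[1:-1] = A_ * P[2:] + B_ * P[1:-1] + C_ * P[:-2]
    # crude boundary handling: linear extrapolation at the grid edges
    new[0] = 2 * new[1] - new[2]
    new[-1] = 2 * new[-2] - new[-3]
    P = new

price = np.interp(0.05, r, P)           # P(0,T) read off at r(0) = 5%
```

With these parameters the price lands close to the known Vasicek analytic value (about 0.95 for $r(0)=5\%$ and $T=1$), which makes the scheme easy to sanity-check.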

For the implicit scheme, we discretize the PDE as follows:

$\left(\frac{\partial S}{\partial r}\right)_{i,j}\approx\frac{S_{i-1,j+1}-S_{i-1,j-1}}{2\Delta r}$,
$\left(\frac{\partial^2 S}{\partial r^2}\right)_{i,j}\approx\frac{S_{i-1,j+1}-2S_{i-1,j}+S_{i-1,j-1}}{(\Delta r)^2}$,
$\left(\frac{\partial S}{\partial t}\right)_{i,j}\approx\frac{S_{i,j}-S_{i-1,j}}{\Delta t}$

then the discretized pricing equation becomes

$A_{i,j}S_{i-1,j+1}+B_{i,j}S_{i-1,j}+C_{i,j}S_{i-1,j-1}=S_{i,j}$

It has a relationship to trinomial trees:

$(1-r_j\Delta t)S_{i,j}\approx p_uS_{i-1,j+1}+p_mS_{i-1,j}+p_dS_{i-1,j-1}$

Since the time-$(i-1)$ values are used to compute $S_{i,j}$, pricing backward requires solving a system of linear equations at each time step. We also need to handle the boundary conditions, where we usually set $r_0=0$ and $r_M=r_\infty$ for some sufficiently large $r_\infty$. Hence, the implicit scheme is slightly more difficult to implement than the explicit one; however, it is always stable and convergent. In practice, a very commonly used method is the Crank-Nicolson method, which combines the explicit and implicit schemes:

$\left(\frac{\partial S}{\partial r}\right)_{i,j}\approx(1-\theta)\frac{S_{i,j+1}-S_{i,j-1}}{2\Delta r}+\theta \frac{S_{i-1,j+1}-S_{i-1,j-1}}{2\Delta r}$,
$\left(\frac{\partial^2 S}{\partial r^2}\right)_{i,j}\approx(1-\theta)\frac{S_{i,j+1}-2S_{i,j}+S_{i,j-1}}{(\Delta r)^2}+\theta \frac{S_{i-1,j+1}-2S_{i-1,j}+S_{i-1,j-1}}{(\Delta r)^2}$,
$\left(\frac{\partial S}{\partial t}\right)_{i,j}\approx(1-\theta)\frac{S_{i,j}-S_{i-1,j}}{\Delta t}+\theta \frac{S_{i,j}-S_{i-1,j}}{\Delta t}=\frac{S_{i,j}-S_{i-1,j}}{\Delta t}$

Although the Crank-Nicolson method ($\theta=1/2$) is a little harder to implement than either the explicit or the implicit scheme, it is as stable as the fully implicit method and is second-order accurate with respect to the time step, which is better than both.
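The $\theta$-weighted scheme ($\theta=1$ fully implicit, $\theta=1/2$ Crank-Nicolson) can be sketched as below, again with an illustrative Vasicek-style drift $a(b-r)$ and frozen edge values as a simple Dirichlet-like boundary treatment (both are assumptions for the sketch). Each backward step solves a tridiagonal system with the Thomas algorithm.

```python
import numpy as np

a, b, sigma, T, theta = 0.1, 0.05, 0.01, 1.0, 0.5   # theta = 1/2: Crank-Nicolson
M, N = 100, 100
r = np.linspace(0.0, 0.15, M + 1)
dr, dt = r[1] - r[0], T / N
mu = a * (b - r)

# spatial operator L at interior nodes:
# (L P)_j = mu_j (P_{j+1}-P_{j-1})/(2 dr) + 0.5 sigma^2 (P_{j+1}-2P_j+P_{j-1})/dr^2 - r_j P_j
lo = -mu[1:-1] / (2 * dr) + sigma**2 / (2 * dr**2)
di = -sigma**2 / dr**2 - r[1:-1]
up = mu[1:-1] / (2 * dr) + sigma**2 / (2 * dr**2)

def thomas(l, d, u, rhs):
    """Solve a tridiagonal system (l: sub-, d: main, u: super-diagonal)."""
    n = len(d)
    d, rhs = d.copy(), rhs.copy()
    for i in range(1, n):
        w = l[i] / d[i - 1]
        d[i] -= w * u[i - 1]
        rhs[i] -= w * rhs[i - 1]
    x = np.empty(n)
    x[-1] = rhs[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (rhs[i] - u[i] * x[i + 1]) / d[i]
    return x

P = np.ones(M + 1)                       # terminal condition P(T,T) = 1
for _ in range(N):
    # (I - theta dt L) P_{i-1} = (I + (1-theta) dt L) P_i
    rhs = P[1:-1] + (1 - theta) * dt * (lo * P[:-2] + di * P[1:-1] + up * P[2:])
    rhs[0] += theta * dt * lo[0] * P[0]      # boundary values kept frozen
    rhs[-1] += theta * dt * up[-1] * P[-1]
    P[1:-1] = thomas(-theta * dt * lo, 1 - theta * dt * di, -theta * dt * up, rhs)

price = np.interp(0.05, r, P)
```

Unlike the explicit scheme, no stability restriction on $\Delta t$ is needed here; the cost is one tridiagonal solve per step, which is still $O(M)$.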

# Fixed Income Modeling Review 7

In finance, there are two major applications of the Monte Carlo simulation:

-- Generating stochastic paths for interest rates, exchange rates, and stock prices;
-- Numerical valuation of derivative instruments.

Consider the risk-neutral pricing equation for a security $S$ with payoff $H(T)$:

$S(t)=E_t[\exp(-\int^T_tr(s,\omega)ds)H(T,\omega)]$

We can generate a large number of equally probable sample paths; the security value at time $t=0$ can then be approximated as

$S(0)\approx\frac{1}{N}\sum^N_{n=1}\exp\left(-\int^T_0 r(s,\omega_n)ds\right)H(T,\omega_n).$

For a rate process $dr=\mu(r,t)dt+\sigma(r,t)dW(t)$, path generation is straightforward:

$r(t_{n+1})=r(t_n)+\mu(r(t_n),t_n)\cdot\Delta t+\sigma(r(t_n),t_n)\cdot\sqrt{\Delta t}\cdot \varepsilon_{n+1}$

where the $\varepsilon_n$ are independent standard normal random numbers. For a stochastic driver of the form

$r(t)=F(\varphi(t)+u(t))$
$du=\mu(t,u)dt+\sigma(t,u)dW(t)$

we sample $u(t)$ first and set $r_n(t)=F(\varphi(t)+u_n(t))$.
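The Euler recursion and the discounted-payoff average can be sketched together. The Vasicek-style drift $a(b-r)$, the parameter values, and the zero coupon bond payoff $H=1$ are all illustrative assumptions.

```python
import numpy as np

# Euler path generation + risk-neutral MC pricing of a zero coupon bond.
rng = np.random.default_rng(42)
a, b, sigma = 0.1, 0.05, 0.01           # illustrative Vasicek-style parameters
r0, T, n_steps, n_paths = 0.05, 1.0, 100, 20000
dt = T / n_steps

r = np.full(n_paths, r0)
integral = np.zeros(n_paths)            # approximates \int_0^T r(s) ds per path
for _ in range(n_steps):
    integral += r * dt                  # left-endpoint rule
    eps = rng.standard_normal(n_paths)  # the epsilon_{n+1} in the recursion
    r = r + a * (b - r) * dt + sigma * np.sqrt(dt) * eps

price = np.exp(-integral).mean()        # S(0) = (1/N) sum exp(-\int r) H, H = 1
```

All paths advance in lockstep as numpy vectors, so the recursion is a single loop over time steps rather than over paths.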

Compared to lattice valuation methods, the Monte Carlo approach has two important advantages:

-- Monte Carlo method easily handles path dependent instruments
-- Monte Carlo method is well suited to be used with multi-factor models

however, it also has drawbacks:

-- It is ill-suited for pricing interest rate derivatives with embedded exercise rights
-- It converges to true value very slowly
-- Longer rates along short rate paths cannot be implied from the paths
-- It does not give the same result twice since its value is random
-- Generating sample paths for high dimensional problems, which are an important feature of interest rate derivatives, presents considerable practical difficulties
-- MC has the opposite problem of the recombining lattice: at each point on a path, the full stochastic future is inaccessible

To implement the Monte Carlo method, we first generate uniform random numbers and then use methods such as Box-Muller or the inverse transform to obtain normal random numbers. Because of the perfect foresight problem, zero coupon bond prices cannot be computed from the paths without the use of either analytic pricing formulas or interest rate trees. Although the MC method is not used to calibrate the model, the MC paths should be calibrated on top of the model calibration to make sure they price zero coupon bonds correctly. Instead of the continuous sampling described above, we could first discretize the underlying continuous short rate process by means of a short rate lattice and then sample the lattice instead, which is called discrete sampling. The advantage of discrete sampling is that longer yields are readily available at each MC node, and we also have a complete view of the stochastic future at each node on a path.

We want to decrease the error of the MC method as much as possible. One way is to increase the number of paths; the other is variance reduction. There are two main ways to do variance reduction: control variates and improving the sampling quality. The latter has the advantage that it does not depend on the characteristics of each instrument, as the former does. To improve the sampling quality, the key is to improve the quality of the sampled Brownian motion. Since we generate uniform random numbers first, we can apply an approach called stratified sampling to improve their uniformity:

$u^*_n=\frac{n-1}{N}+\frac{u_n}{N}$
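In code, the stratification is a one-liner: it places exactly one draw in each of the $N$ equal subintervals of $[0,1)$.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000
u = rng.random(N)                       # raw uniforms in [0, 1)
u_star = np.arange(N) / N + u / N       # u*_n = (n-1)/N + u_n/N, n = 1..N
```

Each `u_star[n]` now lies in $[n/N,(n+1)/N)$, so the empirical distribution is much closer to uniform than the raw draws.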

Based on the fact that Brownian motion is perfectly symmetric about zero, we can generate Wiener sample paths that are symmetric to each other with respect to zero:

$W_{-n}(t)=-W_n(t)$
$V_{MC}=\frac{1}{N}\sum^N_{n=1}\frac{V(W_n)+V(W_{-n})}{2}$

and this is called antithetic sampling. After the sample paths $W_n(t_i)$ are generated, one can make a simple adjustment called moment matching to ensure the correct mean and standard deviation:

$W^*_n(t_i)=\sqrt{t_i}\cdot\frac{W_n(t_i)-M_i}{S_i}$

where $M_i$ and $S_i$ are the actual mean and standard deviation of the paths $W_n$ at time $t_i$.

# Fixed Income Modeling Review 6

To apply the theoretical models, let us first see how interest rate trees are used for pricing and calibration. As mentioned in Review 5, the Markov property of short rate models is needed to implement a recombining lattice.

Usually, a short rate model can be written as

$r(t)=F(u(t)+\varphi(t))$
$du(t)=-au(t)dt+\sigma dW(t).$

Hence, we first discretize the process $u(t)$. Typically, people use binomial and trinomial trees, but for mean-reverting models a binomial tree cannot be used. The time step $\Delta t$ is arbitrary, while the state step $\Delta u$ and the branch probabilities $p_u$, $p_m$ and $p_d$ should be chosen so that the discrete dynamics matches the first few moments of the continuous one. Since we can solve

$u(t+\Delta t)=e^{-a\Delta t}u(t)+\sigma\int^{t+\Delta t}_te^{-a(t+\Delta t-s)}dW(s)$

then by moment-matching, we can set

$\Delta u=\sqrt{3V}$, $V=\sigma^2\frac{1-\exp(-2a\Delta t)}{2a}$

and we select the middle branching node at the next time step to be the one closest to the mean of the continuous process:

$k=\left\lfloor je^{-a\Delta t}+\frac{1}{2}\right\rfloor$

Branching probabilities are determined as

$p_u=\frac{1}{6}+\frac{1}{2}(\beta^2+\beta)$
$p_m=\frac{2}{3}-\beta^2$
$p_d=\frac{1}{6}+\frac{1}{2}(\beta^2-\beta)$
where $\beta=je^{-a\Delta t}-k$.
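The branching rule above fits in a few lines; the parameter values here are arbitrary illustrative choices.

```python
import numpy as np

a, sigma, dt = 0.1, 0.01, 0.25          # illustrative mean reversion, vol, time step
V = sigma**2 * (1 - np.exp(-2 * a * dt)) / (2 * a)
du = np.sqrt(3 * V)                      # state step Delta u = sqrt(3V)

def branch(j):
    """Central target node k and probabilities for node j."""
    k = int(np.floor(j * np.exp(-a * dt) + 0.5))   # node nearest the conditional mean
    beta = j * np.exp(-a * dt) - k
    p_u = 1/6 + (beta**2 + beta) / 2
    p_m = 2/3 - beta**2
    p_d = 1/6 + (beta**2 - beta) / 2
    return k, p_u, p_m, p_d

k, p_u, p_m, p_d = branch(5)
```

By construction the three probabilities sum to one and reproduce the conditional mean $je^{-a\Delta t}$ and variance $V$ of the driver, which is easy to verify numerically.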

Once the tree for the stochastic driver $u(t)$ is built, we need to convert it into a short rate tree according to the model's functional form:

normal: $r(i,j)=r_0(i)+u(i,j)$
lognormal: $r(i,j)=r_0(i)\cdot\exp(u(i,j))$

where mean level vector $r_0(i)$ is calibrated to the term structure of zero coupon bond prices.

Since trees are a discretization of the rate dynamics, the evolution of the short rate within each time step is not described by the tree and thus needs to be specified; usually we assume the short rate is constant within each period. To price derivatives, we apply backward induction:

$S(t)=E^B_t[\exp(-\int_t^{t+\Delta t}r(s)ds)(S(t+\Delta t)+CF(t+\Delta t))]$

where $CF(t+\Delta t)$ is the cash flow at time $t+\Delta t$. The remaining problem is how to calibrate the mean level vector $r_0(i)$. Denote the prices of zero coupon bonds with maturities $t_i=i\Delta t$ by $P(i)$, and apply the following iterative search forward:

(1) Set $r_0(0)=-\ln(P(1))/\Delta t$
(2) Search for the value of $r_0(1)$ such that lattice price of the zero coupon bond maturing at $t_2$ equals $P(2)$
(3) ...
(4) Assuming $r_0(0),\ldots,r_0(i-1)$ are done, search for the value of $r_0(i)$ to fit $P(i+1)$
(5) ...

We need to keep in mind that we always have to calibrate $r_0(i)$ numerically, even when analytic formulas for the no-arbitrage drift exist, because the use of trees alters the continuous dynamics. To make steps (2) and (4) concrete, we introduce the Arrow-Debreu security, which pays \$1 if node $(i,j)$ is reached and nothing otherwise. On the tree, its prices satisfy the forward recursion

$AD(i+1,j)=\sum_k AD(i,k)\exp(-r(i,k)\Delta t)P_{i,k\rightarrow i+1,j}$

One very useful property of Arrow-Debreu prices is that

$P(0,t_i)=\sum_j AD(i,j)$
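This property can be checked on a toy lattice. The sketch below assumes, purely for illustration, a recombining trinomial tree with no mean reversion (straight branching with constant probabilities $1/6$, $2/3$, $1/6$) and rates $r(i,j)=r_0+j\,\Delta r$; it runs the Arrow-Debreu forward recursion and compares $\sum_j AD(n,j)$ against the zero coupon bond price from backward induction on the same lattice.

```python
import numpy as np

r0, dr, dt, n = 0.05, 0.005, 0.25, 4    # illustrative toy-lattice parameters
p = {+1: 1/6, 0: 2/3, -1: 1/6}          # straight trinomial branching

def rate(j):
    return r0 + j * dr

# forward recursion: AD(i+1,j) = sum_k AD(i,k) exp(-r(i,k) dt) p_{k->j}
AD = {0: 1.0}                           # AD(0,0) = 1
for i in range(n):
    nxt = {}
    for k, v in AD.items():
        for move, prob in p.items():
            j = k + move
            nxt[j] = nxt.get(j, 0.0) + v * np.exp(-rate(k) * dt) * prob
    AD = nxt

ad_price = sum(AD.values())             # should equal P(0, t_n)

# backward induction check on the same lattice
V = {j: 1.0 for j in range(-n, n + 1)}  # P(T,T) = 1 at every terminal node
for i in range(n - 1, -1, -1):
    V = {j: np.exp(-rate(j) * dt) * sum(p[m] * V[j + m] for m in p)
         for j in range(-i, i + 1)}
back_price = V[0]
```

During calibration, the forward recursion is the cheaper of the two: refitting $r_0(i)$ only requires repricing the bond maturing at $t_{i+1}$ from the already-computed $AD(i,\cdot)$, not a full backward sweep.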

Recombining interest rate trees have a serious shortcoming: the rate history is completely lost, which makes them incompatible with path-dependent instruments. To overcome this problem, one can use the tower law and record cash flows at the lattice nodes not when they are paid but rather when they become certain, expressing them with appropriate discounting.