Estimating Market Risk Measures
LOS 1. Estimate VaR using a historical simulation approach.
We start with a profit and loss (P/L) time series calculated from a series of security prices and their associated interim payments: $$P/L_t = P_t + D_t - P_{t-1}$$ where $D_t$ is an interim payment (say, a coupon or dividend). We can also adjust the P/L calculation for the time value of money, either by discounting the end-of-period value (present-value P/L) or by compounding the start-of-period price (end-of-period P/L): $$P/L_t = \frac{P_t+D_t}{1+r}-P_{t-1}, \qquad P/L_t = \left(P_t +D_t\right) - P_{t-1}(1+r)$$
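As an illustration, here is a minimal sketch of these P/L calculations in Python; the prices, dividends, and discount rate are made-up values for illustration only:

```python
import numpy as np

# Hypothetical daily closing prices, interim payments, and a per-period rate
P = np.array([100.0, 101.5, 100.8, 102.3])   # P_0 ... P_3
D = np.array([0.0, 0.0, 0.5, 0.0])           # dividend/coupon received at t
r = 0.0002                                   # per-period discount rate

# Unadjusted P/L_t = P_t + D_t - P_{t-1}
pl = P[1:] + D[1:] - P[:-1]

# Time-value adjustments: discount the end-of-period value, or compound the start value
pl_present = (P[1:] + D[1:]) / (1 + r) - P[:-1]    # present-value P/L
pl_forward = (P[1:] + D[1:]) - P[:-1] * (1 + r)    # end-of-period P/L

print(pl, pl_present, pl_forward)
```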
Using the above, we can define a return series as:
- Arithmetic Returns $(r_t)$: $$r_t = \frac{P_t+D_t - P_{t-1}}{P_{t-1}}$$
- Geometric Returns $(R_t)$: $$R_t =\ln \left(\frac{P_t +D_t}{P_{t-1}}\right)$$
The two return types can be converted from one to the other using a simple equivalence relation: $$R_t =\ln\left(1+r_t\right)$$ Historical simulation then involves just two steps:
- Order or sort the daily $P/L_t$ series
- Corresponding to the chosen confidence level ($c$, say 95% or 99%), pick the observation $P/L^*$ that cuts off the worst $(1-c)$ fraction of outcomes; the VaR is the negative of this P/L. For example, with 1,000 observations and $c = 95\%$, the VaR is read off around the 50th worst observation.
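A minimal sketch of historical-simulation VaR in Python; the P/L data here are simulated, and the quantile convention is one of several reasonable choices for discrete data:

```python
import numpy as np

rng = np.random.default_rng(0)
pl = rng.normal(loc=0.0, scale=10_000, size=1_000)   # simulated daily P/L

def hist_sim_var(pl, c=0.95):
    """Historical-simulation VaR: the loss cut off by the worst (1-c) of outcomes."""
    # VaR is reported as a positive loss, i.e. the negative of the (1-c) P/L quantile
    return -np.quantile(pl, 1 - c)

print(hist_sim_var(pl, 0.95), hist_sim_var(pl, 0.99))
```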
LOS 2. Estimate VaR using a parametric approach for both normal and lognormal return distributions.
This approach assumes that the P/L follows a parametric distribution (specifically, the normal distribution). VaR at a confidence level $c$ can then be defined by: $$VaR = -\left(\mu_{P/L} -\sigma_{P/L} z_c\right)$$ One can define two types of parametric VaR:
1. Normal VaR
We apply the normality assumption to arithmetic returns, i.e. $r_t \sim N(\mu_r, \sigma_r)$. Let the notional amount invested at the beginning of the period be $P_{t-1}$. Based on our confidence level $c$, we pick the critical value $z_c$ and state the VaR as: $$VaR = -\left[\mu_r -z_c \sigma_r\right]P_{t-1}$$
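For example, with hypothetical figures $\mu_r = 0.1\%$ and $\sigma_r = 2\%$ per day, a position of $P_{t-1} = \$1{,}000{,}000$, and $c = 95\%$ (so $z_c = 1.645$): $$VaR = -\left[0.001 - 1.645 \times 0.02\right] \times 1{,}000{,}000 \approx \$31{,}900$$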
2. Lognormal VaR
In this case, we apply the normality assumption to geometric returns $R_t$, which implicitly assigns a lognormal distribution to prices: $$R_t \sim N\left(\mu_R , \sigma_R\right) \iff P_t \sim \text{Lognormal}$$ $$VaR = P_{t-1}\left(1- \exp\left[\mu_R- \sigma_R z_c\right]\right)$$ Normal and lognormal VaR are close to each other when returns are small, e.g. over short (daily) horizons.
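A minimal sketch comparing the two parametric VaRs in Python; the mean, volatility, and position size are assumed illustration values:

```python
from math import exp
from scipy.stats import norm

mu, sigma = 0.001, 0.02       # assumed daily return mean and volatility
position = 1_000_000          # assumed position value P_{t-1}
c = 0.95
z = norm.ppf(c)               # critical value z_c (about 1.645 for 95%)

normal_var = -(mu - z * sigma) * position
lognormal_var = position * (1 - exp(mu - z * sigma))

print(normal_var, lognormal_var)   # close for small daily returns
```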
LOS 3. Define coherent risk measures.
Consider two portfolios $X$ and $Y$ and let $\rho(\cdot)$ be a risk measure. $\rho$ is coherent if it satisfies all of the following conditions (for any choice of $X$ and $Y$):
- Monotonicity: $Y \geq X \Rightarrow \rho (Y) \leq \rho (X)$
- Subadditivity: $\rho(X+Y)\leq \rho(X)+\rho(Y)$
- Positive Homogeneity: $\rho(hX) = h\rho(X); h\gt 0$
- Translational Invariance: $\rho(X+n) = \rho (X) -n$
Subadditivity is important in the following sense:
- [Margin Calculations] If margin requirements are set using a risk measure that is not sub-additive, investors are tempted to split their positions into numerous small accounts (one per risk) in order to reduce the total margin charged.
- [Regulatory Capital] If a regulator uses a risk measure that is not sub-additive to set regulatory capital, firms are incentivized to break themselves up into smaller businesses to lower their overall capital requirement.
- [Upper Bound] A sub-additive risk measure gives us a conservative (upper-bound) estimate of portfolio risk simply by adding up the risks of the constituent sub-portfolios.
VaR is not, in general, a sub-additive risk measure. It becomes sub-additive only under restrictive assumptions, for example if the P/L distribution is elliptical (such as the normal distribution). The sketch below illustrates the failure of sub-additivity with a simple two-bond example.
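A minimal numerical sketch of this failure, using a made-up example of two independent bonds that each default with 4% probability and lose 100 on default: at the 95% level each bond's stand-alone VaR is 0, yet the portfolio's VaR is positive.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
loss_a = np.where(rng.random(n) < 0.04, 100.0, 0.0)   # bond A: 4% default probability
loss_b = np.where(rng.random(n) < 0.04, 100.0, 0.0)   # bond B: independent, 4%

def var(losses, c=0.95):
    # VaR taken directly as the c-quantile of the loss distribution
    return np.quantile(losses, c)

print(var(loss_a), var(loss_b))   # each about 0: default probability (4%) < 5% tail
print(var(loss_a + loss_b))       # about 100: P(at least one default) ~ 7.8% > 5%
# var(A+B) > var(A) + var(B): VaR violates sub-additivity here
```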
LOS 4. Estimate the expected shortfall given P/L or return data.
The expected shortfall (ES) is the probability-weighted average of tail losses. It tells us how much loss to expect if our loss does exceed VaR: $$ES = E(\text{loss} \mid \text{loss} \gt VaR)$$ With simple historical simulation, ES can be estimated as the simple average of the worst $N(1-c)$ observations, since each observation carries the same probability mass (weight). Unlike VaR, which may be ambiguous for discrete (granular) data, ES has no such ambiguity (provided you are supplied a VaR number).
Since ES is the probability-weighted average of tail losses, it can also be estimated as the average of tail VaRs. To do so, we slice the tail into $n$ slices of equal probability mass (which creates $n-1$ interior VaRs), estimate the VaR at each slice boundary, and compute ES as the average of these VaRs.
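A minimal sketch of both estimators in Python, reusing a simulated P/L series like the one in the historical-simulation example above; the number of slices is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(0)
pl = rng.normal(loc=0.0, scale=10_000, size=1_000)   # simulated daily P/L
c = 0.95

# Method 1: average of the tail observations (losses beyond VaR)
var = -np.quantile(pl, 1 - c)
tail_losses = -pl[-pl > var]
es_tail_average = tail_losses.mean()

# Method 2: average of tail VaRs over n equal-probability slices of the tail
n = 1000
slice_points = np.linspace(1 - c, 0, n + 1)[1:-1]    # n-1 interior quantile levels
es_tail_vars = np.mean(-np.quantile(pl, slice_points))

print(var, es_tail_average, es_tail_vars)
```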
LOS 5. Estimate risk measures by estimating quantiles.
A general risk measure is a weighted average of the quantiles of a loss distribution: $$M_{\phi} = \int ^1_0 \phi(p)\, q_p\, dp$$ where $\phi (p)$ is the weighting function or risk spectrum. For VaR at confidence level $c$, all of the weight is placed on the single quantile $p = c$ (formally, $\phi$ is a Dirac delta at $c$): $$\phi (p) = \begin{cases} 1 & \mbox{if } p = c \\ 0 & \mbox{otherwise} \end{cases}$$ and for ES: $$\phi (p) = \begin{cases} 0 & \mbox{if } p \lt c \\ \frac{1}{1-c} & \mbox{if } p \geq c \end{cases}$$ If $\phi (p)$ satisfies the following conditions, the resulting risk measure is coherent and belongs to the class of spectral risk measures:
- Non-negativity: $\phi (p) \geq 0 \ \forall p \in [0,1]$
- Normalization: $\int^1_0 \phi (p)\, dp =1$
- Weakly increasing weights: $p_2 \gt p_1 \Rightarrow \phi (p_2) \geq \phi (p_1)$
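A minimal sketch of the spectral framework in Python: discretize the integral with the ES weighting function above and check that it approximately reproduces the direct tail-average ES. The loss data are simulated and the probability grid is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(0)
losses = -rng.normal(loc=0.0, scale=10_000, size=1_000)   # simulated losses (-P/L)
c = 0.95

# Discretize M_phi = integral of phi(p) * q_p dp over a grid of probabilities
p_grid = np.linspace(0.0005, 0.9995, 1000)                # midpoints of 1000 slices
dp = 1.0 / len(p_grid)
q = np.quantile(losses, p_grid)                           # loss quantiles q_p

phi_es = np.where(p_grid >= c, 1.0 / (1 - c), 0.0)        # ES risk spectrum
m_phi = np.sum(phi_es * q * dp)

# Compare with the direct tail-average estimate of ES
es_direct = losses[losses > np.quantile(losses, c)].mean()
print(m_phi, es_direct)
```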
LOS 6. Evaluate estimators of risk measures by estimating their standard errors.
The precision of an estimated risk measure (say, a quantile-based measure such as VaR, with sample estimate $q$) can be measured by its standard error $se(q)$. If this standard error is known, one can readily construct the confidence interval corresponding to a significance level $\alpha$: $$q -se(q)\cdot z_{\alpha /2} \lt VaR \lt q + se(q)\cdot z_{\alpha /2}$$ To calculate the standard error, we pick a bin of width $h$ surrounding the sample risk estimate $q$, with probability mass $p_{LEFT}$ to the left of the bin, $p_{RIGHT}$ to the right, and $p_{MIDDLE}$ inside it: $$se(q)= \frac{\sqrt{\dfrac{p_{LEFT}\,(1-p_{LEFT})}{n}}}{p_{MIDDLE}}, \qquad p_{MIDDLE} =1-p_{LEFT}-p_{RIGHT}$$ Note the following rules of thumb (a short sketch follows the list below). The standard error:
- decreases as sample size ($n$) increases (see formula above),
- decreases as bin width ($h$) increases, since a wider bin captures more probability mass and $p_{MIDDLE}$ rises,
- increases as the confidence level ($c$) increases: moving further into the tail shrinks $p_{MIDDLE}$ in the denominator (the tail is thinner there), and this effect dominates the accompanying fall in $p_{LEFT}$.
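A minimal sketch of this calculation in Python, implementing the formula exactly as stated above with $p_{LEFT}$, $p_{MIDDLE}$, and $p_{RIGHT}$ estimated empirically from a simulated P/L sample; the bin width and data are illustration choices:

```python
import numpy as np

rng = np.random.default_rng(0)
pl = rng.normal(loc=0.0, scale=10_000, size=1_000)   # simulated daily P/L

def var_standard_error(pl, c=0.95, h=2_000):
    """Bin-based standard error of the VaR quantile estimate, per the formula above."""
    q = np.quantile(pl, 1 - c)                 # P/L quantile behind the VaR estimate
    p_left = np.mean(pl < q - h / 2)           # probability mass left of the bin
    p_right = np.mean(pl > q + h / 2)          # probability mass right of the bin
    p_middle = 1 - p_left - p_right            # probability mass inside the bin
    return np.sqrt(p_left * (1 - p_left) / len(pl)) / p_middle

# Rules of thumb: se rises with the confidence level and falls as the bin widens
print(var_standard_error(pl, c=0.95, h=2_000))
print(var_standard_error(pl, c=0.99, h=2_000))   # higher c -> larger se
print(var_standard_error(pl, c=0.95, h=4_000))   # wider bin -> smaller se
```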
LOS 7. Interpret QQ plots to identify the characteristics of a distribution.
A QQ plot plots the quantiles of our empirical distribution against those of a specified benchmark distribution. If the QQ plot is approximately linear, the specified distribution is a good fit and we have identified the right distribution; if it is not, we reject that distribution and try others.
Departures of the QQ plot from linearity in the tails tell us whether the empirical tails are fatter or thinner than those of the specified distribution: if the empirical distribution has heavier tails, the QQ plot is steeper at the ends while its central part remains roughly linear. A linear transformation of one of the distributions changes only the slope and intercept of the plot, so the slope and intercept give us an idea of the scale and location parameters of the sample data. QQ plots are also very good at identifying outliers.
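A minimal sketch in Python using scipy's probplot against a normal benchmark; the heavy-tailed sample is simulated from a Student-t just to make the tail departure visible:

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
sample = rng.standard_t(df=3, size=1_000)    # heavy-tailed sample for illustration

# QQ plot of the sample's quantiles against a normal benchmark distribution
fig, ax = plt.subplots()
stats.probplot(sample, dist="norm", plot=ax)
ax.set_title("QQ plot vs. normal: heavy tails bend away from the reference line")
plt.show()
```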