Hypothesis Testing: p-value Approach (FRM Part 1)

1. Context

In hypothesis testing we usually learn decision rules that compare the test statistic, computed from the sample, against a critical value: depending on whether the statistic falls in the ‘reject’ or ‘do-not-reject’ region, we either reject the null hypothesis or fail to reject it. The p-value approach is an analogous way of specifying decision rules that uses probabilities instead of critical values. Hypothesis tests appear at many points in the Part 1 and Part 2 curricula, so it is worth understanding them well. The details of the reading in which this topic appears are given below:

| Item | Detail |
|---|---|
| Area | Quantitative Analysis |
| Reading | Hypothesis Testing and Confidence Intervals |
| Reference | Miller, Michael. Ch. 17, "Hypothesis Testing and Confidence Intervals." In *Mathematics and Statistics for Financial Risk Management*, 2nd Edition. Hoboken, NJ: John Wiley and Sons, 2013. |

2. Terminology

We will use the following symbols in the sections below:

| Symbol | Meaning |
|---|---|
| $T$ | Test statistic, as a random variable |
| $t_{test}$ | Test statistic, actual value computed from the current sample |
| $t_{C}$ | Critical value for a one-tailed test |
| $t_{C,L}$ | Lower critical value for a two-tailed test |
| $t_{C,U}$ | Upper critical value for a two-tailed test |
| $\alpha$ | Level of significance |
| $\Pr()$ | Probability of an event |
| $F()$ | Cumulative distribution function of $T$ |

3. Critical Value Based Testing

We quickly recap, for each type of test (one-tailed vs. two-tailed), both how critical values are obtained and the corresponding decision rules:

| Test | Critical Value |
|---|---|
| Left Tailed | $\Pr(T < t_C)=\alpha \Leftrightarrow t_C=F^{-1}(\alpha)$ |
| Right Tailed | $\Pr(T > t_C)=\alpha \Leftrightarrow \Pr(T < t_C)=1-\alpha \Leftrightarrow t_C=F^{-1}(1-\alpha)$ |
| Two Tailed | $\Pr(T < t_{C,L})=\alpha/2 \Leftrightarrow t_{C,L}=F^{-1}(\alpha/2)$ and $\Pr(T > t_{C,U})=\alpha/2 \Leftrightarrow \Pr(T < t_{C,U})=1-\alpha/2 \Leftrightarrow t_{C,U}=F^{-1}(1-\alpha/2)$ |

| Test | Reject If | Don't Reject If |
|---|---|---|
| Left Tailed | $t_{test} < t_C$ | $t_{test} \geq t_C$ |
| Right Tailed | $t_{test} > t_C$ | $t_{test} \leq t_C$ |
| Two Tailed | $t_{test} < t_{C,L}$ or $t_{test} > t_{C,U}$ | $t_{C,L} \leq t_{test} \leq t_{C,U}$ |
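The tables above can be sketched in a few lines of code. As an illustrative assumption (not from the reading), we take $T$ to be standard normal under the null; the same logic applies to, say, a Student's t distribution. The significance level and the sample value $t_{test}=2.1$ are also made up for the example.

```python
from statistics import NormalDist

# Assumption for illustration: T is standard normal under H0.
dist = NormalDist()
alpha = 0.05

# Critical values via the inverse CDF, F^{-1}
t_c_left = dist.inv_cdf(alpha)           # left-tailed:      F^{-1}(alpha)
t_c_right = dist.inv_cdf(1 - alpha)      # right-tailed:     F^{-1}(1 - alpha)
t_c_lower = dist.inv_cdf(alpha / 2)      # two-tailed lower: F^{-1}(alpha/2)
t_c_upper = dist.inv_cdf(1 - alpha / 2)  # two-tailed upper: F^{-1}(1 - alpha/2)

print(round(t_c_right, 4))  # ≈ 1.6449
print(round(t_c_upper, 4))  # ≈ 1.96

# Decision rule, e.g. a right-tailed test with t_test = 2.1:
t_test = 2.1
print("reject" if t_test > t_c_right else "do not reject")  # reject
```

Note that for a symmetric distribution such as the standard normal, `t_c_lower` and `t_c_upper` come out as mirror images of each other.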

4. p-Value Based Testing

In p-value based testing, instead of using critical values in our decision rules, we use probabilities. The corresponding decision rules are much simpler and are the same irrespective of the type of test being conducted, whether one-tailed (left or right) or two-tailed.

| Test | p-Value |
|---|---|
| Left Tailed | $\mbox{$p$-$\mathit{value}$}=\Pr\left (T< t_{test}\right)=F(t_{test})$ |
| Right Tailed | $\mbox{$p$-$\mathit{value}$} =\Pr\left (T > t_{test}\right) = 1-\Pr(T< t_{test})=1-F(t_{test})$ |
| Two Tailed | $\mbox{$p$-$\mathit{value}$} = \Pr\left(T > \lvert t_{test}\rvert \right)+ \Pr\left(T< - \lvert t_{test}\rvert\right)$ |

To understand the table above, think of the p-value as the probability of obtaining an outcome at least as extreme as the one just obtained (i.e. $t_{test}$), in the direction of the alternative hypothesis ($H_a$).

| Test | Reject If | Don't Reject If |
|---|---|---|
| Any Test | $\mbox{$p$-$\mathit{value}$} < \alpha$ | $\mbox{$p$-$\mathit{value}$} \geq \alpha$ |
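The p-value formulas and the single decision rule can be sketched as follows. As before, the standard normal distribution and the value $t_{test}=2.1$ are illustrative assumptions, not part of the reading.

```python
from statistics import NormalDist

# Assumption for illustration: T is standard normal under H0.
dist = NormalDist()
t_test = 2.1
alpha = 0.05

p_left = dist.cdf(t_test)        # left-tailed:  F(t_test)
p_right = 1 - dist.cdf(t_test)   # right-tailed: 1 - F(t_test)
# two-tailed: Pr(T > |t_test|) + Pr(T < -|t_test|)
p_two = (1 - dist.cdf(abs(t_test))) + dist.cdf(-abs(t_test))

print(round(p_right, 4))  # ≈ 0.0179
print(round(p_two, 4))    # ≈ 0.0357

# The same rule applies whatever the type of test:
print("reject" if p_right < alpha else "do not reject")  # reject
```

For a symmetric distribution the two-tailed p-value is simply twice the one-tailed one, as the printed values suggest.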

5. Critical Value vs p-Value

The two methods of testing hypotheses are essentially the same, which we demonstrate through some simple (and slightly sketchy) arguments below. In each case, we start from the critical value based decision rule for rejecting the null hypothesis and arrive at the p-value based rule, thereby demonstrating their equivalence:

5.1 Left Tailed Test

$$t_{test} < t_{C}\Leftrightarrow t_{test} < F^{-1}(\alpha)$$ $$ \Rightarrow F(t_{test}) < \alpha$$ $$ \Rightarrow \mbox{$p$-$\mathit{value}$} < \alpha$$

5.2 Right Tailed Test

$$t_{test}>t_{C} \Leftrightarrow t_{test}> F^{-1}(1-\alpha)$$ $$ \Rightarrow F(t_{test}) > 1-\alpha \Leftrightarrow 1 - F(t_{test}) < \alpha$$ $$ \Rightarrow \mbox{$p$-$\mathit{value}$} < \alpha$$

5.3 Two Tailed Test

Assuming $t_{test}$ to be positive (so the absolute value signs can be dropped) and the distribution of $T$ to be symmetric about zero (so that $t_{C,L}=-t_{C,U}$), we have our critical value based rejection condition as: $$-t_{test} < t_{C,L} \mbox{ or } t_{test} > t_{C,U}$$ $$\Rightarrow -t_{test} < F^{-1}\left(\alpha/2 \right) \mbox{ or } t_{test} > F^{-1}\left(1-\alpha/2 \right)$$ $$\Rightarrow F(-t_{test}) < \alpha/2 \mbox{ or } F(t_{test}) > 1- \alpha/2 $$ $$\Rightarrow F(-t_{test}) < \alpha/2 \mbox{ or } 1-F(t_{test}) < \alpha/2 $$ $$\Rightarrow \Pr(T < -t_{test}) < \alpha/2 \mbox{ or } \Pr(T>t_{test}) < \alpha/2 $$ By symmetry the two conditions are equivalent (both hold or neither does), so adding them gives: $$\Pr(T < -t_{test}) + \Pr(T > t_{test}) < \alpha $$ $$\Rightarrow \mbox{$p$-$\mathit{value}$} < \alpha$$
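The equivalence argued above can be checked numerically. This is a quick sketch under the same illustrative assumptions as before (standard normal $T$, so the distribution is symmetric and $t_{C,L}=-t_{C,U}$); the sample values of $t_{test}$ are made up.

```python
from statistics import NormalDist

# Assumption for illustration: T is standard normal under H0 (symmetric).
dist = NormalDist()
alpha = 0.05

# Two-tailed critical values
t_cl = dist.inv_cdf(alpha / 2)
t_cu = dist.inv_cdf(1 - alpha / 2)

for t_test in (-2.5, -1.0, 0.3, 1.8, 2.2):
    # Critical value based rule
    reject_cv = t_test < t_cl or t_test > t_cu
    # p-value based rule
    p_value = dist.cdf(-abs(t_test)) + (1 - dist.cdf(abs(t_test)))
    reject_p = p_value < alpha
    # The two rules always agree
    assert reject_cv == reject_p
    print(t_test, reject_cv)
```

Both rules reject for the same set of sample values, whether $t_{test}$ lands deep in either tail or near the center.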