EE 364 Supplemental – Week 05 (Part 2)

Signal Detection, Cauchy, Gamma, Exponential, Chi-Square, Beta

Signal Detection

| Hypothesis | Model | Decision rule |
|---|---|---|
| \(H_0\): no signal | \(X \sim N(0, \sigma^2)\) | choose \(H_0\) if \(X \leq T\) |
| \(H_a\): signal present | \(X \sim N(1, \sigma^2)\) | choose \(H_a\) if \(X > T\) |
Definition: Type I and Type II Errors

Type I error (false positive / false alarm): \[ \alpha \triangleq P[\text{Type 1 error}] = P[\text{reject } H_0 \mid H_0 \text{ true}] = P[X > T \mid H_0] = P[\text{false alarm}] \]

\(\therefore 1 - \alpha = P[\text{correct rejection}]\)

Type II error (false negative / miss): \[ \beta \triangleq P[\text{Type 2 error}] = P[\text{accept } H_0 \mid H_a \text{ true}] = P[X \leq T \mid H_a] = P[\text{miss}] \]

Power: \[ \text{"Power"} \triangleq 1 - \beta = P[\text{Hit}] = P[\text{Correct detection}] = P_D \]

\(\therefore \alpha \downarrow \;\Longleftrightarrow\; \beta \uparrow\)

Neyman-Pearson detection: maximize \(P_D\) for fixed false alarm, e.g. \(\alpha \leq 0.05\).

Example

\(H_0: N(0, 4)\), \(H_a: N(3, 4)\) (\(\sigma = 2\)).

Threshold \(T = 2\):

\[ \begin{aligned} \alpha = P(\text{type 1 error}) &= P[\text{reject } H_0 \mid H_0 \text{ true}] \\ &= P(X \geq 2 \mid X \sim N(0,4)) \\ &= P\!\left(\frac{X - 0}{2} \geq \frac{2 - 0}{2}\right) = P(Z \geq 1) \\ &= 1 - 0.8413 = \boxed{0.1587} \end{aligned} \]

\[ \begin{aligned} \beta = P(\text{type 2 error}) &= P[\text{accept } H_0 \mid H_a \text{ true}] \\ &= P\!\left(\frac{X - 3}{2} \leq \frac{2 - 3}{2}\right) = P\!\left(Z \leq -\tfrac{1}{2}\right) \\ &= \boxed{0.3085} \end{aligned} \]
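A quick numerical check of these two probabilities, using only the standard library (the helper `phi` below is not part of the notes; \(\Phi\) is written with `math.erf`):

```python
import math

def phi(z):
    """Standard normal CDF: Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

sigma, T = 2.0, 2.0
alpha = 1.0 - phi((T - 0.0) / sigma)  # P[X > T | H0: N(0, 4)], z = 1
beta = phi((T - 3.0) / sigma)         # P[X <= T | Ha: N(3, 4)], z = -1/2
print(round(alpha, 4), round(beta, 4))  # 0.1587 0.3085
```

Rerunning with `T = 1.0` shows the tradeoff from the at-home exercise: \(\alpha\) grows while \(\beta\) shrinks.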

Ex: (at home) Repeat with \(T = 1\) (move threshold left, \(2 \to 1\)). \(\alpha \uparrow\), \(\beta \downarrow\) – tradeoff.

Ex: (at home) Find \(T\) so that \(\alpha = \beta\).


Trigonometry Review

Pythagoras: \(c^2 = a^2 + b^2\) (right triangle)

Cosine Law: \(c^2 = a^2 + b^2 - 2ab \cos\Theta\) (Pythag.: \(\cos\frac{\pi}{2} = 0\))

\[ \cos\Theta = \frac{x}{r}, \quad \sin\Theta = \frac{y}{r} \]

\[ x^2 + y^2 = r^2\cos^2\Theta + r^2\sin^2\Theta = r^2(\sin^2\Theta + \cos^2\Theta) \]

\(\therefore\) for \(r = 1\) (unit circle, \(x^2 + y^2 = 1\)): \(\sin^2\Theta + \cos^2\Theta = 1\)

\[ \tan\Theta = \frac{\sin\Theta}{\cos\Theta} = \frac{y}{x} \]

\[ 1 + \tan^2\Theta = 1 + \frac{\sin^2\Theta}{\cos^2\Theta} = \frac{1}{\cos^2\Theta} = \sec^2\Theta \]

\[ \frac{d(\tan\Theta)}{d\Theta} = \frac{\cos^2\Theta + \sin^2\Theta}{\cos^2\Theta} = 1 + \tan^2\Theta = \sec^2\Theta \]

Taylor Series and Euler’s Formula

\[ e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} \]

\[ \begin{aligned} e^{i\Theta} &= \sum_{n=0}^{\infty} \frac{(i\Theta)^n}{n!} = \frac{i^0\Theta^0}{0!} + \frac{i^1\Theta^1}{1!} + \frac{i^2\Theta^2}{2!} + \frac{i^3\Theta^3}{3!} + \cdots \\[6pt] &= \underbrace{\left(\frac{\Theta^0}{0!} - \frac{\Theta^2}{2!} + \frac{\Theta^4}{4!} - \cdots\right)}_{\sum_{n=0}^{\infty} \frac{(-1)^n \Theta^{2n}}{(2n)!} = \cos\Theta} + i\underbrace{\left(\frac{\Theta^1}{1!} - \frac{\Theta^3}{3!} + \cdots\right)}_{\sum_{n=0}^{\infty} \frac{(-1)^n \Theta^{2n+1}}{(2n+1)!} = \sin\Theta} \end{aligned} \]

\[ \therefore e^{i\Theta} = \cos\Theta + i\sin\Theta \]

Special case: \(\Theta = \pi\), \(\cos\pi = -1\), \(\sin\pi = 0\):

\[ \boxed{e^{i\pi} + 1 = 0} \]
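Both the series identity and the special case can be verified numerically; a small sketch (the truncation at 20 terms is arbitrary):

```python
import cmath
import math

theta = 0.7  # arbitrary test angle
# Euler's formula: e^{i*theta} = cos(theta) + i*sin(theta)
assert abs(cmath.exp(1j * theta) - complex(math.cos(theta), math.sin(theta))) < 1e-12

# Partial sums of the two Taylor series from the derivation above
cos_series = sum((-1) ** n * theta ** (2 * n) / math.factorial(2 * n) for n in range(20))
sin_series = sum((-1) ** n * theta ** (2 * n + 1) / math.factorial(2 * n + 1) for n in range(20))
assert abs(cos_series - math.cos(theta)) < 1e-12
assert abs(sin_series - math.sin(theta)) < 1e-12

print(abs(cmath.exp(1j * cmath.pi) + 1))  # ~1e-16, i.e. floating-point zero
```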


Cauchy PDF

“Thick tail” bell curve (infinite variance).

\(X \sim C(m, d)\): \[ f_X(x) = \frac{1}{\pi d\left(1 + \left(\frac{x-m}{d}\right)^2\right)} \]

\(d > 0\) = scale or “dispersion”, \(m\) = location.

CDF:

\[ F_X(x) = P(X \leq x) = \frac{1}{\pi}\tan^{-1}\!\left(\frac{x - m}{d}\right) + \frac{1}{2} \]

Standard Cauchy: \[ Z \sim C(0, 1): \quad f_Z(z) = \frac{1}{\pi(1 + z^2)} \]
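A sketch of the pdf and CDF as defined above (the helper names `cauchy_pdf` / `cauchy_cdf` are mine), with a crude trapezoid check that the standard Cauchy pdf integrates to 1 — note how slowly the heavy tails converge:

```python
import math

def cauchy_pdf(x, m=0.0, d=1.0):
    return 1.0 / (math.pi * d * (1.0 + ((x - m) / d) ** 2))

def cauchy_cdf(x, m=0.0, d=1.0):
    return math.atan((x - m) / d) / math.pi + 0.5

assert abs(cauchy_cdf(0.0) - 0.5) < 1e-12   # median at m
assert abs(cauchy_cdf(1.0) - 0.75) < 1e-12  # F(m + d) = 3/4

# Trapezoid rule over [-1000, 1000]; tail mass outside is ~6e-4
n, a, b = 100_000, -1000.0, 1000.0
h = (b - a) / n
total = sum(cauchy_pdf(a + i * h) for i in range(n + 1)) - 0.5 * (cauchy_pdf(a) + cauchy_pdf(b))
print(total * h)  # close to 1, short by the truncated tail mass
```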

S\(\alpha\)S: Symmetric Alpha-Stable PDF

Family of bell-curves: \(0 < \alpha \leq 2\).

\(\alpha \downarrow \implies\) thicker tails.

  • \(\alpha = 2\): Gaussian
  • \(\alpha = 1\): Cauchy

(Related: Generalized Central Limit Theorem)

Cauchy Integral

Theorem

\[ \int_{-\infty}^{\infty} \frac{dx}{1 + x^2} = \pi \]

Proof. Let \(x = \tan\Theta\), \(dx = d(\tan\Theta) = (1 + \tan^2\Theta)\, d\Theta\).

\[ \therefore \int_{-\infty}^{\infty} \frac{dx}{1 + x^2} = \int_{-\pi/2}^{\pi/2} \frac{1 + \tan^2\Theta}{1 + \tan^2\Theta}\, d\Theta = \int_{-\pi/2}^{\pi/2} 1\, d\Theta = \Theta\Big|_{\Theta = -\pi/2}^{\Theta = \pi/2} = \pi \quad \square \]

\(\therefore Z \sim \text{Cauchy}(0,1)\), \(f_Z(z) = \frac{1}{\pi(1+z^2)}\):

\[ \int_{-\infty}^{\infty} f_Z(z)\, dz = \int_{-\infty}^{\infty} \frac{1}{\pi(1+z^2)}\, dz = 1 \]


Gamma Function

Definition: Gamma Function

For \(\alpha > 0\): \[ \Gamma(\alpha) = \int_0^{\infty} x^{\alpha - 1} e^{-x}\, dx \]

(power function \(\times\) exponential function)

Integration by Parts

Theorem

\[ \int u\, dv = uv - \int v\, du \]

(“inverse” product rule: \(\int u(x)\, dv(x) = u(x) \cdot v(x) - \int v(x)\, du(x)\))

Proof. Product rule in differential form: \(d(uv) = u\, dv + v\, du\)

\(\therefore u\, dv = d(uv) - v\, du\)

\(\therefore \int u\, dv = uv - \int v\, du \quad \square\)

Gamma Recursion

Theorem

\(\Gamma(\alpha + 1) = \alpha \cdot \Gamma(\alpha)\)

Proof. \(\Gamma(\alpha + 1) = \int_0^{\infty} x^{\alpha+1-1} e^{-x}\, dx\)

Let \(u = x^\alpha\), \(du = \alpha x^{\alpha-1}\, dx\); \(dv = e^{-x}\, dx\), \(v = -e^{-x}\).

\[ \begin{aligned} &= -x^\alpha e^{-x}\Big|_{x=0}^{x=\infty} - \int_0^{\infty}(-e^{-x})(\alpha x^{\alpha-1}\, dx) \\ &= -(0 - 0) + \alpha \int_0^{\infty} x^{\alpha-1} e^{-x}\, dx \\ &= \alpha \cdot \underbrace{\int_0^{\infty} x^{\alpha-1} e^{-x}\, dx}_{\Gamma(\alpha)} \\ &= \alpha \cdot \Gamma(\alpha) \quad \square \end{aligned} \]

\(\therefore \Gamma(\alpha + 1) = \alpha \cdot \Gamma(\alpha)\) if \(\alpha \in \mathbb{R}^+\).

\(\therefore \Gamma(n + 1) = n!\) if \(n \in \mathbb{Z}^+\).
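Both facts can be spot-checked with `math.gamma` (the test values are arbitrary):

```python
import math

# Recursion: Gamma(alpha + 1) = alpha * Gamma(alpha) for real alpha > 0
for alpha in (0.3, 1.7, 2.5, 6.0):
    assert math.isclose(math.gamma(alpha + 1), alpha * math.gamma(alpha))

# Factorial: Gamma(n + 1) = n! for positive integers n
for n in range(1, 10):
    assert math.isclose(math.gamma(n + 1), math.factorial(n))

print(math.gamma(5))  # 4! = 24.0
```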

\(\Gamma(1/2) = \sqrt{\pi}\)

Theorem

\(\Gamma\!\left(\frac{1}{2}\right) = \sqrt{\pi}\)

Proof. \[ \Gamma\!\left(\tfrac{1}{2}\right) = \int_0^{\infty} x^{-1/2} e^{-x}\, dx \]

Let \(u = \sqrt{x} = x^{1/2}\), \(du = \frac{1}{2} x^{-1/2}\, dx\).

\[ = \int_{u=0}^{u=\infty} 2\, e^{-u^2}\, du = \int_{-\infty}^{+\infty} e^{-u^2}\, du \quad (\text{even integrand}) = \sqrt{\pi} \quad \square \]
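A numerical sanity check of \(\Gamma(1/2) = \sqrt{\pi}\) and the Gaussian integral (trapezoid rule on \([-8, 8]\), outside which the integrand is negligible):

```python
import math

# Gamma(1/2) = sqrt(pi) directly from the library
assert abs(math.gamma(0.5) - math.sqrt(math.pi)) < 1e-12

# Gaussian integral by trapezoid rule; e^{-u^2} < e^{-64} beyond |u| = 8
n, a, b = 10_000, -8.0, 8.0
h = (b - a) / n
f = lambda u: math.exp(-u * u)
integral = h * (sum(f(a + i * h) for i in range(1, n)) + 0.5 * (f(a) + f(b)))
print(integral, math.sqrt(math.pi))  # both ~1.7724539
```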

(Gaussian integral)


Exponential PDF

\[ X \sim \text{Exp}(\Theta): \quad f(x) = \frac{1}{\Theta} e^{-x/\Theta} \quad \text{if } x > 0, \quad \Theta > 0. \]

Exponential Family Tree

BEG CUP: \(E \to \Gamma \to \begin{cases} \chi^2 \\ \text{Erlang} \end{cases}\), also \(\to\) extreme value pdfs (Weibull, Fréchet, Gumbel).

Extreme value pdfs describe \(\max(X_1, \ldots, X_n)\) – Fisher–Tippett (extreme value theorem).

Gamma PDF

\(X \sim \Gamma(\alpha, \Theta)\), \(x > 0\), \(\alpha > 0\), \(\Theta > 0\):

\[ f_X(x) = \frac{x^{\alpha-1}\, e^{-x/\Theta}}{\Gamma(\alpha) \cdot \Theta^\alpha} \]

Recall: \(\Gamma(\alpha) = \int_0^{\infty} x^{\alpha-1} e^{-x}\, dx\).

3 special cases:

  1. \(\alpha = 1 \implies\) Exponential (“waiting time”, \(k=1\))
  2. \(\alpha \in \mathbb{Z}^+ \implies\) Erlang (\(k\) waiting times)
  3. \(\alpha = \frac{r}{2}\), \(\Theta = 2 \implies X \sim \chi^2(r)\) (“\(r\) degrees of freedom”)
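A sketch of this pdf with special cases 1 and 3 checked pointwise (`gamma_pdf` and `chi2_pdf` are hypothetical helper names, not library functions):

```python
import math

def gamma_pdf(x, alpha, theta):
    """Gamma(alpha, theta) pdf for x > 0, as defined above."""
    return x ** (alpha - 1) * math.exp(-x / theta) / (math.gamma(alpha) * theta ** alpha)

def chi2_pdf(x, r):
    """Chi-square pdf with r degrees of freedom."""
    return x ** (r / 2 - 1) * math.exp(-x / 2) / (math.gamma(r / 2) * 2 ** (r / 2))

# Case 1: alpha = 1 reduces to Exp(theta): f(x) = (1/theta) e^{-x/theta}
for x in (0.5, 1.0, 3.0):
    assert math.isclose(gamma_pdf(x, 1.0, 2.0), (1 / 2.0) * math.exp(-x / 2.0))

# Case 3: alpha = r/2, theta = 2 gives chi-square(r); here r = 10
for x in (0.5, 2.0, 7.5):
    assert math.isclose(gamma_pdf(x, 5.0, 2.0), chi2_pdf(x, 10))

print("special cases check out")
```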

Verifying the Gamma PDF

Recall: \(\Gamma(\alpha) = \int_0^{\infty} y^{\alpha-1} e^{-y}\, dy\).

Put \(y = \frac{x}{\Theta}\) (\(\Theta > 0\)), \(\therefore dy = \frac{1}{\Theta}\, dx\):

\[ \Gamma(\alpha) = \int_{x=0}^{x=\infty} e^{-x/\Theta} \cdot \left(\frac{x}{\Theta}\right)^{\alpha-1} \cdot \frac{1}{\Theta}\, dx = \frac{1}{\Theta^\alpha} \int_0^{\infty} x^{\alpha-1}\, e^{-x/\Theta}\, dx \]

\(\therefore\) For \(X \sim \Gamma(\alpha, \Theta)\):

\[ \int_0^{\infty} \frac{x^{\alpha-1}\, e^{-x/\Theta}}{\Gamma(\alpha) \cdot \Theta^\alpha}\, dx = \frac{1}{\Gamma(\alpha)} \cdot \underbrace{\int_0^{\infty} \frac{x^{\alpha-1}\, e^{-x/\Theta}}{\Theta^\alpha}\, dx}_{\Gamma(\alpha)} = \frac{\Gamma(\alpha)}{\Gamma(\alpha)} = 1 \]

\(\therefore f_X(x)\) is a valid pdf.


Chi-Square PDF

\[ X \sim \Gamma\!\left(\tfrac{r}{2}, 2\right) = \chi^2(r) \quad \text{with "} r \text{ degrees of freedom"} \]

\[ f(x) = \frac{x^{r/2-1}\, e^{-x/2}}{\Gamma(r/2)\, 2^{r/2}} \quad \text{if } x > 0 \quad (r > 0) \]

\(\therefore X \sim \Gamma(\alpha, \Theta)\) if \(\alpha = \frac{r}{2}\) & \(\Theta = 2\).

Sums

\[ \sum_{k=1}^{n} X_k \sim \chi^2\!\left(\sum_{k=1}^{n} r_k\right) \quad \text{if independent } X_k \sim \chi^2(r_k) \]
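A Monte Carlo illustration of additivity (this assumes the standard fact that a \(\chi^2(r)\) variable is a sum of \(r\) squared standard normals; the sample size and tolerance are arbitrary):

```python
import random

random.seed(0)
n = 100_000
samples = []
for _ in range(n):
    x3 = sum(random.gauss(0, 1) ** 2 for _ in range(3))  # ~ chi^2(3)
    x4 = sum(random.gauss(0, 1) ** 2 for _ in range(4))  # ~ chi^2(4), independent
    samples.append(x3 + x4)                              # ~ chi^2(3 + 4) = chi^2(7)

mean = sum(samples) / n
print(round(mean, 2))  # near 7, since E[chi^2(r)] = r
```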


Gamma Function Values

\(r\) even: \(\Gamma\!\left(\frac{r}{2}\right) = \left(\frac{r}{2} - 1\right)!\) \(\qquad\) \(r\) odd: write \(r = 2k+1\); then \(\Gamma\!\left(\frac{r}{2}\right) = \Gamma\!\left(k + \tfrac{1}{2}\right) = \frac{(2k)!}{k!\, 4^k}\sqrt{\pi}\)

| \(\alpha\) | \(\Gamma(\alpha)\) | Note |
|---|---|---|
| \(\frac{1}{2}\) | \(\sqrt{\pi}\) | Gaussian integral |
| \(1\) | \(1\) | mass of \(\text{Exp}(1)\) pdf |
| \(\frac{3}{2}\) | \(\frac{1}{2}\sqrt{\pi}\) | from \(\Gamma(\alpha+1) = \alpha\,\Gamma(\alpha)\) |
| \(2\) | \(1\) | |
| \(\frac{5}{2}\) | \(\frac{3}{4}\sqrt{\pi}\) | |
| \(3\) | \(2\) | |
| \(\frac{7}{2}\) | \(\frac{15}{8}\sqrt{\pi}\) | |
| \(4\) | \(3! = 6\) | |

Chi-Square Example

Ex: Find \(P[3.25 \leq X \leq 20.5]\) if \(X \sim \chi^2(10)\).

Using R:

\[ P[3.25 \leq X \leq 20.5] = P[X \leq 20.5] - P[X \leq 3.25] = \text{pchisq}(20.5, 10) - \text{pchisq}(3.25, 10) \]

\[ = 0.9751371 - 0.0250865 = 0.9500506 \]
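Without R, the same numbers can be reproduced in Python using only the standard library. For even \(r = 2m\) the \(\chi^2(r)\) CDF has a closed form (it is an Erlang CDF): \(P[X \leq x] = 1 - e^{-x/2}\sum_{k=0}^{m-1} (x/2)^k / k!\). A sketch (the helper name is mine):

```python
import math

def chi2_cdf_even(x, r):
    """Chi-square CDF for even degrees of freedom r = 2m (Erlang CDF)."""
    m = r // 2
    s = sum((x / 2) ** k / math.factorial(k) for k in range(m))
    return 1.0 - math.exp(-x / 2) * s

p = chi2_cdf_even(20.5, 10) - chi2_cdf_even(3.25, 10)
print(round(p, 7))  # ≈ 0.9500506, matching the pchisq computation
```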

Using table:

See: Chi-Square Table

\(\chi^2_{0.025}(10)\) closest to \(20.5\): \(P[X \leq 20.5] \approx 1 - 0.025 = 0.975\).

\(\chi^2_{0.975}(10)\) closest to \(3.25\): \(P[X \leq 3.25] \approx 1 - 0.975 = 0.025\).

\[ \therefore P[3.25 \leq X \leq 20.5] = P[X \leq 20.5] - P[X \leq 3.25] \approx 0.975 - 0.025 = 0.95 \]

Reading the table: for \(X \sim \chi^2(r)\) (\(r\) = d.f.), the table lists \(\chi^2_\alpha(r)\) such that \(P[X > \chi^2_\alpha(r)] = \alpha\); so \(P[X \leq x] \approx 1 - \alpha\) for the entry \(\chi^2_\alpha(r)\) closest to \(x\).


Beta Random Variable

\(X \sim \text{Beta}(\alpha, \beta)\):

\[ f_X(x) = \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\,\Gamma(\beta)}\, x^{\alpha-1}(1-x)^{\beta-1} \quad \text{if } 0 < x < 1, \quad \text{else } 0. \]

\(\alpha > 0\), \(\beta > 0\).

\(\therefore \alpha = \beta = 1 \implies X \sim U[0, 1]\).

\(n\)-dim generalization: Dirichlet pdf.
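A minimal sketch of the Beta pdf (the helper `beta_pdf` is mine), confirming the \(\alpha = \beta = 1\) uniform special case:

```python
import math

def beta_pdf(x, a, b):
    """Beta(a, b) pdf; zero outside (0, 1)."""
    if not 0 < x < 1:
        return 0.0
    c = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return c * x ** (a - 1) * (1 - x) ** (b - 1)

# alpha = beta = 1: constant density 1 on (0, 1), i.e. U[0, 1]
for x in (0.1, 0.5, 0.9):
    assert math.isclose(beta_pdf(x, 1, 1), 1.0)

# Beta(2, 2): f(x) = 6x(1 - x), peaked at 1/2
assert math.isclose(beta_pdf(0.5, 2, 2), 1.5)
print("uniform special case verified")
```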

Beta Function

Definition: Beta Function

\[ B(\alpha, \beta) = \int_0^1 x^{\alpha-1}(1-x)^{\beta-1}\, dx \qquad \alpha > 0, \; \beta > 0 \]

\[ = \frac{\Gamma(\alpha)\,\Gamma(\beta)}{\Gamma(\alpha + \beta)} \]

(see proof below)

Proof: \(B(\alpha, \beta) = \frac{\Gamma(\alpha)\,\Gamma(\beta)}{\Gamma(\alpha+\beta)}\)

Theorem

\[ B(\alpha, \beta) = \int_0^1 u^{\alpha-1}(1-u)^{\beta-1}\, du = \frac{\Gamma(\alpha)\,\Gamma(\beta)}{\Gamma(\alpha + \beta)} \qquad \alpha > 0,\; \beta > 0 \]

Proof. \[ \Gamma(\alpha) \cdot \Gamma(\beta) = \int_{x=0}^{\infty} e^{-x} x^{\alpha-1}\, dx \int_{y=0}^{\infty} e^{-y} y^{\beta-1}\, dy \]

By Fubini:

\[ = \int_{y=0}^{\infty}\int_{x=0}^{\infty} \underbrace{e^{-(x+y)}\, x^{\alpha-1}\, y^{\beta-1}}_{\triangleq f(x,y)}\, dx\, dy \]

Double substitution: Let \(x = x(u,v) = uv\), \(y = y(u,v) = u(1-v)\).

Determine limits:

  • Given \(0 < x < \infty\) and \(0 < y < \infty\)
  • \(0 < x + y = uv + u(1-v) = u \implies u > 0\)
  • \(x > 0 \implies uv > 0 \implies v > 0\) (since \(u > 0\))
  • \(u < \infty\) since \(x = uv < \infty\) and \(v > 0\)
  • \(\therefore 0 < u < \infty\) – first limit of integration
  • \(y > 0 \implies u(1-v) > 0 \implies 1 - v > 0\) (since \(u > 0\))
  • \(v < 1\), \(\therefore 0 < v < 1\) – second limit of integration

Jacobian (change of variable theorem):

\[ \therefore \Gamma(\alpha) \cdot \Gamma(\beta) = \int_{v=0}^{1}\int_{u=0}^{\infty} f(x(u,v),\, y(u,v)) \cdot \left|\frac{\partial(x,y)}{\partial(u,v)}\right|\, du\, dv \]

\[ = \int_{v=0}^{1}\int_{u=0}^{\infty} f(x(u,v),\, y(u,v)) \cdot u\, du\, dv \]

since:

\[ \left|\frac{\partial(x,y)}{\partial(u,v)}\right| = \begin{vmatrix} \frac{\partial x}{\partial u} & \frac{\partial x}{\partial v} \\ \frac{\partial y}{\partial u} & \frac{\partial y}{\partial v} \end{vmatrix} = \begin{vmatrix} v & u \\ 1-v & -u \end{vmatrix} = |-vu - u + vu| = |-u| = u \]

(since \(x = uv\), \(y = u - uv\))

\[ \begin{aligned} &= \int_{v=0}^{1}\int_{u=0}^{\infty} e^{-u}(uv)^{\alpha-1}(u(1-v))^{\beta-1}\, u\, du\, dv \\[6pt] &= \int_{v=0}^{1}\int_{u=0}^{\infty} e^{-u}\, u^{\alpha-1} v^{\alpha-1} u^{\beta-1}(1-v)^{\beta-1}\, u\, du\, dv \\[6pt] &= \left[\int_{u=0}^{\infty} e^{-u}\, u^{\alpha-1+\beta-1+1}\, du\right] \cdot \left[\int_{v=0}^{1} v^{\alpha-1}(1-v)^{\beta-1}\, dv\right] \\[6pt] &= \left[\int_{u=0}^{\infty} e^{-u}\, u^{(\alpha+\beta)-1}\, du\right] \cdot \left[\int_{v=0}^{1} v^{\alpha-1}(1-v)^{\beta-1}\, dv\right] \\[6pt] &= \Gamma(\alpha + \beta) \cdot B(\alpha, \beta) \end{aligned} \]

\[ \therefore B(\alpha, \beta) = \frac{\Gamma(\alpha) \cdot \Gamma(\beta)}{\Gamma(\alpha + \beta)} \quad \square \]
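The identity can be spot-checked numerically, e.g. \(B(2,3) = \Gamma(2)\,\Gamma(3)/\Gamma(5) = 1 \cdot 2 / 24 = 1/12\); a sketch comparing the integral definition (trapezoid rule, with \(\alpha, \beta > 1\) so the integrand vanishes at the endpoints) against the \(\Gamma\)-ratio:

```python
import math

a, b = 2.0, 3.0
n = 100_000
h = 1.0 / n
# trapezoid rule on (0, 1); integrand x(1-x)^2 is 0 at both endpoints
integral = h * sum((i * h) ** (a - 1) * (1 - i * h) ** (b - 1) for i in range(1, n))
ratio = math.gamma(a) * math.gamma(b) / math.gamma(a + b)

print(round(integral, 6), round(ratio, 6))  # both ≈ 1/12 ≈ 0.083333
```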