
Thursday, March 27, 2014

 

Random Variables

Uniform from \(a\) to \(b\)

Discrete:
  \[ \begin{aligned} p_X(x) &= \frac{1}{b-a+1} \quad \mathbb{E}[X] = \frac{a+b}{2} \quad var(X) = \frac{1}{12}(b-a)\cdot(b-a\color{blue}{+2}) \\ \mathbb{P}(a \le x \le b) &= \sum_{a \le x \le b}p_X(x) \\ \end{aligned} \]
Continuous:
  \[ \begin{aligned} f_X(x) &= \frac{1}{b-a} \quad \mathbb{E}[X] = \frac{a+b}{2} \quad var(X) = \frac{(b-a)^2}{12} \\ \mathbb{P}(a \le x \le b) &= \int_a^b f_X(x)\,dx \\ \end{aligned} \]
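As a sanity check on the discrete formulas, the following Python sketch (names are my own) builds the uniform PMF on \(\{a, \dots, b\}\) exactly with `fractions.Fraction` and compares the mean and variance against the closed forms:

```python
from fractions import Fraction

def discrete_uniform_stats(a, b):
    """Exact mean and variance of the uniform PMF on {a, ..., b}."""
    values = range(a, b + 1)
    p = Fraction(1, b - a + 1)              # p_X(x) = 1/(b-a+1) for each x
    mean = sum(p * x for x in values)
    var = sum(p * (x - mean) ** 2 for x in values)
    return mean, var

mean, var = discrete_uniform_stats(2, 7)
# Matches the closed forms: (a+b)/2 and (b-a)(b-a+2)/12
assert mean == Fraction(2 + 7, 2)
assert var == Fraction((7 - 2) * (7 - 2 + 2), 12)
```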

Bernoulli with parameter \(p \in [0, 1]\)

  \[ \begin{aligned} p_X(0) = 1-p \quad p_X(1) = p \quad \mathbb{E}[X] = \color{blue}{p} \quad var(X) = \color{blue}{p - p^2} \le {1 \over 4} \quad(\text{max variance}) \\ \end{aligned} \]
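The variance \(p - p^2\) is a downward parabola in \(p\), so it peaks at \(p = 1/2\) with value \(1/4\). A quick numeric illustration (the snippet is mine, not from the lecture):

```python
# var(X) = p - p^2 for Bernoulli(p): a downward parabola in p,
# maximized at p = 1/2 where it equals 1/4.
ps = [i / 100 for i in range(101)]
variances = [p - p * p for p in ps]
assert max(variances) == 0.25              # the 1/4 bound is attained...
assert ps[variances.index(0.25)] == 0.5    # ...exactly at p = 1/2
```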

Binomial with parameter \(p \in [0, 1]\)

Model the number of successes (\(k\)) in a given number (\(n\)) of independent trials:
  \[ \begin{aligned} p_X(k) &= \mathbb{P}(X=k) = {n \choose k}p^k(1-p)^{n-k} \\ \mathbb{E}[X] &= n\cdot \color{blue}{p} \quad var(X) = n\cdot\color{blue}{(p - p^2)} \\ \end{aligned} \]
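A short sketch (the function name is mine) that builds the binomial PMF with `math.comb` and verifies the mean and variance formulas exactly using rational arithmetic:

```python
from math import comb
from fractions import Fraction

def binomial_pmf(n, k, p):
    """p_X(k) = C(n,k) p^k (1-p)^(n-k), exact when p is a Fraction."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 10, Fraction(3, 10)
pmf = [binomial_pmf(n, k, p) for k in range(n + 1)]
mean = sum(k * pmf[k] for k in range(n + 1))
var = sum((k - mean) ** 2 * pmf[k] for k in range(n + 1))

assert sum(pmf) == 1            # probabilities sum to one
assert mean == n * p            # E[X] = np
assert var == n * p * (1 - p)   # var(X) = n(p - p^2)
```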

Poisson with parameter \(\lambda > 0\)

Large \(n\), small \(p\), and moderate \(\lambda = np\), the arrival rate. Model the number of arrivals \(S\):
  \[ \begin{aligned} p_S(k) &\to \frac{\lambda^k}{k!}e^{-\lambda} \qquad \mathbb{E}[S] = \lambda \qquad var(S) = \lambda \\ \end{aligned} \]
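The limit can be seen numerically: for fixed \(\lambda = np\), the binomial PMF approaches the Poisson PMF as \(n\) grows (an illustrative sketch; the parameter values are my own):

```python
from math import comb, exp, factorial

# Binomial(n, p) with large n, small p, fixed lambda = n*p
# approaches Poisson(lambda), term by term.
lam, n = 2.0, 10_000
p = lam / n
for k in range(6):
    binom = comb(n, k) * p**k * (1 - p)**(n - k)
    poisson = lam**k / factorial(k) * exp(-lam)
    assert abs(binom - poisson) < 1e-3
```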

Beta with parameters \((\alpha, \beta)\)

Infer the posterior of the unknown bias \(\Theta\) of a coin, given \(k\) heads in \(n\) (fixed) tosses:
  \[ \begin{aligned} f_{\Theta|K}(\theta\,|\,k) &= \frac{1}{d(n,k)} \theta^k (1-\theta)^{n-k} & d(n,k) \text{ a normalizing constant} \\ \int_0^1 \theta^\alpha (1-\theta)^\beta\,d\theta &= \frac{\alpha! \, \beta!}{(\alpha+\beta+1)!} & \text{for integer } \alpha, \beta \end{aligned} \]
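For integer \(\alpha, \beta\), the integral identity can be checked exactly by expanding \((1-\theta)^\beta\) with the binomial theorem and integrating term by term (an illustrative sketch; the function name is mine):

```python
from fractions import Fraction
from math import comb, factorial

def beta_integral(a, b):
    """Exact integral of θ^a (1-θ)^b over [0,1] for integers a, b >= 0:
    expand (1-θ)^b binomially, then integrate each θ^(a+j) term."""
    return sum(Fraction(comb(b, j) * (-1) ** j, a + j + 1)
               for j in range(b + 1))

# Agrees with the closed form a! b! / (a+b+1)!
for a in range(6):
    for b in range(6):
        assert beta_integral(a, b) == Fraction(
            factorial(a) * factorial(b), factorial(a + b + 1))
```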

Geometric with parameter \(p \in [0, 1]\)

Model the number of trials (\(k\)) until the first success:
  \[ \begin{aligned} p_X(k) = \mathbb{P}(X = k) = (1-p)^{k-1}p \quad \mathbb{E}[X] = \frac{1}{p} \quad var(X) = \frac{1-p}{p^2} \\ \end{aligned} \]
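A numeric sanity check on the geometric formulas, truncating the infinite series where the \((1-p)^{k-1}\) tail becomes negligible (sketch; the parameter value is my own):

```python
# Geometric(p): truncate the series at K terms; the tail (1-p)^K
# is negligible for K = 2000 and p = 0.25.
p, K = 0.25, 2000
ks = range(1, K)
pmf = [(1 - p) ** (k - 1) * p for k in ks]
mean = sum(k * q for k, q in zip(ks, pmf))
var = sum((k - mean) ** 2 * q for k, q in zip(ks, pmf))

assert abs(mean - 1 / p) < 1e-9            # E[X] = 1/p
assert abs(var - (1 - p) / p**2) < 1e-9    # var(X) = (1-p)/p^2
```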

Exponential with parameter \(\lambda > 0\)

Model the amount of time elapsed (\(x\)) until a success:
  \[ \begin{aligned} f_X(x) &= \lambda e^{-\lambda x} \quad \mathbb{E}[X] = \frac{1}{\lambda} \quad var(X) = \frac{1}{\lambda^2} \\ \mathbb{P}(X > a) &= \int_a^\infty \lambda e^{-\lambda x} \, dx = e^{-\lambda a} \\ \mathbb{P}(T - t > x\, |\, T > t) &= e^{-\lambda x} = \mathbb{P}(T > x) & \text{Memorylessness!} \\ \mathbb{P}(0 \le T \le \delta) &\approx \lambda\delta \approx \mathbb{P}(t \le T \le t+\delta\,|\, T > t) & \mathbb{P}(\text{success in any } \delta \text{ time step}) \approx \lambda\delta \\ \end{aligned} \]
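Memorylessness follows directly from the tail formula: \(\mathbb{P}(T > t+x\,|\,T > t) = e^{-\lambda(t+x)}/e^{-\lambda t} = e^{-\lambda x}\). A tiny sketch using the closed-form tail (names and values are my own):

```python
from math import exp

lam = 0.5

def tail(a):
    """P(X > a) = e^{-lambda a} for an exponential with rate lambda."""
    return exp(-lam * a)

# Memorylessness: P(T - t > x | T > t) = P(T > t + x) / P(T > t) = P(T > x)
t, x = 3.0, 2.0
assert abs(tail(t + x) / tail(t) - tail(x)) < 1e-12
```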

Normal (Gaussian)

  \[ \begin{aligned} N(0,1): f_X(x) &= \frac{1}{\sqrt{2\pi}} e^{-x^2/2} \\ N(\mu,\sigma^2): f_X(x) &= \frac{1}{\sigma\sqrt{2\pi}} e^{-(x-\mu)^2/2\sigma^2} \\ \end{aligned} \]
\(\qquad\)If \(X \thicksim N(\mu,\sigma^2)\) and \(Y = aX + b\), then \(Y \thicksim N(a\mu + b,a^2\sigma^2) \)
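The linear-transform rule can be verified through the change-of-variables formula \(f_Y(y) = f_X\big((y-b)/a\big)/|a|\), which must coincide with the \(N(a\mu+b,\,a^2\sigma^2)\) density (illustrative sketch; the constants are my own):

```python
from math import exp, pi, sqrt

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * sqrt(2 * pi))

# If X ~ N(mu, sigma^2) and Y = aX + b, then Y ~ N(a*mu + b, a^2 sigma^2).
# Check against the change-of-variables density f_Y(y) = f_X((y-b)/a)/|a|.
mu, sigma, a, b = 1.0, 2.0, -3.0, 0.5
for y in (-4.0, 0.0, 2.5):
    lhs = normal_pdf(y, a * mu + b, abs(a) * sigma)
    rhs = normal_pdf((y - b) / a, mu, sigma) / abs(a)
    assert abs(lhs - rhs) < 1e-12
```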

\(\qquad\)If \(X = \Theta + W\) where \(W \thicksim N(0,\sigma^2)\), indep. of \(\Theta\), then \(f_{X|\Theta}(x\,|\,\theta) = f_W(x-\theta)\); or \(X \thicksim N(\Theta,\sigma^2)\)

Cumulative distribution function (CDF)

Discrete:
  \[ \begin{aligned} F_X(x) = \mathbb{P}(X \le x) = \sum_{k \le x} p_X(k) \\ \end{aligned} \]
Continuous:
  \[ \begin{aligned} F_X(x) = \mathbb{P}(X \le x) = \int_{-\infty}^x f_X(t)\,dt \quad \therefore \frac{d}{dx}F_X(x) = f_X(x) \\ \end{aligned} \]
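The relation \(\frac{d}{dx}F_X(x) = f_X(x)\) can be checked numerically, e.g. by integrating the standard normal density with the trapezoid rule and comparing with the closed form \(F(x) = \tfrac{1}{2}\big(1 + \operatorname{erf}(x/\sqrt{2})\big)\) (sketch; the function names are mine):

```python
from math import erf, exp, pi, sqrt

def std_normal_pdf(x):
    """Density of N(0, 1)."""
    return exp(-x * x / 2) / sqrt(2 * pi)

def std_normal_cdf(x, steps=20_000, lo=-10.0):
    """F_X(x) = integral of f_X from -inf to x, via the trapezoid rule
    (the mass below lo = -10 is negligible)."""
    h = (x - lo) / steps
    total = 0.5 * (std_normal_pdf(lo) + std_normal_pdf(x))
    total += sum(std_normal_pdf(lo + i * h) for i in range(1, steps))
    return total * h

# Compare with the closed form F(x) = (1 + erf(x / sqrt(2))) / 2
for x in (-1.0, 0.0, 1.96):
    assert abs(std_normal_cdf(x) - (1 + erf(x / sqrt(2))) / 2) < 1e-6
```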

Source: MITx 6.041x, Lecture 5, 6, 8, 14.

