
Thursday, March 27, 2014

 

Random Variables

Uniform from a to b

Discrete:
  \begin{aligned} p_X(x) &= \frac{1}{b-a+1} \quad \mathbb{E}[X] = \frac{a+b}{2} \quad var(X) = \frac{1}{12}(b-a)(b-a+2) \\ \mathbb{P}(a \le x \le b) &= \sum_{a \le x \le b} p_X(x) \\ \end{aligned}
Continuous:
  \begin{aligned} f_X(x) &= \frac{1}{b-a} \quad \mathbb{E}[X] = \frac{a+b}{2} \quad var(X) = \frac{(b-a)^2}{12} \\ \mathbb{P}(a \le x \le b) &= \int_a^b f_X(x)\,dx \\ \end{aligned}
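The discrete uniform formulas can be sanity-checked numerically; a minimal sketch (the endpoints a = 2, b = 7 are arbitrary illustrative values, not from the notes):

```python
# Check the discrete uniform mean and variance against the closed forms.
# a = 2, b = 7 are arbitrary illustrative values.
a, b = 2, 7
support = range(a, b + 1)
p = 1 / (b - a + 1)                                  # p_X(x) = 1/(b-a+1)
mean = sum(x * p for x in support)                   # E[X]
var = sum((x - mean) ** 2 * p for x in support)      # var(X)

assert abs(mean - (a + b) / 2) < 1e-9                # E[X] = (a+b)/2
assert abs(var - (b - a) * (b - a + 2) / 12) < 1e-9  # var(X) = (b-a)(b-a+2)/12
```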

Bernoulli with parameter p \in [0, 1]

  \begin{aligned} p_X(0) &= 1-p \quad p_X(1) = p \quad \mathbb{E}[X] = p \quad var(X) = p - p^2 \le \tfrac{1}{4} \;\text{(max variance)} \\ \end{aligned}

Binomial with parameter p[0,1]

Model number of successes (k) in a given number of independent trials (n):
  \begin{aligned} p_X(k) &= \mathbb{P}(X=k) = {n \choose k}p^k(1-p)^{n-k} \\ \mathbb{E}[X] &= n\cdot \color{blue}{p} \quad var(X) = n\cdot\color{blue}{(p - p^2)} \\ \end{aligned}
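The binomial pmf and its moments can be verified by direct summation; a small sketch (n = 10, p = 0.3 are arbitrary illustrative values):

```python
from math import comb

# Direct check of the binomial formulas (n = 10, p = 0.3 are arbitrary).
n, p = 10, 0.3
pmf = [comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(n + 1)]
mean = sum(k * pk for k, pk in enumerate(pmf))
var = sum((k - mean) ** 2 * pk for k, pk in enumerate(pmf))

assert abs(sum(pmf) - 1) < 1e-12            # probabilities sum to 1
assert abs(mean - n * p) < 1e-9             # E[X] = np
assert abs(var - n * (p - p ** 2)) < 1e-9   # var(X) = n(p - p^2)
```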

Poisson with parameter \lambda > 0

For large n and small p, with moderate \lambda = np (the arrival rate), model the number of arrivals S:
  \begin{aligned} p_S(k) &\to \frac{\lambda^k}{k!}e^{-\lambda} \qquad \mathbf{E}(S) = \lambda \qquad \text{var}(S) = \lambda \\ \end{aligned}
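The limit can be seen numerically by comparing the binomial pmf to the Poisson pmf; a sketch (n = 1000, p = 0.003, so \lambda = 3, are arbitrary illustrative values):

```python
from math import comb, exp, factorial

# Poisson approximation of the binomial for large n, small p.
# n = 1000, p = 0.003 (so lambda = 3) are arbitrary illustrative values.
n, p = 1000, 0.003
lam = n * p
for k in range(6):
    binom = comb(n, k) * p ** k * (1 - p) ** (n - k)
    poisson = lam ** k / factorial(k) * exp(-lam)
    assert abs(binom - poisson) < 1e-3  # pointwise agreement of the two pmfs
```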

Beta with parameters (\alpha, \beta)

Infer the posterior of the unknown bias \Theta of a coin, given k heads in n (fixed) tosses:
  \begin{aligned} f_{\Theta|K}(\theta\,|\,k) &= \frac{1}{d(n,k)} \theta^k (1-\theta)^{n-k} \\ \int_0^1 \theta^\alpha (1-\theta)^\beta\,d\theta &= \frac{\alpha! \, \beta!}{(\alpha+\beta+1)!} & \text{beta distribution} \end{aligned}
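For integer \alpha, \beta the normalization integral can be checked against the factorial formula with simple numerical quadrature; a sketch (\alpha = 3, \beta = 2 are arbitrary illustrative values):

```python
from math import factorial

# Midpoint-rule check of the integral of theta^alpha (1-theta)^beta
# over [0, 1] for integer alpha, beta (alpha = 3, beta = 2 are arbitrary).
def beta_integral(alpha, beta, steps=100_000):
    h = 1 / steps
    return h * sum(((i + 0.5) * h) ** alpha * (1 - (i + 0.5) * h) ** beta
                   for i in range(steps))

alpha, beta = 3, 2
closed_form = factorial(alpha) * factorial(beta) / factorial(alpha + beta + 1)
assert abs(beta_integral(alpha, beta) - closed_form) < 1e-6
```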

Geometric with parameter p \in [0, 1]

Model the number of trials (k) until the first success:
  \begin{aligned} p_X(k) = \mathbb{P}(X = k) = (1-p)^{k-1}p \quad \mathbb{E}[X] = \frac{1}{p} \quad var(X) = \frac{1-p}{p^2} \\ \end{aligned}
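The mean and variance follow from the pmf by summation; a truncated-sum sketch (p = 0.25 is an arbitrary illustrative value):

```python
# Truncated-sum check of the geometric formulas.
# p = 0.25 is an arbitrary illustrative value.
p = 0.25
K = 2000                                   # truncation point; the tail is negligible
pmf = [(1 - p) ** (k - 1) * p for k in range(1, K + 1)]
mean = sum(k * pk for k, pk in enumerate(pmf, start=1))
var = sum((k - mean) ** 2 * pk for k, pk in enumerate(pmf, start=1))

assert abs(sum(pmf) - 1) < 1e-9            # pmf sums to 1
assert abs(mean - 1 / p) < 1e-6            # E[X] = 1/p
assert abs(var - (1 - p) / p ** 2) < 1e-4  # var(X) = (1-p)/p^2
```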

Exponential with parameter \lambda > 0

Model the amount of time (x) elapsed until a success:
  \begin{aligned} f_X(x) &= \lambda e^{-\lambda x} \quad \mathbb{E}[X] = \frac{1}{\lambda} \quad var(X) = \frac{1}{\lambda^2} \\ \mathbb{P}(X > a) &= \int_a^\infty \lambda e^{-\lambda x} \, dx = e^{-\lambda a} \\ \mathbb{P}(T - t > x\, |\, T > t) &= e^{-\lambda x} = \mathbb{P}(T > x) & \text{Memorylessness!} \\ \mathbb{P}(0 \le T \le \delta) &\approx \lambda\delta \approx \mathbb{P}(t \le T \le t+\delta\,|\, T > t) & \mathbb{P}(\text{success in each } \delta \text{ time step}) \approx \lambda\delta \\ \end{aligned}
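Memorylessness follows directly from the survival function; a sketch using the conditional-probability definition (\lambda = 0.5, t = 2, x = 3 are arbitrary illustrative values):

```python
from math import exp

# Memorylessness check: P(T - t > x | T > t) = P(T > x).
# lam = 0.5, t = 2.0, x = 3.0 are arbitrary illustrative values.
lam, t, x = 0.5, 2.0, 3.0

def surv(a):                  # P(X > a) = e^{-lambda a}
    return exp(-lam * a)

cond = surv(t + x) / surv(t)  # P(T > t + x) / P(T > t)
assert abs(cond - surv(x)) < 1e-12
```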

Normal (Gaussian)

  \begin{aligned} N(0,1): f_X(x) &= \frac{1}{\sqrt{2\pi}} e^{-x^2/2} \\ N(\mu,\sigma^2): f_X(x) &= \frac{1}{\sigma\sqrt{2\pi}} e^{-(x-\mu)^2/(2\sigma^2)} \\ \end{aligned}
\qquad If X \thicksim N(\mu,\sigma^2) and Y = aX + b, then Y \thicksim N(a\mu + b,a^2\sigma^2)

\qquad If X = \Theta + W where W \thicksim N(0,\sigma^2), indep. of \Theta, then f_{X|\Theta}(x\,|\,\theta) = f_W(x-\theta); or X \thicksim N(\Theta,\sigma^2)
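The linear-transformation rule Y = aX + b can be checked by comparing CDF values; a sketch with the standard library (\mu, \sigma, a, b below are arbitrary illustrative values):

```python
from statistics import NormalDist

# If X ~ N(mu, sigma^2) and Y = aX + b with a > 0, then
# Y ~ N(a*mu + b, a^2 sigma^2); check via matching CDF values.
# mu, sigma, a, b are arbitrary illustrative values.
mu, sigma, a, b = 1.0, 2.0, 3.0, -4.0
X = NormalDist(mu, sigma)
Y = NormalDist(a * mu + b, a * sigma)   # std dev scales by |a|

for y in (-5.0, -1.0, 0.0, 2.0):
    # For a > 0: P(Y <= y) = P(X <= (y - b)/a)
    assert abs(Y.cdf(y) - X.cdf((y - b) / a)) < 1e-12
```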

Cumulative distribution function (CDF)

Discrete:
  \begin{aligned} F_X(x) = \mathbb{P}(X \le x) = \sum_{k \le x} p_X(k) \\ \end{aligned}
Continuous:
  \begin{aligned} F_X(x) = \mathbb{P}(X \le x) = \int_{-\infty}^x f_X(t)\,dt \quad \therefore \frac{d}{dx}F_X(x) = f_X(x) \\ \end{aligned}
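The density-as-derivative relation can be checked with a finite difference; a sketch using the exponential CDF (\lambda = 1.5, x = 0.8 are arbitrary illustrative values):

```python
from math import exp

# d/dx F_X(x) = f_X(x), checked with a central difference on the
# exponential CDF (lam = 1.5, x = 0.8 are arbitrary).
lam = 1.5
F = lambda x: 1 - exp(-lam * x)     # CDF
f = lambda x: lam * exp(-lam * x)   # density
x, h = 0.8, 1e-6
assert abs((F(x + h) - F(x - h)) / (2 * h) - f(x)) < 1e-6
```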

Source: MITx 6.041x, Lecture 5, 6, 8, 14.

