A point estimate, \(\hat\theta = g(x) \), is a number, whereas an estimator, \(\widehat\Theta = g(X) \), is a random variable.
\[
\begin{aligned}
\hat\theta_{\text{MAP}} &= g_{\text{MAP}}(x): \text{ maximizes } p_{\Theta|X}(\theta\,|\,x) \\
\hat\theta_{\text{LMS}} &= g_{\text{LMS}}(x) = \mathbb{E}[\Theta \,|\, X=x]
\end{aligned}
\]
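To make the two rules concrete, here is a minimal Python sketch; the three-valued prior and the coin-toss likelihood are hypothetical choices of mine, not from the lecture.

```python
import numpy as np

# Hypothetical example: estimate a coin's bias Theta from k heads in n tosses.
thetas = np.array([0.3, 0.5, 0.7])      # possible values of Theta
prior  = np.array([0.25, 0.50, 0.25])   # p_Theta(theta)
n, k   = 10, 7                          # observation: 7 heads in 10 tosses

# Likelihood p_{X|Theta}(k | theta), up to the common binomial coefficient
likelihood = thetas**k * (1 - thetas)**(n - k)

# Posterior p_{Theta|X}(theta | x) by Bayes' rule
posterior = prior * likelihood
posterior /= posterior.sum()

theta_map = thetas[np.argmax(posterior)]  # MAP: maximizes the posterior
theta_lms = np.dot(thetas, posterior)     # LMS: posterior mean E[Theta | X=x]
print(theta_map, theta_lms)
```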
Conditional probability of error
\[
\mathbb{P}(\hat\theta \ne \Theta \,|\, X=x) \qquad \text{smallest under the MAP rule}
\]
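This is because, for any point estimate \(\hat\theta\),
\[
\mathbb{P}(\hat\theta \ne \Theta \,|\, X=x) = 1 - p_{\Theta|X}(\hat\theta\,|\,x),
\]
so maximizing the posterior is the same as minimizing the conditional probability of error.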
Overall probability of error
\[
\begin{aligned}
\mathbb{P}(\widehat\Theta \ne \Theta) &= \int \mathbb{P}(\widehat\Theta \ne \Theta \,|\, X=x)\, f_X(x)\, dx \\
&= \sum_\theta \mathbb{P}(\widehat\Theta \ne \Theta \,|\, \Theta=\theta)\, p_\Theta(\theta)
\end{aligned}
\]
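As a sanity check, the sketch below (reusing the hypothetical coin model above) computes the overall error probability of the MAP rule both ways; the two decompositions agree.

```python
import numpy as np
from math import comb

# Same hypothetical coin model: Theta is the coin's bias, X the head count.
thetas = np.array([0.3, 0.5, 0.7])
prior  = np.array([0.25, 0.50, 0.25])
n = 10

# Binomial pmf p_{X|Theta}(k | theta) for every k = 0..n and every theta
like = np.array([[comb(n, k) * t**k * (1 - t)**(n - k) for t in thetas]
                 for k in range(n + 1)])          # shape (n+1, 3)

p_x   = like @ prior                              # marginal p_X(k)
post  = like * prior / p_x[:, None]               # posterior p_{Theta|X}(. | k)
map_i = post.argmax(axis=1)                       # MAP decision index for each k

# Condition on the observation X = k, then average over k
err_by_x = np.sum(p_x * (1 - post[np.arange(n + 1), map_i]))

# Condition on the true value Theta = theta_j, then average over j
err_by_theta = sum(prior[j] * like[map_i != j, j].sum()
                   for j in range(len(thetas)))

print(err_by_x, err_by_theta)                     # identical up to rounding
```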
Mean squared error (MSE)
\[
\mathbb{E}\left[\big(\Theta - \hat\theta\big)^2\right]
\]
Minimized when \(\hat\theta = \mathbb{E}[\Theta]\), so that
\[
\mathbb{E}\left[\big(\Theta - \hat\theta\big)^2\right] = \mathbb{E}\left[\big(\Theta - \mathbb{E}[\Theta]\big)^2\right] = \text{var}(\Theta) \qquad \text{least mean square (}\color{blue}{\text{LMS}}\text{)}
\]
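This is the usual bias-variance expansion: for any constant \(\hat\theta\),
\[
\mathbb{E}\left[\big(\Theta - \hat\theta\big)^2\right] = \text{var}(\Theta) + \big(\hat\theta - \mathbb{E}[\Theta]\big)^2,
\]
and the squared bias term vanishes exactly at \(\hat\theta = \mathbb{E}[\Theta]\).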
Conditional mean squared error
\[
\mathbb{E}\left[\big(\Theta - \hat\theta\big)^2 \,\big|\, \color{blue}{X=x}\right] \qquad \text{with observation } x
\]
Minimized when \(\hat\theta = \mathbb{E}[\Theta \,\big|\, \color{blue}{X=x}]\), so that
\[
\begin{aligned}
\mathbb{E}\left[\big(\Theta - \hat\theta\big)^2 \,\big|\, \color{blue}{X=x}\right] &= \mathbb{E}\left[\big(\Theta - \mathbb{E}[\Theta \,|\, \color{blue}{X=x}]\big)^2 \,\big|\, \color{blue}{X=x}\right] \\
&= \color{red}{\text{var}(\Theta \,|\, X=x)} \qquad \text{expected performance, given a measurement}
\end{aligned}
\]
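Continuing the hypothetical coin example, the conditional variance is just the variance of the posterior, computed below.

```python
import numpy as np

thetas = np.array([0.3, 0.5, 0.7])
prior  = np.array([0.25, 0.50, 0.25])
n, k   = 10, 7                                   # observed: 7 heads in 10 tosses

likelihood = thetas**k * (1 - thetas)**(n - k)   # binomial coefficient cancels
posterior  = prior * likelihood
posterior /= posterior.sum()

lms      = np.dot(thetas, posterior)             # E[Theta | X=x]
cond_var = np.dot((thetas - lms)**2, posterior)  # var(Theta | X=x)
print(lms, cond_var)
```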
Expected performance of the design:
\[
\mathbb{E}\left[\big(\Theta - \mathbb{E}[\Theta \,|\, \color{blue}{X}]\big)^2\right] = \color{red}{\mathbb{E}\left[\text{var}(\Theta \,|\, X)\right]}
\]
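This is one half of the law of total variance,
\[
\text{var}(\Theta) = \mathbb{E}\left[\text{var}(\Theta \,|\, X)\right] + \text{var}\big(\mathbb{E}[\Theta \,|\, X]\big),
\]
which also shows that \(\mathbb{E}[\text{var}(\Theta \,|\, X)] \le \text{var}(\Theta)\): on average, the measurement can only help.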
Note that \(\hat\theta\) is an estimate whereas \(\widehat\Theta = \mathbb{E}[\Theta\,|\,X]\) is an estimator.
Linear least mean square (LLMS) estimation
\(\quad\) Minimize \(\mathbb{E}\left[(\Theta - aX - b)^2\right]\) w.r.t. \(a, b\)
\[
\begin{aligned}
\widehat\Theta_L &= \mathbb{E}[\Theta] + \frac{\text{cov}(\Theta,X)}{\text{var}(X)}\big(X - \mathbb{E}[X]\big) \\
&= \mathbb{E}[\Theta] + \rho\,\frac{\sigma_\Theta}{\sigma_X}\big(X - \mathbb{E}[X]\big) & \text{only means, variances, and covariances matter}
\end{aligned}
\]
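The coefficients come from setting the partial derivatives of \(\mathbb{E}[(\Theta - aX - b)^2]\) with respect to \(a\) and \(b\) to zero:
\[
a = \frac{\text{cov}(\Theta, X)}{\text{var}(X)}, \qquad b = \mathbb{E}[\Theta] - a\,\mathbb{E}[X].
\]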
\(\quad\) Error variance:
\[
\mathbb{E}\left[(\widehat\Theta_L - \Theta)^2\right] = (1-\rho^2)\cdot\text{var}(\Theta)
\]
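A quick Monte Carlo check confirms the formula; the linear-Gaussian model \(X = \Theta + W\) below is my own toy example, not from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model: X = Theta + W with independent Gaussian noise W
theta = rng.normal(1.0, 2.0, size=1_000_000)   # Theta ~ N(1, 4)
x     = theta + rng.normal(0.0, 1.0, size=theta.size)

a = np.cov(theta, x)[0, 1] / x.var()           # cov(Theta, X) / var(X)
b = theta.mean() - a * x.mean()
theta_hat = a * x + b                          # LLMS estimator

rho = np.corrcoef(theta, x)[0, 1]
print(np.mean((theta_hat - theta)**2))         # empirical MSE
print((1 - rho**2) * theta.var())              # (1 - rho^2) * var(Theta)
```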
Source: MITx 6.041x, Lectures 16 and 17.