
Saturday, January 25, 2014

 

Hacks to help install an Android app on an LG G2

Using Android Studio (Preview 0.4.3) on my MacBook Pro running OS X 10.9.1, when I tried to run an Android app on an LG G2 Android phone connected via USB, I either didn't see the device in the Choose Device dialog at all, or saw it listed but in an offline state. Here are the hacks that got me past these problems. In particular, the part about enabling the Developer Settings on an Android phone seems especially hacky.

  1. On the phone, enable USB Debugging from Developer Settings, which is hidden by default on Android 4.2+ devices like LG G2. To enable USB Debugging, go to
    Settings -> About Phone -> Software Information
    and tap the Build Number seven times. You should now see a new Developer Settings sub-menu inside Settings, which contains the option to enable USB Debugging.
  2. On the Mac, restart adb by issuing 'adb kill-server' followed by 'adb start-server' at a command prompt. The adb command can be found under /Applications/Android Studio.app/sdk/platform-tools.
  3. Disable and re-enable USB Debugging on the phone.
  4. Reboot the phone if it still doesn't work.
(I finally succeeded after many attempts, which also involved switching USB cables and ports, plus accepting a prompt on the phone to trust my Mac's RSA key fingerprint when it tried to connect.)

Good luck!



Sunday, January 19, 2014

 

From binomial to Poisson

It turns out that for large \(n\) and small \(p\), a binomial distribution can be approximated by a Poisson distribution with \(\lambda = np\). Why?
  \[ \begin{aligned} {n \choose k} p^k (1-p)^{n-k} &= {n! \over (n-k)!\, k!} ({\lambda \over n})^k (1 - {\lambda \over n})^{n-k} \\ &= {n(n-1) \cdots (n-k+1) \over n^k} {\lambda^k \over k!} {\color{blue}{(1 - {\lambda \over n})^n} \over (1 - {\lambda \over n})^k} \\ &= \color{blue}{e^{-\lambda}} {\lambda^k \over k!} & \text{as }n \rightarrow \infty \\ \end{aligned} \]
which is the Poisson distribution !

\(\Box\)
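
As a quick numeric sanity check (a small Python sketch; scipy and the particular values of \(n\) and \(p\) are just illustrative choices, not part of the derivation):

    # Compare the binomial(n, p) pmf against the Poisson(np) pmf for large n, small p.
    from scipy.stats import binom, poisson

    n, p = 1000, 0.003          # illustrative: large n, small p
    lam = n * p
    for k in range(8):
        print(f"k={k}  binomial={binom.pmf(k, n, p):.6f}  poisson={poisson.pmf(k, lam):.6f}")

The two columns agree to several decimal places, as expected.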

Source: Kiryl Tsishchanka


 

Maximum binomial

At what value of \(k\) is the maximum of a binomial distribution attained?
  \[ \begin{aligned} f(k, n, p) = {n \choose k} p^k (1-p)^{n-k} \end{aligned} \]
Your first instinct might be to take the derivative, but that would quickly turn into a horrific mess. A nicer route is to compare the \((k+1)\)st term with the \(k\)th term:
  \[ \begin{aligned} \rho &= {f(k+1, n, p) \over f(k, n, p)} \\ &= {{n \choose k+1} p^{k+1} (1-p)^{n-k-1} \over {n \choose k} p^k (1-p)^{n-k}} \\ &= {(n-k) p \over (k+1)(1-p) } \\ \end{aligned} \]
If \(\rho > 1\), \(f\) increases as \(k\) increases, whereas if \(\rho < 1\), \(f\) decreases as \(k\) increases. The maximum is attained when \(\rho = 1\), which is when:
  \[ \begin{aligned} (k+1)(1-p) &= (n-k) p \\ k+1-p &= np \\ k+1 &= (n+1)p \\ \end{aligned} \]
So if \((n+1)p\) is an integer, then \(k+1 = (n+1)p\) yields the maximum value \(M\).
  \[ \begin{aligned} M = f(k+1, n, p) = f((n+1)p, n, p) \\ \end{aligned} \]
But given \(\displaystyle \rho = 1 = {f(k+1, n, p) \over f(k, n, p)} \), both \((k+1)\) and \(k\) would yield the same \(M\):
  \[ \begin{aligned} M &= f(k+1, n, p) = f((n+1)p, n, p) \\ &= f(k, n, p) = f((n+1)p-1, n, p) \\ \end{aligned} \]
If \((n+1)p\) is not an integer, then M would be attained when \(k + 1 = \lfloor (n+1)p \rfloor \) or \(k = \lfloor (n+1)p \rfloor - 1 \).

Therefore, \(k \approx np\).

\(\Box\)
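
A quick brute-force check (a Python sketch; scipy and the \((n, p)\) pairs are arbitrary choices) that the argmax of the pmf lands at \(\lfloor (n+1)p \rfloor\), or one below it when \((n+1)p\) is an integer:

    # Verify that the binomial pmf peaks at floor((n+1)p), or (n+1)p - 1 when (n+1)p is an integer.
    from math import floor
    from scipy.stats import binom

    for n, p in [(10, 0.3), (20, 0.5), (9, 0.5), (100, 0.37)]:
        k_max = max(range(n + 1), key=lambda k: binom.pmf(k, n, p))
        print(n, p, "argmax =", k_max, " floor((n+1)p) =", floor((n + 1) * p))

For \((n, p) = (9, 0.5)\), \((n+1)p = 5\) is an integer, and both \(k = 4\) and \(k = 5\) attain the maximum (the script reports the first one).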


Friday, January 17, 2014

 

\(1 - {1 \over x} \le \ln{x} \le x - 1 \)


Sunday, January 12, 2014

 

Week 1 Assignment 1 Q3 Independence

For \(n \ge 3\), random variables \(X_1, X_2, \cdots, X_n\) are mutually independent if
  \[ \begin{aligned} p(x_1, x_2, \cdots, x_n) &= p(x_1) p(x_2) \cdots p(x_n) \\ \end{aligned} \]
and are pairwise independent if \(X_i\) and \(X_j\) are independent, ie
  \[ \begin{aligned} p(x_i, x_j) &= p(x_i) p(x_j) \\ \end{aligned} \]
for all \(1 \le i < j \le n\).

What would be an example where pairwise independence holds but not mutual independence? Here is an idea. Imagine a slot machine that has 4 equally likely outcomes:
  \[ \begin{aligned} (1,2), (1,3), (2,3), \text{FAIL} \end{aligned} \]
So, for example,
  \[ \begin{aligned} p(1) &= p(2) = (1+1)/4 = 1/2 \\ p(1,2) &= p(1) p(2) = 1/4 & \text{pairwise independent} \\ p(1,2,3) &= 0 \not = p(1) p(2) p(3) & \text{not mutually independent} \\ \end{aligned} \]
\(\Box\)
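
The same slot machine can be checked mechanically (a small Python sketch; the empty tuple below stands in for the FAIL outcome):

    # Four equally likely outcomes; p(i, j, ...) is the probability that all the given digits appear.
    from itertools import combinations

    outcomes = [(1, 2), (1, 3), (2, 3), ()]      # () plays the role of FAIL
    prob = 1 / len(outcomes)

    def p(*digits):
        return sum(prob for o in outcomes if all(d in o for d in digits))

    for i, j in combinations([1, 2, 3], 2):
        print(f"p({i},{j}) = {p(i, j)}  vs  p({i})p({j}) = {p(i) * p(j)}")
    print("p(1,2,3) =", p(1, 2, 3), " vs  p(1)p(2)p(3) =", p(1) * p(2) * p(3))

Every pair factorizes (both sides are \(1/4\)), yet the triple does not (\(0 \ne 1/8\)).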

Source: Information Theory.


 

Infinite Entropy

What would be a good example of a distribution with infinite entropy? Let
  \[ \begin{aligned} p(x) &= {1 \over Ax\log^2{x}} & \text{where } A = \sum_x {1 \over x\log^2{x}} \\ \end{aligned} \]
Note that \(A\) (summing over integers \(x \ge 2\)) converges, since \(\displaystyle \int {1 \over x\log^2{x}}\,dx = -{1 \over \log{x}}\). Now,
  \[ \begin{aligned} H(X) &= - \sum_x p(x) \log{p(x)} \\ &= - \sum_x {1 \over Ax\log^2{x}} \log{{1 \over Ax\log^2{x}}} \\ &= \sum_x {\log{(Ax\log^2{x})} \over Ax\log^2{x}} \\ &= \sum_x \left[ {\log{A} \over Ax\log^2{x}} + {\log{x} \over Ax\log^2{x}} + {\log{\log^2{x}} \over Ax\log^2{x}} \right] \\ &= \sum_x \left[ {\log{A} \over Ax\log^2{x}} + \color{blue}{1 \over Ax\log{x}} + {\log{\log^2{x}} \over Ax\log^2{x}} \right] \\ \end{aligned} \]
But the middle term (in blue) diverges, as \(\displaystyle \int {1 \over x\log{x}}\,dx = \log\log x\). Hence \(H(X)\) diverges. \(\Box\)
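
A small numeric illustration (a Python sketch using the natural log and summing over \(2 \le x \le N\); the cutoffs \(N\) are arbitrary) shows the normalizer settling down while the partial entropy keeps creeping up:

    # p(x) proportional to 1/(x log^2 x) for x = 2..N: A converges,
    # but the partial entropy grows (very slowly) without bound.
    from math import log

    def partial_entropy(N):
        xs = range(2, N + 1)
        A = sum(1 / (x * log(x) ** 2) for x in xs)
        ps = [1 / (A * x * log(x) ** 2) for x in xs]
        return A, -sum(p * log(p) for p in ps)

    for N in [10**2, 10**4, 10**6]:
        A, H = partial_entropy(N)
        print(f"N={N:>8}  A={A:.4f}  H={H:.4f}")

The growth is glacial (the divergent piece behaves like \(\log\log N\)), but it never stops.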

Source: math.stackexchange.com.


Saturday, January 11, 2014

 

Mutual Information

For random variables \(X\) and \(Y\), the mutual information between them is defined as:
  \[ \begin{aligned} I(X;Y) = \sum_{x,y}p(x,y)\,\log{p(x,y) \over p(x)\,p(y)} = E \log{p(X,Y) \over p(X)\,p(Y)} \end{aligned} \]
which is symmetrical. Alternatively,
  \[ \begin{aligned} I(X;Y) = \sum_{x,y}p(x,y)\,\log{p(x\,|\,y) \over p(x)} = E \log{p(X|Y) \over p(X)} \end{aligned} \]
Interestingly, the mutual information of a random variable with itself, called the self-information, is the entropy of the variable, ie \(I(X;X) = H(X)\), for
  \[ \begin{aligned} I(X;X) &= E \log{p(X,X) \over p(X) p(X)} = E \log{p(X) \over p(X) p(X)} \\ &= - E \log{p(X)} \\ &= H(X) \\ \end{aligned} \]
Proposition 2.19, provided the entropies and conditional entropies are finite:
  \[ \begin{aligned} I(X;Y) &= H(X) - H(X|\,Y) \\ \end{aligned} \]
and
  \[ \begin{aligned} I(X;Y) &= H(X) + H(Y) - H(X,Y) \\ \end{aligned} \]
Both equations can be easily verified. For example,
  \[ \begin{aligned} H(X) - H(X|\,Y) &= -E\log p(X) + E \log{p(X|\,Y)} \\ &= -E\log p(X) + E \log{p(X,Y) \over p(Y)} \\ &= E \log{p(X,Y) \over p(X) p(Y)} \\ &= I(X;Y) \\ \end{aligned} \]
Furthermore, for random variables \(X\), \(Y\), and \(Z\), the mutual information between \(X\) and \(Y\) conditioning on \(Z\) is defined as:
  \[ \begin{aligned} I(X;Y\,|\,Z) = \sum_{x,y,z}p(x,y,z)\,\log{p(x,y\,|\,z) \over p(x\,|\,z)\,p(y\,|\,z)} = E \log{p(X,Y\,|\,Z) \over p(X\,|\,Z)\,p(Y\,|\,Z)} \end{aligned} \]
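
Both identities are also easy to confirm numerically (a Python/numpy sketch on an arbitrary \(3 \times 3\) joint distribution, in bits):

    # Check I(X;Y) = H(X) - H(X|Y) = H(X) + H(Y) - H(X,Y) on a small joint distribution.
    import numpy as np

    pxy = np.array([[0.20, 0.05, 0.10],
                    [0.05, 0.30, 0.05],
                    [0.05, 0.05, 0.15]])      # an arbitrary joint p(x, y); sums to 1
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)

    def H(p):
        p = p[p > 0]
        return -(p * np.log2(p)).sum()

    mi = sum(pxy[i, j] * np.log2(pxy[i, j] / (px[i] * py[j]))
             for i in range(3) for j in range(3) if pxy[i, j] > 0)
    H_x_given_y = -sum(pxy[i, j] * np.log2(pxy[i, j] / py[j])
                       for i in range(3) for j in range(3) if pxy[i, j] > 0)
    print(mi, H(px) - H_x_given_y, H(px) + H(py) - H(pxy.ravel()))   # all three agree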

Source: Information Theory.


 

Entropy

The entropy \(H(X)\) of a random variable \(X\) is defined as
  \[ \begin{aligned} \color{blue}{H(X) = - \sum_x p(x) \log{p(x)}} \end{aligned} \]
which measures the uncertainty of \(X\). Note \(H(X)\) depends only on the distribution of \(X\) but not on the actual values taken by \(X\). For example, here is a cute binary entropy function for \(X \thicksim \{\gamma, 1-\gamma\}\):
  \[ \begin{aligned} h_b(\gamma) = - \gamma \log{\gamma} - (1-\gamma) \log{(1-\gamma)} \\ \end{aligned} \]
Using the definition of expectation:
  \[ \begin{aligned} E g(X) = \sum_x p(x) g(x) \\ \end{aligned} \]
entropy can also be expressed compactly:
  \[ \begin{aligned} \color{blue}{H(X) = - E \log{p(X)}} = - \sum_x p(x) \log{p(x)} \end{aligned} \]
The joint entropy takes the same compact form:
  \[ \begin{aligned} H(X,Y) = - E \log{p(X,Y)} = - \sum_{x,y} p(x,y) \log{p(x,y)} \\ \end{aligned} \]
The conditional entropy of \(Y\) given \(X\), however, expands further:
  \[ \begin{aligned} \color{blue}{H(Y\,|\,X) = - E \log{p(Y\,|\,X)}} &= - \sum_{x,y} p(x,y) \log{p(y\,|\,x)} \\ &= - \sum_x \sum_y p(x) p(y\,|\,x) \log{p(y\,|\,x)} \\ &= \sum_x p(x) \left[-\sum_y p(y\,|\,x) \log{p(y\,|\,x)}\right] \\ &= \color{blue}{\sum_x p(x)\, H(Y\,|\,X=x)} \\ \end{aligned} \]
Similarly,
  \[ \begin{aligned} H(Y\,|\,X,Z) = \sum_z p(z)\, H(Y\,|\,X,Z=z) \\ \end{aligned} \]
where
  \[ \begin{aligned} H(Y\,|\,X,Z=z) = -\sum_{x,y} p(x,y\,|\,z) \log{p(y\,|\,x,z)} \\ \end{aligned} \]
Now it's not hard to show that
  \[ \begin{aligned} \color{blue}{H(X,Y) = H(X) + H(Y\,|\,X)} \\ \end{aligned} \]
for
  \[ \begin{aligned} H(X,Y) &= -E \log{p(X,Y)} = -E \log{\left[p(X) \, p(Y\,|\,X)\right]} \\ &= -E \log{p(X)} - E \log{p(Y\,|\,X)} \\ &= H(X) + H(Y|X) \\ \end{aligned} \]
In general, there is the chain rule for entropy:
  \[ \begin{aligned} \color{blue}{H(X_1,X_2, \cdots, X_n)} &\color{blue}{= \sum_{i=1}^n H(X_i|X_1, \cdots , X_{i-1})} \\ \end{aligned} \]
which can be easily verified by induction. The base case (\(n = 2\)) is the identity shown above. Suppose the formula holds for \(n\); then
  \[ \begin{aligned} H(X_1,X_2, \cdots, X_n, X_{n+1}) &= H(X_{n+1}| X_1, \cdots , X_n) + H(X_1, \cdots , X_n) \\ &= H(X_{n+1}| X_1, \cdots , X_n) + \sum_{i=1}^{n} H(X_i|X_1, \cdots , X_{i-1}) & \text{induction hypothesis} \\ &= \sum_{i=1}^{n+1} H(X_i|X_1, \cdots , X_{i-1}) \\ \end{aligned} \]
\(\Box\)
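
Here is a quick numeric confirmation of \(H(X,Y) = H(X) + H(Y\,|\,X)\) (a Python/numpy sketch on an arbitrary \(2 \times 2\) joint distribution, in bits):

    # Check the chain rule H(X,Y) = H(X) + H(Y|X) on a small joint distribution.
    import numpy as np

    pxy = np.array([[0.10, 0.25],
                    [0.40, 0.25]])          # arbitrary joint p(x, y); sums to 1
    px = pxy.sum(axis=1)

    def H(p):
        p = p[p > 0]
        return -(p * np.log2(p)).sum()

    H_y_given_x = -sum(pxy[i, j] * np.log2(pxy[i, j] / px[i])
                       for i in range(2) for j in range(2) if pxy[i, j] > 0)
    print(H(pxy.ravel()), H(px) + H_y_given_x)    # the two numbers agree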

Source: Information Theory.


 

Markov subchains

A subchain of a Markov chain is also a Markov chain. How to prove this? Here is an idea. Suppose there is a Markov chain passing through \(X_i\) and \(X_n\):
  \[ \begin{aligned} \cdots \rightarrow X_i \rightarrow \color{blue}{X_j \rightarrow \cdots} \rightarrow X_n \rightarrow \cdots \\ \end{aligned} \]
This means
  \[ \begin{aligned} p(x_1, \cdots, x_i, \color{blue}{x_j, \cdots,} x_n, \cdots) &= p(x_1,x_2) p(x_3|x_2) \cdots \color{blue}{p(x_j|x_i) \cdots p(x_n|x_{n-1})} \cdots \\ &= p(x_1,x_2) {p(x_2,x_3) \over p(x_2)} \cdots {\color{blue}{p(x_i,x_j)} \over p(x_i)} \color{blue}{\cdots {p(x_n,x_{n-1}) \over p(x_{n-1})}} \cdots \\ \end{aligned} \]
Now marginalize out (sum over) the variables from \(X_j\) to \(X_{n-1}\) inclusive, resulting in
  \[ \begin{aligned} p(x_1, \cdots, \color{teal}{x_i, x_n}, \cdots) &= p(x_1,x_2) {p(x_2,x_3) \over p(x_2)} \cdots {\color{teal}{p(x_i, x_n)} \over p(x_i)} \cdots \\ \end{aligned} \]
All other parts remain unchanged, but this means
  \[ \begin{aligned} \cdots \rightarrow X_i \rightarrow X_n \rightarrow \cdots \\ \end{aligned} \]
\(\Box\)

Formally, let \(\mathcal{N}_n = \{1,2,...,n\}\,\) and let \(X_1 \rightarrow X_2 \rightarrow \cdots \rightarrow X_n\) form a Markov chain. For any subset \(\alpha\) of \(\mathcal{N}_n\), denote \(X_i, i\in \alpha\) by \(X_\alpha\). Then for any disjoint subsets \(\alpha_1, \alpha_2, \cdots, \alpha_m\) of \(\mathcal{N}_n\) such that
  \[ \begin{aligned} k_1 < k_2 < \cdots < k_m \end{aligned} \]
for all \(k_j \in \alpha_j,\, j = 1, 2, \cdots, m,\)
  \[ \begin{aligned} X_{\alpha_1} \rightarrow X_{\alpha_2} \rightarrow \cdots \rightarrow X_{\alpha_m} \end{aligned} \]
forms a Markov chain.
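
The telescoping argument can also be checked numerically (a Python/numpy sketch with random \(3\)-state transition matrices; dropping \(X_3\) from \(X_1 \rightarrow X_2 \rightarrow X_3 \rightarrow X_4\) should leave \(X_1 \rightarrow X_2 \rightarrow X_4\) Markov):

    # Build the joint of a 4-step Markov chain, marginalize out X3, and check
    # that p(x4 | x1, x2) = p(x4 | x2), ie X1 -> X2 -> X4 is still Markov.
    import numpy as np

    rng = np.random.default_rng(0)
    def stochastic(shape):
        m = rng.random(shape)
        return m / m.sum(axis=-1, keepdims=True)

    p1 = stochastic((3,))                                   # p(x1)
    T12, T23, T34 = (stochastic((3, 3)) for _ in range(3))  # p(x2|x1), p(x3|x2), p(x4|x3)
    joint = np.einsum('a,ab,bc,cd->abcd', p1, T12, T23, T34)

    p124 = joint.sum(axis=2)                                # marginalize out x3
    p12, p24 = p124.sum(axis=2), p124.sum(axis=0)
    p2 = p12.sum(axis=0)

    p4_given_12 = p124 / p12[:, :, None]
    p4_given_2 = p24 / p2[:, None]
    print(np.allclose(p4_given_12, p4_given_2[None, :, :]))   # True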

Source: Information Theory.


 

\(p(x,y,z) = a(x,y)b(y,z)\)

Proposition 2.5 For random variables \(X\), \(Y\), and \(Z\), \(X \perp Z\,|\,Y\) if and only if:
  \[ \begin{aligned} p(x,y,z) = a(x,y)b(y,z) \\ \end{aligned} \]
Why? The only-if part is rather trivial. To prove the "if" part, the key is to consider \(p(x,y)\), \(p(y,z)\), and \(p(y)\) individually, expressing each in terms of the functions \(a\) and \(b\). Here we go. Suppose \(\color{teal}{p(x,y,z) = a(x,y)b(y,z)}\).
  \[ \begin{aligned} \color{blue}{p(x,y)} &= \sum_z \color{teal}{p(x,y,z)} = \sum_z \color{teal}{a(x,y) b(y,z)} = \color{blue}{a(x,y) \sum_z b(y,z)} \\ \color{orange}{p(y,z)} &= \sum_x \color{teal}{p(x,y,z)} = \sum_x \color{teal}{a(x,y) b(y,z)} = \color{orange}{b(y,z) \sum_x a(x,y)} \\ p(y) &= \sum_z \color{orange}{p(y, z)} = \sum_z \left( \color{orange}{b(y,z) \sum_x a(x,y)} \right) \\ &= \left(\sum_x a(x,y)\right) \left(\sum_z b(y,z)\right) \\ &= \color{orange}{p(y,z) \over b(y,z)} \color{blue}{p(x,y) \over a(x,y)} \\ &= \frac{p(x,y) p(y,z)}{\color{teal}{p(x,y,z)}} \\ \therefore p(x,y,z) &= \frac{p(x,y) p(y,z)}{p(y)} & \text{ie }X \perp Z\,|\,Y \\ \end{aligned} \]
\(\Box\)
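
A quick numeric check of the "if" direction (a Python/numpy sketch; the factors \(a\) and \(b\) are just random nonnegative tables):

    # Build p(x,y,z) proportional to a(x,y) b(y,z) and check that X and Z are
    # conditionally independent given Y, ie p(x,y,z) p(y) = p(x,y) p(y,z) everywhere.
    import numpy as np

    rng = np.random.default_rng(1)
    a = rng.random((2, 3))                  # arbitrary nonnegative a(x, y)
    b = rng.random((3, 4))                  # arbitrary nonnegative b(y, z)

    p = np.einsum('xy,yz->xyz', a, b)
    p /= p.sum()                            # normalize into a joint distribution

    pxy, pyz = p.sum(axis=2), p.sum(axis=0)
    py = pxy.sum(axis=0)
    print(np.allclose(p * py[None, :, None], pxy[:, :, None] * pyz[None, :, :]))   # True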

Source: Information Theory.


 

\(X \perp Z\, | \,Y\)

Definition 2.4 (Conditional Independence): \(X\) is independent of \(Z\) conditioning on \(Y\), if
  \[ \begin{aligned} p(x, y, z) &= \\ & \begin{cases} {p(x,y)p(y,z) \over p(y)} = p(x,y)p(z|\,y) & \text{if }p(y) > 0 \\ 0 & \text{otherwise.} \\ \end{cases} \end{aligned} \]
Note:
  \[ \begin{aligned} {p(x,y)p(y,z) \over p(y)} = p(x,y)p(z|\,y) = p(x|\,y)p(y,z) = p(x)p(y|\,x)p(z|\,y) \end{aligned} \]

Source: Information Theory.


Friday, January 10, 2014

 

Proposition 2.7 Markov Chain

For random variables \(X_1, X_2, \cdots , X_n\), where \(n \ge 3\), \(X_1 \rightarrow X_2 \rightarrow \cdots \rightarrow X_n\) forms a Markov chain if
  \[ \begin{aligned} p(x_1, x_2, \cdots, x_n) &= \\ & \begin{cases} p(x_1,x_2)p(x_3|\,x_2)\cdots p(x_n|\,x_{n-1}) & \text{if } p(x_2), p(x_3), \cdots, p(x_{n-1}) > 0 \\ 0 & \text{otherwise}. \end{cases} \end{aligned} \]
Show that \(X_1 \rightarrow X_2 \rightarrow \cdots \rightarrow X_n\) forms a Markov Chain if and only if \(X_n \rightarrow X_{n-1} \rightarrow \cdots \rightarrow X_1\) forms a Markov Chain.

By definition,
  \[ \begin{aligned} p(a |\, b) &= {p(a, b) \over p(b) } \\ \end{aligned} \]
Furthermore, if \(X_1 \rightarrow X_2 \rightarrow \cdots \rightarrow X_n\) forms a Markov Chain,
  \[ \begin{aligned} p(x_1, x_2, \cdots, x_n) &= p(x_1,x_2)p(x_3|\,x_2)\cdots p(x_{n-1}|\,x_{n-2}) p(x_n|\,x_{n-1}) \\ &= p(x_1,x_2){p(x_3, x_2) \over p(x_2)} \cdots {p(x_{n-1},x_{n-2}) \over p(x_{n-2})} {p(x_n,x_{n-1}) \over p(x_{n-1})} \\ &= {p(x_1,x_2) \over p(x_2)} {p(x_3, x_2) \over p(x_3)} \cdots {p(x_{n-2},x_{n-1}) \over p(x_{n-1})} p(x_n,x_{n-1}) \\ &= p(x_n,x_{n-1})p(x_{n-2}|\,x_{n-1})\cdots p(x_2|\,x_3) p(x_1|\,x_2) \\ \end{aligned} \]
\(\therefore X_n \rightarrow X_{n-1} \rightarrow \cdots \rightarrow X_1\) forms a Markov Chain.

\(\Box\)
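
And a numeric spot check of the reversal for a short chain (a Python/numpy sketch with random \(3\)-state transitions; the reversed Markov property amounts to \(p(x_1\,|\,x_2,x_3) = p(x_1\,|\,x_2)\)):

    # For X1 -> X2 -> X3, check p(x1 | x2, x3) = p(x1 | x2).
    import numpy as np

    rng = np.random.default_rng(2)
    def stochastic(shape):
        m = rng.random(shape)
        return m / m.sum(axis=-1, keepdims=True)

    p1, T12, T23 = stochastic((3,)), stochastic((3, 3)), stochastic((3, 3))
    joint = np.einsum('a,ab,bc->abc', p1, T12, T23)      # p(x1, x2, x3)

    p23, p12 = joint.sum(axis=0), joint.sum(axis=2)
    p2 = p12.sum(axis=0)
    print(np.allclose(joint / p23[None, :, :], (p12 / p2[None, :])[:, :, None]))   # True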

Source: Information Theory.


Thursday, January 09, 2014

 

\(\widehat{f^{(n)}} = (ik)^n\cdot\widehat{f}\)

The Fourier Transform of a function \(f(x)\) can be defined as:
  \[ \begin{aligned} \color{blue}{\widehat{f(x)} = {1 \over \sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-ikx} f(x)\,dx} \\ \end{aligned} \]
So,
  \[ \begin{aligned} \color{teal}{\widehat{f'(x)}} &\color{teal}{= {1 \over \sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-ikx} f'(x)\,dx} & \text{replacing } f(x) \text{ with } f'(x) \\ \end{aligned} \]
Using integration by parts, let
  \[ \begin{aligned} dv &= f'(x)\,dx \\ v &= f(x) \\ u &= e^{-ikx} \\ du &= -ike^{-ikx}\,dx \\ \end{aligned} \]
Now,
  \[ \begin{aligned} \color{teal}{{1 \over \sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-ikx} f'(x)\,dx} &= {1 \over \sqrt{2\pi}} \int_{-\infty}^{\infty} u\,dv \\ &= {1 \over \sqrt{2\pi}} \left( uv \bigg|_{-\infty}^{\infty} - \int_{-\infty}^{\infty} v\,du \right) \\ &= \color{blue}{{1 \over \sqrt{2\pi}}} \left( e^{-ikx} f(x) \bigg|_{-\infty}^{\infty} + ik \color{blue}{\int_{-\infty}^{\infty} e^{-ikx}f(x)\,dx} \right) \\ \end{aligned} \]

Suppose \(f(x) \rightarrow 0 \) as \(x \rightarrow \pm \infty\),
  \[ \begin{aligned} \color{teal}{{1 \over \sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-ikx} f'(x)\,dx} &= ik \color{blue}{{1 \over \sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-ikx}f(x)\,dx} \\ \therefore \widehat{f'(x)} &= ik \cdot \widehat{f(x)} \\ \end{aligned} \]
Applying this repeatedly,
  \[ \begin{aligned} \widehat{f^{(n)}} &= (ik)^{n} \cdot \widehat{f} \\ \end{aligned} \]
\(\Box\)
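
The identity is also easy to see numerically with the discrete FFT (a Python/numpy sketch differentiating a Gaussian; the domain size and grid are arbitrary choices, and the \(2\pi\) factor converts numpy's frequencies into wavenumbers):

    # Differentiate f(x) = exp(-x^2) by multiplying its FFT by ik.
    import numpy as np

    L, n = 20.0, 256
    x = np.linspace(-L / 2, L / 2, n, endpoint=False)
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)        # wavenumbers

    f = np.exp(-x**2)                                 # decays to ~0 at the boundaries
    df_exact = -2 * x * np.exp(-x**2)
    df_fft = np.real(np.fft.ifft(1j * k * np.fft.fft(f)))
    print(np.max(np.abs(df_fft - df_exact)))          # tiny (near machine precision)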

Source: Computational Methods for Data Analysis.


Tuesday, January 07, 2014

 

Fourier Series

Fourier made the astounding claim that any (reasonably well-behaved) function can be represented by an infinite sum of sines and cosines. That is,
  \[ \begin{aligned} f(x) = \frac{a_0}{2} + \sum_{n=1}^\infty (\color{blue}{a_n} \cos{nx} + \color{blue}{b_n} \sin{nx}) & x \in (-\pi, \pi] \end{aligned} \]
Supposing that is true, how do we figure out the coefficients \(a_n\) and \(b_n\)? Multiply and integrate! For example,
  \[ \begin{aligned} \int_{-\pi}^{\pi} f(x) \cos{mx}\,dx &= \int_{-\pi}^{\pi} \left(\frac{a_0}{2}\cos{mx} + \sum_{n=1}^\infty (a_n \cos{nx}\cos{mx} + b_n \sin{nx}\cos{mx})\right)\,dx \\ &= \frac{a_0}{2m}\sin{mx} \big|_{-\pi}^{\pi} + \sum_{n=1}^\infty \left( \int_{-\pi}^{\pi} a_n \cos{nx}\cos{mx} + \int_{-\pi}^{\pi} b_n \sin{nx}\cos{mx} \right)\,dx \\ &= \sum_{n=1}^\infty \left( a_n \int_{-\pi}^{\pi} \cos{nx}\cos{mx}\,dx + b_n \int_{-\pi}^{\pi} \sin{nx}\cos{mx}\,dx \right) \\ \end{aligned} \]
But as we saw earlier, \(\displaystyle \int_{-\pi}^{\pi} \sin{nx}\cos{mx}\,dx\) simply evaluates to zero, whereas \(\displaystyle \int_{-\pi}^{\pi} \cos{nx}\cos{mx}\,dx\) also evaluates to zero except when \(m = n\), in which case it evaluates to \(\pi\). So,
  \[ \begin{aligned} \int_{-\pi}^{\pi} f(x) \cos{nx}\,dx = a_n \pi \\ \therefore \color{blue}{a_n = {1 \over \pi} \int_{-\pi}^{\pi} f(x) \cos{nx}\,dx} \\ \end{aligned} \]
Similarly, we can multiply both sides with \(\sin{mx}\) and integrate, and come up with:
  \[ \begin{aligned} \color{blue}{b_n} &\color{blue}{= {1 \over \pi} \int_{-\pi}^{\pi} f(x) \sin{nx}\,dx} \\ \end{aligned} \]
Furthermore, similar methods can be applied in the complex exponential setting. Suppose a function can be represented as:
  \[ \begin{aligned} f(x) &= \sum_{-\infty}^{\infty} \color{blue}{c_n} e^{in \pi x/L} & x \in [-L, L] \\ \end{aligned} \]
To find \(c_n\), we can multiply both sides by \(\displaystyle e^{-im \pi x/L}\) and then integrate from \(-L\) to \(L\). The integrals all evaluate to zero except when \(n=m\), in which case the value is \(2L\). So
  \[ \begin{aligned} \color{blue}{c_n} &\color{blue}{= {1 \over 2L} \int_{-L}^{L} f(x) e^{-in \pi x/L}\,dx} \\ \end{aligned} \]
\(\Box\)
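
To see the coefficient formulas in action, here is a small numerical check (a Python/scipy sketch) on \(f(x) = x^2\), whose Fourier series on \((-\pi, \pi]\) is known to be \(\pi^2/3 + \sum_n 4(-1)^n \cos{nx}/n^2\):

    # Compute a_0, a_n, b_n for f(x) = x^2 by numerical integration.
    import numpy as np
    from scipy.integrate import quad

    f = lambda x: x**2
    a0 = quad(f, -np.pi, np.pi)[0] / np.pi
    print("a0/2 =", a0 / 2, " expected:", np.pi**2 / 3)
    for n in range(1, 5):
        an = quad(lambda x: f(x) * np.cos(n * x), -np.pi, np.pi)[0] / np.pi
        bn = quad(lambda x: f(x) * np.sin(n * x), -np.pi, np.pi)[0] / np.pi
        print(f"n={n}  a_n={an:+.4f} (expected {4 * (-1)**n / n**2:+.4f})  b_n={bn:+.4f}")

The \(b_n\) all come out (numerically) zero, as they should for an even function.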

Source: Computational Methods for Data Analysis.


 

\(\int_{-\pi}^{\pi}\cos{nx}\cos{mx}\,dx\)

What is the value of this integral?

Surprisingly, it's always zero when \(n \not = m\), but when \(n = m\) the value is exactly \(\pi\)!

How so? When \(n \not = m\), steps similar to those used to evaluate \(\displaystyle \int_{-\pi}^{\pi}\sin{nx}\cos{mx}\,dx\) can be used to show that the value is zero. When \(n = m\), however, some of the terms "collapse" early into the constant \(\displaystyle \frac{1}{2}\) before the integration, and that's the key step that shows why the value ends up as \(\pi\). Rather cool.
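
Both orthogonality facts (this one and the \(\displaystyle \int_{-\pi}^{\pi}\sin{nx}\cos{mx}\,dx\) one below) are easy to spot-check numerically (a Python/scipy sketch over a few small \(n, m\)):

    # Numerically evaluate the two orthogonality integrals for n, m = 1..3.
    import numpy as np
    from scipy.integrate import quad

    for n in range(1, 4):
        for m in range(1, 4):
            cc = quad(lambda x: np.cos(n * x) * np.cos(m * x), -np.pi, np.pi)[0]
            sc = quad(lambda x: np.sin(n * x) * np.cos(m * x), -np.pi, np.pi)[0]
            print(f"n={n} m={m}  cos*cos={cc:+.4f}  sin*cos={sc:+.4f}")

The cos*cos column is \(\approx \pi\) on the diagonal and \(\approx 0\) elsewhere; the sin*cos column is always \(\approx 0\).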


Monday, January 06, 2014

 

\(\int_{-\pi}^{\pi}\sin{nx}\cos{mx}\,dx\)

What is the value of this integral?

Thanks to Euler, we know that
  \[ \begin{aligned} \sin{\theta} &= \frac{e^{i\theta}-e^{-i\theta}}{2i} \\ \cos{\theta} &= \frac{e^{i\theta}+e^{-i\theta}}{2} \\ \therefore \sin{nx} &= \frac{e^{inx}-e^{-inx}}{2i} \\ \cos{mx} &= \frac{e^{imx}+e^{-imx}}{2} \\ \end{aligned} \]
Therefore
  \[ \begin{aligned} \int_{-\pi}^{\pi}\sin{nx}\cos{mx}\,dx &= \int_{-\pi}^{\pi} \frac{e^{inx}-e^{-inx}}{2i} \cdot \frac{e^{imx}+e^{-imx}}{2} \,dx \\ &= \frac{1}{4i} \int_{-\pi}^{\pi} \left( e^{i(n+m)x} + e^{i(n-m)x} - e^{-i(n-m)x} - e^{-i(n+m)x} \right)\,dx \\ &= \left[-\frac{e^{i(n+m)x}}{4(n+m)} - \frac{e^{i(n-m)x}}{4(n-m)} - \frac{e^{-i(n-m)x}}{4(n-m)} - \frac{e^{-i(n+m)x}}{4(n+m)} \right]_{-\pi}^{\pi} \\ &= -\frac{1}{4(n+m)}\left[ e^{i(n+m)x} + e^{-i(n+m)x} \right]_{-\pi}^{\pi} -\frac{1}{4(n-m)}\left[ e^{i(n-m)x} + e^{-i(n-m)x} \right]_{-\pi}^{\pi} \\ \end{aligned} \]
But, assuming \(n \not = m\) (when \(n = m\) the two middle terms of the integrand cancel outright), anything of the form:
  \[ \begin{aligned} \left[e^{i\beta x} + e^{-i\beta x}\right]_{-\pi}^{\pi} &= (\cos{\beta\pi} + i\sin{\beta\pi}) + (\cos{(-\beta\pi)} + i\sin{(-\beta\pi)}) - \left[ (\cos{(-\beta\pi)} + i\sin{(-\beta\pi)}) + (\cos{\beta\pi} + i\sin{\beta\pi}) \right] \\ &= \cos{\beta\pi} + i\sin{\beta\pi} + \cos{\beta\pi} - i\sin{\beta\pi} - \left[ \cos{\beta\pi} - i\sin{\beta\pi} + \cos{\beta\pi} + i\sin{\beta\pi} \right] \\ &= 0 & ! \end{aligned} \]
Therefore,
  \[ \begin{aligned} \int_{-\pi}^{\pi}\sin{nx}\cos{mx}\,dx &= 0 \\ \end{aligned} \]
\(\Box\)

Who would have thought?


Saturday, January 04, 2014

 

Ramanujan Q-distribution

Ramanujan Q-distribution:
  \[ \begin{aligned} \frac{N!}{(N-k)!N^k} &= \exp(\color{teal}{\ln N!} - \color{blue}{\ln(N-k)!} -k \ln N) \\ \end{aligned} \]
As we saw earlier, by Stirling's approximation,
  \[ \begin{aligned} \ln N! &= (N + \frac{1}{2}) \ln{N} - N + \ln{\sqrt{2\pi}} + O(\frac{1}{N}) \\ \end{aligned} \]
Therefore,
  \[ \begin{aligned} \frac{N!}{(N-k)!N^k} &= \exp\left( \color{teal}{(N + \frac{1}{2}) \ln{N} - N + \ln{\sqrt{2\pi}} + O(\frac{1}{N})} - \color{blue}{\left((N - k + \frac{1}{2}) \ln{(N-k)} - (N - k) + \ln{\sqrt{2\pi}} + O(\frac{1}{N-k})\right)} - k \ln N \right) \\ &= \exp\left( (N + \frac{1}{2}) \ln{N} - N + \ln{\sqrt{2\pi}} - (N - k + \frac{1}{2}) \ln{(N-k)} + N - k - \ln{\sqrt{2\pi}} - k \ln N + O(\frac{k}{N(N-k)}) \right) \\ &= \exp\left( (N + \frac{1}{2}) \ln{N} - (N - k + \frac{1}{2}) \ln{(N-k)} - k - k \ln N + O(\frac{k}{N}) \right) \\ &= \exp\left( (N + \frac{1}{2}) \ln{N} - (N - k + \frac{1}{2}) \ln{N(1-\frac{k}{N})} - k - k \ln N + O(\frac{k}{N}) \right) \\ &= \exp\left( (N + \frac{1}{2}) \ln{N} - (N - k + \frac{1}{2}) \ln{N} \color{blue}{- (N - k + \frac{1}{2}) \ln{(1-\frac{k}{N})} - k} - k \ln N + O(\frac{k}{N}) \right) \\ &= \exp\left(\color{blue}{ -\, (N - k + \frac{1}{2}) \ln{(1-\frac{k}{N})} - k} + O(\frac{k}{N}) \right) \\ \end{aligned} \]
Using Taylor expansion,
  \[ \begin{aligned} \ln{(1-\frac{k}{N})} &= -\frac{k}{N} - \frac{k^2}{2N^2} + O(\frac{k^3}{N^3}) \\ \end{aligned} \]
So,
  \[ \begin{aligned} \frac{N!}{(N-k)!N^k} &= \exp\left(-\, \left(N - k + \frac{1}{2}\right) \left( -\frac{k}{N} - \frac{k^2}{2N^2} + O(\frac{k^3}{N^3}) \right) - k + O(\frac{k}{N}) \right) \\ &= \exp\left(\left(N - k + \frac{1}{2}\right) \left( \frac{k}{N} + \frac{k^2}{2N^2} + O(\frac{k^3}{N^3}) \right) - k + O(\frac{k}{N}) \right) \\ &= \exp\left(k + \color{blue}{\frac{k^2}{2N}} + \color{orange}{O(\frac{k^3}{N^2})} - \color{blue}{\frac{k^2}{N}} -\color{orange}{\frac{k^3}{2N^2}} + \color{teal}{O(\frac{k^4}{N^3})} + \frac{k}{2N} + \color{orange}{\frac{k^2}{4N^2}} + \color{teal}{O(\frac{k^3}{N^3})} - k + O(\frac{k}{N}) \right) \\ &= \exp\left(\color{blue}{-\frac{k^2}{2N}} + O(\frac{k}{N}) + \color{orange}{O(\frac{k^3}{N^2})} \right) \\ \frac{N!}{(N-k)!N^k} & \thicksim e^{-\frac{k^2}{2N}} \\ \end{aligned} \]
Applying this to the Birthday problem, let
  \[ \begin{aligned} N &= \text{ number of possible unique values} \\ k &= \text{ number of (uniform) random selections (from } N \text{ values)} \\ P &= \text{ probability of collision (among the } k \text{ selections)} \\ \end{aligned} \]
  \[ \begin{aligned} P &= 1 - \frac{N!}{(N-k)! N^k } \\ \therefore P &\thicksim 1 - e^{-\frac{k^2}{2N}} \\ \end{aligned} \]
\(\Box\)
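
Plugging in the classic birthday numbers makes a nice sanity check (a Python sketch; \(N = 365\) and the sample sizes are just the usual illustration):

    # Compare the exact collision probability with 1 - exp(-k^2 / (2N)).
    from math import exp

    def no_collision(N, k):
        prob = 1.0
        for i in range(k):
            prob *= (N - i) / N          # equals N! / ((N-k)! N^k)
        return prob

    N = 365
    for k in [10, 23, 40, 60]:
        print(f"k={k:>2}  exact={1 - no_collision(N, k):.4f}  approx={1 - exp(-k * k / (2 * N)):.4f}")

At \(k = 23\) both columns land right around the famous \(1/2\).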

Source: Analysis of Algorithm


Thursday, January 02, 2014

 

Euler-Maclaurin Summation

A classic formula for estimating sums via integrals. Suppose \(f\) is a function defined on \([1, \infty)\) whose derivatives exist and are absolutely integrable. Then
  \[ \begin{aligned} \color{blue}{ \sum_{1 \le k \le N} f(k) = \int_1^N f(x)\,dx + \frac{1}{2}f(N) + C_f + \frac{1}{12} f'(N) - \frac{1}{720}f^{(3)}(N) + \cdots } \qquad &\text{!} \\ \end{aligned} \]
Examples:
  \[ \begin{aligned} H_N &= \int_1^N \frac{1}{x}\,dx + \frac{1}{2} \cdot \frac{1}{N} + \gamma - \frac{1}{12} \cdot \frac{1}{N^2} + O(\frac{1}{N^4}) \\ &= \ln{N} + \frac{1}{2N} + \gamma - \frac{1}{12N^2} + O(\frac{1}{N^4}) \\ \end{aligned} \]
  \[ \begin{aligned} \ln{N!} &= \ln{N} + \ln{(N-1)} + \cdots \\ &= \int_1^N \ln{x}\,dx + \frac{1}{2} \ln{N} + \ln{\sqrt{2\pi}} - 1 + \frac{1}{12N} + O(\frac{1}{N^3}) \\ &= N\ln{N} - N + \ln{\sqrt{2\pi N}} + \frac{1}{12N} + O(\frac{1}{N^3}) & \text{Stirling's approximation, using } \textstyle\int_1^N \ln{x}\,dx = N\ln{N} - N + 1 \\ &= (N + \frac{1}{2}) \ln{N} - N + \ln{\sqrt{2\pi}} + O(\frac{1}{N}) \\ \end{aligned} \]
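
The harmonic-number example is easy to check to many digits (a Python sketch; the value of \(\gamma\) is hard-coded):

    # Compare the exact H_N with ln N + gamma + 1/(2N) - 1/(12 N^2).
    from math import log

    gamma = 0.5772156649015329           # Euler-Mascheroni constant
    for N in [10, 100, 1000]:
        H = sum(1 / k for k in range(1, N + 1))
        approx = log(N) + gamma + 1 / (2 * N) - 1 / (12 * N * N)
        print(f"N={N:>4}  H_N={H:.12f}  approx={approx:.12f}")

Already at \(N = 100\) the two agree to roughly ten decimal places.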

Source: Analysis of Algorithm


Wednesday, January 01, 2014

 

Asymptotics for linear recurrences

Assume that a rational generating function \(\displaystyle {f(z) \over g(z)} \), with \(f(z)\) and \(g(z)\) relatively prime and \(g(0)\ne0\), has a unique pole \(\displaystyle 1 \over \beta\) of smallest modulus (that is, \(\displaystyle g({1 \over \alpha})=0\) and \(\alpha\ne\beta\) implies that \(\displaystyle |{1 \over \alpha}|>|{1 \over \beta}|\), or \(|\alpha|<|\beta|\)).
Then, if the multiplicity of \(\displaystyle 1 \over \beta\) is \(\nu\), we have a powerful theorem for precise approximation
  \[ \begin{aligned} \color{red}{[z^n]{f(z)\over g(z)}\sim C\beta^{n}n^{\nu-1}}\quad\hbox{where}\quad \color{red}{C=\nu{(-\beta)^\nu f(1/\beta)\over g^{(\nu)}(1/\beta)}} \qquad &\text{!} \\ \end{aligned} \]
Example:
  \[ \begin{aligned} a_n &= 5a_{n-1} - 6a_{n-2} &\text{for } n \ge 2 \text{ with } a_0=0 \text{ and } a_1=1 \\ a_n &= 5a_{n-1} - 6a_{n-2} + \delta_{n1} &\text{make recurrence work for all } n \\ A(z) &= 5z A(z) - 6z^2 A(z) + z & \text{multiply by }z^n \text{ and sum on }n \\ A(z) &= \frac{z}{1 - 5z + 6z^2} \\ &= \frac{z}{(1-2z)(1-3z)} \\ \end{aligned} \]
The root of the denominator closest to zero is \(\displaystyle \frac{1}{3}\), which therefore gives the pole of smallest modulus, with multiplicity \(1\) (ie as a root, \(\displaystyle \frac{1}{3}\) appears only once).
  \[ \begin{aligned} {1 \over \beta} &= {1 \over 3} \\ \nu &= 1 \\ g'(z) &= -5 + 12z \\ g'({1 \over 3}) &= -5 + {12 \over 3} = -1 \\ C &= \frac{(-3)^1 f({1 \over 3})}{-1} \\ &= \frac{(-3) \cdot {1 \over 3}}{-1} = 1 \\ [z^n]A(z) &\equiv [z^n]\frac{z}{1 - 5z + 6z^2} \equiv a_n \thicksim 3^n \\ \end{aligned} \]
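
And indeed, iterating the recurrence directly shows \(a_n / 3^n \rightarrow 1\) (a tiny Python sketch):

    # a_n = 5 a_{n-1} - 6 a_{n-2} with a_0 = 0, a_1 = 1; the exact solution is 3^n - 2^n.
    a, b = 0, 1                      # a_0, a_1
    for n in range(2, 21):
        a, b = b, 5 * b - 6 * a      # advance the recurrence; b now holds a_n
        if n % 5 == 0:
            print(f"n={n:>2}  a_n={b}  a_n/3^n={b / 3**n:.6f}")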

Source: Analysis of Algorithm

