
Monday, December 30, 2019

 

Plugboard of an Enigma Machine


The plugboard of an Enigma machine was by far the largest contributor to the size of its key space. If I am not mistaken, given an alphabet of \(n\) letters, the number of ways of connecting, and thereby swapping, \(m\) pairs of letters can be computed with this formula:

\(\qquad\displaystyle{n \choose 2m}\prod_{0\le k< m}(2m - (2k+1))\)

Or, equivalently, \(\displaystyle{n \choose 2m}\frac{(2m)!}{2^{m}\cdot m!}\). To see where the product in the first form comes from: after choosing which \(2m\) letters are plugged, the first of them can be paired with any of the remaining \(2m-1\) letters, the next unpaired letter with any of the remaining \(2m-3\), and so on.

During the Second World War, the Enigma machines initially in use had a configuration of \(n=26\) and \(m=6\). So the number of ways of swapping pairs of letters was:

\(\qquad\displaystyle{26 \choose 12}\times 11\times 9\times 7\times 5\times 3\times 1 = 100,391,791,500\)
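As a sanity check, the second form of the formula gives the same count:

\(\qquad\displaystyle{26 \choose 12}\cdot\frac{12!}{2^{6}\cdot 6!} = 9,657,700 \times 10,395 = 100,391,791,500\)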

But why did Arthur Scherbius, the inventor of the Enigma machine, choose 6 as the number of pairs for the plugboard?

Suppose the plugboard could fit any number of cables (from 1 to 13). What would be the relative strength of swapping 6 pairs of letters compared to, say, 11 pairs? To find out, we can plug different values into the formula above. For example, using SageMath:

n = 26        # size of the alphabet
m = int(n/2)  # maximum number of cables: 13

# for each i, count the ways of connecting i pairs of letters
for i in (1..m):
    print(i, '\t', binomial(n, 2*i) * prod((2*i - (2*j+1)) for j in (0..(i-1))))

 Pairs   Ways of connecting
     1                  325
     2               44,850
     3            3,453,450
     4          164,038,875
     5        5,019,589,575
     6      100,391,791,500
     7    1,305,093,289,500
     8   10,767,019,638,375
     9   53,835,098,191,875
    10  150,738,274,937,250
    11  205,552,193,096,250
    12  102,776,096,548,125
    13    7,905,853,580,625

As we can see, had the Enigma machine been equipped with a plugboard with 11 cables instead of 6, the key space would have been over two thousand times larger. On the other hand, it's a waste of resources to have more than 11 cables, since the count actually drops at 12 and 13 :)

Tuesday, April 23, 2019

 

Evaluating a higher moment of a Gaussian integral


How to evaluate \[\begin{aligned} \int_{-\infty}^\infty e^{-s^2}s^4 ds & & ? \end{aligned}\]

Solution:

This clearly looks related to the Gaussian distribution. So we can start with \[\begin{aligned} \int_{-\infty}^\infty e^{-x^2} dx &= \sqrt{\pi} \end{aligned}\] Let \(x = \sqrt{\lambda}\, s\), so \(dx = \sqrt{\lambda}\, ds\) \[\begin{aligned} \int_{-\infty}^\infty e^{-\lambda s^2} \sqrt{\lambda}\, ds &= \sqrt{\pi} & \text{for any } \lambda > 0 \\ \int_{-\infty}^\infty e^{-\lambda s^2}\, ds &= \frac{\sqrt{\pi}}{\sqrt{\lambda}} \\ \end{aligned}\] Differentiating with respect to \(\lambda\) twice, \[\begin{aligned} -\int_{-\infty}^\infty e^{-\lambda s^2} s^2\, ds &= -\frac{1}{2}\cdot\frac{\sqrt{\pi}}{\lambda^{3/2}} \\ \int_{-\infty}^\infty e^{-\lambda s^2} s^4\, ds &= \frac{3}{4}\cdot\frac{\sqrt{\pi}}{\lambda^{5/2}} \end{aligned}\] Setting \(\lambda = 1\), \[\begin{aligned} \int_{-\infty}^\infty e^{-s^2} s^4\, ds &= \frac{3}{4}\sqrt{\pi} \end{aligned}\] \(\Box\)
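A quick sanity check with SageMath's symbolic integrator (assuming a Sage session, as in the plugboard post above):

var('s')
integrate(exp(-s^2) * s^4, s, -oo, oo)    # returns 3/4*sqrt(pi)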

Friday, April 19, 2019

 

Double Integration by Parts


A fun practice of double integration by parts. Evaluate: \[\begin{aligned} -\int_R \frac{\partial^2}{\partial y^2}\left(\frac{t}{t^2 + y^2}\right)\cos y\,dy \end{aligned}\]

Solution:

\[\begin{aligned} -\int_R \frac{\partial^2}{\partial y^2}\left(\frac{t}{t^2 + y^2}\right)\cos y\,dy &= -\int_R \frac{\partial}{\partial y}\left(\frac{\partial}{\partial y}\left(\frac{t}{t^2 + y^2}\right)\right)\cos y\,dy \\ &= -\frac{\partial}{\partial y}\left(\frac{t}{t^2 + y^2}\right)\cos y\,\bigg\vert_{y=-\infty}^{\infty} - \int_R \frac{\partial}{\partial y}\left(\frac{t}{t^2 + y^2}\right) \sin y\,dy \\ &= -\int_R \frac{\partial}{\partial y}\left(\frac{t}{t^2 + y^2}\right) \sin y\,dy \\ &= -\left(\frac{t}{t^2 + y^2}\right) \sin y \bigg\vert_{y=-\infty}^{\infty} + \int_R \left(\frac{t}{t^2 + y^2}\right) \cos y\,dy \\ &= \int_R \frac{t\cos y}{t^2 + y^2} \,dy \end{aligned}\] Both boundary terms vanish, since \(\frac{t}{t^2+y^2}\) and its \(y\)-derivative tend to \(0\) as \(y \to \pm\infty\) while \(\sin y\) and \(\cos y\) stay bounded. \(\Box\)
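As a quick numeric spot check in SageMath, take \(t = 1\) and truncate the improper integrals at \(\pm 100\) (an arbitrary cutoff for illustration; the integrands decay fast enough for this to be harmless):

var('y')
f = 1/(1 + y^2)    # this is t/(t^2 + y^2) at t = 1
lhs = numerical_integral(-diff(f, y, 2) * cos(y), -100, 100)[0]
rhs = numerical_integral(f * cos(y), -100, 100)[0]
print(lhs, rhs)    # both come out around 1.1557 (in fact pi/e)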

Sunday, April 14, 2019

 

Euler's Factorial Integral


Ever wonder how Euler's Factorial Integral \[\begin{aligned} \int_0^\infty x^n e^{-x} dx = n! \end{aligned}\] can be obtained from \(\displaystyle \int_0^\infty e^{-x} dx\)?

A great practice of repeated integration by parts: \[\begin{aligned} \int_0^\infty e^{-x} dx = -e^{-x}\big\vert_0^\infty &= 1 \\ \int_0^\infty e^{-x} dx = \int_0^\infty e^{-x} x' dx = x e^{-x}\big\vert_0^\infty + \int_0^\infty x e^{-x} dx &= \int_0^\infty x e^{-x} dx \\ \int_0^\infty (\frac{x^2}{2})' e^{-x} dx = \frac{x^2}{2} e^{-x}\big\vert_0^\infty + \frac{1}{2} \int_0^\infty x^2 e^{-x} dx &= \frac{1}{2} \int_0^\infty x^2 e^{-x} dx \\ \frac{1}{2} \int_0^\infty (\frac{x^3}{3})' e^{-x} dx = \frac{1}{2}\cdot \frac{x^3}{3} e^{-x}\big\vert_0^\infty + \frac{1}{2\cdot 3} \int_0^\infty x^3 e^{-x} dx &= \frac{1}{3!} \int_0^\infty x^3 e^{-x} dx \\ &\vdots \\ \frac{1}{(n-1)!} \int_0^\infty (\frac{x^n}{n})' e^{-x} dx = \frac{1}{(n-1)!} \frac{x^n}{n} e^{-x}\big\vert_0^\infty + \frac{1}{(n-1)!} \frac{1}{n} \int_0^\infty x^n e^{-x} dx &= \frac{1}{n!} \int_0^\infty x^n e^{-x} dx \\ \therefore \frac{1}{n!} \int_0^\infty x^n e^{-x} dx &= 1 \\ \int_0^\infty x^n e^{-x} dx &= n! \end{aligned}\] \(\Box\)
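Again, this is easy to spot check for small \(n\) in SageMath:

var('x')
[integrate(x^n * exp(-x), x, 0, oo) for n in range(6)]    # [1, 1, 2, 6, 24, 120]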

But wait. We can do better than this!

See Differentiating under the integral.


Friday, December 28, 2018

 

How to make LaTeX editing work in VS Code on macOS

1. Download and install TeXstudio (texstudio-2.12.14-osx.dmg)
2. Download and install MacTeX (mactex-20180417.pkg)
3. Install the LaTeX Workshop extension in Visual Studio Code

Open any .tex document. The preview in VS Code may not work at first, but once I got the document compiled and displayed (as a PDF) in TeXstudio, the preview in VS Code started to work, and the PDF gets updated every time the LaTeX document is saved.
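If the preview stays blank, a minimal throwaway document (say, test.tex — just an example file name) is handy for checking that the toolchain itself works:

\documentclass{article}
\begin{document}
Hello, \LaTeX!
\end{document}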

Tuesday, December 11, 2018

 

Why does the way the standard normal table is used work?

Ever wonder why the way we normalize a value before looking it up in the standard normal table works?

For example, suppose we know a random variable \(X\) has a normal distribution with mean \(\mu\) and standard deviation \(\sigma\), and we want to find \(\mathbb{P}(X \le L)\).

As usual, the procedure is to normalize the value \(L\) into \(\displaystyle M = \frac{L - \mu}{\sigma}\) and then look up \(M\) in the standard normal table. This works. But why?

Technically the reason is simple. It's just a change of variable in the underlying integral.

How? Recall that \(\mathbb{P}(X \le L)\) is the cumulative distribution function of the normal distribution.

In particular, by definition \[\begin{aligned} \mathbb{P}(X \le L) &= \frac{1}{\sqrt{2\pi}\sigma} \int_{-\infty}^{L} e^{-\frac{(x-\mu)^2}{2\sigma^2}} \, dx \\ \end{aligned}\] Let \(\displaystyle y = \frac{x-\mu}{\sigma}\) and \(\displaystyle M = \frac{L-\mu}{\sigma}\), so that \[\begin{aligned} \frac{dy}{dx} &= \frac{1}{\sigma} \\ dy &= \frac{dx}{\sigma} \\ \end{aligned}\] Substituting \(y\) for the variable of integration (which changes the upper bound from \(L\) to \(M\)), we turn the above integral into: \[\begin{aligned} \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{M} e^{-\frac{y^2}{2}} \, dy \\ \end{aligned}\] But that's exactly \(\mathbb{P}(Y \le M)\) where \(Y\) has the standard normal distribution! That's why it works.
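To see it numerically in SageMath, with hypothetical values \(\mu = 5\), \(\sigma = 2\), and \(L = 6.5\) (so \(M = 0.75\)):

var('x')
mu, sigma, L = 5, 2, 6.5
M = (L - mu) / sigma
lhs = numerical_integral(exp(-(x - mu)^2 / (2*sigma^2)) / (sqrt(2*pi)*sigma), -oo, L)[0]
rhs = numerical_integral(exp(-x^2 / 2) / sqrt(2*pi), -oo, M)[0]
print(lhs, rhs)    # both come out around 0.7734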


Tuesday, November 27, 2018

 

Angular Frequency Refresher

MITx 18.031x, Introduction to Differential Equations > Unit 3 > 6. Sinusoidal Functions > 7. Graphing sinusoidal functions.

Prof. Poonen's 18.03 Lecture Notes, Spring 2018.

