
Sunday, August 25, 2013

 

Differentiation

Newton's method

Used to approximate the root of a function. The update rule:
  \begin{aligned} x_{n+1} &= x_n - \frac{f(x_n)}{f'(x_n)} \\ \end{aligned}
See more details at PennCalcWiki.
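A minimal numerical sketch of the update rule (Python; the function names are illustrative, not from the source):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n) / f'(x_n) until the step is tiny."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Approximate sqrt(2) as the positive root of f(x) = x^2 - 2.
print(newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0))  # ~1.4142135623730951
```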


Monday, August 19, 2013

 

Discretization

Power Series

Given a sequence a_n, the power series associated with a_n is the infinite sum
  \begin{aligned} f(x) &= \sum_{n=0}^\infty a_n x^n = a_0 + a_1 x + a_2 x^2 + \cdots \\ \end{aligned}

Interval and radius of convergence

Given a power series, there exists some 0 \le R \le \infty such that the series converges absolutely for \lvert x \rvert < R and diverges for \lvert x \rvert > R .

The interval of convergence is the interval (-R, R), possibly with the endpoints included (they need to be individually checked in general). The radius of convergence is half the length of the interval of convergence.

To find the interval of convergence, use the ratio test or root test to find the interval where the series converges absolutely, and then check the endpoints of the interval. For example, with the ratio test:
  \begin{aligned} \rho = \lim_{n \rightarrow \infty} \frac{\lvert a_{n+1} x^{n+1} \rvert}{\lvert a_n x^n \rvert} = \lvert x \rvert \lim_{n \rightarrow \infty} \frac{\lvert a_{n+1} \rvert}{\lvert a_n \rvert} < 1 \hskip1em \Longleftrightarrow \hskip1em \lvert x \rvert < \lim_{n \rightarrow \infty} \frac{\lvert a_n \rvert}{\lvert a_{n+1} \rvert} = R \\ \end{aligned}

Shifted power series

  \begin{aligned} f(x) &= a_0 + a_1 (x - c) + a_2 (x - c)^2 + \cdots \\ \lvert x - c\rvert &< \lim_{n \rightarrow \infty} \frac{\lvert a_n \rvert}{\lvert a_{n+1} \rvert} = R \end{aligned}
Interval of convergence: (c - R, c + R). Such a power series is said to be centered at c, since the interval of convergence for the series will be centered at c.

Taylor series convergence

For what values of x does a function's Taylor series converge?

The series:
  \begin{aligned} \sum_{n=0}^\infty a_n x^n &= \sum_{n=0}^\infty \frac{1}{n!} f^{(n)}(0) \, x^n \\ \end{aligned}
converges absolutely for \lvert x \rvert < R , where
  \begin{aligned} R = \lim_{n \rightarrow \infty} \frac{\lvert a_n \rvert}{\lvert a_{n+1} \rvert} = \lim_{n \rightarrow \infty} \frac{\lvert f^{(n)}(0) \rvert }{\lvert f^{(n+1)}(0) \rvert} \cdot \frac{(n+1)!}{n!} = \lim_{n \rightarrow \infty} (n+1) \cdot \frac{\lvert f^{(n)}(0) \rvert }{\lvert f^{(n+1)}(0) \rvert} \\ \end{aligned}
Within the interval of convergence, differentiation and integration of a power series are nice, in that they can be done term by term.

When a Taylor series converges, it does not always converge to the function. But when it does, the function is referred to as real-analytic. That is, a function f is real-analytic at a if the Taylor series for f converges to f near a.
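For example, e^x is real-analytic everywhere (its radius of convergence is infinite), which can be checked numerically against its partial sums (a sketch; the helper name is illustrative):

```python
import math

def taylor_exp(x, N):
    """N-th Taylor partial sum of e^x about 0: sum_{n=0}^N x^n / n!."""
    return sum(x ** n / math.factorial(n) for n in range(N + 1))

# Partial sums converge to math.exp(x) for every x.
print(abs(taylor_exp(1.0, 20) - math.e) < 1e-12)  # True
```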

Error defined

Given a convergent series,
  \begin{aligned} E_k &= s - s_k \\ &= \sum_{n=0}^\infty a_n - \sum_{n=0}^k a_n \\ &= \sum_{n=k+1}^\infty a_n \\ \end{aligned}

Alternating series error bound

  \begin{aligned} \lvert E_k \rvert \le \lvert a_{k+1} \rvert \\ \end{aligned}
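A numerical check on the alternating harmonic series \sum_{n=0}^\infty (-1)^n/(n+1) = \ln 2 (a sketch; the indexing matches the error definition above):

```python
import math

# Alternating harmonic series: a_n = (-1)^n / (n + 1), sum = ln 2.
k = 99
s_k = sum((-1) ** n / (n + 1) for n in range(k + 1))  # partial sum s_k
error = abs(math.log(2) - s_k)                         # |E_k|
bound = 1 / (k + 2)                                    # |a_{k+1}|
print(error <= bound)  # True
```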

Integral test for error bounds

  \begin{aligned} \int_{m+1}^\infty f(x)\,dx < \sum_{n=m+1}^\infty f(n) = E_m < \int_{m}^\infty f(x)\,dx \\ \end{aligned}
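For f(x) = 1/x^2 the tail is known exactly, since \sum_{n=1}^\infty 1/n^2 = \pi^2/6, so both bounds can be verified directly (a sketch):

```python
import math

# E_m = tail after m terms of sum 1/n^2; int_m^inf dx/x^2 = 1/m.
m = 10
tail = math.pi ** 2 / 6 - sum(1 / n ** 2 for n in range(1, m + 1))
print(1 / (m + 1) < tail < 1 / m)  # True
```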

Taylor approximations

  \begin{aligned} f(x) &\approx \sum_{n=0}^N \frac{f^{(n)}(0)}{n!}\,x^n \\ f(x) &= \sum_{n=0}^N \frac{f^{(n)}(0)}{n!}\,x^n + E_N(x) \\ \end{aligned}

Weak bound

  \begin{aligned} E_N(x) \le C\,x^{N+1} = O(x^{N+1}) \end{aligned}
for some constant C and x sufficiently close to 0.

Taylor Remainder Theorem

  \begin{aligned} E_N(x) = \frac{f^{(N+1)}(t)}{(N+1)!}\,x^{N+1} \end{aligned}
for some \lvert t \rvert \le \lvert x \rvert , which is not directly useful since there is no way to determine t.

Taylor error bound

  \begin{aligned} \lvert E_N(x) \rvert \le \frac{\max \lvert f^{(N+1)}(t) \rvert}{(N+1)!}\,\lvert x \rvert^{N+1} \end{aligned}
where the maximum is taken over \lvert t \rvert \le \lvert x \rvert .
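For f = \sin every derivative is bounded by 1, so the bound simplifies to \lvert x \rvert^{N+1}/(N+1)!; a quick check (sketch) with the degree-5 Taylor polynomial:

```python
import math

x, N = 1.0, 5
poly = x - x ** 3 / 6 + x ** 5 / 120               # degree-5 Taylor polynomial of sin
error = abs(math.sin(x) - poly)                     # |E_N(x)|
bound = abs(x) ** (N + 1) / math.factorial(N + 1)   # max |f^(N+1)| = 1 for sin
print(error <= bound)  # True
```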

Saturday, August 10, 2013

 

Series convergence tests

Partial sum

\displaystyle \sum_{n=1}^T a_n is called the Tth partial sum of the series \displaystyle \sum_{n=1}^\infty a_n. A series converges if the sequence of partial sums converges.

nth term test

If \displaystyle \lim_{n \rightarrow \infty} a_n \ne 0, then the series \displaystyle \sum_{n=0}^\infty a_n diverges. This can be used to test for divergence (but not convergence); i.e., the converse is not always true, but the contrapositive is.

Integral test

  \begin{aligned} \sum_{n=m}^\infty f(n) \hskip1em \text{converges} \hskip2em \Leftrightarrow \hskip2em \int_{m}^\infty f(x) \, dx \hskip1em \text{converges} \\ \end{aligned}
Condition: f(x) is a positive decreasing function.

p-series test

  \begin{aligned} \sum_{n=1}^\infty \frac{1}{n^p} \hskip1em \text{converges if and only if } p > 1 \\ \end{aligned}

Comparison test

Let a_n and b_n be positive sequences such that b_n > a_n for all n. It follows that if \sum b_n converges, then so does \sum a_n, and if \sum a_n diverges, then so does \sum b_n.

Limit test

Let a_n and b_n be positive sequences. If \displaystyle \lim_{n \rightarrow \infty} \frac{a_n}{b_n} = L and 0 < L < \infty, then the series \sum a_n and \sum b_n either both converge or both diverge.

Root test

Given a series \sum a_n with all a_n > 0, let \displaystyle \rho = \lim_{n \rightarrow \infty} \sqrt[n]{a_n} . If \rho < 1, then \sum a_n converges. If \rho > 1, then \sum a_n diverges. If \rho = 1, then the test is inconclusive. Based on the idea of "eventual geometric series".

Ratio test

Given a series \sum a_n with all a_n > 0, let \displaystyle \rho = \lim_{n \rightarrow \infty} \frac{a_{n+1}}{a_n} . If \rho < 1, then \sum a_n converges. If \rho > 1, then \sum a_n diverges. If \rho = 1, then the test is inconclusive. It works best when an exponential or factorial factor is involved; not so with ratios of polynomials.
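A numerical estimate of \rho for a_n = 2^n / n! (a sketch; the factorial makes the ratio test decisive):

```python
import math

def a(n):
    return 2 ** n / math.factorial(n)

# a_{n+1} / a_n = 2 / (n + 1) exactly, so rho -> 0 < 1 and the series converges.
print(a(51) / a(50))  # ~0.0392
```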

Summary of methods for a positive series

Given a series \sum a_n, where a_n > 0 for all n:

  1. Do the terms go to 0? If \displaystyle \lim_{n \rightarrow \infty} a_n \ne 0 , then by the nth term test, the series diverges. (If \displaystyle \lim_{n \rightarrow \infty} a_n = 0 , then the test is inconclusive.)
  2. Does a_n involve exponential functions like c^n where c is constant? Does a_n involve a factorial? Then the ratio test should be used.
  3. Is a_n of the form (b_n)^n for some sequence b_n? Then use the root test.
  4. Does ignoring lower-order terms make a_n look like a familiar series (e.g. p-series or geometric series)? Then use the comparison test or the limit test.
  5. Does the sequence a_n look easy to integrate? Then use the integral test.

Alternating Series test

An alternating series \sum (-1)^n b_n with decreasing terms converges if and only if \displaystyle \lim_{n \rightarrow \infty} b_n = 0 . (Note when to examine the asymptotic value of the nth term vs. examining the p-value of an integral.)

Conditional vs Absolute Convergence

\displaystyle \sum a_n converges absolutely if \displaystyle \sum \lvert a_n \rvert converges. \displaystyle \sum a_n converges conditionally if \displaystyle \sum \lvert a_n \rvert diverges but \displaystyle \sum a_n converges. In other words, a series is conditionally convergent if it is convergent but not absolutely convergent.

Absolute Convergence test

If a series converges absolutely, then it converges. (The converse does not hold: a conditionally convergent series converges even though the series of its absolute values diverges.)

Wednesday, August 07, 2013

 

Discrete Calculus

Product Rule

  \begin{aligned} \Delta (ab)_n &= a_n \Delta b_n + b_n \Delta a_n + \Delta a_n \Delta b_n \\ \triangledown (ab)_n &= a_n \triangledown b_n + b_n \triangledown a_n - \triangledown a_n \triangledown b_n \\ \\ \Delta (uv) &= u \Delta v + Ev \Delta u \\ \end{aligned}
If a is a polynomial of degree p, then
  \begin{aligned} \Delta^{p+1} a &= (0) \\ \end{aligned}

Falling Power Rule

  \begin{aligned} n^{\underline k} &= n (n-1) \cdots (n-k+1), \hskip4em n^\underline{0} = 1 \\ \Delta n^{\underline k} &= k \, n ^{\underline{k-1}}\\ n^{- \underline k} &= \frac{1}{(n+1) (n+2) \cdots (n+k)} \\ \end{aligned}
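A direct check of \Delta n^{\underline k} = k \, n^{\underline{k-1}} at a sample point (sketch):

```python
def falling(n, k):
    """Falling power: n (n-1) ... (n-k+1), with the empty product giving 1."""
    out = 1
    for i in range(k):
        out *= n - i
    return out

n, k = 7, 3
lhs = falling(n + 1, k) - falling(n, k)   # (Delta n^k)_n = 336 - 210
rhs = k * falling(n, k - 1)                # 3 * 42
print(lhs == rhs)  # True
```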

Sequence operators

  \begin{aligned} (Ia)_n &= a_n \\ (Ea)_n &= a_{n+1} \\ (\Delta a)_n &= a_{n+1} - a_n, \hskip4em \Delta = E - I \\ (\triangledown a)_n &= a_n - a_{n-1}, \hskip4em \triangledown = I - E^{-1} \\ (\triangledown \Delta a)_n &= a_{n+1} - 2a_n + a_{n-1}, \hskip1em \text{second central difference} \\ \triangledown \Delta \, a &= \Delta \triangledown a \\ \end{aligned}
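The forward and backward differences are easy to realize on a finite sequence (a sketch); note the vanishing third difference of a degree-2 polynomial, matching \Delta^{p+1} a = 0 above:

```python
def delta(a):
    """(Delta a)_n = a_{n+1} - a_n (forward difference)."""
    return [a[n + 1] - a[n] for n in range(len(a) - 1)]

def nabla(a):
    """(nabla a)_n = a_n - a_{n-1} (backward difference)."""
    return [a[n] - a[n - 1] for n in range(1, len(a))]

squares = [n * n for n in range(6)]      # 0, 1, 4, 9, 16, 25
print(delta(squares))                     # [1, 3, 5, 7, 9]
print(nabla(squares))                     # same values, indexed from n = 1
print(delta(delta(delta(squares))))       # [0, 0, 0]: Delta^3 kills a degree-2 polynomial
```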

Higher Derivatives

  \begin{aligned} \Delta^k &= (E - I)^k = \sum_{i=0}^k (-1)^{k-i} {k \choose i} E^i \\ \end{aligned}

Anti-difference

  \begin{aligned} \Delta^{-1} &= (E - I)^{-1} = -(I - E)^{-1} = -(I + E + E^2 + E^3 + \cdots) + C \\ \end{aligned}
so long as the sequence a_n is eventually 0.

Discrete Fundamental Theorem of Integral Calculus

  \begin{aligned} \sum_{n=A}^B \Delta u &= u \, \bigg \rvert_{n=A}^{B+1} \\ \sum_{n=A}^B \triangledown u &= u \, \bigg \rvert_{n=A-1}^{B} \\ \end{aligned}
Alternatively,
  \begin{aligned} \sum_{n=A}^B u &= \Delta^{-1} u \, \bigg \rvert_{n=A}^{B+1} \\ \sum_{n=A}^B u &= \triangledown^{-1} u \, \bigg \rvert_{n=A-1}^{B} \\ \end{aligned}
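The first identity is just a telescoping sum, which a quick check confirms (sketch):

```python
# Discrete FTC: sum_{n=A}^{B} (Delta u)_n = u_{B+1} - u_A.
u = [n ** 3 for n in range(12)]
A, B = 2, 9
lhs = sum(u[n + 1] - u[n] for n in range(A, B + 1))
print(lhs == u[B + 1] - u[A])  # True
```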

Integration by parts

  \begin{aligned} \Delta (uv) &= u \Delta v + Ev \Delta u \\ \sum_{n=A}^B u \Delta v &= uv \, \bigg \rvert_{n=A}^{B+1} - \sum_{n=A}^B Ev \Delta u \\ \end{aligned}

Differential equations

Linear first-order recurrence relation:
  \begin{aligned} u_{n+1} &= \lambda u_n \\ E \, u &= \lambda u \\ (E - \lambda \, I) \, u &= 0 \\ \implies u_n &= u_0 \, \lambda^n \\ \end{aligned}
Linear first-order difference equation:
  \begin{aligned} \Delta u &= \lambda u \\ (E - I) \, u &= \lambda u \\ (E - (\lambda + 1) \, I) \, u &= 0 \\ \implies u_n &= u_0 \, (\lambda + 1)^n \\ \end{aligned}
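Iterating the difference equation reproduces the closed form u_n = u_0 (\lambda + 1)^n (sketch):

```python
lam, u0, N = 0.5, 2.0, 10
u = u0
for _ in range(N):
    u = u + lam * u            # Delta u = lambda * u, i.e. u_{n+1} = (1 + lambda) u_n
print(abs(u - u0 * (lam + 1) ** N) < 1e-9)  # True
```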

Numerical ODEs

Many differential equations of the form \displaystyle \frac{dx}{dt} = f(x,t) cannot be solved exactly. Hence the use of numerical methods to approximate a solution.

Euler's method

Uses a difference equation to approximate the solution of an initial value problem; essentially, it is a first-order Taylor approximation applied as a recurrence relation. Given \displaystyle \frac{dx}{dt} = f(x,t) and initial value x_0 = x(t_0) , Euler's method approximates x(t_*) for some t_* > t_0 .
  \begin{aligned} x_{n+1} = x_n + h\,f(x_n, t_n) \\ t_{n+1} = t_n + h \\ h = \frac{t_* - t_0}{N} \\ \end{aligned}
where N is the number of intervals.
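A sketch of the scheme for dx/dt = x with x(0) = 1, whose exact solution is e^t:

```python
import math

def euler(f, x0, t0, t_star, N):
    """Euler's method: advance x by h * f(x, t) over N equal steps of size h."""
    h = (t_star - t0) / N
    x, t = x0, t0
    for _ in range(N):
        x, t = x + h * f(x, t), t + h
    return x

approx = euler(lambda x, t: x, 1.0, 0.0, 1.0, 1000)
print(approx)  # ~2.7169, vs math.e ~2.7183; the error shrinks as N grows
```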

Numerical Integration

How can the definite integral \displaystyle \int_a^b f(x)\,dx be approximated via finite sums that approximate the area under the curve? Common techniques: Riemann sums, the trapezoid rule, and Simpson's rule.

Left Riemann Sum

  \begin{aligned} \int_a^b f(x)\,dx \approx \sum_{n=0}^{N-1} f_n \cdot (\Delta x)_n = \sum_{n=0}^{N-1} f_n \cdot (x_{n+1} - x_n) \\ \end{aligned}

Right Riemann Sum

  \begin{aligned} \int_a^b f(x)\,dx \approx \sum_{n=1}^{N} f_n \cdot (\triangledown x)_n = \sum_{n=1}^{N} f_n \cdot (x_n - x_{n-1}) \\ \end{aligned}

Trapezoid Rule

An improvement over the left and right Riemann sums by averaging them.
  \begin{aligned} \int_a^b f(x)\,dx \approx \sum_{n=0}^{N-1} \frac{1}{2} (f_n + f_{n+1}) \cdot (\Delta x)_n = \sum_{n=0}^{N-1} \frac{1}{2} (f_n + f_{n+1}) \cdot (x_{n+1} - x_n) \\ \end{aligned}
If sample points are evenly spaced:
  \begin{aligned} \int_a^b f(x)\,dx \approx h \, \bigg(\frac{1}{2}(f_0 + f_N) + \sum_{n=1}^{N-1} f_n \bigg) \\ \end{aligned}
where \displaystyle h = \frac{b-a}{N}
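The evenly spaced form is straightforward to implement (a sketch), here checked on \displaystyle \int_0^\pi \sin x \, dx = 2:

```python
import math

def trapezoid(f, a, b, N):
    """Evenly spaced trapezoid rule with h = (b - a) / N."""
    h = (b - a) / N
    interior = sum(f(a + n * h) for n in range(1, N))
    return h * (0.5 * (f(a) + f(b)) + interior)

print(trapezoid(math.sin, 0.0, math.pi, 100))  # ~1.99984
```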


Sunday, August 04, 2013

 

i^i

  \begin{aligned} i &= 0 + i = \cos(\frac{\pi}{2}) + i \sin(\frac{\pi}{2})\\ &= e^{\frac{i \pi}{2}} \end{aligned}
Thanks to Euler! Therefore,
  \begin{aligned} i^i &= (e^{\frac{i \pi}{2}})^i = e^{\frac{i^2 \pi}{2}} \\ &= e^{\frac{-\pi}{2}} = \frac{1}{e^{\frac{\pi}{2}}} \\ &\approx 0.2078796 \end{aligned}
Furthermore, there is more than one value: i^i can be derived with multiples of 2 \pi added to the angle in the periodic functions. The solution set can therefore be represented by a sequence, with
  \begin{aligned} a_n = e^{-(\frac{\pi}{2} + 2\,n\,\pi)} \end{aligned}
where n is any integer.

For example, for n = -1, 0, 1, \ldots , we get a_n \approx 111.3178,\ 0.2078796,\ 0.0003882032, \ldots
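Python's complex arithmetic gives the principal value directly (a sketch):

```python
import math

print((1j ** 1j).real)                           # ~0.2078796, the principal value
print(math.exp(-math.pi / 2))                    # the same number, from the derivation above
print(math.exp(-(math.pi / 2 - 2 * math.pi)))    # n = -1 branch: ~111.3178
```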


Thursday, August 01, 2013

 

Binomial theorem for rational exponents

\displaystyle (1+x)^a = 1 + ax + a(a-1)\frac{x^2}{2!} + a(a-1)(a-2)\frac{x^3}{3!} + \cdots

where a can be any rational number, including fractions and negative numbers!
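The partial sums can be checked numerically; e.g. a = 1/2 gives \sqrt{1+x} (a sketch, valid for \lvert x \rvert < 1; the helper name is illustrative):

```python
import math

def binom_series(a, x, N):
    """Partial sum of (1 + x)^a through the x^N term."""
    total, coeff = 1.0, 1.0
    for n in range(1, N + 1):
        coeff *= (a - (n - 1)) / n    # build a(a-1)...(a-n+1)/n! incrementally
        total += coeff * x ** n
    return total

print(binom_series(0.5, 0.2, 30))  # ~1.0954451, i.e. sqrt(1.2)
```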

