Sunday 25 August 2013

(Weak) Law of Large Numbers and the Central Limit Theorem

Following the definitions of convergence in probability and convergence in distribution in the previous post, we now state two fundamental theorems which make use of these concepts.

(Weak) Law of Large Numbers:
Let $\left\{X_i \right\}$ be a sequence of iid (independent and identically distributed) random variables with $\mathbb{E}\left[X_i\right] = \mu \lt \infty$ and $\text{Var}\left(X_i \right) = \sigma^2 \lt \infty$. Now define $$S_n:=\sum_{i=1}^{n} X_i$$ then $$\frac{S_n}{n}\rightarrow \mu$$ in probability as $n\rightarrow \infty$.

Proof:
Since the random variables are all iid, $$\mathbb{E}\left[\frac{S_n}{n} \right] = \frac{1}{n} \mathbb{E}\left[S_n\right] = \frac{n \mu}{n} = \mu$$ Similarly, using independence to write $\text{Var}(S_n) = n\sigma^2$, $$\text{Var}\left( \frac{S_n}{n} \right) = \frac{1}{n^2} \left(\mathbb{E}[S_n^2]- \mathbb{E}[S_n]^2 \right)=\frac{n \sigma^2}{n^2} = \frac{\sigma^2}{n}$$ We now call upon Chebyshev's Inequality, which states that if $X$ is a random variable with finite expectation $\mu$ and finite non-zero variance $\sigma^2$, then for any $k \gt 0$ $$P\left( \left|X-\mu \right| \ge k \sigma \right) \le \frac{1}{k^2}$$It bounds how much of the distribution can lie far from the mean in terms of the variance alone - this is completely distribution agnostic.
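As a quick aside, the bound is easy to check empirically. Below is a minimal Python sketch (my own illustration - NumPy, the exponential distribution and the sample size are all arbitrary choices) comparing the tail probability with Chebyshev's bound:

import numpy as np

rng = np.random.default_rng(0)
# Exponential(1) is far from normal, yet the bound still holds
samples = rng.exponential(scale=1.0, size=1_000_000)  # mean 1, variance 1
mu, sigma = samples.mean(), samples.std()

for k in (1.5, 2.0, 3.0):
    tail = np.mean(np.abs(samples - mu) >= k * sigma)
    print(f"k={k}: P(|X-mu| >= k*sigma) = {tail:.4f} <= 1/k^2 = {1/k**2:.4f}")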

We can now apply Chebyshev's Inequality to $X=\frac{S_n}{n}$, which has mean $\mu$ and variance $\frac{\sigma^2}{n}$, choosing $k$ so that $k \frac{\sigma}{\sqrt{n}} = \epsilon$:
$$P\left( \left|\frac{S_n}{n}-\mu \right| \ge \epsilon \right) \le \frac{\sigma^2}{n \epsilon^2}$$Hence
$$ P\left( \left|\frac{S_n}{n}-\mu \right| \ge \epsilon \right) \rightarrow 0$$ as $n \rightarrow \infty$, which is precisely the definition of convergence in probability.
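To see the law in action, here is a minimal Python sketch (my own illustration; the fair die is an arbitrary choice of iid variables with $\mu = 3.5$): the running mean $S_n/n$ settles towards $\mu$ as $n$ grows.

import numpy as np

rng = np.random.default_rng(42)
rolls = rng.integers(1, 7, size=100_000)  # iid fair die rolls, mu = 3.5
running_mean = np.cumsum(rolls) / np.arange(1, len(rolls) + 1)

for n in (10, 100, 10_000, 100_000):
    print(f"n={n}: S_n/n = {running_mean[n - 1]:.4f}")  # tends to 3.5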

Central Limit Theorem (CLT):
We shall state a slightly restricted version of the CLT, which is fine for illustrative purposes. Taking our $S_n$ from above, the CLT states:
Let $\left\{X_1, X_2, . . . \right\}$ be a sequence of i.i.d random variables with finite expectation $\mathbb{E}[X_i]  = \mu < \infty$ and finite non-zero variance $\text{Var}\left(X_i\right) = \sigma^2 < \infty$. Then

$$ P \left( \frac{S_n-n \mu}{\sigma \sqrt{n}} \leq x \right) \rightarrow \Phi(x)$$
as $n \rightarrow \infty$, where the convergence is in distribution and $\Phi(x)$ is the cumulative distribution function of a Standard Normal variable. We shall not prove the CLT, as only an understanding of what it says and how it can be applied is required.

This explains why Normal distributions are so common in modelling (and even in nature): under these mild conditions, and regardless of the underlying distribution of the $X_i$, the standardised sum above tends to the Standard Normal distribution for large enough $n$.
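A short simulation makes this concrete. In the Python sketch below (my own illustration, assuming NumPy and SciPy; the exponential summands are an arbitrary, deliberately skewed choice) the probabilities for the standardised sum line up with $\Phi(x)$:

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n, trials = 200, 20_000
mu, sigma = 1.0, 1.0  # Exponential(1) has mean 1 and variance 1

# Each row is one realisation of S_n = X_1 + ... + X_n
S_n = rng.exponential(scale=1.0, size=(trials, n)).sum(axis=1)
Z = (S_n - n * mu) / (sigma * np.sqrt(n))  # standardised sums

for x in (-1.0, 0.0, 1.0):
    print(f"x={x}: P(Z <= x) = {np.mean(Z <= x):.4f} vs Phi(x) = {norm.cdf(x):.4f}")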

Thursday 22 August 2013

Convergence of Random Variables

Before moving on to scaling a random walk into a Wiener process, we must understand the notion of convergence of random variables.

We won't cover the notions of uniform and pointwise convergence of a sequence of functions here - these are covered in any undergraduate analysis course. We will delve right into the convergence of random variables.

Convergence in probability:
Let $\left\{X_n\right\}$ be a sequence of random variables and $X$ be a random variable. $\left\{X_n\right\}$ is said to converge in probability to $X$ if $\forall \epsilon \gt 0$
$$P\left(\left| X_n-X \right| \ge \epsilon \right)\rightarrow 0$$ as $n \rightarrow \infty$.
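As a concrete (hypothetical) example: take $X_n = X + \frac{Z_n}{n}$ with standard normal noise $Z_n$, so the perturbation dies away as $n$ grows. A minimal Python sketch of my own:

import numpy as np

rng = np.random.default_rng(7)
eps, trials = 0.1, 100_000
X = rng.normal(size=trials)  # samples of the limiting variable X

for n in (1, 10, 100):
    X_n = X + rng.normal(size=trials) / n  # X_n = X + Z_n / n
    print(f"n={n}: P(|X_n - X| >= {eps}) = {np.mean(np.abs(X_n - X) >= eps):.4f}")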

Another notion of convergence of random variables is Convergence in distribution.

Convergence in distribution:
Given random variables $\{X_1, X_2, ...\}$ with cumulative distribution functions $F_n(x)$, and a random variable $X$ with cumulative distribution function $F(x)$, we say that $X_n$ converges in distribution to $X$ if
$$\lim_{n \rightarrow \infty} F_n(x) = F(x)$$ for every $x \in \mathbb{R}$ at which $F(x)$ is continuous.

Note that only the distributions of the variables matter - so the random variables themselves could in fact be defined on different probability spaces.
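For a concrete example (my own choice): let $X_n$ be uniform on the discrete set $\{\frac{1}{n}, \frac{2}{n}, \ldots, 1\}$. Then $F_n(x) \rightarrow x$ on $[0,1]$, so $X_n$ converges in distribution to a Uniform$(0,1)$ variable, even though each $X_n$ is discrete. A minimal Python sketch:

import numpy as np

x = 0.3  # evaluate the CDFs at a fixed point
for n in (2, 10, 100, 1_000):
    support = np.arange(1, n + 1) / n  # X_n uniform on {1/n, ..., 1}
    F_n = np.mean(support <= x)        # F_n(x) = floor(n*x)/n
    print(f"n={n}: F_n({x}) = {F_n:.4f} -> F({x}) = {x}")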

These may seem like fairly elementary definitions, but they hold great importance in applications of probability, namely through the (Weak) Law of Large Numbers and the Central Limit Theorem - which will be covered in the next post.

Monday 19 August 2013

I've returned with a random walk

So after two years absent from blogging, I am now back with a rejuvenated interest in mathematics - more specifically, financial mathematics. From now on, this blog will serve as my workbook as I endeavour to learn more about portfolio valuation, solving stochastic differential equations and other such fun. Such posts will not be rigorous in the mathematical sense, but will serve as a heuristic survey of the different tools available and how they are used in a financial context. The posts may also seem scattered content-wise, as I really haven't planned a syllabus - I'm just learning things as I come across them.

With that in mind, I will start with the very basics: for my first post in over two years, I give a quick overview of random walks on $\mathbb{Z}$.

Let $X_0 := 0$ be our starting position and for each $i \in \mathbb{N}$ let $$X_i:=X_{i-1}+dX_i$$ where $dX_i = \pm 1$ with equal probability - i.e. it is a Bernoulli-type random variable with $p=q=\frac{1}{2}$. Therefore our position at step $N$ is just the sum of the independent increments;
$$X_N= \sum_{i=1}^{N} dX_i$$
We can calculate the expectation: $$\mathbb{E}[X_N]= \mathbb{E}\left[ \sum_{i=1}^{N} dX_i\right]= \sum_{i=1}^{N} \mathbb{E} [ dX_i]=0$$ by linearity of expectation and the fact that $\mathbb{E}[dX_i] = \frac{1}{2}(+1) + \frac{1}{2}(-1) = 0$. Now, since
$$\text{Var}(X_N)=\mathbb{E}[X_N^2]- \mathbb{E}[X_N]^2$$ we can calculate $$\mathbb{E}[X_N^2] = \sum_{i=1}^N \mathbb{E}[dX_i^2] + \sum_{i \neq j} \mathbb{E}[dX_i]\mathbb{E}[dX_j] = \sum_{i=1}^N 1 = N$$ where the cross terms vanish by the independence of $dX_i$ and $dX_j$ for $i \neq j$, and $dX_i^2 = 1$ always. That is, the variance of $X_N$ is the squared jump size times the total number of steps, $N$.
Below I have simulated the aforementioned random walk for $N=100$ and $N=100000$ respectively.
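(The plots are easiest to reproduce directly; below is a minimal Python sketch of my own of the simulation, with matplotlib as an assumed plotting choice, plus a Monte Carlo sanity check of the mean and variance.)

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax, N in zip(axes, (100, 100_000)):
    steps = rng.choice([-1, 1], size=N)             # dX_i = +/-1, p = q = 1/2
    walk = np.concatenate(([0], np.cumsum(steps)))  # X_0 = 0
    ax.plot(walk)
    ax.set_title(f"Random walk, N = {N}")
plt.show()

# Sanity check: E[X_N] ~ 0 and Var(X_N) ~ N
walks = rng.choice([-1, 1], size=(10_000, 1_000)).sum(axis=1)
print(walks.mean(), walks.var())  # roughly 0 and 1000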

Next we'll look at a specific scaling of these random walks to obtain a Wiener Process and eventually talk about Stochastic Integrals and Ito's lemma.