Hi all,
I am trying to understand a heuristic derivation of Ito's lemma ([imath]B[/imath] follows Brownian motion):
[math]df(t,B) = \frac{\partial f}{\partial t}dt + \frac{\partial f}{\partial B}dB + \frac{1}{2} \frac{\partial^2 f}{\partial B^2}dB^2,[/math]
but I do not understand why the derivation says that [imath]dB^2[/imath] does not vanish. I've read/watched a few sources (listed below) trying to figure out why it doesn't vanish, and the one I'm currently stuck on is here. To be specific, I'm stuck on the line after equation (9):
[math]E[(B(t_{i+1}) - B(t_i))^2] = E[B(t_{i+1} - t_i)^2] = t_{i+1} - t_i.[/math]
How could the expected value of the squared difference be related to the length of the time interval at all, unless we explicitly make the "jump size" depend on it, i.e. [imath]B(t_{i+1}) - B(t_i) \propto t_{i+1} - t_i[/imath]?
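To convince myself that the identity at least holds numerically, I ran a quick Monte Carlo check. Note this assumes the defining property that Brownian increments are normal with mean 0 and variance [imath]\Delta t[/imath], which is exactly the property I'm trying to build intuition for:

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01        # time increment t_{i+1} - t_i
n = 100_000      # number of sampled increments

# Brownian increments B(t_{i+1}) - B(t_i) ~ N(0, dt), i.e. sqrt(dt) * standard normal
increments = rng.normal(0.0, np.sqrt(dt), size=n)

print(np.mean(increments**2))  # ≈ dt, matching E[(B(t_{i+1}) - B(t_i))^2] = t_{i+1} - t_i
```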
Almost all of my effort has gone into understanding the link between [imath]dt[/imath] and [imath]dB[/imath]; the intuition I get is that [imath]dB[/imath] is, in a loose sense, a random variable whose variance is explicitly tied to [imath]dt[/imath], so that when we square it we get nice relations. For example:
[math] B(t_{i+1}) - B(t_i) = \Delta B_i = \sigma \sqrt{\Delta t} X_i, \, \Delta t = t_{i+1} - t_i \\ P(X_i = 1) = P(X_i = -1) = 0.5 \\ E[X_i] = (1) \cdot 0.5 + (-1) \cdot 0.5 = 0 \\ E[X_i^2] = V[X_i], \text{ because } V[X_i] = E[X_i^2] - E[X_i]^2 \text{ and } E[X_i] = 0 \\ V[X_i] = (1)^2 \cdot 0.5 + (-1)^2 \cdot 0.5 = 1 \\ E[(B(t_{i+1}) - B(t_i))^2] = E[\Delta B_i^2] \\ = E[(\sigma \sqrt{\Delta t} X_i)^2] \\ = \sigma^2 \Delta t \, E[X_i^2] \\ = \sigma^2 (t_{i+1} - t_i) \\ = t_{i+1} - t_i \text{ if } \sigma = 1 [/math]
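A small simulation of this coin-flip model (my own toy construction above, not anything from the linked derivation) seems to confirm the computation:

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.01       # Δt = t_{i+1} - t_i
sigma = 1.0
n = 100_000

# X_i = +1 or -1, each with probability 0.5
x = rng.choice([-1.0, 1.0], size=n)

# ΔB_i = σ √Δt X_i
db = sigma * np.sqrt(dt) * x

print(np.mean(db))     # ≈ 0, matching E[ΔB_i] = 0
print(np.mean(db**2))  # = σ²Δt exactly, since X_i² = 1 for every sample
```

Here the squared increment isn't just [imath]\sigma^2 \Delta t[/imath] on average but for every single sample, which (if I understand correctly) is the sense in which [imath]dB^2[/imath] behaves like the deterministic quantity [imath]dt[/imath].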
I'm really not sure. Also, sorry my equations look bad; I can't get \begin{align}\end{align} to work. Thanks a lot.
Sources: