What is the link between dt and dB in stochastic calculus; why is dB^2 = dt?

  • Thread starter: alovya
Hi all,

I am trying to understand a heuristic derivation of Itô's lemma ([imath]B[/imath] denotes standard Brownian motion):

[math]df(t,B) = \frac{\partial f}{\partial t}dt + \frac{\partial f}{\partial B}dB + \frac{1}{2} \frac{\partial^2 f}{\partial B^2}dB^2,[/math]
but I do not understand why the derivation says that [imath]dB^2[/imath] does not vanish. I've read/watched a few sources (below) to figure out why it doesn't vanish, and the one I'm currently stuck on is here. To be specific, I'm stuck on the line after equation (9):

[math]E[(B(t_{i+1}) - B(t_i))^2] = E[B(t_{i+1} - t_i)^2] = t_{i+1} - t_i.[/math]
How could the expected value of the difference squared be at all related to the time interval? Unless we explicitly make the "jump size" proportional to the time intervals i.e. [imath]B(t_{i+1}) - B(t_i) \propto t_{i+1} - t_i[/imath].

Almost all of my effort has been trying to understand the link between [imath]dt[/imath] and [imath]dB[/imath]; the intuition that I get is that [imath]dB[/imath] is in a loose sense some kind of random variable whose variance is explicitly made to be related to [imath]dt[/imath] so that when we square it we get nice relations. For example:

[math] B(t_{i+1}) - B(t_i) = \Delta B_i = \sigma \sqrt{\Delta t}\, X_i, \quad \Delta t = t_{i+1} - t_i \\ P(X_i = 1) = P(X_i = -1) = 0.5 \\ E[X_i] = (1) \cdot 0.5 + (-1) \cdot 0.5 = 0 \\ E[X_i^2] = V[X_i] + E[X_i]^2 = V[X_i], \text{ since } E[X_i] = 0 \\ V[X_i] = (1)^2 \cdot 0.5 + (-1)^2 \cdot 0.5 = 1 \\ E[(B(t_{i+1}) - B(t_i))^2] = E[\Delta B_i^2] \\ = E[(\sigma \sqrt{\Delta t}\, X_i)^2] \\ = \sigma^2 \Delta t\, E[X_i^2] \\ = \sigma^2 (t_{i+1} - t_i) \\ = t_{i+1} - t_i \text{ if } \sigma = 1 [/math]
I'm really not sure. Also, apologies that my equations look bad; I can't get \begin{align}\end{align} to work. Thanks a lot.
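The scaled random-walk model above can be sanity-checked numerically. This is just an illustrative sketch of the [imath]\Delta B_i = \sigma \sqrt{\Delta t}\, X_i[/imath] model from the post (not anyone's actual code); note that with a two-point [imath]X_i[/imath], each squared increment equals [imath]\sigma^2 \Delta t[/imath] exactly, so the sample mean of the squares matches [imath]\Delta t[/imath] up to floating-point error:

```python
import random

# Empirical check of the scaled random-walk model:
# Delta B = sigma * sqrt(dt) * X with X = +1 or -1 equally likely,
# so E[(Delta B)^2] = sigma^2 * dt (= dt when sigma = 1).
# For this two-point X the square is the same for every sample;
# the randomness only shows up in odd moments.
random.seed(0)

sigma, dt, n = 1.0, 0.01, 200_000
increments = [sigma * dt**0.5 * random.choice([-1.0, 1.0]) for _ in range(n)]

mean_sq = sum(db * db for db in increments) / n
print(mean_sq)  # equal to dt = 0.01 up to floating-point error
```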

Sources:
 
How could the expected value of the difference squared be at all related to the time interval?
Because this is how Brownian motion is constructed! It is simply a defining property of the process that the variance of any increment (increments have mean zero, so the second uncentered moment is just the variance) equals the length of time elapsed over that increment. In Kuo's “Introduction to Stochastic Integration”, Ch. 3, you will find three constructions of Brownian motion verifying that a process possessing the defining properties of Brownian motion exists.
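That defining property is easy to see in simulation. A minimal sketch (mine, not from Kuo's book): sample Brownian increments over an interval of length [imath]h[/imath] as [imath]N(0, h)[/imath] draws and check that their sample second moment is close to [imath]h[/imath]:

```python
import random

# Sanity check of the defining property: a Brownian increment over an
# interval of length h is distributed N(0, h), so its sample variance
# (= mean square, since the mean is 0) should be close to h.
random.seed(1)

h, n = 0.5, 100_000
incs = [random.gauss(0.0, h**0.5) for _ in range(n)]  # B(t+h) - B(t) ~ N(0, h)

var = sum(x * x for x in incs) / n
print(var)  # close to h = 0.5
```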
Almost all of my effort has been trying to understand the link between dt and dB; the intuition that I get is that dB is in a loose sense some kind of random variable whose variance is explicitly made to be related to dt so that when we square it we get nice relations
The reason that dB^2 = dt is that the quadratic variation of Brownian motion up to time t equals t. There are several rigorous proofs of Itô's Lemma, and in each one you will find the key step where the quadratic variation of Brownian motion is used to show dB^2 = dt.
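The quadratic-variation statement can be illustrated numerically (an illustrative sketch, not a proof): for a single Brownian path on [0, T], the sum of squared increments over a fine partition concentrates near T as the mesh shrinks.

```python
import random

# Quadratic variation of a Brownian path on [0, T]: sum the squared
# increments over an n-point uniform partition. As n grows, the sum
# concentrates near T (its variance is 2*T^2/n -> 0).
random.seed(2)

T = 1.0
for n in (10, 1_000, 100_000):
    dt = T / n
    qv = sum(random.gauss(0.0, dt**0.5) ** 2 for _ in range(n))
    print(n, qv)  # approaches T = 1.0 as n grows
```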
 
Thanks for the clarification and the book suggestion. I have gone through proofs about quadratic variation, but was unsure where [imath]V[\sum \Delta B] = t[/imath] kept coming from. Am I right in saying that this is so because of the way Brownian motion is constructed?
 
So first, you need the sum of differences to be taken over increments whose intervals (1) are mutually disjoint (excluding endpoints) and (2) form a partition of some larger interval.

(1) matters because you need the independence of increments of Brownian motion to split the variance of the sum into the sum of the variances; you only have independent increments when the increments are over mutually disjoint intervals (again, disjoint once endpoints are excluded). This is a defining property of Brownian motion and is just as important to know as the fact that Brownian motion has Gaussian increments.

(2) matters because you want the sum to telescope. That is, since Var(B(t) - B(s)) = t - s, for the variance of the sum of the increments to equal t you need increments of the form B(t_1) - B(0), B(t_2) - B(t_1), …, B(t_n) - B(t_{n-1}) with t_n = t.
If you instead take, for example, B(3) - B(2) and B(1) - B(0), then by independent increments the variance of the sum is (3 - 2) + (1 - 0) = 2, not 3.
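That two-increment example can be checked by simulation. A sketch under my own conventions (the helper `bm_values` is mine, not standard): sample a Brownian path at times 1, 2, 3, then compare the variance of the full telescoping sum on [0, 3] against the sum B(1) - B(0) plus B(3) - B(2), which is disjoint but not a partition of [0, 3]:

```python
import random

# Partition vs. non-partition: increments over a partition of [0, 3]
# telescope to B(3), whose variance is 3; the disjoint-but-incomplete
# pair (B(1) - B(0)) + (B(3) - B(2)) has variance (1 - 0) + (3 - 2) = 2.
random.seed(3)

def bm_values(times):
    """Sample one Brownian path at the given increasing times (B(0) = 0)."""
    b, prev, out = 0.0, 0.0, {}
    for t in times:
        b += random.gauss(0.0, (t - prev) ** 0.5)  # increment ~ N(0, t - prev)
        out[t] = b
        prev = t
    return out

n = 200_000
full, partial = [], []
for _ in range(n):
    B = bm_values([1.0, 2.0, 3.0])
    full.append(B[1.0] + (B[2.0] - B[1.0]) + (B[3.0] - B[2.0]))  # = B(3)
    partial.append(B[1.0] + (B[3.0] - B[2.0]))

var_full = sum(x * x for x in full) / n       # close to 3
var_partial = sum(x * x for x in partial) / n  # close to 2
print(var_full, var_partial)
```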

Probably being overly pedantic…
 