
Student-t Innovation Problems - GARCH(1,1)

This is an annoying problem, but I have summarised pretty much everything as concisely as possible and I hope that someone can help me.

I'm tidying up a series of FORTRAN 95 modules to create a fast volatility calculator and teach myself the application of quantitative methods. I'll throw it out as open source when I'm done with the FORTRAN and C++ variants, as I don't know whether one actually exists in FORTRAN 95 and someone somewhere might want it / give me a job ;)

Problem Overview
  • Simulation of log-returns with a Student's-t GARCH(1,1) process results in volatility far lower than it should be and a tiny returns series (see the GNUplot figures at the bottom of the post)
  • The a1 parameter seems to be underestimated.
  • I obtain values as below from optimisation:
    • Gaussian GARCH... a0 ~ 0, a1 ~ 0.08, b1 ~ 0.9
    • Student's-t GARCH... a0 ~ 0, a1 ~ 0.02, b1 ~ 0.9, v = 7.4
    • When I tweak the Student's-t a1 value to around 0.08, like the Gaussian value, it gives decent results!
  • The problem I believe must lie in the Monte Carlo simulation of the Student's-t innovations or the MLE and its gradient for the Student's-t distribution
What I'm doing
  • Parameters are derived from a maximisation of the Student's-t MLE through a BFGS quasi-Newton optimisation (see the sketch after this list).
  • Optimisation uses the Student's-t MLE and MLE gradient.
  • Failing to get a good Student's-t GARCH(1,1) return series for Monte Carlo simulation :(
  • Parameters of a Gaussian GARCH(1,1) using the same optimisation & Monte Carlo algorithms give good results.
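As a rough sketch of how the likelihood and gradient are wired into the optimiser: the bfgs_minimise driver below is a hypothetical placeholder (not a routine from this post). The important part is the sign convention - a minimiser needs the negative log-likelihood and negative gradient, since MLE_student_t in the appendix is written to be maximised.
Code:
! Hypothetical sketch: hooking the appendix MLE and gradient functions into a
! quasi-Newton minimiser. "bfgs_minimise" is a placeholder name, not code from
! this post; most BFGS drivers minimise, so both objective and gradient are negated.
Function neg_MLE(x) result(f)
   real, intent(in) :: x(4)      ! parameter vector (a0, a1, b1, v)
   real :: f
   f = -MLE_student_t(x)         ! MLE_student_t is listed in the appendix
End Function neg_MLE

Function neg_Grad(x) result(g)
   real, intent(in) :: x(4)
   real :: g(4)
   g = -Grad_Student_t(x)        ! Grad_Student_t is listed in the appendix
End Function neg_Grad

! example call, with a rough initial guess:
! x0 = (/ 1.0e-6, 0.05, 0.90, 8.0 /)
! call bfgs_minimise(neg_MLE, neg_Grad, x0, x_opt)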

GARCH(1,1) Equation
I used the following equation for the GARCH(1,1) variance... I don't think there's anything wrong with this :/
\sigma_t^2 = \omega + \sum_{i=1}^{q}\alpha_i\,\epsilon_{t-i}^2 + \sum_{j=1}^{p}\beta_j\,\sigma_{t-j}^2

where p = 1 and q = 1. I also used \omega = a0, \alpha_i = a1 and \beta_j = b1, and I set the "funny e", \epsilon_t, to the log-returns series minus the mean.
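In code terms this recursion is the GARCH11_Variance helper that the appendix routines call; it isn't listed there, so below is just a minimal sketch of the form it takes, assuming the returns passed in are already mean-adjusted.
Code:
! Minimal sketch of the GARCH11_Variance helper called in the appendix
! (the original routine is not shown in this post):
! h_t = a0 + a1*eps_{t-1}^2 + b1*h_{t-1}
Function GARCH11_Variance(a0, a1, b1, h_prev, eps_prev) result(h)
   real, intent(in) :: a0, a1, b1   ! GARCH(1,1) parameters
   real, intent(in) :: h_prev       ! conditional variance at t-1
   real, intent(in) :: eps_prev     ! mean-adjusted log-return at t-1
   real :: h
   h = a0 + a1*eps_prev**2 + b1*h_prev
End Function GARCH11_Variance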

Student's-t Monte Carlo Simulations
I wonder if I have the correct distribution and returns model, as I've seen a few different forms of the Student's-t p.d.f. online, and I'm unsure whether I can simply multiply the t-distributed random variable by the GARCH volatility as I can with the Gaussian distribution...

I calculate the next Student's-t GARCH return from the following equation:

r_t = \mu + \epsilon_t, \qquad \epsilon_t = \sigma_t z_t

where

z_t \sim t_\nu(0,1) is a standardised Student's-t random variable, \mu is the mean log-return, and \sigma_t^2 is the GARCH(1,1) variance from the equation above.
To simulate Student's-t distributed random numbers, I generated the p.d.f. of the standardised Student's-t distribution:

f(z) = \frac{\Gamma\!\left(\frac{\nu+1}{2}\right)}{\Gamma\!\left(\frac{\nu}{2}\right)\sqrt{(\nu-2)\pi}} \left(1 + \frac{z^2}{\nu-2}\right)^{-\frac{\nu+1}{2}}

This p.d.f. was numerically summed from z = -10 to z = 10 and the resulting CDF inverted to give the inverse cumulative distribution (iCDF), so that t-distributed random variables could be generated from uniform random numbers through the iCDF.
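A rough sketch of how such a table can be built (routine and variable names here are illustrative only):
Code:
! Rough sketch: build a lookup table for the standardised Student's-t iCDF by
! summing the p.d.f. on a grid from z = -10 to z = +10 and normalising the
! resulting CDF. Names (build_t_icdf_table, cdf_grid, ...) are illustrative.
Subroutine build_t_icdf_table(v, m, z_grid, cdf_grid)
   integer, intent(in) :: m              ! number of grid points
   real, intent(in)    :: v              ! Student's-t degrees of freedom
   real, intent(out)   :: z_grid(m)      ! grid of z values from -10 to +10
   real, intent(out)   :: cdf_grid(m)    ! CDF evaluated on the grid
   real, parameter :: pi = 3.14159265358979
   real :: dz, c, pdf
   integer :: i

   dz = 20.0 / real(m - 1)
   c  = gamma((v+1.0)/2.0) / ( gamma(v/2.0) * sqrt((v-2.0)*pi) )

   ! sum the standardised-t p.d.f. across the grid to build the CDF
   do i = 1, m
      z_grid(i) = -10.0 + real(i-1)*dz
      pdf = c * (1.0 + z_grid(i)**2/(v-2.0))**(-(v+1.0)/2.0)
      if (i == 1) then
         cdf_grid(i) = pdf*dz
      else
         cdf_grid(i) = cdf_grid(i-1) + pdf*dz
      end if
   end do
   cdf_grid = cdf_grid / cdf_grid(m)     ! normalise so the CDF ends at 1

   ! t_iCDF(U) is then the z whose CDF value equals the uniform draw U,
   ! e.g. found by searching cdf_grid and linearly interpolating z_grid
End Subroutine build_t_icdf_table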

Code for how I generated the MC simulation can be found in the appendix also!

Student's-t GARCH(1,1) MLE
There seems to be an extra -log(volatility) term added onto the MLE, and I wonder if this is causing a problem in the optimisation...

I used this source which gave a formula for the MLE as follows (copied directly from the link):

[Equation image 14qp.png: the conditional Student's-t log-likelihood quoted from the source, including the -log(σ_t) term discussed above.]

I derived the MLE as below, where ψ represents the GARCH(1,1) parameters a0, a1 and b1 (nicely typeset by Wolfram Alpha!):

\ln L(\psi,\nu) = \sum_{t=2}^{n}\left[\ln\Gamma\!\left(\tfrac{\nu+1}{2}\right) - \ln\Gamma\!\left(\tfrac{\nu}{2}\right) - \tfrac{1}{2}\ln\!\big((\nu-2)\pi\big) - \ln h_t - \tfrac{\nu+1}{2}\ln\!\left(1 + \frac{\epsilon_t^2}{(\nu-2)h_t}\right)\right]

where h_t = \sigma_t^2 is the conditional GARCH variance and \epsilon_t is the mean-adjusted log-return.


A pseudo-code example can be found at the bottom of the post in the appendix.

Student's-t GARCH(1,1) MLE Gradient
I found the gradient of the MLE by simply running it through Wolfram Alpha, because I'm lazy, and got the following:

\frac{\partial \ln L}{\partial \psi_k} = \sum_{t=2}^{n} \frac{(2-\nu)\,h_t + \tfrac{1}{2}(\nu-1)\,\epsilon_t^2}{h_t\big((\nu-2)h_t + \epsilon_t^2\big)} \,\frac{\partial h_t}{\partial \psi_k}, \qquad \psi_k \in \{a_0, a_1, b_1\}

\frac{\partial \ln L}{\partial \nu} = \sum_{t=2}^{n}\left[\tfrac{1}{2}\psi_0\!\left(\tfrac{\nu+1}{2}\right) - \tfrac{1}{2}\psi_0\!\left(\tfrac{\nu}{2}\right) - \frac{1}{2(\nu-2)} - \tfrac{1}{2}\ln\!\left(1 + \frac{\epsilon_t^2}{(\nu-2)h_t}\right) + \frac{(\nu+1)\,\epsilon_t^2}{2(\nu-2)\big((\nu-2)h_t + \epsilon_t^2\big)}\right]

where \psi_0 denotes the digamma function.
*Side note - my digamma function (Psi in the code) works fine and I've tested it many times.

I obtained the derivatives of the GARCH(1,1) variance equation as follows:

\frac{\partial h_t}{\partial a_0} = 1 + b_1\frac{\partial h_{t-1}}{\partial a_0}, \qquad \frac{\partial h_t}{\partial a_1} = \epsilon_{t-1}^2 + b_1\frac{\partial h_{t-1}}{\partial a_1}, \qquad \frac{\partial h_t}{\partial b_1} = h_{t-1} + b_1\frac{\partial h_{t-1}}{\partial b_1}

The pseudo-code of my MLE gradient function can be found at the bottom of the post in the appendix.
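(One way to sanity-check an analytic gradient like this, independently of Wolfram Alpha, is to compare it against central finite differences of the MLE function. A rough sketch, reusing the appendix functions; the routine name and step size are illustrative:)
Code:
! Rough sketch: compare the analytic gradient against central finite differences
! of MLE_student_t. Single precision limits the accuracy; double precision and
! a relative step size would be better in practice.
Subroutine check_gradient(x)
   real, intent(in) :: x(4)          ! parameter vector (a0, a1, b1, v)
   real :: xp(4), xm(4), g(4), g_fd(4), eps
   integer :: k

   eps = 1.0e-4
   g = Grad_Student_t(x)             ! analytic gradient from the appendix

   do k = 1, 4
      xp = x ; xp(k) = xp(k) + eps
      xm = x ; xm(k) = xm(k) - eps
      g_fd(k) = ( MLE_student_t(xp) - MLE_student_t(xm) ) / (2.0*eps)
   end do

   print *, 'analytic gradient :', g
   print *, 'finite differences:', g_fd
End Subroutine check_gradient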


Figures
Figure 1.a (left): The log-returns from 25 Monte Carlo simulations using a Student's-t GARCH(1,1) process
Figure 1.b (right): The daily volatility from 25 Monte Carlo simulations using a Student's-t GARCH(1,1) process
[Images: Monte_Carlo_Sims_Returns_Student_t.jpg, Monte_Carlo_Sims_Vola_Student_t.jpg]


Figure 2.a (left): The log-returns from 25 Monte Carlo simulations using a Gaussian GARCH(1,1) process
Figure 2.b (right): The daily volatility from 25 Monte Carlo simulations using a Gaussian GARCH(1,1) process

[Images: Monte_Carlo_Sims_Returns_Gaussian.jpg, Monte_Carlo_Sims_Vola_Gaussian.jpg]


Figure 3: The historic volatility with Student's-t parameters and with Gaussian parameters. The index is the FTSE 100; the log-returns are not to scale - they are shown only to mark important events in the time series.
[Image: Gaussian_Student_t_Historic_Errors.png]


Appendix
The FORTRAN 95 MLE function...
Code:
Function MLE_student_t(x)
   n = size(hist_r) !determine length of historic log-returns array
     
   !initialise the conditional variance
   a0=x(1) ; a1=x(2) ; b1=x(3) ; v=x(4) 
   h0 = a0 / (1.0 - a1 - b1)
   h1 = 0.0
   
   !set MLE at 0
   MLE_student_t = 0.0
                   
   !Assign historic variances - loop from t=2 to end of returns array
   do t = 2,n
       
      !Calculate GARCH variance for time t (this function is 100% correct)
      h1 = GARCH11_Variance(a0,a1,b1,h0,hist_r(t-1))
       
      !Calculate MLE with student-t distribution      
      MLE_student_t = MLE_student_t &
         & + log(h1) &
         & + (v+1.0)*0.5*log( 1.0 + hist_r(t)**2 / ( (v-2.0)*h1) )
       
      !h0 becomes h1 for next time step
      h0=h1
   end do

   !Add constant parts to the MLE expression
   MLE_student_t = - MLE_student_t &
      & + real(n-1)*log(gamma((v+1.0)/2.0)/gamma(v/2.0)/sqrt((v-2.0)*pi))
       
End Function MLE_student_t

The gradient function...
Code:
Function Grad_Student_t(x)
   n = size(hist_r) ! again define n as size of historic returns array
   a0=x(1) ; a1=x(2) ; b1=x(3) ; v=x(4)

   ! set initial variances and derivatives w.r.t. parameters a0,a1,b1 for time step t-1
   h0 = a0 / (1.0 - a1 - b1)
   dh0_a0 = 1.0 / (1.0 - a1 - b1)
   dh0_a1 = a0/(1.0 - a1 - b1)**2
   dh0_b1 = a0/(1.0 - a1 - b1)**2

   ! set initial variance and derivatives for time step t at zero
   h1 = 0.0
   dh1_a0 = 0.0
   dh1_a1 = 0.0
   dh1_b1 = 0.0

   ! set ML derivatives to zero
   dML_a0 = 0.0
   dML_a1 = 0.0
   dML_b1 = 0.0
   dML_v  = 0.0
               
   !This loops from time step t = 2 to n to accumulate the gradient terms
   Do t = 2,n
      !save memory accesses
      r = hist_r(t-1) ! r = historic log-returns for time step t-1
      rsq = r**2
      !Calculate variance for time t (from the return and variance at t-1)
      h1 = a0 + a1*rsq + b1*h0
       
      !differentiate GARCH variance w.r.t. parameters at time step t
      dh1_a0 = 1.0 + b1*dh0_a0
      dh1_a1 = rsq + b1*dh0_a1
      dh1_b1 = h0 + b1*dh0_b1
       
      !*** go forward one time step ***!
      r = hist_r(t)
      rsq = r**2
       
      !differentiate MLE w.r.t. GARCH parameters a0, a1, b1
      dML_a0 = dML_a0 + dh1_a0*(((2.0-v)*h1+rsq*(0.5*v-0.5))/(h1*((v-2.0)*h1+rsq)))
      dML_a1 = dML_a1 + dh1_a1*(((2.0-v)*h1+rsq*(0.5*v-0.5))/(h1*((v-2.0)*h1+rsq)))
      dML_b1 = dML_b1 + dh1_b1*(((2.0-v)*h1+rsq*(0.5*v-0.5))/(h1*((v-2.0)*h1+rsq)))
       
      !differentiate MLE w.r.t. Student's-t parameter, v
      dML_v  = dML_v &
        & + 0.5*rsq*(v+1.0)/((v-2.0)*(h1*(v-2.0)+rsq)) &
        & - 0.5*log(1.0 + rsq/(h1*(v-2.0))) &
        & - 0.5/(v-2.0) &
        & + 0.5*Psi((v+1.0)/2.0) &
        & - 0.5*Psi(v/2.0)
       
      ! Reset time step dummy variables for next loop
      h0 = h1
      dh0_a0 = dh1_a0
      dh0_a1 = dh1_a1
      dh0_b1 = dh1_b1
   End Do
   
   !Output gradient function terms
   Grad_Student_t(1) = dML_a0
   Grad_Student_t(2) = dML_a1
   Grad_Student_t(3) = dML_b1
   Grad_Student_t(4) = dML_v    
End Function Grad_Student_t

Code for Monte Carlo Simulations ...

(To condense the post, the first loop has been removed; it sits outside the main time-step iterations because it calculates the first step from the last historic return in the returns array.)
Code:
! initialise first simulation from last entry in historic data
! loop from second time step to final time step
do t=2, end_of_time_series
   h1 = GARCH11_Variance(a0,a1,b1,h0,MC_r(sim,t-1)) ! Calculate variance from last sim return
   MC_s2(sim,t) = h1  ! store Monte Carlo simulation variance in an array for plotting
   
   if (student_t) then ! User has decided to choose student-t or Gaussian GARCH model
         
      ! call a uniform random number between 0 and 1
      call random_number(X)

      ! calculate return for timestep t
      ! r_t = mean + t_distributed_rand_number(from inverse cum. distr.) * daily volatility
      ! new return = mean +  T(0,1,v) * sqrt( variance )
      MC_r(sim,t) = r_u + t_iCDF(X)*sqrt(h1)

   else ! Gaussian GARCH return = mean + N(0,1) * sqrt(variance)
      MC_r(sim,t) = r_u + Gauss()*sqrt(h1)
   end if

   ! reset variance for next time step
   h0 = h1
end do ! end time step loop and start next simulation
 
Solved it!

Described below if anyone wants to know where the problem was...

I did a load of reading on MLE estimators and also noticed a small subtlety in the formulation of the likelihood equation I quoted earlier (shown again below):
[Equation image 14qp.png again: the quoted log-likelihood, including the -log(σ_t) term.]
That likelihood maximises the distribution of y_t when you are generating innovations for z_t. The conditional Student's-t distribution I am using is already the distribution of y_t, not z_t, so the extra -log(s.dev) term is not necessary.
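For reference, the standard conditional Student's-t log-likelihood for the return y_t (= \epsilon_t here), i.e. without the extra -\ln\sigma_t term, is

\ln L = \sum_{t}\left[\ln\Gamma\!\left(\tfrac{\nu+1}{2}\right) - \ln\Gamma\!\left(\tfrac{\nu}{2}\right) - \tfrac{1}{2}\ln\!\big((\nu-2)\pi\big) - \tfrac{1}{2}\ln h_t - \tfrac{\nu+1}{2}\ln\!\left(1 + \frac{\epsilon_t^2}{(\nu-2)h_t}\right)\right]

Compared with the MLE_student_t code in the appendix, this corresponds to the per-observation term using 0.5*log(h1) rather than log(h1).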

The result was a super fast optimisation that gave some really pretty GNUplot figures for a 7000 day extension to the FTSE 100 :)

(trading days on the x axis, volatility on the y axis)
[Image: 4bqi.png]

(trading days on the x axis, log-return on the y axis)
[Image: mvm9.png]
 