
What do you pros do? Time Series Analysis

Hey guys,

So I finished a pretty hefty project for a course I'm taking, Time Series Analysis, and basically had to do an analysis of a bivariate time series: the monthly CAD/USD FX rate and the monthly US/Canada trade balance.

I did the entire thing in R, and it felt really sloppy.

The first part was univariate analysis, so I checked the ACF/PACF of each series and realized neither followed any clear MA/AR-esque pattern. Then I took the first difference with a yearly lag, which made things significantly better, but I still had trouble guessing the MA/AR orders from the ACF/PACF. So I ended up using the auto.arima function as a starting point and played around with the parameters until the diagnostics (residual ACFs, p-value for the Box-Ljung test, etc.) looked as decent as I could make them. When finished with model selection I did a two-year forecast.
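The workflow above can be sketched in base R; the data here is simulated rather than the actual FX series, so the orders and p-values are purely illustrative:

```r
# Univariate ARIMA workflow: difference, inspect ACF/PACF, fit, run
# diagnostics, forecast. Simulated data stands in for the monthly series.
set.seed(42)
x <- arima.sim(model = list(order = c(1, 1, 1), ar = 0.5, ma = -0.3), n = 240)

dx <- diff(x)                       # first difference to remove the trend
acf(dx, plot = FALSE)               # inspect for MA signatures
pacf(dx, plot = FALSE)              # inspect for AR signatures

fit <- arima(x, order = c(1, 1, 1))

# Residual diagnostics: Ljung-Box test on the residuals
lb <- Box.test(residuals(fit), lag = 12, type = "Ljung-Box")
lb$p.value                          # want this comfortably above 0.05

# Two-year (24-month) forecast
fc <- predict(fit, n.ahead = 24)
```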

After that I had to model both series together using a VAR, which was done through VARselect, choosing the lag order by AIC for predictive purposes. Once again I did a two-year forecast. Then I compared the univariate forecasts to the VAR forecasts, realized they were quite different, and sort of tried to justify that the VAR is better anyway since the two time series are inherently related.

Anyway, my question is: does all of this sound very amateur? (I suspect it does.) If so, I'm curious what some of the more advanced techniques are that could be used. I was thinking of throwing a GARCH model in there, but I can't say I understand it well enough to interpret it in a report.

Side question: do we just use GARCH(1,1) for everything in economics/finance, or is there some sort of "auto.arima"-type function that does something similar for finding GARCH models?
 
I would recommend you try a GARCH model, as it is less restrictive than a basic ARMA-type model.
You should always look for the best tradeoff between model complexity and modelling capacity; a GARCH model will not always improve your fit.
Use AIC and BIC to help you determine the number of parameters to use.
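One way to apply that advice in base R is to score a small grid of ARMA(p, q) candidates by AIC and BIC; the data here is simulated, so the selected orders are only illustrative:

```r
# Score a grid of ARMA(p, q) candidates by AIC and BIC (base R).
set.seed(1)
x <- arima.sim(model = list(ar = 0.6, ma = 0.2), n = 500)

orders <- expand.grid(p = 0:2, q = 0:2)
scores <- apply(orders, 1, function(o) {
  fit <- tryCatch(arima(x, order = c(o["p"], 0, o["q"])),
                  error = function(e) NULL)
  if (is.null(fit)) return(c(AIC = Inf, BIC = Inf))
  c(AIC = AIC(fit), BIC = BIC(fit))
})

orders[which.min(scores["AIC", ]), ]   # AIC pick
orders[which.min(scores["BIC", ]), ]   # BIC pick (tends to be more parsimonious)
```

BIC penalizes extra parameters more heavily than AIC, which is why it usually lands on the smaller model.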
 
The astsa package in R has quite a few solid time series functions, and fGarch has functions similar to auto.arima for GARCH models.
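If you want to see what those packages are doing under the hood, a GARCH(1,1) can be fit in base R by maximizing the Gaussian conditional likelihood with optim. This is only a bare-bones sketch on simulated returns; fGarch/rugarch handle initialization, constraints, and standard errors far more robustly:

```r
# Bare-bones GARCH(1,1) fit: minimize the negative Gaussian conditional
# log-likelihood over (omega, alpha, beta) with optim.
set.seed(7)
r <- rnorm(1000) * 0.01             # stand-in return series

garch11.nll <- function(par, r) {
  omega <- par[1]; alpha <- par[2]; beta <- par[3]
  # crude constraint handling: penalize infeasible parameter values
  if (omega <= 0 || alpha < 0 || beta < 0 || alpha + beta >= 1) return(1e10)
  h <- numeric(length(r))
  h[1] <- var(r)                    # initialize with the sample variance
  for (t in 2:length(r))
    h[t] <- omega + alpha * r[t - 1]^2 + beta * h[t - 1]
  0.5 * sum(log(h) + r^2 / h)       # negative log-likelihood (up to a constant)
}

fit <- optim(c(1e-5, 0.05, 0.9), garch11.nll, r = r)
fit$par                             # omega, alpha, beta estimates
```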

I'd take a look at this book: Time Series Analysis and Its Applications: With R Examples - Third Edition

It has quite a few excellent exercises and was a really good companion to the time series class I took recently. It also goes into more advanced frequency-domain methods and Kalman filtering algorithms that have been cited as useful for financial time series in some regard.

Best of luck.
 
I have used GARCH to generate missing implied vol data.
Other time series techniques are widely used in reg calculations.
Op risk often uses ARIMAX.
Incremental Default Risk often uses ARIMA.
Monte Carlo (w time series elements) is used in IMM credit.
Gaussian Copulae are used for those Monte Carlos and for IRB.
Extensive actuarial modeling is done for AMA op risk.
Extensive EDA testing is done for CCAR GMS.
OLS Regression and time series are used for PPNR.
 
Nice, those are definitely some concepts (and acronyms) I'm looking forward to learning, haha. Thank you!
 
I did the entire thing in R, and it felt really sloppy.
If you don't like cats, most likely you just cannot cook them properly! :D
R is a great tool with a great number of (not always great) packages.

As for fitting a time series model to real data, it is both art and science.
Don't expect to learn it from profs who never dealt with real-world problems.
And have a look at the QQ-plot of the residuals of day-ahead NatGas prices on the German market (I once needed to create a model for it):
http://www.yetanotherquant.com/fig5b.pdf
Looks nearly perfect, doesn't it? But in order to achieve such perfection I had to invent a new probability distribution.
 
Looks nearly perfect, doesn't it? But in order to achieve such perfection I had to invent a new probability distribution.

Just got your book in today. Looking forward to going through it and seeing what's what. Figured I'd share since you popped up on a thread I was on. Thanks for being willing to share.
 
Doing ACF and PACF without first differencing is useless. The ACF determines the order of the MA model and the PACF determines the order of the AR model. auto.arima is the worst function to use. You would have to use the EACF (extended autocorrelation function) to determine the order and finally the Yule-Walker equations. I have done extensive time series modeling.
 
Doing ACF and PACF without first differencing is useless. The ACF determines the order of the MA model and the PACF determines the order of the AR model. auto.arima is the worst function to use: DO NOT USE AUTO.ARIMA. You would have to use the EACF (extended autocorrelation function) to determine the order, and finally the Yule-Walker equations to estimate the coefficients from the autocovariances. Finally, run a Ljung-Box test on the residuals at both short- and long-term lags to check whether any correlation remains. I have done extensive time series modeling.
Also, you would first have to check whether your series exhibits any conditional heteroskedasticity. Otherwise you can't even use a regular ADF test and might have to resort to the Phillips-Perron test before using the ACF and PACF. Finally, if your residuals have autocorrelation, you can't use a regular ARIMA model and might have to resort to regression with time series errors or some type of GLS model. I would suggest reading Forecasting, Structural Time Series Models and the Kalman Filter by Harvey.
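For the Yule-Walker step, base R's ar.yw solves the Yule-Walker equations directly and selects the AR order by AIC; a sketch on simulated data, with the Ljung-Box checks on the residuals at short and longer lags (EACF itself lives in the TSA package, not base R):

```r
# Yule-Walker estimation in base R: ar.yw() solves the Yule-Walker
# equations and picks the AR order by AIC.
set.seed(3)
x <- arima.sim(model = list(ar = c(0.5, -0.25)), n = 1000)

fit <- ar.yw(x, order.max = 10)
fit$order                          # AIC-selected order (should be near 2 here)
fit$ar                             # Yule-Walker coefficient estimates

# Ljung-Box on residuals, at short and longer lags
Box.test(na.omit(fit$resid), lag = 6,  type = "Ljung-Box")
Box.test(na.omit(fit$resid), lag = 24, type = "Ljung-Box")
```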

Do not use the ADF test on residuals; residuals do not follow the usual Dickey-Fuller distributions.

@Ken Abbott, can you please elaborate on how you used GARCH to fill in missing vol data? I'm assuming you created h-day-ahead forecasts when doing this, i.e. using
P(t) = P(0) * exp((r - 0.5*sigma^2)*t + sigma*sqrt(t)*Z)
with a GARCH volatility forecast plugged in for sigma?
 

I'm assuming you blindly used a VAR without understanding how the model is even used. It's a systems regression model which can include contemporaneous terms; the purpose of a VAR is to model the lead/lag relationships between the series.

First, did you check whether your time series are cointegrated? If so, you would have to include an error correction model to capture the short-term deviations. If not, you need to stick with the levels of the data, use the Toda-Yamamoto procedure to determine the correct lags of the model, and determine Granger causality using block F-tests. Once all this is done, you can remove the unnecessary parameters you have included in the model. You would also need to run the serial.test function in R with the Breusch-Godfrey test on the residuals (why? Google and find out).

And how did you take structural changes into account? Any structural change (you can test using the Bai-Perron test) will void using the VAR model completely, and you might have to use two-stage linear least squares to model the bivariate nature of the two series. Do not use the Chow test, as it doesn't tell you when a break point occurred, but the Bai-Perron test does: you basically regress the matrix of the two series against a constant and check for changes in the mean parameter; a change in the mean parameter indicates a structural change in the model. This might be the main reason why your univariate results differ from the systems model, as the systems model might pick up interdependencies among the variables.

Also, plot the impulse responses to see whether one series actually impacts the other. Forecast comparisons alone provide little information about what is going on inside the model: if the block F-tests and impulse response functions don't show any impact of one variable on another, then the systems model should give essentially identical results to the univariate case.
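As a quick base-R sanity check before reaching for the vars package (VARselect/VAR, plus ca.jo for cointegration), note that ar() also fits a VAR to a multivariate series and selects the lag order by AIC. A sketch on a simulated bivariate VAR(1):

```r
# Minimal VAR fit in base R: ar() on a bivariate series estimates a VAR
# by multivariate Yule-Walker and selects the order by AIC.
set.seed(11)
n <- 300
e <- matrix(rnorm(2 * n), ncol = 2)
y <- matrix(0, n, 2)
A <- matrix(c(0.5, 0.1, 0.2, 0.4), 2, 2)   # true VAR(1) coefficient matrix
for (t in 2:n) y[t, ] <- A %*% y[t - 1, ] + e[t, ]

fit <- ar(y, order.max = 8)
fit$order                        # AIC-chosen lag
fit$ar[1, , ]                    # estimated lag-1 coefficient matrix
```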

There is also a slightly more advanced approach: looking at the CUSUM and MOSUM of the residuals, which is another way to detect structural changes in the model. R has a package called 'strucchange'; I encourage you to look into it. There is also a nice lecture somewhere on Google, but the concepts can get pretty advanced. Multiple breaks indicate possible regime switches in the model; hence, the parameters of the mean equation can be modelled as a Markov process. You would have to calculate the probability that the series stays in a particular regime, and once you do that and get the expected duration, you can forecast the two series and determine how long each one goes up, goes down, or stays the same.
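The idea behind the CUSUM test can be sketched in base R with simulated data containing one level shift; under parameter stability the cumulative sum of standardized residuals wanders like a Brownian bridge, while a break pushes it into large excursions (strucchange::efp and sctest do this properly, with the right critical bands):

```r
# CUSUM sketch: cumulative sums of standardized residuals from a
# constant-mean model, on a series with a mean shift at t = 151.
set.seed(5)
x <- c(rnorm(150, mean = 0), rnorm(150, mean = 1))

res   <- x - mean(x)                 # residuals from a constant-mean fit
cusum <- cumsum(res) / (sd(x) * sqrt(length(x)))

max(abs(cusum))                      # large excursions suggest a break
which.max(abs(cusum))                # rough location of the shift
```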
 
I wouldn't get the book Analysis of Time Series Using R; it's a very elementary book. I would suggest looking into Analysis of Integrated and Cointegrated Time Series with R by Bernhard Pfaff. This is what's done practically. The original book by Cowpertwait is good if you are in high school.
 
Also, you would first have to check whether your series exhibits any conditional heteroskedasticity. Otherwise you can't even use a regular ADF test and might have to resort to the Phillips-Perron test before using the ACF and PACF. Finally, if your residuals have autocorrelation, you can't use a regular ARIMA model and might have to resort to regression with time series errors or some type of GLS model. I would suggest reading Forecasting, Structural Time Series Models and the Kalman Filter by Harvey.

Do not use the ADF test on residuals; residuals do not follow the usual Dickey-Fuller distributions.

@Ken Abbott, can you please elaborate on how you used GARCH to fill in missing vol data? I'm assuming you created h-day-ahead forecasts when doing this, i.e. using
P(t) = P(0) * exp((r - 0.5*sigma^2)*t + sigma*sqrt(t)*Z)
with a GARCH volatility forecast plugged in for sigma?
Correct. This is frequently used for reg calculations where you need some kind of reasonable proxy.
 
@rajanS That was a solid response. I actually did check Granger causality, but even so, I can't say I knew 100% what was going on with the VAR model. I'll definitely look into the packages you suggested, thanks!

Also, out of curiosity, since I found this sort of analysis very interesting: is this something you would expect to become very good at through completing one of the top-ranked MFE programs?
 
@rajanS That was a solid response. I actually did check Granger causality, but even so, I can't say I knew 100% what was going on with the VAR model. I'll definitely look into the packages you suggested, thanks!

Also, out of curiosity, since I found this sort of analysis very interesting: is this something you would expect to become very good at through completing one of the top-ranked MFE programs?

No. You will only get good at this stuff by learning it on your own. That said, a course in econometrics/time series definitely helps.
 