'===============================
' VAR Tutorial
' by: NF Katzke
'===============================
' Before doing any analysis of our data, it is often useful to look at summary statistics of our data series. You will often include these in a paper / thesis. Let's do this on a rolling basis by calculating some statistics per year for our series:
'===============================
' Rolling Statistics
'===============================
' Rolling descriptive stats table of min, max, mean, median and standard deviation since 2010:
' set sample
smpl 2010m01 @last
' create a series containing the year identifier to be used to get annual numbers
series year = @year
' Use a loop to compute statistics for each year and
' freeze the output from each of the tables.
' Remember from my Eviews code pdf - %x is a string placeholder (parameter) and can be used effectively in loops...
for %var gold_index goldprice goldproduction
%name = "tab_" + %var
freeze({%name}) {%var}.statby(min,max,mean,med,sd) year
show {%name}
next
' TIP: Save your Eviews dataframe and results to excel easily...
' Saving variables to excel: (uncomment the following):
' pagesave(type=excelxml) "C:\Users\YOURNAMEHERE\Tutorial3\Tutoutput.xlsx"
'===============================
'...back to the tutorial...
' Graph the level variables:
' first group the variables, and then use line for a graph...
group level_vars gold_index goldprice goldproduction
freeze(levelplots) level_vars.line
' You can now edit this graph in Eviews and save it, or alternatively look at the Object Reference guide pdf from p.230 onwards (Eviews 9 documentation).
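' You can also save a frozen graph to disk directly from code. A sketch only -
' the file type and path below are placeholders (uncomment and adjust to your setup):
' levelplots.save(t=pdf) "C:\Users\YOURNAMEHERE\Tutorial3\levelplots"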
' Do log transformations:
series lindex = log(gold_index)
series lprice = log(goldprice)
series lprod = log(goldproduction)
'Fit a VAR:
var varmodel1.ls 1 2 lindex lprice lprod
' To now add exogenous variables, add them after the @ sign:
var Var_ex.ls 1 2 lindex lprice lprod @ rex
' Lag length criteria:
Var_ex.laglen(8)
' Block exogeneity (Granger causality) test:
freeze(GrangerCause) Var_ex.testexog
' To write out the representative form of the model:
Var_ex.spec
' Now you will see several blocks - first the estimation command used to fit the model, then the model written with coefficient placeholders, and lastly the model with the estimated coefficient values substituted in (useful for writing out the model in a paper).
' The Estimation proc part is great for seeing how to call a model in code...
'########## AR Roots (testing model stationarity)
var_ex.arroots
' At the bottom it should say whether one or more of the inverse roots lie outside the unit circle (if so, the VAR does not satisfy the stability condition)
'########## Correlation matrix
var_ex.residcor ' Consider contemporaneous correlations of variables..
'########## Correlograms
freeze(Correlogram_plot) var_ex.correl(12)
' ######### Portmanteau test: testing for remaining serial correlation in the system across all lags collectively..
' Note: this test is effectively a multivariate Ljung-Box Q stat test for residual serial autocorr up to a given order (below we choose 6):
freeze(Portmanteau) var_ex.qstats(6)
' Although, given that some of the roots are close to or exceed 1 (see arroots) - and the documentation notes that this test becomes unreliable when roots are near 1 - this is not a great test for remaining serial correlation in this model... Our degrees of freedom are also very low, making the test weaker still...
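' Given the caveats above, a complementary check is the residual serial
' correlation LM test, which tests lag by lag rather than cumulatively.
' A sketch - lag order 6 is chosen here simply to match the Q-stats above:
freeze(LM_Autocorr) var_ex.arlm(6)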
' Normality testing: using a Jarque-Bera normality test - and using a Cholesky decomposition to orthogonalize the residuals...
freeze(Normality_Cholesky) var_ex.jbera(factor=col) ' using option: cholesky decomp..
' This tests whether the residuals are multivariate normal...
' Note that component 3 strongly rejects the normality hypothesis for the residuals, on both of the higher moments (skewness and kurtosis)...
' To see what component three is again, use:
' Var_ex.spec
' But we know that Cholesky is sensitive to ordering, so to get a measure not affected by order, we could use the following normality test:
freeze(Normality_Doornik) var_ex.jbera(factor=cor)
' See p. 649 for a deeper discussion of this normality test...
'################### Impulse Response test:
freeze(IR) var_ex.impulse(se=a) ' Option set to use analytic standard errors...
' Note that the residual persistence is rather high in our system - echoing the near non-stationarity of the VAR flagged by arroots earlier...
'################## Forecasting:
' Let's now use our current VAR to forecast production...
' To forecast, we first need to convert our VAR into a model object:
var_ex.makemodel(mod1)
' Now we use this model to do a dynamic forecast for the in-sample period: 2012m06 @last
smpl 2012m06 @last
mod1.solve(d=d) ' option d controls the solution dynamics: d=d gives dynamic forecasts, d=s static...
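' For comparison you could also run a static (one-step-ahead) solve, which uses
' the actual lagged values at each step. Note this would overwrite the solved
' _0 series from the dynamic run, so it is left commented out here as a sketch:
' mod1.solve(d=s)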
' Now in your workfile, you have lindex_0 (the baseline forecasts) and lindex_f (dynamic forecast of the variable). These can now easily be compared:
' Let's now plot the forecast and actual values for the index variable:
smpl @all
group FC_Comparison lindex lindex_0
freeze(fc_comparison_graph) FC_Comparison.line
fc_comparison_graph.draw(shade, bottom, color = "gray") 2012m01 @last ' Shade forecast area
fc_comparison_graph.draw(line, bottom, color = "red") 2012m01 ' adds vertical line.
fc_comparison_graph.addtext(t) "Forecasting Mining Index values"
fc_comparison_graph.addtext(3.49, 2.67, x) "Dynamic FC"
show fc_comparison_graph
' HOMEWORK:
'===========
' Provide me with the following coding output:
'-------------------------------------------------------------
' Repeat the above, but now create the VAR in differences.
' Write a program that compares an in-sample dynamic forecast from this model with a comparable Holt-Winters forecast.
' Calculate the Diebold-Mariano statistic for the two forecasts of the production variable.
' Add a title to your plot, as well as drawing a vertical line where your forecast starts.