Abstract
This document describes the BGVAR library to estimate Bayesian global vector autoregressions (GVARs) with different prior specifications and stochastic volatility. The library offers a fully fledged toolkit to conduct impulse response analysis, forecast error variance decompositions and historical decompositions. To identify structural shocks in a given country model or joint regional shocks, the library offers simple Cholesky decompositions, generalized impulse response functions, and zero and sign restrictions, the latter of which can also be placed on the cross-section. We also allow for different structures of the GVAR, such as using different weights for different variables or setting up additional country models that determine global variables such as oil prices. Last, we provide functions to conduct and evaluate out-of-sample forecasts as well as conditional forecasts that allow the user to set a future path for a particular variable of interest. The toolbox requires R>=3.5.
This vignette describes the BGVAR package, which allows the user to estimate Bayesian global vector autoregressions (GVARs). The focus of the vignette is to provide a range of examples that demonstrate the full functionality of the library. It is accompanied by a more technical description of the GVAR framework. Here, it suffices to briefly summarize the main idea of a GVAR, which is a large system of equations designed to analyze or control for interactions across units. Most often, these units refer to countries, and the interactions between them arise through economic and financial interdependencies. The examples we provide in this document also use cross-country data. In principle, however, the GVAR framework can be applied to other units, such as regions, firms, etc. The following examples show how the GVAR can be used either to estimate spillover effects from one country to another, or, alternatively, to look at the effects of a domestic shock while controlling for global factors.
In a nutshell, the GVAR consists of two stages. In the first, \(N\) vector autoregressive (VAR) models are estimated, one per unit. Each equation in a unit model is augmented with foreign variables, which control for global factors and later link the unit-specific models. Typically, these foreign variables are constructed using exogenous, bilateral weights, stored in an \(N \times N\) weight matrix. The classical framework of Pesaran, Schuermann, and Weiner (2004) and Dees et al. (2007) proposes estimating these country models in vector error correction form, whereas in this package we take a Bayesian stance and estimation is carried out using VARs. The user can transform the data into stationary form prior to estimation or estimate the model in levels. The BGVAR package also allows the user to include a trend to obtain trend-stationary data. In the second step, the single country models are combined using the assumption that they are linked via the exogenous weights, yielding a global representation of the model. This representation is then used to carry out impulse response analysis and forecasting.
This vignette consists of four blocks: getting started and data handling, estimation, structural analysis and forecasting. In the next part, we discuss which data formats the BGVAR library can handle. We then proceed by showing examples of how to estimate a model using different Bayesian shrinkage priors; for references see Crespo Cuaresma, Feldkircher, and Huber (2016) and Feldkircher and Huber (2016). We also discuss how to run diagnostic and convergence checks and how to examine the main properties of the model. In the third section, we turn to structural analysis, either using recursive (Cholesky) identification or sign restrictions. We also discuss structural and generalized forecast error variance decompositions as well as historical decompositions. In the last section, we show how to compute unconditional and conditional forecasts with the package.
We start by installing the package from CRAN and attaching it with
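install.packages("BGVAR")
library(BGVAR)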
To ensure reproducibility of the examples that follow, we set a particular seed for R's random number generator. As with every R library, the BGVAR package provides built-in help files, which can be accessed by typing ? followed by the function / command of interest.
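For instance (the seed value itself is arbitrary):

set.seed(123)  # any fixed value ensures reproducibility of the draws below
?bgvar         # opens the help file of the main estimation function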
It also comes with four example data sets: two correspond to the quarterly data set used in Feldkircher and Huber (2016) (eerData, eerDataspf), one is at monthly frequency (monthlyData). For convenience, we also include the data that come with the Matlab GVAR toolbox of Smith and Galesi (2014), pesaranData, of which we include the 2016 vintage (Mohaddes and Raissi 2018).
We start illustrating the functionality of the BGVAR package by using the eerData data set from Feldkircher and Huber (2016). It contains 76 quarterly observations for 43 countries over the period from 1995Q1 to 2013Q4. The euro area (EA) is included as a regional aggregate.
We can load the data by typing
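data("eerData")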
This loads two objects: eerData, which is a list object of length \(N\) (i.e., the number of countries), and W.trade0012, which is an \(N \times N\) weight matrix.
We can have a look at the names of the countries contained in eerData:
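names(eerData)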
## [1] "EA" "US" "UK" "JP" "CN" "CZ" "HU" "PL" "SI" "SK" "BG" "RO" "EE" "LT" "LV"
## [16] "HR" "AL" "RS" "RU" "UA" "BY" "GE" "AR" "BR" "CL" "MX" "PE" "KR" "PH" "SG"
## [31] "TH" "IN" "ID" "MY" "AU" "NZ" "TR" "CA" "CH" "NO" "SE" "DK" "IS"
and at the names of the variables contained in a particular country by
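colnames(eerData$UK) # the UK serves as an illustrative country choice here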
## [1] "y" "Dp" "rer" "stir" "ltir" "tb"
We can zoom in into each country by accessing the respective slot of the data list:
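head(eerData$US)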
## y Dp rer stir ltir tb poil
## [1,] 4.260580 0.007173874 4.535927 0.0581 0.0748 -0.010907595 2.853950
## [2,] 4.262318 0.007341077 4.483116 0.0602 0.0662 -0.010637081 2.866527
## [3,] 4.271396 0.005394799 4.506013 0.0580 0.0632 -0.007689327 2.799958
## [4,] 4.278025 0.006218849 4.526343 0.0572 0.0589 -0.008163675 2.821479
## [5,] 4.287595 0.007719866 4.543933 0.0536 0.0591 -0.008277170 2.917315
## [6,] 4.301597 0.008467671 4.543933 0.0524 0.0672 -0.009359032 2.977115
Here, we see that the global variable, oil prices (poil), is attached to the US country model. This corresponds to the classical GVAR set-up used, among others, in Pesaran, Schuermann, and Weiner (2004) and Dees et al. (2007). We also see that, in general, each country model \(i\) can contain a different set of variables \(k_i\), in contrast to the requirements of a balanced panel.
The GVAR toolbox relies on one important naming convention, though: it is assumed that neither the country names nor the variable names contain a dot (.). The reason is that the program internally has to collect and separate the data more than once, and in doing so it uses the dot to separate countries / entities from variables. To give a concrete example, the slot in the eerData list referring to the USA should not be labelled U.S.A., nor should any of the variable names contain a dot.
The toolbox also allows the user to submit the data as a \(T \times k\) data matrix, with \(k=\sum^N_{i=1} k_i\) denoting the sum of endogenous variables in the system. We can switch from the list representation of the data to the matrix representation using the function list_to_matrix (and back using matrix_to_list).
To convert the eerData list we can type:
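A minimal sketch (the object name bigX is illustrative):

bigX<-list_to_matrix(eerData)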
For users who want to submit data in matrix form, the above-mentioned naming convention implies that the column names of the data matrix have to include the name of the country / entity and the variable name, separated by a dot. For example, for the converted eerData data set, the column names look like:
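colnames(bigX)[1:10] # bigX from the conversion above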
## [1] "EA.y" "EA.Dp" "EA.rer" "EA.stir" "EA.ltir" "EA.tb" "US.y"
## [8] "US.Dp" "US.rer" "US.stir"
In both cases, submitting the data as a list or as a big matrix, the underlying data can be either of matrix class or of time series classes such as ts or xts.
Finally, we look at the second important ingredient for building our GVAR model, the weight matrix. Here, we use annual bilateral trade flows (including services), averaged over the period from 2000 to 2012. The \(ij^{th}\) element of \(W\) contains trade flows from unit \(i\) to unit \(j\). These weights can also be made symmetric by calculating \(\frac{(W_{ij}+W_{ji})}{2}\). Using trade weights to establish the links in the GVAR goes back to the early GVAR literature (Pesaran, Schuermann, and Weiner 2004) and is still used in the bulk of GVAR studies. Other weights, such as financial flows, have been proposed in Eickmeier and Ng (2015) and examined in Feldkircher and Huber (2016). Another approach is to use estimated weights, as in Feldkircher and Siklos (2019). The weight matrix should have rownames and colnames that correspond to the \(N\) country names contained in Data.
## EA US UK JP CN CZ
## EA 0.0000000 0.13815804 0.16278169 0.03984424 0.09084817 0.037423312
## US 0.1666726 0.00000000 0.04093296 0.08397042 0.14211997 0.001438531
## UK 0.5347287 0.11965816 0.00000000 0.02628600 0.04940218 0.008349458
## JP 0.1218515 0.21683444 0.02288576 0.00000000 0.22708532 0.001999762
## CN 0.1747925 0.19596384 0.02497009 0.15965721 0.00000000 0.003323641
## CZ 0.5839067 0.02012227 0.03978617 0.01174212 0.03192080 0.000000000
## HU PL SI SK BG RO
## EA 0.026315925 0.046355019 0.0088805499 0.0140525286 0.0054915888 0.0147268739
## US 0.001683935 0.001895003 0.0003061785 0.0005622383 0.0002748710 0.0007034870
## UK 0.006157917 0.012682611 0.0009454295 0.0026078946 0.0008369228 0.0031639564
## JP 0.002364775 0.001761420 0.0001650431 0.0004893263 0.0001181310 0.0004293428
## CN 0.003763771 0.004878752 0.0005769658 0.0015252866 0.0006429077 0.0019212312
## CZ 0.025980933 0.062535144 0.0058429207 0.0782762640 0.0027079942 0.0080760690
## EE LT LV HR AL
## EA 0.0027974288 3.361644e-03 1.857555e-03 0.0044360005 9.328127e-04
## US 0.0002678272 4.630261e-04 2.407372e-04 0.0002257508 2.057213e-05
## UK 0.0009922865 1.267497e-03 1.423142e-03 0.0004528439 3.931010e-05
## JP 0.0002038361 9.363053e-05 9.067431e-05 0.0001131534 4.025852e-06
## CN 0.0004410996 5.033345e-04 4.041432e-04 0.0006822057 1.435258e-04
## CZ 0.0009807047 2.196688e-03 1.107030e-03 0.0027393932 1.403948e-04
## RS RU UA BY GE AR
## EA 2.430815e-03 0.06112681 0.0064099317 1.664224e-03 0.0003655903 0.005088057
## US 7.079076e-05 0.01024815 0.0008939856 2.182909e-04 0.0001843835 0.004216105
## UK 2.730431e-04 0.01457675 0.0010429571 4.974630e-04 0.0001429294 0.001387324
## JP 2.951168e-05 0.01725841 0.0007768546 4.392221e-05 0.0001062724 0.001450359
## CN 1.701277e-04 0.02963668 0.0038451574 5.069157e-04 0.0001685347 0.005817928
## CZ 1.638111e-03 0.03839817 0.0082099438 1.447486e-03 0.0002651330 0.000482138
## BR CL MX PE KR PH
## EA 0.018938315 0.0051900915 0.011455138 0.0019463003 0.018602006 0.0034965601
## US 0.020711342 0.0064996880 0.140665729 0.0037375799 0.033586979 0.0076270807
## UK 0.007030657 0.0016970182 0.003832710 0.0005204850 0.010183755 0.0025279396
## JP 0.010910951 0.0077091104 0.011164908 0.0020779123 0.079512267 0.0190560664
## CN 0.025122526 0.0102797021 0.010960261 0.0040641411 0.103135774 0.0150847984
## CZ 0.001898955 0.0002425703 0.001538938 0.0001152155 0.005248484 0.0008790869
## SG TH IN ID MY AU
## EA 0.012319690 0.007743377 0.016629452 0.0065409485 0.009631702 0.010187442
## US 0.017449474 0.012410910 0.014898397 0.0079866535 0.017364286 0.011578348
## UK 0.011309096 0.006146707 0.013461838 0.0032341466 0.006768142 0.012811822
## JP 0.029885052 0.044438961 0.010319951 0.0369586674 0.035612054 0.047921306
## CN 0.029471018 0.024150334 0.024708981 0.0201353186 0.033336788 0.036066785
## CZ 0.002867306 0.003136170 0.003568422 0.0009949029 0.002695855 0.001291178
## NZ TR CA CH NO SE
## EA 0.0017250647 0.028935117 0.012886035 0.065444998 0.025593188 0.041186900
## US 0.0023769166 0.004969085 0.213004968 0.013343786 0.003975441 0.006793041
## UK 0.0022119334 0.013099627 0.020699895 0.026581778 0.033700469 0.022629934
## JP 0.0048098149 0.002560745 0.020149431 0.010014273 0.002895884 0.004375537
## CN 0.0030625686 0.006578520 0.019121787 0.007657646 0.002804712 0.005853722
## CZ 0.0001703354 0.006490032 0.001808704 0.013363416 0.003843421 0.013673889
## DK IS
## EA 0.025035373 1.163498e-03
## US 0.003135419 2.750740e-04
## UK 0.013518119 1.119179e-03
## JP 0.003207880 2.637944e-04
## CN 0.003990731 7.806304e-05
## CZ 0.007521567 1.490167e-04
The countries in the weight matrix should be in the same order as in the data list:
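all(names(eerData)==colnames(W.trade0012))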
## [1] TRUE
The weight matrix should be row-standardized and the diagonal elements should be zero:
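rowSums(W.trade0012)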
## EA US UK JP CN CZ HU PL SI SK BG RO EE LT LV HR AL RS RU UA BY GE AR BR CL MX
## 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
## PE KR PH SG TH IN ID MY AU NZ TR CA CH NO SE DK IS
## 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
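diag(W.trade0012)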
## EA US UK JP CN CZ HU PL SI SK BG RO EE LT LV HR AL RS RU UA BY GE AR BR CL MX
## 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
## PE KR PH SG TH IN ID MY AU NZ TR CA CH NO SE DK IS
## 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Note that through row-standardizing, the final matrix is typically not symmetric (even when using the symmetric weights as raw input).
In what follows, we restrict the dataset to contain only three countries, EA, US and RU, and adjust the weight matrix accordingly. We do this only for illustrative purposes, to save time and storage in this document:
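A sketch of the subsetting (cN holds the selected country codes and is reused later in this vignette):

cN<-c("EA","US","RU")
eerData<-eerData[cN]                   # restrict the data list
W.trade0012<-W.trade0012[cN,cN]        # restrict the weight matrix
W.trade0012<-apply(W.trade0012,2,function(x)x/rowSums(W.trade0012)) # re-normalize rows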
The main function of the BGVAR package is its bgvar function. The unique feature of this toolbox is that we use Bayesian shrinkage priors, optionally with stochastic volatility, to estimate the country models in the GVAR. In its current version, three priors for the country VARs are implemented: the Minnesota prior (MN, Litterman 1986; Koop and Korobilis 2010), the stochastic search variable selection prior (SSVS, George, Sun, and Ni 2008) and the Normal-Gamma prior (NG, Huber and Feldkircher 2019).
The first two priors are described in more detail in Crespo Cuaresma, Feldkircher, and Huber (2016). For a more technical description of the Normal-Gamma prior see Huber and Feldkircher (2019), and for an application in the GVAR context Feldkircher and Siklos (2019). For the variances, we can assume homoskedasticity or time variation (stochastic volatility). For the latter, the library relies on the stochvol package of Kastner (2016).
We start by estimating our toy model using the NG prior, the reduced eerData data set and the adjusted W.trade0012 weight matrix:
model.1<-bgvar(Data=eerData,
W=W.trade0012,
saves=100,
burns=100,
plag=1,
prior="NG",
hyperpara=NULL,
SV=TRUE,
thin=1,
trend=TRUE,
h=0,
save.country.store=FALSE,
multithread=FALSE,
eigen=1.05
)
The default prior specification in bgvar is the NG prior with stochastic volatility and one lag for both the endogenous and the weakly exogenous variables (plag=1). In general, due to its high cross-sectional dimension, the GVAR can capture very complex univariate dynamics, and it might thus not be necessary to increase the lag length as much as in a standard VAR (Burriel and Galesi 2018).
The setting hyperpara=NULL implies that we use the standard hyperparameter specification for the NG prior; see the help files for more details. The slots saves and burns denote the posterior draws and the burn-in draws (i.e., the draws that are discarded). To ensure that the MCMC estimation has converged, a high number of burn-ins is recommended (say, 15,000 to 30,000). Saving the full set of posterior draws can eat up a lot of storage. To reduce this, we can use a thinning interval which stores only every thin\(^{th}\) draw of the global posterior output. For example, with thin=10 and saves=5000 posterior draws, the number of MCMC draws stored is 500. trend=TRUE implies that the model is estimated with a trend. Note that regardless of the trend specification, each equation always automatically includes an intercept term. To speed up computation, it is possible to set multithread=TRUE, which invokes parallel computing using the doParallel package. The package then attempts to detect the number of available CPU cores to use for parallel computing; ideally, this number equals the number of entities \(N\). In addition to storing the global posterior, we could also be interested in inspecting the output of the \(N\) country models in more detail. To do so, we can set save.country.store=TRUE, which saves the whole posterior distribution of the single country models. Due to storage reasons, the default is set to FALSE, in which case only the posterior medians of the single country models are reported. Note that even with save.country.store=FALSE, the whole posterior distribution of the global model is reported.
We have estimated the above model with stochastic volatility (SV=TRUE). There are several reasons why one may want to let the residual variances change over time. First and foremost, the time periods used in macroeconometrics are nowadays rather volatile and include severe recessions. Hence, accounting for time variation might improve the fit of the model (Primiceri 2005; Sims and Zha 2006; Dovern, Feldkircher, and Huber 2016; Huber 2016). Second, the specification implemented in the toolbox nests the homoskedastic case. It is thus a good choice to start with the more general case when first confronting the model with the data. For structural analysis such as the calculation of impulse responses, we take the variance-covariance matrix with the median volatilities (over the sample period) on its diagonal. If we want to look at the volatilities of the first equation (y) in the euro area country model, we can type:
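A sketch of how to access them (the sig slot name within cc.results is an assumption, analogous to the PIP slots used below):

model.1$cc.results$sig$EA[,"EA.y","EA.y"] # slot name sig is an assumption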
To discard explosive draws, we can compute the eigenvalues of the reduced form of the global model, written in its companion form. Unfortunately, this can only be done once the single models have been estimated and stacked together (and hence cannot be built directly into the MCMC algorithm for the country models). To discard draws that lead to eigenvalues greater than 1.05, set eigen=1.05. We can look at the 10 largest eigenvalues by typing:
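For instance (assuming the stacked global model stores its eigenvalues in a slot named F.eigen; the slot name is an assumption):

model.1$stacked.results$F.eigen[1:10] # slot name F.eigen is an assumption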
## [1] 1.0082184 0.9637162 1.0415077 1.0226959 1.0122765 1.0189337 1.0249688
## [8] 1.0476392 1.0022260 1.0105412
Last, we have used the default option h=0, which implies that we use the full sample period to estimate the GVAR. For the purpose of forecast evaluation, h can be set to a positive number, in which case the last h observations are reserved as a hold-out sample and not used to estimate the model.
Having estimated the model, we can summarize the outcome in various ways. First, we can use the print method:
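print(model.1)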
## ---------------------------------------------------------------------------------------
## Model Info:
## Prior: NG
## Nr. of lags: 1
## Nr. of posterior draws: 100/1=100
## Size of GVAR object: 1.1 Mb
## Trimming leads to 90 (90%) stable draws out of 100 total draws
## ---------------------------------------------------------------------------------------
## Model specification:
## $EA
## [1] "y, Dp, rer, stir, ltir, tb, y*, Dp*, rer*, stir*, ltir*, tb*, poil**, trend"
##
## $US
## [1] "y, Dp, rer, stir, ltir, tb, poil, y*, Dp*, rer*, stir*, ltir*, tb*, trend"
##
## $RU
## [1] "y, Dp, rer, stir, tb, y*, Dp*, rer*, stir*, ltir*, tb*, poil**, trend"
This just prints the submitted arguments of the bgvar object along with the model specification for each unit. Single asterisks indicate weakly exogenous variables, double asterisks exogenous variables, and variables without asterisks are the endogenous variables of each unit.
The summary method is a more enhanced way to analyze the output. It computes descriptive statistics such as convergence properties of the MCMC chain, serial autocorrelation in the errors and the average pairwise cross-unit correlation of the residuals:
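summary(model.1)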
## ---------------------------------------------------------------------------------------
## Model Info:
## Prior: NG
## Nr. of lags: 1
## Nr. of posterior draws: 100/1=100
## Number of stable posterior draws: 90
## Number of countries: 3
## ---------------------------------------------------------------------------------------
## Convergence diagnostics
## Geweke statistic: 137 out of 1620 variables' z-values exceed the 1.96 threshold (8.46%)
## ---------------------------------------------------------------------------------------
## Global Likelihood: 4420.61
## F-test, first order serial autocorrelation of cross-country residuals
## Summary statistics:
## # p-values in %
## >0.1 "7" "38.89%"
## 0.05-0.1 "1" "5.56%"
## 0.01-0.05 "3" "16.67%"
## <0.01 "7" "38.89%"
## --------------------------------------------------------------------------------------
## Average pairwise cross-country correlation of country model residuals
## Summary statistics:
## y Dp rer stir ltir tb
## <0.1 "0 (0%)" "3 (100%)" "3 (100%)" "2 (66.67%)" "2 (100%)" "3 (100%)"
## 0.1-0.2 "0 (0%)" "0 (0%)" "0 (0%)" "1 (33.33%)" "0 (0%)" "0 (0%)"
## 0.2-0.5 "3 (100%)" "0 (0%)" "0 (0%)" "0 (0%)" "0 (0%)" "0 (0%)"
## >0.5 "0 (0%)" "0 (0%)" "0 (0%)" "0 (0%)" "0 (0%)" "0 (0%)"
## --------------------------------------------------------------------------------------
We can now have a closer look at the information provided by summary. The header contains some basic information about the prior used to estimate the model, the number of lags, posterior draws and countries. The next line shows Geweke's CD statistic, which is calculated using the coda package. Geweke's CD assesses practical convergence of the MCMC algorithm. In a nutshell, the diagnostic is based on a test for equality of the means of the first and last part of a Markov chain (by default we use the first 10% and the last 50%). If the samples are drawn from the stationary distribution of the chain, the two means are equal, and Geweke's statistic has an asymptotically standard normal distribution.
The test statistic is a standard Z-score: the difference between the two sample means divided by its estimated standard error. The standard error is estimated from the spectral density at zero and thus takes any autocorrelation into account. The statistic shows that only a small fraction of all coefficients did not converge. Increasing the number of burn-ins can help to decrease this number further. The statistic can also be calculated by typing conv.diag(model.1).
The next model statistic is the likelihood of the global model. This statistic can be used for model comparison. Next, to assess whether first-order serial autocorrelation is present in the residuals, we provide the results of a simple F-test. The table shows the share of p-values that fall into different significance categories. Since the null hypothesis is that of no serial correlation, we would like to see as many large (\(>0.1\)) p-values as possible. The statistics show that already with one lag, serial correlation is modest in most equations' residuals. This could be because we have estimated the unit models with stochastic volatility. To further decrease serial correlation in the errors, one could increase the number of lags via plag.
The last part of the summary output contains statistics on the cross-unit correlation of the (posterior median) residuals. One assumption of the GVAR framework is that of negligible cross-unit correlation of the residuals. Significant correlations prohibit structural and spillover analysis (Dees et al. 2007). In this example, correlation is reasonably small.
Other useful methods the bgvar toolbox offers include the coef method (with coefficients as its alias) to extract the \(k \times k \times plag\) matrix of reduced-form coefficients of the global model. Via the vcov command, we can access the global variance-covariance matrix, and the logLik function allows us to gather the global log-likelihood (as provided by the summary command).
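For example (the object names Fmat and Smat are illustrative):

Fmat <- coef(model.1)  # global reduced-form coefficient matrix
Smat <- vcov(model.1)  # global variance-covariance matrix
logLik(model.1)        # global log-likelihood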
Last, we can have a quick look at the in-sample fit, using either the posterior median of the country models' residuals (global=FALSE) or those of the global solution of the GVAR (global=TRUE). The in-sample fit can also be extracted using fitted(). Here, we show the in-sample fit of the euro area model (global=FALSE).
We can estimate the model with two further priors on the unit models, the SSVS prior and the Minnesota prior. To give a concrete example, the SSVS prior can be invoked by typing:
model.ssvs.1<-bgvar(Data=eerData,
W=W.trade0012,
saves=100,
burns=100,
plag=1,
prior="SSVS",
hyperpara=NULL,
SV=TRUE,
thin=1,
trend=TRUE,
h=0,
save.country.store=FALSE,
multithread=FALSE,
eigen=1.05
)
One feature of the SSVS prior is that it allows us to look at the posterior inclusion probabilities (PIPs) to gauge the importance of particular variables. For example, we can have a look at the PIPs of the euro area model by typing:
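For example (assuming the country-level PIPs sit in a PIP.cc slot, analogous to the PIP.avg slot used below):

model.ssvs.1$cc.results$PIP$PIP.cc$EA # slot name PIP.cc is an assumption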
## y Dp rer stir ltir tb
## y_lag1 1.00 0.17 0.28 0.06 0.09 0.69
## Dp_lag1 0.06 0.24 0.08 0.27 0.51 0.10
## rer_lag1 0.21 0.49 1.00 1.00 0.16 0.50
## stir_lag1 0.15 0.22 0.34 1.00 0.47 0.94
## ltir_lag1 0.13 1.00 0.34 0.42 1.00 0.96
## tb_lag1 0.62 0.30 0.85 0.08 0.36 1.00
## y* 1.00 0.17 0.18 0.09 0.08 0.12
## Dp* 0.15 0.31 0.17 0.20 0.16 0.18
## rer* 0.34 0.38 1.00 0.20 0.06 0.46
## stir* 0.08 0.92 0.13 0.36 0.09 0.13
## ltir* 0.48 0.16 0.19 0.96 0.04 0.24
## tb* 0.14 1.00 0.31 0.42 0.15 0.10
## poil** 0.40 1.00 0.24 0.09 0.30 0.28
## y*_lag1 0.61 0.05 0.02 0.11 0.12 0.49
## Dp*_lag1 0.54 0.23 0.41 0.15 0.71 0.15
## rer*_lag1 0.21 0.16 0.92 0.05 0.28 0.42
## stir*_lag1 0.20 0.26 0.11 0.53 0.19 0.04
## ltir*_lag1 1.00 0.76 0.35 1.00 0.17 0.28
## tb*_lag1 0.35 1.00 1.00 0.51 0.10 0.86
## poil**_lag1 0.85 0.53 0.13 0.10 0.07 0.27
## cons 0.40 0.27 1.00 0.06 0.10 1.00
## trend 0.30 0.91 0.04 0.03 0.08 0.97
This prints the equations of the EA country model in the columns and the included variables in the rows. The example shows that, besides other variables, the trade balance (tb) is an important determinant of the real exchange rate (rer).
We can also have a look at the average of the PIPs across all units:
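model.ssvs.1$cc.results$PIP$PIP.avg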
## y Dp rer stir ltir tb poil
## y_lag1 1.0000000 0.19666667 0.28666667 0.35666667 0.075 0.6400000 0.04
## Dp_lag1 0.2833333 0.69000000 0.11000000 0.17000000 0.365 0.3100000 0.93
## rer_lag1 0.1733333 0.31333333 1.00000000 0.52333333 0.295 0.7566667 0.17
## stir_lag1 0.7100000 0.16000000 0.20333333 1.00000000 0.705 0.5600000 0.13
## ltir_lag1 0.2800000 0.54000000 0.41000000 0.35000000 1.000 0.6450000 0.15
## tb_lag1 0.2766667 0.29666667 0.39000000 0.25333333 0.255 1.0000000 0.09
## y* 0.4933333 0.13000000 0.42000000 0.17333333 0.100 0.4033333 0.13
## Dp* 0.2600000 0.50000000 0.40000000 0.09333333 0.325 0.3433333 1.00
## rer* 0.2333333 0.32666667 0.79000000 0.21333333 0.335 0.3466667 0.25
## stir* 0.3366667 0.39333333 0.14333333 0.23333333 0.235 0.3600000 0.43
## ltir* 0.2366667 0.24333333 0.10666667 0.43666667 0.170 0.1800000 0.12
## tb* 0.2466667 0.47666667 0.26333333 0.34333333 0.210 0.2400000 0.14
## poil** 0.7000000 0.55000000 0.20000000 0.28000000 0.300 0.1900000 NaN
## y*_lag1 0.2966667 0.05666667 0.41666667 0.34333333 0.115 0.3033333 0.03
## Dp*_lag1 0.2666667 0.23333333 0.34333333 0.10333333 0.495 0.4100000 0.14
## rer*_lag1 0.6266667 0.19000000 0.62333333 0.06666667 0.475 0.2900000 0.56
## stir*_lag1 0.5033333 0.20666667 0.07333333 0.49333333 0.330 0.2733333 0.31
## ltir*_lag1 0.4300000 0.39666667 0.29333333 0.40666667 0.145 0.2500000 0.20
## tb*_lag1 0.3200000 0.46333333 0.53000000 0.30666667 0.225 0.4566667 0.13
## poil**_lag1 0.4500000 0.35000000 0.25500000 0.19500000 0.070 0.1750000 NaN
## cons 0.3066667 0.18000000 0.64000000 0.18666667 0.130 0.7833333 0.05
## trend 0.3133333 0.42333333 0.36666667 0.51000000 0.105 0.4800000 0.95
## poil_lag1 0.3500000 0.26000000 0.09000000 1.00000000 0.140 0.3500000 1.00
This shows that the same determinants of the real exchange rate also appear as important regressors in other country models.
In this section, we explore different specifications of the structure of the GVAR model. Other specification choices, which relate more to the time series properties of the data, such as specifying different lags and priors, are left for the reader to explore. We will use the SSVS prior and judge the different specifications by examining the posterior inclusion probabilities.
As a first modification, we could use different weights for different variable classes, as proposed in Eickmeier and Ng (2015). For example, we could use financial weights to construct the weakly exogenous counterparts of financial variables and trade weights for those of real variables.
The eerData set provides us with a list of different weight matrices, W.list, that are described in the help files. Now we specify the sets of variables to be weighted:
eerData2<-eerData
variable.list<-list()
variable.list$real<-c("y","Dp","tb")
variable.list$fin<-c("stir","ltir","rer")
We can then re-estimate the model:
# weights for first variable set tradeW.0012, for second finW0711
model.ssvs.2<-bgvar(Data=eerData2,
W=W.list[c("tradeW.0012","finW0711")],
plag=1,
saves=100,
burns=100,
prior="SSVS",
SV=TRUE,
thin=1,
hyperpara=NULL,
eigen=TRUE,
variable.list=variable.list,
OE.weights=NULL,
Wex.restr=NULL,
trend=TRUE,
save.country.store=FALSE
)
Another specification would be to include a foreign variable only when its domestic counterpart is missing. For example, when working with nominal bilateral exchange rates, we probably do not want to also include their weighted average (which would correspond to something like an effective exchange rate). Using the previous model, we could place such a restriction on long-term interest rates, since those are not contained in all country models:
# includes ltir* only when ltir is missing domestically
model.ssvs.3<-bgvar(Data=eerData,
W=W.trade0012,
plag=1,
saves=100,
burns=100,
prior="SSVS",
SV=TRUE,
thin=1,
hyperpara=NULL,
eigen=TRUE,
variable.list=NULL,
OE.weights=NULL,
Wex.restr="ltir",
trend=TRUE,
save.country.store=FALSE
)
## ---------------------------------------------------------------------------------------
## Model Info:
## Prior: SSVS
## Nr. of lags: 1
## Nr. of posterior draws: 100/1=100
## Size of GVAR object: 1 Mb
## Trimming leads to 80 (80%) stable draws out of 100 total draws
## ---------------------------------------------------------------------------------------
## Model specification:
## $EA
## [1] "y, Dp, rer, stir, ltir, tb, y*, Dp*, rer*, stir*, tb*, poil**, trend"
##
## $US
## [1] "y, Dp, rer, stir, ltir, tb, poil, y*, Dp*, rer*, stir*, tb*, trend"
##
## $RU
## [1] "y, Dp, rer, stir, tb, y*, Dp*, rer*, stir*, ltir*, tb*, poil**, trend"
Last, we could also use a different specification of oil prices in the model. Currently, the oil price is determined endogenously within the US model. Alternatively, one could set up a stand-alone oil price model with additional variables that feeds the oil price back into the other economies as an exogenous variable (Mohaddes and Raissi 2019).
The model structure would then look like in the figure below:
For that purpose, we have to remove oil prices from the US model and attach them to a separate slot in the data list. This slot has to have its own country label. We use "OC" for "oil country".
eerData2$OC<-eerData$US[,c("poil"),drop=FALSE] # move oil prices into own slot
eerData2$US<-eerData$US[,c("y","Dp", "rer" , "stir", "ltir","tb")] # exclude it from the US model
Now we have to specify a list object that we label OC.weights. The list has to consist of three slots with the following names: weights, variables and exo:
OC.weights<-list()
OC.weights$weights<-rep(1/3, 3)
names(OC.weights$weights)<-names(eerData2)[1:3] # last one is OC model, hence only until 3
OC.weights$variables<-c(colnames(eerData2$OC),"y") # first entry, endog. variables, second entry weighted average of y from the other countries to proxy demand
OC.weights$exo<-"poil"
The first slot, weights, should be a vector of weights that sum up to unity. In the example above, we simply use \(1/N\); other weights could include purchasing power parities (PPP). The weights are used to aggregate specific variables, which in turn enter the oil model as weakly exogenous. The second slot, variables, should specify the names of the endogenous and weakly exogenous variables that are used in the model. In the oil price example, we include the oil price (poil) as an endogenous variable (not contained in any other country model); a weighted average of output (y), using weights, serves as a proxy for world demand and is included as a weakly exogenous variable. Via exo, we specify which of the endogenous variables of the oil price model are fed back into the other country models. In this example, we specify poil. Last, we include OC.weights in a further list called OE.weights (other entity weights):
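OE.weights <- list(OC=OC.weights)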
where the list entry has to have the same name as the oil country model, in our example OC. Note that specifying OE.weights in principle allows the user to include further "extra" country models by simply adding additional list slots.
Now we can re-estimate the model by specifying OE.weights, a list whose entries are labelled exactly like the additional country models. The structure is designed to also allow for more than one other entity (henceforth labelled OE).
model.ssvs.4<-bgvar(Data=eerData2,
W=W.trade0012,
plag=1,
saves=100,
burns=100,
prior="SSVS",
SV=TRUE,
thin=1,
hyperpara=NULL,
variable.list=NULL,
OE.weights=OE.weights,
trend=TRUE,
save.country.store=FALSE
)
and can compare the results of the four models by, for example, looking at the average PIPs:
aux1<-model.ssvs.1$cc.results$PIP$PIP.avg;aux1<-aux1[-nrow(aux1),1:6]
aux2<-model.ssvs.2$cc.results$PIP$PIP.avg;aux2<-aux2[-nrow(aux2),1:6]
aux3<-model.ssvs.3$cc.results$PIP$PIP.avg;aux3<-aux3[-nrow(aux3),1:6]
aux4<-model.ssvs.4$cc.results$PIP$PIP.avg;aux4<-aux4[-nrow(aux4),1:6]
heatmap(aux1,Rowv=NA,Colv=NA, main="Model 1")
heatmap(aux2,Rowv=NA,Colv=NA, main="Model 2")
heatmap(aux3,Rowv=NA,Colv=NA, main="Model 3")
heatmap(aux4,Rowv=NA,Colv=NA, main="Model 4")
We could also compare the models based on their fit, the likelihood, information criteria such as the DIC, residual properties or their forecasting performance.
Structural analysis can be performed by invoking the IRF function. This function computes dynamic responses in three alternative ways: generalized impulse response functions (GIRFs) as in Pesaran and Shin (1998), orthogonalized impulse response functions using a Cholesky decomposition, and impulse response functions given a set of user-specified sign restrictions.
Let us start by looking at a monetary policy shock in the US country model. We have to set up a list shocks that contains information about which variable we want to shock (shocks$var), in which country (shocks$cN), and whether we want a recursive identification scheme or GIRFs (shocks$ident). We can also scale the shock using shocks$scal. If the argument multithread is set to TRUE, computation is sped up by using multiple CPU cores. The package attempts to detect the number of cores on the computer, which it then uses for parallel computing. The default is FALSE so as not to slow down the computer.
# US monetary policy shock
shocks<-list();shocks$var="stir";shocks$cN<-"US";shocks$ident="chol";shocks$scal=-100
irf.chol.us.mp<-IRF(obj=model.ssvs.1,shock=shocks,nhor=24,save.store=TRUE)
The results are stored in irf.chol.us.mp:
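names(irf.chol.us.mp)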
## [1] "posterior" "rot.nr" "shock" "sign.constr" "ident"
## [6] "struc.obj" "model.obj" "IRF_store"
irf.chol.us.mp$posterior is a \(K \times nhor \times nr.of.shocks \times 7\) object. The last dimension contains the 50%, 68% and 95% credible intervals along with the posterior median. If save.store=TRUE, IRF_store contains the full set of impulse response draws.
We can plot the complete responses of a particular country by typing:
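For example, for all variables in the US model (assuming resp also accepts a country code):

plot(irf.chol.us.mp, resp="US") # resp with a country code is an assumption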
The plot shows the posterior median response (solid black line) along with the 50% (dark grey) and 68% (light grey) credible intervals.
We can also compare the Cholesky responses with GIRFs. For that purpose, let us look at a GDP shock.
# Recursive US GDP
shocks<-list();shocks$var="y";shocks$cN<-"US";shocks$ident="chol";shocks$scal=-1
irf.chol.us.y<-IRF(obj=model.ssvs.1,shock=shocks,nhor=24)
# GIRF US GDP
shocks<-list();shocks$var="y";shocks$cN<-"US";shocks$ident="girf";shocks$scal=-1
irf.girf.us.y<-IRF(obj=model.ssvs.1,shock=shocks,nhor=24)
plot(irf.chol.us.y,resp="US.y")
plot(irf.girf.us.y,resp="US.y")
plot(irf.chol.us.y,resp="US.rer")
plot(irf.girf.us.y,resp="US.rer")
We see that the responses are similar. This is not surprising, since we have shocked the first variable in the US country model (y) and there are no timing restrictions on the remaining variables (they are all affected without any lag). In that case, the orthogonalized impulse responses and the GIRFs coincide.
In this section, we identify the shocks using sign restrictions. For that purpose, we need to load another example data set and estimate a GVAR. This data set contains one-year-ahead GDP, inflation and short-term interest rate forecasts for the USA, which we are going to need when imposing rationality conditions. The forecasts come from the Survey of Professional Forecasters (SPF) database.
data("eerDataspf")
eerDataspf<-eerDataspf[cN]
W.trade0012.spf<-W.trade0012.spf[cN,cN]
W.trade0012.spf<-apply(W.trade0012.spf,2,function(x)x/rowSums(W.trade0012.spf))
model.ssvs.eer<-bgvar(Data=eerDataspf,
W=W.trade0012.spf,
plag=1,
saves=100,
burns=100,
prior="SSVS",
SV=TRUE,
thin=1
)
For now, we start with the identification of two standard shocks in economics in the US model, namely an aggregate demand and an aggregate supply shock. We specify the sign restrictions by setting up a list that has to contain the information on which variable to shock, on which IRFs to put the sign restrictions, the restrictions themselves, and the horizon for which these restrictions should hold. Increasing the number of restrictions (on the variables or the horizon) leads to more precise inference; however, finding a suitable rotation matrix becomes substantially harder.
sign.constr.eer<-list()
sign.constr.eer$shock1$shock<-"US.y" # Positive AD Shock, gdp goes up,
sign.constr.eer$shock1$restrictions$res1<-"US.Dp" #inflation up and interest rates as well
sign.constr.eer$shock1$sign<-c(">",">")
sign.constr.eer$shock1$rest.horz<-c(1,1)
sign.constr.eer$shock1$constr<-c(1,1)# no cross-country restrictions, set constr. to 1
sign.constr.eer$shock1$scal<-1 #+1% increase
sign.constr.eer$shock2$shock<-"US.Dp" # Negative AS shock, inflation up
sign.constr.eer$shock2$restrictions$res1<-"US.y"
sign.constr.eer$shock2$sign<-c(">","<")
sign.constr.eer$shock2$rest.horz<-c(1,1)
sign.constr.eer$shock2$constr<-c(1,1) # no cross-country restrictions, set constr. to 1
sign.constr.eer$shock2$scal<-1 #+1% increase
names(sign.constr.eer)<-c("AD","AS")
sign.constr.eer$MaxTries<-10000 # maximum number of rotations per draw
We then invoke the IRF command to compute the impulse responses. The function draws rotation matrices using the algorithm of Rubio-Ramírez, Waggoner, and Zha (2010). In case we specify additional zero restrictions (see the next example below), we use the algorithm of Arias, Rubio-Ramírez, and Waggoner (2018). By default, we use one CPU core (multithread=FALSE) and do not store the full set of responses (save.store=FALSE). The maximum number of rotation matrices sampled per MCMC draw before jumping to the next draw can be specified via MaxTries.
We can infer the number of successful rotation matrices by looking at
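irf.sign$rot.nr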
## [1] "For 100 draws out of 100 draws, a rotation matrix has been found."
plot(irf.sign,resp="US.y",shock.nr=1)
plot(irf.sign,resp="US.y",shock.nr=2)
plot(irf.sign,resp="US.Dp",shock.nr=1)
plot(irf.sign,resp="US.Dp",shock.nr=2)
plot(irf.sign,resp="US.rer",shock.nr=1)
plot(irf.sign,resp="US.rer",shock.nr=2)
Several recent papers advocate the inclusion of survey data in a VAR. Castelnuovo and Surico (2010) show that including inflation expectations mitigates the price puzzle (i.e., the counterintuitive positive movement of inflation in response to a monetary tightening). D'Amico and King (2015) go one step further and argue that expectations should always be included in a VAR model, since they contain information that is not contained in standard macroeconomic data. They also show how to make inference with survey data in a VAR framework and propose so-called rationality conditions. In a nutshell, these conditions put restrictions on actual data to match the expectations either on average over (ratio.avg) or at the end of (ratio.H) the forecast horizon. Let us look at a concrete example.
sign.constr<-list()
# shock to 4-step ahead expectation of US.stir
sign.constr$shock1$shock<-"US.stir_t+4"
sign.constr$shock1$restrictions$res1<-"US.Dp_t+4"
# zero restriction on US.stir
sign.constr$shock1$restrictions$res2<-"US.stir"
sign.constr$shock1$restrictions$res3<-"US.y_t+4"
# rationality condition: US.stir_t+4 on impact is equal to average of IRF of
# US.stir between horizon 1 and 4 (defined with rest.horz, but as period 5!)
sign.constr$shock1$restrictions$res4<-"US.stir_t+4"
# rationality condition: US.Dp_t+4 on impact is equal to H-step ahead IRF
# of US.Dp in horizon 4 (defined with rest.horz, but as period 5!)
sign.constr$shock1$restrictions$res5<-"US.Dp_t+4"
# rationality condition: US.y_t+4 on impact is equal to H-step ahead IRF
# of US.y in horizon 4 (defined with rest.horz, but as period 5!)
sign.constr$shock1$restrictions$res6<-"US.y_t+4"
sign.constr$shock1$sign<-c(">","<","0","<","ratio.avg","ratio.H","ratio.H")
sign.constr$shock1$rest.horz<-c(1,1,1,1,5,5,5)
sign.constr$shock1$constr<-c(1,1,1,1,1,1,1)
sign.constr$shock1$scal=1
sign.constr$MaxTries<-200
irf.sign.zero<-IRF(obj=model.ssvs.eer,nhor=20,sign.constr=sign.constr,save.store=TRUE)
The figure below shows the results for short-term interest rates (stir) and output (y).
# rationality condition: US.stir_t+4 on impact is equal to average of IRF of
# US.stir between horizon 2 and 5
matplot(cbind(irf.sign.zero$posterior["US.stir_t+4",,1,"median"],
irf.sign.zero$posterior["US.stir",,1,"median"]),
type="l",ylab="",main="stir",lwd=2)
legend("topright",lty=c(1,2),c("expected","actual"),lwd=2,bty="n",col=c("black","red"))
abline(h=mean(irf.sign.zero$posterior["US.stir",2:5,1,"median"]),lwd=2)
abline(v=c(2,5),lty=3,col="grey",lwd=2)
# rationality condition: US.y_t+4 on impact is equal to H-step ahead IRF
# of US.y in horizon 5
matplot(cbind(irf.sign.zero$posterior["US.y_t+4",,1,"median"],
irf.sign.zero$posterior["US.y",,1,"median"]),
type="l",ylab="",main="y",lwd=2)
legend("topright",lty=c(1,2),c("expected","actual"),lwd=2,bty="n",col=c("black","red"))
abline(h=irf.sign.zero$posterior["US.y_t+4",1,1,"median"],lwd=2)
abline(v=5,lty=3,col="grey",lwd=2)
Impulse responses that refer to observed data are in red (dashed), and the ones referring to expected data are in black. The condition we have imposed on short-term interest rates (top panel) was that observed rates should equal the shock to expected rates on average over the forecast horizon (one year, i.e., on impact plus 4 quarters). The respective period is marked by the two vertical grey lines. On output, shown in the bottom panel, by contrast, we have imposed a condition that has to hold exactly at the forecast horizon. The red line, the impulse response of observed output, has to meet the impact response of expected output at \(h=5\).
The last example we look at shows how to put restrictions on the cross-section. Chudik and Fidora (2011) and Cashin et al. (2014) argue that a major advantage of GVARs is that they allow restrictions to be put on variables from different countries, which should further sharpen inference. They apply cross-sectional restrictions to identify oil supply and demand shocks, with the restrictions placed on oil-importing countries' GDP.
Here, we follow Feldkircher, Gruber, and Huber (2020), who use cross-sectional restrictions to identify a term spread shock in the euro area. Since they use separate country models for the members of the euro area, the joint monetary policy has to be modeled explicitly. One idea that has been put forth in recent applications is to set up an additional country model for the joint monetary policy in the euro area. In the next example, we follow Georgiadis (2015) and set up an ECB model that determines euro area interest rates according to a Taylor rule. This idea follows the set-up of the additional oil price model and is summarized graphically in the picture below.
We can look at the data by typing:
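data("monthlyData")
names(monthlyData)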
## [1] "BG" "CZ" "DK" "HR" "HU" "PL" "RO" "SE" "GB" "AT" "BE" "DE" "EE" "GR" "ES"
## [16] "FI" "FR" "IE" "IT" "LT" "LV" "NL" "PT" "SI" "SK" "US" "CN" "JP" "RU" "TR"
## [31] "CA" "EB"
# list of weights of other entities with same name as additional country model
OE.weights = list(EB=EA.weights)
EA_countries <- c("AT", "BE", "DE","ES", "FI","FR")
# "IE", "IT", "NL", "PT","GR","SK","MT","CY","EE","LT","LV")
To estimate the GVAR with an "EB" country model, we have to specify additional arguments, similar to the example with the oil price model discussed above. The monthlyData set already comes with a pre-specified list EA.weights with the mandatory slots weights, variables and exo. The specification implies that the euro area monetary policy model (EB) includes EAstir, total.assets, M3 and ciss as endogenous variables (these are contained in monthlyData$EB). We use the PPP weights contained in weights to aggregate output (y) and prices (p) from the euro area countries and include them as weakly exogenous variables. Euro area short-term interest rates (EAstir) and the ciss indicator (ciss), specified in exo, are then passed on as exogenous variables to the remaining countries. Finally, we put EA.weights into the OE.weights list, label the slot EB (the name of the additional country model, see names(monthlyData)), and estimate the model:
monthlyData <- monthlyData[c(EA_countries,"EB")]
W<-W[EA_countries,EA_countries]
W<-apply(W,2,function(x)x/rowSums(W))
OE.weights$EB$weights <- OE.weights$EB$weights[names(OE.weights$EB$weights)%in%EA_countries]
# estimates the model
model.ssvs<-bgvar(Data=monthlyData,
W=W,
saves=100,
burns=100,
plag=1,
prior="SSVS",
thin=1,
eigen=1.05,
OE.weights=OE.weights)
##
## Start estimation of Bayesian Global Vector Autoregression.
##
## Prior: Stochastic Search Variable Selection prior.
## Lag order: 1
## Stochastic volatility: enabled.
## Thinning factor: 1. This means every draw is saved.
## Hyperparameter setup:
## No hyperparameters are chosen, default setting applied.
##
## Estimation of country models starts... took 0 mins 6 seconds.
## Start stacking:
##
## Stacking finished.
##
## Needed time for estimation of bgvar: 0 mins 6 seconds.
We can now impose a global shock and sign restrictions on the cross section by setting up a list object:
# imposes sign restrictions on the cross-section and for a global shock
# (long-term interest rates)
sign.constr<-list()
# the variable to shock, can be imposed for more than one country
# but should be the same variable for all of them
sign.constr$shock1$shock = c(paste0(EA_countries[-c(3,12)],".ltir"))
# restrictions (industrial production should decrease for selected countries)
sign.constr$shock1$restrictions$res1<-paste0(EA_countries,".y")
# another set of restrictions (inflation should decrease for selected countries)
sign.constr$shock1$restrictions$res2<-paste0(EA_countries,".p")
# first entry is for the shock, following entries for the restrictions
# (ltir should go up, y and p go down)
sign.constr$shock1$sign<-c(">","<","<")
# nr. of time periods restrictions are imposed, first entry is for the shock,
# following entries for the restrictions
sign.constr$shock1$rest.horz<-c(1,1,1)
# are constraints binding for all (1) countries specified or for at least 50% of
# the countries (0.5), or 75% (0.75)
sign.constr$shock1$constr<-c(1,0.5,0.5)
# a minus 100 bp shock to long-term interest rates (on average)
sign.constr$shock1$scal=-100
# rotation specification
sign.constr$MaxTries<-200
The slot sign.constr$shock1$shock contains the countries and variables to shock, and sign.constr$shock1$restrictions the restrictions on the cross-section:
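sign.constr$shock1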
## $shock
## [1] "AT.ltir" "BE.ltir" "ES.ltir" "FI.ltir" "FR.ltir"
##
## $restrictions
## $restrictions$res1
## [1] "AT.y" "BE.y" "DE.y" "ES.y" "FI.y" "FR.y"
##
## $restrictions$res2
## [1] "AT.p" "BE.p" "DE.p" "ES.p" "FI.p" "FR.p"
##
##
## $sign
## [1] ">" "<" "<"
##
## $rest.horz
## [1] 1 1 1
##
## $constr
## [1] 1.0 0.5 0.5
##
## $scal
## [1] -100
## NULL
The restrictions on the cross-section can be specified to hold only for a share of the countries. We have used the specification sign.constr$shock1$constr<-c(1,0.5,0.5). This implies that the shock has to occur in all countries (first entry) and the restrictions on output and prices only in half of the countries. We could make the restrictions stricter by increasing the percentage. In the extreme, specifying sign.constr$shock1$constr<-c(1,1,1) would require the restrictions to hold for all euro area countries.
We can now compute the impulse responses using the same function as before.
irf.sign.ssvs<-IRF(obj=model.ssvs, # gvar object
shock=NULL, # shock slot is not needed here
nhor=24, # impulse response horizon
sign.constr=sign.constr # list with sign-restrictions
)
To verify the sign restrictions, type:
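For instance, the median impact responses of the restricted variables (the indexing follows the posterior array used above):

irf.sign.ssvs$posterior[paste0(EA_countries[-c(3,12)],".ltir"),1,1,"median"]
irf.sign.ssvs$posterior[paste0(EA_countries,".y"),1,1,"median"]
irf.sign.ssvs$posterior[paste0(EA_countries,".p"),1,1,"median"]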
## AT.ltir BE.ltir ES.ltir FI.ltir FR.ltir
## -45.79020 -45.07776 -47.83671 -42.85059 -46.36375
## AT.y BE.y DE.y ES.y FI.y FR.y
## 2.3205868 2.3783476 0.9802466 -0.7647924 -1.1891038 1.0202673
## AT.p BE.p DE.p ES.p FI.p FR.p
## 0.01819996 0.18996783 0.01155755 -0.09820379 0.02124311 0.05346097
The following plots the output responses for selected euro area countries.
plot(irf.sign.ssvs,resp="AT.y")
plot(irf.sign.ssvs,resp="BE.y")
plot(irf.sign.ssvs,resp="DE.y")
plot(irf.sign.ssvs,resp="ES.y")
Forecast error variance decompositions (FEVDs) indicate the amount of information each variable contributes to the other variables in the autoregression. They are calculated by examining how much of the forecast error variance of each variable can be explained by exogenous shocks to the other variables. In a system with fully orthogonalized errors, the shares of the FEVD sum up to 1. In the GVAR context, however, since we identify a shock only locally in a particular country model and there is still a certain degree of residual correlation, the shares typically exceed unity. By contrast, a fully orthogonalized system, obtained for example by means of a Cholesky decomposition, would yield shares that sum up to unity but rests on assumptions that are probably hard to defend. In the case of the Cholesky decomposition, these are timing restrictions, i.e., which variables in which units are immediately affected and which are affected only with a lag.
One way of addressing this is to use generalized forecast error variance decompositions (GFEVDs). As with GIRFs, these are independent of the ordering but, since the shocks are not orthogonalized, yield shares that exceed unity. Lanne and Nyberg (2016) recently proposed a way of scaling the GFEVDs, which has the nice property that the shares sum up to 1 while the results remain independent of the ordering of the variables in the system. To calculate them, we can use the gfevd.decomp command. We can either use a running mean (running=TRUE) or the full set of posterior draws. The latter is computationally very expensive.
#calculates the LN GFEVD
gfevd.us.mp=gfevd.decomp(obj=model.ssvs.eer,nhor=24,running=TRUE,multithread=TRUE)$GFEVD
##
## Start computing generalized forecast error variance decomposition of Bayesian Global Vector Autoregression.
##
## Start computation on 8 cores (100 stable draws in total).
## Size of IRF object: 0.1 Mb
## Needed time for computation: 0 mins 0 seconds.
# get position of EA
idx<-which(grepl("EA.",dimnames(gfevd.us.mp)[[2]]))
own<-colSums(gfevd.us.mp["EA.y",idx,])
foreign<-colSums(gfevd.us.mp["EA.y",-idx,])
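A stacked bar chart then contrasts the two groups of contributions, e.g.:

barplot(rbind(own,foreign),legend.text=c("own","foreign"),main="GFEVD of EA.y")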
The plot shows a typical pattern: on impact and in the first periods, EA variables (own) explain a large share of the GFEVD. Over time, and through the lag structure of the model, other countries' variables show up more strongly as important determinants of the error variance of EA output.
In case we want to focus on a single country that we have fully identified, either using a Cholesky decomposition or sign restrictions, we can compute a simple forecast error variance decomposition (FEVD). This can be done using the command fevd.decomp. Since the computation is very time consuming, the FEVDs are based on the posterior median only (as opposed to calculating FEVDs for each MCMC draw or using a running mean). In case the underlying shock has been identified via sign restrictions, the corresponding rotation matrix is the one that fulfills the sign restrictions at the posterior median of the reduced-form coefficients (stored in irf.obj$struc.obj$Rmed). Alternatively, one can submit a rotation matrix via the option R.
##
## Start computing forecast error variance decomposition of Bayesian Global Vector Autoregression.
##
## Identification scheme: Sign-restrictions provided.
## FEVD computed for the following variables: US.y.
## Start computing FEVDs...
##
## Size of FEVD object: 0 Mb
## Needed time for computation: 0 mins 0 seconds.
Historical decompositions allow us to examine the relative importance of structural shocks in explaining deviations of a time series from its unconditional mean. This can be used to assess the hypothetical question of how the data would have looked if they had been driven only by a particular structural shock (e.g., a monetary policy shock) or a combination of structural shocks. Historical decompositions can be calculated using the function hd.decomp, which also allows us to compute the structural error of the model. To save computational time, and due to storage limits, we use the point estimate of the posterior median (as opposed to calculating HDs and the structural error for each draw of the MCMC chain). In case the shock has been identified via sign restrictions, a rotation matrix has to be selected. If not specified otherwise (via R), the rotation matrix based on the posterior median of the reduced-form coefficients (irf.obj$struc.obj$Rmed) will be used.
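For example, based on the Cholesky-identified US monetary policy shock from above:

HD<-hd.decomp(irf.chol.us.mp)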
##
## Start computing historical decomposition of Bayesian Global Vector Autoregression.
##
## Start computing HDs...
## Size of object: 0.3 Mb
## Needed time for computation: 0 mins 0 seconds.
# summing them up should get you back the original time series
org.ts<-apply(HD$hd_array,c(1,2),sum) # this sums up the contributions of all shocks + constant, initial conditions and residual component (last three entries in the third dimension of the array)
matplot(cbind(HD$x[,1],org.ts[,1]),type="l",ylab="",lwd=2)
legend("bottomright",c("hd series","original"),col=c("black","red"),lty=c(1,2),bty="n",cex=2)
In this section, we demonstrate how the package can be used for forecasting. We distinguish between unconditional and conditional forecasting. Typical applications of unconditional forecasting are selecting a model from a range of candidate models and out-of-sample forecasting. Conditional forecasts can be used for scenario analysis, comparing a forecast in which a variable of interest follows a fixed future path with its unconditional forecast.
Since the GVAR framework was developed to capture cross-country dependencies, it can handle a rich set of dynamics and interdependencies. This can also be useful for forecasting, either of global components (e.g., global output) or of country-specific variables while controlling for global factors. Pesaran, Schuermann, and Smith (2009) show that the GVAR yields competitive forecasts for a range of macroeconomic and financial variables. Crespo Cuaresma, Feldkircher, and Huber (2016) demonstrate that Bayesian shrinkage priors can help improve GVAR forecasts, and Dovern, Feldkircher, and Huber (2016) and Huber (2016) provide evidence of further gains in forecast performance from using GVARs with stochastic volatility.
To compute forecasts with the BGVAR package, we use the predict command. To be able to evaluate the forecasts, we have to specify the size of the hold-out sample when estimating the model. Here, we choose a hold-out sample of 8 observations by setting h=8 (the default value is h=0):
model.ssvs.h8<-bgvar(Data=eerData,W=W.trade0012,plag=1,saves=100,burns=100,prior="SSVS",
SV=TRUE,thin=1,eigen=1.05,trend=TRUE,h=8)
The forecasts can then be calculated using the predict function. We calculate forecasts up to 8 periods ahead by setting fhorz=8:
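A sketch of the call (the save.store argument here is an assumption):

fcast <- predict(model.ssvs.h8, fhorz=8, save.store=TRUE) # save.store is an assumption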
The forecasts are stored in fcast$fcast, which also contains the credible intervals of the predictive posterior distribution. We can evaluate the forecasts against the retained observations by looking at root mean squared errors (RMSEs) or log-predictive scores (LPS).
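A sketch, assuming the package provides lps and rmse methods for prediction objects:

lps.h8 <- lps(fcast)
rmse.h8 <- rmse(fcast)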
The objects lps.h8 and rmse.h8 then each contain an \(8 \times k\) matrix with the LPS scores / RMSEs for each variable in the system over the forecast horizon.
Last, we can visualize the forecasts by typing
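plot(fcast, resp="US.Dp", Cut=8) # the variable choice is illustrative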
with Cut
denoting the number of realized data points that should be shown in the plot prior the forecasts start.
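A sketch of such a call, assuming the plot method for prediction objects accepts resp (selecting the response variable, here an illustrative choice) and Cut:
# plot forecasts for US inflation, showing the last 8 realized observations
plot(fcast, resp="US.Dp", Cut=8)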
Similar to structural analysis, it is possible to compute conditional forecasts based on a shock identified in a country model. For that purpose, we use the methodology outlined in Waggoner and Zha (1999), applied in the GVAR context by Feldkircher, Huber, and Moder (2015). The following lines set up a conditional forecast that holds inflation in the US country model fixed for five periods at its last observed value in the sample. Make sure that the inputs to cond.predict, bgvar.obj and pred.obj, belong to the same model.
# matrix with constraints
constr <- matrix(NA,nrow=fcast$fhorz,ncol=ncol(model.ssvs.h8$xglobal))
colnames(constr) <- colnames(model.ssvs.h8$xglobal)
# set "US.Dp" for five periods to its last observed value
constr[1:5,"US.Dp"] <- model.ssvs.h8$xglobal[nrow(model.ssvs.h8$xglobal),"US.Dp"]
# compute conditional forecast (hard restriction)
cond_fcast <- cond.predict(constr=constr, bgvar.obj=model.ssvs.h8, pred.obj=fcast, constr_sd=NULL)
We could impose the same restrictions as "soft conditions", accounting for uncertainty by drawing from a Gaussian distribution with the conditional forecast in constr as mean and the standard deviations in the matrix constr_sd (of the same size as constr).
# add uncertainty to conditional forecasts
constr_sd <- matrix(NA,nrow=fcast$fhorz,ncol=ncol(model.ssvs.h8$xglobal))
colnames(constr_sd) <- colnames(model.ssvs.h8$xglobal)
constr_sd[1:5,"US.Dp"] <- 0.001
# compute conditional forecast with soft restrictions
cond_fcast2 <- cond.predict(constr=constr, bgvar.obj=model.ssvs.h8, pred.obj=fcast, constr_sd=constr_sd)
We can then compare the results using the plot command shown below, with Cut denoting the number of realized data points that should be shown in the plot before the conditioning starts.
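A sketch of the comparison, again assuming a plot method with resp and Cut arguments (variable choice illustrative):
# compare hard and soft conditional forecasts for US inflation
plot(cond_fcast,  resp="US.Dp", Cut=10)
plot(cond_fcast2, resp="US.Dp", Cut=10)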
bgvar
Main arguments and description of the function bgvar.
Data: Either a list object of maximum length N that contains the data, where each element of the list refers to a country/entity, or a matrix object of dimension T (number of time observations) times K (number of variables), whose column names consist of two parts separated by a . (i.e., a dot). The first part should denote the country / entity and the second part the name of the variable. Country and variable names are not allowed to contain a . (i.e., a dot).
W: An \(N \times N\) weight matrix with zero elements on the diagonal and rows that sum up to unity, or a list of weight matrices. See the help file for getweights for more details.
plag: Number of lags used (the same for domestic, exogenous and weakly exogenous variables). Default set to plag=1.
saves: Number of draws saved. Default set to saves=5000.
burns: Number of burn-ins. Default set to burns=5000.
prior: Either "SSVS", "MN" or "NG". See details below. Default set to prior="NG".
SV: If set to TRUE, models are fitted with stochastic volatility using the stochvol and GIGrvg packages. Due to storage limits, the whole history of the \(T\) variance-covariance matrices is not kept; consequently, the BGVAR package shows only one set of impulse responses (with the variance-covariance matrix based on the median volatilities over the sample period) instead of \(T\) sets. Specify SV=FALSE to turn SV off.
h: Defines the hold-out sample. Default without hold-out sample, thus set to zero.
thin: A thinning interval that keeps every thin-th draw from the posterior output. For example, thin=10 saves every tenth draw. Default set to thin=1.
hyperpara: A list object that defines the hyperparameters when the prior is set to either "MN", "SSVS" or "NG".
"miscellaneous:"
a_1
is the prior hyperparameter for the inverted gamma prior (shape) (set a_1 = b_1
to a small value for the standard uninformative prior). Default is set to a_1=0.01
.b_1
is the prior hyperparameter for the inverted gamma prior (rate). Default sit set to b_1=0.01
.prmean
is the prior mean on the first own lag of the autoregressive coefficient, standard value is prmean=1
for non-stationary data. The prior mean for the remaining autoregressive coefficients automatically set to 0.bmu
If SV=TRUE
, this is the prior hyperparameter for the mean of the log-volatilities. Default is bmu=0
.Bmu
If SV=TRUE
, this is the prior hyperparameter for the variance of the mean of the log-volatilities. Default is Bmu=0
.a0
If SV=TRUE
, this is the hyperparameter for the Beta prior on the persistence parameter of the log-volatilities. Default is a0=25
.
b0
If SV=TRUE
, this is the hyperparameter for the Beta prior on the persistence parameter of the log-volatilities. Default is b0=1.5
.Bsigma
If SV=TRUE
, this is the hyperparameter for the Gamma prior on the variance of the log-volatilities. Default is Bsigma=1
."MN"
shrink1
Starting value of shrink1
. Default set to 0.1.shrink2
Starting value of shrink2
. Default set to 0.2.shrink3
Hyperparameter of shrink3
. Default set to 100.shrink4
Starting value of shrink4
. Default set to 0.1."SSVS"
tau0
is the prior variance associated with the normal prior on the regression coefficients if a variable is NOT included (spike, tau0 should be close to zero).tau1
is the prior variance associated with the normal prior on the regression coefficients if a variable is included (slab, tau1 should be large).kappa0
is the prior variance associated with the normal prior on the covariances if a covariance equals zero (spike, kappa0 should be close to zero).kappa1
is the prior variance associated with the normal prior on the covariances if a covariance is unequal to zero (slab, kappa1 should be large).p_i
is the prior inclusion probability for each regression coefficient (default is 0.5).q_ij
is the prior inclusion probability for each covariance (default is 0.5)."NG"
e_lambda
Prior hyperparameter for the Gamma prior on the lag-specific shrinkage components, standard value is e_lambda=1.5
.d_lambda
Prior hyperparameter for the Gamma prior on the lag-specific shrinkage components, standard value is d_lambda=1
.a_start
Parameter of the Normal-Gamma prior that governs the heaviness of the tails of the prior distribution. A value of a_start=1
would lead to the Bayesian LASSO. Default value differs per entity and set to a_start=1/log(M)
, where M
is the number of endogenous variables per entity.sample_A
If set to TRUE
a_start
is sampled.eigen
Set to TRUE
if you want to compute the largest eigenvalue of the companion matrix for each posterior draw. If the modulus of the eigenvalue is significantly larger than unity, the model is unstable. Unstable draws exceeding an eigenvalue of one are then excluded. If eigen
is set to a numeric value, then this corresponds to the maximum eigenvalue. The default is set to \(1.05\) (which excludes all posterior draws for which the eigenvalue of the companion matrix was larger than \(1.05\) in modulus).
variable.list: In case W is a list of weight matrices, specify here which set of variables should be weighted by which weight matrix. See the help file on getweights for details. Default is NULL.
OE.weights: Default value is NULL. Can be used to provide information on how to handle additional country models (other entities). Additional country models can be used to endogenously determine variables that are (weakly) exogenous for the majority of the other country models. As examples, one could think of an additional oil price model (Mohaddes and Raissi 2019) or a model for the joint euro area monetary policy (Georgiadis 2015; Feldkircher, Gruber, and Huber 2020). The data for these additional country models has to be contained in Data. The number of additional country models is unlimited. Each list entry of OE.weights has to be named identically to the name of the additional country model contained in Data. Each slot of OE.weights has to contain the following information (see the sketch after this list):
weights: A vector of weights with names relating to the countries for which data should be aggregated. Can also relate to a subset of countries contained in the data.
variables: A vector of variable names that should be included in the additional country model. Variables that are not contained in the data slot of the extra country model are assumed to be weakly exogenous for the additional country model (aggregated with weights).
exo: A vector of variable names that should be fed into the other countries as (weakly) exogenous variables.
Wex.restr: A character vector that contains variables that should only be specified as weakly exogenous if not contained as endogenous variables in a particular country. An example often used in the literature is to place this restriction on nominal exchange rates. Default is NULL, in which case all weakly exogenous variables are treated symmetrically. See the function getweights for more details.
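A hypothetical sketch of such a slot for an additional oil-price model named OC in Data (all names and weights below are illustrative, not taken from the package):
# hypothetical entry for an additional oil-price model "OC"
OC.weights <- list(
  weights   = c(US=0.5, EA=0.3, UK=0.2), # aggregation weights over countries in the data
  variables = c("poil","y"),             # "poil" enters the OC model endogenously, "y" is aggregated with the weights
  exo       = "poil"                     # fed into the other country models as weakly exogenous
)
OE.weights <- list(OC=OC.weights)        # list entry named after the extra model contained in Data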
Ex: For including truly exogenous variables in the model; a minimal sketch follows below. Either a list object of maximum length N that contains the data, where each element of the list refers to a country/entity and has to match the country/entity names in Data (if no truly exogenous variables are added to the respective country/entity model, omit the entry; the T rows, i.e., the number of time observations, need to be the same for each country, and country and variable names are not allowed to contain a . (i.e., a dot) since this is our naming convention). Or a matrix object of dimension T times the number of truly exogenous variables, whose column names consist of two parts separated by a . (i.e., a dot); the first part should denote the country / entity name and the second part the name of the variable, and country and variable names are not allowed to contain a . (i.e., a dot).
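A hypothetical sketch of the list form, adding a dummy variable to the US model only (assuming eerData is the list form of the data with eerData$US holding the US data matrix; the variable name and window are illustrative):
# hypothetical truly exogenous dummy for the US model
Tobs <- nrow(eerData$US)    # number of time observations, same for each country
Ex <- list()
Ex$US <- matrix(0, Tobs, 1, dimnames=list(NULL, "crisis"))
Ex$US[60:70, "crisis"] <- 1 # equals one during an assumed crisis window
# model.ex <- bgvar(Data=eerData, W=W.trade0012, plag=1, Ex=Ex, saves=100, burns=100)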
trend: If set to TRUE, a deterministic trend is added to the country models.
save.country.store: If set to TRUE, the function also returns the container of all draws of the individual country models. This significantly raises the object size of the output; the default is thus set to FALSE.
multithread: If set to TRUE, parallel computing is used via the packages foreach and doParallel. The number of cores is set automatically to the maximum number of cores available on the computer. Default is set to FALSE, and thus no parallelization.
verbose: If set to FALSE, printing messages to the console is suppressed.
Below, find some example code for all three priors.
# load dataset
data(eerData)
# Minnesota prior with two different weight matrices and SV
# weights for first variable set tradeW.0012, for second finW0711
variable.list <- list()
variable.list$real <- c("y","Dp","tb")
variable.list$fin <- c("stir","ltir","rer")
Hyperparm.MN <- list(a_i = 0.01, # prior for the shape parameter of the IG
b_i = 0.01 # prior for the scale parameter of the IG
)
model.MN<-bgvar(Data=eerData,
W=W.list[c("tradeW.0012","finW0711")],
saves=200,
burns=200,
plag=1,
hyperpara=Hyperparm.MN,
prior="MN",
thin=1,
eigen=TRUE,
SV=TRUE,
variable.list=variable.list)
# SSVS prior
Hyperparm.ssvs <- list(tau0 = 0.1, # coefficients: prior variance for the spike
# (tau0 << tau1)
tau1 = 3, # coefficients: prior variance for the slab
# (tau0 << tau1)
kappa0 = 0.1, # covariances: prior variance for the spike
# (kappa0 << kappa1)
kappa1 = 7, # covariances: prior variance for the slab
# (kappa0 << kappa1)
a_i = 0.01, # prior for the shape parameter of the IG
b_i = 0.01, # prior for the scale parameter of the IG
p_i = 0.5, # prior inclusion probability of coefficients
q_ij = 0.5 # prior inclusion probability of covariances
)
model.ssvs<-bgvar(Data=eerData,
W=W.trade0012,
saves=100,
burns=100,
plag=1,
hyperpara=Hyperparm.ssvs,
prior="SSVS",
thin=1,
eigen=TRUE)
# Normal Gamma prior
data(monthlyData)
monthlyData$OC<-NULL
Hyperparm.ng<-list(c_tau = 0.01, # covariances: prior hyperparameter for the NG-prior
d_tau = 0.01, # covariances: prior hyperparameter for the NG-prior
e_lambda = 1.5, # coefficients: prior hyperparameter for the NG-prior
d_lambda = 1, # coefficients: prior hyperparameter for the NG-prior
prmean = 0, # prior mean for the first lag of the AR coefficients
a_i = 0.01, # prior for the shape parameter of the IG
b_i = 0.01, # prior for the scale parameter of the IG
a_start = .6, # (hyper-)parameter for the NG
sample_A = FALSE # estimate a?
)
model.ng<-bgvar(Data=monthlyData,
W=W,
saves=200,
burns=100,
plag=1,
hyperpara=Hyperparm.ng,
prior="NG",
thin=2,
eigen=TRUE,
SV=TRUE,
OE.weights=list(EB=EA.weights))
IRF
Main arguments and description of the function IRF.
obj: An object fitted by the function bgvar.
nhor: Forecasting horizon.
shock: A list object that should contain the following entries:
var: Character that contains the name of the variable to be shocked.
cN: Character (or character vector) of the country (countries) in which the variable should be shocked.
ident: Character to specify either chol if the shock is based on a Cholesky identification or girf if generalized impulse responses should be calculated.
scal: Optional entry in case impulses should be normalized (e.g., a +100bp increase in the interest rate).
sign.constr: The user should submit a list containing the following entries:
shock1: A list object that defines the sign restrictions for a particular shock, with the following slots:
shock: Character vector containing the country and variable to shock, separated by a dot. Example: AT.ltir (long-term interest rates in Austria).
restrictions: A list containing the variables to restrict. Can have several sub-lists of restrictions, e.g., sign.constr$shock1$restrictions$rest1=c("DE.y","AT.y") and sign.constr$shock1$restrictions$rest2=c("NL.p","AT.p"), putting restrictions on GDP in Germany and Austria and a second set of restrictions on prices in the Netherlands and Austria.
sign: Character vector whose length equals the number of restriction sets plus 1, specifying the signs to impose. Use either \(>\), \(<\) or \(0\); the latter implements zero restrictions according to Arias, Rubio-Ramírez, and Waggoner (2018). The first entry is for the shock, say AT.ltir should go up; the following entries refer to the restrictions. sign.constr$shock1$sign=c(">", "<", "<") would impose AT.ltir to increase, and the variables specified in sign.constr$shock1$restrictions$rest1 and sign.constr$shock1$restrictions$rest2 to decrease.
rest.horz: A vector with the same length as the slot sign above, specifying the number of periods for which the restrictions are imposed. If rest.horz is 1, only impact restrictions are considered.
constr: A vector with the same length as the slot sign above, with elements lying between 0 and 1. It specifies the percentage of countries for which the cross-country restrictions have to hold. If no cross-country restrictions are supplied, set all elements of constr to 1.
scal: Optional numeric. If specified, the shock will be calibrated to match scal on impact.
MaxTries: Optional numeric corresponding to the maximum number of tries to search for a rotation matrix that fulfills the user-specified restrictions. Default is set to 7500. After MaxTries unsuccessful tries, the algorithm sets the impulse responses for that specific posterior draw to NA.
shock2: Define a second list with the same entries as above to identify a second shock. Can be used iteratively to identify multiple shocks.
save.store: If TRUE, the full posterior is returned. Default is FALSE in order to save storage.
multithread: If set to TRUE, parallel computing is used via the packages foreach and doParallel. The number of cores is set to the maximum number of cores available on the computer. This option is recommended when working with sign restrictions to speed up computations. Default is set to FALSE, and thus no parallelization.
verbose: If set to FALSE, printing messages to the console is suppressed.
Below, find some further examples.
# First example, a US monetary policy shock, quarterly data
library(BGVAR)
data(eerData)
model.ssvs.eer<-bgvar(Data=eerData,W=W.trade0012,saves=500,burns=500,plag=1,prior="SSVS",thin=10,eigen=TRUE,trend=TRUE)
# US monetary policy shock
shocks<-list();shocks$var="stir";shocks$cN<-"US";shocks$ident="chol";shocks$scal=-100
irf.chol.us.mp<-IRF(obj=model.ssvs.eer,shock=shocks,nhor=24)
# plots an impulse response function
plot(irf.chol.us.mp,resp="US.y")
# calculates generalized impulse response functions for the same shock as above
shocks$ident="girf"
irf.girf.ssvs<-IRF(obj=model.ssvs.eer,shock=shocks,nhor=24)
plot(irf.girf.ssvs,resp="US.y")
# A shock to the first ordered variable yields the same responses under Cholesky and GIRF identification
shocks<-list();shocks$var="y";shocks$cN<-"US";shocks$ident="chol";shocks$scal=+1
irf.chol<-IRF(model.ssvs.eer,shock=shocks,nhor=24)
shocks$ident<-"girf"
irf.girf<-IRF(model.ssvs.eer,shock=shocks,nhor=24)
matplot(cbind(irf.chol$posterior["US.y",,1,"median"],irf.girf$posterior["US.y",,1,"median"]),type="l",ylab="")
matplot(cbind(irf.chol$posterior["US.Dp",,1,"median"],irf.girf$posterior["US.Dp",,1,"median"]),type="l",ylab="")
matplot(cbind(irf.chol$posterior["EA.y",,1,"median"],irf.girf$posterior["EA.y",,1,"median"]),type="l",ylab="")
# second example, cross-country restrictions, multiple shocks and ECB country model
library(BGVAR)
data(monthlyData);monthlyData$OC<-NULL
OE.weights = list(EB=EA.weights)
# estimates the model
model.ssvs<-bgvar(Data=monthlyData,W=W,saves=500,burns=500,plag=1,prior="SSVS",thin=10,eigen=TRUE,OE.weights=OE.weights)
EA_countries <- c("AT", "BE", "DE","ES", "FI","FR", "IE", "IT", "NL", "PT","GR","SK") #,"MT","CY","EE","LT","LV")
# A simultaneous Cholesky shock to long-term interest rates in the euro area countries, scaled to amount to -100 basis points (on average over the EA countries).
# Note that the ordering of the variables influences the responses; the ordering is exactly as in
# the country models. To use a different ordering, you have to re-estimate the model (with bgvar).
shocks<-list();shocks$var="ltir";shocks$cN<-EA_countries;shocks$ident="chol";shocks$scal=-100
irf.chol.ssvs<-IRF(model.ssvs,shock=shocks,nhor=24)
# imposes sign restrictions on the cross-section and for a global shock (long-term interest rates)
sign.constr<-list()
sign.constr$shock1$shock<-c(paste0(EA_countries[-c(3,12)],".ltir")) # the variable to shock, can be imposed for more than one country
#but should be the same variable for all of them
sign.constr$shock1$restrictions$res1<-paste0(EA_countries,".y") # restrictions (industrial production should decrease for selected countries)
sign.constr$shock1$restrictions$res2<-paste0(EA_countries,".p") # another set of restrictions (inflation should decrease for selected countries)
sign.constr$shock1$sign<-c(">","<","<") # first entry is for the shock, following entries for the restrictions (ltir should go up, y and p go down)
sign.constr$shock1$rest.horz<-c(1,1,1) # nr. of time periods restrictions are imposed, first entry is for the shock, following entries for the restrictions
sign.constr$shock1$constr<-c(1,0.5,0.5) # are constraints binding for all (1) countries specified or for at least 50% of the countries (0.5), or 75% (0.75)
sign.constr$shock1$scal=-100 # a minus 100 bp shock to long-term interest rates (on average)
sign.constr$MaxTries<-200
irf.sign.ssvs<-IRF(obj=model.ssvs,shock=NULL,nhor=24,sign.constr=sign.constr)
# Same example but using a local (German) shock and cross-country restrictions.
# imposes sign restrictions on the cross-section for a local (German) shock to long-term interest rates
sign.constr<-list()
sign.constr$shock1$shock<-c("DE.ltir") # the variable to shock, can be imposed for more than one country
#but should be the same variable for all of them
sign.constr$shock1$restrictions$res1<-paste0(EA_countries,".y") # restrictions (industrial production should decrease for selected countries)
sign.constr$shock1$restrictions$res2<-paste0(EA_countries,".p") # another set of restrictions (inflation should decrease for selected countries)
sign.constr$shock1$sign<-c(">","<","<") # first entry is for the shock, following entries for the restrictions (ltir should go up, y and p go down)
sign.constr$shock1$rest.horz<-c(2,2,1) # nr. of time periods restrictions are imposed, first entry is for the shock, following entries for the restrictions
sign.constr$shock1$constr<-c(1,0.5,0.5) # are constraints binding for all (1) countries specified or for at least 50% of the countries (0.5), or 75% (0.75)
sign.constr$shock1$scal=-100 # a minus 100 bp shock to long-term interest rates (on average)
sign.constr$MaxTries<-200
irf.sign.ssvs<-IRF(model.ssvs,shock=NULL,nhor=24,sign.constr=sign.constr)
# Example with zero restrictions (Arias et al., 2018) and rationality conditions (D'Amico and King, 2015).
data("eerDataspf")
model.ssvs.eer<-bgvar(Data=eerDataspf,
W=W.trade0012.spf,
plag=1,
saves=500,
burns=500,
prior="SSVS",
SV=TRUE,
thin=10)
sign.constr<-list()
# shock to 4-step ahead expectation of US.stir
sign.constr$shock1$shock<-"US.stir_t+4"
sign.constr$shock1$restrictions$res1<-"US.Dp_t+4"
# zero restriction on US.stir
sign.constr$shock1$restrictions$res2<-"US.stir"
sign.constr$shock1$restrictions$res3<-"US.y_t+4"
# rationality condition: US.stir_t+4 on impact is equal to average of IRF of
# US.stir between horizon 1 and 4 (defined with rest.horz, but as period 5!)
sign.constr$shock1$restrictions$res4<-"US.stir_t+4"
# rationality condition: US.Dp_t+4 on impact is equal to H-step ahead IRF
# of US.Dp in horizon 4 (defined with rest.horz, but as period 5!)
sign.constr$shock1$restrictions$res5<-"US.Dp_t+4"
# rationality condition: US.y_t+4 on impact is equal to H-step ahead IRF
# of US.y in horizon 4 (defined with rest.horz, but as period 5!)
sign.constr$shock1$restrictions$res6<-"US.y_t+4"
sign.constr$shock1$sign<-c(">","<","0","<","ratio.avg","ratio.H","ratio.H")
sign.constr$shock1$rest.horz<-c(1,1,1,1,5,5,5)
sign.constr$shock1$constr<-c(1,1,1,1,1,1,1)
sign.constr$shock1$scal=1
sign.constr$MaxTries<-200
irf.sign<-IRF(obj=model.ssvs.eer,
nhor=20,
sign.constr=sign.constr,
save.store=TRUE)
par(mfrow=c(4,1))
# rationality condition: US.stir_t+4 on impact is equal to average of IRF of US.stir between horizon 1 and 4
matplot(cbind(irf.sign$posterior["US.stir_t+4",,1,"median"],irf.sign$posterior["US.stir",,1,"median"]),type="l",ylab="",main="stir")
abline(h=mean(irf.sign$posterior["US.stir",2:5,1,"median"]));abline(v=c(1,5),lty=3,col="grey")
# rationality condition: US.y_t+4 on impact is equal to H-step ahead IRF of US.y in horizon 4
matplot(cbind(irf.sign$posterior["US.y_t+4",,1,"median"],irf.sign$posterior["US.y",,1,"median"]),type="l",ylab="",main="y")
abline(h=irf.sign$posterior["US.y_t+4",1,1,"median"]);abline(v=5,lty=3,col="grey")
# rationality condition: US.Dp_t+4 on impact is equal to H-step ahead IRF of US.Dp in horizon 4
matplot(cbind(irf.sign$posterior["US.Dp_t+4",,1,"median"],irf.sign$posterior["US.Dp",,1,"median"]),type="l",ylab="",main="Dp")
abline(h=irf.sign$posterior["US.Dp_t+4",1,1,"median"]);abline(v=5,lty=3,col="grey")
par(mar=rep(0,4))
plot("1",type="n",axes=FALSE)
legend("center",c("expectation","actual"),lty=1:2,col=c("black","red"),bty="n",ncol=2)
par(mar=c(5.1, 4.1, 4.1, 2.1))
Arias, Jonas E., Juan F. Rubio-Ramírez, and Daniel F. Waggoner. 2018. “Inference Based on Structural Vector Autoregressions Identified with Sign and Zero Restrictions: Theory and Applications.” Econometrica 86 (2): 685–720. https://doi.org/10.3982/ECTA14468.
Burriel, Pablo, and Alessandro Galesi. 2018. “Uncovering the heterogeneous effects of ECB unconventional monetary policies across euro area countries.” European Economic Review 101: 201–29. https://ideas.repec.org/p/bde/wpaper/1631.html.
Cashin, Paul, Kamiar Mohaddes, Maziar Raissi, and Mehdi Raissi. 2014. “The differential effects of oil demand and supply shocks on the global economy.” Energy Economics 44 (C): 113–34. https://ideas.repec.org/a/eee/eneeco/v44y2014icp113-134.html.
Castelnuovo, Efrem, and Paolo Surico. 2010. “Monetary Policy, Inflation Expectations and the Price Puzzle.” The Economic Journal 120 (549): 1262–83. https://doi.org/10.1111/j.1468-0297.2010.02368.x.
Chudik, Alexander, and Michael Fidora. 2011. “Using the global dimension to identify shocks with sign restrictions.” Working Paper Series No. 1318. European Central Bank.
Crespo Cuaresma, Jesús, Martin Feldkircher, and Florian Huber. 2016. “Forecasting with Global Vector Autoregressive Models: A Bayesian Approach.” Journal of Applied Econometrics 31 (7): 1371–91. https://doi.org/10.1002/jae.2504.
D’Amico, Stefania, and Thomas B. King. 2015. “What Does Anticipated Monetary Policy Do?” Working Paper Series WP-2015-10. Federal Reserve Bank of Chicago. https://ideas.repec.org/p/fip/fedhwp/wp-2015-10.html.
Dees, Stephane, Filippo di Mauro, Hashem M. Pesaran, and L. Vanessa Smith. 2007. “Exploring the international linkages of the euro area: a global VAR analysis.” Journal of Applied Econometrics 22 (1): 1–38.
Dovern, Jonas, Martin Feldkircher, and Florian Huber. 2016. “Does Joint Modelling of the World Economy Pay Off? Evaluating Global Forecasts from a Bayesian GVAR.” Journal of Economic Dynamics and Control 70: 86–100. https://doi.org/10.1016/j.jedc.2016.06.006.
Eickmeier, Sandra, and Tim Ng. 2015. “How Do US Credit Supply Shocks Propagate Internationally? A GVAR Approach.” European Economic Review 74: 128–45. https://doi.org/10.1016/j.euroecorev.2014.11.011.
Feldkircher, Martin, Thomas Gruber, and Florian Huber. 2020. “International effects of a compression of euro area yield curves.” Journal of Banking & Finance 113: 11–14.
Feldkircher, Martin, and Florian Huber. 2016. “The international transmission of US shocks – Evidence from Bayesian global vector autoregressions.” European Economic Review 81 (C): 167–88. https://ideas.repec.org/a/eee/eecrev/v81y2016icp167-188.html.
Feldkircher, Martin, and Pierre Siklos. 2019. “Global inflation dynamics and inflation expectations.” International Review of Economics & Finance 64: 217–41.
Feldkircher, M., F. Huber, and I. Moder. 2015. “Towards a New Normal: How Different Paths of US Monetary Policy Affect the World Economy.” Economic Notes 44 (3): 409–18. https://doi.org/10.1111/ecno.12041.
George, Edward I, Dongchu Sun, and Shawn Ni. 2008. “Bayesian stochastic search for VAR model restrictions.” Journal of Econometrics 142 (1): 553–80.
Georgiadis, Georgios. 2015. “Examining Asymmetries in the Transmission of Monetary Policy in the Euro Area: Evidence from a Mixed Cross-Section Global VAR Model.” European Economic Review 75 (C): 195–215. http://EconPapers.repec.org/RePEc:eee:eecrev:v:75:y:2015:i:c:p:195-215.
Huber, Florian. 2016. “Density Forecasting Using Bayesian Global Vector Autoregressions with Stochastic Volatility.” International Journal of Forecasting 32 (3): 818–37.
Huber, Florian, and Martin Feldkircher. 2019. “Adaptive Shrinkage in Bayesian Vector Autoregressive Models.” Journal of Business & Economic Statistics 37 (1): 27–39. https://doi.org/10.1080/07350015.2016.1256217.
Kastner, Gregor. 2016. “Dealing with Stochastic Volatility in Time Series Using the R Package stochvol.” Journal of Statistical Software 69 (1): 1–30. https://doi.org/10.18637/jss.v069.i05.
Koop, Gary, and Dimitris Korobilis. 2010. “Bayesian Multivariate Time Series Methods for Empirical Macroeconomics.” Foundations and Trends(R) in Econometrics 3 (4): 267–358. https://doi.org/10.1561/0800000013.
Litterman, R. B. 1986. “Forecasting with Bayesian vector autoregressions - Five years of experience.” Journal of Business and Economic Statistics 4 (1): 25–38.
Mohaddes, Kamiar, and Mehdi Raissi. 2019. “The US oil supply revolution and the global economy.” Empirical Economics 57: 1515–46.
Mohaddes, Kamiar, and Mehdi Raissi. 2018. “Compilation, Revision and Updating of the Global VAR (GVAR) Database, 1979Q2–2016Q4.” University of Cambridge: Faculty of Economics (mimeo). https://sites.google.com/site/gvarmodelling/data.
Pesaran, M. Hashem, Til Schuermann, and L. Vanessa Smith. 2009. “Forecasting economic and financial variables with global VARs.” International Journal of Forecasting 25 (4): 642–75.
Pesaran, M. Hashem, Til Schuermann, and S. M. Weiner. 2004. “Modeling Regional Interdependencies Using a Global Error-Correcting Macroeconometric Model.” Journal of Business and Economic Statistics 22: 129–62.
Pesaran, M. Hashem, and Yongcheol Shin. 1998. “Generalized Impulse Response Analysis in Linear Multivariate Models.” Economics Letters 58 (1): 17–29. http://ideas.repec.org/a/eee/ecolet/v58y1998i1p17-29.html.
Primiceri, Giorgio E. 2005. “Time Varying Structural Vector Autoregressions and Monetary Policy.” The Review of Economic Studies 72 (3): 821–52.
Rubio-Ramírez, Juan F., Daniel F. Waggoner, and Tao Zha. 2010. “Structural Vector Autoregressions: Theory of Identification and Algorithms for Inference.” Review of Economic Studies 77 (2): 665–96. http://ideas.repec.org/a/oup/restud/v77y2010i2p665-696.html.
Sims, Christopher A, and Tao Zha. 2006. “Were There Regime Switches in US Monetary Policy?” The American Economic Review 96 (1): 54–81.
Smith, L. V., and A. Galesi. 2014. “GVAR Toolbox.” https://sites.google.com/site/gvarmodelling/gvar-toolbox.
Waggoner, Daniel F, and Tao Zha. 1999. “Conditional Forecasts in Dynamic Multivariate Models.” Review of Economics and Statistics 81 (4): 639–51.