Publisher's Synopsis
"Econo-magic" and "economic tricks" are two of the pejorative terms detractors use to describe the art and science of econometrics. No doubt, these terms are well deserved in many instances. Some of the problems stem from econometrics' connection with statistics, which had its origin in the analysis of experimental data. In the typical experiment, the analyst can hold the levels of variables not of interest constant, alter the levels of treatment variables, and measure both the treatment variables and the outcome with high accuracy. With some confidence, the statistician can assert that changes in the treatment variable cause changes in the outcome and can quantify the relationship. Analysts began to apply the statistical tools appropriate to this experimental setting to economic and business data that were clearly not the outcome of any experiment. The question of cause and effect became murky. Statisticians relied on economic theory to guide them; they had few other choices. So was born econometrics: the use of statistical analysis, combined with economic theory, to analyze economic data.

Even today, the basic workhorse tool for forecasting in economics is the large structural econometric model. These models are developed in specialized institutions, government agencies, and banks. Several principles are useful for econometric forecasters: keep the model simple, use all the data you can get, and use theory as a guide to selecting causal variables. But theory gives little guidance on dynamics, that is, on which lagged values of the selected variables to use. Early econometric models failed in comparison with extrapolative methods because they paid too little attention to dynamic structure.

In a fairly simple way, the vector autoregression (VAR) approach that first appeared in the 1980s resolved the problem by shifting emphasis towards dynamics and away from collecting many causal variables. The VAR approach also resolves the question of how to make long-term forecasts, where the causal variables themselves must be forecast. When the analyst does not need to forecast causal variables or can obtain them from other sources, he or she can use a single equation with the same dynamic structure. Ordinary least squares is a perfectly adequate estimation method. Evidence supports estimating the initial equation in levels, whether the variables are stationary or not.

Evidence on the value of further simplification is mixed. If cointegration among variables is absent, error-correction models (ECMs) will do worse than equations in levels, and ECMs are only sometimes an improvement even when variables are cointegrated. Evidence is even less clear on whether or not to difference variables that are nonstationary on the basis of unit root tests. While some authors recommend applying a battery of misspecification tests, few econometricians use (or at least report using) more than the familiar Durbin-Watson test. Consequently, there is practically no evidence on whether model selection based on these tests will improve forecast performance. Limited evidence on the superiority of varying-parameter models hints that tests for parameter constancy are likely to be the most important.

Finally, econometric models do appear to be gaining over extrapolative or judgmental methods, even for short-term forecasts, though much more slowly than their proponents had hoped.

Econometric Modeling and Forecasting describes some recent advances and contributions to understanding economic forecasting. The theoretical framework adopted explains the findings of forecasting competitions and the prevalence of forecast failure, and so constitutes a general theoretical background against which recent results can be judged.
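To make the VAR approach mentioned in the synopsis concrete, the following is a minimal sketch, not drawn from the book, using the Python statsmodels library and simulated quarterly series; the variable names (gdp, inflation) and the choice of two lags are purely illustrative. Each variable is regressed on lags of every variable in the system, so multi-step forecasts need no externally supplied causal variables.

```python
# Minimal VAR sketch with simulated data (illustrative only; not from the book).
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
n = 120  # pretend quarterly observations
gdp = np.cumsum(rng.normal(0.5, 1.0, n))                                   # trending series
inflation = 2.0 + 0.1 * np.cumsum(rng.normal(size=n)) + rng.normal(scale=0.2, size=n)
data = pd.DataFrame({"gdp": gdp, "inflation": inflation})

# Every equation contains lags of all variables: dynamics come from the data,
# not from economic theory.
model = VAR(data)
results = model.fit(2)  # a VAR(2): two lags of each variable in each equation

# Recursive multi-step forecasts: the system forecasts its own "causal" variables.
forecast = results.forecast(data.values[-results.k_ar:], steps=8)
print(forecast.shape)   # (8, 2): eight steps ahead for both variables
```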
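In the same spirit, the single-equation alternative described above, with the same dynamic structure and estimated in levels by ordinary least squares, might look like the sketch below. The data and variable names (sales, income) are hypothetical; the Durbin-Watson statistic is printed because it is the misspecification check the synopsis notes is most commonly reported.

```python
# Single equation in levels, estimated by OLS (hypothetical data and names).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(1)
n = 120
income = np.cumsum(rng.normal(0.4, 1.0, n))                # causal variable, in levels
sales = 5.0 + 0.8 * income + rng.normal(scale=1.0, size=n) # dependent variable, in levels
df = pd.DataFrame({"sales": sales, "income": income})

# Same dynamic structure as one VAR equation: own lag plus current and lagged regressor.
df["sales_lag1"] = df["sales"].shift(1)
df["income_lag1"] = df["income"].shift(1)
df = df.dropna()

X = sm.add_constant(df[["sales_lag1", "income", "income_lag1"]])
ols_res = sm.OLS(df["sales"], X).fit()

# Values of the Durbin-Watson statistic near 2 suggest little first-order
# residual autocorrelation.
print(ols_res.params)
print("Durbin-Watson:", durbin_watson(ols_res.resid))
```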
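Finally, a short illustration, again on simulated data and only as an assumed workflow, of the cointegration issues raised above: an augmented Dickey-Fuller unit-root test, an Engle-Granger cointegration test, and a simple error-correction model built from the residual of the long-run levels regression.

```python
# Unit-root test, cointegration test, and a simple ECM (simulated data only).
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller, coint

rng = np.random.default_rng(2)
n = 200
x = np.cumsum(rng.normal(size=n))                  # I(1) driving series
y = 2.0 + 0.5 * x + rng.normal(scale=0.3, size=n)  # cointegrated with x

# ADF unit-root test: a large p-value is consistent with nonstationarity in levels.
print("ADF p-value for x:", adfuller(x)[1])

# Engle-Granger test: a small p-value is consistent with cointegration.
print("Cointegration p-value:", coint(y, x)[1])

# Long-run (levels) regression; its residual is the error-correction term.
levels = sm.OLS(y, sm.add_constant(x)).fit()
ect = levels.resid

# ECM: the change in y explained by the change in x and the lagged error-correction term.
dy, dx, ect_lag = np.diff(y), np.diff(x), ect[:-1]
ecm = sm.OLS(dy, sm.add_constant(np.column_stack([dx, ect_lag]))).fit()
print(ecm.params)  # the last coefficient should be negative: adjustment back to equilibrium
```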