Neural nets for indirect inference

Authors: Michael Creel

Econometrics and Statistics, Vol. 2, 36-49, April, 2017

For simulable models, neural networks are used to approximate the limited information posterior mean, which conditions on a vector of statistics rather than on the full sample. Because the model is simulable, training and testing samples can be generated at sizes large enough to train a net that is itself large enough, in numbers of hidden layers and neurons, to learn the limited information posterior mean with good accuracy. Targeting the limited information posterior mean with neural nets is simpler, faster, and more successful than targeting the full information posterior mean, which conditions on the observed sample. The output of the trained net can be used directly as an estimator of the model's parameters, or as an input to subsequent classical or Bayesian indirect inference estimation. The methods are illustrated with applications to a small dynamic stochastic general equilibrium model and to a continuous time jump-diffusion model for stock index returns.
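The procedure described above can be sketched in a few lines. The following is a minimal illustration only, not the paper's implementation: it substitutes a toy Gaussian model for the paper's DSGE and jump-diffusion applications, uses a hand-rolled one-hidden-layer net in place of a deep architecture, and makes up the prior, the statistics, and all tuning constants. It shows the pattern: draw parameters from the prior, simulate data, reduce each sample to a vector of statistics, and train a net to map those statistics to the parameters, thereby approximating the limited information posterior mean.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy simulable model (a stand-in for the paper's applications):
# y_t ~ N(mu, sigma^2), with parameters theta = (mu, sigma).
def simulate_stats(theta, n=100):
    mu, sigma = theta
    y = rng.normal(mu, sigma, n)
    # Low-dimensional statistics Z on which the limited information
    # posterior mean conditions (here: sample mean and std. dev.)
    return np.array([y.mean(), y.std(ddof=1)])

# Training set: draw theta from an (assumed uniform) prior, simulate a
# sample from the model at each draw, and keep only the statistics.
S = 20000
thetas = np.column_stack([rng.uniform(-2, 2, S), rng.uniform(0.5, 2, S)])
Z = np.array([simulate_stats(t) for t in thetas])

# One-hidden-layer net trained by full-batch gradient descent on squared
# error, so its fitted values approximate E[theta | Z].
H = 32
W1 = rng.normal(0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 2)); b2 = np.zeros(2)
lr = 0.1
for epoch in range(3000):
    A = np.tanh(Z @ W1 + b1)          # hidden-layer activations
    pred = A @ W2 + b2                # predicted posterior means
    err = pred - thetas
    # Backpropagate the mean squared error
    gW2 = A.T @ err / S; gb2 = err.mean(0)
    dA = (err @ W2.T) * (1 - A**2)
    gW1 = Z.T @ dA / S; gb1 = dA.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# The trained net, evaluated at the observed statistics, gives a point
# estimate directly (here an "observed" sample with true theta = (1, 1)).
z_obs = simulate_stats(np.array([1.0, 1.0]))
theta_hat = np.tanh(z_obs @ W1 + b1) @ W2 + b2
print(theta_hat)
```

The estimate `theta_hat` could be used as-is, or, as the abstract notes, fed into a subsequent classical or Bayesian indirect inference step.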

This paper originally appeared as Barcelona GSE Working Paper 942