Robust inference and model selection using bagged posteriors
Standard Bayesian inference is known to be sensitive to model misspecification, leading to unreliable uncertainty quantification and poor predictive performance. However, finding generally applicable and computationally feasible methods for robust Bayesian inference under misspecification has proven difficult. An intriguing approach is to use bagging on the Bayesian posterior ("BayesBag"); that is, to average the posterior over many bootstrapped datasets. We provide theoretical results characterizing the asymptotic behavior of the BayesBag posterior under misspecification, and we empirically assess BayesBag on synthetic and real-world data using a variety of models. Overall, our results demonstrate that BayesBag is an easy-to-use, widely applicable approach that improves on standard Bayesian inference by making it more stable, accurate, and reproducible.
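To make the bagging idea concrete, the following is a minimal illustrative sketch (not the paper's code): for a toy conjugate normal-mean model with known observation variance, the bagged posterior is approximated by pooling posterior draws across bootstrap resamples of the data. The model, prior parameters, and sample sizes here are assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def posterior_params(x, sigma2=1.0, tau2=10.0):
    """Posterior mean/variance for a normal mean with known variance
    sigma2 and a N(0, tau2) prior (toy conjugate model, assumed here)."""
    n = len(x)
    var = 1.0 / (1.0 / tau2 + n / sigma2)
    mean = var * x.sum() / sigma2
    return mean, var

def bayesbag_samples(x, n_boot=50, n_draws=200, rng=rng):
    """Approximate the bagged posterior by pooling an equal number of
    posterior draws from each bootstrap resample of the data."""
    pooled = []
    for _ in range(n_boot):
        xb = rng.choice(x, size=len(x), replace=True)  # bootstrap dataset
        m, v = posterior_params(xb)
        pooled.append(rng.normal(m, np.sqrt(v), size=n_draws))
    return np.concatenate(pooled)

x = rng.normal(2.0, 1.0, size=100)
bagged = bayesbag_samples(x)
m_std, v_std = posterior_params(x)
# The bagged posterior is centered near the standard posterior mean but is
# wider, since it adds between-bootstrap variability to the posterior spread.
```

In this sketch the extra width of the bagged posterior comes from the spread of the bootstrap posterior means, which is the mechanism by which bagging can counteract the overconfidence of a misspecified model.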