Training independent subnetworks for robust prediction

M. Havasi, R. Jenatton, S. Fort, J. Z. Liu, J. Snoek, et al. — arXiv preprint arXiv:2010.06610, 2020 (arxiv.org)
Recent approaches to efficiently ensemble neural networks have shown that strong robustness and uncertainty performance can be achieved with a negligible gain in parameters over the original network. However, these methods still require multiple forward passes for prediction, leading to a significant computational cost. In this work, we show a surprising result: the benefits of using multiple predictions can be achieved 'for free' under a single model's forward pass. In particular, we show that, using a multi-input multi-output (MIMO) configuration, one can utilize a single model's capacity to train multiple subnetworks that independently learn the task at hand. By ensembling the predictions made by the subnetworks, we improve model robustness without increasing compute. We observe a significant improvement in negative log-likelihood, accuracy, and calibration error on CIFAR10, CIFAR100, ImageNet, and their out-of-distribution variants compared to previous methods.
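The MIMO idea described above can be sketched in a few lines: concatenate M inputs into one forward pass through a shared backbone, attach M output heads (one subnetwork each), and at test time feed M copies of the same input and average the M predictions. The toy NumPy model below is purely illustrative, not the paper's implementation; the dimensions, the single-layer backbone, and the random weights are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions): ensemble size M, input dim D,
# hidden width H, number of classes C.
M, D, H, C = 3, 32, 64, 10

# A single shared backbone whose input layer sees M concatenated inputs,
# plus M independent classification heads (randomly initialised here).
W_in = rng.normal(0.0, 0.1, size=(M * D, H))
W_heads = rng.normal(0.0, 0.1, size=(M, H, C))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mimo_forward(xs):
    """xs: (M, batch, D). One forward pass yields M predictions."""
    batch = xs.shape[1]
    joint = xs.transpose(1, 0, 2).reshape(batch, M * D)  # concatenate the M inputs
    h = np.tanh(joint @ W_in)                            # shared backbone features
    # Head m produces the prediction of subnetwork m.
    return np.stack([softmax(h @ W_heads[m]) for m in range(M)])  # (M, batch, C)

# Training time: each input slot receives an *independent* example, so the
# subnetworks learn the task separately within the shared capacity.
train_inputs = rng.normal(size=(M, 8, D))
train_preds = mimo_forward(train_inputs)  # head m would be fit to slot m's labels

# Test time: repeat the same input M times; averaging the M heads gives an
# ensemble prediction at the cost of a single forward pass.
x = rng.normal(size=(1, D))
test_preds = mimo_forward(np.repeat(x[None, :, :], M, axis=0))
ensemble = test_preds.mean(axis=0)  # (1, C)
```

Note the key design point from the abstract: the compute cost of the backbone is unchanged because the M inputs and M outputs only widen the first and last layers.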