Invariant information bottleneck for domain generalization

B Li, Y Shen, Y Wang, W Zhu, D Li, K Keutzer… - Proceedings of the AAAI Conference on Artificial Intelligence, 2022 - ojs.aaai.org
Abstract
Invariant risk minimization (IRM) has recently emerged as a promising alternative for domain generalization. Nevertheless, its loss function is difficult to optimize for nonlinear classifiers, and the original optimization objective can fail when pseudo-invariant features and geometric skews exist. Inspired by IRM, in this paper we propose a novel formulation for domain generalization, dubbed invariant information bottleneck (IIB). IIB aims to minimize invariant risk for nonlinear classifiers while simultaneously mitigating the impact of pseudo-invariant features and geometric skews. Specifically, we first present a novel formulation of invariant causal prediction via mutual information. We then adopt a variational formulation of the mutual information to derive a tractable loss function for nonlinear classifiers. To overcome the failure modes of IRM, we propose to minimize the mutual information between the inputs and their representations. IIB significantly outperforms IRM on synthetic datasets where pseudo-invariant features and geometric skews occur, showing the effectiveness of the proposed formulation in overcoming the failure modes of IRM. Furthermore, experiments on DomainBed show that IIB outperforms 13 baselines by 0.9% on average across 7 real datasets.
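The abstract names two ingredients: a variational treatment of the mutual-information terms to obtain a tractable loss for nonlinear classifiers, and compression of the representation by minimizing the mutual information between inputs and representations. The sketch below illustrates that general recipe under stated assumptions and is not the authors' implementation: it uses a standard variational-information-bottleneck encoder (whose KL term upper-bounds I(X; Z)) and substitutes an IRM-v1-style gradient penalty for the paper's mutual-information invariance term; the architecture sizes and the weights `beta` and `lam` are placeholder assumptions.

```python
# Illustrative IIB-style objective: per-environment risk + information bottleneck
# + invariance penalty. Hedged sketch only; not the paper's released code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VIBEncoder(nn.Module):
    """Stochastic encoder q(z|x); its KL term upper-bounds I(X; Z)."""

    def __init__(self, in_dim=2, z_dim=8):
        super().__init__()
        self.net = nn.Linear(in_dim, 2 * z_dim)  # outputs mean and log-variance

    def forward(self, x):
        mu, logvar = self.net(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        return z, kl


def irm_penalty(logits, y):
    """IRM-v1 style penalty (stand-in for the paper's invariance term):
    squared gradient of the risk w.r.t. a fixed scalar classifier scale."""
    scale = torch.ones(1, requires_grad=True)
    loss = F.cross_entropy(logits * scale, y)
    grad = torch.autograd.grad(loss, scale, create_graph=True)[0]
    return grad.pow(2).sum()


def iib_style_loss(encoder, classifier, envs, beta=1e-3, lam=1.0):
    """Average over environments of risk + beta * KL (bottleneck) + lam * invariance."""
    total = 0.0
    for x, y in envs:
        z, kl = encoder(x)
        logits = classifier(z)
        total = total + F.cross_entropy(logits, y) + beta * kl + lam * irm_penalty(logits, y)
    return total / len(envs)


if __name__ == "__main__":
    # Two toy training environments with 2-D inputs and binary labels.
    envs = [(torch.randn(64, 2), torch.randint(0, 2, (64,))) for _ in range(2)]
    enc, clf = VIBEncoder(), nn.Linear(8, 2)
    opt = torch.optim.Adam(list(enc.parameters()) + list(clf.parameters()), lr=1e-3)
    for step in range(5):
        opt.zero_grad()
        loss = iib_style_loss(enc, clf, envs)
        loss.backward()
        opt.step()
        print(f"step {step}: loss {loss.item():.4f}")
```

In this sketch the KL term plays the role of minimizing I(X; Z), while the gradient penalty encourages a classifier that is simultaneously optimal across environments; the relative weights `beta` and `lam` would need tuning per dataset.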