{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,15]],"date-time":"2025-11-15T10:32:53Z","timestamp":1763202773109},"reference-count":0,"publisher":"Association for the Advancement of Artificial Intelligence (AAAI)","issue":"7","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["AAAI"],"abstract":"<jats:p>Deep neural networks (DNNs) for supervised learning can be viewed as a pipeline of a feature extractor (i.e., the last hidden layer) and a linear classifier (i.e., the output layer) that are trained jointly with stochastic gradient descent (SGD) on a loss function (e.g., cross-entropy). In each epoch, the true gradient of the loss function is estimated using mini-batches sampled from the training set, and model parameters are then updated with the mini-batch gradients. Although the latter provide an unbiased estimate of the former, they are subject to substantial variance arising from the size and number of sampled mini-batches, leading to noisy and jumpy updates. To stabilize such undesirable variance in estimating the true gradients, we propose In-Training Representation Alignment (ITRA), which explicitly aligns the feature distributions of two different mini-batches with a matching loss during SGD training. We also provide a rigorous analysis of the desirable effects of the matching loss on feature representation learning: (1) extracting compact feature representations; (2) reducing over-adaptation to mini-batches via an adaptive weighting mechanism; and (3) accommodating multi-modalities. Finally, we conduct large-scale experiments on both image and text classification to demonstrate its superior performance over strong baselines.<\/jats:p>","DOI":"10.1609\/aaai.v37i7.26044","type":"journal-article","created":{"date-parts":[[2023,6,27]],"date-time":"2023-06-27T17:16:03Z","timestamp":1687886163000},"page":"8675-8683","source":"Crossref","is-referenced-by-count":1,"title":["Learning Compact Features via In-Training Representation Alignment"],"prefix":"10.1609","volume":"37","author":[{"given":"Xin","family":"Li","sequence":"first","affiliation":[]},{"given":"Xiangrui","family":"Li","sequence":"additional","affiliation":[]},{"given":"Deng","family":"Pan","sequence":"additional","affiliation":[]},{"given":"Yao","family":"Qiang","sequence":"additional","affiliation":[]},{"given":"Dongxiao","family":"Zhu","sequence":"additional","affiliation":[]}],"member":"9382","published-online":{"date-parts":[[2023,6,26]]},"container-title":["Proceedings of the AAAI Conference on Artificial Intelligence"],"original-title":[],"link":[{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/download\/26044\/25816","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/download\/26044\/25816","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,6,27]],"date-time":"2023-06-27T17:16:03Z","timestamp":1687886163000},"score":1,"resource":{"primary":{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/26044"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,6,26]]},"references-count":0,"journal-issue":{"issue":"7","published-online":{"date-parts":[[2023,6,27]]}},"URL":"https:\/\/doi.org\/10.1609\/aaai.v37i7.26044","relation":{},"ISSN":["2374-3468","2159-5399"],"issn-type":[{"value":"2374-3468","type":"electronic"},{"value":"2159-5399","type":"print"}],"subject":[],"published":{"date-parts":[[2023,6,26]]}}}