{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,4]],"date-time":"2025-12-04T10:08:05Z","timestamp":1764842885892,"version":"3.37.3"},"reference-count":36,"publisher":"Springer Science and Business Media LLC","issue":"10","license":[{"start":{"date-parts":[[2024,5,17]],"date-time":"2024-05-17T00:00:00Z","timestamp":1715904000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/www.springernature.com\/gp\/researchers\/text-and-data-mining"},{"start":{"date-parts":[[2024,5,17]],"date-time":"2024-05-17T00:00:00Z","timestamp":1715904000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.springernature.com\/gp\/researchers\/text-and-data-mining"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["U22B2017","62076105"],"award-info":[{"award-number":["U22B2017","62076105"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100012326","name":"International Science and Technology Cooperation Programme","doi-asserted-by":"publisher","award":["2021EHB011","62076213"],"award-info":[{"award-number":["2021EHB011","62076213"]}],"id":[{"id":"10.13039\/501100012326","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100017610","name":"Shenzhen Science and Technology Innovation Program","doi-asserted-by":"publisher","award":["GXWD20201231105722002-20200901175001001"],"award-info":[{"award-number":["GXWD20201231105722002-20200901175001001"]}],"id":[{"id":"10.13039\/501100017610","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Int J Comput 
Vis"],"published-print":{"date-parts":[[2024,10]]},"DOI":"10.1007\/s11263-024-02103-w","type":"journal-article","created":{"date-parts":[[2024,5,17]],"date-time":"2024-05-17T15:01:58Z","timestamp":1715958118000},"page":"4510-4520","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":5,"title":["Regional Adversarial Training for Better Robust Generalization"],"prefix":"10.1007","volume":"132","author":[{"given":"Chuanbiao","family":"Song","sequence":"first","affiliation":[]},{"given":"Yanbo","family":"Fan","sequence":"additional","affiliation":[]},{"given":"Aoyang","family":"Zhou","sequence":"additional","affiliation":[]},{"given":"Baoyuan","family":"Wu","sequence":"additional","affiliation":[]},{"given":"Yiming","family":"Li","sequence":"additional","affiliation":[]},{"given":"Zhifeng","family":"Li","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7627-4604","authenticated-orcid":false,"given":"Kun","family":"He","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,5,17]]},"reference":[{"key":"2103_CR1","unstructured":"Athalye, A., Carlini, N., & Wagner, D.A. (2018). Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In: ICML, pp 274\u2013283"},{"key":"2103_CR2","unstructured":"Buckman, J., Roy, A., Raffel, C., et\u00a0al. (2018). Thermometer encoding: One hot way to resist adversarial examples. In: ICLR"},{"key":"2103_CR3","doi-asserted-by":"crossref","unstructured":"Carlini, N., & Wagner, D. (2017). Towards evaluating the robustness of neural networks. In: IEEE S &P, pp 39\u201357","DOI":"10.1109\/SP.2017.49"},{"key":"2103_CR4","doi-asserted-by":"crossref","unstructured":"Dabouei, A., Soleymani, S., Taherkhani, F., et\u00a0al. (2020). Exploiting joint robustness to adversarial perturbations. 
In: CVPR, pp 1122\u20131131","DOI":"10.1109\/CVPR42600.2020.00120"},{"key":"2103_CR5","unstructured":"Deng, Z., Dong, Y., Pang, T., et\u00a0al. (2020). Adversarial distributional training for robust deep learning. In: NeurIPS, pp 8270\u20138283"},{"key":"2103_CR6","doi-asserted-by":"crossref","unstructured":"Dong, Y., Fu, Q.A., Yang, X., et\u00a0al. (2020). Benchmarking adversarial robustness on image classification. In: CVPR, pp 321\u2013331","DOI":"10.1109\/CVPR42600.2020.00040"},{"key":"2103_CR7","unstructured":"Gilmer, J., Adams, R.P., Goodfellow, I.J., et\u00a0al. (2018). Motivating the rules of the game for adversarial example research. arXiv:1807.06732"},{"key":"2103_CR8","unstructured":"Gilmer, J., Ford, N., Carlini, N., et\u00a0al. (2019). Adversarial examples are a natural consequence of test error in noise. In: ICML, pp 2280\u20132289"},{"key":"2103_CR9","doi-asserted-by":"crossref","unstructured":"Girshick, R. (2015). Fast R-CNN. In: ICCV, pp 1440\u20131448","DOI":"10.1109\/ICCV.2015.169"},{"key":"2103_CR10","unstructured":"Goodfellow. I.J, Shlens, J., & Szegedy, C. (2015). Explaining and harnessing adversarial examples. In: ICLR"},{"key":"2103_CR11","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., et\u00a0al. (2016a). Deep residual learning for image recognition. In: CVPR, pp 770\u2013778","DOI":"10.1109\/CVPR.2016.90"},{"key":"2103_CR12","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., et\u00a0al. (2016b). Identity mappings in deep residual networks. In: ECCV, pp 630\u2013645","DOI":"10.1007\/978-3-319-46493-0_38"},{"key":"2103_CR13","unstructured":"Hendrycks, D., & Dietterich, T.G. (2019). Benchmarking neural network robustness to common corruptions and perturbations. In: ICLR"},{"key":"2103_CR14","unstructured":"Howard, F.J. (2019). The ImageNette dataset. https:\/\/github.com\/fastai\/imagenette"},{"key":"2103_CR15","unstructured":"Krizhevsky, A. (2009). 
Learning multiple layers of features from tiny images. Technical Report"},{"key":"2103_CR16","doi-asserted-by":"crossref","unstructured":"Lee, S., Lee, H., & Yoon, S. (2020). Adversarial vertex mixup: Toward better adversarially robust generalization. In: CVPR, pp 272\u2013281","DOI":"10.1109\/CVPR42600.2020.00035"},{"key":"2103_CR17","unstructured":"Li, Y., Li, L., Wang, L., et\u00a0al. (2019). NATTACK: learning the distributions of adversarial examples for an improved black-box attack on deep neural networks. In: ICML, pp 3866\u20133876"},{"key":"2103_CR18","doi-asserted-by":"crossref","unstructured":"Lin, T.Y., Maire, M., Belongie, S., et\u00a0al. (2014). Microsoft COCO: Common objects in context. In: ECCV, pp 740\u2013755","DOI":"10.1007\/978-3-319-10602-1_48"},{"key":"2103_CR19","unstructured":"Madry, A., Makelov, A., Schmidt, L., et\u00a0al. (2018). Towards deep learning models resistant to adversarial attacks. In: ICLR"},{"key":"2103_CR20","unstructured":"Montasser, O., Hanneke, S., & Srebro, N. (2019). VC classes are adversarially robustly learnable, but only improperly. In: COLT, pp 2512\u20132530"},{"key":"2103_CR21","unstructured":"Pang, T., Xu, K., Du, C., et\u00a0al. (2019). Improving adversarial robustness via promoting ensemble diversity. In: ICML, pp 4970\u20134979"},{"key":"2103_CR22","unstructured":"Papernot, N., McDaniel, P.D., & Goodfellow, I.J. (2016). Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. arXiv:1605.07277"},{"key":"2103_CR23","doi-asserted-by":"crossref","unstructured":"Papernot, N., McDaniel, P.D., Goodfellow, I.J., et\u00a0al. (2017). Practical black-box attacks against machine learning. In: AsiaCCS, pp 506\u2013519","DOI":"10.1145\/3052973.3053009"},{"key":"2103_CR24","unstructured":"Rice, L., Wong, E., & Kolter, J.Z. (2020). Overfitting in adversarially robust deep learning. 
In: ICML, pp 8093\u20138104"},{"key":"2103_CR25","unstructured":"Schmidt, L., Santurkar, S., Tsipras, D., et\u00a0al. (2018). Adversarially robust generalization requires more data. In: NeurIPS, pp 5019\u20135031"},{"key":"2103_CR26","unstructured":"Song, C., He, K., Wang, L., et\u00a0al. (2019). Improving the generalization of adversarial training with domain adaptation. In: ICLR"},{"key":"2103_CR27","unstructured":"Szegedy, C., Zaremba, W., Sutskever, I., et\u00a0al. (2014). Intriguing properties of neural networks. In: ICLR"},{"key":"2103_CR28","doi-asserted-by":"crossref","unstructured":"Szegedy, C., Vanhoucke, V., Ioffe, S., et\u00a0al. (2016). Rethinking the inception architecture for computer vision. In: CVPR, pp 2818\u20132826","DOI":"10.1109\/CVPR.2016.308"},{"key":"2103_CR29","unstructured":"Tram\u00e8r, F., Kurakin, A., Papernot, N., et\u00a0al. (2018). Ensemble adversarial training: Attacks and defenses. In: ICLR"},{"key":"2103_CR30","unstructured":"Wang, Y., Zou, D., Yi, J., et\u00a0al. (2019). Improving adversarial robustness requires revisiting misclassified examples. In: ICLR"},{"key":"2103_CR31","unstructured":"Yang, Y., Zhang, G., Xu, Z., et\u00a0al. (2019). ME-Net: Towards effective adversarial robustness with matrix estimation. In: ICML, pp 7025\u20137034"},{"key":"2103_CR32","unstructured":"Yin, D., Lopes, R.G., Shlens, J., et\u00a0al. (2019a). A fourier perspective on model robustness in computer vision. In: NeurIPS, pp 13,255\u201313,265"},{"key":"2103_CR33","unstructured":"Yin, D., Ramchandran, K., & Bartlett, P.L. (2019b). Rademacher complexity for adversarially robust generalization. In: ICML, pp 7085\u20137094"},{"key":"2103_CR34","doi-asserted-by":"crossref","unstructured":"Zagoruyko, S., & Komodakis, N. (2016). Wide residual networks. In: BMVC","DOI":"10.5244\/C.30.87"},{"key":"2103_CR35","unstructured":"Zhang, H., Yu, Y., Jiao, J., et\u00a0al. (2019). Theoretically principled trade-off between robustness and accuracy. 
In: ICML, pp 7472\u20137482"},{"key":"2103_CR36","unstructured":"Zhang, J., Xu, X., Han, B., et\u00a0al. (2020). Attacks which do not kill training make adversarial learning stronger. In: ICML, pp 11,278\u201311,287"}],"container-title":["International Journal of Computer Vision"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11263-024-02103-w.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s11263-024-02103-w\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11263-024-02103-w.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,10,4]],"date-time":"2024-10-04T06:23:37Z","timestamp":1728023017000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s11263-024-02103-w"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,5,17]]},"references-count":36,"journal-issue":{"issue":"10","published-print":{"date-parts":[[2024,10]]}},"alternative-id":["2103"],"URL":"https:\/\/doi.org\/10.1007\/s11263-024-02103-w","relation":{},"ISSN":["0920-5691","1573-1405"],"issn-type":[{"type":"print","value":"0920-5691"},{"type":"electronic","value":"1573-1405"}],"subject":[],"published":{"date-parts":[[2024,5,17]]},"assertion":[{"value":"15 November 2022","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"25 April 2024","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"17 May 2024","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}}]}}