{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,4,12]],"date-time":"2025-04-12T04:11:21Z","timestamp":1744431081967,"version":"3.40.4"},"reference-count":0,"publisher":"Association for the Advancement of Artificial Intelligence (AAAI)","issue":"22","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["AAAI"],"abstract":"<jats:p>Opponent Modeling (OM) aims to enhance decision-making by modeling other agents in multi-agent environments. Existing works typically learn opponent models against a pre-designated fixed set of opponents during training. However, this will cause poor generalization when facing unknown opponents during testing, as previously unseen opponents can exhibit out-of-distribution (OOD) behaviors that the learned opponent models cannot handle. To tackle this problem, we introduce a novel Open-Ended Opponent Modeling (OEOM) framework, which continuously generates opponents with diverse strengths and styles to reduce the possibility of OOD situations occurring during testing. Founded on population-based training and information-theoretic trajectory space diversity regularization, OEOM generates a dynamic set of opponents. This set is then fed to any OM approaches to train a potentially generalizable opponent model. Upon this, we further propose a simple yet effective OM approach that naturally fits within the OEOM framework. This approach is based on in-context reinforcement learning and learns a Transformer that dynamically recognizes and responds to opponents based on their trajectories. Extensive experiments in cooperative, competitive, and mixed environments demonstrate that OEOM is an approach-agnostic framework that improves generalizability compared to training against a fixed set of opponents, regardless of OM approaches or testing opponent settings. The results also indicate that our proposed approach generally outperforms existing OM baselines.<\/jats:p>","DOI":"10.1609\/aaai.v39i22.34488","type":"journal-article","created":{"date-parts":[[2025,4,11]],"date-time":"2025-04-11T13:25:10Z","timestamp":1744377910000},"page":"23222-23230","source":"Crossref","is-referenced-by-count":0,"title":["An Open-Ended Learning Framework for Opponent Modeling"],"prefix":"10.1609","volume":"39","author":[{"given":"Yuheng","family":"Jing","sequence":"first","affiliation":[]},{"given":"Kai","family":"Li","sequence":"additional","affiliation":[]},{"given":"Bingyun","family":"Liu","sequence":"additional","affiliation":[]},{"given":"Haobo","family":"Fu","sequence":"additional","affiliation":[]},{"given":"Qiang","family":"Fu","sequence":"additional","affiliation":[]},{"given":"Junliang","family":"Xing","sequence":"additional","affiliation":[]},{"given":"Jian","family":"Cheng","sequence":"additional","affiliation":[]}],"member":"9382","published-online":{"date-parts":[[2025,4,11]]},"container-title":["Proceedings of the AAAI Conference on Artificial Intelligence"],"original-title":[],"link":[{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/download\/34488\/36643","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/download\/34488\/36643","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,4,11]],"date-time":"2025-04-11T13:25:11Z","timestamp":1744377911000},"score":1,"resource":{"primary":{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/34488"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,4,11]]},"references-count":0,"journal-issue":{"issue":"22","published-online":{"date-parts":[[2025,4,11]]}},"URL":"https:\/\/doi.org\/10.1609\/aaai.v39i22.34488","relation":{},"ISSN":["2374-3468","2159-5399"],"issn-type":[{"value":"2374-3468","type":"electronic"},{"value":"2159-5399","type":"print"}],"subject":[],"published":{"date-parts":[[2025,4,11]]}}}