41st ICML 2024: Vienna, Austria
- Forty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27, 2024. OpenReview.net 2024
Accept (Oral)
- Stephen Zhao, Rob Brekelmans, Alireza Makhzani, Roger Baker Grosse:
Probabilistic Inference in Language Models via Twisted Sequential Monte Carlo. - Edward Hughes, Michael D. Dennis, Jack Parker-Holder, Feryal M. P. Behbahani, Aditi Mavalankar, Yuge Shi, Tom Schaul, Tim Rocktäschel:
Position: Open-Endedness is Essential for Artificial Superhuman Intelligence. - Jesse Farebrother, Jordi Orbay, Quan Vuong, Adrien Ali Taïga, Yevgen Chebotar, Ted Xiao, Alex Irpan, Sergey Levine, Pablo Samuel Castro, Aleksandra Faust, Aviral Kumar, Rishabh Agarwal:
Stop Regressing: Training Value Functions via Classification for Scalable Deep RL. - Da Xiao, Qingye Meng, Shengping Li, Xingyuan Yuan:
Improving Transformers with Dynamically Composable Multi-Head Attention. - Vincent Herrmann, Francesco Faccio, Jürgen Schmidhuber:
Learning Useful Representations of Recurrent Neural Network Weight Matrices. - Fei Liu, Xialiang Tong, Mingxuan Yuan, Xi Lin, Fu Luo, Zhenkun Wang, Zhichao Lu, Qingfu Zhang:
Evolution of Heuristics: Towards Efficient Automatic Algorithm Design Using Large Language Model. - Ziniu Hu, Ahmet Iscen, Aashi Jain, Thomas Kipf, Yisong Yue, David A. Ross, Cordelia Schmid, Alireza Fathi:
SceneCraft: An LLM Agent for Synthesizing 3D Scenes as Blender Code. - Weilin Chen, Ruichu Cai, Zeqin Yang, Jie Qiao, Yuguang Yan, Zijian Li, Zhifeng Hao:
Doubly Robust Causal Effect Estimation under Networked Interference via Targeted Learning. - Jan E. Gerken, Pan Kessel:
Emergent Equivariance in Deep Ensembles. - Linyuan Gong, Sida Wang, Mostafa Elhoushi, Alvin Cheung:
Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks. - Younghyo Park, Gabriel B. Margolis, Pulkit Agrawal:
Position: Automatic Environment Shaping is the Next Frontier in RL. - Qiankun Zhang, Aocheng Shen, Boyu Zhang, Hanrui Jiang, Bingqian Du:
Online Matching with Stochastic Rewards: Provable Better Bound via Adversarial Reinforcement Learning. - Jessica Dai:
Position: Beyond Personhood: Agency, Accountability, and the Limits of Anthropomorphic Ethical Analysis. - Juno Kim, Taiji Suzuki:
Transformers Learn Nonlinear Features In Context: Nonconvex Mean-field Dynamics on the Attention Landscape. - Weixin Liang, Zachary Izzo, Yaohui Zhang, Haley Lepp, Hancheng Cao, Xuandong Zhao, Lingjiao Chen, Haotian Ye, Sheng Liu, Zhi Huang, Daniel A. McFarland, James Y. Zou:
Monitoring AI-Modified Content at Scale: A Case Study on the Impact of ChatGPT on AI Conference Peer Reviews. - Siqi Miao, Zhiyuan Lu, Mia Liu, Javier M. Duarte, Pan Li:
Locality-Sensitive Hashing-Based Efficient Point Transformer with Applications in High-Energy Physics. - Gerald Woo, Chenghao Liu, Akshat Kumar, Caiming Xiong, Silvio Savarese, Doyen Sahoo:
Unified Training of Universal Time Series Forecasting Transformers. - Lucas Spangher, Allen M. Wang, Andrew Maris, Myles Stapelberg, Viraj Mehta, Alex Saperstein, Stephen Lane-Walsh, Akshata Kishore Moharir, Alessandro Pau, Cristina Rea:
Position: Opportunities Exist for Machine Learning in Magnetic Fusion Energy. - Christian Schlarmann, Naman Deep Singh, Francesco Croce, Matthias Hein:
Robust CLIP: Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models. - Yujie Xing, Xiao Wang, Yibo Li, Hai Huang, Chuan Shi:
Less is More: on the Over-Globalizing Problem in Graph Transformers. - Jiawei Zhao, Zhenyu Zhang, Beidi Chen, Zhangyang Wang, Anima Anandkumar, Yuandong Tian:
GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection. - Charlie Hou, Akshat Shrivastava, Hongyuan Zhan, Rylan Conway, Trang Le, Adithya Sagar, Giulia Fanti, Daniel Lazar:
PrE-Text: Training Language Models on Private Federated Data in the Age of LLMs. - Can Yaras, Peng Wang, Laura Balzano, Qing Qu:
Compressible Dynamics in Deep Overparameterized Low-Rank Learning & Adaptation. - Anka Reuel, Lisa Soder, Benjamin Bucknall, Trond Arne Undheim:
Position: Technical Research and Talent is Needed for Effective AI Governance. - Xin Du, Lixin Xiu, Kumiko Tanaka-Ishii:
Bottleneck-Minimal Indexing for Generative Document Retrieval. - Bob Junyi Zou, Matthew E. Levine, Dessi P. Zaharieva, Ramesh Johari, Emily B. Fox:
Hybrid² Neural ODE Causal Modeling and an Application to Glycemic Response. - Jiayi Chen, Aidong Zhang:
FedMBridge: Bridgeable Multimodal Federated Learning. - Ryan Liu, Theodore R. Sumers, Ishita Dasgupta, Thomas L. Griffiths:
How do Large Language Models Navigate Conflicts between Honesty and Helpfulness? - Collin Burns, Pavel Izmailov, Jan Hendrik Kirchner, Bowen Baker, Leo Gao, Leopold Aschenbrenner, Yining Chen, Adrien Ecoffet, Manas Joglekar, Jan Leike, Ilya Sutskever, Jeffrey Wu:
Weak-to-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision. - Tianying Ji, Yongyuan Liang, Yan Zeng, Yu Luo, Guowei Xu, Jiawei Guo, Ruijie Zheng, Furong Huang, Fuchun Sun, Huazhe Xu:
ACE: Off-Policy Actor-Critic with Causality-Aware Entropy Regularization. - Tom Wollschläger, Niklas Kemper, Leon Hetzel, Johanna Sommer, Stephan Günnemann:
Expressivity and Generalization: Fragment-Biases for Molecular GNNs. - Sepanta Zeighami, Cyrus Shahabi:
Theoretical Analysis of Learned Database Operations under Distribution Shift through Distribution Learnability. - Shayne Longpre, Sayash Kapoor, Kevin Klyman, Ashwin Ramaswami, Rishi Bommasani, Borhane Blili-Hamelin, Yangsibo Huang, Aviya Skowron, Zheng Xin Yong, Suhas Kotha, Yi Zeng, Weiyan Shi, Xianjun Yang, Reid Southen, Alexander Robey, Patrick Chao, Diyi Yang, Ruoxi Jia, Daniel Kang, Sandy Pentland, Arvind Narayanan, Percy Liang, Peter Henderson:
Position: A Safe Harbor for AI Evaluation and Red Teaming. - Kiarash Banihashem, Samira Goudarzi, MohammadTaghi Hajiaghayi, Peyman Jabbarzade, Morteza Monemizadeh:
A Dynamic Algorithm for Weighted Submodular Cover Problem. - Jiachen T. Wang, Tianji Yang, James Zou, Yongchan Kwon, Ruoxi Jia:
Rethinking Data Shapley for Data Selection Tasks: Misleads and Merits. - Shusheng Xu, Wei Fu, Jiaxuan Gao, Wenjie Ye, Weilin Liu, Zhiyu Mei, Guangju Wang, Chao Yu, Yi Wu:
Is DPO Superior to PPO for LLM Alignment? A Comprehensive Study. - Julien Ferry, Ricardo Fukasawa, Timothée Pascal, Thibaut Vidal:
Trained Random Forests Completely Reveal your Dataset. - Uijeong Jang, Jason D. Lee, Ernest K. Ryu:
LoRA Training in the NTK Regime has No Spurious Local Minima. - Jayesh Singla, Ananye Agarwal, Deepak Pathak:
SAPG: Split and Aggregate Policy Gradients. - Lynn Chua, Badih Ghazi, Pritish Kamath, Ravi Kumar, Pasin Manurangsi, Amer Sinha, Chiyuan Zhang:
How Private are DP-SGD Implementations? - Qianlong Wen, Mingxuan Ju, Zhongyu Ouyang, Chuxu Zhang, Yanfang Ye:
From Coarse to Fine: Enable Comprehensive Graph Self-supervised Learning with Multi-granular Semantic Ensemble. - Aaron Lou, Chenlin Meng, Stefano Ermon:
Discrete Diffusion Modeling by Estimating the Ratios of the Data Distribution. - Mohammad Pezeshki, Diane Bouchacourt, Mark Ibrahim, Nicolas Ballas, Pascal Vincent, David Lopez-Paz:
Discovering Environments with XRM. - Ilias Diakonikolas, Mingchen Ma, Lisheng Ren, Christos Tzamos:
Fast Co-Training under Weak Dependence via Stream-Based Active Learning. - Masahiro Kato, Akihiro Oga, Wataru Komatsubara, Ryo Inokuchi:
Active Adaptive Experimental Design for Treatment Effect Estimation with Covariate Choice. - Francesco Paissan, Mirco Ravanelli, Cem Subakan:
Listenable Maps for Audio Classifiers. - Barna Saha, Christopher Ye:
I/O Complexity of Attention, or How Optimal is FlashAttention? - Bairu Hou, Yujian Liu, Kaizhi Qian, Jacob Andreas, Shiyu Chang, Yang Zhang:
Decomposing Uncertainty for Large Language Models through Input Clarification Ensembling. - Allen Tran, Aurélien Bibaut, Nathan Kallus:
Inferring the Long-Term Causal Effects of Long-Term Treatments from Short-Term Experiments. - Ta Duy Nguyen, Alina Ene:
Multiplicative Weights Update, Area Convexity and Random Coordinate Descent for Densest Subgraph Problems. - Manuel Glöckler, Michael Deistler, Christian Dietrich Weilbach, Frank Wood, Jakob H. Macke:
All-in-one simulation-based inference. - Thomas Kleine Buening, Victor Villin, Christos Dimitrakakis:
Environment Design for Inverse Reinforcement Learning. - Jonah Brown-Cohen, Geoffrey Irving, Georgios Piliouras:
Scalable AI Safety via Doubly-Efficient Debate. - Yu Luo, Tianying Ji, Fuchun Sun, Jianwei Zhang, Huazhe Xu, Xianyuan Zhan:
OMPO: A Unified Framework for RL under Policy and Dynamics Shifts. - Chendi Wang, Yuqing Zhu, Weijie J. Su, Yu-Xiang Wang:
Neural Collapse meets Differential Privacy: Curious behaviors of NoisyGD with Near-Perfect Representation Learning. - Ziyad Oulhaj, Mathieu Carrière, Bertrand Michel:
Differentiable Mapper for Topological Optimization of Data Representation. - Wenshuo Li, Xinghao Chen, Han Shu, Yehui Tang, Yunhe Wang:
ExCP: Extreme LLM Checkpoint Compression via Weight-Momentum Joint Shrinking. - Haoran Li, Zicheng Zhang, Wang Luo, Congying Han, Yudong Hu, Tiande Guo, Shichen Liao:
Towards Optimal Adversarial Robust Q-learning with Bellman Infinity-error. - Mina Dalirrooyfard, Konstantin Makarychev, Slobodan Mitrovic:
Pruned Pivot: Correlation Clustering Algorithm for Dynamic, Parallel, and Local Computation Models. - Dongping Chen, Ruoxi Chen, Shilin Zhang, Yaochen Wang, Yinuo Liu, Huichi Zhou, Qihui Zhang, Yao Wan, Pan Zhou, Lichao Sun:
MLLM-as-a-Judge: Assessing Multimodal LLM-as-a-Judge with Vision-Language Benchmark. - Qinlin Zhao, Jindong Wang, Yixuan Zhang, Yiqiao Jin, Kaijie Zhu, Hao Chen, Xing Xie:
CompeteAI: Understanding the Competition Dynamics of Large Language Model-based Agents. - Simon Buchholz, Bernhard Schölkopf:
Robustness of Nonlinear Representation Learning. - Mustapha Bounoua, Giulio Franzese, Pietro Michiardi:
SΩI: Score-based O-INFORMATION Estimation. - Uri Sherman, Alon Cohen, Tomer Koren, Yishay Mansour:
Rate-Optimal Policy Optimization for Linear Markov Decision Processes. - Danni Yang, Jiayi Ji, Yiwei Ma, Tianyu Guo, Haowei Wang, Xiaoshuai Sun, Rongrong Ji:
SAM as the Guide: Mastering Pseudo-Label Refinement in Semi-Supervised Referring Expression Segmentation. - Francisco Eiras, Aleksandar Petrov, Bertie Vidgen, Christian Schröder de Witt, Fabio Pizzati, Katherine Elkins, Supratik Mukhopadhyay, Adel Bibi, Botos Csaba, Fabro Steibel, Fazl Barez, Genevieve Smith, Gianluca Guadagni, Jon Chun, Jordi Cabot, Joseph Marvin Imperial, Juan A. Nolazco-Flores, Lori Landay, Matthew Thomas Jackson, Paul Röttger, Philip H. S. Torr, Trevor Darrell, Yong Suk Lee, Jakob N. Foerster:
Position: Near to Mid-term Risks and Opportunities of Open-Source Generative AI. - Di Wu, Wasi Uddin Ahmad, Dejiao Zhang, Murali Krishna Ramanathan, Xiaofei Ma:
Repoformer: Selective Retrieval for Repository-Level Code Completion. - Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Müller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, Dustin Podell, Tim Dockhorn, Zion English, Robin Rombach:
Scaling Rectified Flow Transformers for High-Resolution Image Synthesis. - Sayash Kapoor, Rishi Bommasani, Kevin Klyman, Shayne Longpre, Ashwin Ramaswami, Peter Cihon, Aspen K. Hopkins, Kevin Bankston, Stella Biderman, Miranda Bogen, Rumman Chowdhury, Alex Engler, Peter Henderson, Yacine Jernite, Seth Lazar, Stefano Maffulli, Alondra Nelson, Joelle Pineau, Aviya Skowron, Dawn Song, Victor Storchan, Daniel Zhang, Daniel E. Ho, Percy Liang, Arvind Narayanan:
Position: On the Societal Impact of Open Foundation Models. - Zachary Novack, Julian J. McAuley, Taylor Berg-Kirkpatrick, Nicholas J. Bryan:
DITTO: Diffusion Inference-Time T-Optimization for Music Generation. - Woojin Cho, Minju Jo, Haksoo Lim, Kookjin Lee, Dongeun Lee, Sanghyun Hong, Noseong Park:
Parameterized Physics-informed Neural Networks for Parameterized PDEs. - Yifan Xia, Xianliang Yang, Zichuan Liu, Zhihao Liu, Lei Song, Jiang Bian:
Position: Rethinking Post-Hoc Search-Based Neural Approaches for Solving Large-Scale Traveling Salesman Problems. - Gauthier Guinet, Behrooz Omidvar-Tehrani, Anoop Deoras, Laurent Callot:
Automated Evaluation of Retrieval-Augmented Language Models with Task-Specific Exam Generation. - Ruijie Zheng, Ching-An Cheng, Hal Daumé III, Furong Huang, Andrey Kolobov:
PRISE: LLM-Style Sequence Compression for Learning Temporal Action Abstractions in Control. - Jia Shi, Gautam Rajendrakumar Gare, Jinjin Tian, Siqi Chai, Zhiqiu Lin, Arun Balajee Vasudevan, Di Feng, Francesco Ferroni, Shu Kong:
LCA-on-the-Line: Benchmarking Out of Distribution Generalization with Class Taxonomies. - Tijana Zrnic, Emmanuel J. Candès:
Active Statistical Inference. - Yujia Huang, Adishree Ghatare, Yuanzhe Liu, Ziniu Hu, Qinsheng Zhang, Chandramouli Shama Sastry, Siddharth Gururani, Sageev Oore, Yisong Yue:
Symbolic Music Generation with Non-Differentiable Rule Guided Diffusion. - Zeqian Ju, Yuancheng Wang, Kai Shen, Xu Tan, Detai Xin, Dongchao Yang, Eric Liu, Yichong Leng, Kaitao Song, Siliang Tang, Zhizheng Wu, Tao Qin, Xiangyang Li, Wei Ye, Shikun Zhang, Jiang Bian, Lei He, Jinyu Li, Sheng Zhao:
NaturalSpeech 3: Zero-Shot Speech Synthesis with Factorized Codec and Diffusion Models. - Jiachun Li, Kaining Shi, David Simchi-Levi:
Privacy Preserving Adaptive Experiment Design. - Lingfeng Shen, Aayush Mishra, Daniel Khashabi:
Position: Do pretrained Transformers Learn In-Context by Gradient Descent? - Jessy Lin, Yuqing Du, Olivia Watkins, Danijar Hafner, Pieter Abbeel, Dan Klein, Anca D. Dragan:
Learning to Model the World With Language. - Riley Simmons-Edler, Ryan Paul Badman, Shayne Longpre, Kanaka Rajan:
Position: AI-Powered Autonomous Weapons Risk Geopolitical Instability and Threaten AI Research. - Vassilis Papadopoulos, Jérémie Wenger, Clément Hongler:
Arrows of Time for Large Language Models. - Chengshu Li, Jacky Liang, Andy Zeng, Xinyun Chen, Karol Hausman, Dorsa Sadigh, Sergey Levine, Li Fei-Fei, Fei Xia, Brian Ichter:
Chain of Code: Reasoning with a Language Model-Augmented Code Emulator. - Pratik Rathore, Weimu Lei, Zachary Frangella, Lu Lu, Madeleine Udell:
Challenges in Training PINNs: A Loss Landscape Perspective. - Ryan Greenblatt, Buck Shlegeris, Kshitij Sachan, Fabien Roger:
AI Control: Improving Safety Despite Intentional Subversion. - Bowen Zhao, Hannaneh Hajishirzi, Qingqing Cao:
APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference. - Jost Tobias Springenberg, Abbas Abdolmaleki, Jingwei Zhang, Oliver Groth, Michael Bloesch, Thomas Lampe, Philemon Brakel, Sarah Bechtle, Steven Kapturowski, Roland Hafner, Nicolas Heess, Martin A. Riedmiller:
Offline Actor-Critic Reinforcement Learning Scales to Large Models. - Zach Evans, CJ Carr, Josiah Taylor, Scott H. Hawley, Jordi Pons:
Fast Timing-Conditioned Latent Audio Diffusion. - Kai Zhang, Yi Luan, Hexiang Hu, Kenton Lee, Siyuan Qiao, Wenhu Chen, Yu Su, Ming-Wei Chang:
MagicLens: Self-Supervised Image Retrieval with Open-Ended Instructions. - Max Dupré la Tour, Monika Henzinger, David Saulpic:
Making Old Things New: A Unified Algorithm for Differentially Private Clustering. - Wei Zhang, Chaoqun Wan, Yonggang Zhang, Yiu-ming Cheung, Xinmei Tian, Xu Shen, Jieping Ye:
Interpreting and Improving Large Language Models in Arithmetic Calculation. - Shengsheng Lin, Weiwei Lin, Wentai Wu, Haojun Chen, Junjie Yang:
SparseTSF: Modeling Long-term Time Series Forecasting with 1k Parameters. - Zijian Liu, Zhengyuan Zhou:
On the Last-Iterate Convergence of Shuffling Gradient Methods. - Bowen Jing, Bonnie Berger, Tommi S. Jaakkola:
AlphaFold Meets Flow Matching for Generating Protein Ensembles. - Sajjad Zarifzadeh, Philippe Liu, Reza Shokri:
Low-Cost High-Power Membership Inference Attacks. - Letian Fu, Gaurav Datta, Huang Huang, William Chung-Ho Panitch, Jaimyn Drake, Joseph Ortiz, Mustafa Mukadam, Mike Lambeta, Roberto Calandra, Ken Goldberg:
A Touch, Vision, and Language Dataset for Multimodal Alignment. - Dora Zhao, Jerone T. A. Andrews, Orestis Papakyriakopoulos, Alice Xiang:
Position: Measure Dataset Diversity, Don't Just Claim It. - Zoe Piran, Michal Klein, James Thornton, Marco Cuturi:
Contrasting Multiple Representations with the Multi-Marginal Matching Gap. - Jiahan Zhang, Qi Wei, Feng Liu, Lei Feng:
Candidate Pseudolabel Learning: Enhancing Vision-Language Models by Prompt Tuning with Unlabeled Data. - Haonan Wang, Qianli Shen, Yao Tong, Yang Zhang, Kenji Kawaguchi:
The Stronger the Diffusion Model, the Easier the Backdoor: Data Poisoning to Induce Copyright Breaches Without Adjusting Finetuning Pipeline. - Nicholas Carlini, Daniel Paleka, Krishnamurthy Dj Dvijotham, Thomas Steinke, Jonathan Hayase, A. Feder Cooper, Katherine Lee, Matthew Jagielski, Milad Nasr, Arthur Conmy, Eric Wallace, David Rolnick, Florian Tramèr:
Stealing part of a production language model. - Hyunin Lee, Ming Jin, Javad Lavaei, Somayeh Sojoudi:
Pausing Policy Learning in Non-stationary Reinforcement Learning. - Akbir Khan, John Hughes, Dan Valentine, Laura Ruis, Kshitij Sachan, Ansh Radhakrishnan, Edward Grefenstette, Samuel R. Bowman, Tim Rocktäschel, Ethan Perez:
Debating with More Persuasive LLMs Leads to More Truthful Answers. - Sanyam Agarwal, Markus Bläser:
Probabilistic Generating Circuits - Demystified. - Mikel Malagon, Josu Ceberio, José Antonio Lozano:
Self-Composing Policies for Scalable Continual Reinforcement Learning. - Jian Xu, Delu Zeng, John W. Paisley:
Sparse Inducing Points in Deep Gaussian Processes: Enhancing Modeling with Denoising Diffusion Variational Inference. - Feihu Huang:
Optimal Hessian/Jacobian-Free Nonconvex-PL Bilevel Optimization. - Li Sun, Zhenhao Huang, Hao Peng, Yujie Wang, Chunyang Liu, Philip S. Yu:
LSEnet: Lorentz Structural Entropy Neural Network for Deep Graph Clustering. - Dan Kondratyuk, Lijun Yu, Xiuye Gu, José Lezama, Jonathan Huang, Grant Schindler, Rachel Hornung, Vighnesh Birodkar, Jimmy Yan, Ming-Chang Chiu, Krishna Somandepalli, Hassan Akbari, Yair Alon, Yong Cheng, Joshua V. Dillon, Agrim Gupta, Meera Hahn, Anja Hauth, David Hendon, Alonso Martinez, David Minnen, Mikhail Sirotenko, Kihyuk Sohn, Xuan Yang, Hartwig Adam, Ming-Hsuan Yang, Irfan Essa, Huisheng Wang, David A. Ross, Bryan Seybold, Lu Jiang:
VideoPoet: A Large Language Model for Zero-Shot Video Generation. - Yang Jin, Zhicheng Sun, Kun Xu, Kun Xu, Liwei Chen, Hao Jiang, Quzhe Huang, Chengru Song, Yuliang Liu, Di Zhang, Yang Song, Kun Gai, Yadong Mu:
Video-LaVIT: Unified Video-Language Pre-training with Decoupled Visual-Motional Tokenization. - Florian Tramèr, Gautam Kamath, Nicholas Carlini:
Position: Considerations for Differentially Private Learning with Large-Scale Public Pretraining. - Ce Liu, Suryansh Kumar, Shuhang Gu, Radu Timofte, Yao Yao, Luc Van Gool:
Stereo Risk: A Continuous Modeling Approach to Stereo Matching. - Zhuanghua Liu, Cheng Chen, Luo Luo, Bryan Kian Hsiang Low:
Zeroth-Order Methods for Constrained Nonconvex Nonsmooth Stochastic Optimization. - Idan Attias, Gintare Karolina Dziugaite, Mahdi Haghifam, Roi Livni, Daniel M. Roy:
Information Complexity of Stochastic Convex Optimization: Applications to Generalization, Memorization, and Tracing. - Yunfan Li, Peng Hu, Dezhong Peng, Jiancheng Lv, Jianping Fan, Xi Peng:
Image Clustering with External Guidance. - Andrew Lee, Xiaoyan Bai, Itamar Pres, Martin Wattenberg, Jonathan K. Kummerfeld, Rada Mihalcea:
A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity. - Yang Zhang, Zhewei Wei, Ye Yuan, Chongxuan Li, Wenbing Huang:
EquiPocket: an E(3)-Equivariant Geometric Graph Neural Network for Ligand Binding Site Prediction. - Heting Gao, Kaizhi Qian, Junrui Ni, Chuang Gan, Mark A. Hasegawa-Johnson, Shiyu Chang, Yang Zhang:
Speech Self-Supervised Learning Using Diffusion Model Synthetic Data. - Minyoung Huh, Brian Cheung, Tongzhou Wang, Phillip Isola:
Position: The Platonic Representation Hypothesis. - Liam Collins, Hamed Hassani, Mahdi Soltanolkotabi, Aryan Mokhtari, Sanjay Shakkottai:
Provable Multi-Task Representation Learning by Two-Layer ReLU Neural Networks. - Sucheng Ren, Zeyu Wang, Hongru Zhu, Junfei Xiao, Alan L. Yuille, Cihang Xie:
Rejuvenating image-GPT as Strong Visual Representation Learners. - Eduard Gorbunov, Abdurakhmon Sadiev, Marina Danilova, Samuel Horváth, Gauthier Gidel, Pavel E. Dvurechensky, Alexander V. Gasnikov, Peter Richtárik:
High-Probability Convergence for Composite and Distributed Stochastic Minimization and Variational Inequalities with Heavy-Tailed Noise. - Zhengyang Hu, Song Kang, Qunsong Zeng, Kaibin Huang, Yanchao Yang:
InfoNet: Neural Estimation of Mutual Information without Test-Time Optimization. - Shengqiong Wu, Hao Fei, Leigang Qu, Wei Ji, Tat-Seng Chua:
NExT-GPT: Any-to-Any Multimodal LLM. - Yeonhong Park, Jake Hyun, SangLyul Cho, Bonggeun Sim, Jae W. Lee:
Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs. - Uri Stemmer:
Private Truly-Everlasting Robust-Prediction. - Yaodong Yu, Maziar Sanjabi, Yi Ma, Kamalika Chaudhuri, Chuan Guo:
ViP: A Differentially Private Foundation Model for Computer Vision. - Jake Bruce, Michael D. Dennis, Ashley Edwards, Jack Parker-Holder, Yuge Shi, Edward Hughes, Matthew Lai, Aditi Mavalankar, Richie Steigerwald, Chris Apps, Yusuf Aytar, Sarah Bechtle, Feryal M. P. Behbahani, Stephanie C. Y. Chan, Nicolas Heess, Lucy Gonzalez, Simon Osindero, Sherjil Ozair, Scott E. Reed, Jingwei Zhang, Konrad Zolna, Jeff Clune, Nando de Freitas, Satinder Singh, Tim Rocktäschel:
Genie: Generative Interactive Environments. - Romain Ilbert, Ambroise Odonnat, Vasilii Feofanov, Aladin Virmaux, Giuseppe Paolo, Themis Palpanas, Ievgen Redko:
SAMformer: Unlocking the Potential of Transformers in Time Series Forecasting with Sharpness-Aware Minimization and Channel-Wise Attention. - Zhengqi Pei, Anran Zhang, Shuhui Wang, Xiangyang Ji, Qingming Huang:
Data-free Neural Representation Compression with Riemannian Neural Dynamics. - Hao Fei, Shengqiong Wu, Wei Ji, Hanwang Zhang, Meishan Zhang, Mong-Li Lee, Wynne Hsu:
Video-of-Thought: Step-by-Step Video Reasoning from Perception to Cognition. - Shuaicheng Niu, Chunyan Miao, Guohao Chen, Pengcheng Wu, Peilin Zhao:
Test-Time Model Adaptation with Only Forward Passes. - Nianzu Yang, Kaipeng Zeng, Haotian Lu, Yexin Wu, Zexin Yuan, Danni Chen, Shengdian Jiang, Jiaxiang Wu, Yimin Wang, Junchi Yan:
MorphGrower: A Synchronized Layer-by-layer Growing Approach for Plausible Neuronal Morphology Generation. - Ruisi Cai, Saurav Muralidharan, Greg Heinrich, Hongxu Yin, Zhangyang Wang, Jan Kautz, Pavlo Molchanov:
Flextron: Many-in-One Flexible Large Language Model. - Sungwoo Park, Dongjun Kim, Ahmed Alaa:
Mean-field Chaos Diffusion Models. - Wenjie Xu, Wenbin Wang, Yuning Jiang, Bratislav Svetozarevic, Colin N. Jones:
Principled Preferential Bayesian Optimization. - Songtao Liu, Hanjun Dai, Yue Zhao, Peng Liu:
Preference Optimization for Molecule Synthesis with Conditional Residual Energy-based Models. - Haotong Qin, Xudong Ma, Xingyu Zheng, Xiaoyang Li, Yang Zhang, Shouda Liu, Jie Luo, Xianglong Liu, Michele Magno:
Accurate LoRA-Finetuning Quantization of LLMs via Information Retention. - Hanting Chen, Liuzhi Cheng, Xutao Wang, Yuchuan Tian, Yunhe Wang:
DiJiang: Efficient Large Language Models through Compact Kernelization. - Mingchen Zhuge, Wenyi Wang, Louis Kirsch, Francesco Faccio, Dmitrii Khizbullin, Jürgen Schmidhuber:
GPTSwarm: Language Agents as Optimizable Graphs. - Shih-Yang Liu, Chien-Yi Wang, Hongxu Yin, Pavlo Molchanov, Yu-Chiang Frank Wang, Kwang-Ting Cheng, Min-Hung Chen:
DoRA: Weight-Decomposed Low-Rank Adaptation. - Florian Karl, Lukas Malte Kemeter, Gabriel Dax, Paulina Sierak:
Position: Embracing Negative Results in Machine Learning. - Xiuwen Gong, Nitin Bisht, Guandong Xu:
Does Label Smoothing Help Deep Partial Label Learning?
Accept (Spotlight)
- Simran Arora, Sabri Eyuboglu, Michael Zhang, Aman Timalsina, Silas Alberti, James Zou, Atri Rudra, Christopher Ré:
Simple linear attention language models balance the recall-throughput tradeoff. - Ali Shirali, Rediet Abebe, Moritz Hardt:
Allocation Requires Prediction Only if Inequality Is Low. - Hongbin Pei, Yu Li, Huiqi Deng, Jingxin Hai, Pinghui Wang, Jie Ma, Jing Tao, Yuheng Xiong, Xiaohong Guan:
Multi-Track Message Passing: Tackling Oversmoothing and Oversquashing in Graph Learning via Preventing Heterophily Mixing. - Haotian Sun, Yuchen Zhuang, Wei Wei, Chao Zhang, Bo Dai:
BBox-Adapter: Lightweight Adapting for Black-Box Large Language Models. - Patrik Reizinger, Szilvia Ujváry, Anna Mészáros, Anna Kerekes, Wieland Brendel, Ferenc Huszár:
Position: Understanding LLMs Requires More Than Statistical Generalization. - Baoying Chen, Jishen Zeng, Jianquan Yang, Rui Yang:
DRCT: Diffusion Reconstruction Contrastive Training towards Universal Detection of Diffusion Generated Images. - Luca Franceschi, Michele Donini, Cédric Archambeau, Matthias W. Seeger:
Explaining Probabilistic Models with Distributional Values. - Haoxuan Li, Chunyuan Zheng, Shuyi Wang, Kunhan Wu, Eric Hao Wang, Peng Wu, Zhi Geng, Xu Chen, Xiao-Hua Zhou:
Relaxing the Accurate Imputation Assumption in Doubly Robust Learning for Debiased Collaborative Filtering. - Yuanbiao Gou, Haiyu Zhao, Boyun Li, Xinyan Xiao, Xi Peng:
Test-Time Degradation Adaptation for Open-Set Image Restoration. - Anqi Mao, Mehryar Mohri, Yutao Zhong:
Regression with Multi-Expert Deferral. - Yuxuan Yin, Yu Wang, Peng Li:
High-Dimensional Bayesian Optimization via Semi-Supervised Learning with Optimized Unlabeled Data Sampling. - Som Sagar, Aditya Taparia, Ransalu Senanayake:
Failures Are Fated, But Can Be Faded: Characterizing and Mitigating Unwanted Behaviors in Large-Scale Vision and Language Models. - Weiyu Chen, James T. Kwok:
Efficient Pareto Manifold Learning with Low-Rank Structure. - Jun-Peng Jiang, Han-Jia Ye, Leye Wang, Yang Yang, Yuan Jiang, De-Chuan Zhan:
Tabular Insights, Visual Impacts: Transferring Expertise from Tables to Images. - Václav Vorácek, Tomás Werner:
Convergence of Some Convex Message Passing Algorithms to a Fixed Point. - Kevin Frans, Seohong Park, Pieter Abbeel, Sergey Levine:
Unsupervised Zero-Shot Reinforcement Learning via Functional Reward Encodings. - Shuai Zhang, Chuan Zhou, Yang Aron Liu, Peng Zhang, Xixun Lin, Zhi-Ming Ma:
Neural Jump-Diffusion Temporal Point Processes. - Louis Sharrock, Jack Simons, Song Liu, Mark Beaumont:
Sequential Neural Score Estimation: Likelihood-Free Inference with Conditional Score Based Diffusion Models. - Kawin Ethayarajh, Winnie Xu, Niklas Muennighoff, Dan Jurafsky, Douwe Kiela:
Model Alignment as Prospect Theoretic Optimization. - Guillaume Sanchez, Alexander Spangher, Honglu Fan, Elad Levi, Stella Biderman:
Stay on Topic with Classifier-Free Guidance. - Haocheng Xi, Yuxiang Chen, Kang Zhao, Kai Jun Teh, Jianfei Chen, Jun Zhu:
Jetfire: Efficient and Accurate Transformer Pretraining with INT8 Data Flow and Per-Block Quantization. - Alexander Wettig, Aatmik Gupta, Saumya Malik, Danqi Chen:
QuRating: Selecting High-Quality Data for Training Language Models. - Ahmed Khaled, Chi Jin:
Tuning-Free Stochastic Optimization. - Yusuf Sale, Viktor Bengs, Michele Caprio, Eyke Hüllermeier:
Second-Order Uncertainty Quantification: A Distance-Based Approach. - Jaewook Lee, Hanseul Cho, Chulhee Yun:
Fundamental Benefit of Alternating Updates in Minimax Optimization. - Xiaolong Zou, Xingxing Cao, Xiaojiao Yang, Bo Hong:
Leveraging Attractor Dynamics in Spatial Navigation for Better Language Parsing. - Chulin Xie, Zinan Lin, Arturs Backurs, Sivakanth Gopi, Da Yu, Huseyin A. Inan, Harsha Nori, Haotian Jiang, Huishuai Zhang, Yin Tat Lee, Bo Li, Sergey Yekhanin:
Differentially Private Synthetic Data via Foundation Model APIs 2: Text. - Yinjun Wu, Mayank Keoliya, Kan Chen, Neelay Velingker, Ziyang Li, Emily J. Getzen, Qi Long, Mayur Naik, Ravi B. Parikh, Eric Wong:
DISCRET: Synthesizing Faithful Explanations For Treatment Effect Estimation. - Haitao Mao, Zhikai Chen, Wenzhuo Tang, Jianan Zhao, Yao Ma, Tong Zhao, Neil Shah, Mikhail Galkin, Jiliang Tang:
Position: Graph Foundation Models Are Already Here. - Zipeng Xiao, Zhongkai Hao, Bokai Lin, Zhijie Deng, Hang Su:
Improved Operator Learning by Orthogonal Attention. - Kexin Pei, Weichen Li, Qirui Jin, Shuyang Liu, Scott Geng, Lorenzo Cavallaro, Junfeng Yang, Suman Jana:
Exploiting Code Symmetries for Learning Program Semantics. - Lang Feng, Pengjie Gu, Bo An, Gang Pan:
Resisting Stochastic Risks in Diffusion Planners with the Trajectory Aggregation Tree. - Michael S. Albergo, Mark Goldstein, Nicholas Matthew Boffi, Rajesh Ranganath, Eric Vanden-Eijnden:
Stochastic Interpolants with Data-Dependent Couplings. - Amit Attia, Tomer Koren:
How Free is Parameter-Free Stochastic Optimization? - Clayton Sanford, Daniel Hsu, Matus Telgarsky:
Transformers, parallel computation, and logarithmic depth. - Weike Fang, Zhejian Zhou, Junzhou He, Weihang Wang:
StackSight: Unveiling WebAssembly through Large Language Models and Neurosymbolic Chain-of-Thought Decompilation. - Muhammad Qasim Elahi, Lai Wei, Murat Kocaoglu, Mahsa Ghasemi:
Adaptive Online Experimental Design for Causal Discovery. - Sungyoon Kim, Mert Pilanci:
Convex Relaxations of ReLU Neural Networks Approximate Global Optima in Polynomial Time. - Aditya Gangrade, Aditya Gopalan, Venkatesh Saligrama, Clayton Scott:
Testing the Feasibility of Linear Programs with Bandit Feedback. - Diana Cai, Chirag Modi, Loucas Pillaud-Vivien, Charles Margossian, Robert M. Gower, David M. Blei, Lawrence K. Saul:
Batch and match: black-box variational inference with a score-based divergence. - Harley Wiltzer, Jesse Farebrother, Arthur Gretton, Yunhao Tang, André Barreto, Will Dabney, Marc G. Bellemare, Mark Rowland:
A Distributional Analogue to the Successor Representation. - Jiarong Pan, Stefan Falkner, Felix Berkenkamp, Joaquin Vanschoren:
MALIBO: Meta-learning for Likelihood-free Bayesian Optimization. - Sina Akbari, Negar Kiyavash:
Triple Changes Estimator for Targeted Policies. - Sai Shankar Narasimhan, Shubhankar Agarwal, Oguzhan Akcin, Sujay Sanghavi, Sandeep P. Chinchali:
Time Weaver: A Conditional Time Series Generation Model. - Subbarao Kambhampati, Karthik Valmeekam, Lin Guan, Mudit Verma, Kaya Stechly, Siddhant Bhambri, Lucas Saldyt, Anil Murthy:
Position: LLMs Can't Plan, But Can Help Planning in LLM-Modulo Frameworks. - Michael T. Matthews, Michael Beukman, Benjamin Ellis, Mikayel Samvelyan, Matthew Thomas Jackson, Samuel Coward, Jakob Nicolaus Foerster:
Craftax: A Lightning-Fast Benchmark for Open-Ended Reinforcement Learning. - Maude Lizaire, Michael Rizvi-Martel, Marawan Gamal Abdel Hameed, Guillaume Rabusseau:
A Tensor Decomposition Perspective on Second-order RNNs. - Zalan Fabian, Berk Tinaz, Mahdi Soltanolkotabi:
Adapt and Diffuse: Sample-adaptive Reconstruction via Latent Diffusion Models. - Zeyuan Allen-Zhu, Yuanzhi Li:
Physics of Language Models: Part 3.1, Knowledge Storage and Extraction. - William J. Swartworth, David P. Woodruff:
Fast Sampling-Based Sketches for Tensors. - Jongha Jon Ryu, Gregory W. Wornell:
Gambling-Based Confidence Sequences for Bounded Random Vectors. - Xing Han Lu, Zdenek Kasner, Siva Reddy:
WebLINX: Real-World Website Navigation with Multi-Turn Dialogue. - Shayne Longpre, Robert Mahari, Naana Obeng-Marnu, William Brannon, Tobin South, Katy Ilonka Gero, Alex Pentland, Jad Kabbara:
Position: Data Authenticity, Consent, & Provenance for AI are all broken: what will it take to fix them? - Michael Sun, Minghao Guo, Weize Yuan, Veronika Thost, Crystal Elaine Owens, Aristotle Franklin Grosz, Sharvaa Selvan, Katelyn Zhou, Hassan Mohiuddin, Benjamin J. Pedretti, Zachary P. Smith, Jie Chen, Wojciech Matusik:
Representing Molecules as Random Walks Over Interpretable Grammars. - Alkis Kalavasis, Amin Karbasi, Kasper Green Larsen, Grigoris Velegkas, Felix Zhou:
Replicable Learning of Large-Margin Halfspaces. - Samuel Pfrommer, Brendon G. Anderson, Somayeh Sojoudi:
Transport of Algebraic Structure to Latent Embeddings. - Yanda Chen, Ruiqi Zhong, Narutatsu Ri, Chen Zhao, He He, Jacob Steinhardt, Zhou Yu, Kathleen R. McKeown:
Do Models Explain Themselves? Counterfactual Simulatability of Natural Language Explanations. - Ning Liu, Yiming Fan, Xianyi Zeng, Milan Klöwer, Lu Zhang, Yue Yu:
Harnessing the Power of Neural Operators with Automatically Encoded Conservation Laws. - Louis Grenioux, Maxence Noble, Marylou Gabrié, Alain Oliviero Durmus:
Stochastic Localization via Iterative Posterior Sampling. - Nina Vesseron, Marco Cuturi:
On a Neural Implementation of Brenier's Polar Factorization. - Zhiyao Luo, Yangchen Pan, Peter J. Watkinson, Tingting Zhu:
Position: Reinforcement Learning in Dynamic Treatment Regimes Needs Critical Reexamination. - Tianlin Liu, Shangmin Guo, Leonardo Bianco, Daniele Calandriello, Quentin Berthet, Felipe Llinares-López, Jessica Hoffmann, Lucas Dixon, Michal Valko, Mathieu Blondel:
Decoding-time Realignment of Language Models. - Anastasios Tsiamis, Aren Karapetyan, Yueshan Li, Efe C. Balta, John Lygeros:
Predictive Linear Online Tracking for Unknown Targets. - Vincent Cohen-Addad, Tommaso d'Orsi, Alessandro Epasto, Vahab Mirrokni, Peilin Zhong:
Perturb-and-Project: Differentially Private Similarities and Marginals. - Karthik Abinav Sankararaman, Aravind Srinivasan, Pan Xu:
Promoting External and Internal Equities Under Ex-Ante/Ex-Post Metrics in Online Resource Allocation. - Tianci Liu, Haoyu Wang, Shiyang Wang, Yu Cheng, Jing Gao:
LIDAO: Towards Limited Interventions for Debiasing (Large) Language Models. - Matías Altamirano, François-Xavier Briol, Jeremias Knoblauch:
Robust and Conjugate Gaussian Process Regression. - Logan Engstrom, Axel Feldmann, Aleksander Madry:
DsDm: Model-Aware Dataset Selection with Datamodels. - Hongye Jin, Xiaotian Han, Jingfeng Yang, Zhimeng Jiang, Zirui Liu, Chia-Yuan Chang, Huiyuan Chen, Xia Hu:
LLM Maybe LongLM: SelfExtend LLM Context Window Without Tuning. - Liangzu Peng, Wotao Yin:
Block Acceleration Without Momentum: On Optimal Stepsizes of Block Gradient Descent for Least-Squares. - Shahaf Bassan, Guy Amir, Guy Katz:
Local vs. Global Interpretability: A Computational Complexity Perspective. - Congyu Qiao, Ning Xu, Yihao Hu, Xin Geng:
ULAREF: A Unified Label Refinement Framework for Learning with Inaccurate Supervision. - Alessandro Montenegro, Marco Mussi, Alberto Maria Metelli, Matteo Papini:
Learning Optimal Deterministic Policies with Stochastic Policy Gradients. - Davide Legacci, Panayotis Mertikopoulos, Bary S. R. Pradelski:
A Geometric Decomposition of Finite Games: Convergence vs. Recurrence under Exponential Weights. - Fengdi Che, Chenjun Xiao, Jincheng Mei, Bo Dai, Ramki Gummadi, Oscar A. Ramirez, Christopher K. Harris, A. Rupam Mahmood, Dale Schuurmans:
Target Networks and Over-parameterization Stabilize Off-policy Bootstrapping with Function Approximation. - Aaditya K. Singh, Ted Moskovitz, Felix Hill, Stephanie C. Y. Chan, Andrew M. Saxe:
What needs to go right for an induction head? A mechanistic study of in-context learning circuits and their formation. - Shikun Liu, Deyu Zou, Han Zhao, Pan Li:
Pairwise Alignment Improves Graph Domain Adaptation. - Liam Hodgson, Danilo Bzdok:
Estimating Unknown Population Sizes Using the Hypergeometric Distribution. - Andrés Altieri, Marco Romanelli, Georg Pichler, Florence Alberge, Pablo Piantanida:
Beyond the Norms: Detecting Prediction Errors in Regression Models. - Ivana Balazevic, Yuge Shi, Pinelopi Papalampidi, Rahma Chaabouni, Skanda Koppula, Olivier J. Hénaff:
Memory Consolidation Enables Long-Context Video Understanding. - Yufei Huang, Odin Zhang, Lirong Wu, Cheng Tan, Haitao Lin, Zhangyang Gao, Siyuan Li, Stan Z. Li:
Re-Dock: Towards Flexible and Realistic Molecular Docking with Diffusion Bridge. - Catalin Mitelut, Benjamin J. Smith, Peter Vamplew:
Position: Intent-aligned AI Systems Must Optimize for Agency Preservation. - Maciej Wolczyk, Bartlomiej Cupial, Mateusz Ostaszewski, Michal Bortkiewicz, Michal Zajac, Razvan Pascanu, Lukasz Kucinski, Piotr Milos:
Fine-tuning Reinforcement Learning Models is Secretly a Forgetting Mitigation Problem. - Yangfan Liu, Jiaqi Lv, Xin Geng, Ning Xu:
Learning with Partial-Label and Unlabeled Data: A Uniform Treatment for Supervision Redundancy and Insufficiency. - Adrian Müller, Pragnya Alatur, Volkan Cevher, Giorgia Ramponi, Niao He:
Truly No-Regret Learning in Constrained MDPs. - Umberto M. Tomasini, Matthieu Wyart:
How Deep Networks Learn Sparse and Hierarchical Data: the Sparse Random Hierarchy Model. - Esther Rolf, Konstantin Klemmer, Caleb Robinson, Hannah Kerner:
Position: Mission Critical - Satellite Data is a Distinct Modality in Machine Learning. - Aaron Archer, Matthew Fahrbach, Kuikui Liu, Prakash Prabhu:
Practical Performance Guarantees for Pipelined DNN Inference. - Lucas Theis:
Position: What makes an image realistic? - Ruidong Wu, Ruihan Guo, Rui Wang, Shitong Luo, Yue Xu, Jiahan Li, Jianzhu Ma, Qiang Liu, Yunan Luo, Jian Peng:
FAFE: Immune Complex Modeling with Geodesic Distance Loss on Noisy Group Frames. - Hao Di, Haishan Ye, Yueling Zhang, Xiangyu Chang, Guang Dai, Ivor W. Tsang:
Double Variance Reduction: A Smoothing Trick for Composite Optimization Problems without First-Order Gradient. - Zhihai Wang, Lei Chen, Jie Wang, Yinqi Bai, Xing Li, Xijun Li, Mingxuan Yuan, Jianye Hao, Yongdong Zhang, Feng Wu:
A Circuit Domain Generalization Framework for Efficient Logic Synthesis in Chip Design. - Ingvar M. Ziemann, Stephen Tu, George J. Pappas, Nikolai Matni:
Sharp Rates in Dependent Learning Theory: Avoiding Sample Size Deflation for the Square Loss. - Sotiris Anagnostidis, Gregor Bachmann, Imanol Schlag, Thomas Hofmann:
Navigating Scaling Laws: Compute Optimality in Adaptive Model Training. - Dake Zhang, Boxiang Lyu, Shuang Qiu, Mladen Kolar, Tong Zhang:
Pessimism Meets Risk: Risk-Sensitive Offline Reinforcement Learning. - Chen Xu, Hanyang Jiang, Yao Xie:
Conformal prediction for multi-dimensional time series by ellipsoidal sets. - Hugo Cui, Luca Pesce, Yatin Dandi, Florent Krzakala, Yue M. Lu, Lenka Zdeborová, Bruno Loureiro:
Asymptotics of feature learning in two-layer networks after one gradient-step. - Minji Lee, Luiz Felipe Vecchietti, Hyunkyu Jung, Hyun Joo Ro, Meeyoung Cha, Ho Min Kim:
Robust Optimization in Protein Fitness Landscapes Using Reinforcement Learning in Latent Space. - Yulong Huang, Xiaopeng Lin, Hongwei Ren, Haotian Fu, Yue Zhou, Zunchang Liu, Biao Pan, Bojun Cheng:
CLIF: Complementary Leaky Integrate-and-Fire Neuron for Spiking Neural Networks. - Xu Yang, Huaxiu Yao, Ying Wei:
One Meta-tuned Transformer is What You Need for Few-shot Learning. - Johan Samir Obando-Ceron, Ghada Sokar, Timon Willi, Clare Lyle, Jesse Farebrother, Jakob Nicolaus Foerster, Gintare Karolina Dziugaite, Doina Precup, Pablo Samuel Castro:
Mixtures of Experts Unlock Parameter Scaling for Deep RL. - Yunyan Bai, Yuxing Liu, Luo Luo:
On the Complexity of Finite-Sum Smooth Optimization under the Polyak-Łojasiewicz Condition. - Sascha Xu, Nils Philipp Walter, Janis Kalofolias, Jilles Vreeken:
Learning Exceptional Subgroups by End-to-End Maximizing KL-Divergence. - Wang Chi Cheung, Lixing Lyu:
Leveraging (Biased) Information: Multi-armed Bandits with Offline Data. - Anpeng Wu, Haoxuan Li, Kun Kuang, Keli Zhang, Fei Wu:
Learning Causal Relations from Subsampled Time Series with Two Time-Slices. - Yuchao Lin, Jacob Helwig, Shurui Gui, Shuiwang Ji:
Equivariance via Minimal Frame Averaging for More Symmetries and Efficiency. - Sayan Bhattacharya, Gramoz Goranci, Shaofeng H.-C. Jiang, Yi Qian, Yubo Zhang:
Dynamic Facility Location in High Dimensional Euclidean Spaces. - Xisen Jin, Xiang Ren:
What Will My Model Forget? Forecasting Forgotten Examples in Language Model Refinement. - Sangmin Lee, Abbas Mammadov, Jong Chul Ye:
Defining Neural Network Architecture through Polytope Structures of Datasets. - Siwei Wei, Xudong Zhang, Zhiyang Zhou, Yan Cai:
Extending Test-Time Augmentation with Metamorphic Relations for Combinatorial Problems. - Xiyu Wang, Baijiong Lin, Daochang Liu, Ying-Cong Chen, Chang Xu:
Bridging Data Gaps in Diffusion Models with Adversarial Noise-Based Transfer Learning. - Yu-Hu Yan, Jing Wang, Peng Zhao:
Handling Heterogeneous Curvatures in Bandit LQR Control. - Pratik Patil, Jin-Hong Du, Ryan J. Tibshirani:
Optimal Ridge Regularization for Out-of-Distribution Prediction. - Caixing Wang, Xingdong Feng:
Optimal Kernel Quantile Learning with Random Features. - Zelei Cheng, Xian Wu, Jiahao Yu, Sabrina Yang, Gang Wang, Xinyu Xing:
RICE: Breaking Through the Training Bottlenecks of Reinforcement Learning with Explanation. - Jeongyeol Kwon, Yonathan Efroni, Shie Mannor, Constantine Caramanis:
Prospective Side Information for Latent MDPs. - Vincent Cohen-Addad, Silvio Lattanzi, Andreas Maggiori, Nikos Parotsidis:
Dynamic Correlation Clustering in Sublinear Update Time. - Boqi Li, Weiwei Liu:
A Theoretical Analysis of Backdoor Poisoning Attacks in Convolutional Neural Networks. - Shiming Ge, Weijia Guo, Chenyu Li, Junzheng Zhang, Yong Li, Dan Zeng:
Masked Face Recognition with Generative-to-Discriminative Representations. - Meysam Alishahi, Jeff M. Phillips:
No Dimensional Sampling Coresets for Classification. - Huy Tran, Yikun Bai, Abihith Kothapalli, Ashkan Shahbazi, Xinran Liu, Rocio Diaz Martin, Soheil Kolouri:
Stereographic Spherical Sliced Wasserstein Distances. - Feihu Huang, Jianyu Zhao:
Faster Adaptive Decentralized Learning Algorithms. - Kaihong Zhang, Heqi Yin, Feng Liang, Jingbo Liu:
Minimax Optimality of Score-based Diffusion Models: Beyond the Density Lower Bound Assumptions. - Isha Garg, Deepak Ravikumar, Kaushik Roy:
Memorization Through the Lens of Curvature of Loss Function Around Samples. - Saul José Rodrigues dos Santos, Vlad Niculae, Daniel C. McNamee, André F. T. Martins:
Sparse and Structured Hopfield Networks. - Nikhil Vyas, Depen Morwani, Rosie Zhao, Gal Kaplun, Sham M. Kakade, Boaz Barak:
Beyond Implicit Bias: The Insignificance of SGD Noise in Online Learning. - Chao Wang, Xin Bing, Xin He, Caixing Wang:
Towards Theoretical Understanding of Learning Large-scale Dependent Data via Random Features. - Feng Xie, Zhengming Chen, Shanshan Luo, Wang Miao, Ruichu Cai, Zhi Geng:
Automating the Selection of Proxy Variables of Unmeasured Confounders. - David Venuto, Mohammad Sami Nur Islam, Martin Klissarov, Doina Precup, Sherry Yang, Ankit Anand:
Code as Reward: Empowering Reinforcement Learning with VLMs. - Shengjie Wang, Shaohuai Liu, Weirui Ye, Jiacheng You, Yang Gao:
EfficientZero V2: Mastering Discrete and Continuous Control with Limited Data. - Caixing Wang, Ziliang Shen:
Distributed High-Dimensional Quantile Regression: Estimation Efficiency and Support Recovery. - Changlong Wu, Yifan Wang, Ananth Grama:
A Theory of Fault-Tolerant Learning. - Andreas Madsen, Siva Reddy, Sarath Chandar:
Faithfulness Measurable Masked Language Models. - Lirui Luo, Guoxi Zhang, Hongming Xu, Yaodong Yang, Cong Fang, Qing Li:
End-to-End Neuro-Symbolic Reinforcement Learning with Textual Explanations. - Adel Javanmard, Matthew Fahrbach, Vahab Mirrokni:
PriorBoost: An Adaptive Algorithm for Learning from Aggregate Responses. - Jacob Yoke Hong Si, Wendy Yusi Cheng, Michael Cooper, Rahul G. Krishnan:
InterpreTabNet: Distilling Predictive Signals from Tabular Data by Salient Feature Interpretation. - Francesco Bertolotti, Walter Cazzola:
By Tying Embeddings You Are Assuming the Distributional Hypothesis. - Shikai Fang, Qingsong Wen, Yingtao Luo, Shandian Zhe, Liang Sun:
BayOTIDE: Bayesian Online Multivariate Time Series Imputation with Functional Decomposition. - Ziao Guo, Yang Li, Chang Liu, Wenli Ouyang, Junchi Yan:
ACM-MILP: Adaptive Constraint Modification via Grouping and Selection for Hardness-Preserving MILP Instance Generation. - Gon Buzaglo, Itamar Harel, Mor Shpigel Nacson, Alon Brutzkus, Nathan Srebro, Daniel Soudry:
How Uniform Random Weights Induce Non-uniform Bias: Typical Interpolating Neural Networks Generalize with Narrow Teachers. - Taeho Yoon, Jaeyeon Kim, Jaewook J. Suh, Ernest K. Ryu:
Optimal Acceleration for Minimax and Fixed-Point Problems is Not Unique. - Rémi Munos, Michal Valko, Daniele Calandriello, Mohammad Gheshlaghi Azar, Mark Rowland, Zhaohan Daniel Guo, Yunhao Tang, Matthieu Geist, Thomas Mesnard, Côme Fiegel, Andrea Michi, Marco Selvi, Sertan Girgin, Nikola Momchev, Olivier Bachem, Daniel J. Mankowitz, Doina Precup, Bilal Piot:
Nash Learning from Human Feedback. - Feiran Li, Qianqian Xu, Shilong Bao, Zhiyong Yang, Runmin Cong, Xiaochun Cao, Qingming Huang:
Size-invariance Matters: Rethinking Metrics and Losses for Imbalanced Multi-object Salient Object Detection. - Mudit Gaur, Amrit S. Bedi, Di Wang, Vaneet Aggarwal:
Closing the Gap: Achieving Global Convergence (Last Iterate) of Actor-Critic under Markovian Sampling with Neural Network Parametrization. - Yongqiang Cai:
Vocabulary for Universal Approximation: A Linguistic Perspective of Mapping Compositions. - Jianyu Xu, Yu-Xiang Wang:
Pricing with Contextual Elasticity and Heteroscedastic Valuation. - Lu Bai, Lixin Cui, Ming Li, Yue Wang, Edwin R. Hancock:
QBMK: Quantum-based Matching Kernels for Un-attributed Graphs. - Chengyi Cai, Zesheng Ye, Lei Feng, Jianzhong Qi, Feng Liu:
Sample-specific Masks for Visual Reprogramming-based Prompting. - Xianghe Pang, Shuo Tang, Rui Ye, Yuxin Xiong, Bolun Zhang, Yanfeng Wang, Siheng Chen:
Self-Alignment of Large Language Models via Monopolylogue-based Social Scene Simulation. - Kentaro Kanamori, Takuya Takagi, Ken Kobayashi, Yuichi Ike:
Learning Decision Trees and Forests with Algorithmic Recourse. - Ari Karchmer:
On Stronger Computational Separations Between Multimodal and Unimodal Machine Learning. - Yuanbang Liang, Jing Wu, Yu-Kun Lai, Yipeng Qin:
Efficient Precision and Recall Metrics for Assessing Generative Models using Hubness-aware Sampling. - Zhen Huang, Jiajin Sun, Yian Huang:
Quasi-Monte Carlo Features for Kernel Approximation. - Idan Attias, Steve Hanneke, Aryeh Kontorovich, Menachem Sadigurschi:
Agnostic Sample Compression Schemes for Regression. - Seungjae Lee, Yibin Wang, Haritheja Etukuru, H. Jin Kim, Nur Muhammad (Mahi) Shafiullah, Lerrel Pinto:
Behavior Generation with Latent Actions. - Kamesh Munagala, Govind S. Sankar:
Individual Fairness in Graph Decomposition. - Cynthia Rudin, Chudi Zhong, Lesia Semenova, Margo I. Seltzer, Ronald Parr, Jiachang Liu, Srikar Katta, Jon Donnelly, Harry Chen, Zachery Boner:
Position: Amazing Things Come From Having Many Good Models. - Guy Ohayon, Tomer Michaeli, Michael Elad:
The Perception-Robustness Tradeoff in Deterministic Image Restoration. - Weixi Song, Zuchao Li, Lefei Zhang, Hai Zhao, Bo Du:
Sparse is Enough in Fine-tuning Pre-trained Large Language Models. - Micah Goldblum, Marc Anton Finzi, Keefer Rowan, Andrew Gordon Wilson:
Position: The No Free Lunch Theorem, Kolmogorov Complexity, and the Role of Inductive Biases in Machine Learning. - Daniel Dodd, Louis Sharrock, Christopher Nemeth:
Learning-Rate-Free Stochastic Optimization over Riemannian Manifolds. - Jian Xie, Kai Zhang, Jiangjie Chen, Tinghui Zhu, Renze Lou, Yuandong Tian, Yanghua Xiao, Yu Su:
TravelPlanner: A Benchmark for Real-World Planning with Language Agents. - Daniel Barzilai, Ohad Shamir:
Generalization in Kernel Regression Under Realistic Assumptions. - Ziqing Fan, Shengchao Hu, Jiangchao Yao, Gang Niu, Ya Zhang, Masashi Sugiyama, Yanfeng Wang:
Locally Estimated Global Perturbations are Better than Local Perturbations for Federated Sharpness-aware Minimization. - Eleni Straitouri, Manuel Gomez Rodriguez:
Designing Decision Support Systems using Counterfactual Prediction Sets. - Zeyu Lu, Zidong Wang, Di Huang, Chengyue Wu, Xihui Liu, Wanli Ouyang, Lei Bai:
FiT: Flexible Vision Transformer for Diffusion Model. - Xiaobo Xia, Jiale Liu, Shaokun Zhang, Qingyun Wu, Hongxin Wei, Tongliang Liu:
Refined Coreset Selection: Towards Minimal Coreset Size under Model Performance Constraints. - Loh Sher En Jessica, Naheed Anjum Arafat, Wei Xian Lim, Wai Lee Chan, Adams Wai-Kin Kong:
Finite Volume Features, Global Geometry Representations, and Residual Training for Deep Learning-based CFD Simulation. - Zhiheng Zhang:
Tight Partial Identification of Causal Effects with Marginal Distribution of Unmeasured Confounders. - Shaojie Li, Yong Liu:
Concentration Inequalities for General Functions of Heavy-Tailed Random Variables. - Konstantinos A. Oikonomidis, Emanuel Laude, Puya Latafat, Andreas Themelis, Panagiotis Patrinos:
Adaptive Proximal Gradient Methods Are Universal Without Approximation. - Yuesong Shen, Nico Daheim, Bai Cong, Peter Nickl, Gian Maria Marconi, Clement Bazan, Rio Yokota, Iryna Gurevych, Daniel Cremers, Mohammad Emtiyaz Khan, Thomas Möllenhoff:
Variational Learning is Effective for Large Deep Networks. - Lijie Hu, Yixin Liu, Ninghao Liu, Mengdi Huai, Lichao Sun, Di Wang:
Improving Interpretation Faithfulness for Vision Transformers. - Jiakui Hu, Man Yao, Xuerui Qiu, Yuhong Chou, Yuxuan Cai, Ning Qiao, Yonghong Tian, Bo Xu, Guoqi Li:
High-Performance Temporal Reversible Spiking Neural Networks with O(L) Training Memory and O(1) Inference Cost. - Ankit Pensia:
A Subquadratic Time Algorithm for Robust Sparse Mean Estimation. - Deyi Ji, Feng Zhao, Lanyun Zhu, Wenwei Jin, Hongtao Lu, Jieping Ye:
Discrete Latent Perspective Learning for Segmentation and Detection. - Marco Mussi, Alessandro Montenegro, Francesco Trovò, Marcello Restelli, Alberto Maria Metelli:
Best Arm Identification for Stochastic Rising Bandits. - Sangjun Park, JinYeong Bak:
Memoria: Resolving Fateful Forgetting Problem through Human-Inspired Memory Architecture. - Xudong Li, Runze Hu, Jingyuan Zheng, Yan Zhang, Shengchuan Zhang, Xiawu Zheng, Ke Li, Yunhang Shen, Yutao Liu, Pingyang Dai, Rongrong Ji:
Integrating Global Context Contrast and Local Sensitivity for Blind Image Quality Assessment. - Yunshan Zhong, Jiawei Hu, You Huang, Yuxin Zhang, Rongrong Ji:
ERQ: Error Reduction for Post-Training Quantization of Vision Transformers. - Shengju Yu, Zhibin Dong, Siwei Wang, Xinhang Wan, Yue Liu, Weixuan Liang, Pei Zhang, Wenxuan Tu, Xinwang Liu:
Towards Resource-friendly, Extensible and Stable Incomplete Multi-view Clustering. - Jie Cheng, Gang Xiong, Xingyuan Dai, Qinghai Miao, Yisheng Lv, Fei-Yue Wang:
RIME: Robust Preference-based Reinforcement Learning with Noisy Preferences. - Xingyu Wan, Chengquan Zhang, Pengyuan Lyu, Sen Fan, Zihan Ni, Kun Yao, Errui Ding, Jingdong Wang:
Towards Unified Multi-granularity Text Detection with Interactive Attention. - Tian-Zuo Wang, Wen-Bo Du, Zhi-Hua Zhou:
An Efficient Maximal Ancestral Graph Listing Algorithm. - Haixu Wu, Huakun Luo, Haowen Wang, Jianmin Wang, Mingsheng Long:
Transolver: A Fast Transformer Solver for PDEs on General Geometries. - Yuzhu Wang, Lechao Cheng, Chaowei Fang, Dingwen Zhang, Manni Duan, Meng Wang:
Revisiting the Power of Prompt for Visual Tuning. - Jian Liang, Lijun Sheng, Zhengbo Wang, Ran He, Tieniu Tan:
Realistic Unsupervised CLIP Fine-tuning with Universal Entropy Optimization. - Meredith Ringel Morris, Jascha Sohl-Dickstein, Noah Fiedel, Tris Warkentin, Allan Dafoe, Aleksandra Faust, Clément Farabet, Shane Legg:
Position: Levels of AGI for Operationalizing Progress on the Path to AGI. - Duc Nguyen, Anderson Ye Zhang:
Novel Spectral Algorithms for the Partial Credit Model. - Anurag Singh, Siu Lun Chau, Shahine Bouabid, Krikamol Muandet:
Domain Generalisation via Imprecise Learning. - Kaihang Pan, Siliang Tang, Juncheng Li, Zhaoyu Fan, Wei Chow, Shuicheng Yan, Tat-Seng Chua, Yueting Zhuang, Hanwang Zhang:
Auto-Encoding Morph-Tokens for Multimodal LLM.
Accept (Poster)
- Ruochen Wang, Ting Liu, Cho-Jui Hsieh, Boqing Gong:
On Discrete Prompt Optimization for Diffusion Models. - Hao Dai, Yang Liu, Peng Su, Hecheng Cai, Shudong Huang, Jiancheng Lv:
Multi-View Clustering by Inter-cluster Connectivity Guided Reward. - Michal Nauman, Michal Bortkiewicz, Piotr Milos, Tomasz Trzcinski, Mateusz Ostaszewski, Marek Cygan:
Overestimation, Overfitting, and Plasticity in Actor-Critic: the Bitter Lesson of Reinforcement Learning. - Matthieu Meeus, Igor Shilov, Manuel Faysse, Yves-Alexandre de Montjoye:
Copyright Traps for Large Language Models. - Dmitrii Krasheninnikov, Egor Krasheninnikov, Bruno Kacper Mlodozeniec, Tegan Maharaj, David Krueger:
Implicit meta-learning may lead language models to trust more reliable sources. - Hany Hamed, Subin Kim, Dongyeong Kim, Jaesik Yoon, Sungjin Ahn:
Dr. Strategy: Model-Based Generalist Agents with Strategic Dreaming. - Yang Zhao, Hao Zhang, Xiuyuan Hu:
When Will Gradient Regularization Be Harmful? - Dingzhi Yu, Yunuo Cai, Wei Jiang, Lijun Zhang:
Efficient Algorithms for Empirical Group Distributionally Robust Optimization and Beyond. - Novin Shahroudi, Mihkel Lepson, Meelis Kull:
Evaluation of Trajectory Distribution Predictions with Energy Score. - Kirankumar Shiragur, Jiaqi Zhang, Caroline Uhler:
Causal Discovery with Fewer Conditional Independence Tests. - Xuantong Liu, Tianyang Hu, Wenjia Wang, Kenji Kawaguchi, Yuan Yao:
Referee Can Play: An Alternative Approach to Conditional Generation via Model Inversion. - Mantas Mazeika, Long Phan, Xuwang Yin, Andy Zou, Zifan Wang, Norman Mu, Elham Sakhaee, Nathaniel Li, Steven Basart, Bo Li, David A. Forsyth, Dan Hendrycks:
HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal. - Rishab Balasubramanian, Jiawei Li, Prasad Tadepalli, Huazheng Wang, Qingyun Wu, Haoyu Zhao:
Adversarial Attacks on Combinatorial Multi-Armed Bandits. - Brian K. Chen, Tianyang Hu, Hui Jin, Hwee Kuan Lee, Kenji Kawaguchi:
Exact Conversion of In-Context Learning to Model Weights in Linearized-Attention Transformers. - Bonan Zhang, Chia-Yu Chen, Naveen Verma:
Reshape and Adapt for Output Quantization (RAOQ): Quantization-aware Training for In-memory Computing Systems. - Qinghua Tao, Francesco Tonin, Alex Lambert, Yingyi Chen, Panagiotis Patrinos, Johan A. K. Suykens:
Learning in Feature Spaces via Coupled Covariances: Asymmetric Kernel SVD and Nyström method. - Yingru Li, Jiawei Xu, Lei Han, Zhi-Quan Luo:
Q-Star Meets Scalable Posterior Sampling: Bridging Theory and Practice via HyperAgent. - Khashayar Gatmiry, Zhiyuan Li, Sashank J. Reddi, Stefanie Jegelka:
Simplicity Bias via Global Convergence of Sharpness Minimization. - Narun Krishnamurthi Raman, Taylor Lundy, Samuel Joseph Amouyal, Yoav Levine, Kevin Leyton-Brown, Moshe Tennenholtz:
STEER: Assessing the Economic Rationality of Large Language Models. - Nikhil Sardana, Jacob P. Portes, Sasha Doubov, Jonathan Frankle:
Beyond Chinchilla-Optimal: Accounting for Inference in Language Model Scaling Laws. - Ruochen Wang, Sohyun An, Minhao Cheng, Tianyi Zhou, Sung Ju Hwang, Cho-Jui Hsieh:
One Prompt is not Enough: Automated Construction of a Mixture-of-Expert Prompts. - Kyuwon Kim, Donghwan Kim:
Double-Step Alternating Extragradient with Increasing Timescale Separation for Finding Local Minimax Points: Provable Improvements. - Zhongzhi Yu, Zheng Wang, Yonggan Fu, Huihong Shi, Khalid Shaikh, Yingyan Celine Lin:
Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibration. - Zhuo Chen, Jacob McCarran, Esteban Vizcaino, Marin Soljacic, Di Luo:
TENG: Time-Evolving Natural Gradient for Solving PDEs With Deep Neural Nets Toward Machine Precision. - Yongxin Li, Mengyuan Liu, You Wu, Xucheng Wang, Xiangyang Yang, Shuiwang Li:
Learning Adaptive and View-Invariant Vision Transformer for Real-Time UAV Tracking. - Ziwei Jiang, Murat Kocaoglu:
Conditional Common Entropy for Instrumental Variable Testing and Partial Identification. - Chenlu Ye, Jiafan He, Quanquan Gu, Tong Zhang:
Towards Robust Model-Based Reinforcement Learning Against Adversarial Corruption. - Amrith Setlur, Saurabh Garg, Virginia Smith, Sergey Levine:
Prompting is a Double-Edged Sword: Improving Worst-Group Robustness of Foundation Models. - Elvis Dohmatob, Meyer Scetbon:
Precise Accuracy / Robustness Tradeoffs in Regression: Case of General Norms. - Myungsik Cho, Jongeui Park, Suyoung Lee, Youngchul Sung:
Hard Tasks First: Multi-Task Reinforcement Learning Through Task Scheduling. - Tehila Dahan, Kfir Yehuda Levy:
Fault Tolerant ML: Efficient Meta-Aggregation and Synchronous Training. - Jiaqi Zhai, Lucy Liao, Xing Liu, Yueming Wang, Rui Li, Xuan Cao, Leon Gao, Zhaojie Gong, Fangda Gu, Jiayuan He, Yinghai Lu, Yu Shi:
Actions Speak Louder than Words: Trillion-Parameter Sequential Transducers for Generative Recommendations. - Zongmeng Zhang, Yufeng Shi, Jinhua Zhu, Wengang Zhou, Xiang Qi, Peng Zhang, Houqiang Li:
Trustworthy Alignment of Retrieval-Augmented Large Language Models via Reinforcement Learning. - Jingxuan He, Mark Vero, Gabriela Krasnopolska, Martin T. Vechev:
Instruction Tuning for Secure Code Generation. - Saeid Naderiparizi, Xiaoxuan Liang, Setareh Cohan, Berend Zwartsenberg, Frank Wood:
Don't be so Negative! Score-based Generative Modeling with Oracle-assisted Guidance. - Suyuan Zhao, Jiahuan Zhang, Yushuai Wu, Yizhen Luo, Zaiqing Nie:
LangCell: Language-Cell Pre-training for Cell Identity Understanding. - Kun Zhang, Shaoan Xie, Ignavier Ng, Yujia Zheng:
Causal Representation Learning from Multiple Distributions: A General Setting. - Mingyuan Bai, Wei Huang, Tenghui Li, Andong Wang, Junbin Gao, Cesar F. Caiafa, Qibin Zhao:
Diffusion Models Demand Contrastive Guidance for Adversarial Purification to Advance. - Lokesh Nagalapatti, Pranava Singhal, Avishek Ghosh, Sunita Sarawagi:
PairNet: Training with Observed Pairs to Estimate Individual Treatment Effect. - Jonas Schweisthal, Dennis Frauen, Mihaela van der Schaar, Stefan Feuerriegel:
Meta-Learners for Partially-Identified Treatment Effects Across Multiple Environments. - Li Du, Afra Amini, Lucas Torroba Hennigen, Xinyan Velocity Yu, Holden Lee, Jason Eisner, Ryan Cotterell:
Principled Gradient-Based MCMC for Conditional Sampling of Text. - Samuel Horváth, Stefanos Laskaridis, Shashank Rajput, Hongyi Wang:
Maestro: Uncovering Low-Rank Structures via Trainable Decomposition. - Taylan Kargin, Joudi Hajar, Vikrant Malik, Babak Hassibi:
Infinite-Horizon Distributionally Robust Regret-Optimal Control. - Drew Prinster, Samuel Don Stanton, Anqi Liu, Suchi Saria:
Conformal Validity Guarantees Exist for Any Data Distribution (and How to Find Them). - Hyunjong Lee, Yedarm Seong, Sungdong Lee, Joong-Ho Won:
StrWAEs to Invariant Representations. - Xinyu Zhang, Wenjie Qiu, Yi-Chen Li, Lei Yuan, Chengxing Jia, Zongzhang Zhang, Yang Yu:
Debiased Offline Representation Learning for Fast Online Adaptation in Non-stationary Dynamics. - Neil Band, Xuechen Li, Tengyu Ma, Tatsunori Hashimoto:
Linguistic Calibration of Long-Form Generations. - Naman Jain, Manish Shetty, Tianjun Zhang, King Han, Koushik Sen, Ion Stoica:
R2E: Turning any Github Repository into a Programming Agent Environment. - Seunghyun Kim, Seohyeon Jung, Seonghyeon Kim, Juho Lee:
Learning to Explore for Stochastic Gradient MCMC. - Yizhe Huang, Anji Liu, Fanqi Kong, Yaodong Yang, Song-Chun Zhu, Xue Feng:
Efficient Adaptation in Mixed-Motive Environments via Hierarchical Opponent Modeling and Planning. - Riccardo Zamboni, Duilio Cirino, Marcello Restelli, Mirco Mutti:
How to Explore with Belief: State Entropy Maximization in POMDPs. - Piyushi Manupriya, Pratik Jawanpuria, Karthik S. Gurumoorthy, Saketha Nath Jagarlapudi, Bamdev Mishra:
Submodular framework for structured-sparse optimal transport. - Gianmarco Genalti, Marco Mussi, Nicola Gatti, Marcello Restelli, Matteo Castiglioni, Alberto Maria Metelli:
Graph-Triggered Rising Bandits. - Youngsik Yoon, Gangbok Lee, Sungsoo Ahn, Jungseul Ok:
Breadth-First Exploration on Adaptive Grid for Reinforcement Learning. - Kazusato Oko, Shunta Akiyama, Denny Wu, Tomoya Murata, Taiji Suzuki:
SILVER: Single-loop variance reduction and application to federated learning. - Bao Nguyen, Binh Nguyen, Hieu Trung Nguyen, Viet Anh Nguyen:
Generative Conditional Distributions by Neural (Entropic) Optimal Transport. - Daniel LeVine, Syed Asad Rizvi, Sacha Lévy, Nazreen Pallikkavaliyaveetil, David Zhang, Xingyu Chen, Sina Ghadermarzi, Ruiming Wu, Zihe Zheng, Ivan Vrkic, Anna Zhong, Daphne Raskin, Insu Han, Antonio Henrique de Oliveira Fonseca, Josue Ortega Caro, Amin Karbasi, Rahul Madhav Dhodapkar, David van Dijk:
Cell2Sentence: Teaching Large Language Models the Language of Biology. - Nikola Jovanovic, Robin Staab, Martin T. Vechev:
Watermark Stealing in Large Language Models. - Tae Hong Moon, Moonseok Choi, EungGu Yun, Jongmin Yoon, Gayoung Lee, Jaewoong Cho, Juho Lee:
A Simple Early Exiting Framework for Accelerated Sampling in Diffusion Models. - Billy Joe Franks, Christopher Morris, Ameya Velingker, Floris Geerts:
Weisfeiler-Leman at the margin: When more expressivity matters. - Harshay Shah, Andrew Ilyas, Aleksander Madry:
Decomposing and Editing Predictions by Modeling Model Computation. - Jaemoo Choi, Jaewoong Choi, Myungjoo Kang:
Scalable Wasserstein Gradient Flow for Generative Modeling through Unbalanced Optimal Transport. - Luca Beurer-Kellner, Marc Fischer, Martin T. Vechev:
Guiding LLMs The Right Way: Fast, Non-Invasive Constrained Generation. - Brian Cho, Kyra Gan, Nathan Kallus:
Peeking with PEAK: Sequential, Nonparametric Composite Hypothesis Tests for Means of Multiple Data Streams. - Zhaoru Ke, Hang Yu, Jianguo Li, Haipeng Zhang:
DUPLEX: Dual GAT for Complex Embedding of Directed Graphs. - Zixi Wei, Yuzhou Cao, Lei Feng:
Exploiting Human-AI Dependence for Learning to Defer. - Zhiyu Zhu, Huaming Chen, Xinyi Wang, Jiayu Zhang, Zhibo Jin, Jason Xue, Jun Shen:
Iterative Search Attribution for Deep Neural Networks. - Zitao Song, Chao Yang, Chaojie Wang, Bo An, Shuang Li:
Latent Logic Tree Extraction for Event Sequence Explanation from LLMs. - Gaurav Rohit Ghosal, Tatsunori Hashimoto, Aditi Raghunathan:
Understanding Finetuning for Factual Knowledge Extraction. - Peijie Dong, Lujun Li, Zhenheng Tang, Xiang Liu, Xinglin Pan, Qiang Wang, Xiaowen Chu:
Pruner-Zero: Evolving Symbolic Pruning Metric From Scratch for Large Language Models. - Felipe Maia Polo, Lucas Weber, Leshem Choshen, Yuekai Sun, Gongjun Xu, Mikhail Yurochkin:
tinyBenchmarks: evaluating LLMs with fewer examples. - Jiajun Ma, Shuchen Xue, Tianyang Hu, Wenjia Wang, Zhaoqiang Liu, Zhenguo Li, Zhi-Ming Ma, Kenji Kawaguchi:
The Surprising Effectiveness of Skip-Tuning in Diffusion Sampling. - Zhankun Luo, Abolfazl Hashemi:
Unveiling the Cycloid Trajectory of EM Iterations in Mixed Linear Regression. - Qi Lv, Hao Li, Xiang Deng, Rui Shao, Michael Y. Wang, Liqiang Nie:
RoboMP2: A Robotic Multimodal Perception-Planning Framework with Multimodal Large Language Models. - Giannis Daras, Alex Dimakis, Constantinos Daskalakis:
Consistent Diffusion Meets Tweedie: Training Exact Ambient Diffusion Models with Noisy Data. - Yongcheng Zeng, Guoqing Liu, Weiyu Ma, Ning Yang, Haifeng Zhang, Jun Wang:
Token-level Direct Preference Optimization. - Kakei Yamamoto, Kazusato Oko, Zhuoran Yang, Taiji Suzuki:
Mean Field Langevin Actor-Critic: Faster Convergence and Global Optimality beyond Lazy Learning. - Nikita Durasov, Doruk Öner, Jonathan Donier, Hieu Le, Pascal Fua:
Enabling Uncertainty Estimation in Iterative Neural Networks. - Maksim Zhdanov, David Ruhe, Maurice Weiler, Ana Lucic, Johannes Brandstetter, Patrick Forré:
Clifford-Steerable Convolutional Neural Networks. - Sibylle Marcotte, Rémi Gribonval, Gabriel Peyré:
Keep the Momentum: Conservation Laws beyond Euclidean Gradient Flows. - Nayeong Kim, Juwon Kang, Sungsoo Ahn, Jungseul Ok, Suha Kwak:
Improving Robustness to Multiple Spurious Correlations by Multi-Objective Optimization. - Sergei Shumilin, Alexander Ryabov, Nikolay B. Yavich, Evgeny Burnaev, Vladimir Vanovskiy:
Self-Supervised Coarsening of Unstructured Grid with Automatic Differentiation. - Zhonghang Li, Lianghao Xia, Yong Xu, Chao Huang:
FlashST: A Simple and Universal Prompt-Tuning Framework for Traffic Prediction. - Utsav Singh, Wesley A. Suttle, Brian M. Sadler, Vinay P. Namboodiri, Amrit S. Bedi:
PIPER: Primitive-Informed Preference-based Hierarchical Reinforcement Learning via Hindsight Relabeling. - Soroush H. Zargarbashi, Mohammad Sadegh Akhondzadeh, Aleksandar Bojchevski:
Robust Yet Efficient Conformal Prediction Sets. - Qian Tang, Yikai Zhang, Boxiang Wang:
Finite Smoothing Algorithm for High-Dimensional Support Vector Machines and Quantile Regression. - Giovanni Barbarani, Francesco Vaccarino, Gabriele Trivigno, Marco Guerra, Gabriele Moreno Berton, Carlo Masone:
Scale-Free Image Keypoints Using Differentiable Persistent Homology. - Xi Lin, Xiaoyuan Zhang, Zhiyuan Yang, Fei Liu, Zhenkun Wang, Qingfu Zhang:
Smooth Tchebycheff Scalarization for Multi-Objective Optimization. - Yara Shamshoum, Nitzan Hodos, Yuval Sieradzki, Assaf Schuster:
DNCs Require More Planning Steps. - Hyunki Seong, David Hyunchul Shim:
Self-Supervised Interpretable End-to-End Learning via Latent Functional Modularity. - Ke Xue, Rong-Xi Tan, Xiaobin Huang, Chao Qian:
Offline Multi-Objective Optimization. - Kishan Panaganti, Adam Wierman, Eric Mazumdar:
Model-Free Robust ϕ-Divergence Reinforcement Learning Using Both Offline and Online Data. - Yang Liu, Deyu Bo, Chuan Shi:
Graph Distillation with Eigenbasis Matching. - Florent Bouchard, Ammar Mian, Malik Tiomoko, Guillaume Ginolhac, Frédéric Pascal:
Random matrix theory improved Fréchet mean of symmetric positive definite matrices. - Jian Wang, Xin Lan, Yuxin Tian, Jiancheng Lv:
MS3D: A RG Flow-Based Regularization for GAN Training with Limited Data. - Moritz Stephan, Alexander Khazatsky, Eric Mitchell, Annie S. Chen, Sheryl Hsu, Archit Sharma, Chelsea Finn:
RLVF: Learning from Verbal Feedback without Overgeneralization. - Zhuowen Yuan, Zidi Xiong, Yi Zeng, Ning Yu, Ruoxi Jia, Dawn Song, Bo Li:
RigorLLM: Resilient Guardrails for Large Language Models against Undesired Content. - Praneeth Kacham, Vahab Mirrokni, Peilin Zhong:
PolySketchFormer: Fast Transformers via Sketching Polynomial Kernels. - Ren-Jian Wang, Ke Xue, Cong Guan, Chao Qian:
Quality-Diversity with Limited Resources. - Eros Fanì, Raffaello Camoriano, Barbara Caputo, Marco Ciccone:
Accelerating Heterogeneous Federated Learning with Closed-form Classifiers. - Xudong Gong, Dawei Feng, Kele Xu, Yuanzhao Zhai, Chengkang Yao, Weijia Wang, Bo Ding, Huaimin Wang:
Iterative Regularized Policy Optimization with Imperfect Demonstrations. - Ansong Ni, Miltiadis Allamanis, Arman Cohan, Yinlin Deng, Kensen Shi, Charles Sutton, Pengcheng Yin:
NExT: Teaching Large Language Models to Reason about Code Execution. - Nahyuk Lee, Juhong Min, Junha Lee, Seungwook Kim, Kanghee Lee, Jaesik Park, Minsu Cho:
3D Geometric Shape Assembly via Efficient Point Cloud Matching. - He Jia:
Simulation-Based Inference with Quantile Regression. - Hengkai Tan, Songming Liu, Kai Ma, Chengyang Ying, Xingxing Zhang, Hang Su, Jun Zhu:
Fourier Controller Networks for Real-Time Decision-Making in Embodied Learning. - Bingheng Li, Linxin Yang, Yupeng Chen, Senmiao Wang, Haitao Mao, Qian Chen, Yao Ma, Akang Wang, Tian Ding, Jiliang Tang, Ruoyu Sun:
PDHG-Unrolled Learning-to-Optimize Method for Large-Scale Linear Programming. - Yuhui Ding, Antonio Orvieto, Bobby He, Thomas Hofmann:
Recurrent Distance Filtering for Graph Representation Learning. - Kenshi Abe, Kaito Ariu, Mitsuki Sakamoto, Atsushi Iwasaki:
Adaptively Perturbed Mirror Descent for Learning in Games. - Yujun Zhou, Yufei Han, Haomin Zhuang, Hongyan Bao, Xiangliang Zhang:
Attack-free Evaluating and Enhancing Adversarial Robustness on Categorical Data. - Ashwinee Panda, Xinyu Tang, Saeed Mahloujifar, Vikash Sehwag, Prateek Mittal:
A New Linear Scaling Rule for Private Adaptive Hyperparameter Optimization. - Nathan H. Ng, Roger Baker Grosse, Marzyeh Ghassemi:
Measuring Stochastic Data Complexity with Boltzmann Influence Functions. - Chang Chen, Junyeob Baek, Fei Deng, Kenji Kawaguchi, Caglar Gulcehre, Sungjin Ahn:
PlanDQ: Hierarchical Plan Orchestration via D-Conductor and Q-Performer. - Songlin Yang, Bailin Wang, Yikang Shen, Rameswar Panda, Yoon Kim:
Gated Linear Attention Transformers with Hardware-Efficient Training. - Wenzhe Li, Zihan Ding, Seth Karten, Chi Jin:
FightLadder: A Benchmark for Competitive Multi-Agent Reinforcement Learning. - Dimitri von Rütte, Sotiris Anagnostidis, Gregor Bachmann, Thomas Hofmann:
A Language Model's Guide Through Latent Space. - Khashayar Gatmiry, Nikunj Saunshi, Sashank J. Reddi, Stefanie Jegelka, Sanjiv Kumar:
Can Looped Transformers Learn to Implement Multi-step Gradient Descent for In-context Learning? - Andreas René Geist, Jonas Frey, Mikel Zhobro, Anna Levina, Georg Martius:
Learning with 3D rotations, a hitchhiker's guide to SO(3). - Alfred Nilsson, Klas Wijk, Sai Bharath Chandra Gutha, Erik Englesson, Alexandra Hotti, Carlo Saccardi, Oskar Kviman, Jens Lagergren, Ricardo Vinuesa, Hossein Azizpour:
Indirectly Parameterized Concrete Autoencoders. - Haokun Gui, Xiucheng Li, Xinyang Chen:
Vector Quantization Pretraining for EEG Time Series with Random Projection and Phase Alignment. - Hiren Madhu, Sravanthi Gurugubelli, Sundeep Prabhakar Chepuri:
Unsupervised Parameter-free Simplicial Representation Learning with Scattering Transforms. - Young D. Kwon, Rui Li, Stylianos I. Venieris, Jagmohan Chauhan, Nicholas Donald Lane, Cecilia Mascolo:
TinyTrain: Resource-Aware Task-Adaptive Sparse Training of DNNs at the Data-Scarce Edge. - Kang You, Zekai Xu, Chen Nie, Zhijie Deng, Qinghai Guo, Xiang Wang, Zhezhi He:
SpikeZIP-TF: Conversion is All You Need for Transformer-based SNN. - Onur Celik, Aleksandar Taranovic, Gerhard Neumann:
Acquiring Diverse Skills using Curriculum Reinforcement Learning with Mixture of Experts. - Huilai Chen, Yuanbo Wen, Limin Cheng, Shouxu Kuang, Yumeng Liu, Weijia Li, Ling Li, Rui Zhang, Xinkai Song, Wei Li, Qi Guo, Yunji Chen:
AutoOS: Make Your OS More Powerful by Exploiting Large Language Models. - Frédéric Zheng, Alexandre Proutière:
Conformal Predictions under Markovian Data. - Luca Franco, Paolo Mandica, Konstantinos Kallidromitis, Devin Guillory, Yu-Teng Li, Trevor Darrell, Fabio Galasso:
Hyperbolic Active Learning for Semantic Segmentation under Domain Shift. - David Ruhe, Jonathan Heek, Tim Salimans, Emiel Hoogeboom:
Rolling Diffusion Models. - Haoyang Li, Xin Wang, Zeyang Zhang, Haibo Chen, Ziwei Zhang, Wenwu Zhu:
Disentangled Graph Self-supervised Learning for Out-of-Distribution Generalization. - Herilalaina Rakotoarison, Steven Adriaensen, Neeratyoy Mallik, Samir Garibov, Eddie Bergman, Frank Hutter:
In-Context Freeze-Thaw Bayesian Optimization for Hyperparameter Optimization. - Kui Zhang, Hang Zhou, Jie Zhang, Wenbo Zhou, Weiming Zhang, Nenghai Yu:
Transferable Facial Privacy Protection against Blind Face Restoration via Domain-Consistent Adversarial Obfuscation. - Haowei Lin, Baizhou Huang, Haotian Ye, Qinyu Chen, Zihao Wang, Sujian Li, Jianzhu Ma, Xiaojun Wan, James Zou, Yitao Liang:
Selecting Large Language Model to Fine-tune via Rectified Scaling Law. - Xingwu Chen, Difan Zou:
What Can Transformer Learn with Varying Depth? Case Studies on Sequence Learning Tasks. - Dipendra Misra, Aldo Pacchiano, Robert E. Schapire:
Provable Interactive Learning with Hindsight Instruction Feedback. - Katie E. Everett, Lechao Xiao, Mitchell Wortsman, Alexander A. Alemi, Roman Novak, Peter J. Liu, Izzeddin Gur, Jascha Sohl-Dickstein, Leslie Pack Kaelbling, Jaehoon Lee, Jeffrey Pennington:
Scaling Exponents Across Parameterizations and Optimizers. - Mingyu Kim, Jun-Seong Kim, Se-Young Yun, Jin-Hwa Kim:
Synergistic Integration of Coordinate Network and Tensorial Feature for Improving Neural Radiance Fields from Sparse Inputs. - Zhiheng Xi, Wenxiang Chen, Boyang Hong, Senjie Jin, Rui Zheng, Wei He, Yiwen Ding, Shichun Liu, Xin Guo, Junzhe Wang, Honglin Guo, Wei Shen, Xiaoran Fan, Yuhao Zhou, Shihan Dou, Xiao Wang, Xinbo Zhang, Peng Sun, Tao Gui, Qi Zhang, Xuanjing Huang:
Training Large Language Models for Reasoning through Reverse Curriculum Reinforcement Learning. - Ke Wang, Nikolaos Dimitriadis, Guillermo Ortiz-Jiménez, François Fleuret, Pascal Frossard:
Localizing Task Information for Improved Model Merging and Compression. - Milong Ren, Tian Zhu, Haicang Zhang:
CarbonNovo: Joint Design of Protein Structure and Sequence Using a Unified Energy-based Model. - Wonje Choi, Woo Kyung Kim, Minjong Yoo, Honguk Woo:
Embodied CoT Distillation From LLM To Off-the-shelf Agents. - Jan Hagnberger, Marimuthu Kalimuthu, Daniel Musekamp, Mathias Niepert:
Vectorized Conditional Neural Fields: A Framework for Solving Time-dependent Parametric Partial Differential Equations. - Sahel Iqbal, Adrien Corenflos, Simo Särkkä, Hany Abdulsamad:
Nesting Particle Filters for Experimental Design in Dynamical Systems. - Duy Minh Ho Nguyen, Nina Lukashina, Tai Nguyen, An T. Le, TrungTin Nguyen, Nhat Ho, Jan Peters, Daniel Sonntag, Viktor Zaverkin, Mathias Niepert:
Structure-Aware E(3)-Invariant Molecular Conformer Aggregation Networks. - Denis Blessing, Xiaogang Jia, Johannes Esslinger, Francisco Vargas, Gerhard Neumann:
Beyond ELBOs: A Large-Scale Evaluation of Variational Methods for Sampling. - My Phan, Kianté Brantley, Stephanie Milani, Soroush Mehri, Gokul Swamy, Geoffrey J. Gordon:
When is Transfer Learning Possible? - Jinsoo Yoo, Yunpeng Liu, Frank Wood, Geoff Pleiss:
Layerwise Proximal Replay: A Proximal Point Method for Online Continual Learning. - Toru Shirakawa, Yi Li, Yulun Wu, Sky Qiu, Yuxuan Li, Mingduo Zhao, Hiroyasu Iso, Mark J. van der Laan:
Longitudinal Targeted Minimum Loss-based Estimation with Temporal-Difference Heterogeneous Transformer. - David Stein, Bjoern Andres:
Partial Optimality in the Linear Ordering Problem. - Ashok Vardhan Makkuva, Marco Bondaschi, Thijs Vogels, Martin Jaggi, Hyeji Kim, Michael Gastpar:
LASER: Linear Compression in Wireless Distributed Optimization. - Jiewei Zhang, Song Guo, Peiran Dong, Jie Zhang, Ziming Liu, Yue Yu, Xiao-Ming Wu:
Easing Concept Bleeding in Diffusion via Entity Localization and Anchoring. - Jinxia Yang, Bing Su, Xin Zhao, Ji-Rong Wen:
Unlocking the Power of Spatial and Temporal Information in Medical Multimodal Pre-training. - Bo Peng, Xinyi Ling, Ziru Chen, Huan Sun, Xia Ning:
eCeLLM: Generalizing Large Language Models for E-commerce from Large-scale, High-quality Instruction Data. - Junyoung Seo, Susung Hong, Wooseok Jang, Inès Hyeonsu Kim, Minseop Kwak, Doyup Lee, Seungryong Kim:
Retrieval-Augmented Score Distillation for Text-to-3D Generation. - Haiyan Jiang, Giulia De Masi, Huan Xiong, Bin Gu:
NDOT: Neuronal Dynamics-based Online Training for Spiking Neural Networks. - Xuefeng Liu, Chih-chan Tien, Peng Ding, Songhao Jiang, Rick L. Stevens:
Entropy-Reinforced Planning with Large Language Models for Drug Discovery. - Waïss Azizian, Franck Iutzeler, Jérôme Malick, Panayotis Mertikopoulos:
What is the Long-Run Distribution of Stochastic Gradient Descent? A Large Deviations Analysis. - Sayak Ray Chowdhury, Anush Kini, Nagarajan Natarajan:
Provably Robust DPO: Aligning Language Models with Noisy Feedback. - Mateusz Gabor, Tomasz Piotrowski, Renato L. G. Cavalcante:
Positive Concave Deep Equilibrium Models. - Tianyu Guo, Sai Praneeth Karimireddy, Michael I. Jordan:
Collaborative Heterogeneous Causal Inference Beyond Meta-analysis. - Artur P. Toshev, Jonas A. Erbesdobler, Nikolaus A. Adams, Johannes Brandstetter:
Neural SPH: Improved Neural Modeling of Lagrangian Fluid Dynamics. - Boheng Li, Yishuo Cai, Jisong Cai, Yiming Li, Han Qiu, Run Wang, Tianwei Zhang:
Purifying Quantization-conditioned Backdoors via Layer-wise Activation Correction with Distribution Approximation. - Yucen Wang, Shenghua Wan, Le Gan, Shuai Feng, De-Chuan Zhan:
AD3: Implicit Action is the Key for World Models to Distinguish the Diverse Visual Distractors. - Yancheng Huang, Kai Yang, Zelin Zhu, Leian Chen:
Triadic-OCD: Asynchronous Online Change Detection with Provable Robustness, Optimality, and Convergence. - Mingyuan Zhou, Huangjie Zheng, Zhendong Wang, Mingzhang Yin, Hai Huang:
Score identity Distillation: Exponentially Fast Distillation of Pretrained Diffusion Models for One-Step Generation. - Ilya Kaufman, Omri Azencot:
First-Order Manifold Data Augmentation for Regression Learning. - Zhiliang Chen, Chuan-Sheng Foo, Bryan Kian Hsiang Low:
Towards AutoAI: Optimizing a Machine Learning System with Black-box and Differentiable Components. - Tian Zhu, Milong Ren, Haicang Zhang:
Antibody Design Using a Score-based Diffusion Model Guided by Evolutionary, Physical and Geometric Constraints. - André V. Duarte, Xuandong Zhao, Arlindo L. Oliveira, Lei Li:
DE-COP: Detecting Copyrighted Content in Language Models Training Data. - Gergely Neu, Nneka Okolo:
Dealing With Unbounded Gradients in Stochastic Saddle-point Optimization. - Anqi Mao, Mehryar Mohri, Yutao Zhong:
H-Consistency Guarantees for Regression. - Li Ding, Jenny Zhang, Jeff Clune, Lee Spector, Joel Lehman:
Quality Diversity through Human Feedback: Towards Open-Ended Diversity-Driven Optimization. - Juyeon Ko, Inho Kong, Dogyun Park, Hyunwoo J. Kim:
Stochastic Conditional Diffusion Models for Robust Semantic Image Synthesis. - Alexander Munteanu, Simon Omlor:
Optimal bounds for ℓp sensitivity sampling via ℓ2 augmentation. - Luke Guerdan, Amanda Coston, Ken Holstein, Steven Wu:
Predictive Performance Comparison of Decision Policies Under Confounding. - Riccardo Grazzi, Massimiliano Pontil, Saverio Salzo:
Nonsmooth Implicit Differentiation: Deterministic and Stochastic Convergence Rates. - Silvia Sapora, Gokul Swamy, Chris Lu, Yee Whye Teh, Jakob Nicolaus Foerster:
EvIL: Evolution Strategies for Generalisable Imitation Learning. - Samuel Garcin, James Doran, Shangmin Guo, Christopher G. Lucas, Stefano V. Albrecht:
DRED: Zero-Shot Transfer in Reinforcement Learning via Data-Regularised Environment Design. - Núria Armengol Urpí, Marco Bagatella, Marin Vlastelica, Georg Martius:
Causal Action Influence Aware Counterfactual Data Augmentation. - Valérie Castin, Pierre Ablin, Gabriel Peyré:
How Smooth Is Attention? - Yi Hu, Xiaojuan Tang, Haotong Yang, Muhan Zhang:
Case-Based or Rule-Based: How Do Transformers Do the Math? - Mohammad-Reza Rahmani, Mohammad Hossein Yassaee, Mohammad Ali Maddah-Ali, Mohammad Reza Aref:
Fundamental Limits of Distributed Covariance Matrix Estimation Under Communication Constraints. - Mahdi Nikdan, Soroush Tabesh, Elvir Crncevic, Dan Alistarh:
RoSA: Accurate Parameter-Efficient Fine-Tuning via Robust Adaptation. - Jan Pauls, Max Zimmer, Una M. Kelly, Martin Schwartz, Sassan Saatchi, Philippe Ciais, Sebastian Pokutta, Martin Brandt, Fabian Gieseke:
Estimating Canopy Height at Scale. - Yi Feng, Georgios Piliouras, Xiao Wang:
Prediction Accuracy of Learning in Games: Follow-the-Regularized-Leader meets Heisenberg. - Gaurav Pandey, Yatin Nandwani, Tahira Naseem, Mayank Mishra, Guangxuan Xu, Dinesh Raghu, Sachindra Joshi, Asim Munawar, Ramón Fernandez Astudillo:
BRAIn: Bayesian Reward-conditioned Amortized Inference for natural language generation from feedback. - Chanho Park, Namyoon Lee:
SignSGD with Federated Defense: Harnessing Adversarial Attacks through Gradient Sign Decoding. - Ron Dorfman, Naseem Yehya, Kfir Yehuda Levy:
Dynamic Byzantine-Robust Learning: Adapting to Switching Byzantine Workers. - Sara Ahmadian, Edith Cohen:
Unmasking Vulnerabilities: Cardinality Sketches under Adaptive Inputs. - Tianshu Chu, Dachuan Xu, Wei Yao, Jin Zhang:
SPABA: A Single-Loop and Probabilistic Stochastic Bilevel Algorithm Achieving Optimal Sample Complexity. - Dongyoon Hwang, Byungkun Lee, Hojoon Lee, Hyunseung Kim, Jaegul Choo:
Adapting Pretrained ViTs with Convolution Injector for Visuo-Motor Control. - Jinho Bok, Weijie J. Su, Jason M. Altschuler:
Shifted Interpolation for Differential Privacy. - Souradip Chakraborty, Jiahao Qiu, Hui Yuan, Alec Koppel, Dinesh Manocha, Furong Huang, Amrit S. Bedi, Mengdi Wang:
MaxMin-RLHF: Alignment with Diverse Human Preferences. - Junyi Fan, Yuxuan Han, Zijian Liu, Jian-Feng Cai, Yang Wang, Zhengyuan Zhou:
On the Convergence of Projected Bures-Wasserstein Gradient Descent under Euclidean Strong Convexity. - Sikha Pentyala, Mayana Pereira, Martine De Cock:
CaPS: Collaborative and Private Synthetic Data Generation from Distributed Sources. - Hyungho Na, Il-Chul Moon:
LAGMA: LAtent Goal-guided Multi-Agent Reinforcement Learning. - Arjun Karuvally, Terrence J. Sejnowski, Hava T. Siegelmann:
Hidden Traveling Waves bind Working Memory Variables in Recurrent Neural Networks. - Johannes A. Schubert, Akshay K. Jagadish, Marcel Binz, Eric Schulz:
In-Context Learning Agents Are Asymmetric Belief Updaters. - Kirsten Fischer, Javed Lindner, David Dahmen, Zohar Ringel, Michael Krämer, Moritz Helias:
Critical feature learning in deep neural networks. - Steven Wilkins-Reeves, Xu Chen, Qi Ma, Christine Agarwal, Aude Hofleitner:
Multiply Robust Estimation for Local Distribution Shifts with Multiple Domains. - Ji Won Park, Natasa Tagasovska, Michael Maser, Stephen Ra, Kyunghyun Cho:
BOtied: Multi-objective Bayesian optimization with tied multivariate ranks. - Rahul Ramesh, Ekdeep Singh Lubana, Mikail Khona, Robert P. Dick, Hidenori Tanaka:
Compositional Capabilities of Autoregressive Transformers: A Study on Synthetic, Interpretable Tasks. - Junwei Yang, Kangjie Zheng, Siyu Long, Zaiqing Nie, Ming Zhang, Xinyu Dai, Wei-Ying Ma, Hao Zhou:
Mol-AE: Auto-Encoder Based Molecular Representation Learning With 3D Cloze Test Objective. - Amine Ouasfi, Adnane Boukhayma:
Few-Shot Unsupervised Implicit Neural Shape Representation Learning with Spatial Adversaries. - Hafedh El Ferchichi, Matthieu Lerasle, Vianney Perchet:
Active Ranking and Matchmaking, with Perfect Matchings. - Undral Byambadalai, Tatsushi Oka, Shota Yasui:
Estimating Distributional Treatment Effects in Randomized Experiments: Machine Learning for Variance Reduction. - Jingwei Zhang, Cheuk Ting Li, Farzan Farnia:
An Interpretable Evaluation of Entropy-based Novelty of Generative Models. - Thomas Pouplin, Alan Jeffares, Nabeel Seedat, Mihaela van der Schaar:
Relaxed Quantile Regression: Prediction Intervals for Asymmetric Noise. - Muthu Chidambaram, Holden Lee, Colin McSwiggen, Semon Rezchikov:
How Flawed Is ECE? An Analysis via Logit Smoothing. - Jinsook Kim:
Can Machines Learn the True Probabilities? - Jingtan Wang, Xiaoqiang Lin, Rui Qiao, Chuan-Sheng Foo, Bryan Kian Hsiang Low:
Helpful or Harmful Data? Fine-tuning-free Shapley Attribution for Explaining Language Model Predictions. - Zhihai Wang, Jie Wang, Dongsheng Zuo, Yunjie Ji, Xilin Xia, Yuzhe Ma, Jianye Hao, Mingxuan Yuan, Yongdong Zhang, Feng Wu:
A Hierarchical Adaptive Multi-Task Reinforcement Learning Framework for Multiplier Circuit Design. - Youwei Shu, Xi Xiao, Derui Wang, Yuxin Cao, Siji Chen, Jason Xue, Linyi Li, Bo Li:
Effects of Exponential Gaussian Distribution on (Double Sampling) Randomized Smoothing. - Alexandru Tifrea, Preethi Lahoti, Ben Packer, Yoni Halpern, Ahmad Beirami, Flavien Prost:
FRAPPÉ: A Group Fairness Framework for Post-Processing Everything. - Boyuan Zheng, Boyu Gou, Jihyung Kil, Huan Sun, Yu Su:
GPT-4V(ision) is a Generalist Web Agent, if Grounded. - Wenqiang Li, Weijun Li, Lina Yu, Min Wu, Linjun Sun, Jingyi Liu, Yanjie Li, Shu Wei, Yusong Deng, Meilan Hao:
A Neural-Guided Dynamic Symbolic Network for Exploring Mathematical Expressions from Data. - Sohei Arisaka, Qianxiao Li:
Accelerating Legacy Numerical Solvers by Non-intrusive Gradient-based Meta-solving. - Youngsoo Jang, Geon-Hyeong Kim, Byoungjip Kim, Yu Jin Kim, Honglak Lee, Moontae Lee:
Degeneration-free Policy Optimization: RL Fine-Tuning for Language Models without Degeneration. - Hossein Mirzaei, Mohammad Jafari, Hamid Reza Dehbashi, Ali Ansari, Sepehr Ghobadi, Masoud Hadi, Arshia Soltani Moakhar, Mohammad Azizmalayeri, Mahdieh Soleymani Baghshah, Mohammad Hossein Rohban:
RODEO: Robust Outlier Detection via Exposing Adaptive Out-of-Distribution Samples. - Jayoung Kim, Yehjin Shin, Jeongwhan Choi, Hyowon Wi, Noseong Park:
Polynomial-based Self-Attention for Table Representation Learning. - Eduard Zamfir, Zongwei Wu, Nancy Mehta, Yulun Zhang, Radu Timofte:
See More Details: Efficient Image Super-Resolution by Experts Mining. - Da Yu, Peter Kairouz, Sewoong Oh, Zheng Xu:
Privacy-Preserving Instructions for Aligning Large Language Models. - Ziyang Zhang, Qizhen Zhang, Jakob Nicolaus Foerster:
PARDEN, Can You Repeat That? Defending against Jailbreaks via Repetition. - Baohong Li, Haoxuan Li, Ruoxuan Xiong, Anpeng Wu, Fei Wu, Kun Kuang:
Learning Shadow Variable Representation for Treatment Effect Estimation under Collider Bias. - Mikail Khona, Maya Okawa, Jan Hula, Rahul Ramesh, Kento Nishi, Robert P. Dick, Ekdeep Singh Lubana, Hidenori Tanaka:
Towards an Understanding of Stepwise Inference in Transformers: A Synthetic Graph Navigation Model. - Germain Vivier-Ardisson, Alexandre Forel, Axel Parmentier, Thibaut Vidal:
CF-OPT: Counterfactual Explanations for Structured Prediction. - Shuo Wen, Maria Brbic:
Cross-domain Open-world Discovery. - Yang Zhou, Zijie Zhang, Zeru Zhang, Lingjuan Lyu, Wei-Shinn Ku:
Effective Federated Graph Matching. - Nikita Balabin, Daria Voronkova, Ilya Trofimov, Evgeny Burnaev, Serguei Barannikov:
Disentanglement Learning via Topology. - Haoyu Li, Shichang Zhang, Longwen Tang, Mathieu Bauchy, Yizhou Sun:
Predicting and Interpreting Energy Barriers of Metallic Glasses with Graph Neural Networks. - Deokjae Lee, Hyun Oh Song, Kyunghyun Cho:
Training Greedy Policy for Proposal Batch Selection in Expensive Multi-Objective Combinatorial Optimization. - He Zhang, Chang Liu, Zun Wang, Xinran Wei, Siyuan Liu, Nanning Zheng, Bin Shao, Tie-Yan Liu:
Self-Consistency Training for Density-Functional-Theory Hamiltonian Prediction. - Yuguang Yan, Hao Zhou, Zeqin Yang, Weilin Chen, Ruichu Cai, Zhifeng Hao:
Reducing Balancing Error for Causal Inference via Optimal Transport. - Shokichi Takakura, Taiji Suzuki:
Mean-field Analysis on Two-layer Neural Networks from a Kernel Perspective. - Amit Peleg, Matthias Hein:
Bias of Stochastic Gradient Descent or the Architecture: Disentangling the Effects of Overparameterization of Neural Networks. - Sidi Lu, Wenbo Zhao, Chenyang Tao, Arpit Gupta, Shanchan Wu, Tagyoung Chung, Nanyun Peng:
DiNADO: Norm-Disentangled Neurally-Decomposed Oracles for Controlling Language Models. - Md. Musfiqur Rahman, Murat Kocaoglu:
Modular Learning of Deep Causal Generative Models for High-dimensional Causal Inference. - Mononito Goswami, Konrad Szafer, Arjun Choudhry, Yifu Cai, Shuo Li, Artur Dubrawski:
MOMENT: A Family of Open Time-series Foundation Models. - Jacob Dunefsky, Arman Cohan:
Observable Propagation: Uncovering Feature Vectors in Transformers. - Adam Stein, Aaditya Naik, Yinjun Wu, Mayur Naik, Eric Wong:
Towards Compositionality in Concept Learning. - Fabian Gloeckle, Badr Youbi Idrissi, Baptiste Rozière, David Lopez-Paz, Gabriel Synnaeve:
Better & Faster Large Language Models via Multi-token Prediction. - Zijie Pan, Yushan Jiang, Sahil Garg, Anderson Schneider, Yuriy Nevmyvaka, Dongjin Song:
S2IP-LLM: Semantic Space Informed Prompt Learning with LLM for Time Series Forecasting. - Ngoc Bui, Hieu Trung Nguyen, Viet Anh Nguyen, Rex Ying:
Explaining Graph Neural Networks via Structure-aware Interaction Index. - Brian Cho, Yaroslav Mukhin, Kyra Gan, Ivana Malenica:
Kernel Debiased Plug-in Estimation: Simultaneous, Automated Debiasing without Influence Functions for Many Target Parameters. - Andreas Kontogiannis, Vasilis Pollatos, Sotiris Kanellopoulos, Panayotis Mertikopoulos, Aris Pagourtzis, Ioannis Panageas:
The Computational Complexity of Finding Second-Order Stationary Points. - Thomas Wedenig, Rishub Nagpal, Gaëtan Cassiers, Stefan Mangard, Robert Peharz:
Exact Soft Analytical Side-Channel Attacks using Tractable Circuits. - Chenxin An, Fei Huang, Jun Zhang, Shansan Gong, Xipeng Qiu, Chang Zhou, Lingpeng Kong:
Training-Free Long-Context Scaling of Large Language Models. - Victor Letzelter, David Perera, Cédric Rommel, Mathieu Fontaine, Slim Essid, Gaël Richard, Patrick Pérez:
Winner-takes-all learners are geometry-aware conditional density estimators. - Weijieying Ren, Xiaoting Li, Huiyuan Chen, Vineeth Rakesh, Zhuoyi Wang, Mahashweta Das, Vasant G. Honavar:
TabLog: Test-Time Adaptation for Tabular Data Using Logic Rules. - Thomas F. Burns:
Semantically-correlated memories in a dense associative model. - Keyan Miao, Konstantinos Gatsis:
How Deep Do We Need: Accelerating Training and Inference of Neural ODEs via Control Perspective. - Aishwarya P. S., Pranav Ajit Nair, Yashas Samaga, Toby Boyd, Sanjiv Kumar, Prateek Jain, Praneeth Netrapalli:
Tandem Transformers for Inference Efficient LLMs. - David Dalton, Dirk Husmeier, Hao Gao:
Physics and Lie symmetry informed Gaussian processes. - Xiner Li, Shurui Gui, Youzhi Luo, Shuiwang Ji:
Graph Structure Extrapolation for Out-of-Distribution Generalization. - Andries P. Smit, Nathan Grinsztajn, Paul Duckworth, Thomas D. Barrett, Arnu Pretorius:
Should we be going MAD? A Look at Multi-Agent Debate Strategies for LLMs. - Vy Vo, He Zhao, Trung Le, Edwin V. Bonilla, Dinh Phung:
Optimal Transport for Structure Learning Under Missing Data. - Xun Deng, Junlong Liu, Han Zhong, Fuli Feng, Chen Shen, Xiangnan He, Jieping Ye, Zheng Wang:
A3S: A General Active Clustering Method with Pairwise Constraints. - Shihao Zhang, Kenji Kawaguchi, Angela Yao:
Deep Regression Representation Learning with Topology. - Nicolas Michel, Maorong Wang, Ling Xiao, Toshihiko Yamasaki:
Rethinking Momentum Knowledge Distillation in Online Continual Learning. - Yuanhao Pu, Xiaolong Chen, Xu Huang, Jin Chen, Defu Lian, Enhong Chen:
Learning-Efficient Yet Generalizable Collaborative Filtering for Item Recommendation. - Yongmin Lee, Hye Won Chung:
SelMatch: Effectively Scaling Up Dataset Distillation via Selection-Based Initialization and Partial Updates by Trajectory Matching. - Lucas Weber, Ana Busic, Jiamin Zhu:
Reinforcement Learning and Regret Bounds for Admission Control. - Linyuan Gong, Mostafa Elhoushi, Alvin Cheung:
AST-T5: Structure-Aware Pretraining for Code Generation and Understanding. - Guanwen Qiu, Da Kuang, Surbhi Goel:
Complexity Matters: Feature Learning in the Presence of Spurious Correlations. - Mira Jürgens, Nis Meinert, Viktor Bengs, Eyke Hüllermeier, Willem Waegeman:
Is Epistemic Uncertainty Faithfully Represented by Evidential Deep Learning Methods? - Rom N. Parnichkun, Stefano Massaroli, Alessandro Moro, Jimmy T. H. Smith, Ramin M. Hasani, Mathias Lechner, Qi An, Christopher Ré, Hajime Asama, Stefano Ermon, Taiji Suzuki, Michael Poli, Atsushi Yamashita:
State-Free Inference of State-Space Models: The Transfer Function Approach. - Junfu Wang, Yuanfang Guo, Liang Yang, Yunhong Wang:
Understanding Heterophily for Graph Neural Networks. - Yichen Wu, Hong Wang, Peilin Zhao, Yefeng Zheng, Ying Wei, Long-Kai Huang:
Mitigating Catastrophic Forgetting in Online Continual Learning by Modeling Previous Task Interrelations via Pareto Optimization. - Noam Razin, Yotam Alexander, Edo Cohen-Karlik, Raja Giryes, Amir Globerson, Nadav Cohen:
Implicit Bias of Policy Gradient in Linear Quadratic Control: Extrapolation to Unseen Initial States. - Jaron Maene, Vincent Derkinderen, Luc De Raedt:
On the Hardness of Probabilistic Neurosymbolic Learning. - Pengyu Li, Xiao Li, Yutong Wang, Qing Qu:
Neural Collapse in Multi-label Learning with Pick-all-label Loss. - Ionut-Vlad Modoranu, Aleksei Kalinov, Eldar Kurtic, Elias Frantar, Dan Alistarh:
Error Feedback Can Accurately Compress Preconditioners. - Myung Jun Kim, Léo Grinsztajn, Gaël Varoquaux:
CARTE: Pretraining and Transfer for Tabular Learning. - Shunxing Fan, Mingming Gong, Kun Zhang:
On the Recoverability of Causal Relations from Temporally Aggregated I.I.D. Data. - Zhiyuan He, Yijun Yang, Pin-Yu Chen, Qiang Xu, Tsung-Yi Ho:
Be Your Own Neighborhood: Detecting Adversarial Examples by the Neighborhood Relations Built on Self-Supervised Learning. - Jeongyeol Kwon, Dohyun Kwon, Hanbaek Lyu:
On The Complexity of First-Order Methods in Stochastic Bilevel Optimization. - Yichen Li, Chicheng Zhang:
Agnostic Interactive Imitation Learning: New Theory and Practical Algorithms. - Ignacio Hounie, Javier Porras-Valenzuela, Alejandro Ribeiro:
Loss Shaping Constraints for Long-Term Time Series Forecasting. - Xu-Hui Liu, Tian-Shuo Liu, Shengyi Jiang, Ruifeng Chen, Zhilong Zhang, Xinwei Chen, Yang Yu:
Energy-Guided Diffusion Sampling for Offline-to-Online Reinforcement Learning. - Xiangxin Zhou, Liang Wang, Yichi Zhou:
Stabilizing Policy Gradients for Stochastic Differential Equations via Consistency with Perturbation Process. - Francesco Demelas, Joseph Le Roux, Mathieu Lacroix, Axel Parmentier:
Predicting Lagrangian Multipliers for Mixed Integer Linear Programs. - Youngseog Chung, Ian Char, Jeff Schneider:
Sampling-based Multi-dimensional Recalibration. - Antti Koskela, Rachel Redberg, Yu-Xiang Wang:
Privacy Profiles for Private Selection. - Emanuele Loffredo, Mauro Pastore, Simona Cocco, Rémi Monasson:
Restoring balance: principled under/oversampling of data for optimal classification. - Jianliang He, Siyu Chen, Fengzhuo Zhang, Zhuoran Yang:
From Words to Actions: Unveiling the Theoretical Underpinnings of LLM-Driven Autonomous Systems. - Xiangzhe Kong, Wenbing Huang, Yang Liu:
Generalist Equivariant Transformer Towards 3D Molecular Interaction Learning. - Joseph Shenouda, Yamin Zhou, Robert D. Nowak:
ReLUs Are Sufficient for Learning Implicit Neural Representations. - Payel Das, Subhajit Chaudhury, Elliot Nelson, Igor Melnyk, Sarathkrishna Swaminathan, Sihui Dai, Aurélie C. Lozano, Georgios Kollias, Vijil Chenthamarakshan, Jirí Navrátil, Soham Dan, Pin-Yu Chen:
Larimar: Large Language Models with Episodic Memory Control. - Jeremy McMahan, Giovanni Artiglio, Qiaomin Xie:
Roping in Uncertainty: Robustness and Regularization in Markov Games. - Yang Yang, Chao Yang, Boyang Li, Yinghao Fu, Shuang Li:
Neuro-Symbolic Temporal Point Processes. - Defang Chen, Zhenyu Zhou, Can Wang, Chunhua Shen, Siwei Lyu:
On the Trajectory Regularity of ODE-based Diffusion Sampling. - Zhiwei Jia, Vineet Thumuluri, Fangchen Liu, Linghao Chen, Zhiao Huang, Hao Su:
Chain-of-Thought Predictive Control. - Zhaorun Chen, Zhuokai Zhao, Hongyin Luo, Huaxiu Yao, Bo Li, Jiawei Zhou:
HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding. - Yukinari Hisaki, Isao Ono:
RVI-SAC: Average Reward Off-Policy Deep Reinforcement Learning. - Yasen Wang, Junlin Li, Zuogong Yue, Ye Yuan:
An Iterative Min-Min Optimization Method for Sparse Bayesian Learning. - Artyom Gadetsky, Yulun Jiang, Maria Brbic:
Let Go of Your Labels with Unsupervised Transfer. - Siddartha Devic, Aleksandra Korolova, David Kempe, Vatsal Sharan:
Stability and Multigroup Fairness in Ranking with Uncertain Predictions. - Qiang Huang, Chuizheng Meng, Defu Cao, Biwei Huang, Yi Chang, Yan Liu:
An Empirical Examination of Balancing Strategy for Counterfactual Estimation on Time Series. - Alex Tamkin, Mohammad Taufeeque, Noah D. Goodman:
Codebook Features: Sparse and Discrete Interpretability for Neural Networks. - Wen-Shu Fan, Su Lu, Xin-Chun Li, De-Chuan Zhan, Le Gan:
Revisit the Essence of Distilling Knowledge through Calibration. - Matej Grcic, Artyom Gadetsky, Maria Brbic:
Fine-grained Classes and How to Find Them. - Cunxiao Du, Jing Jiang, Yuanchen Xu, Jiawei Wu, Sicheng Yu, Yongqi Li, Shenggui Li, Kai Xu, Liqiang Nie, Zhaopeng Tu, Yang You:
GliDe with a CaPE: A Low-Hassle Method to Accelerate Speculative Decoding. - Ruifeng Chen, Chengxing Jia, Zefang Huang, Tian-Shuo Liu, Xu-Hui Liu, Yang Yu:
Offline Transition Modeling via Contrastive Energy Learning. - Mao Hong, Zhengling Qi, Yanxun Xu:
Model-based Reinforcement Learning for Confounded POMDPs. - Lan Li, Xin-Chun Li, Han-Jia Ye, De-Chuan Zhan:
Enhancing Class-Imbalanced Learning with Pre-Trained Guidance through Class-Conditional Knowledge Distillation. - Lancheng Zou, Wenqian Zhao, Shuo Yin, Chen Bai, Qi Sun, Bei Yu:
BiE: Bi-Exponent Block Floating-Point for Large Language Models Quantization. - Alvaro Labarca, Denis Parra, Rodrigo Toro Icarte:
On the Unexpected Effectiveness of Reinforcement Learning for Sequential Recommendation. - Slimane Thabet, Mehdi Djellabi, Igor Olegovich Sokolov, Sachin Kasture, Louis-Paul Henry, Loïc Henriet:
Quantum Positional Encodings for Graph Neural Networks. - Kartik Sharma, Srijan Kumar, Rakshit Trivedi:
Diffuse, Sample, Project: Plug-And-Play Controllable Graph Generation. - Golnoosh Farnadi, Mohammad Havaei, Negar Rostamzadeh:
Position: Cracking the Code of Cascading Disparity Towards Marginalized Communities. - Thomas T. C. K. Zhang, Bruce D. Lee, Ingvar M. Ziemann, George J. Pappas, Nikolai Matni:
Guarantees for Nonlinear Representation Learning: Non-identical Covariates, Dependent Data, Fewer Samples. - Yufei Kuang, Jie Wang, Yuyan Zhou, Xijun Li, Fangzhou Zhu, Jianye Hao, Feng Wu:
Towards General Algorithm Discovery for Combinatorial Optimization: Learning Symbolic Branching Policy from Bipartite Graph. - Jon Schneider, Kiran Vodrahalli:
Online Learning with Bounded Recall. - Caterina Graziani, Tamara Drucks, Fabian Jogl, Monica Bianchini, Franco Scarselli, Thomas Gärtner:
The Expressive Power of Path-Based Graph Neural Networks. - Liangliang Shi, Jack Fan, Junchi Yan:
OT-CLIP: Understanding and Generalizing CLIP via Optimal Transport. - Milad Sefidgaran, Romain Chor, Abdellatif Zaidi, Yijun Wan:
Lessons from Generalization Error Analysis of Federated Learning: You May Communicate Less Often! - Angus Phillips, Hai-Dang Dau, Michael John Hutchinson, Valentin De Bortoli, George Deligiannidis, Arnaud Doucet:
Particle Denoising Diffusion Sampler. - Kang Liu, Yingyi Zhang, Jingyun Zhang, Jinmin Li, Jun Wang, Shaoming Wang, Chun Yuan, Rizen Guo:
DFD: Distilling the Feature Disparity Differently for Detectors. - Huikang Liu, Peng Wang, Longxiu Huang, Qing Qu, Laura Balzano:
Symmetric Matrix Completion with ReLU Sampling. - Luca Beurer-Kellner, Mark Niklas Müller, Marc Fischer, Martin T. Vechev:
Prompt Sketching for Large Language Models. - Benjamin Kurt Miller, Ricky T. Q. Chen, Anuroop Sriram, Brandon M. Wood:
FlowMM: Generating Materials with Riemannian Flow Matching. - Zijie Geng, Jie Wang, Ziyan Liu, Siyuan Xu, Zhentao Tang, Mingxuan Yuan, Jianye Hao, Yongdong Zhang, Feng Wu:
Reinforcement Learning within Tree Search for Fast Macro Placement. - Bo Li, Wei Wang, Peng Ye:
Improved Bounds for Pure Private Agnostic Learning: Item-Level and User-Level Privacy. - Mengchu Xu, Yuxuan Zhang, Jian Wang:
Exponential Spectral Pursuit: An Effective Initialization Method for Sparse Phase Retrieval. - Lujie Yang, Hongkai Dai, Zhouxing Shi, Cho-Jui Hsieh, Russ Tedrake, Huan Zhang:
Lyapunov-stable Neural Control for State and Output Feedback: A Novel Formulation. - Zehao Dou, Minshuo Chen, Mengdi Wang, Zhuoran Yang:
Theory of Consistency Diffusion Models: Distribution Estimation Meets Fast Sampling. - Yujin Han, Difan Zou:
Improving Group Robustness on Spurious Correlation Requires Preciser Group Inference. - Mohamed Elsayed, Homayoon Farrahi, Felix Dangel, A. Rupam Mahmood:
Revisiting Scalable Hessian Diagonal Approximations for Applications in Reinforcement Learning. - Siru Ouyang, Zhuosheng Zhang, Bing Yan, Xuan Liu, Yejin Choi, Jiawei Han, Lianhui Qin:
Structured Chemistry Reasoning with Large Language Models. - Byeonghu Na, Yeongmin Kim, Minsang Park, DongHyeok Shin, Wanmo Kang, Il-Chul Moon:
Diffusion Rejection Sampling. - Kai Weixian Lan, Elias Gueidon, Ayano Kaneda, Julian Panetta, Joseph Teran:
A Neural-Preconditioned Poisson Solver for Mixed Dirichlet and Neumann Boundary Conditions. - Ian Connick Covert, Wenlong Ji, Tatsunori Hashimoto, James Zou:
Scaling Laws for the Value of Individual Data Points in Machine Learning. - Yuyang Zhang, Shahriar Talebi, Na Li:
Learning Low-dimensional Latent Dynamics from High-dimensional Observations: Non-asymptotics and Lower Bounds. - See Hian Lee, Feng Ji, Kelin Xia, Wee Peng Tay:
Graph Neural Networks with a Distribution of Parametrized Graphs. - Qianlan Yang, Yu-Xiong Wang:
ATraDiff: Accelerating Online Reinforcement Learning with Imaginary Trajectories. - Soumya Basu, Ankit Singh Rawat, Manzil Zaheer:
A Statistical Framework for Data-dependent Retrieval-Augmented Models. - David Rolnick, Alán Aspuru-Guzik, Sara Beery, Bistra Dilkina, Priya L. Donti, Marzyeh Ghassemi, Hannah Kerner, Claire Monteleoni, Esther Rolf, Milind Tambe, Adam White:
Position: Application-Driven Innovation in Machine Learning. - Vikas Deep, Achal Bassamboo, Sandeep K. Juneja:
Asymptotically Optimal and Computationally Efficient Average Treatment Effect Estimation in A/B testing. - Grigory Bartosh, Dmitry P. Vetrov, Christian A. Naesseth:
Neural Diffusion Models. - Randall Balestriero, Yann LeCun:
How Learning by Reconstruction Produces Uninformative Features For Perception. - Ling Li, Yu Ye, Bingchuan Jiang, Wei Zeng:
GeoReasoner: Geo-localization with Reasoning in Street Views using a Large Vision-Language Model. - Vage Egiazarian, Andrei Panferov, Denis Kuznedelev, Elias Frantar, Artem Babenko, Dan Alistarh:
Extreme Compression of Large Language Models via Additive Quantization. - Yair Schiff, Chia-Hsiang Kao, Aaron Gokaslan, Tri Dao, Albert Gu, Volodymyr Kuleshov:
Caduceus: Bi-Directional Equivariant Long-Range DNA Sequence Modeling. - Jiayi Wang, Shiqiang Wang, Rong-Rong Chen, Mingyue Ji:
A New Theoretical Perspective on Data Heterogeneity in Federated Optimization. - Sehoon Kim, Suhong Moon, Ryan Tabrizi, Nicholas Lee, Michael W. Mahoney, Kurt Keutzer, Amir Gholami:
An LLM Compiler for Parallel Function Calling. - Kaiwen Xue, Yuhao Zhou, Shen Nie, Xu Min, Xiaolu Zhang, Jun Zhou, Chongxuan Li:
Unifying Bayesian Flow Networks and Diffusion Models through Stochastic Differential Equations. - Christian Fiedler, Pierre-François Massiani, Friedrich Solowjow, Sebastian Trimpe:
On Statistical Learning Theory for Distributional Inputs. - Xinyu Guo, Kai Wu, Xiaoyu Zhang, Jing Liu:
Automated Loss function Search for Class-imbalanced Node Classification. - Vladimir R. Kostic, Karim Lounici, Prune Inzerilli, Pietro Novelli, Massimiliano Pontil:
Consistent Long-Term Forecasting of Ergodic Dynamical Systems. - Yuchen Li, Alexandre Kirchmeyer, Aashay Mehta, Yilong Qin, Boris Dadachev, Kishore Papineni, Sanjiv Kumar, Andrej Risteski:
Promises and Pitfalls of Generative Masked Language Modeling: Theoretical Framework and Practical Guidelines. - Weiye Zhao, Feihan Li, Yifan Sun, Rui Chen, Tianhao Wei, Changliu Liu:
Absolute Policy Optimization: Enhancing Lower Probability Bound of Performance with High Confidence. - William Muldrew, Peter Hayes, Mingtian Zhang, David Barber:
Active Preference Learning for Large Language Models. - Tianchen Zhou, Hairi, Haibo Yang, Jia Liu, Tian Tong, Fan Yang, Michinari Momma, Yan Gao:
Finite-Time Convergence and Sample Complexity of Actor-Critic Multi-Objective Reinforcement Learning. - Yujing Liu, Zhixin Liu, Lei Guo:
Convergence of Online Learning Algorithm for a Mixture of Multiple Linear Regressions. - Michael Y. Li, Emily B. Fox, Noah D. Goodman:
Automated Statistical Model Discovery with Language Models. - Zhengyang Zhuge, Peisong Wang, Xingting Yao, Jian Cheng:
Towards Efficient Spiking Transformer: a Token Sparsification Framework for Training and Inference Acceleration. - Thomas Ferté, Dan Dutartre, Boris P. Hejblum, Romain Griffier, Vianney Jouhet, Rodolphe Thiébaut, Pierrick Legrand, Xavier Hinaut:
Reservoir Computing for Short High-Dimensional Time Series: an Application to SARS-CoV-2 Hospitalization Forecast. - Anji Liu, Kareem Ahmed, Guy Van den Broeck:
Scaling Tractable Probabilistic Circuits: A Systems Perspective. - Thomas Merth, Qichen Fu, Mohammad Rastegari, Mahyar Najibi:
Superposition Prompting: Improving and Accelerating Retrieval-Augmented Generation. - Ankit Vani, Frederick Tung, Gabriel L. Oliveira, Hossein Sharifi-Noghabi:
Forget Sharpness: Perturbed Forgetting of Model Biases Within SAM Dynamics. - Aleksandr Podkopaev, Dong Xu, Kuang-chih Lee:
Adaptive Conformal Inference by Betting. - Kaijie Zhu, Jindong Wang, Qinlin Zhao, Ruochen Xu, Xing Xie:
Dynamic Evaluation of Large Language Models by Meta Probing Agents. - Yifan Chen, Mark Goldstein, Mengjian Hua, Michael S. Albergo, Nicholas Matthew Boffi, Eric Vanden-Eijnden:
Probabilistic Forecasting with Stochastic Interpolants and Föllmer Processes. - Pierre Glaser, Steffanie Paul, Alissa M. Hummer, Charlotte M. Deane, Debora Susan Marks, Alan Nawzad Amin:
Kernel-Based Evaluation of Conditional Biological Sequence Models. - Jiabin Lin, Shana Moothedath, Namrata Vaswani:
Fast and Sample Efficient Multi-Task Representation Learning in Stochastic Contextual Bandits. - Zhijie Wang, Bo Jiang, Shuai Li:
In-context Learning on Function Classes Unveiled for Transformers. - Tuomas P. Oikarinen, Tsui-Wei Weng:
Linear Explanations for Individual Neurons. - Sirui Zheng, Chenjia Bai, Zhuoran Yang, Zhaoran Wang:
How Does Goal Relabeling Improve Sample Efficiency? - Ehsan Saleh, Saba Ghaffari, Timothy Bretl, Luke N. Olson, Matthew West:
Learning from Integral Losses in Physics Informed Neural Networks. - Zonghan Yang, An Liu, Zijun Liu, Kaiming Liu, Fangzhou Xiong, Yile Wang, Zeyuan Yang, Qingyuan Hu, Xinrui Chen, Zhenhe Zhang, Fuwen Luo, Zhicheng Guo, Peng Li, Yang Liu:
Position: Towards Unified Alignment Between Agents, Humans, and Environment. - Daniele Tramontano, Yaroslav Kivva, Saber Salehkaleybar, Mathias Drton, Negar Kiyavash:
Causal Effect Identification in LiNGAM Models with Latent Confounders. - Jongwoo Ko, Sungnyun Kim, Tianyi Chen, Se-Young Yun:
DistiLLM: Towards Streamlined Distillation for Large Language Models. - Yexin Zhang, Chenyi Zhang, Cong Fang, Liwei Wang, Tongyang Li:
Quantum Algorithms and Lower Bounds for Finite-Sum Optimization. - Jiatao Gu, Chen Wang, Shuangfei Zhai, Yizhe Zhang, Lingjie Liu, Joshua M. Susskind:
Data-free Distillation of Diffusion Models with Bootstrapping. - Tengye Xu, Zihao Li, Qinyuan Ren:
Meta-Reinforcement Learning Robust to Distributional Shift Via Performing Lifelong In-Context Learning. - Feng Xie, Zheng Li, Peng Wu, Yan Zeng, Chunchen Liu, Zhi Geng:
Local Causal Structure Learning in the Presence of Latent Variables. - Gautam Singh, Yue Wang, Jiawei Yang, Boris Ivanovic, Sungjin Ahn, Marco Pavone, Tong Che:
Parallelized Spatiotemporal Slot Binding for Videos. - Zehua Pei, Hui-Ling Zhen, Mingxuan Yuan, Yu Huang, Bei Yu:
BetterV: Controlled Verilog Generation with Discriminative Guidance. - Karan Chadha, Matthew Jagielski, Nicolas Papernot, Christopher A. Choquette-Choo, Milad Nasr:
Auditing Private Prediction. - Yujia Wang, Shiqiang Wang, Songtao Lu, Jinghui Chen:
FADAS: Towards Federated Adaptive Asynchronous Optimization. - Ahmed Imtiaz Humayun, Randall Balestriero, Richard G. Baraniuk:
Deep Networks Always Grok and Here is Why. - Hao Di, Haishan Ye, Xiangyu Chang, Guang Dai, Ivor W. Tsang:
Double Stochasticity Gazes Faster: Snap-Shot Decentralized Stochastic Gradient Tracking Methods. - Siqi Kou, Lanxiang Hu, Zhezhi He, Zhijie Deng, Hao Zhang:
CLLMs: Consistency Large Language Models. - Jungil Kong, Junmo Lee, Jeongmin Kim, Beomjeong Kim, Jihoon Park, Dohee Kong, Changheon Lee, Sangjin Kim:
ELF: Encoding Speaker-Specific Latent Speech Feature for Speech Synthesis. - Sourav Pal, Harshavardhan Adepu, Clinton J. Wang, Polina Golland, Vikas Singh:
Implicit Representations via Operator Learning. - Ben Lonnqvist, Zhengqing Wu, Michael H. Herzog:
Latent Noise Segmentation: How Neural Noise Leads to the Emergence of Segmentation and Grouping. - Yiran Liu, Xiaoang Xu, Zhiyi Hou, Yang Yu:
Causality Based Front-door Defense Against Backdoor Attack on Language Models. - Ying Feng, Aayush Jain, David P. Woodruff:
Fast White-Box Adversarial Streaming Without a Random Oracle. - Gugan Thoppe, Prashanth L. A., Sanjay P. Bhat:
Risk Estimation in a Markov Cost Process: Lower and Upper Bounds. - Tzu-Yuan Lin, Minghan Zhu, Maani Ghaffari:
Lie Neurons: Adjoint-Equivariant Neural Networks for Semisimple Lie Algebras. - Huanshuo Dong, Hong Wang, Haoyang Liu, Jian Luo, Jie Wang:
Accelerating PDE Data Generation via Differential Operator Action in Solution Space. - Masaaki Nishino, Kengo Nakamura, Norihito Yasuda:
Understanding the Impact of Introducing Constraints at Inference Time on Generalization Error. - Reduan Achtibat, Sayed Mohammad Vakilzadeh Hatefi, Maximilian Dreyer, Aakriti Jain, Thomas Wiegand, Sebastian Lapuschkin, Wojciech Samek:
AttnLRP: Attention-Aware Layer-Wise Relevance Propagation for Transformers. - Taha Entesari, Sina Sharifi, Mahyar Fazlyab:
Compositional Curvature Bounds for Deep Neural Networks. - Kavosh Asadi, Yao Liu, Shoham Sabach, Ming Yin, Rasool Fakoor:
Learning the Target Network in Function Space. - Nantian He, Shaohui Li, Zhi Li, Yu Liu, You He:
ReDiffuser: Reliable Decision-Making Using a Diffuser with Confidence Estimation. - Yunjuan Wang, Raman Arora:
Adversarially Robust Hypothesis Transfer Learning. - Parand A. Alamdari, Toryn Q. Klassen, Elliot Creager, Sheila A. McIlraith:
Remembering to Be Fair: Non-Markovian Fairness in Sequential Decision Making. - Juntao Ren, Gokul Swamy, Steven Wu, Drew Bagnell, Sanjiban Choudhury:
Hybrid Inverse Reinforcement Learning. - Henry W. Sprueill, Carl Edwards, Khushbu Agarwal, Mariefel V. Olarte, Udishnu Sanyal, Conrad Johnston, Hongbin Liu, Heng Ji, Sutanay Choudhury:
CHEMREASONER: Heuristic Search over a Large Language Model's Knowledge Space using Quantum-Chemical Feedback. - Youhe Jiang, Ran Yan, Xiaozhe Yao, Yang Zhou, Beidi Chen, Binhang Yuan:
HexGen: Generative Inference of Large Language Model over Heterogeneous Environment. - Yan Wang, Lihao Wang, Yuning Shen, Yiqun Wang, Huizhuo Yuan, Yue Wu, Quanquan Gu:
Protein Conformation Generation via Force-Guided SE(3) Diffusion Models. - Jiecheng Lu, Xu Han, Yan Sun, Shihao Yang:
CATS: Enhancing Multivariate Time Series Forecasting by Constructing Auxiliary Time Series as Exogenous Variables. - Etash Kumar Guha, Vihan Lakshman:
On the Diminishing Returns of Width for Continual Learning. - Zheng Huang, Qihui Yang, Dawei Zhou, Yujun Yan:
Enhancing Size Generalization in Graph Neural Networks through Disentangled Representation Learning. - Jiahao Ai, Zhimei Ren:
Not all distributional shifts are equal: Fine-grained robust conformal inference. - Wenhong Zhu, Hongkun Hao, Zhiwei He, Yiming Ai, Rui Wang:
Improving Open-Ended Text Generation via Adaptive Decoding. - Juntao Dai, Yaodong Yang, Qian Zheng, Gang Pan:
Safe Reinforcement Learning using Finite-Horizon Gradient-based Estimation. - Pablo Villalobos, Anson Ho, Jaime Sevilla, Tamay Besiroglu, Lennart Heim, Marius Hobbhahn:
Position: Will we run out of data? Limits of LLM scaling based on human-generated data. - Fan Yin, Jayanth Srinivasa, Kai-Wei Chang:
Characterizing Truthfulness in Large Language Model Generations with Local Intrinsic Dimension. - Amir Azarmehr, Soheil Behnezhad, Mohammad Roghani:
Bipartite Matching in Massive Graphs: A Tight Analysis of EDCS. - Manuel Cherep, Nikhil Singh, Jessica Shand:
Creative Text-to-Audio Generation via Synthesizer Programming. - Xuhong Li, Jiamin Chen, Yekun Chai, Haoyi Xiong:
GiLOT: Interpreting Generative Language Models via Optimal Transport. - Ziyu Wan, Xidong Feng, Muning Wen, Stephen Marcus McAleer, Ying Wen, Weinan Zhang, Jun Wang:
AlphaZero-Like Tree-Search can Guide Large Language Model Decoding and Training. - Yi Wang, Zhiren Wang:
CW Complex Hypothesis for Image Data. - Sijia Chen, Baochun Li:
Toward Adaptive Reasoning in Large Language Models with Thought Rollback. - Akbar Rafiey:
Decomposable Submodular Maximization in Federated Setting. - Jinxin Liu, Xinghong Guo, Zifeng Zhuang, Donglin Wang:
DIDI: Diffusion-Guided Diversity for Offline Behavioral Generation. - Elvis Dohmatob:
Consistent Adversarially Robust Linear Classification: Non-Parametric Setting. - Haixiao Wang, Zhichao Wang:
Optimal Exact Recovery in Semi-Supervised Learning: A Study of Spectral Methods and Graph Convolutional Networks. - Zhening Li, Gabriel Poesia, Armando Solar-Lezama:
When Do Skills Help Reinforcement Learning? A Theoretical Analysis of Temporal Abstractions. - Dongkwan Kim, Alice Oh:
Translating Subgraphs to Nodes Makes Simple GNNs Strong and Efficient for Subgraph Representation Learning. - Wenbo Chen, Haoruo Zhao, Mathieu Tanneau, Pascal Van Hentenryck:
Compact Optimality Verification for Optimization Proxies. - Yifan Zhang, Min-Ling Zhang:
Generalization Analysis for Multi-Label Learning. - Sandesh Adhikary, Anqi Li, Byron Boots:
BeigeMaps: Behavioral Eigenmaps for Reinforcement Learning from Images. - Jihyeong Jung, Sangwoo Seo, Sungwon Kim, Chanyoung Park:
Unsupervised Episode Generation for Graph Meta-learning. - Avishek Ghosh, Arya Mazumdar:
Agnostic Learning of Mixed Linear Regressions with EM and AM Algorithms. - Dezhi Peng, Zhenhua Yang, Jiaxin Zhang, Chongyu Liu, Yongxin Shi, Kai Ding, Fengjun Guo, Lianwen Jin:
UPOCR: Towards Unified Pixel-Level OCR Interface. - Zhanpeng Zhou, Zijun Chen, Yilan Chen, Bo Zhang, Junchi Yan:
On the Emergence of Cross-Task Linearity in Pretraining-Finetuning Paradigm. - Mingzhou Fan, Ruida Zhou, Chao Tian, Xiaoning Qian:
Path-Guided Particle-based Sampling. - Itay Lavie, Guy Gur-Ari, Zohar Ringel:
Towards Understanding Inductive Bias in Transformers: A View From Infinity. - Soufiane Hayou, Nikhil Ghosh, Bin Yu:
LoRA+: Efficient Low Rank Adaptation of Large Models. - Tung-Yu Wu, Yu-Xiang Lin, Tsui-Wei Weng:
AND: Audio Network Dissection for Interpreting Deep Acoustic Models. - Aaron Jiaxun Li, Robin Netzorg, Zhihan Cheng, Zhuoqin Zhang, Bin Yu:
Improving Prototypical Visual Explanations with Reward Reweighing, Reselection, and Retraining. - Huancheng Chen, Haris Vikalo:
Recovering Labels from Local Updates in Federated Learning. - Michael Poli, Armin W. Thomas, Eric Nguyen, Pragaash Ponnusamy, Björn Deiseroth, Kristian Kersting, Taiji Suzuki, Brian Hie, Stefano Ermon, Christopher Ré, Ce Zhang, Stefano Massaroli:
Mechanistic Design and Scaling of Hybrid Architectures. - Chen-Chia Chang, Yikang Shen, Shaoze Fan, Jing Li, Shun Zhang, Ningyuan Cao, Yiran Chen, Xin Zhang:
LaMAGIC: Language-Model-based Topology Generation for Analog Integrated Circuits. - Haiqing Zhu, Alexander Soen, Yun Kuen Cheung, Lexing Xie:
Online Learning in Betting Markets: Profit versus Prediction. - Aaron J. Havens, Alexandre Araujo, Huan Zhang, Bin Hu:
Fine-grained Local Sensitivity Analysis of Standard Dot-Product Self-Attention. - Zichong Li, Qunzhi Xu, Zhenghao Xu, Yajun Mei, Tuo Zhao, Hongyuan Zha:
Beyond Point Prediction: Score Matching-based Pseudolikelihood Estimation of Neural Marked Spatio-Temporal Point Process. - Fabing Li, Yuanhao Zhai, Shuangyu Cai, Mingyu Gao:
Seesaw: Compensating for Nonlinear Reduction with Linear Computations for Private Inference. - Wu Lin, Felix Dangel, Runa Eschenhagen, Juhan Bae, Richard E. Turner, Alireza Makhzani:
Can We Remove the Square-Root in Adaptive Gradient Methods? A Second-Order Perspective. - Zhenwei Lin, Chenyu Xue, Qi Deng, Yinyu Ye:
A Single-Loop Robust Policy Gradient Method for Robust Markov Decision Processes. - Luyang Fang, Yongkai Chen, Wenxuan Zhong, Ping Ma:
Bayesian Knowledge Distillation: A Bayesian Perspective of Distillation with Uncertainty Quantification. - Moonjeong Park, Jaeseung Heo, Dongwoo Kim:
Mitigating Oversmoothing Through Reverse Process of GNNs for Heterophilic Graphs. - Andreas Abildtrup Hansen, Anna Calissano, Aasa Feragen:
Interpreting Equivariant Representations. - Md Rifat Arefin, Yan Zhang, Aristide Baratin, Francesco Locatello, Irina Rish, Dianbo Liu, Kenji Kawaguchi:
Unsupervised Concept Discovery Mitigates Spurious Correlations. - Saurabh Agarwal, Bilge Acun, Basil Hosmer, Mostafa Elhoushi, Yejin Lee, Shivaram Venkataraman, Dimitris Papailiopoulos, Carole-Jean Wu:
CHAI: Clustered Head Attention for Efficient LLM Inference. - Bohang Zhang, Lingxiao Zhao, Haggai Maron:
On the Expressive Power of Spectral Invariant Graph Neural Networks. - Vy Vo, Trung Le, Long Tung Vuong, He Zhao, Edwin V. Bonilla, Dinh Phung:
Parameter Estimation in DAGs from Incomplete Data via Optimal Transport. - Yi Liu, Qirui Hu, Linglong Kong:
Tuning-free Estimation and Inference of Cumulative Distribution Function under Local Differential Privacy. - Hanqi Li, Lu Chen, Da Ma, Zijian Wu, Su Zhu, Kai Yu:
Evolving Subnetwork Training for Large Language Models. - Seok-Ju Hahn, Gi-Soo Kim, Junghye Lee:
Pursuing Overall Welfare in Federated Learning through Sequential Decision Making. - Raviteja Vemulapalli, Hadi Pouransari, Fartash Faghri, Sachin Mehta, Mehrdad Farajtabar, Mohammad Rastegari, Oncel Tuzel:
Knowledge Transfer from Vision Foundation Models for Efficient Training of Small Task-specific Models. - Yushu Pan, Elias Bareinboim:
Counterfactual Image Editing. - Hyunsu Kim, Yegon Kim, Hongseok Yang, Juho Lee:
Variational Partial Group Convolutions for Input-Aware Partial Equivariance of Rotations and Color-Shifts. - Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang:
SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models. - Md. Shamim Hussain, Mohammed J. Zaki, Dharmashankar Subramanian:
Triplet Interaction Improves Graph Transformers: Accurate Molecular Graph Learning with Triplet Graph Transformers. - Kan Xu, Hamsa Bastani, Surbhi Goel, Osbert Bastani:
Stochastic Bandits with ReLU Neural Networks. - Yaxi Hu, Amartya Sanyal, Bernhard Schölkopf:
Provable Privacy with Non-Private Pre-Processing. - Zhengyan Zhang, Chaojun Xiao, Qiujieli Qin, Yankai Lin, Zhiyuan Zeng, Xu Han, Zhiyuan Liu, Ruobing Xie, Maosong Sun, Jie Zhou:
Exploring the Benefit of Activation Sparsity in Pre-training. - Gyusang Cho, Chan-Hyun Youn:
Tilt and Average : Geometric Adjustment of the Last Layer for Recalibration. - Seojin Kim, Jaehyun Nam, Sihyun Yu, Younghoon Shin, Jinwoo Shin:
Data-Efficient Molecular Generation with Hierarchical Textual Inversion. - Hoyoung Kim, Sehyun Hwang, Suha Kwak, Jungseul Ok:
Active Label Correction for Semantic Segmentation with Foundation Models. - Yingying Li, Jing Yu, Lauren Conger, Taylan Kargin, Adam Wierman:
Learning the Uncertainty Sets of Linear Control Systems via Set Membership: A Non-asymptotic Analysis. - Daniil Vankov, Angelia Nedich, Lalitha Sankar:
Generalized Smooth Variational Inequalities: Methods with Adaptive Stepsizes. - Shikhar Murty, Christopher D. Manning, Peter Shaw, Mandar Joshi, Kenton Lee:
BAGEL: Bootstrapping Agents by Guiding Exploration with Language. - Zhu Xu, Qingchao Chen, Yuxin Peng, Yang Liu:
Semantic-Aware Human Object Interaction Image Generation. - Matias D. Cattaneo, Jason M. Klusowski, Boris Shigida:
On the Implicit Bias of Adam. - Mukesh Ghimire, Lei Zhang, Zhe Xu, Yi Ren:
State-Constrained Zero-Sum Differential Games with One-Sided Information. - Debangshu Banerjee, Gagandeep Singh:
Relational DNN Verification With Cross Executional Bound Refinement. - Aimee Maurais, Youssef M. Marzouk:
Sampling in Unit Time with Kernel Fisher-Rao Flow. - Yiming Meng, Ruikun Zhou, Amartya Mukherjee, Maxwell Fitzsimmons, Christopher Song, Jun Liu:
Physics-Informed Neural Network Policy Iteration: Algorithms, Convergence, and Verification. - Xingang Guo, Fangxu Yu, Huan Zhang, Lianhui Qin, Bin Hu:
COLD-Attack: Jailbreaking LLMs with Stealthiness and Controllability. - Weijia Zhang, Chenlong Yin, Hao Liu, Xiaofang Zhou, Hui Xiong:
Irregular Multivariate Time Series Forecasting: A Transformable Patching Graph Neural Networks Approach. - Sheng Liu, Haotian Ye, Lei Xing, James Y. Zou:
In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering. - Ouail Kitouni, Niklas Nolte, Víctor Samuel Pérez-Díaz, Sokratis Trifinopoulos, Mike Williams:
From Neurons to Neutrons: A Case Study in Interpretability. - Yufei Wang, Zhanyi Sun, Jesse Zhang, Zhou Xian, Erdem Biyik, David Held, Zackory Erickson:
RL-VLM-F: Reinforcement Learning from Vision Language Foundation Model Feedback. - Dashiell Stander, Qinan Yu, Honglu Fan, Stella Biderman:
Grokking Group Multiplication with Cosets. - Mintong Kang, Zhen Lin, Jimeng Sun, Cao Xiao, Bo Li:
Certifiably Byzantine-Robust Federated Conformal Prediction. - Siyuan Zhou, Yilun Du, Jiaben Chen, Yandong Li, Dit-Yan Yeung, Chuang Gan:
RoboDreamer: Learning Compositional World Models for Robot Imagination. - Siddharth Karamcheti, Suraj Nair, Ashwin Balakrishna, Percy Liang, Thomas Kollar, Dorsa Sadigh:
Prismatic VLMs: Investigating the Design Space of Visually-Conditioned Language Models. - Harry Dong, Xinyu Yang, Zhenyu Zhang, Zhangyang Wang, Yuejie Chi, Beidi Chen:
Get More with LESS: Synthesizing Recurrence with KV Cache Compression for Efficient LLM Inference. - Milind Prabhu, David P. Woodruff:
Learning Multiple Secrets in Mastermind. - Tam Minh Nguyen, César A. Uribe, Tan Minh Nguyen, Richard G. Baraniuk:
PIDformer: Transformer Meets Control Theory. - Aaron Fisher:
Inverse-Variance Weighting for Estimation of Heterogeneous Treatment Effects. - Sophia Huiwen Sun, Abishek Sankararaman, Balakrishnan Narayanaswamy:
Online Adaptive Anomaly Thresholding with Confidence Sequences. - Liheng Ma, Soumyasundar Pal, Yitian Zhang, Jiaming Zhou, Yingxue Zhang, Mark Coates:
CKGConv: General Graph Convolution with Continuous Kernels. - Alexander Gao, Maurice Chu, Mubbasir Kapadia, Ming C. Lin, Hsueh-Ti Derek Liu:
An Intrinsic Vector Heat Network. - Reyna Abhyankar, Zijian He, Vikranth Srivatsa, Hao Zhang, Yiying Zhang:
InferCept: Efficient Intercept Support for Augmented Large Language Model Inference. - Eiki Shimizu, Kenji Fukumizu, Dino Sejdinovic:
Neural-Kernel Conditional Mean Embeddings. - Marvin Li, Sitan Chen:
Critical windows: non-asymptotic theory for feature emergence in diffusion models. - Changze Lv, Yansen Wang, Dongqi Han, Xiaoqing Zheng, Xuanjing Huang, Dongsheng Li:
Efficient and Effective Time-Series Forecasting with Spiking Neural Networks. - Arnold Kazadi, James Doss-Gollin, Arlei Lopes da Silva:
Pluvial Flood Emulation with Hydraulics-informed Message Passing. - Gavin Brown, Krishnamurthy Dj Dvijotham, Georgina Evans, Daogao Liu, Adam Smith, Abhradeep Guha Thakurta:
Private Gradient Descent for Linear Regression: Tighter Error Bounds and Instance-Specific Uncertainty Estimation. - Lisang Ding, Ziang Chen, Xinshang Wang, Wotao Yin:
Efficient Algorithms for Sum-Of-Minimum Optimization. - Vithursan Thangarasa, Shreyas Saxena, Abhay Gupta, Sean Lie:
Sparse-IFT: Sparse Iso-FLOP Transformations for Maximizing Training Efficiency. - Logan Weber, Jesse Michel, Alex Renda, Michael Carbin:
Learning to Compile Programs to Neural Networks. - Shikai Qiu, Boran Han, Danielle C. Maddix, Shuai Zhang, Bernie Wang, Andrew Gordon Wilson:
Transferring Knowledge From Large Foundation Models to Small Downstream Models. - Yuanwei Liu, Junwei Han, Xiwen Yao, Salman Khan, Hisham Cholakkal, Rao Muhammad Anwer, Nian Liu, Fahad Shahbaz Khan:
Bidirectional Reciprocative Information Communication for Few-Shot Semantic Segmentation. - Jiaming Tang, Yilong Zhao, Kan Zhu, Guangxuan Xiao, Baris Kasikci, Song Han:
QUEST: Query-Aware Sparsity for Efficient Long-Context LLM Inference. - Donghu Kim, Hojoon Lee, Kyungmin Lee, Dongyoon Hwang, Jaegul Choo:
Investigating Pre-Training Objectives for Generalization in Vision-Based Reinforcement Learning. - Irene Wang, Jakub Tarnawski, Amar Phanishayee, Divya Mahajan:
Integrated Hardware Architecture and Device Placement Search. - Liam Schramm, Abdeslam Boularias:
Provably Efficient Long-Horizon Exploration in Monte Carlo Tree Search through State Occupancy Regularization. - Lichang Chen, Jiuhai Chen, Tom Goldstein, Heng Huang, Tianyi Zhou:
InstructZero: Efficient Instruction Optimization for Black-Box Large Language Models. - Rudrajit Das, Xi Chen, Bertram Ieong, Parikshit Bansal, Sujay Sanghavi:
Understanding the Training Speedup from Sampling with Approximate Losses. - Ding Huang, Ting Li, Jian Huang:
Bayesian Power Steering: An Effective Approach for Domain Adaptation of Diffusion Models. - Haoqing Xu, Dian Shen, Meng Wang, Beilun Wang:
Adaptive Group Personalization for Federated Mutual Transfer Learning. - Oleksiy Ostapenko, Zhan Su, Edoardo M. Ponti, Laurent Charlin, Nicolas Le Roux, Lucas Caccia, Alessandro Sordoni:
Towards Modular LLMs by Building and Reusing a Library of LoRAs. - Elise Han, Chengpiao Huang, Kaizheng Wang:
Model Assessment and Selection under Temporal Distribution Shift. - Wei-Ning Chen, Berivan Isik, Peter Kairouz, Albert No, Sewoong Oh, Zheng Xu:
Improved Communication-Privacy Trade-offs in L2 Mean Estimation under Streaming Differential Privacy. - Xueyang Tang, Song Guo, Jingcai Guo, Jie Zhang, Yue Yu:
Causally Motivated Personalized Federated Invariant Learning with Shortcut-Averse Information-Theoretic Regularization. - Bodhisattwa Prasad Majumder, Harshit Surana, Dhruv Agarwal, Sanchaita Hazra, Ashish Sabharwal, Peter Clark:
Position: Data-driven Discovery with Large Generative Models. - Mohammad Al-Jarrah, Niyizhen Jin, Bamdad Hosseini, Amirhossein Taghvaei:
Nonlinear Filtering with Brenier Optimal Transport Maps. - Yufei Wang, Zhou Xian, Feng Chen, Tsun-Hsuan Wang, Yian Wang, Katerina Fragkiadaki, Zackory Erickson, David Held, Chuang Gan:
RoboGen: Towards Unleashing Infinite Data for Automated Robot Learning via Generative Simulation. - Sidi Lu, Hongyi Liu, Asli Celikyilmaz, Tianlu Wang, Nanyun Peng:
Open-Domain Text Evaluation via Contrastive Distribution Methods. - Michelle Pan, Mariah L. Schrum, Vivek Myers, Erdem Biyik, Anca D. Dragan:
Coprocessor Actor Critic: A Model-Based Reinforcement Learning Approach For Adaptive Brain Stimulation. - Puhua Niu, Shili Wu, Mingzhou Fan, Xiaoning Qian:
GFlowNet Training by Policy Gradients. - Ha Manh Bui, Anqi Liu:
Density-Softmax: Efficient Test-time Model for Uncertainty Estimation and Robustness under Distribution Shifts. - Pengyi Li, Jianye Hao, Hongyao Tang, Yan Zheng, Fazl Barez:
Value-Evolutionary-Based Reinforcement Learning. - Chuanhao Sun, Zhihang Yuan, Kai Xu, Luo Mai, N. Siddharth, Shuo Chen, Mahesh K. Marina:
Learning High-Frequency Functions Made Easy with Sinusoidal Positional Encoding. - Charles Jin, Martin C. Rinard:
Emergent Representations of Program Semantics in Language Models Trained on Programs. - Justin Cui, Ruochen Wang, Yuanhao Xiong, Cho-Jui Hsieh:
Ameliorate Spurious Correlations in Dataset Condensation. - Qinzi Zhang, Ashok Cutkosky:
Random Scaling and Momentum for Non-smooth Non-convex Optimization. - Chunlin Tian, Zhan Shi, Xinpeng Qin, Li Li, Chengzhong Xu:
Ranking-based Client Imitation Selection for Efficient Federated Learning. - Pedram Akbarian, Tongzheng Ren, Jiacheng Zhuo, Sujay Sanghavi, Nhat Ho:
Improving Computational Complexity in Statistical Models with Local Curvature Information. - Edwin Zhang, Sadie Zhao, Tonghan Wang, Safwan Hossain, Henry Gasztowtt, Stephan Zheng, David C. Parkes, Milind Tambe, Yiling Chen:
Position: Social Environment Design Should be Further Developed for AI-based Policy-Making. - Lichang Chen, Chen Zhu, Jiuhai Chen, Davit Soselia, Tianyi Zhou, Tom Goldstein, Heng Huang, Mohammad Shoeybi, Bryan Catanzaro:
ODIN: Disentangled Reward Mitigates Hacking in RLHF. - Huijie Zhang, Jinfan Zhou, Yifu Lu, Minzhe Guo, Peng Wang, Liyue Shen, Qing Qu:
The Emergence of Reproducibility and Consistency in Diffusion Models. - Kerui Gu, Rongyu Chen, Xuanlong Yu, Angela Yao:
On the Calibration of Human Pose Estimation. - Arsalan Motamedi, Pooya Ronagh:
Gibbs Sampling of Continuous Potentials on a Quantum Computer. - Xingcheng Fu, Yisen Gao, Yuecen Wei, Qingyun Sun, Hao Peng, Jianxin Li, Xianxian Li:
Hyperbolic Geometric Latent Diffusion Model for Graph Generation. - Yunxin Li, Baotian Hu, Haoyuan Shi, Wei Wang, Longyue Wang, Min Zhang:
VisionGraph: Leveraging Large Multimodal Models for Graph Theory Problems in Visual Context. - Shivam Gupta, Ajil Jalal, Aditya Parulekar, Eric Price, Zhiyang Xun:
Diffusion Posterior Sampling is Computationally Intractable. - Christopher Mohri, Tatsunori Hashimoto:
Language Models with Conformal Factuality Guarantees. - Alexandre Drouin, Maxime Gasse, Massimo Caccia, Issam H. Laradji, Manuel Del Verme, Tom Marty, David Vázquez, Nicolas Chapados, Alexandre Lacoste:
WorkArena: How Capable are Web Agents at Solving Common Knowledge Work Tasks? - Bram Wouters:
Optimizing Watermarks for Large Language Models. - Runze Liu, Yali Du, Fengshuo Bai, Jiafei Lyu, Xiu Li:
PEARL: Zero-shot Cross-task Preference Alignment and Robust Reward Learning for Robotic Manipulation. - Zhenqiao Song, Yunlong Zhao, Wenxian Shi, Wengong Jin, Yang Yang, Lei Li:
Generative Enzyme Design Guided by Functionally Important Sites and Small-Molecule Substrates. - Sudeep Salgia, Sattar Vakili, Qing Zhao:
Random Exploration in Bayesian Optimization: Order-Optimal Regret and Computational Efficiency. - Yassir Jedra, William Réveillard, Stefan Stojanovic, Alexandre Proutière:
Low-Rank Bandits via Tight Two-to-Infinity Singular Subspace Recovery. - Yue He, Dongbai Li, Pengfei Tian, Han Yu, Jiashuo Liu, Hao Zou, Peng Cui:
Domain-wise Data Acquisition to Improve Performance under Distribution Shift. - Hamidreza Kamkari, Brendan Leigh Ross, Jesse C. Cresswell, Anthony L. Caterini, Rahul G. Krishnan, Gabriel Loaiza-Ganem:
A Geometric Explanation of the Likelihood OOD Detection Paradox. - Andrew Jacobsen, Ashok Cutkosky:
Online Linear Regression in Dynamic Environments via Discounting. - Ju-Hyun Kim, Seungki Min:
Risk-Sensitive Policy Optimization via Predictive CVaR Policy Gradient. - Shomik Jain, Kathleen Creel, Ashia Camage Wilson:
Position: Scarce Resource Allocations That Rely On Machine Learning Should Be Randomized. - Rohin Manvi, Samar Khanna, Marshall Burke, David B. Lobell, Stefano Ermon:
Large Language Models are Geographically Biased. - Haotian Lin, Matthew Reimherr:
Smoothness Adaptive Hypothesis Transfer Learning. - Yuwei Zeng, Yao Mu, Lin Shao:
Learning Reward for Robot Skills Using Large Language Models via Self-Alignment. - Qi Zhang, Tianqi Du, Haotian Huang, Yifei Wang, Yisen Wang:
Look Ahead or Look Around? A Theoretical Comparison Between Autoregressive and Masked Pretraining. - Shubhada Agrawal, Prashanth L. A., Siva Theja Maguluri:
Policy Evaluation for Variance in Average Reward Reinforcement Learning. - Zhepeng Cen, Yihang Yao, Zuxin Liu, Ding Zhao:
Feasibility Consistent Representation Learning for Safe Reinforcement Learning. - Saswat Das, Marco Romanelli, Ferdinando Fioretto:
Disparate Impact on Group Accuracy of Linearization for Private Inference. - Aaron M. Ferber, Arman Zharmagambetov, Taoan Huang, Bistra Dilkina, Yuandong Tian:
GenCO: Generating Diverse Designs with Combinatorial Constraints. - Nicholas Monath, Will Sussman Grathwohl, Michael Boratko, Rob Fergus, Andrew McCallum, Manzil Zaheer:
A Fresh Take on Stale Embeddings: Improving Dense Retriever Training with Corrector Networks. - Ignat Georgiev, Krishnan Srinivasan, Jie Xu, Eric Heiden, Animesh Garg:
Adaptive Horizon Actor-Critic for Policy Learning in Contact-Rich Differentiable Simulation. - Daniel Csillag, Cláudio José Struchiner, Guilherme Tegoni Goedert:
Generalization Bounds for Causal Regression: Insights, Guarantees and Sensitivity Analysis. - Yunjuan Wang, Kaibo Zhang, Raman Arora:
Benign Overfitting in Adversarial Training of Neural Networks. - Shuze Liu, Shangtong Zhang:
Efficient Policy Evaluation with Offline Data Informed Behavior Policy Design. - Samir Khan, Martin Saveski, Johan Ugander:
Off-policy Evaluation Beyond Overlap: Sharp Partial Identification Under Smoothness. - Alex DeWeese, Guannan Qu:
Locally Interdependent Multi-Agent MDP: Theoretical Framework for Decentralized Agents with Dynamic Dependencies. - Zixiao Wang, AmirEmad Ghassami, Ilya Shpitser:
Identification and Estimation for Nonignorable Missing Data: A Data Fusion Approach. - Lingxiao Li, Raaz Dwivedi, Lester Mackey:
Debiased Distribution Compression. - Subhojyoti Mukherjee, Josiah P. Hanna, Robert D. Nowak:
SaVeR: Optimal Data Collection Strategy for Safe Policy Evaluation in Tabular MDP. - Nils Palumbo, Yang Guo, Xi Wu, Jiefeng Chen, Yingyu Liang, Somesh Jha:
Two Heads are Actually Better than One: Towards Better Adversarial Robustness via Transduction and Rejection. - Chung-Yiu Yau, Hoi-To Wai, Parameswaran Raman, Soumajyoti Sarkar, Mingyi Hong:
EMC²: Efficient MCMC Negative Sampling for Contrastive Learning with Global Convergence. - Yuhao Wang, Qiang Zhang, Ming Qin, Xiang Zhuang, Xiaotong Li, Zhichen Gong, Zeyuan Wang, Yu Zhao, Jianhua Yao, Keyan Ding, Huajun Chen:
Knowledge-aware Reinforced Language Models for Protein Directed Evolution. - Bhaskar Ray Chaudhury, Aniket Murhekar, Zhuowen Yuan, Bo Li, Ruta Mehta, Ariel D. Procaccia:
Fair Federated Learning via the Proportional Veto Core. - Atli Kosson, Bettina Messmer, Martin Jaggi:
Rotational Equilibrium: How Weight Decay Balances Learning Across Neural Networks. - Hongming Piao, Yichen Wu, Dapeng Wu, Ying Wei:
Federated Continual Learning via Prompt-based Dual Knowledge Transfer. - Hongyu Liu, Runmin Cong, Hua Li, Qianqian Xu, Qingming Huang, Wei Zhang:
ESNet: Evolution and Succession Network for High-Resolution Salient Object Detection. - Mai Zeng, Florence Regol, Mark Coates:
Interacting Diffusion Processes for Event Sequence Forecasting. - Ivan Dubrovsky, Andrei Dmitrenko, Aleksei Dmitrenko, Nikita Serov, Vladimir Vinogradov:
Unveiling the Potential of AI for Nanomaterial Morphology Prediction. - Elias Stengel-Eskin, Archiki Prasad, Mohit Bansal:
ReGAL: Refactoring Programs to Discover Generalizable Abstractions. - Hongchang Gao:
A Doubly Recursive Stochastic Compositional Gradient Descent Method for Federated Multi-Level Compositional Optimization. - Yipei Wang, Xiaoqian Wang:
Benchmarking Deletion Metrics with the Principled Explanations. - Lunyiu Nie, Zhimin Ding, Erdong Hu, Christopher M. Jermaine, Swarat Chaudhuri:
Online Cascade Learning for Efficient Inference over Streams. - Albert Tseng, Jerry Chee, Qingyao Sun, Volodymyr Kuleshov, Christopher De Sa:
QuIP#: Even Better LLM Quantization with Hadamard Incoherence and Lattice Codebooks. - Seungyeon Kim, Ankit Singh Rawat, Manzil Zaheer, Wittawat Jitkrittum, Veeranjaneyulu Sadhanala, Sadeep Jayasumana, Aditya Krishna Menon, Rob Fergus, Sanjiv Kumar:
USTAD: Unified Single-model Training Achieving Diverse Scores for Information Retrieval. - Mohammed Muqeeth, Haokun Liu, Yufan Liu, Colin Raffel:
Learning to Route Among Specialized Experts for Zero-Shot Generalization. - Lu Han, Han-Jia Ye, De-Chuan Zhan:
SIN: Selective and Interpretable Normalization for Long-Term Time Series Forecasting. - Zheng Xiong, Risto Vuorio, Jacob Beck, Matthieu Zimmer, Kun Shao, Shimon Whiteson:
Distilling Morphology-Conditioned Hypernetworks for Efficient Universal Morphology Control. - Ruigang Wang, Krishnamurthy Dj Dvijotham, Ian R. Manchester:
Monotone, Bi-Lipschitz, and Polyak-Łojasiewicz Networks. - Christopher J. Anders, Kim Andrea Nicoli, Bingting Wu, Naima Elosegui, Samuele Pedrielli, Lena Funcke, Karl Jansen, Stefan Kühn, Shinichi Nakajima:
Adaptive Observation Cost Control for Variational Quantum Eigensolvers. - Masatoshi Uehara, Yulai Zhao, Kevin Black, Ehsan Hajiramezanali, Gabriele Scalia, Nathaniel Lee Diamant, Alex M. Tseng, Sergey Levine, Tommaso Biancalani:
Feedback Efficient Online Fine-Tuning of Diffusion Models. - Sulin Liu, Peter J. Ramadge, Ryan P. Adams:
Generative Marginalization Models. - Zi-Hao Qiu, Siqi Guo, Mao Xu, Tuo Zhao, Lijun Zhang, Tianbao Yang:
To Cool or not to Cool? Temperature Network Meets Large Foundation Models via DRO. - Wenxi Lv, Qinliang Su, Hai Wan, Hongteng Xu, Wenchao Xu:
Contamination-Resilient Anomaly Detection via Adversarial Learning on Partially-Observed Normal and Anomalous Data. - Poompol Buathong, Jiayue Wan, Raul Astudillo, Samuel Daulton, Maximilian Balandat, Peter I. Frazier:
Bayesian Optimization of Function Networks with Partial Evaluations. - Saurabh Mishra, Anant Raj, Sharan Vaswani:
From Inverse Optimization to Feasibility to ERM. - Kate Donahue, Nicole Immorlica, Meena Jagadeesan, Brendan Lucier, Aleksandrs Slivkins:
Impact of Decentralized Learning on Player Utilities in Stackelberg Games. - Hojoon Lee, Hyeonseo Cho, Hyunseung Kim, Donghu Kim, Dugki Min, Jaegul Choo, Clare Lyle:
Slow and Steady Wins the Race: Maintaining Plasticity with Hare and Tortoise Networks. - Boyi Wei, Kaixuan Huang, Yangsibo Huang, Tinghao Xie, Xiangyu Qi, Mengzhou Xia, Prateek Mittal, Mengdi Wang, Peter Henderson:
Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications. - Vineet Jain, Siamak Ravanbakhsh:
Learning to Reach Goals via Diffusion. - Wei-Lin Chiang, Lianmin Zheng, Ying Sheng, Anastasios Nikolas Angelopoulos, Tianle Li, Dacheng Li, Banghua Zhu, Hao Zhang, Michael I. Jordan, Joseph E. Gonzalez, Ion Stoica:
Chatbot Arena: An Open Platform for Evaluating LLMs by Human Preference. - Hossein Esfandiari, Praneeth Kacham, Vahab Mirrokni, David P. Woodruff, Peilin Zhong:
High-Dimensional Geometric Streaming for Nearly Low Rank Data. - Nicolás Alvarado, Hans Löbel:
Hyperbolic Optimizer as a Dynamical System. - Yan Zhong, Xingyu Wu, Li Zhang, Chenxi Yang, Tingting Jiang:
Causal-IQA: Towards the Generalization of Image Quality Assessment Based on Causal Inference. - Umais Zahid, Qinghai Guo, Zafeirios Fountas:
Sample as you Infer: Predictive Coding with Langevin Dynamics. - Reda Marzouk, Colin de la Higuera:
On the Tractability of SHAP Explanations under Markovian Distributions. - Zijian Guo, Weichao Zhou, Wenchao Li:
Temporal Logic Specification-Conditioned Decision Transformer for Offline Safe Reinforcement Learning. - Yichao Fu, Peter Bailis, Ion Stoica, Hao Zhang:
Break the Sequential Dependency of LLM Inference Using Lookahead Decoding. - Huy Nguyen, Nhat Ho, Alessandro Rinaldo:
On Least Square Estimation in Softmax Gating Mixture of Experts. - Zhihao Li, Yufei Wang, Alex C. Kot, Bihan Wen:
Compress Clean Signal from Noisy Raw Image: A Self-Supervised Approach. - Rishabh Kabra, Loic Matthey, Alexander Lerchner, Niloy J. Mitra:
Leveraging VLM-Based Pipelines to Annotate 3D Objects. - Justin Chih-Yao Chen, Swarnadeep Saha, Elias Stengel-Eskin, Mohit Bansal:
MAGDi: Structured Distillation of Multi-Agent Interaction Graphs Improves Reasoning in Smaller Language Models. - Zixuan Wang, Stanley Wei, Daniel Hsu, Jason D. Lee:
Transformers Provably Learn Sparse Token Selection While Fully-Connected Nets Cannot. - Pierre-François Massiani, Sebastian Trimpe, Friedrich Solowjow:
On the Consistency of Kernel Methods with Dependent Observations. - Weizhe Yuan, Richard Yuanzhe Pang, Kyunghyun Cho, Xian Li, Sainbayar Sukhbaatar, Jing Xu, Jason Weston:
Self-Rewarding Language Models. - Daniel D. Johnson, Daniel Tarlow, David Duvenaud, Chris J. Maddison:
Experts Don't Cheat: Learning What You Don't Know By Predicting Pairs. - Yulong Zhang, Yuan Yao, Shuhao Chen, Pengrong Jin, Yu Zhang, Jian Jin, Jiangang Lu:
Rethinking Guidance Information to Utilize Unlabeled Samples: A Label Encoding Perspective. - Yueqi Xie, Minghong Fang, Neil Zhenqiang Gong:
FedREDefense: Defending against Model Poisoning Attacks for Federated Learning using Model Update Reconstruction Error. - Kiarash Banihashem, MohammadTaghi Hajiaghayi, Dariusz Rafal Kowalski, Jan Olkowski, Max Springer:
Dynamic Metric Embedding into ℓp Space. - Liwen Sun, Abhineet Agarwal, Aaron Kornblith, Bin Yu, Chenyan Xiong:
ED-Copilot: Reduce Emergency Department Wait Time with Language Model Diagnostic Assistance. - Abhishek Panigrahi, Sadhika Malladi, Mengzhou Xia, Sanjeev Arora:
Trainable Transformer in Transformer. - Fuzhong Zhou, Chenyu Zhang, Xu Chen, Xuan Di:
Graphon Mean Field Games with a Representative Player: Analysis and Learning Algorithm. - Yatong Chen, Wei Tang, Chien-Ju Ho, Yang Liu:
Performative Prediction with Bandit Feedback: Learning through Reparameterization. - Paulo Soares, Adarsh Pyarelal, Meghavarshini Krishnaswamy, Emily Butler, Kobus Barnard:
Probabilistic Modeling of Interpersonal Coordination Processes. - Zhiyi Li, Yunbei Xu, Ruohan Zhan:
Statistical Properties of Robust Satisficing. - Royson Lee, Javier Fernández-Marqués, Shell Xu Hu, Da Li, Stefanos Laskaridis, Lukasz Dudziak, Timothy M. Hospedales, Ferenc Huszár, Nicholas Donald Lane:
Recurrent Early Exits for Federated Learning with Heterogeneous Clients. - Abhimanyu Das, Weihao Kong, Rajat Sen, Yichen Zhou:
A decoder-only foundation model for time-series forecasting. - Weikang Qiu, Huangrui Chu, Selena Wang, Haolan Zuo, Xiaoxiao Li, Yize Zhao, Rex Ying:
Learning High-Order Relationships of Brain Regions. - Phillip A. Kerger, Marco Molinaro, Hongyi Jiang, Amitabh Basu:
A Universal Transfer Theorem for Convex Optimization Algorithms Using Inexact First-order Oracles. - Yuchen Li, Laura Balzano, Deanna Needell, Hanbaek Lyu:
Convergence and Complexity Guarantee for Inexact First-order Riemannian Optimization Algorithms. - Kaiwen Wu, Jacob R. Gardner:
Understanding Stochastic Natural Gradient Variational Inference. - Shusen Jing, Anlan Yu, Shuai Zhang, Songyang Zhang:
FedSC: Provable Federated Self-supervised Learning with Spectral Contrastive Objective over Non-i.i.d. Data. - Mitchell Black, Lucy Lin, Weng-Keen Wong, Amir Nayyeri:
Biharmonic Distance of Graphs and its Higher-Order Variants: Theoretical Properties with Applications to Centrality and Clustering. - Yuxiang Wei, Zhe Wang, Jiawei Liu, Yifeng Ding, Lingming Zhang:
Magicoder: Empowering Code Generation with OSS-Instruct. - Keyon Vafa, Ashesh Rambachan, Sendhil Mullainathan:
Do Large Language Models Perform the Way People Expect? Measuring the Human Generalization Function. - Sharath Raghvendra, Pouyan Shirzadian, Kaiyi Zhang:
A New Robust Partial p-Wasserstein-Based Metric for Comparing Distributions. - Eshaan Nichani, Alex Damian, Jason D. Lee:
How Transformers Learn Causal Structure with Gradient Descent. - Heasung Kim, Hyeji Kim, Gustavo de Veciana:
Clustered Federated Learning via Gradient-based Partitioning. - Motahareh Sohrabi, Juan Ramirez, Tianyue H. Zhang, Simon Lacoste-Julien, Jose Gallego-Posada:
On PI Controllers for Updating Lagrange Multipliers in Constrained Optimization. - Chao Li, Anbang Yao:
KernelWarehouse: Rethinking the Design of Dynamic Convolution. - Benjamin Eyre, Elliot Creager, David Madras, Vardan Papyan, Richard S. Zemel:
Out of the Ordinary: Spectrally Adapting Regression for Covariate Shift. - Yuelin Zhang, Jiacheng Cen, Jiaqi Han, Zhiqiang Zhang, Jun Zhou, Wenbing Huang:
Improving Equivariant Graph Neural Networks on Large Geometric Graphs via Virtual Nodes Learning. - Haoxin Liu, Harshavardhan Kamarthi, Lingkai Kong, Zhiyuan Zhao, Chao Zhang, B. Aditya Prakash:
Time-Series Forecasting for Out-of-Distribution Generalization Using Invariant Learning. - Zhengyang Tang, Xingxing Zhang, Benyou Wang, Furu Wei:
MathScale: Scaling Instruction Tuning for Mathematical Reasoning. - Tianjun Zhang, Aman Madaan, Luyu Gao, Steven Zheng, Swaroop Mishra, Yiming Yang, Niket Tandon, Uri Alon:
In-Context Principle Learning from Mistakes. - Zhen Qin, Weigao Sun, Dong Li, Xuyang Shen, Weixuan Sun, Yiran Zhong:
Various Lengths, Constant Speed: Efficient Language Modeling with Lightning Attention. - Alexander Kolesov, Petr Mokrov, Igor Udovichenko, Milena Gazdieva, Gudmund Pammer, Evgeny Burnaev, Alexander Korotin:
Estimating Barycenters of Distributions with Neural Optimal Transport. - Cong Hua, Qianqian Xu, Shilong Bao, Zhiyong Yang, Qingming Huang:
ReconBoost: Boosting Can Achieve Modality Reconcilement. - Quan Nguyen, Adji Bousso Dieng:
Quality-Weighted Vendi Scores And Their Application To Diverse Experimental Design. - Natalie Maus, Zhiyuan (Jerry) Lin, Maximilian Balandat, Eytan Bakshy:
Joint Composite Latent Space Bayesian Optimization. - Ishaan Singh Rawal, Alexander Matyasko, Shantanu Jaiswal, Basura Fernando, Cheston Tan:
Dissecting Multimodality in VideoQA Transformer Models by Impairing Modality Fusion. - Hongkang Li, Meng Wang, Songtao Lu, Xiaodong Cui, Pin-Yu Chen:
How Do Nonlinear Transformers Learn and Generalize in In-Context Learning? - Jinyan Su, Sarah Dean:
Learning from Streaming Data when Users Choose. - Luca Masserano, Alexander Shen, Michele Doro, Tommaso Dorigo, Rafael Izbicki, Ann B. Lee:
Classification under Nuisance Parameters and Generalized Label Shift in Likelihood-Free Inference. - Sam Reifenstein, Timothée G. Leleu, Yoshihisa Yamamoto:
Dynamic Anisotropic Smoothing for Noisy Derivative-Free Optimization. - Renpu Liu, Cong Shen, Jing Yang:
Federated Representation Learning in the Under-Parameterized Regime. - Hongcheng Liu, Jindong Tong:
New Sample Complexity Bounds for Sample Average Approximation in Heavy-Tailed Stochastic Programming. - Kai Gan, Tong Wei:
Erasing the Bias: Fine-Tuning Foundation Models for Semi-Supervised Learning. - Micah Carroll, Davis Foote, Anand Siththaranjan, Stuart Russell, Anca D. Dragan:
AI Alignment with Changing and Influenceable Reward Functions. - Cody Rushing, Neel Nanda:
Explorations of Self-Repair in Language Models. - Tomohiro Hayase, Ryo Karakida:
Understanding MLP-Mixer as a wide and sparse MLP. - Vivek Myers, Chongyi Zheng, Anca D. Dragan, Sergey Levine, Benjamin Eysenbach:
Learning Temporal Distances: Contrastive Successor Features Can Provide a Metric Structure for Decision-Making. - Zhongchang Sun, Sihong He, Fei Miao, Shaofeng Zou:
Constrained Reinforcement Learning Under Model Mismatch. - Wu Lin, Felix Dangel, Runa Eschenhagen, Kirill Neklyudov, Agustinus Kristiadi, Richard E. Turner, Alireza Makhzani:
Structured Inverse-Free Natural Gradient Descent: Memory-Efficient & Numerically-Stable KFAC. - Jungbin Lim, Jihwan Kim, Yonghyeon Lee, Cheongjae Jang, Frank C. Park:
Graph Geometry-Preserving Autoencoders. - Xinyun Chen, Ryan A. Chi, Xuezhi Wang, Denny Zhou:
Premise Order Matters in Reasoning with Large Language Models. - Weichen Lin, Jiaxiang Chen, Ruomin Huang, Hu Ding:
An Effective Dynamic Gradient Calibration Method for Continual Learning. - Do-Yeon Kim, Dong-Jun Han, Jun Seo, Jaekyun Moon:
Achieving Lossless Gradient Sparsification via Mapping to Alternative Space in Federated Learning. - Xudong Lu, Aojun Zhou, Yuhui Xu, Renrui Zhang, Peng Gao, Hongsheng Li:
SPP: Sparsity-Preserved Parameter-Efficient Fine-Tuning for Large Language Models. - Han Wang, Sihong He, Zhili Zhang, Fei Miao, James Anderson:
Momentum for the Win: Collaborative Federated Reinforcement Learning across Heterogeneous Environments. - Zhenting Wang, Vikash Sehwag, Chen Chen, Lingjuan Lyu, Dimitris N. Metaxas, Shiqing Ma:
How to Trace Latent Generative Model Generated Images without Artificial Watermark? - George-Octavian Barbulescu, Peter Triantafillou:
To Each (Textual Sequence) Its Own: Improving Memorized-Data Unlearning in Large Language Models. - Langwen Huang, Lukas Gianinazzi, Yuejiang Yu, Peter D. Düben, Torsten Hoefler:
DiffDA: a Diffusion model for weather-scale Data Assimilation. - Lasse Blaauwbroek, Mirek Olsák, Jason Rute, Fidel Ivan Schaposnik Massolo, Jelle Piepenbrock, Vasily Pestun:
Graph2Tac: Online Representation Learning of Formal Math Concepts. - Jintong Gao, He Zhao, Dandan Guo, Hongyuan Zha:
Distribution Alignment Optimization through Neural Collapse for Long-tailed Classification. - Peyman Afshani, Chris Schwiegelshohn:
Optimal Coresets for Low-Dimensional Geometric Median. - Takayuki Osa, Tatsuya Harada:
Discovering Multiple Solutions from a Single Task in Offline Reinforcement Learning. - Song Liu, Jiahao Yu, Jack Simons, Mingxuan Yi, Mark Beaumont:
Minimizing f-Divergences by Interpolating Velocity Fields. - Avrajit Ghosh, Xitong Zhang, Kenneth K. Sun, Qing Qu, Saiprasad Ravishankar, Rongrong Wang:
Optimal Eye Surgeon: Finding image priors through sparse generators at initialization. - Huy Nguyen, Pedram Akbarian, TrungTin Nguyen, Nhat Ho:
A General Theory for Softmax Gating Multinomial Logistic Mixture of Experts. - Taylor Sorensen, Jared Moore, Jillian Fisher, Mitchell L. Gordon, Niloofar Mireshghallah, Christopher Michael Rytting, Andre Ye, Liwei Jiang, Ximing Lu, Nouha Dziri, Tim Althoff, Yejin Choi:
Position: A Roadmap to Pluralistic Alignment. - Andong Tan, Fengtao Zhou, Hao Chen:
Post-hoc Part-Prototype Networks. - Jiacheng Zhu, Kristjan H. Greenewald, Kimia Nadjahi, Haitz Sáez de Ocáriz Borde, Rickard Brüel Gabrielsson, Leshem Choshen, Marzyeh Ghassemi, Mikhail Yurochkin, Justin Solomon:
Asymmetry in Low-Rank Adapters of Foundation Models. - Jiexi Yan, Zhihui Yin, Chenghao Xu, Cheng Deng, Heng Huang:
Retrieval Across Any Domains via Large-scale Pre-trained Model. - Brahma S. Pavse, Matthew Zurek, Yudong Chen, Qiaomin Xie, Josiah P. Hanna:
Learning to Stabilize Online Reinforcement Learning in Unbounded State Spaces. - Nikita Gushchin, Sergei Kholkin, Evgeny Burnaev, Alexander Korotin:
Light and Optimal Schrödinger Bridge Matching. - Yongshuo Zong, Tingyang Yu, Ruchika Chavhan, Bingchen Zhao, Timothy M. Hospedales:
Fool Your (Vision and) Language Model with Embarrassingly Simple Permutations. - Isabel Chien, Wessel P. Bruinsma, Javier González Hernández, Richard E. Turner:
Safe Exploration in Dose Finding Clinical Trials with Heterogeneous Participants. - Ben Finkelshtein, Xingyue Huang, Michael M. Bronstein, Ismail Ilkan Ceylan:
Cooperative Graph Neural Networks. - Anastasios Nikolas Angelopoulos, Rina Barber, Stephen Bates:
Online conformal prediction with decaying step sizes. - Nikos Zarifis, Puqian Wang, Ilias Diakonikolas, Jelena Diakonikolas:
Robustly Learning Single-Index Models via Alignment Sharpness. - Xuyang Zhao, Huiyuan Wang, Weiran Huang, Wei Lin:
A Statistical Theory of Regularization-Based Continual Learning. - Pranav Agarwal, Sheldon Andrews, Samira Ebrahimi Kahou:
Learning to Play Atari in a World of Tokens. - Antoine Ledent, Rodrigo Alves:
Generalization Analysis of Deep Non-linear Matrix Completion. - Asterios Tsiourvas, Wei Sun, Georgia Perakis, Pin-Yu Chen, Yada Zhu:
Learning Optimal Projection for Forecast Reconciliation of Hierarchical Time Series. - Ivan Marisca, Cesare Alippi, Filippo Maria Bianchi:
Graph-based Forecasting with Missing Data through Spatiotemporal Downsampling. - Yao Mu, Junting Chen, Qinglong Zhang, Shoufa Chen, Qiaojun Yu, Chongjian Ge, Runjian Chen, Zhixuan Liang, Mengkang Hu, Chaofan Tao, Peize Sun, Haibao Yu, Chao Yang, Wenqi Shao, Wenhai Wang, Jifeng Dai, Yu Qiao, Mingyu Ding, Ping Luo:
RoboCodeX: Multimodal Code Generation for Robotic Behavior Synthesis. - Brian R. Bartoldson, James Diffenderfer, Konstantinos Parasyris, Bhavya Kailkhura:
Adversarial Robustness Limits via Scaling-Law and Human-Alignment Studies. - Ilker Demirel, Ahmed M. Alaa, Anthony Philippakis, David A. Sontag:
Prediction-powered Generalization of Causal Inferences. - Hancheng Min, René Vidal:
Can Implicit Bias Imply Adversarial Robustness? - Dung Nguyen, Anil Kumar S. Vullikanti:
Differentially private exact recovery for stochastic block models. - Chung-En Sun, Sicun Gao, Tsui-Wei Weng:
Breaking the Barrier: Enhanced Utility and Robustness in Smoothed DRL Agents. - Randall Balestriero, Romain Cosentino, Sarath Shekkizhar:
Characterizing Large Language Model Geometry Helps Solve Toxicity Detection and Generation. - Yegor Tkachenko:
Position: Enforced Amnesia as a Way to Mitigate the Potential Risk of Silent Suffering in the Conscious AI. - Yijun Wan, Melih Barsbey, Abdellatif Zaidi, Umut Simsekli:
Implicit Compressibility of Overparametrized Neural Networks Trained with Heavy-Tailed SGD. - Panagiotis Koromilas, Giorgos Bouritsas, Theodoros Giannakopoulos, Mihalis Nicolaou, Yannis Panagakis:
Bridging Mini-Batch and Asymptotic Analysis in Contrastive Learning: From InfoNCE to Kernel-Based Losses. - Kuang-Huei Lee, Xinyun Chen, Hiroki Furuta, John F. Canny, Ian Fischer:
A Human-Inspired Reading Agent with Gist Memory of Very Long Contexts. - Alexandre Hayderi, Amin Saberi, Ellen Vitercik, Anders Wikum:
MAGNOLIA: Matching Algorithms via GNNs for Online Value-to-go Approximation. - Helen Qu, Sang Michael Xie:
Connect Later: Improving Fine-tuning for Robustness with Targeted Augmentations. - Jocelin Su, Nan Liu, Yanbo Wang, Joshua B. Tenenbaum, Yilun Du:
Compositional Image Decomposition with Diffusion Models. - Lars Veefkind, Gabriele Cesa:
A Probabilistic Approach to Learning the Degree of Equivariance in Steerable CNNs. - Xunpeng Huang, Difan Zou, Hanze Dong, Yian Ma, Tong Zhang:
Faster Sampling via Stochastic Gradient Proximal Sampler. - Hongming Zhang, Tongzheng Ren, Chenjun Xiao, Dale Schuurmans, Bo Dai:
Provable Representation with Efficient Planning for Partially Observable Reinforcement Learning. - Yanran Wang, Qiuchen Qian, David Boyle:
Probabilistic Constrained Reinforcement Learning with Formal Interpretability. - Ankit Pratap Singh, Namrata Vaswani:
Byzantine Resilient and Fast Federated Few-Shot Learning. - Shentao Yang, Tianqi Chen, Mingyuan Zhou:
A Dense Reward View on Aligning Text-to-Image Diffusion with Preference. - Chen Jin, Ryutaro Tanno, Amrutha Saseendran, Tom Diethe, Philip Teare:
An Image is Worth Multiple Words: Discovering Object Level Concepts using Multi-Concept Prompt Learning. - Matthew Niedoba, Dylan Green, Saeid Naderiparizi, Vasileios Lioutas, Jonathan Wilder Lavington, Xiaoxuan Liang, Yunpeng Liu, Ke Zhang, Setareh Dabiri, Adam Scibior, Berend Zwartsenberg, Frank Wood:
Nearest Neighbour Score Estimators for Diffusion Generative Models. - Jules Berman, Benjamin Peherstorfer:
CoLoRA: Continuous low-rank adaptation for reduced implicit neural modeling of parameterized partial differential equations. - Konstantinos Ameranis, Adela Frances DePavia, Lorenzo Orecchia, Erasmo Tani:
Fast Algorithms for Hypergraph PageRank with Applications to Semi-Supervised Learning. - Yang Cai, Argyris Oikonomou, Weiqiang Zheng:
Accelerated Algorithms for Constrained Nonconvex-Nonconcave Min-Max Optimization and Comonotone Inclusion. - Yudan Wang, Yue Wang, Yi Zhou, Shaofeng Zou:
Non-Asymptotic Analysis for Single-Loop (Natural) Actor-Critic with Compatible Function Approximation. - Shubhanshu Shekhar, Aaditya Ramdas:
Reducing sequential change detection to sequential estimation. - Santiago R. Balseiro, Kshipra Bhawalkar, Zhe Feng, Haihao Lu, Vahab Mirrokni, Balasubramanian Sivan, Di Wang:
A Field Guide for Pacing Budget and ROS Constraints. - Gautam Machiraju, Alexander Derry, Arjun D. Desai, Neel Guha, Amir-Hossein Karimi, James Zou, Russ B. Altman, Christopher Ré, Parag Mallick:
Prospector Heads: Generalized Feature Attribution for Large Models & Data. - Vinu Sankar Sadasivan, Shoumik Saha, Gaurang Sriramanan, Priyatham Kattakinda, Atoosa Malemir Chegini, Soheil Feizi:
Fast Adversarial Attacks on Language Models In One GPU Minute. - Le Yu, Xinde Li, Pengfei Zhang, Zhentong Zhang, Fir Dunkin:
Enabling Few-Shot Learning with PID Control: A Layer Adaptive Optimizer. - Simin Fan, Matteo Pagliardini, Martin Jaggi:
DOGE: Domain Reweighting with Generalization Estimation. - Katherine Crowson, Stefan Andreas Baumann, Alex Birch, Tanishq Mathew Abraham, Daniel Z. Kaplan, Enrico Shippole:
Scalable High-Resolution Pixel-Space Image Synthesis with Hourglass Diffusion Transformers. - Snir Hordan, Tal Amir, Nadav Dym:
Weisfeiler Leman for Euclidean Equivariant Machine Learning. - Hongkang Li, Meng Wang, Tengfei Ma, Sijia Liu, Zaixi Zhang, Pin-Yu Chen:
What Improves the Generalization of Graph Transformers? A Theoretical Dive into the Self-attention and Positional Encoding. - Paul Mattes, Rainer Schlosser, Ralf Herbrich:
Hieros: Hierarchical Imagination on Structured State Space Sequence World Models. - Foteini Strati, Sara McAllister, Amar Phanishayee, Jakub Tarnawski, Ana Klimovic:
DéjàVu: KV-cache Streaming for Fast, Fault-tolerant Generative LLM Serving. - Serina Chang, Frederic Koehler, Zhaonan Qu, Jure Leskovec, Johan Ugander:
Inferring Dynamic Networks from Marginals with Iterative Proportional Fitting. - Pol Puigdemont, Stratis Skoulakis, Grigorios Chrysos, Volkan Cevher:
Learning to Remove Cuts in Integer Linear Programming. - Che Liu, Zhongwei Wan, Cheng Ouyang, Anand Shah, Wenjia Bai, Rossella Arcucci:
Zero-Shot ECG Classification with Multimodal Learning and Test-time Clinical Knowledge Enhancement. - Guneykan Ozgul, Xiantao Li, Mehrdad Mahdavi, Chunhao Wang:
Stochastic Quantum Sampling for Non-Logconcave Distributions and Estimating Partition Functions. - Antonio Orvieto, Soham De, Caglar Gulcehre, Razvan Pascanu, Samuel L. Smith:
Universality of Linear Recurrences Followed by Non-linear Projections: Finite-Width Guarantees and Benefits of Complex Eigenvalues. - Samuel Greydanus, Dmitry Kobak:
Scaling Down Deep Learning with MNIST-1D. - Andi Peng, Yuying Sun, Tianmin Shu, David Abel:
Pragmatic Feature Preferences: Learning Reward-Relevant Preferences from Human Input. - Darya Biparva, Donatello Materassi:
Incorporating Information into Shapley Values: Reweighting via a Maximum Entropy Approach. - Changming Xu, Gagandeep Singh:
Robust Universal Adversarial Perturbations. - Samyadeep Basu, Keivan Rezaei, Priyatham Kattakinda, Vlad I. Morariu, Nanxuan Zhao, Ryan A. Rossi, Varun Manjunatha, Soheil Feizi:
On Mechanistic Knowledge Localization in Text-to-Image Generative Models. - Gustaf Ahdritz, Tian Qin, Nikhil Vyas, Boaz Barak, Benjamin L. Edelman:
Distinguishing the Knowable from the Unknowable with Language Models. - Shirin Shoushtari, Jiaming Liu, Edward P. Chandler, M. Salman Asif, Ulugbek S. Kamilov:
Prior Mismatch and Adaptation in PnP-ADMM with a Nonconvex Convergence Analysis. - Kishaan Jeeveswaran, Elahe Arani, Bahram Zonooz:
Gradual Divergence for Seamless Adaptation: A Novel Domain Incremental Learning Method. - Luke Melas-Kyriazi, Iro Laina, Christian Rupprecht, Natalia Neverova, Andrea Vedaldi, Oran Gafni, Filippos Kokkinos:
IM-3D: Iterative Multiview Diffusion and Reconstruction for High-Quality 3D Generation. - Stephanie Eckman, Barbara Plank, Frauke Kreuter:
Position: Insights from Survey Methodology can Improve Training Data. - Emily Jin, Michael M. Bronstein, Ismail Ilkan Ceylan, Matthias Lanzinger:
Homomorphism Counts for Graph Neural Networks: All About That Basis. - Bruno Gavranovic, Paul Lessard, Andrew Joseph Dudzik, Tamara von Glehn, João Guilherme Madeira Araújo, Petar Velickovic:
Position: Categorical Deep Learning is an Algebraic Theory of All Architectures. - Pavel E. Dvurechensky, Mathias Staudigl:
Barrier Algorithms for Constrained Non-Convex Optimization. - Mim van den Bos, Jacobus G. M. van der Linden, Emir Demirovic:
Piecewise Constant and Linear Regression Trees: An Optimal Dynamic Programming Approach. - Srinath Mahankali, Zhang-Wei Hong, Ayush Sekhari, Alexander Rakhlin, Pulkit Agrawal:
Random Latent Exploration for Deep Reinforcement Learning. - Ioannis Caragiannis, Evi Micha, Jannik Peters:
Can a Few Decide for Many? The Metric Distortion of Sortition. - Jiayi Deng, Xiaodong Yang, Jun Yu, Jun Liu, Zhaiming Shen, Danyang Huang, Huimin Cheng:
Network Tight Community Detection. - Shervin Khalafi, Saurabh Sihag, Alejandro Ribeiro:
Neural Tangent Kernels Motivate Cross-Covariance Graphs in Neural Networks. - Mengxiao Zhang, Yuheng Zhang, Haipeng Luo, Paul Mineiro:
Efficient Contextual Bandits with Uninformed Feedback Graphs. - Renhao Zhang, Haotian Fu, Yilin Miao, George Konidaris:
Model-based Reinforcement Learning for Parameterized Action Spaces. - Simon Vary, Pierre Ablin, Bin Gao, Pierre-Antoine Absil:
Optimization without Retraction on the Random Generalized Stiefel Manifold. - Joon Suk Huh, Kirthevasan Kandasamy:
Nash Incentive-compatible Online Mechanism Learning via Weakly Differentially Private Online Learning. - Ziyi Chen, Heng Huang:
Accelerated Policy Gradient for s-rectangular Robust MDPs with Large State Spaces. - Jonathan Svirsky, Ofir Lindenbaum:
Interpretable Deep Clustering for Tabular Data. - Ana-Andreea Stoica, Vivian Y. Nastl, Moritz Hardt:
Causal Inference from Competing Treatments. - Taejong Joo, Diego Klabjan:
IW-GAE: Importance weighted group accuracy estimation for improved calibration and model selection in unsupervised domain adaptation. - Ali AhmadiTeshnizi, Wenzhi Gao, Madeleine Udell:
OptiMUS: Scalable Optimization Modeling with (MI)LP Solvers and Large Language Models. - Jean-Marie Lemercier, Simon Rouard, Jade Copet, Yossi Adi, Alexandre Défossez:
An Independence-promoting Loss for Music Generation with Language Models. - Andrey Bryutkin, Jiahao Huang, Zhongying Deng, Guang Yang, Carola-Bibiane Schönlieb, Angelica I. Avilés-Rivero:
HAMLET: Graph Transformer Neural Operator for Partial Differential Equations. - Nicholas Crispino, Kyle Montgomery, Fankun Zeng, Dawn Song, Chenguang Wang:
Agent Instructs Large Language Models to be General Zero-Shot Reasoners. - Dominik Fuchsgruber, Tom Wollschläger, Bertrand Charpentier, Antonio Oroz, Stephan Günnemann:
Uncertainty for Active Learning on Graphs. - Jin Hwa Lee, Stefano Sarao Mannelli, Andrew M. Saxe:
Why Do Animals Need Shaping? A Theory of Task Composition and Curriculum Learning. - Saya Higuchi, Sebastian Kairat, Sander M. Bohté, Sebastian Otte:
Balanced Resonate-and-Fire Neurons. - Mitchell Black, Zhengchao Wan, Gal Mishne, Amir Nayyeri, Yusu Wang:
Comparing Graph Transformers via Positional Encodings. - Yuezhou Hu, Kang Zhao, Weiyu Huang, Jianfei Chen, Jun Zhu:
Accelerating Transformer Pre-training with 2:4 Sparsity. - Muru Zhang, Ofir Press, William Merrill, Alisa Liu, Noah A. Smith:
How Language Model Hallucinations Can Snowball. - Lichuan Xiang, Lukasz Dudziak, Mohamed S. Abdelfattah, Abhinav Mehrotra, Nicholas Donald Lane, Hongkai Wen:
Towards Neural Architecture Search through Hierarchical Generative Modeling. - Shikai Qiu, Andres Potapczynski, Marc Anton Finzi, Micah Goldblum, Andrew Gordon Wilson:
Compute Better Spent: Replacing Dense Layers with Structured Matrices. - Rasmus Kjær Høier, Christopher Zach:
Two Tales of Single-Phase Contrastive Hebbian Learning. - Pratik Bhowal, Achint Soni, Sirisha Rambhatla:
Why do Variational Autoencoders Really Promote Disentanglement? - Luis Scoccola, Siddharth Setlur, David Loiseaux, Mathieu Carrière, Steve Oudot:
Differentiability and Optimization of Multiparameter Persistent Homology. - Yi Cai, Gerhard Wunder:
On Gradient-like Explanation under a Black-box Setting: When Black-box Explanations Become as Good as White-box. - Sriram Ganapathi Subramanian, Guiliang Liu, Mohammed Elmahgiubi, Kasra Rezaee, Pascal Poupart:
Confidence Aware Inverse Constrained Reinforcement Learning. - Mingju Liu, Yingjie Li, Jiaqi Yin, Zhiru Zhang, Cunxi Yu:
Differentiable Combinatorial Scheduling at Scale. - Brett Daley, Martha White, Marlos C. Machado:
Averaging n-step Returns Reduces Variance in Reinforcement Learning. - Zhi-Yi Chin, Chieh-Ming Jiang, Ching-Chun Huang, Pin-Yu Chen, Wei-Chen Chiu:
Prompting4Debugging: Red-Teaming Text-to-Image Diffusion Models by Finding Problematic Prompts. - Pengwei Xing, Songtao Lu, Han Yu:
Federated Neuro-Symbolic Learning. - Fabian Falck, Ziyu Wang, Christopher C. Holmes:
Is In-Context Learning in Large Language Models Bayesian? A Martingale Perspective. - Giuseppe Paolo, Jonas Gonzalez-Billandon, Balázs Kégl:
Position: A Call for Embodied AI. - Stefan Horoi, Albert Manuel Orozco Camacho, Eugene Belilovsky, Guy Wolf:
Harmony in Diversity: Merging Neural Networks with Canonical Correlation Analysis. - Stefan Sylvius Wagner, Stefan Harmeling:
Just Cluster It: An Approach for Exploration in High-Dimensions using Clustering and Pre-Trained Representations. - Yancheng Wang, Ping Li, Yingzhen Yang:
Visual Transformer with Differentiable Channel Selection: An Information Bottleneck Inspired Approach. - Ahmet Alacaoglu, Donghwan Kim, Stephen J. Wright:
Revisiting Inexact Fixed-Point Iterations for Min-Max Problems: Stochasticity and Structured Nonconvexity. - Roie Reshef, Kfir Yehuda Levy:
Private and Federated Stochastic Convex Optimization: Efficient Strategies for Centralized Systems. - Satyapriya Krishna, Chirag Agarwal, Himabindu Lakkaraju:
Understanding the Effects of Iterative Prompting on Truthfulness. - Tiago da Silva, Luiz Max Carvalho, Amauri H. Souza, Samuel Kaski, Diego Mesquita:
Embarrassingly Parallel GFlowNets. - Giseung Park, Woohyeon Byeon, Seongmin Kim, Elad Havakuk, Amir Leshem, Youngchul Sung:
The Max-Min Formulation of Multi-Objective Reinforcement Learning: From Theory to a Model-Free Algorithm. - Kolby Nottingham, Bodhisattwa Prasad Majumder, Bhavana Dalvi Mishra, Sameer Singh, Peter Clark, Roy Fox:
Skill Set Optimization: Reinforcing Language Model Behavior via Transferable Skills. - Xiwen Chen, Peijie Qiu, Wenhui Zhu, Huayu Li, Hao Wang, Aristeidis Sotiras, Yalin Wang, Abolfazl Razi:
TimeMIL: Advancing Multivariate Time Series Classification via a Time-aware Multiple Instance Learning. - Yibo Jiang, Goutham Rajendran, Pradeep Kumar Ravikumar, Bryon Aragam, Victor Veitch:
On the Origins of Linear Representations in Large Language Models. - Hantian Ding, Zijian Wang, Giovanni Paolini, Varun Kumar, Anoop Deoras, Dan Roth, Stefano Soatto:
Fewer Truncations Improve Language Modeling. - Jordan Dotzel, Yuzong Chen, Bahaa Kotb, Sushma Prasad, Gang Wu, Sheng Li, Mohamed S. Abdelfattah, Zhiru Zhang:
Learning from Students: Applying t-Distributions to Explore Accurate and Efficient Formats for LLMs. - William Merrill, Jackson Petty, Ashish Sabharwal:
The Illusion of State in State-Space Models. - Samuel McCauley, Benjamin Moseley, Aidin Niaparast, Shikha Singh:
Incremental Topological Ordering and Cycle Detection with Predictions. - Joshua R. Loftus:
Position: The Causal Revolution Needs Scientific Pragmatism. - Fangzhao Zhang, Mert Pilanci:
Riemannian Preconditioned LoRA for Fine-Tuning Foundation Models. - Ilias Diakonikolas, Daniel Kane, Sushrut Karmalkar, Ankit Pensia, Thanasis Pittas:
Robust Sparse Estimation for Gaussians with Optimal Error under Huber Contamination. - Kai Xu, Hong Ge:
Practical Hamiltonian Monte Carlo on Riemannian Manifolds via Relativity Theory. - Flavio Martinelli, Berfin Simsek, Wulfram Gerstner, Johanni Brea:
Expand-and-Cluster: Parameter Recovery of Neural Networks. - Théo Guyard, Cédric Herzet, Clément Elvira, Ayse-Nur Arslan:
A New Branch-and-Bound Pruning Framework for ℓ0-Regularized Problems. - Luka Ribar, Ivan Chelombiev, Luke Hudlass-Galley, Charlie Blake, Carlo Luschi, Douglas Orr:
SparQ Attention: Bandwidth-Efficient LLM Inference. - Hao Wang, Kaifeng Yang, Michael Affenzeller:
Probability Distribution of Hypervolume Improvement in Bi-objective Bayesian Optimization. - Mohannad Elhamod, Anuj Karpatne:
Neuro-Visualizer: A Novel Auto-Encoder-Based Loss Landscape Visualization Method With an Application in Knowledge-Guided Machine Learning. - Niclas Alexander Göring, Florian Hess, Manuel Brenner, Zahra Monfared, Daniel Durstewitz:
Out-of-Domain Generalization in Dynamical Systems Reconstruction. - Shiyang Lai, Yujin Potter, Junsol Kim, Richard Zhuang, Dawn Song, James Evans:
Position: Evolving AI Collectives Enhance Human Diversity and Enable Self-Regulation. - Siyuan Guo, Cheng Deng, Ying Wen, Hechang Chen, Yi Chang, Jun Wang:
DS-Agent: Automated Data Science by Empowering Large Language Models with Case-Based Reasoning. - Iain Weissburg, Mehir Arora, Xinyi Wang, Liangming Pan, William Yang Wang:
Position: AI/ML Influencers Have a Place in the Academic Process. - Victor Prokhorov, Ivan Titov, N. Siddharth:
Autoencoding Conditional Neural Processes for Representation Learning. - Yu Luo, Tianying Ji, Fuchun Sun, Jianwei Zhang, Huazhe Xu, Xianyuan Zhan:
Offline-Boosted Actor-Critic: Adaptively Blending Optimal Historical Behaviors in Deep Off-Policy RL. - Aditya A. Ramesh, Kenny John Young, Louis Kirsch, Jürgen Schmidhuber:
Sequence Compression Speeds Up Credit Assignment in Reinforcement Learning. - Andrew H. Song, Richard J. Chen, Guillaume Jaume, Anurag J. Vaidya, Alexander S. Baras, Faisal Mahmood:
Multimodal Prototyping for cancer survival prediction. - Tianying Ji, Yu Luo, Fuchun Sun, Xianyuan Zhan, Jianwei Zhang, Huazhe Xu:
Seizing Serendipity: Exploiting the Value of Past Success in Off-Policy Actor-Critic. - Dennis Frauen, Valentyn Melnychuk, Stefan Feuerriegel:
Fair Off-Policy Learning from Observational Data. - Viktor Nilsson, Anirban Samaddar, Sandeep Madireddy, Pierre Nyquist:
REMEDI: Corrective Transformations for Improved Neural Entropy Estimation. - Wenxuan Ma, Shuang Li, Lincan Cai, Jingxuan Kang:
Learning Modality Knowledge Alignment for Cross-Modality Transfer. - Mengzhou Xia, Sadhika Malladi, Suchin Gururangan, Sanjeev Arora, Danqi Chen:
LESS: Selecting Influential Data for Targeted Instruction Tuning. - Zalan Fabian, Berk Tinaz, Mahdi Soltanolkotabi:
DiracDiffusion: Denoising and Incremental Reconstruction with Assured Data-Consistency. - Raymond Wang, Jaedong Hwang, Akhilan Boopathy, Ila R. Fiete:
Rapid Learning without Catastrophic Forgetting in the Morris Water Maze. - Haotian Fu, Pratyusha Sharma, Elias Stengel-Eskin, George Konidaris, Nicolas Le Roux, Marc-Alexandre Côté, Xingdi Yuan:
Language-guided Skill Learning with Temporal Variational Inference. - Heli Ben-Hamu, Omri Puny, Itai Gat, Brian Karrer, Uriel Singer, Yaron Lipman:
D-Flow: Differentiating through Flows for Controlled Generation. - Julian Coda-Forno, Marcel Binz, Jane X. Wang, Eric Schulz:
CogBench: a large language model walks into a psychology lab. - Shahrzad Haddadan, Cheng Xin, Jie Gao:
Optimally Improving Cooperative Learning in a Social Setting. - Boris van Breugel, Mihaela van der Schaar:
Position: Why Tabular Foundation Models Should Be a Research Priority. - K. R. Prajwal, Bowen Shi, Matthew Le, Apoorv Vyas, Andros Tjandra, Mahi Luthra, Baishan Guo, Huiyu Wang, Triantafyllos Afouras, David Kant, Wei-Ning Hsu:
MusicFlow: Cascaded Flow Matching for Text Guided Music Generation. - Hao Wu, Rasmus Pagh:
Profile Reconstruction from Private Sketches. - Blake Bordelon, Alexander B. Atanasov, Cengiz Pehlevan:
A Dynamical Model of Neural Scaling Laws. - Hitesh Tulsiani, David M. Chan, Shalini Ghosh, Garima Lalwani, Prabhat Pandey, Ankish Bansal, Sri Garimella, Ariya Rastrow, Björn Hoffmeister:
An Efficient Self-Learning Framework For Interactive Spoken Dialog Systems. - Frank Permenter, Chenyang Yuan:
Interpreting and Improving Diffusion Models from an Optimization Perspective. - Wojciech Kotlowski, Marek Wydmuch, Erik Schultheis, Rohit Babbar, Krzysztof Dembczynski:
A General Online Algorithm for Optimizing Complex Performance Metrics. - Huiwon Jang, Dongyoung Kim, Junsu Kim, Jinwoo Shin, Pieter Abbeel, Younggyo Seo:
Visual Representation Learning with Stochastic Frame Prediction. - Bilgehan Sel, Ahmad Al-Tawaha, Vanshaj Khattar, Ruoxi Jia, Ming Jin:
Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models. - Yongshuo Zong, Ondrej Bohdal, Tingyang Yu, Yongxin Yang, Timothy M. Hospedales:
Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models. - Semih Cantürk, Renming Liu, Olivier Lapointe-Gagné, Vincent Létourneau, Guy Wolf, Dominique Beaini, Ladislav Rampásek:
Graph Positional and Structural Encoder. - Lorenzo Bardone, Sebastian Goldt:
Sliding Down the Stairs: How Correlated Latent Variables Accelerate Learning with Neural Networks. - Francesca-Zhoufan Li, Ava P. Amini, Yisong Yue, Kevin K. Yang, Alex Xijie Lu:
Feature Reuse and Scaling: Understanding Transfer Learning with Protein Language Models. - Kaan Ozkara, Can Karakus, Parameswaran Raman, Mingyi Hong, Shoham Sabach, Branislav Kveton, Volkan Cevher:
MADA: Meta-Adaptive Optimizers Through Hyper-Gradient Descent. - Meng Cao, Mehdi Fatemi, Jackie C. K. Cheung, Samira Shabanian:
Successor Features for Efficient Multi-Subject Controlled Text Generation. - Sharath Chandra Raparthy, Eric Hambro, Robert Kirk, Mikael Henaff, Roberta Raileanu:
Generalization to New Sequential Decision Making Tasks with In-Context Learning. - Aaron David Tucker, Kianté Brantley, Adam Cahall, Thorsten Joachims:
Coactive Learning for Large Language Models using Implicit User Feedback. - Jiawei Fang, Haishan Song, Chengxu Zuo, Xiaoxia Gao, Xiaowei Chen, Shihui Guo, Yipeng Qin:
SuDA: Support-based Domain Adaptation for Sim2Real Hinge Joint Tracking with Flexible Sensors. - Edoardo Cetin, Andrea Tirinzoni, Matteo Pirotta, Alessandro Lazaric, Yann Ollivier, Ahmed Touati:
Simple Ingredients for Offline Reinforcement Learning. - Daniel Shalam, Simon Korman:
The Balanced-Pairwise-Affinities Feature Transform. - Zakhar Shumaylov, Jeremy Budd, Subhadip Mukherjee, Carola-Bibiane Schönlieb:
Weakly Convex Regularisers for Inverse Problems: Convergence of Critical Points and Primal-Dual Optimisation. - Yilun Xu, Gabriele Corso, Tommi S. Jaakkola, Arash Vahdat, Karsten Kreis:
DisCo-Diff: Enhancing Continuous Diffusion Models with Discrete Latents. - Patrick Altmeyer, Andrew M. Demetriou, Antony Bartlett, Cynthia C. S. Liem:
Position: Stop Making Unscientific AGI Performance Claims. - Sergio Calvo-Ordoñez, Matthieu Meunier, Francesco Piatti, Yuantao Shi:
Partially Stochastic Infinitely Deep Bayesian Neural Networks. - JoonHo Lee, Jae Oh Woo, Juree Seok, Parisa Hassanzadeh, Wooseok Jang, JuYoun Son, Sima Didari, Baruch Gutow, Heng Hao, Hankyu Moon, Wenjun Hu, Yeong-Dae Kwon, Taehee Lee, Seungjai Min:
Improving Instruction Following in Language Models through Proxy-Based Uncertainty Estimation. - Haozhe Ji, Cheng Lu, Yilin Niu, Pei Ke, Hongning Wang, Jun Zhu, Jie Tang, Minlie Huang:
Towards Efficient Exact Optimization of Language Model Alignment. - Anton Rodomanov, Ali Kavis, Yongtao Wu, Kimon Antonakopoulos, Volkan Cevher:
Universal Gradient Methods for Stochastic Convex Optimization. - Pierre Clavier, Tom Huix, Alain Oliviero Durmus:
VITS: Variational Inference Thompson Sampling for contextual bandits. - Yilun Du, Jiayuan Mao, Joshua B. Tenenbaum:
Learning Iterative Reasoning through Energy Diffusion. - Chiraag Kaushik, Ran Liu, Chi-Heng Lin, Amrit Khera, Matthew Y. Jin, Wenrui Ma, Vidya Muthukumar, Eva L. Dyer:
Balanced Data, Imbalanced Spectra: Unveiling Class Disparities with Spectral Imbalance. - Maohao Shen, Subhro Das, Kristjan H. Greenewald, Prasanna Sattigeri, Gregory W. Wornell, Soumya Ghosh:
Thermometer: Towards Universal Calibration for Large Language Models. - Noel Loo, Alaa Maalouf, Ramin M. Hasani, Mathias Lechner, Alexander Amini, Daniela Rus:
Large Scale Dataset Distillation with Domain Shift. - Vincent Micheli, Eloi Alonso, François Fleuret:
Efficient World Models with Context-Aware Tokenization. - Alexis Ayme, Claire Boyer, Aymeric Dieuleveut, Erwan Scornet:
Random features models: a way to study the success of naive imputation. - Alessio Gravina, Giulio Lovisotto, Claudio Gallicchio, Davide Bacciu, Claas Grohnfeldt:
Long Range Propagation on Continuous-Time Dynamic Graphs. - Jonathan Scott, Áine Cahill:
Improved Modelling of Federated Datasets using Mixtures-of-Dirichlet-Multinomials. - Nikita Tsoy, Nikola Konstantinov:
Simplicity Bias of Two-Layer Networks beyond Linearly Separable Data. - Johan Samir Obando-Ceron, Aaron C. Courville, Pablo Samuel Castro:
In value-based deep reinforcement learning, a pruned network is a good network. - Raef Bassily, Corinna Cortes, Anqi Mao, Mehryar Mohri:
Differentially Private Domain Adaptation with Theoretical Guarantees. - Jakub Svoboda, Suguman Bansal, Krishnendu Chatterjee:
Reinforcement Learning from Reachability Specifications: PAC Guarantees with Expected Conditional Distance. - Ashish Hooda, Mihai Christodorescu, Miltiadis Allamanis, Aaron Wilson, Kassem Fawaz, Somesh Jha:
Do Large Code Models Understand Programming Concepts? Counterfactual Analysis for Code Predicates. - Alexis Chevalier, Jiayi Geng, Alexander Wettig, Howard Chen, Sebastian Mizera, Toni Annala, Max Jameson Aragon, Arturo Rodríguez Fanlo, Simon Frieder, Simon Machado, Akshara Prabhakar, Ellie Thieu, Jiachen T. Wang, Zirui Wang, Xindi Wu, Mengzhou Xia, Wenhan Xia, Jiatong Yu, Junjie Zhu, Zhiyong Jason Ren, Sanjeev Arora, Danqi Chen:
Language Models as Science Tutors. - Artur Back de Luca, Kimon Fountoulakis:
Simulation of Graph Algorithms with Looped Transformers. - Geoffrey Cideron, Sertan Girgin, Mauro Verzetti, Damien Vincent, Matej Kastelic, Zalán Borsos, Brian McWilliams, Victor Ungureanu, Olivier Bachem, Olivier Pietquin, Matthieu Geist, Léonard Hussenot, Neil Zeghidour, Andrea Agostinelli:
MusicRL: Aligning Music Generation to Human Preferences. - Jon Saad-Falcon, Daniel Y. Fu, Simran Arora, Neel Guha, Christopher Ré:
Benchmarking and Building Long-Context Retrieval Models with LoCo and M2-BERT. - Vivienne Huiling Wang, Tinghuai Wang, Wenyan Yang, Joni-Kristian Kämäräinen, Joni Pajarinen:
Probabilistic Subgoal Representations for Hierarchical Reinforcement Learning. - Danru Xu, Dingling Yao, Sébastien Lachapelle, Perouz Taslakian, Julius von Kügelgen, Francesco Locatello, Sara Magliacane:
A Sparsity Principle for Partially Observable Causal Representation Learning. - Fares Fourati, Mohamed-Slim Alouini, Vaneet Aggarwal:
Federated Combinatorial Multi-Agent Multi-Armed Bandits. - Tanmay Gautam, Youngsuk Park, Hao Zhou, Parameswaran Raman, Wooseok Ha:
Variance-reduced Zeroth-Order Methods for Fine-Tuning Language Models. - Hao Zhao, Maksym Andriushchenko, Francesco Croce, Nicolas Flammarion:
Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning. - Yosra Marnissi, Maxime Leiber:
A Unified View of FANOVA: A Comprehensive Bayesian Framework for Component Selection and Estimation. - Phong C. H. Nguyen, Xinlun Cheng, Shahab Azarfar, Pradeep K. Seshadri, Yen Thi Nguyen, Munho Kim, Sanghun Choi, H. S. Udaykumar, Stephen Baek:
PARCv2: Physics-aware Recurrent Convolutional Neural Networks for Spatiotemporal Dynamics Modeling. - Yiwei Lu, Matthew Y. R. Yang, Zuoqiu Liu, Gautam Kamath, Yaoliang Yu:
Disguised Copyright Infringement of Latent Diffusion Models. - Guanhua Zhang, Moritz Hardt:
Inherent Trade-Offs between Diversity and Stability in Multi-Task Benchmarks. - Jinuk Kim, Marwa El Halabi, Mingi Ji, Hyun Oh Song:
LayerMerge: Neural Network Depth Compression through Layer Pruning and Merging. - Simone Bombari, Marco Mondelli:
How Spurious Features are Memorized: Precise Analysis for Random and NTK Features. - Yejia Liu, Jianyi Yang, Pengfei Li, Tongxin Li, Shaolei Ren:
Building Socially-Equitable Public Models. - Longxin Kou, Fei Ni, Yan Zheng, Jinyi Liu, Yifu Yuan, Zibin Dong, Jianye Hao:
KISA: A Unified Keyframe Identifier and Skill Annotator for Long-Horizon Robotics Demonstrations. - Shashwat Singh, Shauli Ravfogel, Jonathan Herzig, Roee Aharoni, Ryan Cotterell, Ponnurangam Kumaraguru:
Representation Surgery: Theory and Practice of Affine Steering. - Yihan Du, Anna Winnicki, Gal Dalal, Shie Mannor, R. Srikant:
Exploration-Driven Policy Optimization in RLHF: Theoretical Insights on Efficient Data Utilization. - Adam Lechowicz, Nicolas Christianson, Bo Sun, Noman Bashir, Mohammad Hajiesmaili, Adam Wierman, Prashant J. Shenoy:
Chasing Convex Functions with Long-term Constraints. - Justin M. Baker, Shih-Hsin Wang, Tommaso de Fernex, Bao Wang:
An Explicit Frame Construction for Normalizing 3D Point Clouds. - Cornelius Schröder, Jakob H. Macke:
Simultaneous identification of models and parameters of scientific simulators. - Andrea Cini, Danilo P. Mandic, Cesare Alippi:
Graph-based Time Series Clustering for End-to-End Hierarchical Forecasting. - Simone Bombari, Marco Mondelli:
Towards Understanding the Word Sensitivity of Attention Layers: A Study via Random Features. - Wenlin Chen, Mingtian Zhang, Brooks Paige, José Miguel Hernández-Lobato, David Barber:
Diffusive Gibbs Sampling. - Ignavier Ng, Xinshuai Dong, Haoyue Dai, Biwei Huang, Peter Spirtes, Kun Zhang:
Score-Based Causal Discovery of Latent Variable Causal Models. - Marek Petrik, Guy Tennenholtz, Mohammad Ghavamzadeh:
Bayesian Regret Minimization in Offline Bandits. - Noémie Jaquier, Leonel Rozo, Miguel González Duque, Viacheslav Borovitskiy, Tamim Asfour:
Bringing Motion Taxonomies to Continuous Domains via GPLVM on Hyperbolic manifolds. - Gil Kur, Pedro Abdalla, Pierre Bizeul, Fanny Yang:
Minimum Norm Interpolation Meets The Local Theory of Banach Spaces. - Alan L. S. Matias, César Lincoln C. Mattos, João Paulo Pordeus Gomes, Diego Mesquita:
Amortized Variational Deep Kernel Learning. - Yanru Qu, Keyue Qiu, Yuxuan Song, Jingjing Gong, Jiawei Han, Mingyue Zheng, Hao Zhou, Wei-Ying Ma:
MolCRAFT: Structure-Based Drug Design in Continuous Parameter Space. - Ruijie Zheng, Yongyuan Liang, Xiyao Wang, Shuang Ma, Hal Daumé III, Huazhe Xu, John Langford, Praveen Palanisamy, Kalyan Shankar Basu, Furong Huang:
Premier-TACO is a Few-Shot Policy Learner: Pretraining Multitask Representation via Temporal Action-Driven Contrastive Loss. - Weijia Xu, Andrzej Banburski, Nebojsa Jojic:
Reprompting: Automated Chain-of-Thought Prompt Inference Through Gibbs Sampling. - Yu Chen, Xiangcheng Zhang, Siwei Wang, Longbo Huang:
Provable Risk-Sensitive Distributional Reinforcement Learning with General Function Approximation. - Jingwei Sun, Ziyue Xu, Hongxu Yin, Dong Yang, Daguang Xu, Yudong Liu, Zhixu Du, Yiran Chen, Holger R. Roth:
FedBPT: Efficient Federated Black-box Prompt Tuning for Large Language Models. - Yangyi Li, Aobo Chen, Wei Qian, Chenxu Zhao, Divya Lidder, Mengdi Huai:
Data Poisoning Attacks against Conformal Prediction. - Mehran Poursoltani, Erick Delage, Angelos Georghiou:
Robust Data-driven Prescriptiveness Optimization. - Wanyun Xie, Fabian Latorre, Kimon Antonakopoulos, Thomas Pethick, Volkan Cevher:
Improving SAM Requires Rethinking its Optimization Formulation. - Kai Yang, Jan Ackermann, Zhenyu He, Guhao Feng, Bohang Zhang, Yunzhen Feng, Qiwei Ye, Di He, Liwei Wang:
Do Efficient Transformers Really Save Computation? - Baohong Li, Haoxuan Li, Anpeng Wu, Minqin Zhu, Shiyuan Peng, Qingyu Cao, Kun Kuang:
A Generative Approach for Treatment Effect Estimation under Collider Bias: From an Out-of-Distribution Perspective. - Yuji Hirono, Akinori Tanaka, Kenji Fukushima:
Understanding Diffusion Models by Feynman's Path Integral. - Wenzhi Gao, Chunlin Sun, Chenyu Xue, Yinyu Ye:
Decoupling Learning and Decision-Making: Breaking the O(√T) Barrier in Online Resource Allocation with First-Order Methods. - Sehoon Kim, Coleman Hooper, Amir Gholami, Zhen Dong, Xiuyu Li, Sheng Shen, Michael W. Mahoney, Kurt Keutzer:
SqueezeLLM: Dense-and-Sparse Quantization. - Xinyi Ni, Guanlin Liu, Lifeng Lai:
Risk-Sensitive Reward-Free Reinforcement Learning with CVaR. - Zhihan Liu, Hao Hu, Shenao Zhang, Hongyi Guo, Shuqi Ke, Boyi Liu, Zhaoran Wang:
Reason for Future, Act for Now: A Principled Architecture for Autonomous LLM Agents. - Ezgi Korkmaz:
Understanding and Diagnosing Deep Reinforcement Learning. - Yinan Feng, Yinpeng Chen, Peng Jin, Shihang Feng, Youzuo Lin:
Auto-Linear Phenomenon in Subsurface Imaging. - Fahim Tajwar, Anikait Singh, Archit Sharma, Rafael Rafailov, Jeff Schneider, Tengyang Xie, Stefano Ermon, Chelsea Finn, Aviral Kumar:
Preference Fine-Tuning of LLMs Should Leverage Suboptimal, On-Policy Data. - Grigorii Veviurko, Wendelin Boehmer, Mathijs de Weerdt:
To the Max: Reinventing Reward in Reinforcement Learning. - Badih Ghazi, Pritish Kamath, Ravi Kumar, Pasin Manurangsi, Adam Sealfon:
Individualized Privacy Accounting via Subsampling with Applications in Combinatorial Optimization. - Chengyue Gong, Adam R. Klivans, James Loy, Tianlong Chen, Qiang Liu, Daniel Jesus Diaz:
Evolution-Inspired Loss Functions for Protein Representation Learning. - Yazhe Li, Jörg Bornschein, Ting Chen:
Denoising Autoregressive Representation Learning. - Elvis Dohmatob, Yunzhen Feng, Pu Yang, François Charton, Julia Kempe:
A Tale of Tails: Model Collapse as a Change of Scaling Laws. - Jianhao He, Chengchang Liu, Xutong Liu, Lvzhou Li, John C. S. Lui:
Quantum Algorithm for Online Exp-concave Optimization. - Natasha Butt, Blazej Manczak, Auke J. Wiggers, Corrado Rainone, David W. Zhang, Michaël Defferrard, Taco Cohen:
CodeIt: Self-Improving Language Models with Prioritized Hindsight Replay. - Tuan Anh Le, Pavel Sountsov, Matthew Douglas Hoffman, Ben Lee, Brian Patton, Rif A. Saurous:
Robust Inverse Graphics via Probabilistic Inference. - Xingyi Zhao, Depeng Xu, Shuhan Yuan:
Defense against Backdoor Attack on Pre-trained Language Models via Head Pruning and Attention Normalization. - Andrew Jesson, Chris Lu, Gunshi Gupta, Nicolas Beltran-Velez, Angelos Filos, Jakob Nicolaus Foerster, Yarin Gal:
ReLU to the Rescue: Improve Your On-Policy Actor-Critic with Positive Advantages. - Piotr Nawrot, Adrian Lancucki, Marcin Chochowski, David Tarjan, Edoardo M. Ponti:
Dynamic Memory Compression: Retrofitting LLMs for Accelerated Inference. - Lujing Zhang, Aaron Roth, Linjun Zhang:
Fair Risk Control: A Generalized Framework for Calibrating Multi-group Fairness Risks. - Tonghe Zhang, Yu Chen, Longbo Huang:
Provably Efficient Partially Observable Risk-sensitive Reinforcement Learning with Hindsight Observation. - Xi Chen, Zhewen Hou, Christopher A. Metzler, Arian Maleki, Shirin Jalali:
Bagged Deep Image Prior for Recovering Images in the Presence of Speckle Noise. - Martina G. Vilas, Federico Adolfi, David Poeppel, Gemma Roig:
Position: An Inner Interpretability Framework for AI Inspired by Lessons from Cognitive Neuroscience. - Ching-Yun Ko, Pin-Yu Chen, Payel Das, Jeet Mohapatra, Luca Daniel:
What Would Gauss Say About Representations? Probing Pretrained Image Models using Synthetic Gaussian Benchmarks. - Sheng Yue, Jiani Liu, Xingyuan Hua, Ju Ren, Sen Lin, Junshan Zhang, Yaoxue Zhang:
How to Leverage Diverse Demonstrations in Offline Imitation Learning. - M. Saquib Sarfraz, Mei-Yen Chen, Lukas Layer, Kunyu Peng, Marios Koulakis:
Position: Quo Vadis, Unsupervised Time Series Anomaly Detection? - Paulina Tomaszewska, Przemyslaw Biecek:
Position: Do Not Explain Vision Models Without Context. - Sheng Yue, Xingyuan Hua, Ju Ren, Sen Lin, Junshan Zhang, Yaoxue Zhang:
OLLIE: Imitation Learning from Offline Pretraining to Online Finetuning. - Alessandro B. Palmarini, Christopher G. Lucas, N. Siddharth:
Bayesian Program Learning by Decompiling Amortized Knowledge. - Yongchang Hao, Yanshuai Cao, Lili Mou:
Flora: Low-Rank Adapters Are Secretly Gradient Compressors. - Gerardo Duran-Martin, Matías Altamirano, Alexander Y. Shestopaloff, Leandro Sánchez-Betancourt, Jeremias Knoblauch, Matt Jones, François-Xavier Briol, Kevin Patrick Murphy:
Outlier-robust Kalman Filtering through Generalised Bayes. - Fares Fourati, Vaneet Aggarwal, Mohamed-Slim Alouini:
Stochastic Q-learning for Large Discrete Action Spaces. - Megh Shukla, Mathieu Salzmann, Alexandre Alahi:
TIC-TAC: A Framework For Improved Covariance Estimation In Deep Heteroscedastic Regression. - Peter W. Chang, Leor Fishman, Seth Neel:
Feature Importance Disparities for Data Bias Investigations. - Johann Schmidt, Sebastian Stober:
Tilt your Head: Activating the Hidden Spatial-Invariance of Classifiers. - Samuel Lavoie, Polina Kirichenko, Mark Ibrahim, Mido Assran, Andrew Gordon Wilson, Aaron C. Courville, Nicolas Ballas:
Modeling Caption Diversity in Contrastive Vision-Language Pretraining. - Ekin Akyürek, Bailin Wang, Yoon Kim, Jacob Andreas:
In-Context Language Learning: Architectures and Algorithms. - Mark Vero, Mislav Balunovic, Martin T. Vechev:
CuTS: Customizable Tabular Synthetic Data Generation. - Asterios Tsiourvas, Georgia Perakis:
Overcoming the Optimizer's Curse: Obtaining Realistic Prescriptions from Neural Networks. - Tao Yu, Gaurav Gupta, Karthick Gopalswamy, Amith R. Mamidala, Hao Zhou, Jeffrey Huynh, Youngsuk Park, Ron Diamant, Anoop Deoras, Luke Huan:
Collage: Light-Weight Low-Precision Strategy for LLM Training. - Yuhui Li, Fangyun Wei, Chao Zhang, Hongyang Zhang:
EAGLE: Speculative Sampling Requires Rethinking Feature Uncertainty. - Puja Trivedi, Ryan A. Rossi, David Arbour, Tong Yu, Franck Dernoncourt, Sungchul Kim, Nedim Lipka, Namyong Park, Nesreen K. Ahmed, Danai Koutra:
Editing Partially Observable Networks via Graph Diffusion Models. - Kimia Nadjahi, Kristjan H. Greenewald, Rickard Brüel Gabrielsson, Justin Solomon:
Slicing Mutual Information Generalization Bounds for Neural Networks. - Boning Li, Zhixuan Fang, Longbo Huang:
RL-CFR: Improving Action Abstraction for Imperfect Information Extensive-Form Games with Reinforcement Learning. - Alex Kulesza, Ananda Theertha Suresh, Yuyan Wang:
Mean Estimation in the Add-Remove Model of Differential Privacy. - Davide Maran, Alberto Maria Metelli, Matteo Papini, Marcello Restelli:
No-Regret Reinforcement Learning in Smooth MDPs. - Jongho Park, Jaeseung Park, Zheyang Xiong, Nayoung Lee, Jaewoong Cho, Samet Oymak, Kangwook Lee, Dimitris Papailiopoulos:
Can Mamba Learn How To Learn? A Comparative Study on In-Context Learning Tasks. - Rui Yang, Xiaoman Pan, Feng Luo, Shuang Qiu, Han Zhong, Dong Yu, Jianshu Chen:
Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment. - Muhammed Emrullah Ildiz, Yixiao Huang, Yingcong Li, Ankit Singh Rawat, Samet Oymak:
From Self-Attention to Markov Models: Unveiling the Dynamics of Generative Transformers. - Huy Nguyen, Pedram Akbarian, Nhat Ho:
Is Temperature Sample Efficient for Softmax Gaussian Mixture of Experts? - Zhe Zhao, Pengkun Wang, Haibin Wen, Wei Xu, Song Lai, Qingfu Zhang, Yang Wang:
Two Fists, One Heart: Multi-Objective Optimization Based Strategy Fusion for Long-tailed Learning. - Konstantin Schürholt, Michael W. Mahoney, Damian Borth:
Towards Scalable and Versatile Weight Space Learning. - Jiayi Wang, Zhengling Qi, Raymond K. W. Wong:
A Fine-grained Analysis of Fitted Q-evaluation: Beyond Parametric Models. - Ali Zeynali, Shahin Kamali, Mohammad Hajiesmaili:
Robust Learning-Augmented Dictionaries. - Sebastian Loeschcke, Dan Wang, Christian Leth-Espensen, Serge J. Belongie, Michael J. Kastoryano, Sagie Benaim:
Coarse-To-Fine Tensor Trains for Compact Visual Representations. - Aviv Shamsian, Aviv Navon, David W. Zhang, Yan Zhang, Ethan Fetaya, Gal Chechik, Haggai Maron:
Improved Generalization of Weight Space Networks via Augmentations. - Federico Bianchi, Edoardo Zorzi, Alberto Castellini, Thiago D. Simão, Matthijs T. J. Spaan, Alessandro Farinelli:
Scalable Safe Policy Improvement for Factored Multi-Agent MDPs. - Junhua Zeng, Chao Li, Zhun Sun, Qibin Zhao, Guoxu Zhou:
tnGPS: Discovering Unknown Tensor Network Structure Search Algorithms via Large Language Models (LLMs). - Xuexin Chen, Ruichu Cai, Zhengting Huang, Yuxuan Zhu, Julien Horwood, Zhifeng Hao, Zijian Li, José Miguel Hernández-Lobato:
Feature Attribution with Necessity and Sufficiency via Dual-stage Perturbation Test for Causal Explanation. - Xiaowen Yang, Wenda Wei, Jie-Jing Shao, Yufeng Li, Zhi-Hua Zhou:
Analysis for Abductive Learning and Neural-Symbolic Reasoning Shortcuts. - Seth Nabarro, Mark van der Wilk, Andrew J. Davison:
Learning in Deep Factor Graphs with Gaussian Belief Propagation. - Kaituo Feng, Changsheng Li, Xiaolu Zhang, Jun Zhou, Ye Yuan, Guoren Wang:
Keypoint-based Progressive Chain-of-Thought Distillation for LLMs. - Gianluigi Lopardo, Frédéric Precioso, Damien Garreau:
Attention Meets Post-hoc Interpretability: A Mathematical Perspective. - Wei Hao, Daniel Mendoza, Rafael Mendes, Deepak Narayanan, Amar Phanishayee, Asaf Cidon, Junfeng Yang:
MGit: A Model Versioning and Management System. - Qiwei Tian, Chenhao Lin, Zhengyu Zhao, Qian Li, Chao Shen:
Collapse-Aware Triplet Decoupling for Adversarially Robust Image Retrieval. - Tom Huix, Anna Korba, Alain Oliviero Durmus, Eric Moulines:
Theoretical Guarantees for Variational Inference with Fixed-Variance Mixture of Gaussians. - Yuchen Wu, Minshuo Chen, Zihao Li, Mengdi Wang, Yuting Wei:
Theoretical insights for diffusion guidance: A case study for Gaussian mixture models. - William G. Powell, Hanbaek Lyu:
Stochastic Optimization with Arbitrary Recurrent Data Sampling. - Neta Shaul, Uriel Singer, Ricky T. Q. Chen, Matthew Le, Ali K. Thabet, Albert Pumarola, Yaron Lipman:
Bespoke Non-Stationary Solvers for Fast Sampling of Diffusion and Flow Models. - Weizhong Zhang, Zhiwei Zhang, Renjie Pi, Zhongming Jin, Yuan Gao, Jieping Ye, Kani Chen:
Efficient Denoising Diffusion via Probabilistic Masking. - Floris Holstege, Bram Wouters, Noud P. A. van Giersbergen, Cees Diks:
Removing Spurious Concepts from Neural Network Representations via Joint Subspace Estimation. - Zhongkai Hao, Chang Su, Songming Liu, Julius Berner, Chengyang Ying, Hang Su, Anima Anandkumar, Jian Song, Jun Zhu:
DPOT: Auto-Regressive Denoising Operator Transformer for Large-Scale PDE Pre-Training. - Tuan Pham, Stephan Mandt:
Neural NeRF Compression. - Jianhong Wang, Yang Li, Yuan Zhang, Wei Pan, Samuel Kaski:
Open Ad Hoc Teamwork with Cooperative Game Theory. - Shentao Qin, Yujie Yang, Yao Mu, Jie Li, Wenjun Zou, Jingliang Duan, Shengbo Eben Li:
Feasible Reachable Policy Iteration. - Johannes Zenn, Robert Bamler:
Differentiable Annealed Importance Sampling Minimizes The Jensen-Shannon Divergence Between Initial and Target Distribution. - Luis Müller, Christopher Morris:
Aligning Transformers with Weisfeiler-Leman. - Yuchen Yang, Yingdong Shi, Cheems Wang, Xiantong Zhen, Yuxuan Shi, Jun Xu:
Reducing Fine-Tuning Memory Overhead by Approximate and Memory-Sharing Backpropagation. - Laura Marusich, Jonathan Z. Bakdash, Yan Zhou, Murat Kantarcioglu:
Using AI Uncertainty Quantification to Improve Human Decision-Making. - Nabeel Seedat, Nicolas Huynh, Boris van Breugel, Mihaela van der Schaar:
Curated LLM: Synergy of LLMs and Data Curation for tabular augmentation in low-data regimes. - Pietro Sittoni, Francesco Tudisco:
Subhomogeneous Deep Equilibrium Models. - Siqiao Mu, Diego Klabjan:
On the Second-Order Convergence of Biased Policy Gradient Algorithms. - Tam Le, Truyen Nguyen, Kenji Fukumizu:
Generalized Sobolev Transport for Probability Measures on a Graph. - Nikolas Kirschstein, Yixuan Sun:
The Merit of River Network Topology for Neural Flood Forecasting. - Sherry Yang, Jacob C. Walker, Jack Parker-Holder, Yilun Du, Jake Bruce, André Barreto, Pieter Abbeel, Dale Schuurmans:
Position: Video as the New Language for Real-World Decision Making. - Liang Zhang, Bingcong Li, Kiran Koshy Thekumparampil, Sewoong Oh, Niao He:
DPZero: Private Fine-Tuning of Language Models without Backpropagation. - Andi Nika, Debmalya Mandal, Parameswaran Kamalaruban, Georgios Tzannetos, Goran Radanovic, Adish Singla:
Reward Model Learning vs. Direct Policy Optimization: A Comparative Analysis of Learning from Human Preferences. - Kaiping Zheng, Horng Ruey Chua, Melanie Herschel, H. V. Jagadish, Beng Chin Ooi, James Wei Luen Yip:
Exploiting Negative Samples: A Catalyst for Cohort Discovery in Healthcare Analytics. - Benjamin Dupuis, Umut Simsekli:
Generalization Bounds for Heavy-Tailed SDEs through the Fractional Fokker-Planck Equation. - Dominik Schröder, Daniil Dmitriev, Hugo Cui, Bruno Loureiro:
Asymptotics of Learning with Deep Structured (Random) Features. - Xiaochuan Gong, Jie Hao, Mingrui Liu:
A Nearly Optimal Single Loop Algorithm for Stochastic Bilevel Optimization under Unbounded Smoothness. - Sabine Muzellec, Thomas Fel, Victor Boutin, Léo Andéol, Rufin VanRullen, Thomas Serre:
Saliency strikes back: How filtering out high frequencies improves white-box explanations. - Qiufeng Wang, Xu Yang, Haokun Chen, Xin Geng:
Vision Transformers as Probabilistic Expansion from Learngene. - Thanh Nguyen-Tang, Raman Arora:
On The Statistical Complexity of Offline Decision-Making. - Ruixiang Sun, Hongyu Zang, Xin Li, Riashat Islam:
Learning Latent Dynamic Robust Representations for World Models. - Zichuan Liu, Tianchun Wang, Jimeng Shi, Xu Zheng, Zhuomin Chen, Lei Song, Wenqian Dong, Jayantha Obeysekera, Farhad Shirani, Dongsheng Luo:
TimeX++: Learning Time-Series Explanations with Information Bottleneck. - Zifeng Zhuang, Dengyun Peng, Jinxin Liu, Ziqi Zhang, Donglin Wang:
Reinformer: Max-Return Sequence Modeling for Offline RL. - Xinyu Zhou, Raef Bassily:
Differentially Private Worst-group Risk Minimization. - Ying Li, Zhidi Lin, Feng Yin, Michael Minyi Zhang:
Preventing Model Collapse in Gaussian Process Latent Variable Models. - Gezheng Xu, Qi Chen, Charles Ling, Boyu Wang, Changjian Shui:
Intersectional Unfairness Discovery. - Ziheng Sun, Xudong Wang, Chris Ding, Jicong Fan:
Learning Graph Representation via Graph Entropy Maximization. - Michael Valancius, Max Lennon, Junier Oliva:
Acquisition Conditioned Oracle for Nongreedy Active Feature Acquisition. - Lukas Gruber, Markus Holzleitner, Johannes Lehner, Sepp Hochreiter, Werner Zellinger:
Overcoming Saturation in Density Ratio Estimation by Iterated Regularization. - Kihyun Kim, Jiawei Zhang, Asuman E. Ozdaglar, Pablo A. Parrilo:
A Unified Linear Programming Framework for Offline Reward Learning from Human Demonstrations and Feedback. - Zhuomin Chen, Jiaxing Zhang, Jingchao Ni, Xiaoting Li, Yuchen Bian, Md Mezbahul Islam, Ananda Mondal, Hua Wei, Dongsheng Luo:
Generating In-Distribution Proxy Graphs for Explaining Graph Neural Networks. - Loek van Rossem, Andrew M. Saxe:
When Representations Align: Universality in Representation Learning Dynamics. - João A. Cândido Ramos, Lionel Blondé, Naoya Takeishi, Alexandros Kalousis:
Mimicking Better by Matching the Approximate Action Distribution. - Georgios Kaissis, Stefan Kolek, Borja Balle, Jamie Hayes, Daniel Rueckert:
Beyond the Calibration Point: Mechanism Comparison in Differential Privacy. - Divya Nori, Wengong Jin:
RNAFlow: RNA Structure & Sequence Design via Inverse Folding-Based Flow Matching. - Chenhao Lu, Ruizhe Shi, Yuyao Liu, Kaizhe Hu, Simon Shaolei Du, Huazhe Xu:
Rethinking Transformers in Solving POMDPs. - Haohui Wang, Yuzhen Mao, Yujun Yan, Yaoqing Yang, Jianhui Sun, Kevin Choi, Balaji Veeramani, Alison Hu, Edward Bowen, Tyler Cody, Dawei Zhou:
EvoluNet: Advancing Dynamic Non-IID Transfer Learning on Graphs. - Albane Ruaud, Cansu Sancaktar, Marco Bagatella, Christoph Ratzke, Georg Martius:
Modelling Microbial Communities with Graph Neural Networks. - Evripidis Bampis, Bruno Escoffier, Michalis Xefteris:
Parsimonious Learning-Augmented Approximations for Dense Instances of NP-hard Problems. - Zhipeng Liang, Xiaoteng Ma, José H. Blanchet, Jun Yang, Jiheng Zhang, Zhengyuan Zhou:
Single-Trajectory Distributionally Robust Reinforcement Learning. - Clément Lalanne, Sébastien Gadat:
Privately Learning Smooth Distributions on the Hypercube by Projections. - Akshay K. Jagadish, Julian Coda-Forno, Mirko Thalmann, Eric Schulz, Marcel Binz:
Human-like Category Learning by Injecting Ecological Priors from Large Language Models into Neural Networks. - Bernd Frauenknecht, Artur Eisele, Devdutt Subhasish, Friedrich Solowjow, Sebastian Trimpe:
Trust the Model Where It Trusts Itself - Model-Based Actor-Critic with Uncertainty-Aware Rollout Adaption. - Gonçalo Mordido, Pranshu Malviya, Aristide Baratin, Sarath Chandar:
Lookbehind-SAM: k steps back, 1 step forward. - Tom Fischer, Pascal Peter, Joachim Weickert, Eddy Ilg:
Neuroexplicit Diffusion Models for Inpainting of Optical Flow Fields. - Alan Nawzad Amin, Andrew Gordon Wilson:
Scalable and Flexible Causal Discovery with an Efficient Test for Adjacency. - Yilun Du, Leslie Pack Kaelbling:
Position: Compositional Generative Modeling: A Single Model is Not All You Need. - Shikhar Mohan, Deepak Saini, Anshul Mittal, Sayak Ray Chowdhury, Bhawna Paliwal, Jian Jiao, Manish Gupta, Manik Varma:
OAK: Enriching Document Representations using Auxiliary Knowledge for Extreme Classification. - Ross M. Clarke, José Miguel Hernández-Lobato:
Studying K-FAC Heuristics by Viewing Adam through a Second-Order Lens. - Mohit Sharma, Amit Deshpande:
How Far Can Fairness Constraints Help Recover From Biased Data? - Tara Akhound-Sadegh, Jarrid Rector-Brooks, Avishek Joey Bose, Sarthak Mittal, Pablo Lemos, Cheng-Hao Liu, Marcin Sendera, Siamak Ravanbakhsh, Gauthier Gidel, Yoshua Bengio, Nikolay Malkin, Alexander Tong:
Iterated Denoising Energy Matching for Sampling from Boltzmann Densities. - Mikkel Jordahn, Pablo M. Olmos:
Decoupling Feature Extraction and Classification Layers for Calibrated Neural Networks. - Alan Tendler Leibel Bacellar, Zachary Susskind, Maurício Breternitz Jr., Eugene John, Lizy Kurian John, Priscila Machado Vieira Lima, Felipe M. G. França:
Differentiable Weightless Neural Networks. - Tobia Boschi, Francesca Bonin, Rodrigo Ordonez-Hurtado, Alessandra Pascale, Jonathan P. Epperlein:
A New Computationally Efficient Algorithm to solve Feature Selection for Functional Data Classification in High-dimensional Spaces. - Alessandro Mastrototaro, Jimmy Olsson:
Online Variational Sequential Monte Carlo. - Qiang Fu, Ashia Camage Wilson:
Mean-field Underdamped Langevin Dynamics and its Spacetime Discretization. - Ann-Kristin Becker, Oana Dumitrasc, Klaus Broelemann:
Standardized Interpretable Fairness Measures for Continuous Risk Scores. - Meyer Scetbon, Joel Jennings, Agrin Hilmkil, Cheng Zhang, Chao Ma:
A Fixed-Point Approach for Causal Generative Modeling. - Junyu Huang, Qilong Feng, Ziyun Huang, Jinhui Xu, Jianxin Wang:
Near-Linear Time Approximation Algorithms for k-means with Outliers. - Ghadi S. Al Hajj, Aliaksandr Hubin, Chakravarthi Kanduri, Milena Pavlovic, Knut Dagestad Rand, Michael Widrich, Anne H. Schistad Solberg, Victor Greiff, Johan Pensar, Günter Klambauer, Geir Kjetil Sandve:
Incorporating probabilistic domain knowledge into deep multiple instance learning. - Gokul Swamy, Christoph Dann, Rahul Kidambi, Steven Wu, Alekh Agarwal:
A Minimaximalist Approach to Reinforcement Learning from Human Feedback. - Jiale Zhao, Wanru Zhuang, Jia Song, Yaqi Li, Shuqi Lu:
Pre-Training Protein Bi-level Representation Through Span Mask Strategy On 3D Protein Chains. - Massimo Bini, Karsten Roth, Zeynep Akata, Anna Khoreva:
ETHER: Efficient Finetuning of Large-Scale Models with Hyperplane Reflections. - Pengyi Li, Yan Zheng, Hongyao Tang, Xian Fu, Jianye Hao:
EvoRainbow: Combining Improvements in Evolutionary Reinforcement Learning for Policy Search. - Chaochao Chen, Jiaming Zhang, Yuyuan Li, Zhongxuan Han:
One for All: A Universal Generator for Concept Unlearnability via Multi-Modal Alignment. - Lucas Fabian Naumann, Jannik Irmai, Shengxian Zhao, Bjoern Andres:
Box Facets and Cut Facets of Lifted Multicut Polytopes. - Christopher Harker, Aditya Bhaskara:
Convergence Guarantees for the DeepWalk Embedding on Block Models. - Chen Zhang, Steven Tin Sui Luo, Jason Chun Lok Li, Yik-Chung Wu, Ngai Wong:
Nonparametric Teaching of Implicit Neural Representations. - Lirong Wu, Yijun Tian, Haitao Lin, Yufei Huang, Siyuan Li, Nitesh V. Chawla, Stan Z. Li:
Learning to Predict Mutational Effects of Protein-Protein Interactions by Microenvironment-aware Hierarchical Prompt Learning. - Jonathan Crabbé, Nicolas Huynh, Jan Stanczuk, Mihaela van der Schaar:
Time Series Diffusion in the Frequency Domain. - Ke Wang, Guillermo Ortiz-Jiménez, Rodolphe Jenatton, Mark Collier, Efi Kokiopoulou, Pascal Frossard:
Pi-DUAL: Using privileged information to distinguish clean from noisy labels. - Meng Xia, Jonathan Wilson, Benjamin Goldstein, Ricardo Henao:
Contrastive Learning for Clinical Outcome Prediction with Partial Data Sources. - Jannis Bulian, Mike S. Schäfer, Afra Amini, Heidi Lam, Massimiliano Ciaramita, Ben Gaiarin, Michelle Chen Huebscher, Christian Buck, Niels Mede, Markus Leippold, Nadine Strauß:
Assessing Large Language Models on Climate Information. - Kai Cui, Christian Fabian, Anam Tahir, Heinz Koeppl:
Major-Minor Mean Field Multi-Agent Reinforcement Learning. - Jan Philipp Schneider, Mishal Fatima, Jovita Lukasik, Andreas Kolb, Margret Keuper, Michael Moeller:
Implicit Representations for Constrained Image Segmentation. - Wenhan Xian, Ziyi Chen, Heng Huang:
Delving into the Convergence of Generalized Smooth Minimax Optimization. - Baohong Li, Anpeng Wu, Ruoxuan Xiong, Kun Kuang:
Two-Stage Shadow Inclusion Estimation: An IV Approach for Causal Inference under Latent Confounding and Collider Bias. - Miltiadis Allamanis, Sheena Panthaplackel, Pengcheng Yin:
Unsupervised Evaluation of Code LLMs with Round-Trip Correctness. - Shenghua Wan, Ziyuan Chen, Le Gan, Shuai Feng, De-Chuan Zhan:
SeMOPO: Learning High-quality Model and Policy from Low-quality Offline Visual Datasets. - Songhua Liu, Xin Jin, Xingyi Yang, Jingwen Ye, Xinchao Wang:
StyDeSty: Min-Max Stylization and Destylization for Single Domain Generalization. - Shiqi Chen, Miao Xiong, Junteng Liu, Zhengxuan Wu, Teng Xiao, Siyang Gao, Junxian He:
In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation. - Feng Gao, Liangzhi Shi, Shenao Zhang, Zhaoran Wang, Yi Wu:
Adaptive-Gradient Policy Optimization: Enhancing Policy Learning in Non-Smooth Differentiable Simulations. - Alex James Chan, Hao Sun, Samuel Holt, Mihaela van der Schaar:
Dense Reward for Free in Reinforcement Learning from Human Feedback. - Paul S. Scotti, Mihir Tripathy, Cesar Torrico, Reese Kneeland, Tong Chen, Ashutosh Narang, Charan Santhirasegaran, Jonathan Xu, Thomas Naselaris, Kenneth A. Norman, Tanishq Mathew Abraham:
MindEye2: Shared-Subject Models Enable fMRI-To-Image With 1 Hour of Data. - Zixin Zhang, Fan Qi, Changsheng Xu:
Enhancing Storage and Computational Efficiency in Federated Multimodal Learning for Large-Scale Models. - Ganqu Cui, Lifan Yuan, Ning Ding, Guanming Yao, Bingxiang He, Wei Zhu, Yuan Ni, Guotong Xie, Ruobing Xie, Yankai Lin, Zhiyuan Liu, Maosong Sun:
ULTRAFEEDBACK: Boosting Language Models with Scaled AI Feedback. - Matthew Ashman, Cristiana Diaconu, Junhyuck Kim, Lakee Sivaraya, Stratis Markou, James Requeima, Wessel P. Bruinsma, Richard E. Turner:
Translation Equivariant Transformer Neural Processes. - Xuanfei Ren, Tianyuan Jin, Pan Xu:
Optimal Batched Linear Bandits. - Anselm Paulus, Georg Martius, Vít Musil:
LPGD: A General Framework for Backpropagation through Embedded Optimization Layers. - Nuoya Xiong, Zhaoran Wang, Zhuoran Yang:
A General Framework for Sequential Decision-Making under Adaptivity Constraints. - Yue Wu, Xidao Hu, Yongzhe Yuan, Xiaolong Fan, Maoguo Gong, Hao Li, Mingyang Zhang, Qiguang Miao, Wenping Ma:
PointMC: Multi-instance Point Cloud Registration based on Maximal Cliques. - Ning Xu, Yihao Hu, Congyu Qiao, Xin Geng:
Aligned Objective for Soft-Pseudo-Label Generation in Supervised Learning. - Pengdeng Li, Shuxin Li, Chang Yang, Xinrun Wang, Shuyue Hu, Xiao Huang, Hau Chan, Bo An:
Configurable Mirror Descent: Towards a Unification of Decision Making. - Andreas Opedal, Alessandro Stolfo, Haruki Shirakami, Ying Jiao, Ryan Cotterell, Bernhard Schölkopf, Abulhair Saparov, Mrinmaya Sachan:
Do Language Models Exhibit the Same Cognitive Biases in Problem Solving as Human Learners? - Filippo Leveni, Guilherme Weigert Cassales, Bernhard Pfahringer, Albert Bifet, Giacomo Boracchi:
Online Isolation Forest. - Alexander Rudikov, Vladimir Fanaskov, Ekaterina A. Muravleva, Yuri M. Laevsky, Ivan V. Oseledets:
Neural operators meet conjugate gradients: The FCG-NO method for efficient PDE solving. - Ekaterina Shumitskaya, Anastasia Antsiferova, Dmitriy S. Vatolin:
IOI: Invisible One-Iteration Adversarial Attack on No-Reference Image- and Video-Quality Metrics. - Nicola Novello, Andrea M. Tonello:
f-Divergence Based Classification: Beyond the Use of Cross-Entropy. - Hoang Phan, Andrew Gordon Wilson, Qi Lei:
Controllable Prompt Tuning For Balancing Group Distributional Robustness. - Anastasia Koloskova, Nikita Doikov, Sebastian U. Stich, Martin Jaggi:
On Convergence of Incremental Gradient for Non-convex Smooth Functions. - Anna Rogers, Sasha Luccioni:
Position: Key Claims in LLM Research Have a Long Tail of Footnotes. - Apoorva Nitsure, Youssef Mroueh, Mattia Rigotti, Kristjan H. Greenewald, Brian Belgodere, Mikhail Yurochkin, Jirí Navrátil, Igor Melnyk, Jarret Ross:
Risk Aware Benchmarking of Large Language Models. - Manh Cuong Dao, Phi Le Nguyen, Truong Thao Nguyen, Trong Nghia Hoang:
Boosting Offline Optimizers with Surrogate Sensitivity. - Jan Stanczuk, Georgios Batzolis, Teo Deveney, Carola-Bibiane Schönlieb:
Diffusion Models Encode the Intrinsic Dimension of Data Manifolds. - Rico Angell, Andrew McCallum:
Fast, Scalable, Warm-Start Semidefinite Programming with Spectral Bundling and Sketching. - Anahita Baninajjar, Ahmed Rezine, Amir Aminifar:
VNN: Verification-Friendly Neural Networks with Hard Robustness Guarantees. - Shiyin Tan, Dongyuan Li, Renhe Jiang, Ying Zhang, Manabu Okumura:
Community-Invariant Graph Contrastive Learning. - Vijay Keswani, Anay Mehrotra, L. Elisa Celis:
Fair Classification with Partial Feedback: An Exploration-Based Data Collection Approach. - David Martínez-Rubio, Christophe Roux, Sebastian Pokutta:
Convergence and Trade-Offs in Riemannian Gradient Descent and Riemannian Proximal Point. - Rodrigo Veiga, Anastasia Remizova, Nicolas Macris:
Stochastic Gradient Flow Dynamics of Test Risk and its Exact Solution for Weak Features. - Bowen Gao, Minsi Ren, Yuyan Ni, Yanwen Huang, Bo Qiang, Zhi-Ming Ma, Wei-Ying Ma, Yanyan Lan:
Rethinking Specificity in SBDD: Leveraging Delta Score and Energy-Guided Diffusion. - Seungbeom Hong, Ilmun Kim, Jun Song:
Enhancing Sufficient Dimension Reduction via Hellinger Correlation. - David W. Hogg, Soledad Villar:
Position: Is machine learning good or bad for the natural sciences? - Peng Wu, Ziyu Shen, Feng Xie, Zhongyao Wang, Chunchen Liu, Yan Zeng:
Policy Learning for Balancing Short-Term and Long-Term Rewards. - Shanka Subhra Mondal, Jonathan D. Cohen, Taylor Whittington Webb:
Slot Abstractors: Toward Scalable Abstract Visual Reasoning. - Chengde Qian, Mengyuan Wang, Haojie Ren, Changliang Zou:
ByMI: Byzantine Machine Identification with False Discovery Rate Control. - Yutong He, Jie Hu, Xinmeng Huang, Songtao Lu, Bin Wang, Kun Yuan:
Distributed Bilevel Optimization with Communication Compression. - Qunxi Zhu, Wei Lin:
Switched Flow Matching: Eliminating Singularities via Switching ODEs. - Yogesh Verma, Amauri H. Souza, Vikas Garg:
Topological Neural Networks go Persistent, Equivariant, and Continuous. - Francesco Emanuele Stradi, Jacopo Germano, Gianmarco Genalti, Matteo Castiglioni, Alberto Marchesi, Nicola Gatti:
Online Learning in CMDPs: Handling Stochastic and Adversarial Constraints. - Jianxiong Li, Jinliang Zheng, Yinan Zheng, Liyuan Mao, Xiao Hu, Sijie Cheng, Haoyi Niu, Jihao Liu, Yu Liu, Jingjing Liu, Ya-Qin Zhang, Xianyuan Zhan:
DecisionNCE: Embodied Multimodal Representations via Implicit Preference Learning. - Hanzhang Wang, Jiawen Zhang, Qingyuan Ma:
Exploring Intrinsic Dimension for Vision-Language Model Pruning. - Xenia Heilmann, Mattia Cerrato, Ernst Althaus:
Differentially Private Sum-Product Networks. - Xavier Suau, Pieter Delobelle, Katherine Metcalf, Armand Joulin, Nicholas Apostoloff, Luca Zappella, Pau Rodríguez:
Whispering Experts: Neural Interventions for Toxicity Mitigation in Language Models. - Juha Harviainen, Mikko Koivisto:
Estimating the Permanent by Nesting Importance Sampling. - Jiangfei Duan, Runyu Lu, Haojie Duanmu, Xiuhong Li, Xingcheng Zhang, Dahua Lin, Ion Stoica, Hao Zhang:
MuxServe: Flexible Spatial-Temporal Multiplexing for Multiple LLM Serving. - Yongchao Feng, Shiwei Li, Yingjie Gao, Ziyue Huang, Yanan Zhang, Qingjie Liu, Yunhong Wang:
DSD-DA: Distillation-based Source Debiasing for Domain Adaptive Object Detection. - Zhen-Yu Zhang, Siwei Han, Huaxiu Yao, Gang Niu, Masashi Sugiyama:
Generating Chain-of-Thoughts with a Pairwise-Comparison Approach to Searching for the Most Promising Intermediate Thought. - Kejing Lu, Chuan Xiao, Yoshiharu Ishikawa:
Probabilistic Routing for Graph-Based Approximate Nearest Neighbor Search. - Yu Du, Fangyun Wei, Hongyang Zhang:
AnyTool: Self-Reflective, Hierarchical Agents for Large-Scale API Calls. - Luca Ambrogioni:
Stationarity without mean reversion in improper Gaussian processes. - Jing Wang, Miao Yu, Peng Zhao, Zhi-Hua Zhou:
Learning with Adaptive Resource Allocation. - Steinar Laenen, He Sun:
Dynamic Spectral Clustering with Provable Approximation Guarantee. - Alexander Munteanu, Simon Omlor:
Turnstile ℓp leverage score sampling with applications. - Shang-Fu Chen, Hsiang-Chun Wang, Ming-Hao Hsu, Chun-Mao Lai, Shao-Hua Sun:
Diffusion Model-Augmented Behavioral Cloning. - Jacob Westerhout, TrungTin Nguyen, Xin Guo, Hien Duy Nguyen:
On the Asymptotic Distribution of the Minimum Empirical Risk. - Jianhao Yuan, Francesco Pinto, Adam Davies, Philip Torr:
Not Just Pretty Pictures: Toward Interventional Data Augmentation Using Text-to-Image Generators. - Aleksandra Nowak, Lukasz Gniecki, Filip Szatkowski, Jacek Tabor:
Sparser, Better, Deeper, Stronger: Improving Static Sparse Training with Exact Orthogonal Initialization. - Tommaso Salvatori, Luca Pinchetti, Amine M'Charrak, Beren Millidge, Thomas Lukasiewicz:
Predictive Coding beyond Correlations. - Haozhe Ma, Kuankuan Sima, Thanh Vinh Vo, Di Fu, Tze-Yun Leong:
Reward Shaping for Reinforcement Learning with An Assistant Reward Agent. - Dongha Kim, Jaesung Hwang, Jongjin Lee, Kunwoong Kim, Yongdai Kim:
ODIM: Outlier Detection via Likelihood of Under-Fitted Generative Models. - Haobo Xu, Yuchen Yan, Dingsu Wang, Zhe Xu, Zhichen Zeng, Tarek F. Abdelzaher, Jiawei Han, Hanghang Tong:
SLOG: An Inductive Spectral Graph Neural Network Beyond Polynomial Filter. - Francesco Pinto, Nathalie Rauschmayr, Florian Tramèr, Philip Torr, Federico Tombari:
Extracting Training Data From Document-Based VQA Models. - Thalles Silva, Hélio Pedrini, Adín Ramírez Rivera:
Learning from Memory: Non-Parametric Memory Augmented Self-Supervised Learning of Visual Features. - Yi Dong, Ronghui Mu, Gaojie Jin, Yi Qi, Jinwei Hu, Xingyu Zhao, Jie Meng, Wenjie Ruan, Xiaowei Huang:
Position: Building Guardrails for Large Language Models Requires Systematic Design. - Kumar Shubham, Aishwarya Jayagopal, Syed Mohammed Danish, Prathosh A. P., Vaibhav Rajan:
WISER: Weak Supervision and Supervised Representation Learning to Improve Drug Response Prediction in Cancer. - Johan Peralez, Aurélien Delage, Olivier Buffet, Jilles Steeve Dibangoye:
Solving Hierarchical Information-Sharing Dec-POMDPs: An Extensive-Form Game Approach. - Soochan Lee, Hyeonseong Jeon, Jaehyeon Son, Gunhee Kim:
Learning to Continually Learn with the Bayesian Principle. - Haonan Huang, Guoxu Zhou, Yanghang Zheng, Yuning Qiu, Andong Wang, Qibin Zhao:
Adversarially Robust Deep Multi-View Clustering: A Novel Attack and Defense Framework. - Kirill Brilliantov, Fedor Pavutnitskiy, Dmitry Pasechnyuk, German Magai:
Applying language models to algebraic topology: generating simplicial cycles using multi-labeling in Wu's formula. - Soo Yong Lee, Sunwoo Kim, Fanchen Bu, Jaemin Yoo, Jiliang Tang, Kijung Shin:
Feature Distribution on Graph Topology Mediates the Effect of Graph Convolution: Homophily Perspective. - Fangru Lin, Emanuele La Malfa, Valentin Hofmann, Elle Michelle Yang, Anthony G. Cohn, Janet B. Pierrehumbert:
Graph-enhanced Large Language Models in Asynchronous Plan Reasoning. - Adeel Pervez, Francesco Locatello, Stratis Gavves:
Mechanistic Neural Networks for Scientific Machine Learning. - Jiahan Li, Chaoran Cheng, Zuofan Wu, Ruihan Guo, Shitong Luo, Zhizhou Ren, Jian Peng, Jianzhu Ma:
Full-Atom Peptide Design based on Multi-modal Flow Matching. - Li Kevin Wenliang, Grégoire Delétang, Matthew Aitchison, Marcus Hutter, Anian Ruoss, Arthur Gretton, Mark Rowland:
Distributional Bellman Operators over Mean Embeddings. - Ming-Kun Xie, Jiahao Xiao, Pei Peng, Gang Niu, Masashi Sugiyama, Sheng-Jun Huang:
Counterfactual Reasoning for Multi-Label Image Classification via Patching-Based Training. - Jiawei Zhou, Linye Lyu, Daojing He, Yu Li:
RAUCA: A Novel Physical Adversarial Attack on Vehicle Detectors via Robust and Accurate Camouflage Generation. - Zheng Zhang, Na Wang, Ziqi Zhang, Yao Zhang, Tianyi Zhang, Jianwei Liu, Ye Wu:
GroupCover: A Secure, Efficient and Scalable Inference Framework for On-device Model Protection based on TEEs. - Ziniu Li, Tian Xu, Yushun Zhang, Zhihang Lin, Yang Yu, Ruoyu Sun, Zhi-Quan Luo:
ReMax: A Simple, Effective, and Efficient Reinforcement Learning Method for Aligning Large Language Models. - Xiaofan Bai, Chaoxiang He, Xiaojing Ma, Bin Benjamin Zhu, Hai Jin:
Intersecting-Boundary-Sensitive Fingerprinting for Tampering Detection of DNN Models. - Zenan Ling, Longbo Li, Zhanbo Feng, Yixuan Zhang, Feng Zhou, Robert C. Qiu, Zhenyu Liao:
Deep Equilibrium Models are Almost Equivalent to Not-so-deep Explicit Models for High-dimensional Gaussian Mixtures. - Shengzhuang Chen, Jihoon Tack, Yunqiao Yang, Yee Whye Teh, Jonathan Richard Schwarz, Ying Wei:
Unleashing the Power of Meta-tuning for Few-shot Generalization Through Sparse Interpolated Experts. - Aditya Chattopadhyay, Benjamin David Haeffele, René Vidal, Donald Geman:
Performance Bounds for Active Binary Testing with Information Maximization. - Mathieu Alain, So Takao, Brooks Paige, Marc Peter Deisenroth:
Gaussian Processes on Cellular Complexes. - Sebastian Sanokowski, Sepp Hochreiter, Sebastian Lehner:
A Diffusion Model Framework for Unsupervised Neural Combinatorial Optimization. - Bjørn Leth Møller, Christian Igel, Kristoffer Knutsen Wickstrøm, Jon Sporring, Robert Jenssen, Bulat Ibragimov:
Finding NEM-U: Explaining unsupervised representation learning through neural network generated explanation masks. - Mattie Fellows, Brandon Kaplowitz, Christian Schröder de Witt, Shimon Whiteson:
Bayesian Exploration Networks. - Ioana Bica, Anastasija Ilic, Matthias Bauer, Goker Erdogan, Matko Bosnjak, Christos Kaplanis, Alexey A. Gritsenko, Matthias Minderer, Charles Blundell, Razvan Pascanu, Jovana Mitrovic:
Improving fine-grained understanding in image-text pre-training. - Bowen Zheng, Da-Wei Zhou, Han-Jia Ye, De-Chuan Zhan:
Multi-layer Rehearsal Feature Augmentation for Class-Incremental Learning. - Jufang Duan, Wei Zheng, Yangzhou Du, Wenfa Wu, Haipeng Jiang, Hongsheng Qi:
MF-CLR: Multi-Frequency Contrastive Learning Representation for Time Series. - Jiashun Liu, Jianye Hao, Yi Ma, Shuyin Xia:
Unlock the Cognitive Generalization of Deep Reinforcement Learning via Granular Ball Representation. - Michael Eli Sander, Raja Giryes, Taiji Suzuki, Mathieu Blondel, Gabriel Peyré:
How do Transformers Perform In-Context Autoregressive Learning? - Shanglun Feng, Florian Tramèr:
Privacy Backdoors: Stealing Data with Corrupted Pretrained Models. - Felix Draxler, Stefan Wahl, Christoph Schnörr, Ullrich Köthe:
On the Universality of Volume-Preserving and Coupling-Based Normalizing Flows. - Denys Pushkin, Raphaël Berthier, Emmanuel Abbe:
On the Minimal Degree Bias in Generalization on the Unseen for non-Boolean Functions. - Linshan Hou, Ruili Feng, Zhongyun Hua, Wei Luo, Leo Yu Zhang, Yiming Li:
IBD-PSC: Input-level Backdoor Detection via Parameter-oriented Scaling Consistency. - Michael Beukman, Samuel Coward, Michael T. Matthews, Mattie Fellows, Minqi Jiang, Michael D. Dennis, Jakob Nicolaus Foerster:
Refining Minimax Regret for Unsupervised Environment Design. - Peijia Lin, Pin Chen, Rui Jiao, Qing Mo, Jianhuan Cen, Wenbing Huang, Yang Liu, Dan Huang, Yutong Lu:
Equivariant Diffusion for Crystal Structure Prediction. - Fan Yao, Chuanhao Li, Denis Nekipelov, Hongning Wang, Haifeng Xu:
Human vs. Generative AI in Content Creation Competition: Symbiosis or Conflict? - Idan Achituve, Idit Diamant, Arnon Netzer, Gal Chechik, Ethan Fetaya:
Bayesian Uncertainty for Gradient Aggregation in Multi-Task Learning. - Junxin Lu, Shiliang Sun:
CauDiTS: Causal Disentangled Domain Adaptation of Multivariate Time Series. - Sree Harsha Nelaturu, Nishaanth Kanna Ravichandran, Cuong Tran, Sara Hooker, Ferdinando Fioretto:
On The Fairness Impacts of Hardware Selection in Machine Learning. - Andi Han, Pratik Jawanpuria, Bamdev Mishra:
Riemannian coordinate descent algorithms on matrix manifolds. - Tong Wei, Zhen Mao, Zi-Hao Zhou, Yuanyu Wan, Min-Ling Zhang:
Learning Label Shift Correction for Test-Agnostic Long-Tailed Recognition. - Daniele Calandriello, Zhaohan Daniel Guo, Rémi Munos, Mark Rowland, Yunhao Tang, Bernardo Ávila Pires, Pierre Harvey Richemond, Charline Le Lan, Michal Valko, Tianqi Liu, Rishabh Joshi, Zeyu Zheng, Bilal Piot:
Human Alignment of Large Language Models through Online Preference Optimisation. - Yahong Zhang, Chenchen Fan, Donghui Chen, Congrui Li, Wenli Ouyang, Mingda Zhu, Junchi Yan:
MILP-FBGen: LP/MILP Instance Generation with Feasibility/Boundedness. - Vincent Cohen-Addad, Tommaso d'Orsi, Silvio Lattanzi, Rajai Nasser:
Multi-View Stochastic Block Models. - Matthias Weissenbacher, Rishabh Agarwal, Yoshinobu Kawahara:
SiT: Symmetry-invariant Transformers for Generalisation in Reinforcement Learning. - Jiawei Wang, Yuchen Zhang, Jiaxin Zou, Yan Zeng, Guoqiang Wei, Liping Yuan, Hang Li:
Boximator: Generating Rich and Controllable Motions for Video Synthesis. - Emadeldeen Eldele, Mohamed Ragab, Zhenghua Chen, Min Wu, Xiaoli Li:
TSLANet: Rethinking Transformers for Time Series Representation Learning. - Uri Gadot, Kaixin Wang, Navdeep Kumar, Kfir Yehuda Levy, Shie Mannor:
Bring Your Own (Non-Robust) Algorithm to Solve Robust MDPs by Estimating The Worst Kernel. - Yihao Xue, Ali Payani, Yu Yang, Baharan Mirzasoleiman:
Few-shot Adaptation to Distribution Shifts By Mixing Source and Target Embeddings. - Yifan Zhu, Yibo Miao, Yinpeng Dong, Xiao-Shan Gao:
Toward Availability Attacks in 3D Point Clouds. - Nadav Hallak, Kfir Yehuda Levy:
A Study of First-Order Methods with a Deterministic Relative-Error Gradient Oracle. - Jeremy Wayland, Corinna Coupette, Bastian Rieck:
Mapping the Multiverse of Latent Representations. - Nicolas Chopin, Francesca R. Crucinio, Anna Korba:
A connection between Tempering and Entropic Mirror Descent. - William Yang, Ye Zhu, Zhiwei Deng, Olga Russakovsky:
What is Dataset Distillation Learning? - Chenfeng Miao, Qingying Zhu, Minchuan Chen, Wei Hu, Zijian Li, Shaojun Wang, Jing Xiao:
DFlow: A Generative Model Combining Denoising AutoEncoder and Normalizing Flow for High Fidelity Waveform Generation. - Wenhan Yang, Jingdong Gao, Baharan Mirzasoleiman:
Better Safe than Sorry: Pre-training CLIP against Targeted Data Poisoning and Backdoor Attacks. - Henrik Schopmans, Pascal Friederich:
Conditional Normalizing Flows for Active Learning of Coarse-Grained Molecular Representations. - Riccardo De Santi, Federico Arangath Joseph, Noah Liniger, Mirco Mutti, Andreas Krause:
Geometric Active Exploration in Markov Decision Processes: the Benefit of Abstraction. - Kevin Doran, Marvin Seifert, Carola A. M. Yovanovich, Tom Baden:
Spike Distance Function as a Learning Objective for Spike Prediction. - Sumanth Varambally, Yian Ma, Rose Yu:
Discovering Mixtures of Structural Causal Models from Time Series Data. - William Toner, Luke Nicholas Darlow:
An Analysis of Linear Time Series Forecasting Models. - Qinshuo Liu, Zixin Wang, Xi'an Li, Xinyao Ji, Lei Zhang, Lin Liu, Zhonghua Liu:
DNA-SE: Towards Deep Neural-Nets Assisted Semiparametric Estimation. - Geewon Suh, Gyeongjo Hwang, Seokjun Kang, Doojin Baek, Mingeun Kang:
Enforcing Constraints in RNA Secondary Structure Predictions: A Post-Processing Framework Based on the Assignment Problem. - Xin Li, Jingdong Zhang, Qunxi Zhu, Chengli Zhao, Xue Zhang, Xiaojun Duan, Wei Lin:
From Fourier to Neural ODEs: Flow Matching for Modeling Complex Systems. - Tri Dao, Albert Gu:
Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality. - Xingyao Wang, Yangyi Chen, Lifan Yuan, Yizhe Zhang, Yunzhu Li, Hao Peng, Heng Ji:
Executable Code Actions Elicit Better LLM Agents. - Haitong Liu, Qiang Li, Hoi-To Wai:
Two-timescale Derivative Free Optimization for Performative Prediction with Markovian Data. - Carles Balsells Rodas, Yixin Wang, Yingzhen Li:
On the Identifiability of Switching Dynamical Systems. - Shuo Xie, Zhiyuan Li:
Implicit Bias of AdamW: ℓ∞-Norm Constrained Optimization. - Yuu Jinnai, Tetsuro Morimura, Ukyo Honda, Kaito Ariu, Kenshi Abe:
Model-Based Minimum Bayes Risk Decoding for Text Generation. - Hongzhan Yu, Sicun Gao:
Activation-Descent Regularization for Input Optimization of ReLU Networks. - Nir Greshler, David Ben-Eli, Carmel Rabinovitz, Gabi Guetta, Liran Gispan, Guy Zohar, Aviv Tamar:
A Bayesian Approach to Online Planning. - Victor Agostinelli, Sanghyun Hong, Lizhong Chen:
LeaPformer: Enabling Linear Transformers for Autoregressive and Simultaneous Tasks via Learned Proportions. - Angeliki Dimitriou, Maria Lymperaiou, Giorgos Filandrianos, Konstantinos Thomas, Giorgos Stamou:
Structure Your Data: Towards Semantic Graph Counterfactuals. - Yilong Wang, Haishan Ye, Guang Dai, Ivor W. Tsang:
Can Gaussian Sketching Converge Faster on a Preconditioned Landscape? - Yexin Li, Zhancun Mu, Siyuan Qi:
A Contextual Combinatorial Bandit Approach to Negotiation. - Hiroshi Morioka, Aapo Hyvärinen:
Causal Representation Learning Made Identifiable by Grouping of Observational Variables. - Yu-Chang Wu, Shen-Huan Lyu, Haopu Shang, Xiangyu Wang, Chao Qian:
Confidence-aware Contrastive Learning for Selective Classification. - Mark Kozdoba, Binyamin Perets, Shie Mannor:
Sobolev Space Regularised Pre Density Models. - Michael Psenka, Alejandro Escontrela, Pieter Abbeel, Yi Ma:
Learning a Diffusion Model Policy from Rewards via Q-Score Matching. - Yanfei Zhou, Lars Lindemann, Matteo Sesia:
Conformalized Adaptive Forecasting of Heterogeneous Trajectories. - Wenzhi Gao, Qi Deng:
Stochastic Weakly Convex Optimization beyond Lipschitz Continuity. - Xuran Meng, Difan Zou, Yuan Cao:
Benign Overfitting in Two-Layer ReLU Convolutional Neural Networks for XOR Data. - Fangyun Wei, Xi Chen, Lin Luo:
Rethinking Generative Large Language Model Evaluation for Semantic Comprehension. - Dheeraj Baby, Soumyabrata Pal:
Online Matrix Completion: A Collaborative Approach with Hott Items. - Dan Qiao, Yu-Xiang Wang:
Near-Optimal Reinforcement Learning with Self-Play under Adaptivity Constraints. - Yuwei Zhou, Zirui Pan, Xin Wang, Hong Chen, Haoyang Li, Yanwen Huang, Zhixiao Xiong, Fangzhou Xiong, Peiyang Xu, Shengnan Liu, Wenwu Zhu:
CurBench: Curriculum Learning Benchmark. - Lin Li, Yifei Wang, Chawin Sitawarin, Michael W. Spratling:
OODRobustBench: a Benchmark and Large-Scale Analysis of Adversarial Robustness under Distribution Shift. - Weili Nie, Sifei Liu, Morteza Mardani, Chao Liu, Benjamin Eckart, Arash Vahdat:
Compositional Text-to-Image Generation with Dense Blob Representations. - Hilal Asi, Vitaly Feldman, Jelani Nelson, Huy L. Nguyen, Kunal Talwar, Samson Zhou:
Private Vector Mean Estimation in the Shuffle Model: Optimal Rates Require Many Messages. - Xingrun Xing, Zheng Zhang, Ziyi Ni, Shitao Xiao, Yiming Ju, Siqi Fan, Yequan Wang, Jiajun Zhang, Guoqi Li:
SpikeLM: Towards General Spike-Driven Language Modeling via Elastic Bi-Spiking Mechanisms. - Yifan Gong, Zheng Zhan, Qing Jin, Yanyu Li, Yerlan Idelbayev, Xian Liu, Andrey Zharkov, Kfir Aberman, Sergey Tulyakov, Yanzhi Wang, Jian Ren:
E2GAN: Efficient Training of Efficient GANs for Image-to-Image Translation. - Xiaoqiang Lin, Xinyi Xu, Zhaoxuan Wu, See-Kiong Ng, Bryan Kian Hsiang Low:
Distributionally Robust Data Valuation. - Xinyi Wang, Alfonso Amayuelas, Kexun Zhang, Liangming Pan, Wenhu Chen, William Yang Wang:
Understanding Reasoning Ability of Language Models From the Perspective of Reasoning Paths Aggregation. - Shikun Feng, Yuyan Ni, Minghao Li, Yanwen Huang, Zhi-Ming Ma, Wei-Ying Ma, Yanyan Lan:
UniCorn: A Unified Contrastive Learning Approach for Multi-view Molecular Representation Learning. - Yicheng Liu, Jie Wen, Chengliang Liu, Xiaozhao Fang, Zuoyong Li, Yong Xu, Zheng Zhang:
Language-Driven Cross-Modal Classifier for Zero-Shot Multi-Label Image Recognition. - Ruijia Niu, Dongxia Wu, Kai Kim, Yian Ma, Duncan Watson-Parris, Rose Yu:
Multi-Fidelity Residual Neural Processes for Scalable Surrogate Modeling. - Jun-Yu Ma, Zhen-Hua Ling, Ningyu Zhang, Jia-Chen Gu:
Neighboring Perturbations of Knowledge Editing on Large Language Models. - Yang Wu, Yifan Zhang, Zhenxing Liang, Jian Cheng:
HGCN2SP: Hierarchical Graph Convolutional Network for Two-Stage Stochastic Programming. - Kyle Hsu, Jubayer Ibn Hamid, Kaylee Burns, Chelsea Finn, Jiajun Wu:
Tripod: Three Complementary Inductive Biases for Disentangled Representation Learning. - Shayan Kiyani, George J. Pappas, Hamed Hassani:
Conformal Prediction with Learned Features. - Yulun Zhang, Kai Zhang, Luc Van Gool, Martin Danelljan, Fisher Yu:
Lightweight Image Super-Resolution via Flexible Meta Pruning. - Bowen Tao, Xin-Chun Li, De-Chuan Zhan:
MLI Formula: A Nearly Scale-Invariant Solution with Noise Perturbation. - Yihao Zhang, Hangzhou He, Jingyu Zhu, Huanran Chen, Yifei Wang, Zeming Wei:
On the Duality Between Sharpness-Aware Minimization and Adversarial Training. - Litian Liu, Yao Qin:
Fast Decision Boundary based Out-of-Distribution Detector. - Jiaxu Wang, Junhao He, Ziyi Zhang, Mingyuan Sun, Jingkai Sun, Renjing Xu:
EvGGS: A Collaborative Learning Framework for Event-based Generalizable Gaussian Splatting. - Zhenqiao Song, Tinglin Huang, Lei Li, Wengong Jin:
SurfPro: Functional Protein Design Based on Continuous Surface. - Biao Liu, Ning Xu, Xiangyu Fang, Xin Geng:
Correlation-Induced Label Prior for Semi-Supervised Multi-Label Learning. - Kyungeun Lee, Ye Seul Sim, Hye-Seung Cho, Moonjung Eo, Suhee Yoon, Sanghyu Yoon, Woohyung Lim:
Binning as a Pretext Task: Improving Self-Supervised Learning in Tabular Domains. - Ossi Räisä, Joonas Jälkö, Antti Honkela:
Subsampling is not Magic: Why Large Batch Sizes Work for Differentially Private Stochastic Optimisation. - Shuying Huang, Mingyang Ren, Yong Yang, Xiaozheng Wang, Yingzhi Wei:
MFTN: A Multi-scale Feature Transfer Network Based on IMatchFormer for Hyperspectral Image Super-Resolution. - Yulun Zhang, Haotong Qin, Zixiang Zhao, Xianglong Liu, Martin Danelljan, Fisher Yu:
Flexible Residual Binarization for Image Super-Resolution. - Dongyang Liu, Renrui Zhang, Longtian Qiu, Siyuan Huang, Weifeng Lin, Shitian Zhao, Shijie Geng, Ziyi Lin, Peng Jin, Kaipeng Zhang, Wenqi Shao, Chao Xu, Conghui He, Junjun He, Hao Shao, Pan Lu, Yu Qiao, Hongsheng Li, Peng Gao:
SPHINX-X: Scaling Data and Parameters for a Family of Multi-modal Large Language Models. - Teng Xiao, Huaisheng Zhu, Zhiwei Zhang, Zhimeng Guo, Charu C. Aggarwal, Suhang Wang, Vasant G. Honavar:
Efficient Contrastive Learning for Fast and Accurate Inference on Graphs. - Joowon Lee, Hanbaek Lyu, Weixin Yao:
Supervised Matrix Factorization: Local Landscape Analysis and Applications. - Yuxuan Zhou, Mario Fritz, Margret Keuper:
MultiMax: Sparse and Multi-Modal Attention Learning. - Shi Fu, Sen Zhang, Yingjie Wang, Xinmei Tian, Dacheng Tao:
Towards Theoretical Understandings of Self-Consuming Generative Models. - Ruizhong Qiu, Hanghang Tong:
Gradient Compressed Sensing: A Query-Efficient Gradient Estimator for High-Dimensional Zeroth-Order Optimization. - Harshavardhan Adepu, Zhanpeng Zeng, Li Zhang, Vikas Singh:
FrameQuant: Flexible Low-Bit Quantization for Transformers. - Alekh Agarwal, Jian Qian, Alexander Rakhlin, Tong Zhang:
The Non-linear F-Design and Applications to Interactive Learning. - Kun Su, Xiulong Liu, Eli Shlizerman:
From Vision to Audio and Beyond: A Unified Model for Audio-Visual Representation and Generation. - Dong Lao, Yangchao Wu, Tian Yu Liu, Alex Wong, Stefano Soatto:
Sub-token ViT Embedding via Stochastic Resonance Transformers. - Ben Athiwaratkun, Sujan Kumar Gonugondla, Sanjay Krishna Gouda, Haifeng Qian, Hantian Ding, Qing Sun, Jun Wang, Jiacheng Guo, Liangfu Chen, Parminder Bhatia, Ramesh Nallapati, Sudipta Sengupta, Bing Xiang:
Bifurcated Attention for Single-Context Large-Batch Sampling. - Shujian Zhang, Korawat Tanwisuth, Chengyue Gong, Pengcheng He, Mingyuan Zhou:
Switchable Decision: Dynamic Neural Generation Networks. - Jiaqi Wang, Chenxu Zhao, Lingjuan Lyu, Quanzeng You, Mengdi Huai, Fenglong Ma:
Bridging Model Heterogeneity in Federated Learning via Uncertainty-based Asymmetrical Reciprocity Learning. - Jiancong Xiao, Jiawei Zhang, Zhi-Quan Luo, Asuman E. Ozdaglar:
Uniformly Stable Algorithms for Adversarial Training and Beyond. - Yanbo Wang, Muhan Zhang:
An Empirical Study of Realized GNN Expressiveness. - Neil Mallinar, Austin Zane, Spencer Frei, Bin Yu:
Minimum-Norm Interpolation Under Covariate Shift. - Guancheng Wan, Yijun Tian, Wenke Huang, Nitesh V. Chawla, Mang Ye:
S3GCL: Spectral, Swift, Spatial Graph Contrastive Learning. - Jiachen Zhao, Zhun Deng, David Madras, James Zou, Mengye Ren:
Learning and Forgetting Unsafe Examples in Large Language Models. - Sudhanshu Chanpuriya, Cameron Musco, Konstantinos Sotiropoulos, Charalampos E. Tsourakakis:
On the Role of Edge Dependency in Graph Generative Models. - Jiate Li, Meng Pang, Yun Dong, Jinyuan Jia, Binghui Wang:
Graph Neural Network Explanations are Fragile. - Jinqiu Jin, Haoxuan Li, Fuli Feng:
On the Maximal Local Disparity of Fairness-Aware Classifiers. - Luke Bailey, Euan Ong, Stuart Russell, Scott Emmons:
Image Hijacks: Adversarial Images can Control Generative Models at Runtime. - Mintong Kang, Nezihe Merve Gürel, Ning Yu, Dawn Song, Bo Li:
C-RAG: Certified Generation Risks for Retrieval-Augmented Language Models. - Wang Lin, Jingyuan Chen, Jiaxin Shi, Yichen Zhu, Chen Liang, Junzhong Miao, Tao Jin, Zhou Zhao, Fei Wu, Shuicheng Yan, Hanwang Zhang:
Non-confusing Generation of Customized Concepts in Diffusion Models. - Chenkai Mao, Robert Lupoiu, Tianxiang Dai, Mingkun Chen, Jonathan A. Fan:
Towards General Neural Surrogate Solvers with Specialized Neural Accelerators. - Zehan Wang, Ziang Zhang, Xize Cheng, Rongjie Huang, Luping Liu, Zhenhui Ye, Haifeng Huang, Yang Zhao, Tao Jin, Peng Gao, Zhou Zhao:
FreeBind: Free Lunch in Unified Multimodal Space via Knowledge Fusion. - Tamar Rott Shaham, Sarah Schwettmann, Franklin Wang, Achyuta Rajaram, Evan Hernandez, Jacob Andreas, Antonio Torralba:
A Multimodal Automated Interpretability Agent. - Lu Yin, Ajay Kumar Jaiswal, Shiwei Liu, Souvik Kundu, Zhangyang Wang:
Junk DNA Hypothesis: Pruning Small Pre-Trained Weights Irreversibly and Monotonically Impairs "Difficult" Downstream Tasks in LLMs. - Jincheng Huang, Jialie Shen, Xiaoshuang Shi, Xiaofeng Zhu:
On Which Nodes Does GCN Fail? Enhancing GCN From the Node Perspective. - Yajie Bao, Michael Crawshaw, Mingrui Liu:
Provable Benefits of Local Steps in Heterogeneous Federated Learning for Neural Networks: A Feature Learning Perspective. - Akhil Agnihotri, Rahul Jain, Haipeng Luo:
ACPO: A Policy Optimization Algorithm for Average MDPs with Constraints. - Scott M. Jordan, Adam White, Bruno Castro da Silva, Martha White, Philip S. Thomas:
Position: Benchmarking is Limited in Reinforcement Learning Research. - Zhou Wang, Xingye Qiao:
Efficient Online Set-valued Classification with Bandit Feedback. - Soroush Nasiriany, Fei Xia, Wenhao Yu, Ted Xiao, Jacky Liang, Ishita Dasgupta, Annie Xie, Danny Driess, Ayzaan Wahid, Zhuo Xu, Quan Vuong, Tingnan Zhang, Tsang-Wei Edward Lee, Kuang-Huei Lee, Peng Xu, Sean Kirmani, Yuke Zhu, Andy Zeng, Karol Hausman, Nicolas Heess, Chelsea Finn, Sergey Levine, Brian Ichter:
PIVOT: Iterative Visual Prompting Elicits Actionable Knowledge for VLMs. - Lunjia Hu, Charlotte Peale, Judy Hanwen Shen:
Multigroup Robustness. - Enneng Yang, Li Shen, Zhenyi Wang, Guibing Guo, Xiaojun Chen, Xingwei Wang, Dacheng Tao:
Representation Surgery for Multi-Task Model Merging. - Yuantong Li, Guang Cheng, Xiaowu Dai:
Two-sided Competing Matching Recommendation Markets With Quota and Complementary Preferences Constraints. - Lin Lu, Chenxi Dai, Wangcheng Tao, Binhang Yuan, Yanan Sun, Pan Zhou:
Position: Exploring the Robustness of Pipeline-Parallelism-Based Decentralized Training. - Wangzi Qi, Wei Wang, Chao Huang, Jie Wen, Cong Wang:
Batch Singular Value Polarization and Weighted Semantic Augmentation for Universal Domain Adaptation. - Jinyue Tian, Hui Xue, Yanfang Xue, Pengfei Fang:
Copula-Nested Spectral Kernel Network. - Jonggyu Jang, Seongjin Hwang, Hyun Jong Yang:
Rethinking DP-SGD in Discrete Domain: Exploring Logistic Distribution in the Realm of signSGD. - Ruipeng Liu, Borui He, Naveed Tahir, Garrett E. Katz:
On the Feasibility of Single-Pass Full-Capacity Learning in Linear Threshold Neurons with Binary Input Vectors. - Minh Hoang, Azza Fadhel, Aryan Deshwal, Jana Doppa, Trong Nghia Hoang:
Learning Surrogates for Offline Black-Box Optimization via Gradient Matching. - Jie Wen, Shijie Deng, Waikeung Wong, Guoqing Chao, Chao Huang, Lunke Fei, Yong Xu:
Diffusion-based Missing-view Generation With the Application on Incomplete Multi-view Clustering. - Xinyou Wang, Zaixiang Zheng, Fei Ye, Dongyu Xue, Shujian Huang, Quanquan Gu:
Diffusion Language Models Are Versatile Protein Learners. - Jannik Deuschel, Caleb Ellington, Yingtao Luo, Benjamin J. Lengerich, Pascal Friederich, Eric P. Xing:
Contextualized Policy Recovery: Modeling and Interpreting Medical Decisions with Adaptive Imitation Learning. - Yilun Du, Shuang Li, Antonio Torralba, Joshua B. Tenenbaum, Igor Mordatch:
Improving Factuality and Reasoning in Language Models through Multiagent Debate. - Alexander Havrilla, Sharath Chandra Raparthy, Christoforos Nalmpantis, Jane Dwivedi-Yu, Maksym Zhuravinskyi, Eric Hambro, Roberta Raileanu:
GLoRe: When, Where, and How to Improve LLM Reasoning via Global and Local Refinements. - Joshua Engels, Benjamin Landrum, Shangdi Yu, Laxman Dhulipala, Julian Shun:
Approximate Nearest Neighbor Search with Window Filters. - Kangjie Zheng, Siyu Long, Tianyu Lu, Junwei Yang, Xinyu Dai, Ming Zhang, Zaiqing Nie, Wei-Ying Ma, Hao Zhou:
ESM All-Atom: Multi-Scale Protein Language Model for Unified Molecular Modeling. - Guorui Quan, Zhiqiang Xu, Guiliang Liu:
Learning Constraints from Offline Demonstrations via Superior Distribution Correction Estimation. - Xinran Li, Zifan Liu, Shibo Chen, Jun Zhang:
Individual Contributions as Intrinsic Exploration Scaffolds for Multi-agent Reinforcement Learning. - Zhifeng Kong, Arushi Goel, Rohan Badlani, Wei Ping, Rafael Valle, Bryan Catanzaro:
Audio Flamingo: A Novel Audio Language Model with Few-Shot Learning and Dialogue Abilities. - David Heurtel-Depeiges, Charles Margossian, Ruben Ohana, Bruno Régaldo-Saint Blancard:
Listening to the noise: Blind Denoising with Gibbs Diffusion. - Nguyen Minh Quang, Hady W. Lauw:
Augmenting Decision with Hypothesis in Reinforcement Learning. - Han Fu, Jian Tan, Pinhan Zhang, Feifei Li, Jianling Sun:
PinNet: Pinpoint Instructive Information for Retrieval Augmented Code-to-Text Generation. - Alaa Anani, Tobias Lorenz, Bernt Schiele, Mario Fritz:
Adaptive Hierarchical Certification for Segmentation using Randomized Smoothing. - Yusong Wu, Tim Cooijmans, Kyle Kastner, Adam Roberts, Ian Simon, Alexander Scarlatos, Chris Donahue, Cassie Tarakajian, Shayegan Omidshafiei, Aaron C. Courville, Pablo Samuel Castro, Natasha Jaques, Cheng-Zhi Anna Huang:
Adaptive Accompaniment with ReaLchords. - Libin Zhu, Chaoyue Liu, Adityanarayanan Radhakrishnan, Mikhail Belkin:
Catapults in SGD: spikes in the training loss and their impact on generalization through feature learning. - José H. Blanchet, Peng Cui, Jiajin Li, Jiashuo Liu:
Stability Evaluation through Distributional Perturbation Analysis. - Zhengmian Hu, Heng Huang:
Accelerated Speculative Sampling Based on Tree Monte Carlo. - Kenneth Li, Samy Jelassi, Hugh Zhang, Sham M. Kakade, Martin Wattenberg, David Brandfonbrener:
Q-Probe: A Lightweight Approach to Reward Maximization for Language Models. - Xinwen Zhang, Ali Payani, Myungjin Lee, Richard Souvenir, Hongchang Gao:
A Federated Stochastic Multi-level Compositional Minimax Algorithm for Deep AUC Maximization. - Shaochen Zhong, Duy Le, Zirui Liu, Zhimeng Jiang, Andrew Ye, Jiamu Zhang, Jiayi Yuan, Kaixiong Zhou, Zhaozhuo Xu, Jing Ma, Shuai Xu, Vipin Chaudhary, Xia Hu:
GNNs Also Deserve Editing, and They Need It More Than Once. - Doyoung Kim, Susik Yoon, Dongmin Park, Youngjun Lee, Hwanjun Song, Jihwan Bang, Jae-Gil Lee:
One Size Fits All for Semantic Shifts: Adaptive Prompt Tuning for Continual Learning. - Gen Li, Yu Huang, Timofey Efimov, Yuting Wei, Yuejie Chi, Yuxin Chen:
Accelerating Convergence of Score-Based Diffusion Models, Provably. - Keke Huang, Yu Guang Wang, Ming Li, Pietro Lio:
How Universal Polynomial Bases Enhance Spectral Graph Neural Networks: Heterophily, Over-smoothing, and Over-squashing. - Zhaozhuo Xu, Zirui Liu, Beidi Chen, Shaochen Zhong, Yuxin Tang, Jue Wang, Kaixiong Zhou, Xia Hu, Anshumali Shrivastava:
Soft Prompt Recovers Compressed LLMs, Transferably. - Vikranth Dwaracherla, Seyed Mohammad Asghari, Botao Hao, Benjamin Van Roy:
Efficient Exploration for LLMs. - Zhuowen Yuan, Wenbo Guo, Jinyuan Jia, Bo Li, Dawn Song:
SHINE: Shielding Backdoors in Deep Reinforcement Learning. - Zhe Huang, Xiaowei Yu, Dajiang Zhu, Michael C. Hughes:
InterLUDE: Interactions between Labeled and Unlabeled Data to Enhance Semi-Supervised Learning. - Behrooz Tahmasebi, Ashkan Soleymani, Dara Bahri, Stefanie Jegelka, Patrick Jaillet:
A Universal Class of Sharpness-Aware Minimization Algorithms. - Jeongwhan Choi, Sumin Park, Hyowon Wi, Sung-Bae Cho, Noseong Park:
PANDA: Expanded Width-Aware Message Passing Beyond Rewiring. - Rachael Hwee Ling Sim, Jue Fan, Xiao Tian, Patrick Jaillet, Bryan Kian Hsiang Low:
Deletion-Anticipative Data Selection with a Limited Budget. - Yirui Liu, Xinghao Qiao, Yulong Pei, Liying Wang:
Deep Functional Factor Models: Forecasting High-Dimensional Functional Time Series via Bayesian Nonparametric Factorization. - Wenjie Wang, Biwei Huang, Feng Liu, Xinge You, Tongliang Liu, Kun Zhang, Mingming Gong:
Optimal Kernel Choice for Score Function-based Causal Discovery. - Kailas Vodrahalli, James Zou:
ArtWhisperer: A Dataset for Characterizing Human-AI Interactions in Artistic Creations. - Ben Adcock, Juan M. Cardenas, Nick C. Dexter:
A Unified Framework for Learning with Nonlinear Model Classes from Arbitrary Linear Samples. - Dennis Wu, Jerry Yao-Chieh Hu, Teng-Yun Hsiao, Han Liu:
Uniform Memory Retrieval with Larger Capacity for Modern Hopfield Models. - Rohan Wadhawan, Hritik Bansal, Kai-Wei Chang, Nanyun Peng:
ConTextual: Evaluating Context-Sensitive Text-Rich Visual Reasoning in Large Multimodal Models. - Zhangheng Li, Shiwei Liu, Tianlong Chen, Ajay Kumar Jaiswal, Zhenyu Zhang, Dilin Wang, Raghuraman Krishnamoorthi, Shiyu Chang, Zhangyang Wang:
Sparse Cocktail: Every Sparse Pattern Every Sparse Ratio All At Once. - Yunhao Tang, Zhaohan Daniel Guo, Zeyu Zheng, Daniele Calandriello, Rémi Munos, Mark Rowland, Pierre Harvey Richemond, Michal Valko, Bernardo Ávila Pires, Bilal Piot:
Generalized Preference Optimization: A Unified Approach to Offline Alignment. - Xiang Meng, Shibal Ibrahim, Kayhan Behdin, Hussein Hazimeh, Natalia Ponomareva, Rahul Mazumder:
OSSCAR: One-Shot Structured Pruning in Vision and Language Models with Combinatorial Optimization. - Yvonne Zhou, Mingyu Liang, Ivan Brugere, Danial Dervovic, Antigoni Polychroniadou, Min Wu, Dana Dachman-Soled:
Bounding the Excess Risk for Linear Models Trained on Marginal-Preserving, Differentially-Private, Synthetic Data. - Laixi Shi, Eric Mazumdar, Yuejie Chi, Adam Wierman:
Sample-Efficient Robust Multi-Agent Reinforcement Learning in the Face of Environmental Uncertainty. - Tom Yan, Ziyu Xu, Zachary Chase Lipton:
Foundations of Testing for Finite-Sample Causal Discovery. - Matthias Fey, Weihua Hu, Kexin Huang, Jan Eric Lenssen, Rishabh Ranjan, Joshua Robinson, Rex Ying, Jiaxuan You, Jure Leskovec:
Position: Relational Deep Learning - Graph Representation Learning on Relational Databases. - Gary Cheng, Moritz Hardt, Celestine Mendler-Dünner:
Causal Inference out of Control: Estimating Performativity without Treatment Randomization. - Arshia Afzal, Grigorios Chrysos, Volkan Cevher, Mahsa Shoaran:
REST: Efficient and Accelerated EEG Seizure Analysis through Residual State Updates. - Mirco Mutti, Aviv Tamar:
Test-Time Regret Minimization in Meta Reinforcement Learning. - Samuel Deng, Daniel Hsu:
Multi-group Learning for Hierarchical Groups. - Qian Huang, Jian Vora, Percy Liang, Jure Leskovec:
MLAgentBench: Evaluating Language Agents on Machine Learning Experimentation. - Oscar Smee, Fred Roosta:
Inexact Newton-type Methods for Optimisation with Nonnegativity Constraints. - Arvi Jonnarth, Jie Zhao, Michael Felsberg:
Learning Coverage Paths in Unknown Environments with Deep Reinforcement Learning. - Kirill Neklyudov, Rob Brekelmans, Alexander Tong, Lazar Atanackovic, Qiang Liu, Alireza Makhzani:
A Computational Framework for Solving Wasserstein Lagrangian Flows. - Stephen Zhang, Vardan Papyan:
Sparsest Models Elude Pruning: An Exposé of Pruning's Current Capabilities. - Mohammed Nowaz Rabbani Chowdhury, Meng Wang, Kaoutar El Maghraoui, Naigang Wang, Pin-Yu Chen, Christopher D. Carothers:
A Provably Effective Method for Pruning Experts in Fine-tuned Sparse Mixture-of-Experts. - Behrooz Tahmasebi, Stefanie Jegelka:
Sample Complexity Bounds for Estimating Probability Divergences under Invariances. - Artem Vysogorets, Anna Dawid, Julia Kempe:
Deconstructing the Goldilocks Zone of Neural Network Initialization. - Elena Orlova, Aleksei Ustimenko, Ruoxi Jia, Peter Y. Lu, Rebecca Willett:
Deep Stochastic Mechanics. - Guy Bar-Shalom, Beatrice Bevilacqua, Haggai Maron:
Subgraphormer: Unifying Subgraph GNNs and Graph Transformers via Graph Products. - Haoyang Zheng, Hengrong Du, Qi Feng, Wei Deng, Guang Lin:
Constrained Exploration via Reflected Replica Exchange Stochastic Gradient Langevin Dynamics. - Stefano Sarao Mannelli, Yaraslau Ivashinka, Andrew M. Saxe, Luca Saglietti:
Tilting the Odds at the Lottery: the Interplay of Overparameterisation and Curricula in Neural Networks. - Zihao Wang, Chirag Nagpal, Jonathan Berant, Jacob Eisenstein, Alexander Nicholas D'Amour, Sanmi Koyejo, Victor Veitch:
Transforming and Combining Rewards for Aligning Large Language Models. - Vincent Cohen-Addad, Tommaso d'Orsi, Aida Mousavifar:
A Near-Linear Time Approximation Algorithm for Beyond-Worst-Case Graph Clustering. - Bo Sun, Jerry Huang, Nicolas Christianson, Mohammad Hajiesmaili, Adam Wierman, Raouf Boutaba:
Online Algorithms with Uncertainty-Quantified Predictions. - Mohamad Amin Mohamadi, Zhiyuan Li, Lei Wu, Danica J. Sutherland:
Why Do You Grok? A Theoretical Analysis on Grokking Modular Addition. - Yedi Zhang, Peter E. Latham, Andrew M. Saxe:
Understanding Unimodal Bias in Multimodal Deep Linear Networks. - Srikanth Malla, Joon Hee Choi, Chiho Choi:
COPAL: Continual Pruning in Large Language Generative Models. - Ting Li, Chengchun Shi, Qianglin Wen, Yang Sui, Yongli Qin, Chunbo Lai, Hongtu Zhu:
Combining Experimental and Historical Data for Policy Evaluation. - Lior Cohen, Kaixin Wang, Bingyi Kang, Shie Mannor:
Improving Token-Based World Models with Parallel Observation Prediction. - Nadav Dym, Hannah Lawrence, Jonathan W. Siegel:
Equivariant Frames and the Impossibility of Continuous Canonicalization. - Peter Mikhael, Itamar Chinn, Regina Barzilay:
CLIPZyme: Reaction-Conditioned Virtual Screening of Enzymes. - Jacopo Iollo, Christophe Heinkelé, Pierre Alliez, Florence Forbes:
PASOA- PArticle baSed Bayesian Optimal Adaptive design. - Pierre Mergny, Justin Ko, Florent Krzakala:
Spectral Phase Transition and Optimal PCA in Block-Structured Spiked Models. - Songtao Feng, Ming Yin, Yu-Xiang Wang, Jing Yang, Yingbin Liang:
Improving Sample Efficiency of Model-Free Algorithms for Zero-Sum Markov Games. - David P. Woodruff, Taisuke Yasuda:
Reweighted Solutions for Weighted Low Rank Approximation. - Nimrah Mustafa, Rebekka Burkholz:
GATE: How to Keep Out Intrusive Neighbors. - Naman Agarwal, Satyen Kale, Karan Singh, Abhradeep Guha Thakurta:
Improved Differentially Private and Lazy Online Convex Optimization: Lower Regret without Smoothness Requirements. - Tanishq Kumar, Kevin Luo, Mark Sellke:
No Free Prune: Information-Theoretic Barriers to Pruning at Initialization. - Yizhuo Chen, Chun-Fu Chen, Hsiang Hsu, Shaohan Hu, Marco Pistoia, Tarek F. Abdelzaher:
MaSS: Multi-attribute Selective Suppression for Utility-preserving Data Transformation from an Information-theoretic Perspective. - Vighnesh Subramaniam, Colin Conwell, Christopher Wang, Gabriel Kreiman, Boris Katz, Ignacio Cases, Andrei Barbu:
Revealing Vision-Language Integration in the Brain with Multimodal Networks. - Ziqian Lin, Kangwook Lee:
Dual Operating Modes of In-Context Learning. - Emre Sahinoglu, Shahin Shahrampour:
An Online Optimization Perspective on First-Order and Zero-Order Decentralized Nonsmooth Nonconvex Stochastic Optimization. - Xihaier Luo, Xiaoning Qian, Byung-Jun Yoon:
Hierarchical Neural Operator Transformer with Learnable Frequency-aware Loss Prior for Arbitrary-scale Super-resolution. - Jie Ji, Gen Li, Lu Yin, Minghai Qin, Geng Yuan, Linke Guo, Shiwei Liu, Xiaolong Ma:
Advancing Dynamic Sparse Training by Exploring Optimization Opportunities. - Tin Sum Cheng, Aurélien Lucchi, Anastasis Kratsios, David Belius:
Characterizing Overfitting in Kernel Ridgeless Regression Through the Eigenspectrum. - Haitao Lin, Odin Zhang, Huifeng Zhao, Dejun Jiang, Lirong Wu, Zicheng Liu, Yufei Huang, Stan Z. Li:
PPFLOW: Target-Aware Peptide Design with Torsional Flow Matching. - Haitao Lin, Lirong Wu, Yufei Huang, Yunfan Liu, Odin Zhang, Yuanqing Zhou, Rui Sun, Stan Z. Li:
GeoAB: Towards Realistic Antibody Design and Reliable Affinity Maturation. - Jonas Arruda, Yannik Schälte, Clemens Peiter, Olga Teplytska, Ulrich Jaehde, Jan Hasenauer:
An amortized approach to non-linear mixed-effects modeling based on neural posterior estimation. - Jan Ludziejewski, Jakub Krajewski, Kamil Adamczewski, Maciej Pióro, Michal Krutul, Szymon Antoniak, Kamil Ciebiera, Krystian Król, Tomasz Odrzygózdz, Piotr Sankowski, Marek Cygan, Sebastian Jaszczur:
Scaling Laws for Fine-Grained Mixture of Experts. - Arshia Soltani Moakhar, Eugenia Iofinova, Elias Frantar, Dan Alistarh:
SPADE: Sparsity-Guided Debugging for Deep Neural Networks. - Johannes Müller, Marius Zeinhofer:
Position: Optimization in SciML Should Employ the Function Space Geometry. - Chhavi Yadav, Amrita Roy Chowdhury, Dan Boneh, Kamalika Chaudhuri:
FairProof : Confidential and Certifiable Fairness for Neural Networks. - Zhenghao Zeng, David Arbour, Avi Feller, Raghavendra Addanki, Ryan A. Rossi, Ritwik Sinha, Edward H. Kennedy:
Continuous Treatment Effects with Surrogate Outcomes. - Jonas Beck, Nathanael Bosch, Michael Deistler, Kyra L. Kadhim, Jakob H. Macke, Philipp Hennig, Philipp Berens:
Diffusion Tempering Improves Parameter Estimation with Probabilistic Integrators for Ordinary Differential Equations. - Deividas Eringis, John Leth, Zheng-Hua Tan, Rafal Wisniewski, Mihály Petreczky:
PAC-Bayesian Error Bound, via Rényi Divergence, for a Class of Linear Time-Invariant State-Space Models. - Cheng Zhang, Jianyi Cheng, George Anthony Constantinides, Yiren Zhao:
LQER: Low-Rank Quantization Error Reconstruction for LLMs. - Haoyu Yang, Haoxing Ren:
ILILT: Implicit Learning of Inverse Lithography Technologies. - Annie Xie, Logan M. Bhamidipaty, Evan Zheran Liu, Joey Hong, Sergey Levine, Chelsea Finn:
Learning to Explore in POMDPs with Informational Rewards. - Thomas Decker, Ananta R. Bhattarai, Jindong Gu, Volker Tresp, Florian Buettner:
Provably Better Explanations with Optimized Aggregation of Feature Attributions. - Zhixiong Zhuang, Maria-Irina Nicolae, Mario Fritz:
Stealthy Imitation: Reward-guided Environment-free Policy Stealing. - Ori Press, Ravid Shwartz-Ziv, Yann LeCun, Matthias Bethge:
The Entropy Enigma: Success and Failure of Entropy Minimization. - Deepak Ravikumar, Efstathia Soufleri, Abolfazl Hashemi, Kaushik Roy:
Unveiling Privacy, Memorization, and Input Curvature Links. - Shangda Yang, Vitaly Zankin, Maximilian Balandat, Stefan Scherer, Kevin T. Carlberg, Neil Walton, Kody J. H. Law:
Accelerating Look-ahead in Bayesian Optimization: Multilevel Monte Carlo is All you Need. - Ke Li, Weian Mao, Shuaike Shen, Xiaoran Jiao, Zheng Sun, Hao Cheng, Chunhua Shen:
Floating Anchor Diffusion Model for Multi-motif Scaffolding. - Taeseong Yoon, Heeyoung Kim:
Uncertainty Estimation by Density Aware Evidential Deep Learning. - Andy Zhou, Kai Yan, Michal Shlapentokh-Rothman, Haohan Wang, Yu-Xiong Wang:
Language Agent Tree Search Unifies Reasoning, Acting, and Planning in Language Models. - Yuwei Fu, Haichao Zhang, Di Wu, Wei Xu, Benoit Boulet:
FuRL: Visual-Language Models as Fuzzy Rewards for Reinforcement Learning. - Jongsuk Kim, Hyeongkeun Lee, Kyeongha Rho, Junmo Kim, Joon Son Chung:
EquiAV: Leveraging Equivariance for Audio-Visual Contrastive Learning. - Yun-Hsuan Lien, Ping-Chun Hsieh, Tzu-Mao Li, Yu-Shuen Wang:
Enhancing Value Function Estimation through First-Order State-Action Dynamics in Offline Reinforcement Learning. - Ilya Zisman, Vladislav Kurenkov, Alexander Nikulin, Viacheslav Sinii, Sergey Kolesnikov:
Emergence of In-Context Reinforcement Learning from Noise Distillation. - Arjun Subramonian, Levent Sagun, Yizhou Sun:
Networked Inequality: Preferential Attachment Bias in Graph Neural Network Link Prediction. - Zhichen Zeng, Ruizhong Qiu, Zhe Xu, Zhining Liu, Yuchen Yan, Tianxin Wei, Lei Ying, Jingrui He, Hanghang Tong:
Graph Mixup on Approximate Gromov-Wasserstein Geodesics. - Viacheslav Sinii, Alexander Nikulin, Vladislav Kurenkov, Ilya Zisman, Sergey Kolesnikov:
In-Context Reinforcement Learning for Variable Action Spaces. - Ruicheng Xian, Qiaobo Li, Gautam Kamath, Han Zhao:
Differentially Private Post-Processing for Fair Regression. - Yafei Wang, Bo Pan, Mei Li, Jianya Lu, Lingchen Kong, Bei Jiang, Linglong Kong:
Sample Average Approximation for Conditional Stochastic Optimization with Dependent Data. - Yuancheng Xu, Chenghao Deng, Yanchao Sun, Ruijie Zheng, Xiyao Wang, Jieyu Zhao, Furong Huang:
Adapting Static Fairness to Sequential Decision-Making: Bias Mitigation Strategies towards Equal Long-term Benefit Rate. - Samy Jelassi, David Brandfonbrener, Sham M. Kakade, Eran Malach:
Repeat After Me: Transformers are Better than State Space Models at Copying. - Ludwig Winkler, Lorenz Richter, Manfred Opper:
Bridging discrete and continuous state spaces: Exploring the Ehrenfest process in time-continuous diffusion models. - Jiaqi Zhang, Joel Jennings, Agrin Hilmkil, Nick Pawlowski, Cheng Zhang, Chao Ma:
Towards Causal Foundation Model: on Duality between Optimal Balancing and Attention. - Batiste Le Bars, Aurélien Bellet, Marc Tommasi, Kevin Scaman, Giovanni Neglia:
Improved Stability and Generalization Guarantees of the Decentralized SGD Algorithm. - Jingdong Zhang, Luan Yang, Qunxi Zhu, Wei Lin:
FESSNC: Fast Exponentially Stable and Safe Neural Controller. - Bang An, Mucong Ding, Tahseen Rabbani, Aakriti Agrawal, Yuancheng Xu, Chenghao Deng, Sicheng Zhu, Abdirisak Mohamed, Yuxin Wen, Tom Goldstein, Furong Huang:
WAVES: Benchmarking the Robustness of Image Watermarks. - Alec Koppel, Sujay Bhatt, Jiacheng Guo, Joe Eappen, Mengdi Wang, Sumitra Ganesh:
Information-Directed Pessimism for Offline Reinforcement Learning. - Anurag Koul, Shivakanth Sujit, Shaoru Chen, Ben Evans, Lili Wu, Byron Xu, Rajan Chari, Riashat Islam, Raihan Seraj, Yonathan Efroni, Lekan P. Molu, Miroslav Dudík, John Langford, Alex Lamb:
PcLast: Discovering Plannable Continuous Latent States. - Theresa Stadler, Bogdan Kulynych, Michael Gastpar, Nicolas Papernot, Carmela Troncoso:
The Fundamental Limits of Least-Privilege Learning. - Rahul Singh, Akshay Mete, Avik Kar, Panganamala R. Kumar:
Finite Time Logarithmic Regret Bounds for Self-Tuning Regulation. - Kang Lin, Reinhard Heckel:
Robustness of Deep Learning for Accelerated MRI: Benefits of Diverse Training Data. - Kartik Patwari, Chen-Nee Chuah, Lingjuan Lyu, Vivek Sharma:
PerceptAnon: Exploring the Human Perception of Image Anonymization Beyond Pseudonymization for GDPR. - Rui Miao, Kaixiong Zhou, Yili Wang, Ninghao Liu, Ying Wang, Xin Wang:
Rethinking Independent Cross-Entropy Loss For Graph-Structured Data. - Yuxing Liu, Lesi Chen, Luo Luo:
Decentralized Convex Finite-Sum Optimization with Better Dependence on Condition Numbers. - Emile van Krieken, Pasquale Minervini, Edoardo M. Ponti, Antonio Vergari:
On the Independence Assumption in Neurosymbolic Learning. - Su Hyeong Lee, Qingqi Zhang, Risi Kondor:
Sign Rank Limitations for Inner Product Graph Decoders. - Yuanyu Wan, Chang Yao, Mingli Song, Lijun Zhang:
Non-stationary Online Convex Optimization with Arbitrary Delays. - Jianqing Liang, Min Chen, Jiye Liang:
Graph External Attention Enhanced Transformer. - Fateme Jamshidi, Luca Ganassali, Negar Kiyavash:
On the sample complexity of conditional independence testing with Von Mises estimator with application to causal discovery. - Aleksandr Karakulev, Dave Zachariah, Prashant Singh:
Adaptive Robust Learning using Latent Bernoulli Variables. - Yichuan Mo, Hui Huang, Mingjie Li, Ang Li, Yisen Wang:
TERD: A Unified Framework for Safeguarding Diffusion Models Against Backdoors. - Eleonora Bonel, Luca Nannini, Davide Bassi, Michele Joshua Maggini:
Position: Machine Learning-powered Assessments of the EU Digital Services Act Aid Quantify Policy Impacts on Online Harms. - Kunda Yan, Sen Cui, Abudukelimu Wuerkaixi, Jingfeng Zhang, Bo Han, Gang Niu, Masashi Sugiyama, Changshui Zhang:
Balancing Similarity and Complementarity for Federated Learning. - Ryan Benkert, Mohit Prabhushankar, Ghassan AlRegib:
Transitional Uncertainty with Layered Intermediate Predictions. - Hossein Zakerinia, Amin Behjati, Christoph H. Lampert:
More Flexible PAC-Bayesian Meta-Learning by Learning Learning Algorithms. - Shenzhi Yang, Bin Liang, An Liu, Lin Gui, Xingkai Yao, Xiaofang Zhang:
Bounded and Uniform Energy-based Out-of-distribution Detection for Graphs. - Amir Mohammad Karimi-Mamaghan, Panagiotis Tigas, Karl Henrik Johansson, Yarin Gal, Yashas Annadani, Stefan Bauer:
Challenges and Considerations in the Evaluation of Bayesian Causal Discovery. - Jin Wang, Shichao Dong, Yapeng Zhu, Kelu Yao, Weidong Zhao, Chao Li, Ping Luo:
Diagnosing the Compositional Knowledge of Vision Language Models from a Game-Theoretic View. - Joakim Sunde, Brigt Arve Toppe Håvardstun, Jan Kratochvíl, Jan Arne Telle:
On a Combinatorial Problem Arising in Machine Teaching. - Ang Li, Yichuan Mo, Mingjie Li, Yisen Wang:
PID: Prompt-Independent Data Protection Against Latent Diffusion Models. - Hao Hu, Yiqin Yang, Jianing Ye, Chengjie Wu, Ziqing Mai, Yujing Hu, Tangjie Lv, Changjie Fan, Qianchuan Zhao, Chongjie Zhang:
Bayesian Design Principles for Offline-to-Online Reinforcement Learning. - Jie Zhang, Xiaosong Ma, Song Guo, Peng Li, Wenchao Xu, Xueyang Tang, Zicong Hong:
Amend to Alignment: Decoupled Prompt Tuning for Mitigating Spurious Correlation in Vision-Language Models. - Charles Guille-Escuret, Hiroki Naganuma, Kilian Fatras, Ioannis Mitliagkas:
No Wrong Turns: The Simple Geometry Of Neural Networks Optimization Paths. - David Glukhov, Ilia Shumailov, Yarin Gal, Nicolas Papernot, Vardan Papyan:
Position: Fundamental Limitations of LLM Censorship Necessitate New Approaches. - Riccardo De Santi, Manish Prajapat, Andreas Krause:
Global Reinforcement Learning : Beyond Linear and Convex Rewards via Submodular Semi-gradient Methods. - Marco Bressan, Mauro Sozio:
Fully-Dynamic Approximate Decision Trees With Worst-Case Update Time Guarantees. - Nimrod Berman, Ilan Naiman, Idan Arbiv, Gal Fadlon, Omri Azencot:
Sequential Disentanglement by Extracting Static Information From A Single Sequence Element. - Dongchao Yang, Jinchuan Tian, Xu Tan, Rongjie Huang, Songxiang Liu, Haohan Guo, Xuankai Chang, Jiatong Shi, Sheng Zhao, Jiang Bian, Zhou Zhao, Xixin Wu, Helen M. Meng:
UniAudio: Towards Universal Audio Generation with Large Language Models. - Daniel Gedon, Antônio H. Ribeiro, Thomas B. Schön:
No Double Descent in Principal Component Regression: A High-Dimensional Analysis. - Hugo Attali, Davide Buscaldi, Nathalie Pernelle:
Delaunay Graph: Addressing Over-Squashing and Over-Smoothing Using Delaunay Triangulation. - Fengchun Qiao, Xi Peng:
Ensemble Pruning for Out-of-distribution Generalization. - Filippo Lazzati, Mirco Mutti, Alberto Maria Metelli:
Offline Inverse RL: New Solution Concepts and Provably Efficient Algorithms. - Xinyu Ma, Xu Chu, Zhibang Yang, Yang Lin, Xin Gao, Junfeng Zhao:
Parameter Efficient Quasi-Orthogonal Fine-Tuning via Givens Rotation. - Luca Arnaboldi, Yatin Dandi, Florent Krzakala, Bruno Loureiro, Luca Pesce, Ludovic Stephan:
Online Learning and Information Exponents: The Importance of Batch size & Time/Complexity Tradeoffs. - Youssef Allouah, Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, Geovani Rizk, Sasha Voitovych:
Byzantine-Robust Federated Learning: Impact of Client Subsampling and Local Updates. - Chang He, Zhaoye Pan, Xiao Wang, Bo Jiang:
Riemannian Accelerated Zeroth-order Algorithm: Improved Robustness and Lower Query Complexity. - Yihang Chen, Fanghui Liu, Taiji Suzuki, Volkan Cevher:
High-Dimensional Kernel Methods under Covariate Shift: Data-Dependent Implicit Regularization. - Qitan Lv, Jie Wang, Hanzhu Chen, Bin Li, Yongdong Zhang, Feng Wu:
Coarse-to-Fine Highlighting: Reducing Knowledge Hallucination in Large Language Models. - Yao Fu, Rameswar Panda, Xinyao Niu, Xiang Yue, Hannaneh Hajishirzi, Yoon Kim, Hao Peng:
Data Engineering for Scaling Language Models to 128K Context. - Motasem Alfarra, Hani Itani, Alejandro Pardo, Shyma Alhuwaider, Merey Ramazanova, Juan Camilo Pérez, Zhipeng Cai, Matthias Müller, Bernard Ghanem:
Evaluation of Test-Time Adaptation Under Computational Time Constraints. - Christian H. X. Ali Mehmeti-Göpel, Michael Wand:
On the Weight Dynamics of Deep Normalized Networks. - Tenglong Liu, Yang Li, Yixing Lan, Hao Gao, Wei Pan, Xin Xu:
Adaptive Advantage-Guided Policy Regularization for Offline Reinforcement Learning. - Yunfei Long, Zilin Tian, Liguo Zhang, Huosheng Xu:
Reparameterized Importance Sampling for Robust Variational Bayesian Neural Networks. - Junjie Zhang, Chenjia Bai, Haoran He, Zhigang Wang, Bin Zhao, Xiu Li, Xuelong Li:
SAM-E: Leveraging Visual Foundation Model with Sequence Imitation for Embodied Manipulation. - Dingyang Chen, Qi Zhang:
E(3)-Equivariant Actor-Critic Methods for Cooperative Multi-Agent Reinforcement Learning. - Kaibo He, Chenhui Zuo, Chengtian Ma, Yanan Sui:
DynSyn: Dynamical Synergistic Representation for Efficient Learning and Control in Overactuated Embodied Systems. - Runsheng Yu, Youzhi Zhang, James T. Kwok:
Improving Sharpness-Aware Minimization by Lookahead. - Yoni Choukroun, Lior Wolf:
Learning Linear Block Error Correction Codes. - Zhangyang Gao, Daize Dong, Cheng Tan, Jun Xia, Bozhen Hu, Stan Z. Li:
A Graph is Worth K Words: Euclideanizing Graph using Pure Transformer. - Ziheng Cheng, Longlin Yu, Tianyu Xie, Shiyue Zhang, Cheng Zhang:
Kernel Semi-Implicit Variational Inference. - Xuyang Zhong, Yixiao Huang, Chen Liu:
Towards Efficient Training and Evaluation of Robust Models against l0 Bounded Adversarial Perturbations. - Wenhao Yang, Wei Jiang, Yibo Wang, Ping Yang, Yao Hu, Lijun Zhang:
Small-loss Adaptive Regret for Online Convex Optimization. - Kevin Kögler, Aleksandr Shevchenko, Hamed Hassani, Marco Mondelli:
Compression of Structured Data with Autoencoders: Provable Benefit of Nonlinearities and Depth. - Maksim Sinelnikov, Manuel Haussmann, Harri Lähdesmäki:
Latent variable model for high-dimensional point process with structured missingness. - Zhenlong Liu, Lei Feng, Huiping Zhuang, Xiaofeng Cao, Hongxin Wei:
Mitigating Privacy Risk in Membership Inference by Convex-Concave Loss. - Roman Koshkin, Tomoki Fukai:
convSeq: Fast and Scalable Method for Detecting Patterns in Spike Data. - Zhiwei Tang, Jiasheng Tang, Hao Luo, Fan Wang, Tsung-Hui Chang:
Accelerating Parallel Sampling of Diffusion Models. - Iris A. M. Huijben, Matthijs Douze, Matthew J. Muckley, Ruud van Sloun, Jakob Verbeek:
Residual Quantization with Implicit Neural Codebooks. - Gholamali Aminian, Yixuan He, Gesine Reinert, Lukasz Szpruch, Samuel N. Cohen:
Generalization Error of Graph Neural Networks in the Mean-field Regime. - Wei Wang, Takashi Ishida, Yu-Jie Zhang, Gang Niu, Masashi Sugiyama:
Learning with Complementary Labels Revisited: The Selected-Completely-at-Random Setting Is More Practical. - Wei Chen, Zhen Huang, Liang Xie, Binbin Lin, Houqiang Li, Le Lu, Xinmei Tian, Deng Cai, Yonggang Zhang, Wenxiao Wang, Xu Shen, Jieping Ye:
From Yes-Men to Truth-Tellers: Addressing Sycophancy in Large Language Models with Pinpoint Tuning. - Pedro Alonso Campana, Paul Prasse, Tobias Scheffer:
Predicting Dose-Response Curves with Deep Neural Networks. - Marvin Schmitt, Desi R. Ivanova, Daniel Habermann, Ullrich Köthe, Paul-Christian Bürkner, Stefan T. Radev:
Leveraging Self-Consistency for Data-Efficient Amortized Bayesian Inference. - Zhongze Wu, Hongyan Xu, Yitian Long, Shan You, Xiu Su, Jun Long, Yueyi Luo, Chang Xu:
Detecting Any instruction-to-answer interaction relationship: Universal Instruction-to-Answer Navigator for Med-VQA. - Gautier Dagan, Gabriel Synnaeve, Baptiste Rozière:
Getting the most out of your tokenizer for pre-training and domain adaptation. - Haoran Xu, Amr Sharaf, Yunmo Chen, Weiting Tan, Lingfeng Shen, Benjamin Van Durme, Kenton Murray, Young Jin Kim:
Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation. - Antoine Scheid, Daniil Tiapkin, Etienne Boursier, Aymeric Capitaine, Eric Moulines, Michael I. Jordan, El-Mahdi El-Mhamdi, Alain Oliviero Durmus:
Incentivized Learning in Principal-Agent Bandit Games. - Kouroche Bouchiat, Alexander Immer, Hugo Yèche, Gunnar Rätsch, Vincent Fortuin:
Improving Neural Additive Models with Bayesian Principles. - Fabian Fumagalli, Maximilian Muschalik, Patrick Kolpaczki, Eyke Hüllermeier, Barbara Hammer:
KernelSHAP-IQ: Weighted Least Square Optimization for Shapley Interactions. - Kyriakos Axiotis, Vincent Cohen-Addad, Monika Henzinger, Sammy Jerome, Vahab Mirrokni, David Saulpic, David P. Woodruff, Michael Wunder:
Data-Efficient Learning via Clustering-Based Sensitivity Sampling: Foundation Models and Beyond. - Jianhao Ding, Zhiyu Pan, Yujia Liu, Zhaofei Yu, Tiejun Huang:
Robust Stable Spiking Neural Networks. - Yifan Sun, Grace Yi:
Regression Learning with Limited Observations of Multivariate Outcomes and Features. - Langqi Liu, Yibo Wang, Lijun Zhang:
High-Probability Bound for Non-Smooth Non-Convex Stochastic Optimization with Heavy Tails. - Anke Tang, Li Shen, Yong Luo, Nan Yin, Lefei Zhang, Dacheng Tao:
Merging Multi-Task Models via Weight-Ensembling Mixture of Experts. - Christoph Jürgen Hemmer, Manuel Brenner, Florian Hess, Daniel Durstewitz:
Optimal Recurrent Network Topologies for Dynamical Systems Reconstruction. - Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie:
The Good, The Bad, and Why: Unveiling Emotions in Generative AI. - Wanqi Zhou, Shuanghao Bai, Shujian Yu, Qibin Zhao, Badong Chen:
Jacobian Regularizer-based Neural Granger Causality. - Guangzhi Sun, Wenyi Yu, Changli Tang, Xianzhao Chen, Tian Tan, Wei Li, Lu Lu, Zejun Ma, Yuxuan Wang, Chao Zhang:
video-SALMONN: Speech-Enhanced Audio-Visual Large Language Models. - Omri Ben-Dov, Jake Fawkes, Samira Samadi, Amartya Sanyal:
The Role of Learning Algorithms in Collective Action. - Shuo Yang, Zhe Cao, Sheng Guo, Ruiheng Zhang, Ping Luo, Shengping Zhang, Liqiang Nie:
Mind the Boundary: Coreset Selection via Reconstructing the Decision Boundary. - Lee-Ad Gottlieb, Timor Sharabi, Roi Weiss:
Weighted distance nearest neighbor condensing. - Gabriela Kadlecová, Jovita Lukasik, Martin Pilát, Petra Vidnerová, Mahmoud Safari, Roman Neruda, Frank Hutter:
Surprisingly Strong Performance Prediction with Neural Graph Features. - Weijia Shao:
Improved Dimensionality Dependence for Zeroth-Order Optimisation over Cross-Polytopes. - Jerry Yao-Chieh Hu, Thomas Lin, Zhao Song, Han Liu:
On Computational Limits of Modern Hopfield Models: A Fine-Grained Complexity Analysis. - Taeyoung Kim, Hongseok Yang:
An Infinite-Width Analysis on the Jacobian-Regularised Training of a Neural Network. - Paul Duetting, Federico Fusco, Silvio Lattanzi, Ashkan Norouzi-Fard, Morteza Zadimoghaddam:
Consistent Submodular Maximization. - Linxiao Yang, Yunze Tong, Xinyue Gu, Liang Sun:
Explain Temporal Black-Box Models via Functional Decomposition. - Yue Huang, Lichao Sun, Haoran Wang, Siyuan Wu, Qihui Zhang, Yuan Li, Chujie Gao, Yixin Huang, Wenhan Lyu, Yixuan Zhang, Xiner Li, Hanchi Sun, Zhengliang Liu, Yixin Liu, Yijue Wang, Zhikun Zhang, Bertie Vidgen, Bhavya Kailkhura, Caiming Xiong, Chaowei Xiao, Chunyuan Li, Eric P. Xing, Furong Huang, Hao Liu, Heng Ji, Hongyi Wang, Huan Zhang, Huaxiu Yao, Manolis Kellis, Marinka Zitnik, Meng Jiang, Mohit Bansal, James Zou, Jian Pei, Jian Liu, Jianfeng Gao, Jiawei Han, Jieyu Zhao, Jiliang Tang, Jindong Wang, Joaquin Vanschoren, John C. Mitchell, Kai Shu, Kaidi Xu, Kai-Wei Chang, Lifang He, Lifu Huang, Michael Backes, Neil Zhenqiang Gong, Philip S. Yu, Pin-Yu Chen, Quanquan Gu, Ran Xu, Rex Ying, Shuiwang Ji, Suman Jana, Tianlong Chen, Tianming Liu, Tianyi Zhou, William Wang, Xiang Li, Xiangliang Zhang, Xiao Wang, Xing Xie, Xun Chen, Xuyu Wang, Yan Liu, Yanfang Ye, Yinzhi Cao, Yong Chen, Yue Zhao:
Position: TrustLLM: Trustworthiness in Large Language Models. - Han Zhong, Jiachen Hu, Yecheng Xue, Tongyang Li, Liwei Wang:
Provably Efficient Exploration in Quantum Reinforcement Learning with Logarithmic Worst-Case Regret. - Yibo Yang, Xiaojie Li, Motasem Alfarra, Hasan Abed Al Kader Hammoud, Adel Bibi, Philip Torr, Bernard Ghanem:
Towards Interpretable Deep Local Learning with Successive Gradient Reconciliation. - Zixiang Chen, Yihe Deng, Huizhuo Yuan, Kaixuan Ji, Quanquan Gu:
Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models. - Hongyi Peng, Han Yu, Xiaoli Tang, Xiaoxiao Li:
FedCal: Achieving Local and Global Calibration in Federated Learning via Aggregated Parameterized Scaler. - Zhi Zheng, Shunyu Yao, Zhenkun Wang, Xialiang Tong, Mingxuan Yuan, Ke Tang:
DPN: Decoupling Partition and Navigation for Neural Solvers of Min-max Vehicle Routing Problems. - Peihua Mai, Ran Yan, Zhe Huang, Youjia Yang, Yan Pang:
Split-and-Denoise: Protect large language model inference with local differential privacy. - Yididiya Y. Nadew, Xuhui Fan, Christopher John Quinn:
Conditionally-Conjugate Gaussian Process Factor Analysis for Spike Count Data via Data Augmentation. - Yewen Pu, Saujas Vaduguru, Priyan Vaithilingam, Elena L. Glassman, Daniel Fried:
Amortizing Pragmatic Program Synthesis with Rankings. - Anton Plaksin, Vitaly Kalev:
Zero-Sum Positional Differential Games as a Framework for Robust Reinforcement Learning: Deep Q-Learning Approach. - Dong Chen, Hongyuan Qu, Guangwu Xu:
AegisFL: Efficient and Flexible Privacy-Preserving Byzantine-Robust Cross-silo Federated Learning. - Xiao Zhang, Miao Li, Ji Wu:
Conditional Language Learning with Context. - Sanjay Kariyappa, Freddy Lécué, Saumitra Mishra, Christopher Pond, Daniele Magazzeni, Manuela Veloso:
Progressive Inference: Explaining Decoder-Only Sequence Classification Models Using Intermediate Predictions. - Qitian Wu, Fan Nie, Chenxiao Yang, Junchi Yan:
Learning Divergence Fields for Shift-Robust Graph Representations. - Kaining Zhang, Jiayi Ma:
Sparse-to-dense Multimodal Image Registration via Multi-Task Learning. - Chaoqun Du, Yizeng Han, Gao Huang:
SimPro: A Simple Probabilistic Framework Towards Realistic Long-Tailed Semi-Supervised Learning. - Mingze Wang, Zeping Min, Lei Wu:
Achieving Margin Maximization Exponentially Fast via Progressive Norm Rescaling. - Tomohiro Shiraishi, Daiki Miwa, Teruyuki Katsuoka, Vo Nguyen Le Duy, Kouichi Taji, Ichiro Takeuchi:
Statistical Test for Attention Maps in Vision Transformers. - Neelkamal Bhuyan, Debankur Mukherjee, Adam Wierman:
Best of Both Worlds Guarantees for Smoothed Online Quadratic Optimization. - Taoan Huang, Aaron M. Ferber, Arman Zharmagambetov, Yuandong Tian, Bistra Dilkina:
Contrastive Predict-and-Search for Mixed Integer Linear Programs. - Xingchen Cao, Fan-Ming Luo, Junyin Ye, Tian Xu, Zhilong Zhang, Yang Yu:
Limited Preference Aided Imitation Learning from Imperfect Demonstrations. - Junfeng Chen, Kailiang Wu:
Positional Knowledge is All You Need: Position-induced Transformer (PiT) for Operator Learning. - Taehwan Lee, Sung Whan Yoon:
Rethinking the Flat Minima Searching in Federated Learning. - Dake Bu, Wei Huang, Taiji Suzuki, Ji Cheng, Qingfu Zhang, Zhiqiang Xu, Hau-San Wong:
Provably Neural Active Learning Succeeds via Prioritizing Perplexing Samples. - Lijun Zhang, Haomin Bai, Wei-Wei Tu, Ping Yang, Yao Hu:
Efficient Stochastic Approximation of Minimax Excess Risk Optimization. - Da Wang, Lin Li, Wei Wei, Qixian Yu, Jianye Hao, Jiye Liang:
Improving Generalization in Offline Reinforcement Learning via Adversarial Data Splitting. - Chengliang Liu, Gehui Xu, Jie Wen, Yabo Liu, Chao Huang, Yong Xu:
Partial Multi-View Multi-Label Classification via Semantic Invariance Learning and Prototype Modeling. - Andrew Campbell, Jason Yim, Regina Barzilay, Tom Rainforth, Tommi S. Jaakkola:
Generative Flows on Discrete State-Spaces: Enabling Multimodal Flows with Applications to Protein Co-Design. - Vivien Cabannes, Berfin Simsek, Alberto Bietti:
Learning Associative Memories with Gradient Descent. - Shuoyuan Wang, Jindong Wang, Guoqing Wang, Bob Zhang, Kaiyang Zhou, Hongxin Wei:
Open-Vocabulary Calibration for Fine-tuned CLIP. - Yepeng Liu, Yuheng Bu:
Adaptive Text Watermark for Large Language Models. - Noga Mudrik, Gal Mishne, Adam S. Charles:
SiBBlInGS: Similarity-driven Building-Block Inference using Graphs across States. - Xinyu Peng, Ziyang Zheng, Wenrui Dai, Nuoqian Xiao, Chenglin Li, Junni Zou, Hongkai Xiong:
Improving Diffusion Models for Inverse Problems Using Optimal Posterior Covariance. - Yongxin Guo, Xiaoying Tang, Tao Lin:
FedRC: Tackling Diverse Distribution Shifts Challenge in Federated Learning by Robust Clustering. - Zirui Liu, Jiayi Yuan, Hongye Jin, Shaochen Zhong, Zhaozhuo Xu, Vladimir Braverman, Beidi Chen, Xia Hu:
KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache. - Xiaokang Pan, Xingyu Li, Jin Liu, Tao Sun, Kai Sun, Lixing Chen, Zhe Qu:
Stability and Generalization for Stochastic Recursive Momentum-based Algorithms for (Strongly-)Convex One to K-Level Stochastic Optimizations. - Zhangyu Wang, Gengchen Mai, Krzysztof Janowicz, Ni Lao:
MC-GTA: Metric-Constrained Model-Based Clustering using Goodness-of-fit Tests with Autocorrelations. - Runjin Chen, Tong Zhao, Ajay Kumar Jaiswal, Neil Shah, Zhangyang Wang:
LLaGA: Large Language and Graph Assistant. - Eslam Zaher, Maciej Trzaskowski, Quan Nguyen, Fred Roosta:
Manifold Integrated Gradients: Riemannian Geometry for Feature Attribution. - Hao Ban, Kaiyi Ji:
Fair Resource Allocation in Multi-Task Learning. - David P. Woodruff, Taisuke Yasuda:
Coresets for Multiple ℓp Regression. - Sunayana Rane:
Position: The Reasonable Person Standard for AI. - Zhiruo Wang, Graham Neubig, Daniel Fried:
TroVE: Inducing Verifiable and Efficient Toolboxes for Solving Programmatic Tasks. - Shijie Lian, Ziyi Zhang, Hua Li, Wenjie Li, Laurence Tianruo Yang, Sam Kwong, Runmin Cong:
Diving into Underwater: Segment Anything Model Guided Underwater Salient Instance Segmentation and A Large-scale Dataset. - Kaining Ying, Fanqing Meng, Jin Wang, Zhiqian Li, Han Lin, Yue Yang, Hao Zhang, Wenbo Zhang, Yuqi Lin, Shuo Liu, Jiayi Lei, Quanfeng Lu, Runjian Chen, Peng Xu, Renrui Zhang, Haozhe Zhang, Peng Gao, Yali Wang, Yu Qiao, Ping Luo, Kaipeng Zhang, Wenqi Shao:
MMT-Bench: A Comprehensive Multimodal Benchmark for Evaluating Large Vision-Language Models Towards Multitask AGI. - Kevin Leahy, Makai Mann, Zachary Serlin:
Run-Time Task Composition with Safety Semantics. - Fangshuo Liao, Junhyung Lyle Kim, Cruz Barnum, Anastasios Kyrillidis:
On the Error-Propagation of Inexact Hotelling's Deflation for Principal Component Analysis. - Jun-Yi Hang, Min-Ling Zhang:
Binary Decomposition: A Problem Transformation Perspective for Open-Set Semi-Supervised Learning. - Langzhang Liang, Sunwoo Kim, Kijung Shin, Zenglin Xu, Shirui Pan, Yuan Qi:
Sign is Not a Remedy: Multiset-to-Multiset Message Passing for Learning on Heterophilic Graphs. - Guanjie Chen, Xinyu Zhao, Tianlong Chen, Yu Cheng:
MoE-RBench: Towards Building Reliable Language Models with Sparse Mixture-of-Experts. - Duy Le, Shaochen Zhong, Zirui Liu, Shuai Xu, Vipin Chaudhary, Kaixiong Zhou, Zhaozhuo Xu:
Knowledge Graphs Can be Learned with Just Intersection Features. - Zhisheng Zheng, Puyuan Peng, Ziyang Ma, Xie Chen, Eunsol Choi, David Harwath:
BAT: Learning to Reason about Spatial Sounds with Large Language Models. - Xueyu Hu, Ziyu Zhao, Shuang Wei, Ziwei Chai, Qianli Ma, Guoyin Wang, Xuwu Wang, Jing Su, Jingjing Xu, Ming Zhu, Yao Cheng, Jianbo Yuan, Jiwei Li, Kun Kuang, Yang Yang, Hongxia Yang, Fei Wu:
InfiAgent-DABench: Evaluating Agents on Data Analysis Tasks. - Mo Tiwari, Ryan Kang, Jaeyong Lee, Donghyun Lee, Christopher Piech, Sebastian Thrun, Ilan Shomorony, Martin Jinye Zhang:
Faster Maximum Inner Product Search in High Dimensions. - Han Bao, Ryuichiro Hataya, Ryo Karakida:
Self-attention Networks Localize When QK-eigenspectrum Concentrates. - Gabriel Stella, Dmitri Loguinov:
QORA: Zero-Shot Transfer via Interpretable Object-Relational Model Learning. - Yuhang Zhou, Zihua Zhao, Siyuan Du, Haolin Li, Jiangchao Yao, Ya Zhang, Yanfeng Wang:
Exploring Training on Heterogeneous Data with Mixture of Low-rank Adapters. - Yahong Yang, Juncai He:
Deeper or Wider: A Perspective from Optimal Generalization Error with Sobolev Loss. - Fanchen Bu, Hyeonsoo Jo, Soo Yong Lee, Sungsoo Ahn, Kijung Shin:
Tackling Prevalent Conditions in Unsupervised Combinatorial Optimization: Cardinality, Minimum, Covering, and More. - Taha Ameen, Bruce E. Hajek:
Robust Graph Matching when Nodes are Corrupt. - Jiin Woo, Laixi Shi, Gauri Joshi, Yuejie Chi:
Federated Offline Reinforcement Learning: Collaborative Single-Policy Coverage Suffices. - Saber Malekmohammadi, Yaoliang Yu, Yang Cao:
Noise-Aware Algorithm for Heterogeneous Differentially Private Federated Learning. - Emanuele Francazi, Aurélien Lucchi, Marco Baity-Jesi:
Initial Guessing Bias: How Untrained Networks Favor Some Classes. - Xue Yu, Muchen Li, Yan Leng, Renjie Liao:
Learning Latent Structures in Network Games via Data-Dependent Gated-Prior Graph Variational Autoencoders. - Haozhe Chen, Carl Vondrick, Chengzhi Mao:
SelfIE: Self-Interpretation of Large Language Model Embeddings. - Rahul Thapa, Bryan He, Magnus Ruud Kjær, Hyatt E. Moore IV, Gauri Ganjoo, Emmanuel Mignot, James Zou:
SleepFM: Multi-modal Representation Learning for Sleep Across Brain Activity, ECG and Respiratory Signals. - Luis A. Ortega Andrés, Simón Rodríguez Santana, Daniel Hernández-Lobato:
Variational Linearized Laplace Approximation for Bayesian Deep Learning. - Ye Tian, Haolei Weng, Yang Feng:
Towards the Theory of Unsupervised Federated Learning: Non-asymptotic Analysis of Federated EM Algorithms. - Xutao Ma, Chao Ning, Wenli Du:
Differentiable Distributionally Robust Optimization Layers. - Harrie Oosterhuis, Lijun Lyu, Avishek Anand:
Local Feature Selection without Label or Feature Leakage for Interpretable Machine Learning Predictions. - Achille Nazaret, Justin Hong, Elham Azizi, David M. Blei:
Stable Differentiable Causal Discovery. - Yuyang Deng, Mingda Qiao:
Collaborative Learning with Different Labeling Functions. - Yake Wei, Di Hu:
MMPareto: Boosting Multimodal Learning with Innocent Unimodal Assistance. - Hao Chen, Jindong Wang, Lei Feng, Xiang Li, Yidong Wang, Xing Xie, Masashi Sugiyama, Rita Singh, Bhiksha Raj:
A General Framework for Learning from Weak Supervision. - Behrad Moniri, Donghwan Lee, Hamed Hassani, Edgar Dobriban:
A Theory of Non-Linear Feature Learning with One Gradient Step in Two-Layer Neural Networks. - Tingting Dan, Ziquan Wei, Won Hwa Kim, Guorong Wu:
Exploring the Enigma of Neural Dynamics Through A Scattering-Transform Mixer Landscape for Riemannian Manifold. - Jiali Cui, Tian Han:
Learning Latent Space Hierarchical EBM Diffusion Models. - Ram Dyuthi Sristi, Ofir Lindenbaum, Shira Lifshitz, Maria Lavzin, Jackie Schiller, Gal Mishne, Hadas Benisty:
Contextual Feature Selection with Conditional Stochastic Gates. - Hanna Mazzawi, Javier Gonzalvo, Michael Wunder, Sammy Jerome, Benoit Dherin:
Deep Fusion: Efficient Network Training via Pre-trained Initializations. - Yifei Zhou, Andrea Zanette, Jiayi Pan, Sergey Levine, Aviral Kumar:
ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL. - Mo Yu, Qiujing Wang, Shunchi Zhang, Yisi Sang, Kangsheng Pu, Zekai Wei, Han Wang, Liyan Xu, Jing Li, Yue Yu, Jie Zhou:
Few-Shot Character Understanding in Movies as an Assessment to Meta-Learning of Theory-of-Mind. - Kwangjun Ahn, Zhiyu Zhang, Yunbum Kook, Yan Dai:
Understanding Adam Optimizer via Online Learning of Updates: Adam is FTRL in Disguise. - Jingyuan Wang, Perry Dong, Ying Jin, Ruohan Zhan, Zhengyuan Zhou:
Adaptively Learning to Select-Rank in Online Platforms. - Meng Ding, Kaiyi Ji, Di Wang, Jinhui Xu:
Understanding Forgetting in Continual Learning with Linear Regression. - Slobodan Mitrovic, Theodore Pan:
Faster Streaming and Scalable Algorithms for Finding Directed Dense Subgraphs in Large Graphs. - Magzhan Gabidolla, Arman Zharmagambetov, Miguel Á. Carreira-Perpiñán:
Beyond the ROC Curve: Classification Trees Using Cost-Optimal Curves, with Application to Imbalanced Datasets. - Amutheezan Sivagnanam, Ava Pettet, Hunter Lee, Ayan Mukhopadhyay, Abhishek Dubey, Aron Laszka:
Multi-Agent Reinforcement Learning with Hierarchical Coordination for Emergency Responder Stationing. - Juan Carlos Perdomo:
The Relative Value of Prediction in Algorithmic Decision Making. - Joshua Patrick Gardner, Simon Durand, Daniel Stoller, Rachel M. Bittner:
LLark: A Multimodal Instruction-Following Language Model for Music. - César Bravo, Alexander Kozachinskiy, Cristobal Rojas:
On dimensionality of feature vectors in MPNNs. - Doron Haviv, Russell Zhang Kunes, Thomas Dougherty, Cassandra Burdziak, Tal Nawy, Anna Gilbert, Dana Pe'er:
Wasserstein Wormhole: Scalable Optimal Transport Distance with Transformer. - Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash:
RLAIF vs. RLHF: Scaling Reinforcement Learning from Human Feedback with AI Feedback. - Changyu Gao, Andrew Lowy, Xingyu Zhou, Stephen J. Wright:
Private Heterogeneous Federated Learning Without a Trusted Server Revisited: Error-Optimal and Communication-Efficient Algorithms for Convex Losses. - Santiago Cortes-Gomez, Mateo Dulce Rubio, Carlos Miguel Patiño, Bryan Wilder:
Statistical Inference Under Constrained Selection Bias. - Vicente Balmaseda, Ying Xu, Yixin Cao, Nate Veldt:
Combinatorial Approximations for Cluster Deletion: Simpler, Faster, and Better. - Naveen Raman, Mateo Espinosa Zarlenga, Mateja Jamnik:
Understanding Inter-Concept Relationships in Concept-Based Models. - Hengyi Wang, Shiwei Tan, Hao Wang:
Probabilistic Conceptual Explainers: Trustworthy Conceptual Explanations for Vision Foundation Models. - Liran Ringel, Regev Cohen, Daniel Freedman, Michael Elad, Yaniv Romano:
Early Time Classification with Accumulated Accuracy Gap Control. - Asaf Cassel, Haipeng Luo, Aviv Rosenberg, Dmitry Sotnikov:
Near-Optimal Regret in Linear MDPs with Aggregate Bandit Feedback. - Eva Memmel, Clara Menzen, Jetze Schuurmans, Frederiek Wesel, Kim Batselier:
Position: Tensor Networks are a Valuable Asset for Green AI. - Shibo Jie, Yehui Tang, Ning Ding, Zhi-Hong Deng, Kai Han, Yunhe Wang:
Memory-Space Visual Prompting for Efficient Vision-Language Fine-Tuning. - Safwan Hossain, Tonghan Wang, Tao Lin, Yiling Chen, David C. Parkes, Haifeng Xu:
Multi-Sender Persuasion: A Computational Perspective. - Wenxiao Xiao, Jiuxiang Gu, Hongfu Liu:
Category-Aware Active Domain Adaptation. - Safwan Hossain, Andjela Mladenovic, Yiling Chen, Gauthier Gidel:
A Persuasive Approach to Combating Misinformation. - Dongkyun Kim, Mingi Kwon, Youngjung Uh:
Attribute Based Interpretable Evaluation Metrics for Generative Models. - Aleksandar Petrov, Philip Torr, Adel Bibi:
Prompting a Pretrained Transformer Can Be a Universal Approximator. - Junhong Shen, Neil A. Tenenholtz, James Brian Hall, David Alvarez-Melis, Nicolò Fusi:
Tag-LLM: Repurposing General-Purpose LLMs for Specialized Domains. - Jesse Friedbaum, Sudarshan Adiga, Ravi Tandon:
Trustworthy Actionable Perturbations. - Yu-Guan Hsieh, James Thornton, Eugène Ndiaye, Michal Klein, Marco Cuturi, Pierre Ablin:
Careful with that Scalpel: Improving Gradient Surgery with an EMA. - Gregor Bachmann, Vaishnavh Nagarajan:
The Pitfalls of Next-Token Prediction. - Zhihao Zhang, Alan Zhu, Lijie Yang, Yihua Xu, Lanting Li, Phitchaya Mangpo Phothilimthana, Zhihao Jia:
Accelerating Iterative Retrieval-augmented Language Model Serving with Speculation. - Yifei He, Shiji Zhou, Guojun Zhang, Hyokun Yun, Yi Xu, Belinda Zeng, Trishul Chilimbi, Han Zhao:
Robust Multi-Task Learning with Excess Risks. - Christopher Morris, Fabrizio Frasca, Nadav Dym, Haggai Maron, Ismail Ilkan Ceylan, Ron Levie, Derek Lim, Michael M. Bronstein, Martin Grohe, Stefanie Jegelka:
Position: Future Directions in the Theory of Graph Machine Learning. - Stelios Triantafyllou, Aleksa Sukovic, Debmalya Mandal, Goran Radanovic:
Agent-Specific Effects: A Causal Effect Propagation Analysis in Multi-Agent MDPs. - Ran Ben-Basat, Shay Vargaftik, Amit Portnoy, Gil Einziger, Yaniv Ben-Itzhak, Michael Mitzenmacher:
Accelerating Federated Learning with Quick Distributed Mean Estimation. - Chentong Wang, Yannan Qu, Zhangzhi Peng, Yukai Wang, Hongli Zhu, Dachuan Chen, Longxing Cao:
Proteus: Exploring Protein Structure Generation for Enhanced Designability and Efficiency. - Yotam Wolf, Noam Wies, Oshri Avnery, Yoav Levine, Amnon Shashua:
Fundamental Limitations of Alignment in Large Language Models. - Michael Komodromos, Marina Evangelou, Sarah Filippi:
Logistic Variational Bayes Revisited. - Abhimanyu Hans, Avi Schwarzschild, Valeriia Cherepanova, Hamid Kazemi, Aniruddha Saha, Micah Goldblum, Jonas Geiping, Tom Goldstein:
Spotting LLMs With Binoculars: Zero-Shot Detection of Machine-Generated Text. - Sebastian Gregor Gruber, Florian Buettner:
A Bias-Variance-Covariance Decomposition of Kernel Scores for Generative Models. - Nicola Mariella, Albert Akhriev, Francesco Tacchino, Christa Zoufal, Juan Carlos Gonzalez-Espitia, Benedek Harsanyi, Eugene Koskin, Ivano Tavernelli, Stefan Woerner, Marianna Rapsomaniki, Sergiy Zhuk, Jannis Born:
Quantum Theory and Application of Contextual Optimal Transport. - Christopher Hojny, Shiqiang Zhang, Juan S. Campos, Ruth Misener:
Verifying message-passing neural networks via topology-based bounds tightening. - Florian E. Dorner, Moritz Hardt:
Don't Label Twice: Quantity Beats Quality when Comparing Binary Classifiers on a Budget. - Pedro O. Pinheiro, Arian Rokkum Jamasb, Omar Mahmood, Vishnu Sresht, Saeed Saremi:
Structure-based drug design by denoising voxel grids. - Matteo Bettini, Ryan Kortvelesy, Amanda Prorok:
Controlling Behavioral Diversity in Multi-Agent Reinforcement Learning. - Dan Garber, Ben Kretzu:
Projection-Free Online Convex Optimization with Time-Varying Constraints. - Carmen Martin-Turrero, Maxence Bouvier, Manuel Breitenstein, Pietro Zanuttigh, Vincent Parret:
ALERT-Transformer: Bridging Asynchronous and Synchronous Machine Learning for Real-Time Event-based Spatio-Temporal Data. - Risheng Liu, Zhu Liu, Wei Yao, Shangzhi Zeng, Jin Zhang:
Moreau Envelope for Nonconvex Bi-Level Optimization: A Single-Loop and Hessian-Free Solution Strategy. - Jeongheon Oh, Kibok Lee:
On the Effectiveness of Supervision in Asymmetric Non-Contrastive Learning. - Masanobu Horie, Naoto Mitsume:
Graph Neural PDE Solvers with Conservation and Similarity-Equivariance. - Dehao Yuan, Cornelia Fermüller, Tahseen Rabbani, Furong Huang, Yiannis Aloimonos:
A Linear Time and Space Local Point Cloud Geometry Encoder via Vectorized Kernel Mixture (VecKM). - Xiuheng Wang, Ricardo Augusto Borsoi, Cédric Richard:
Non-parametric Online Change Point Detection on Riemannian Manifolds. - Shi-Sheng Huang, Guo Chen, Chen Li Heng, Hua Huang:
NeuralIndicator: Implicit Surface Reconstruction from Neural Indicator Priors. - Théo Cachet, Christopher R. Dance, Olivier Sigaud:
Bridging Environments and Language with Rendering Functions and Vision-Language Models. - Ziping Ma, Furong Xu, Jian Liu, Ming Yang, Qingpei Guo:
SyCoCa: Symmetrizing Contrastive Captioners with Attentive Masking for Multimodal Alignment. - Ryosuke Nagumo, Hironori Fujisawa:
Density Ratio Estimation with Doubly Strong Robustness. - Fenghao Lei, Long Yang, Shiting Wen, Zhixiong Huang, Zhiwang Zhang, Chaoyi Pang:
Langevin Policy for Safe Reinforcement Learning. - Yi Liu, Alexander Levis, Sharon-Lise T. Normand, Larry Han:
Multi-Source Conformal Inference Under Distribution Shift. - Zhenxing Ge, Zheng Xu, Tianyu Ding, Linjian Meng, Bo An, Wenbin Li, Yang Gao:
Safe and Robust Subgame Exploitation in Imperfect Information Games. - Yaozhong Gan, Renye Yan, Zhe Wu, Junliang Xing:
Reflective Policy Optimization. - Marta Catalano, Hugo Lavenant:
Hierarchical Integral Probability Metrics: A distance on random probability measures with low sample complexity. - Wei Xiong, Hanze Dong, Chenlu Ye, Ziqi Wang, Han Zhong, Heng Ji, Nan Jiang, Tong Zhang:
Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-constraint. - Yizun Lin, Yangyu Zhang, Zhao-Rong Lai, Cheng Li:
Autonomous Sparse Mean-CVaR Portfolio Optimization. - Akhil Kedia, Mohd Abbas Zaidi, Sushil Khyalia, Jungho Jung, Harshith Goka, Haejun Lee:
Transformers Get Stable: An End-to-End Signal Propagation Theory for Language Models. - Tianyi Bao, Qitian Wu, Zetian Jiang, Yiting Chen, Jiawei Sun, Junchi Yan:
Graph Out-of-Distribution Detection Goes Neighborhood Shaping. - Xiangming Gu, Xiaosen Zheng, Tianyu Pang, Chao Du, Qian Liu, Ye Wang, Jing Jiang, Min Lin:
Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast. - Marien Renaud, Jean Prost, Arthur Leclaire, Nicolas Papadakis:
Plug-and-Play image restoration with Stochastic deNOising REgularization. - Damien Teney, Jindong Wang, Ehsan Abbasnejad:
Selective Mixup Helps with Distribution Shifts, But Not (Only) because of Mixup. - Jianguo Huang, Huajun Xi, Linjun Zhang, Huaxiu Yao, Yue Qiu, Hongxin Wei:
Conformal Prediction for Deep Classifier via Label Ranking. - Martin Pawelczyk, Seth Neel, Himabindu Lakkaraju:
In-Context Unlearning: Language Models as Few-Shot Unlearners. - Guangyan Li, Yongqiang Tang, Wensheng Zhang:
LoRAP: Transformer Sub-Layers Deserve Differentiated Structured Compression for Large Language Models. - Xiaoyu Wen, Chenjia Bai, Kang Xu, Xudong Yu, Yang Zhang, Xuelong Li, Zhen Wang:
Contrastive Representation for Data Filtering in Cross-Domain Offline Reinforcement Learning. - Yaodong Yang, Guangyong Chen, Jianye Hao, Pheng-Ann Heng:
Sample-Efficient Multiagent Reinforcement Learning with Reset Replay. - Alexander Pan, Erik Jones, Meena Jagadeesan, Jacob Steinhardt:
Feedback Loops With Language Models Drive In-Context Reward Hacking. - Qingyu Yin, Xuzheng He, Xiang Zhuang, Yu Zhao, Jianhua Yao, Xiaoyu Shen, Qiang Zhang:
StableMask: Refining Causal Masking in Decoder-only Transformer. - Bin Pu, Xingguo Lv, Jiewen Yang, Guannan He, Xingbo Dong, Yiqun Lin, Shengli Li, Tan Ying, Fei Liu, Ming Chen, Zhe Jin, Kenli Li, Xiaomeng Li:
Unsupervised Domain Adaptation for Anatomical Structure Detection in Ultrasound Images. - Ziquan Liu, Yufei Cui, Yan Yan, Yi Xu, Xiangyang Ji, Xue Liu, Antoni B. Chan:
The Pitfalls and Promise of Conformal Inference Under Adversarial Attacks. - Hien Dang, Tho Tran Huu, Tan Minh Nguyen, Nhat Ho:
Neural Collapse for Cross-entropy Class-Imbalanced Learning with Unconstrained ReLU Features Model. - Weiming Liu, Xiaolin Zheng, Chaochao Chen, Jiahe Xu, Xinting Liao, Fan Wang, Yanchao Tan, Yew-Soon Ong:
Reducing Item Discrepancy via Differentially Private Robust Embedding Alignment for Privacy-Preserving Cross Domain Recommendation. - Songyang Gao, Qiming Ge, Wei Shen, Shihan Dou, Junjie Ye, Xiao Wang, Rui Zheng, Yicheng Zou, Zhi Chen, Hang Yan, Qi Zhang, Dahua Lin:
Linear Alignment: A Closed-form Solution for Aligning Human Preferences without Tuning and Feedback. - Xiaoyan Hu, Farzan Farnia, Ho-fung Leung:
An Information Theoretic Approach to Interaction-Grounded Learning. - Ze Cheng, Zhongkai Hao, Xiaoqiang Wang, Jianing Huang, Youjia Wu, Xudan Liu, Yiru Zhao, Songming Liu, Hang Su:
Reference Neural Operators: Learning the Smooth Dependence of Solutions of PDEs on Geometric Deformations. - Chengjie Wu, Hao Hu, Yiqin Yang, Ning Zhang, Chongjie Zhang:
Planning, Fast and Slow: Online Reinforcement Learning with Action-Free Offline Data via Multiscale Planners. - Chentao Cao, Zhun Zhong, Zhanke Zhou, Yang Liu, Tongliang Liu, Bo Han:
Envisioning Outlier Exposure by Large Language Models for Out-of-Distribution Detection. - Muyang Li, Xiaobo Xia, Runze Wu, Fengming Huang, Jun Yu, Bo Han, Tongliang Liu:
Towards Realistic Model Selection for Semi-supervised Learning. - Rui Wang, Elyssa F. Hofgard, Hang Gao, Robin Walters, Tess E. Smidt:
Discovering Symmetry Breaking in Physical Systems with Relaxed Group Convolution. - Youzhi Zhang, Bo An, Daniel Dajun Zeng:
DAG-Based Column Generation for Adversarial Team Games. - Yujia Zheng, Zeyu Tang, Yiwen Qiu, Bernhard Schölkopf, Kun Zhang:
Detecting and Identifying Selection Structure in Sequential Data. - Peng Wang, Li Shen, Zerui Tao, Shuaida He, Dacheng Tao:
Generalization Analysis of Stochastic Weight Averaging with General Sampling. - Junyuan Hong, Jinhao Duan, Chenhui Zhang, Zhangheng Li, Chulin Xie, Kelsey Lieberman, James Diffenderfer, Brian R. Bartoldson, Ajay Kumar Jaiswal, Kaidi Xu, Bhavya Kailkhura, Dan Hendrycks, Dawn Song, Zhangyang Wang, Bo Li:
Decoding Compressed Trust: Scrutinizing the Trustworthiness of Efficient LLMs Under Compression. - Jinhao Li, Haopeng Li, Sarah Monazam Erfani, Lei Feng, James Bailey, Feng Liu:
Visual-Text Cross Alignment: Refining the Similarity Score in Vision-Language Models. - Ryuichi Kanoh, Mahito Sugiyama:
Neural Tangent Kernels for Axis-Aligned Tree Ensembles. - Wei Jiang, Sifan Yang, Wenhao Yang, Yibo Wang, Yuanyu Wan, Lijun Zhang:
Projection-Free Variance Reduction Methods for Stochastic Constrained Multi-Level Compositional Optimization. - Xiaoqian Liu, Xingzhou Lou, Jianbin Jiao, Junge Zhang:
Position: Foundation Agents as the Paradigm Shift for Decision Making. - Jiacheng Zhang, Feng Liu, Dawei Zhou, Jingfeng Zhang, Tongliang Liu:
Improving Accuracy-robustness Trade-off via Pixel Reweighted Adversarial Training. - Shivanshu Gupta, Clemens Rosenbaum, Ethan R. Elenberg:
GistScore: Learning Better Representations for In-Context Example Selection with Gist Bottlenecks. - Matthew J. Holland:
Criterion Collapse and Loss Distribution Control. - Shi-ang Qi, Yakun Yu, Russell Greiner:
Conformalized Survival Distributions: A Generic Post-Process to Increase Calibration. - Tianjun Ke, Haoqun Cao, Feng Zhou:
Accelerating Convergence in Bayesian Few-Shot Classification. - Yue Wu, Tao Jin, Qiwei Di, Hao Lou, Farzad Farnoud, Quanquan Gu:
Borda Regret Minimization for Generalized Linear Dueling Bandits. - Mouxiang Chen, Chenghao Liu, Zemin Liu, Zhuo Li, Jianling Sun:
Identifiability Matters: Revealing the Hidden Recoverable Condition in Unbiased Learning to Rank. - Mengmeng Ma, Tang Li, Xi Peng:
Beyond the Federation: Topology-aware Federated Learning for Generalization to Unseen Clients. - Hyeonah Kim, Minsu Kim, Sungsoo Ahn, Jinkyoo Park:
Symmetric Replay Training: Enhancing Sample Efficiency in Deep Reinforcement Learning for Combinatorial Optimization. - S. Ashwin Hebbar, Sravan Kumar Ankireddy, Hyeji Kim, Sewoong Oh, Pramod Viswanath:
DeepPolar: Inventing Nonlinear Large-Kernel Polar Codes via Deep Learning. - Wei Ju, Zhengyang Mao, Siyu Yi, Yifang Qin, Yiyang Gu, Zhiping Xiao, Yifan Wang, Xiao Luo, Ming Zhang:
Hypergraph-enhanced Dual Semi-supervised Graph Classification. - Xiaoyue Zhang, Hanzhang Qin, Mabel C. Chou:
Online Resource Allocation with Non-Stationary Customers. - Yahav Bechavod:
Monotone Individual Fairness. - Zhiyu Zhang, David Bombara, Heng Yang:
Discounted Adaptive Online Learning: Towards Better Regularization. - Wenshuo Wang, Lucas Janson, Lihua Lei, Aaditya Ramdas:
Total Variation Floodgate for Variable Importance Inference in Classification. - Vincent Conitzer, Rachel Freedman, Jobst Heitzig, Wesley H. Holliday, Bob M. Jacobs, Nathan Lambert, Milan Mossé, Eric Pacuit, Stuart Russell, Hailey Schoelkopf, Emanuel Tewolde, William S. Zwicker:
Position: Social Choice Should Guide AI Alignment in Dealing with Diverse Human Feedback. - Xixun Lin, Wenxiao Zhang, Fengzhao Shi, Chuan Zhou, Lixin Zou, Xiangyu Zhao, Dawei Yin, Shirui Pan, Yanan Cao:
Graph Neural Stochastic Diffusion for Estimating Uncertainty in Node Classification. - Ming Jin, Yifan Zhang, Wei Chen, Kexin Zhang, Yuxuan Liang, Bin Yang, Jindong Wang, Shirui Pan, Qingsong Wen:
Position: What Can Large Language Models Tell Us about Time Series Analysis. - Xuheng Li, Heyang Zhao, Quanquan Gu:
Feel-Good Thompson Sampling for Contextual Dueling Bandits. - Ming Yang, Xiyuan Wei, Tianbao Yang, Yiming Ying:
Stability and Generalization of Stochastic Compositional Gradient Descent Algorithms. - Houshuang Chen, Yuchen He, Chihao Zhang:
On Interpolating Experts and Multi-Armed Bandits. - Paarth Neekhara, Shehzeen Samarah Hussain, Rafael Valle, Boris Ginsburg, Rishabh Ranjan, Shlomo Dubnov, Farinaz Koushanfar, Julian J. McAuley:
SelfVC: Voice Conversion With Iterative Refinement using Self Transformations. - Clément Pierquin, Aurélien Bellet, Marc Tommasi, Matthieu Boussard:
Rényi Pufferfish Privacy: General Additive Noise Mechanisms and Privacy Amplification by Iteration via Shift Reduction Lemmas. - Trenton Chang, Jenna Wiens:
From Biased Selective Labels to Pseudo-Labels: An Expectation-Maximization Framework for Learning from Biased Decisions. - Han Shen, Zhuoran Yang, Tianyi Chen:
Principled Penalty-based Methods for Bilevel Reinforcement Learning and RLHF. - Matteo Castiglioni, Andrea Celli, Christian Kroer:
Online Learning under Budget and ROI Constraints via Weak Adaptivity. - Yihan Wu, Zhengmian Hu, Junfeng Guo, Hongyang Zhang, Heng Huang:
A Resilient and Accessible Distribution-Preserving Watermark for Large Language Models. - Isabela Albuquerque, Jessica Schrouff, David Warde-Farley, Ali Taylan Cemgil, Sven Gowal, Olivia Wiles:
Evaluating Model Bias Requires Characterizing its Mistakes. - Shuai Tang, Steven Wu, Sergül Aydöre, Michael Kearns, Aaron Roth:
Membership Inference Attacks on Diffusion Models via Quantile Regression. - Shengyao Lu, Bang Liu, Keith G. Mills, Jiao He, Di Niu:
EiG-Search: Generating Edge-Induced Subgraphs for GNN Explanation in Linear Time. - Tao Wang, Sylvia L. Herbert, Sicun Gao:
Mollification Effects of Policy Gradient Methods. - Michel Ma, Tianwei Ni, Clement Gehring, Pierluca D'Oro, Pierre-Luc Bacon:
Do Transformer World Models Give Better Policy Gradients? - Minsu Kim, Joohwan Ko, Taeyoung Yun, Dinghuai Zhang, Ling Pan, Woochang Kim, Jinkyoo Park, Emmanuel Bengio, Yoshua Bengio:
Learning to Scale Logits for Temperature-Conditional GFlowNets. - Lu Yin, You Wu, Zhenyu Zhang, Cheng-Yu Hsieh, Yaqing Wang, Yiling Jia, Gen Li, Ajay Kumar Jaiswal, Mykola Pechenizkiy, Yi Liang, Michael Bendersky, Zhangyang Wang, Shiwei Liu:
Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity. - Bowen Jin, Hansi Zeng, Guoyin Wang, Xiusi Chen, Tianxin Wei, Ruirui Li, Zhengyang Wang, Zheng Li, Yang Li, Hanqing Lu, Suhang Wang, Jiawei Han, Xianfeng Tang:
Language Models as Semantic Indexers. - Haibo Yang, Peiwen Qiu, Prashant Khanduri, Minghong Fang, Jia Liu:
Understanding Server-Assisted Federated Learning in the Presence of Incomplete Client Participation. - Jayaraman J. Thiagarajan, Vivek Sivaraman Narayanaswamy, Puja Trivedi, Rushil Anirudh:
PAGER: Accurate Failure Characterization in Deep Regression Models. - Jerry Yao-Chieh Hu, Pei-Hsuan Chang, Haozheng Luo, Hong-Yu Chen, Weijian Li, Wei-Po Wang, Han Liu:
Outlier-Efficient Hopfield Layers for Large Transformer-Based Models. - Chun-Yin Huang, Kartik Srinivas, Xin Zhang, Xiaoxiao Li:
Overcoming Data and Model heterogeneities in Decentralized Federated Learning via Synthetic Anchors. - Benjamin Walker, Andrew D. McLeod, Tiexin Qin, Yichuan Cheng, Haoliang Li, Terry J. Lyons:
Log Neural Controlled Differential Equations: The Lie Brackets Make A Difference. - L. Elisa Celis, Amit Kumar, Nisheeth K. Vishnoi, Andrew Xu:
Centralized Selection with Preferences in the Presence of Biases. - Banghua Zhu, Michael I. Jordan, Jiantao Jiao:
Iterative Data Smoothing: Mitigating Reward Overfitting and Overoptimization in RLHF. - Yiyang Ma, Wenhan Yang, Jiaying Liu:
Correcting Diffusion-Based Perceptual Image Compression with Privileged End-to-End Decoder. - Heiner Kremer, Bernhard Schölkopf:
Geometry-Aware Instrumental Variable Regression. - Tyler Ingebrand, Amy Zhang, Ufuk Topcu:
Zero-Shot Reinforcement Learning via Function Encoders. - Emanuel Sommer, Lisa Wimmer, Theodore Papamarkou, Ludwig Bothmann, Bernd Bischl, David Rügamer:
Connecting the Dots: Is Mode-Connectedness the Key to Feasible Sample-Based Inference in Bayesian Neural Networks? - Keqiang Yan, Alexandra Saxton, Xiaofeng Qian, Xiaoning Qian, Shuiwang Ji:
A Space Group Symmetry Informed Network for O(3) Equivariant Crystal Tensor Prediction. - Minsik Cho, Mohammad Rastegari, Devang Naik:
KV-Runahead: Scalable Causal LLM Inference by Parallel Key-Value Cache Generation. - Bhrij Patel, Wesley A. Suttle, Alec Koppel, Vaneet Aggarwal, Brian M. Sadler, Dinesh Manocha, Amrit S. Bedi:
Towards Global Optimality for Practical Average Reward Reinforcement Learning without Mixing Time Oracles. - Kwangjun Ahn, Ali Jadbabaie, Suvrit Sra:
How to Escape Sharp Minima with Random Perturbations. - Pranav Singh Chib, Pravendra Singh:
Enhancing Trajectory Prediction through Self-Supervised Waypoint Distortion Prediction. - Rafal Karczewski, Amauri H. Souza, Vikas Garg:
On the Generalization of Equivariant Graph Neural Networks. - Mike Laszkiewicz, Jonas Ricker, Johannes Lederer, Asja Fischer:
Single-Model Attribution of Generative Models Through Final-Layer Inversion. - Anish Dhir, Samuel Power, Mark van der Wilk:
Bivariate Causal Discovery using Bayesian Model Selection. - Robin San Roman, Pierre Fernandez, Hady Elsahar, Alexandre Défossez, Teddy Furon, Tuan Tran:
Proactive Detection of Voice Cloning with Localized Watermarking. - Jingyu Liu, Huayi Tang, Yong Liu:
Perfect Alignment May be Poisonous to Graph Contrastive Learning. - Philip Schär, Michael Habeck, Daniel Rudolf:
Parallel Affine Transformation Tuning of Markov Chain Monte Carlo. - Ao Zhang, Yuan Yao, Wei Ji, Zhiyuan Liu, Tat-Seng Chua:
NExT-Chat: An LMM for Chat, Detection and Segmentation. - Diwen Wan, Ruijie Lu, Gang Zeng:
Superpoint Gaussian Splatting for Real-Time High-Fidelity Dynamic Scene Reconstruction. - Hyuna Cho, Jaeyoon Sim, Guorong Wu, Won Hwa Kim:
Neurodegenerative Brain Network Classification via Adaptive Diffusion with Temporal Regularization. - Qingyuan Wu, Simon Sinong Zhan, Yixuan Wang, Yuhui Wang, Chung-Wei Lin, Chen Lv, Qi Zhu, Jürgen Schmidhuber, Chao Huang:
Boosting Reinforcement Learning with Strongly Delayed Feedback Through Auxiliary Short Delays. - Yang Chen, Cong Fang, Zhouchen Lin, Bing Liu:
Relational Learning in Pre-Trained Models: A Theory from Hypergraph Recovery Perspective. - Long Ma, Yuanfei Wang, Fangwei Zhong, Song-Chun Zhu, Yizhou Wang:
Fast Peer Adaptation with Context-aware Exploration. - Yuyang Qian, Peng Zhao, Yu-Jie Zhang, Masashi Sugiyama, Zhi-Hua Zhou:
Efficient Non-stationary Online Learning by Wavelets with Applications to Online Distribution Shift Adaptation. - Zhiming Li, Yushi Cao, Yan Zheng, Xu Liu, Bozhi Wu, Tianlin Li, Xiufeng Xu, Junzhe Jiang, Yon Shin Teo, Shang-Wei Lin, Yang Liu:
Improving Neural Logic Machines via Failure Reflection. - Jialong Guo, Xinghao Chen, Yehui Tang, Yunhe Wang:
SLAB: Efficient Transformers with Simplified Linear Attention and Progressive Re-parameterized Batch Normalization. - David Steinmann, Wolfgang Stammer, Felix Friedrich, Kristian Kersting:
Learning to Intervene on Concept Bottlenecks. - Yongtuo Liu, Sara Magliacane, Miltiadis Kofinas, Stratis Gavves:
Amortized Equation Discovery in Hybrid Dynamical Systems. - Yu Dai, Junchen Shen, Zijie Zhai, Danlin Liu, Jingyang Chen, Yu Sun, Ping Li, Jie Zhang, Kai Zhang:
High-Order Contrastive Learning with Fine-grained Comparative Levels for Sparse Ordinal Tensor Completion. - Ruo-Chun Tzeng, Naoto Ohsaka, Kaito Ariu:
Matroid Semi-Bandits in Sublinear Time. - Jiwon Song, Kyungseok Oh, Taesu Kim, Hyungjun Kim, Yulhwa Kim, Jae-Joon Kim:
SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks. - Evgenii Egorov, Riccardo Valperga, Stratis Gavves:
Ai-sampler: Adversarial Learning of Markov kernels with involutive maps. - Edward Pearce-Crump, William J. Knottenbelt:
Graph Automorphism Group Equivariant Neural Networks. - Mingxuan Xia, Zenan Huang, Runze Wu, Gengyu Lyu, Junbo Zhao, Gang Chen, Haobo Wang:
Unbiased Multi-Label Learning from Crowdsourced Annotations. - Xinsong Ma, Xin Zou, Weiwei Liu:
A Provable Decision Rule for Out-of-Distribution Detection. - Hoang Anh Dung, Cuong Pham, Trung Le, Jianfei Cai, Thanh-Toan Do:
Sharpness-Aware Data Generation for Zero-shot Quantization. - Junnan Liu, Qianren Mao, Weifeng Jiang, Jianxin Li:
KnowFormer: Revisiting Transformers for Knowledge Graph Reasoning. - Yi Xiao, Lei Bai, Wei Xue, Hao Chen, Kun Chen, Kang Chen, Tao Han, Wanli Ouyang:
Towards a Self-contained Data-driven Global Weather Forecasting Framework. - Shaojie Li, Bowei Zhu, Yong Liu:
Algorithmic Stability Unleashed: Generalization Bounds with Unbounded Losses. - Chenxiao Yang, Qitian Wu, David Wipf, Ruoyu Sun, Junchi Yan:
How Graph Neural Networks Learn: Lessons from Training Dynamics. - Hoyong Choi, Nohyun Ki, Hye Won Chung:
BWS: Best Window Selection Based on Sample Scores for Data Pruning across Broad Ranges. - Cong Geng, Tian Han, Peng-Tao Jiang, Hao Zhang, Jinwei Chen, Søren Hauberg, Bo Li:
Improving Adversarial Energy-Based Model via Diffusion Process. - Zhengqi Pei, Anran Zhang, Shuhui Wang, Qingming Huang:
Modeling Language Tokens as Functionals of Semantic Fields. - Fuzhao Xue, Zian Zheng, Yao Fu, Jinjie Ni, Zangwei Zheng, Wangchunshu Zhou, Yang You:
OpenMoE: An Early Effort on Open Mixture-of-Experts Language Models. - Miguel Liu-Schiaffini, Julius Berner, Boris Bonev, Thorsten Kurth, Kamyar Azizzadenesheli, Anima Anandkumar:
Neural Operators with Localized Integral and Differential Kernels. - Zhi Zhou, Ming Yang, Jiang-Xin Shi, Lan-Zhe Guo, Yufeng Li:
DeCoOp: Robust Prompt Tuning with Out-of-Distribution Detection. - Haoyu Zhen, Xiaowen Qiu, Peihao Chen, Jincheng Yang, Xin Yan, Yilun Du, Yining Hong, Chuang Gan:
3D-VLA: A 3D Vision-Language-Action Generative World Model. - Cheng Tan, Zhangyang Gao, Hanqun Cao, Xingran Chen, Ge Wang, Lirong Wu, Jun Xia, Jiangbin Zheng, Stan Z. Li:
Deciphering RNA Secondary Structure Prediction: A Probabilistic K-Rook Matching Perspective. - Boxin Wang, Wei Ping, Lawrence McAfee, Peng Xu, Bo Li, Mohammad Shoeybi, Bryan Catanzaro:
InstructRetro: Instruction Tuning post Retrieval-Augmented Pretraining. - Shuai Zhang, Heshan Devaka Fernando, Miao Liu, Keerthiram Murugesan, Songtao Lu, Pin-Yu Chen, Tianyi Chen, Meng Wang:
SF-DQN: Provable Knowledge Transfer using Successor Feature for Deep Reinforcement Learning. - Weixin Chen, Dawn Song, Bo Li:
GRATH: Gradual Self-Truthifying for Large Language Models. - Hong Chul Nam, Julius Berner, Anima Anandkumar:
Solving Poisson Equations using Neural Walk-on-Spheres. - Sili Huang, Jifeng Hu, Hechang Chen, Lichao Sun, Bo Yang:
In-Context Decision Transformer: Reinforcement Learning via Hierarchical Chain-of-Thought. - Zhiyong Yang, Qianqian Xu, Zitai Wang, Sicong Li, Boyu Han, Shilong Bao, Xiaochun Cao, Qingming Huang:
Harnessing Hierarchical Label Distribution Variations in Test Agnostic Long-tail Recognition. - Luca Grillotti, Maxence Faldor, Borja G. León, Antoine Cully:
Quality-Diversity Actor-Critic: Learning High-Performing and Diverse Behaviors via Value and Successor Features Critics. - Adeesh Kolluru, John R. Kitchin:
AdsorbDiff: Adsorbate Placement via Conditional Denoising Diffusion. - Zhentao Tan, Yadong Mu:
Learning Solution-Aware Transformers for Efficiently Solving Quadratic Assignment Problem. - Andrew Lowy, Jonathan R. Ullman, Stephen J. Wright:
How to Make the Gradients Small Privately: Improved Rates for Differentially Private Non-Convex Optimization. - Sunil Hwang, Jaehong Yoon, Youngwan Lee, Sung Ju Hwang:
EVEREST: Efficient Masked Video Autoencoder by Removing Redundant Spatiotemporal Tokens. - Safwan Hossain, Yiling Chen:
Equilibrium of Data Markets with Externality. - Mingjia Huo, Sai Ashish Somayajula, Youwei Liang, Ruisi Zhang, Farinaz Koushanfar, Pengtao Xie:
Token-Specific Watermarking with Enhanced Detectability and Semantic Coherence for Large Language Models. - Sudarshan Babu, Richard Liu, Avery Zhou, Michael Maire, Greg Shakhnarovich, Rana Hanocka:
HyperFields: Towards Zero-Shot Generation of NeRFs from Text. - Wenke Huang, Zekun Shi, Mang Ye, He Li, Bo Du:
Self-Driven Entropy Aggregation for Byzantine-Robust Heterogeneous Federated Learning. - Junkai Zhang, Weitong Zhang, Dongruo Zhou, Quanquan Gu:
Uncertainty-Aware Reward-Free Exploration with General Function Approximation. - Heewoong Choi, Sangwon Jung, Hongjoon Ahn, Taesup Moon:
Listwise Reward Estimation for Offline Preference-based Reinforcement Learning. - Levi E. Lingsch, Mike Yan Michelis, Emmanuel de Bézenac, Sirani M. Perera, Robert K. Katzschmann, Siddhartha Mishra:
Beyond Regular Grids: Fourier-Based Neural Operators on Arbitrary Domains. - Jongha Jon Ryu, Xiangxiang Xu, Hasan Sabri Melihcan Erol, Yuheng Bu, Lizhong Zheng, Gregory W. Wornell:
Operator SVD with Neural Networks via Nested Low-Rank Approximation. - Hannes Stärk, Bowen Jing, Chenyu Wang, Gabriele Corso, Bonnie Berger, Regina Barzilay, Tommi S. Jaakkola:
Dirichlet Flow Matching with Applications to DNA Sequence Design. - Lincan Cai, Shuang Li, Wenxuan Ma, Jingxuan Kang, Binhui Xie, Zixun Sun, Chengwei Zhu:
Enhancing Cross-Modal Fine-Tuning with Gradually Intermediate Modality Generation. - Souradip Chakraborty, Amrit S. Bedi, Sicheng Zhu, Bang An, Dinesh Manocha, Furong Huang:
Position: On the Possibilities of AI-Generated Text Detection. - Lin Zheng, Jianbo Yuan, Zhi Zhang, Hongxia Yang, Lingpeng Kong:
Self-Infilling Code Generation. - Yuda Song, Lili Wu, Dylan J. Foster, Akshay Krishnamurthy:
Rich-Observation Reinforcement Learning with Continuous Latent Dynamics. - Lingxiao Yang, Shutong Ding, Yifan Cai, Jingyi Yu, Jingya Wang, Ye Shi:
Guidance with Spherical Gaussian Constraint for Conditional Diffusion. - Fusheng Liu, Qianxiao Li:
From Generalization Analysis to Optimization Designs for State Space Models. - Bin Lu, Ze Zhao, Luyu Han, Xiaoying Gan, Yuntao Zhou, Lei Zhou, Luoyi Fu, Xinbing Wang, Chenghu Zhou, Jing Zhang:
OxyGenerator: Reconstructing Global Ocean Deoxygenation Over a Century with Deep Learning. - Yechan Kim, Hwijoon Lim, Dongsu Han:
Scaling Beyond the GPU Memory Limit for Large Mixture-of-Experts Model Training. - Yunke Wang, Minjing Dong, Yukun Zhao, Bo Du, Chang Xu:
Imitation Learning from Purified Demonstrations. - Tianyu Xie, Yu Zhu, Longlin Yu, Tong Yang, Ziheng Cheng, Shiyue Zhang, Xiangyu Zhang, Cheng Zhang:
Reflected Flow Matching. - Feng Hong, Yueming Lyu, Jiangchao Yao, Ya Zhang, Ivor W. Tsang, Yanfeng Wang:
Diversified Batch Selection for Training Acceleration. - Didi Zhu, Zhongyi Sun, Zexi Li, Tao Shen, Ke Yan, Shouhong Ding, Chao Wu, Kun Kuang:
Model Tailor: Mitigating Catastrophic Forgetting in Multi-modal Large Language Models. - Li Ding, Wen Fei, Yuyang Huang, Shuangrui Ding, Wenrui Dai, Chenglin Li, Junni Zou, Hongkai Xiong:
AMPA: Adaptive Mixed Precision Allocation for Low-Bit Integer Training. - Yiran Ding, Li Lyna Zhang, Chengruidong Zhang, Yuanyuan Xu, Ning Shang, Jiahang Xu, Fan Yang, Mao Yang:
LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens. - Hao Wu, Huiyuan Wang, Kun Wang, Weiyan Wang, Changan Ye, Yangyu Tao, Chong Chen, Xian-Sheng Hua, Xiao Luo:
Prometheus: Out-of-distribution Fluid Dynamics Modeling with Disentangled Graph ODE. - Mengfei Xia, Yujun Shen, Ceyuan Yang, Ran Yi, Wenping Wang, Yong-Jin Liu:
SMaRt: Improving GANs with Score Matching Regularity. - Andrew C. Cullen, Shijie Liu, Paul Montague, Sarah Monazam Erfani, Benjamin I. P. Rubinstein:
Et Tu Certifications: Robustness Certificates Yield Better Adversarial Examples. - Itamar Zimerman, Lior Wolf:
Viewing Transformers Through the Lens of Long Convolutions Layers. - Itamar Zimerman, Moran Baruch, Nir Drucker, Gilad Ezov, Omri Soceanu, Lior Wolf:
Converting Transformers to Polynomial Form for Secure Inference Over Homomorphic Encryption. - Sheng Xu, Mingze Wang, Yanjing Li, Mingbao Lin, Baochang Zhang, David S. Doermann, Xiao Sun:
Learning 1-Bit Tiny Object Detector with Discriminative Feature Refinement. - Zecheng Tang, Chenfei Wu, Zekai Zhang, Minheng Ni, Shengming Yin, Yu Liu, Zhengyuan Yang, Lijuan Wang, Zicheng Liu, Juntao Li, Nan Duan:
StrokeNUWA - Tokenizing Strokes for Vector Graphic Synthesis. - Yudong Gao, Honglong Chen, Peng Sun, Zhe Li, Junjian Li, Huajie Shao:
Energy-based Backdoor Defense without Task-Specific Samples and Model Retraining. - Peng Wang, Huikang Liu, Druv Pai, Yaodong Yu, Zhihui Zhu, Qing Qu, Yi Ma:
A Global Geometric Analysis of Maximal Coding Rate Reduction. - Xiaoqiang Lin, Zhaoxuan Wu, Zhongxiang Dai, Wenyang Hu, Yao Shu, See-Kiong Ng, Patrick Jaillet, Bryan Kian Hsiang Low:
Use Your INSTINCT: INSTruction optimization for LLMs usIng Neural bandits Coupled with Transformers. - Yuni Lai, Bailin Pan, Kaihuang Chen, Yancheng Yuan, Kai Zhou:
Collective Certified Robustness against Graph Injection Attacks. - Xiang Cheng, Yuxin Chen, Suvrit Sra:
Transformers Implement Functional Gradient Descent to Learn Non-Linear Functions In Context. - Jae-Hong Lee, Joon-Hyuk Chang:
Stationary Latent Weight Inference for Unreliable Observations from Online Test-Time Adaptation. - Hongduan Tian, Feng Liu, Tongliang Liu, Bo Du, Yiu-ming Cheung, Bo Han:
MOKD: Cross-domain Finetuning for Few-shot Classification via Maximizing Optimized Kernel Dependence. - Jing Xu, Jingzhao Zhang:
Random Masking Finds Winning Tickets for Parameter Efficient Fine-tuning. - Haotian Lin, Matthew Reimherr:
On Hypothesis Transfer Learning of Functional Linear Models. - Licong Lin, Tijana Zrnic:
Plug-in Performative Optimization. - Zhengyu Zhou, Weiwei Liu:
Sequential Kernel Goodness-of-fit Testing. - Weiming Zhuang, Jian Xu, Chen Chen, Jingtao Li, Lingjuan Lyu:
COALA: A Practical and Vision-Centric Federated Learning Platform. - Hyunouk Ko, Xiaoming Huo:
Universal Consistency of Wide and Deep ReLU Neural Networks and Minimax Optimal Convergence Rates for Kolmogorov-Donoho Optimal Function Classes. - Jeevan Thapa, Rui Li:
Bayesian Adaptation of Network Depth and Width for Continual Learning. - Zhining Liu, Ruizhong Qiu, Zhichen Zeng, Hyunsik Yoo, David Zhou, Zhe Xu, Yada Zhu, Kommy Weldemariam, Jingrui He, Hanghang Tong:
Class-Imbalanced Graph Learning without Class Rebalancing. - Ziqin Chen, Yongqiang Wang:
Locally Differentially Private Decentralized Stochastic Bilevel Optimization with Guaranteed Convergence Accuracy. - Amir Bar, Florian Bordes, Assaf Shocher, Mido Assran, Pascal Vincent, Nicolas Ballas, Trevor Darrell, Amir Globerson, Yann LeCun:
Stochastic positional embeddings improve masked image modeling. - Amirmojtaba Sabour, Sanja Fidler, Karsten Kreis:
Align Your Steps: Optimizing Sampling Schedules in Diffusion Models. - Hong Jun Jeon, Jason D. Lee, Qi Lei, Benjamin Van Roy:
An Information-Theoretic Analysis of In-Context Learning. - Songtao Liu, Jinghui Chen, Tianfan Fu, Lu Lin, Marinka Zitnik, Dinghao Wu:
Graph Adversarial Diffusion Convolution. - Homanga Bharadhwaj:
Position: Scaling Simulation is Neither Necessary Nor Sufficient for In-the-Wild Robot Manipulation. - Kirill Vishniakov, Zhiqiang Shen, Zhuang Liu:
ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy. - Ali Behrouz, Parsa Delavari, Farnoosh Hashemi:
Unsupervised Representation Learning of Brain Activity via Bridging Voxel Activity and Functional Connectivity. - Sungmin Cha, Kyunghyun Cho, Taesup Moon:
Regularizing with Pseudo-Negatives for Continual Self-Supervised Learning. - Hannes Stärk, Bowen Jing, Regina Barzilay, Tommi S. Jaakkola:
Harmonic Self-Conditioned Flow Matching for joint Multi-Ligand Docking and Binding Site Design. - Chenhui Xu, Fuxun Yu, Zirui Xu, Nathan Inkawhich, Xiang Chen:
Out-of-Distribution Detection via Deep Multi-Comprehension Ensemble. - Ka-Hei Hui, Aditya Sanghi, Arianna Rampini, Kamal Rahimi Malekshan, Zhengzhe Liu, Hooman Shayani, Chi-Wing Fu:
Make-A-Shape: a Ten-Million-scale 3D Shape Model. - Jaejun Lee, Minsung Hwang, Joyce Jiyoung Whang:
PAC-Bayesian Generalization Bounds for Knowledge Graph Representation Learning. - Zhongzhan Huang, Mingfu Liang, Shanshan Zhong, Liang Lin:
AttNS: Attention-Inspired Numerical Solving For Limited Data Scenarios. - Aleksandr Beznosikov, David Dobre, Gauthier Gidel:
Sarah Frank-Wolfe: Methods for Constrained Optimization with Best Rates and Practical Features. - Yifei Ming, Yixuan Li:
Understanding Retrieval-Augmented Task Adaptation for Vision-Language Models. - Shiyao Li, Xuefei Ning, Luning Wang, Tengxuan Liu, Xiangsheng Shi, Shengen Yan, Guohao Dai, Huazhong Yang, Yu Wang:
Evaluating Quantized Large Language Models. - Elen Vardanyan, Sona Hunanyan, Tigran Galstyan, Arshak Minasyan, Arnak S. Dalalyan:
Statistically Optimal Generative Modeling with Maximum Deviation from the Empirical Distribution. - Rongjie Huang, Ruofan Hu, Yongqi Wang, Zehan Wang, Xize Cheng, Ziyue Jiang, Zhenhui Ye, Dongchao Yang, Luping Liu, Peng Gao, Zhou Zhao:
InstructSpeech: Following Speech Editing Instructions via Large Language Models. - Yuxuan Bian, Xuan Ju, Jiangtong Li, Zhijian Xu, Dawei Cheng, Qiang Xu:
Multi-Patch Prediction: Adapting Language Models for Time Series Representation Learning. - Hemanth Saratchandran, Sameera Ramasinghe, Violetta Shevchenko, Alexander Long, Simon Lucey:
A sampling theory perspective on activations for implicit neural representations. - Khai Nguyen, Shujian Zhang, Tam Le, Nhat Ho:
Sliced Wasserstein with Random-Path Projecting Directions. - Daeun Lee, Jaehong Yoon, Sung Ju Hwang:
BECoTTA: Input-dependent Online Blending of Experts for Continual Test-time Adaptation. - Xiaoxuan Liu, Lanxiang Hu, Peter Bailis, Alvin Cheung, Zhijie Deng, Ion Stoica, Hao Zhang:
Online Speculative Decoding. - Hanlin Zhang, Benjamin L. Edelman, Danilo Francati, Daniele Venturi, Giuseppe Ateniese, Boaz Barak:
Watermarks in the Sand: Impossibility of Strong Watermarking for Language Models. - Yuxiao Wen, Arthur Jacot:
Which Frequencies do CNNs Need? Emergent Bottleneck Structure in Feature Learning. - Jiang-Xin Shi, Tong Wei, Zhi Zhou, Jie-Jing Shao, Xin-Yan Han, Yufeng Li:
Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts. - Yuda Song, Drew Bagnell, Aarti Singh:
Hybrid Reinforcement Learning from Offline Observation Alone. - Xiaoyu Zhou, Xingjian Ran, Yajiao Xiong, Jinlin He, Zhiwei Lin, Yongtao Wang, Deqing Sun, Ming-Hsuan Yang:
GALA3D: Towards Text-to-3D Complex Scene Generation via Layout-guided Generative Gaussian Splatting. - Ling Yang, Zhaochen Yu, Chenlin Meng, Minkai Xu, Stefano Ermon, Bin Cui:
Mastering Text-to-Image Diffusion: Recaptioning, Planning, and Generating with Multimodal LLMs. - Dyah Adila, Shuai Zhang, Boran Han, Bernie Wang:
Discovering Bias in Latent Space: An Unsupervised Debiasing Approach. - Lei Zhao, Mengdi Wang, Yu Bai:
Is Inverse Reinforcement Learning Harder than Standard Reinforcement Learning? A Theoretical Perspective. - Xi Han, Fei Hou, Hong Qin:
UGrid: An Efficient-And-Rigorous Neural Multigrid Solver for Linear PDEs. - Yunsheng Tian, Ane Zuniga, Xinwei Zhang, Johannes P. Dürholt, Payel Das, Jie Chen, Wojciech Matusik, Mina Konakovic Lukovic:
Boundary Exploration for Bayesian Optimization With Unknown Physical Constraints. - Pingchuan Ma, Tsun-Hsuan Wang, Minghao Guo, Zhiqing Sun, Joshua B. Tenenbaum, Daniela Rus, Chuang Gan, Wojciech Matusik:
LLM and Simulation as Bilevel Optimizers: A New Paradigm to Advance Physical Scientific Discovery. - Pablo Lemos, Nikolay Malkin, Will Handley, Yoshua Bengio, Yashar Hezaveh, Laurence Perreault Levasseur:
Improving Gradient-Guided Nested Sampling for Posterior Inference. - Binchi Zhang, Zihan Chen, Cong Shen, Jundong Li:
Verification of Machine Unlearning is Fragile. - Song Bian, Xiating Ouyang, Zhiwei Fan, Paraschos Koutris:
Naive Bayes Classifiers over Missing Data: Decision and Poisoning. - Zachary Robertson, Sanmi Koyejo:
Implicit Regularization in Feedback Alignment Learning Mechanisms for Neural Networks. - Orin Levy, Asaf B. Cassel, Alon Cohen, Yishay Mansour:
Eluder-based Regret for Stochastic Contextual MDPs. - Yash P. Patel, Declan McNamara, Jackson Loper, Jeffrey Regier, Ambuj Tewari:
Variational Inference with Coverage Guarantees in Simulation-Based Inference. - Yefan Zhou, Jianlong Chen, Qinxue Cao, Konstantin Schürholt, Yaoqing Yang:
MD tree: a model-diagnostic tree grown on loss landscape. - Yuji Roh, Qingyun Liu, Huan Gui, Zhe Yuan, Yujin Tang, Steven Euijong Whang, Liang Liu, Shuchao Bi, Lichan Hong, Ed H. Chi, Zhe Zhao:
LEVI: Generalizable Fine-tuning via Layer-wise Ensemble of Different Views. - Alexandre Ramé, Nino Vieillard, Léonard Hussenot, Robert Dadashi, Geoffrey Cideron, Olivier Bachem, Johan Ferret:
WARM: On the Benefits of Weight Averaged Reward Models. - Wei Huang, Yangdong Liu, Haotong Qin, Ying Li, Shiming Zhang, Xianglong Liu, Michele Magno, Xiaojuan Qi:
BiLLM: Pushing the Limit of Post-Training Quantization for LLMs. - Eran Malach:
Auto-Regressive Next-Token Predictors are Universal Learners. - Chenxu Zhao, Wei Qian, Yangyi Li, Aobo Chen, Mengdi Huai:
Rethinking Adversarial Robustness in the Context of the Right to be Forgotten. - Tijin Yan, Hengheng Gong, Yongping He, Yufeng Zhan, Yuanqing Xia:
Probabilistic Time Series Modeling with Decomposable Denoising Diffusion Model. - Amit Dhurandhar, Swagatam Haldar, Dennis Wei, Karthikeyan Natesan Ramamurthy:
Trust Regions for Explanations via Black-Box Probabilistic Certification. - Wei Deng, Weijian Luo, Yixin Tan, Marin Bilos, Yu Chen, Yuriy Nevmyvaka, Ricky T. Q. Chen:
Variational Schrödinger Diffusion Models. - Ziyi Liu, Idan Attias, Daniel M. Roy:
Causal Bandits: The Pareto Optimal Frontier of Adaptivity, a Reduction to Linear Bandits, and Limitations around Unknown Marginals. - Young Wu, Jeremy McMahan, Yiding Chen, Yudong Chen, Jerry Zhu, Qiaomin Xie:
Minimally Modifying a Markov Game to Achieve Any Nash Equilibrium and Value. - Jordi Grau-Moya, Tim Genewein, Marcus Hutter, Laurent Orseau, Grégoire Delétang, Elliot Catt, Anian Ruoss, Li Kevin Wenliang, Christopher Mattern, Matthew Aitchison, Joel Veness:
Learning Universal Predictors. - Gabriel Arpino, Xiaoqi Liu, Ramji Venkataramanan:
Inferring Change Points in High-Dimensional Linear Regression via Approximate Message Passing. - Linus Bleistein, Van-Tuan Nguyen, Adeline Fermanian, Agathe Guilloux:
Dynamic Survival Analysis with Controlled Latent States. - Peter Schmitt-Förster, Tobias Sutter:
Regularized Q-learning through Robust Averaging. - Yatin Dandi, Emanuele Troiani, Luca Arnaboldi, Luca Pesce, Lenka Zdeborová, Florent Krzakala:
The Benefits of Reusing Batches for Gradient Descent in Two-Layer Networks: Breaking the Curse of Information and Leap Exponents. - Rémi Leluc, Aymeric Dieuleveut, François Portier, Johan Segers, Aigerim Zhuman:
Sliced-Wasserstein Estimation with Spherical Harmonics as Control Variates. - Shuyu Cheng, Yibo Miao, Yinpeng Dong, Xiao Yang, Xiao-Shan Gao, Jun Zhu:
Efficient Black-box Adversarial Attacks via Bayesian Optimization Guided by a Function Prior. - Jiahao Liu, Yipeng Zhou, Di Wu, Miao Hu, Mohsen Guizani, Quan Z. Sheng:
FedLMT: Tackling System Heterogeneity of Federated Learning via Low-Rank Model Training with Theoretical Guarantees. - Manuel Brenner, Florian Hess, Georgia Koppe, Daniel Durstewitz:
Integrating Multimodal Data for Joint Generative Modeling of Complex Dynamics. - Alexandra Hotti, Oskar Kviman, Ricky Molén, Víctor Elvira, Jens Lagergren:
Efficient Mixture Learning in Black-Box Variational Inference. - Zhuojun Chen, Xinghua Zhu, Dongzhe Su, Justin C. I. Chuang:
Stacking Deep Set Networks and Pooling by Quantiles. - Berken Utku Demirel, Christian Holz:
An Unsupervised Approach for Periodic Source Detection in Time Series. - Yunhao Ni, Yuxin Guo, Junlong Jia, Lei Huang:
On the Nonlinearity of Layer Normalization. - Yaxin Li, Qi Xu, Jiangrong Shen, Hongming Xu, Long Chen, Gang Pan:
Towards efficient deep spiking neural networks construction with spiking activity based pruning. - Yanxi Chen, Xuchen Pan, Yaliang Li, Bolin Ding, Jingren Zhou:
EE-LLM: Large-Scale Training and Inference of Early-Exit Large Language Models with 3D Parallelism. - Yeonsung Jung, Heecheol Yun, Joonhyung Park, Jin-Hwa Kim, Eunho Yang:
PruNeRF: Segment-Centric Dataset Pruning via 3D Spatial Consistency. - Ziyi Zhang, Sen Zhang, Yibing Zhan, Yong Luo, Yonggang Wen, Dacheng Tao:
Confronting Reward Overoptimization for Diffusion Models: A Perspective of Inductive and Primacy Biases. - Changchun Li, Yuanchao Dai, Lei Feng, Ximing Li, Bing Wang, Jihong Ouyang:
Positive and Unlabeled Learning with Controlled Probability Boundary Fence. - Bo Peng, Zhen Fang, Guangquan Zhang, Jie Lu:
Knowledge Distillation with Auxiliary Variable. - Jiafei Lyu, Chenjia Bai, Jingwen Yang, Zongqing Lu, Xiu Li:
Cross-Domain Policy Adaptation by Capturing Representation Mismatch. - Yen-Ju Chen, Nai-Chieh Huang, Ching-pei Lee, Ping-Chun Hsieh:
Accelerated Policy Gradient: On the Convergence Rates of the Nesterov Momentum for Reinforcement Learning. - Chohee Kim, Mihaela van der Schaar, Changhee Lee:
Discovering Features with Synergistic Interactions in Multiple Views. - Kaiwen Wang, Owen Oertell, Alekh Agarwal, Nathan Kallus, Wen Sun:
More Benefits of Being Distributional: Second-Order Bounds for Reinforcement Learning. - Sunjae Yoon, Gwanhyeong Koo, Geonwoo Kim, Chang D. Yoo:
FRAG: Frequency Adapting Group for Diffusion Video Editing. - Junchao Gong, Lei Bai, Peng Ye, Wanghan Xu, Na Liu, Jianhua Dai, Xiaokang Yang, Wanli Ouyang:
CasCast: Skillful High-resolution Precipitation Nowcasting via Cascaded Modelling. - Suppakit Waiwitlikhit, Ion Stoica, Yi Sun, Tatsunori Hashimoto, Daniel Kang:
Trustless Audits without Revealing Data or Models. - Jiachen Jiang, Jinxin Zhou, Peng Wang, Qing Qu, Dustin G. Mixon, Chong You, Zhihui Zhu:
Generalized Neural Collapse for a Large Number of Classes. - Shion Takeno, Yu Inatsu, Masayuki Karasuyama, Ichiro Takeuchi:
Posterior Sampling-Based Bayesian Optimization with Tighter Bayesian Regret Bounds. - Jungeun Kim, Junwon You, Dongjin Lee, Ha Young Kim, Jae-Hun Jung:
Do Topological Characteristics Help in Knowledge Distillation? - Yixiong Zou, Shanghang Zhang, Haichen Zhou, Yuhua Li, Ruixuan Li:
Compositional Few-Shot Class-Incremental Learning. - Yehui Tang, Nianzu Yang, Mabiao Long, Junchi Yan:
SSL4Q: Semi-Supervised Learning of Quantum Data with Application to Quantum State Classification. - Jun Cen, Chenfei Wu, Xiao Liu, Shengming Yin, Yixuan Pei, Jinglong Yang, Qifeng Chen, Nan Duan, Jianguo Zhang:
Using Left and Right Brains Together: Towards Vision and Language Planning. - Shawn Im, Yixuan Li:
Understanding the Learning Dynamics of Alignment with Human Feedback. - Shengchao Hu, Ziqing Fan, Li Shen, Ya Zhang, Yanfeng Wang, Dacheng Tao:
HarmoDT: Harmony Multi-Task Decision Transformer for Offline Reinforcement Learning. - Charles Andrew Dickens, Changyu Gao, Connor Pryor, Stephen J. Wright, Lise Getoor:
Convex and Bilevel Optimization for Neural-Symbolic Inference and Learning. - Jesse C. Cresswell, Yi Sui, Bhargava Kumar, Noël Vouitsis:
Conformal Prediction Sets Improve Human Decision Making. - Jay Gala, Pengtao Xie:
Leverage Class-Specific Accuracy to Guide Data Generation for Improving Image Classification. - Inwoo Hwang, Yesong Choe, Yeahoon Kwon, Sanghack Lee:
On Positivity Condition for Causal Inference. - Sidharth Mudgal, Jong Lee, Harish Ganapathy, YaGuang Li, Tao Wang, Yanping Huang, Zhifeng Chen, Heng-Tze Cheng, Michael Collins, Trevor Strohman, Jilin Chen, Alex Beutel, Ahmad Beirami:
Controlled Decoding from Language Models. - Asma Ghandeharioun, Avi Caciularu, Adam Pearce, Lucas Dixon, Mor Geva:
Patchscopes: A Unifying Framework for Inspecting Hidden Representations of Language Models. - Emmanuel Akeweje, Mimi Zhang:
Learning Mixtures of Gaussian Processes through Random Projection. - Haofan Lu, Christopher Vattheuer, Baharan Mirzasoleiman, Omid Abari:
NeWRF: A Deep Learning Framework for Wireless Radiation Field Reconstruction and Channel Prediction. - Youlong Ding, Xueyang Wu, Yining Meng, Yonggang Luo, Hao Wang, Weike Pan:
Delving into Differentially Private Transformer. - Przemyslaw Biecek, Wojciech Samek:
Position: Explain to Question not to Justify. - Agustinus Kristiadi, Felix Strieth-Kalthoff, Marta Skreta, Pascal Poupart, Alán Aspuru-Guzik, Geoff Pleiss:
A Sober Look at LLMs for Material Discovery: Are They Actually Good for Bayesian Optimization Over Molecules? - Kihyuk Hong, Ambuj Tewari:
A Primal-Dual Algorithm for Offline Constrained Reinforcement Learning with Linear MDPs. - Pranav Singh Chib, Achintya Nath, Paritosh Kabra, Ishu Gupta, Pravendra Singh:
MS-TIP: Imputation Aware Pedestrian Trajectory Prediction. - Wanli Shi, Yi Chang, Bin Gu:
Double Momentum Method for Lower-Level Constrained Bilevel Optimization. - Francisco Eiras, Adel Bibi, Rudy Bunel, Krishnamurthy Dj Dvijotham, Philip Torr, M. Pawan Kumar:
Efficient Error Certification for Physics-Informed Neural Networks. - Robert Geirhos, Roland S. Zimmermann, Blair L. Bilodeau, Wieland Brendel, Been Kim:
Don't trust your eyes: on the (un)reliability of feature visualizations. - Kexin Huang, Ziqian Chen, Xue Wang, Chongming Gao, Jinyang Gao, Bolin Ding, Xiang Wang:
Auctionformer: A Unified Deep Learning Algorithm for Solving Equilibrium Strategies in Auction Games. - Mikael Møller Høgsgaard, Lior Kamma, Kasper Green Larsen, Jelani Nelson, Chris Schwiegelshohn:
Sparse Dimensionality Reduction Revisited. - Hao-Yuan He, Hui Sun, Zheng Xie, Ming Li:
Ambiguity-Aware Abductive Learning. - Wanpeng Zhang, Yilin Li, Boyu Yang, Zongqing Lu:
Tackling Non-Stationarity in Reinforcement Learning via Causal-Origin Representation. - Yuan Gao, Anton Rodomanov, Sebastian U. Stich:
Non-convex Stochastic Composite Optimization with Polyak Momentum. - Xiaowen Jiang, Anton Rodomanov, Sebastian U. Stich:
Federated Optimization with Doubly Regularized Drift Correction. - Martin Mihelich, François Castagnos, Charles Dognin:
Interplay of ROC and Precision-Recall AUCs: Theoretical Limits and Practical Implications in Binary Classification. - Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel:
Promptbreeder: Self-Referential Self-Improvement via Prompt Evolution. - Zhiqi Bu, Yu-Xiang Wang, Sheng Zha, George Karypis:
Differentially Private Bias-Term Fine-tuning of Foundation Models. - David T. Hoffmann, Simon Schrodi, Jelena Bratulic, Nadine Behrmann, Volker Fischer, Thomas Brox:
Eureka-Moments in Transformers: Multi-Step Tasks Reveal Softmax Induced Optimization Problems. - Mimoun Mohamed, François Malgouyres, Valentin Emiya, Caroline Chaux:
Straight-Through Meets Sparse Recovery: the Support Exploration Algorithm. - Jinlu Zhang, Yiyi Zhou, Qiancheng Zheng, Xiaoxiong Du, Gen Luo, Jun Peng, Xiaoshuai Sun, Rongrong Ji:
Fast Text-to-3D-Aware Face Generation and Manipulation via Direct Cross-modal Mapping and Geometric Regularization. - Pei Liu, Luping Ji:
Weakly-Supervised Residual Evidential Learning for Multi-Instance Uncertainty Estimation. - Yuxin Dong, Tieliang Gong, Hong Chen, Zhongjiang He, Mengxiang Li, Shuangyong Song, Chen Li:
Towards Generalization beyond Pointwise Learning: A Unified Information-theoretic Perspective. - Xiaoqi Zhao, Youwei Pang, Wei Ji, Baicheng Sheng, Jiaming Zuo, Lihe Zhang, Huchuan Lu:
Spider: A Unified Framework for Context-dependent Concept Segmentation. - Yuxin Zhang, Yuxuan Du, Gen Luo, Yunshan Zhong, Zhenyu Zhang, Shiwei Liu, Rongrong Ji:
CaM: Cache Merging for Memory-efficient LLMs Inference. - Yuxin Li, Yaoxuan Feng, Bo Chen, Wenchao Chen, Yubiao Wang, Xinyue Hu, Baolin Sun, Chunhui Qu, Mingyuan Zhou:
Vague Prototype-Oriented Diffusion Model for Multi-Class Anomaly Detection. - Cédric Malherbe, Emilio Domínguez-Sánchez, Merwan Barlier, Igor Colin, Haitham Bou-Ammar, Tom Diethe:
Measures of diversity and space-filling designs for categorical data. - Yannik Mahlau, Frederik Schubert, Bodo Rosenhahn:
Mastering Zero-Shot Interactions in Cooperative and Competitive Simultaneous Games. - Xiao Luo, Yiyang Gu, Huiyu Jiang, Hang Zhou, Jinsheng Huang, Wei Ju, Zhiping Xiao, Ming Zhang, Yizhou Sun:
PGODE: Towards High-quality System Dynamics Modeling. - Shida Wang, Qianxiao Li:
StableSSM: Alleviating the Curse of Memory in State-space Models through Stable Reparameterization. - Tianle Cai, Yuhong Li, Zhengyang Geng, Hongwu Peng, Jason D. Lee, Deming Chen, Tri Dao:
Medusa: Simple LLM Inference Acceleration Framework with Multiple Decoding Heads. - Zhihe Lu, Jiawang Bai, Xin Li, Zeyu Xiao, Xinchao Wang:
Beyond Sole Strength: Customized Ensembles for Generalized Vision-Language Models. - Benjamin Scellier:
A fast algorithm to simulate nonlinear resistive networks. - Zixuan Hu, Yongxian Wei, Li Shen, Zhenyi Wang, Lei Li, Chun Yuan, Dacheng Tao:
Sparse Model Inversion: Efficient Inversion of Vision Transformers for Data-Free Applications. - Zhengbo Wang, Jian Liang, Ran He, Zilei Wang, Tieniu Tan:
Connecting the Dots: Collaborative Fine-tuning for Black-Box Vision-Language Models. - Chenyang Zhao, Kun Wang, Xingyu Zeng, Rui Zhao, Antoni B. Chan:
Gradient-based Visual Explanation for Transformer-based CLIP. - Depeng Li, Tianqi Wang, Junwei Chen, Wei Dai, Zhigang Zeng:
Harnessing Neural Unit Dynamics for Effective and Scalable Class-Incremental Learning. - Liulu He, Yufei Zhao, Rui Gao, Yuan Du, Li Du:
SFC: Achieve Accurate Fast Convolution under Low-precision Arithmetic. - Sreyan Ghosh, Chandra Kiran Reddy Evuru, Sonal Kumar, Ramaneswaran S., Deepali Aneja, Zeyu Jin, Ramani Duraiswami, Dinesh Manocha:
A Closer Look at the Limitations of Instruction Tuning. - Lin Zhu, Yifeng Yang, Qinying Gu, Xinbing Wang, Chenghu Zhou, Nanyang Ye:
CRoFT: Robust Fine-Tuning with Concurrent Optimization for OOD Generalization and Open-Set OOD Detection. - Spandan Pyakurel, Qi Yu:
Hierarchical Novelty Detection via Fine-Grained Evidence Allocation. - Pan Xu:
Parameter-Dependent Competitive Analysis for Online Capacitated Coverage Maximization through Boostings and Attenuations. - Federico Bianchi, Patrick John Chia, Mert Yüksekgönül, Jacopo Tagliabue, Dan Jurafsky, James Zou:
How Well Can LLMs Negotiate? NegotiationArena Platform and Analysis. - Xingyou Song, Yingtao Tian, Robert Tjarko Lange, Chansoo Lee, Yujin Tang, Yutian Chen:
Position: Leverage Foundational Models for Black-Box Optimization. - Maxime Heuillet, Ola Ahmad, Audrey Durand:
Randomized Confidence Bounds for Stochastic Partial Monitoring. - Yifeng Tian, Nishant Panda, Yen Ting Lin:
Liouville Flow Importance Sampler. - Talip Ucar, Aubin Ramon, Dino Oglic, Rebecca Croasdale-Wood, Tom Diethe, Pietro Sormanni:
Improving Antibody Humanness Prediction using Patent Data. - Marten van Dijk, Nhuong V. Nguyen, Toan N. Nguyen, Lam M. Nguyen, Phuong Ha Nguyen:
Proactive DP: A Multiple Target Optimization Framework for DP-SGD. - Alex Gu, Baptiste Rozière, Hugh James Leather, Armando Solar-Lezama, Gabriel Synnaeve, Sida Wang:
CRUXEval: A Benchmark for Code Reasoning, Understanding and Execution. - Tianjun Yao, Yongqiang Chen, Zhenhao Chen, Kai Hu, Zhiqiang Shen, Kun Zhang:
Empowering Graph Invariance Learning with Deep Spurious Infomax. - Xiyuan Wang, Pan Li, Muhan Zhang:
Graph As Point Set. - Haoran You, Yichao Fu, Zheng Wang, Amir Yazdanbakhsh, Yingyan Celine Lin:
When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models. - Chujie Zheng, Fan Yin, Hao Zhou, Fandong Meng, Jie Zhou, Kai-Wei Chang, Minlie Huang, Nanyun Peng:
On Prompt-Driven Safeguarding for Large Language Models. - Tom Sander, Yaodong Yu, Maziar Sanjabi, Alain Oliviero Durmus, Yi Ma, Kamalika Chaudhuri, Chuan Guo:
Differentially Private Representation Learning via Image Captioning. - Dongze Lian, Weihao Yu, Xinchao Wang:
Receptive Fields As Experts in Convolutional Neural Architectures. - Moritz Herrmann, F. Julian D. Lange, Katharina Eggensperger, Giuseppe Casalicchio, Marcel Wever, Matthias Feurer, David Rügamer, Eyke Hüllermeier, Anne-Laure Boulesteix, Bernd Bischl:
Position: Why We Must Rethink Empirical Research in Machine Learning. - Sattar Vakili, Farhang Nabiei, Da-shan Shiu, Alberto Bernacchia:
Reward-Free Kernel-Based Reinforcement Learning. - Jaeyun Kim, Saerom Park, Joohee Lee, Jung Hee Cheon:
Privacy-Preserving Embedding via Look-up Table Evaluation with Fully Homomorphic Encryption. - Hugo Thimonier, Fabrice Popineau, Arpad Rimmel, Bich-Liên Doan:
Beyond Individual Input for Deep Anomaly Detection on Tabular Data. - Anay Majee, Suraj Kothawade, Krishnateja Killamsetty, Rishabh K. Iyer:
SCoRe: Submodular Combinatorial Representation Learning. - Yuexiao Ma, Huixia Li, Xiawu Zheng, Feng Ling, Xuefeng Xiao, Rui Wang, Shilei Wen, Fei Chao, Rongrong Ji:
Outlier-aware Slicing for Post-Training Quantization in Vision Transformer. - Konstantin Mishchenko, Aaron Defazio:
Prodigy: An Expeditiously Adaptive Parameter-Free Learner. - Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang:
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities. - Justin Deschenaux, Igor Krawczuk, Grigorios Chrysos, Volkan Cevher:
Going beyond Compositions, DDPMs Can Produce Zero-Shot Interpolations. - Jen Ning Lim, Juan Kuntz, Samuel Power, Adam M. Johansen:
Momentum Particle Maximum Likelihood. - Yusong Hu, De Cheng, Dingwen Zhang, Nannan Wang, Tongliang Liu, Xinbo Gao:
Task-aware Orthogonal Sparse Network for Exploring Shared Knowledge in Continual Learning. - Yunzhuang Shen, Yuan Sun, Xiaodong Li, Zhiguang Cao, Andrew C. Eberhard, Guangquan Zhang:
Adaptive Stabilization Based on Machine Learning for Column Generation. - Nora Belrose, Quintin Pope, Lucia Quirke, Alex Mallen, Xiaoli Z. Fern:
Neural Networks Learn Statistics of Increasing Complexity. - Zhiwei Hao, Jianyuan Guo, Chengcheng Wang, Yehui Tang, Han Wu, Han Hu, Kai Han, Chang Xu:
Data-efficient Large Vision Models through Sequential Autoregression. - Xinjie Yao, Yu Wang, Pengfei Zhu, Wanyu Lin, Jialu Li, Weihao Li, Qinghua Hu:
Socialized Learning: Making Each Other Better Through Multi-Agent Collaboration. - Zhongwei Yu, Jingqing Ruan, Dengpeng Xing:
Learning Causal Dynamics Models in Object-Oriented Environments. - Davin Choo, Themistoklis Gouleakis, Chun Kai Ling, Arnab Bhattacharyya:
Online bipartite matching with imperfect advice. - Lianghui Zhu, Bencheng Liao, Qian Zhang, Xinlong Wang, Wenyu Liu, Xinggang Wang:
Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model. - Honghao Chen, Yurong Zhang, Xiaokun Feng, Xiangxiang Chu, Kaiqi Huang:
Revealing the Dark Secrets of Extremely Large Kernel ConvNets on Robustness. - Xu Cheng, Lei Cheng, Zhaoran Peng, Yang Xu, Tian Han, Quanshi Zhang:
Layerwise Change of Knowledge in Neural Networks. - Yair Schiff, Zhong Yi Wan, Jeffrey B. Parker, Stephan Hoyer, Volodymyr Kuleshov, Fei Sha, Leonardo Zepeda-Núñez:
DySLIM: Dynamics Stable Learning by Invariant Measure for Chaotic Systems. - Seohong Park, Tobias Kreiman, Sergey Levine:
Foundation Policies with Hilbert Representations. - Wei Yu, Jianing Li, Shengping Zhang, Xiangyang Ji:
Learning Scale-Aware Spatio-temporal Implicit Representation for Event-based Motion Deblurring. - Junyu Gao, Xuan Yao, Changsheng Xu:
Fast-Slow Test-Time Adaptation for Online Vision-and-Language Navigation. - Runfa Chen, Ling Wang, Yu Du, Tianrui Xue, Fuchun Sun, Jianwei Zhang, Wenbing Huang:
Subequivariant Reinforcement Learning in 3D Multi-Entity Physical Environments. - Yue Xu, Zhilin Lin, Yusong Qiu, Cewu Lu, Yong-Lu Li:
Low-Rank Similarity Mining for Multimodal Dataset Distillation. - Zhiyuan Cheng, Zhaoyi Liu, Tengda Guo, Shiwei Feng, Dongfang Liu, Mingjie Tang, Xiangyu Zhang:
BadPart: Unified Black-box Adversarial Patch Attacks against Pixel-wise Regression Tasks. - Boyuan Wang, Yun Qu, Yuhang Jiang, Jianzhun Shao, Chang Liu, Wenming Yang, Xiangyang Ji:
LLM-Empowered State Representation for Reinforcement Learning. - Haihan Gao, Rui Zhang, Qi Yi, Hantao Yao, Haochen Li, Jiaming Guo, Shaohui Peng, Yunkai Gao, QiCheng Wang, Xing Hu, Yuanbo Wen, Zihao Zhang, Zidong Du, Ling Li, Qi Guo, Yunji Chen:
Prompt-based Visual Alignment for Zero-shot Policy Transfer. - Navdeep Kumar, Kaixin Wang, Kfir Yehuda Levy, Shie Mannor:
Efficient Value Iteration for s-rectangular Robust Markov Decision Processes. - Dave Epstein, Ben Poole, Ben Mildenhall, Alexei A. Efros, Aleksander Holynski:
Disentangled 3D Scene Generation with Layout Learning. - Theodore Papamarkou, Tolga Birdal, Michael M. Bronstein, Gunnar E. Carlsson, Justin Curry, Yue Gao, Mustafa Hajij, Roland Kwitt, Pietro Lio, Paolo Di Lorenzo, Vasileios Maroulas, Nina Miolane, Farzana Nasrin, Karthikeyan Natesan Ramamurthy, Bastian Rieck, Simone Scardapane, Michael T. Schaub, Petar Velickovic, Bei Wang, Yusu Wang, Guo-Wei Wei, Ghada Zamzmi:
Position: Topological Deep Learning is the New Frontier for Relational Learning. - Theodore Papamarkou, Maria Skoularidou, Konstantina Palla, Laurence Aitchison, Julyan Arbel, David B. Dunson, Maurizio Filippone, Vincent Fortuin, Philipp Hennig, José Miguel Hernández-Lobato, Aliaksandr Hubin, Alexander Immer, Theofanis Karaletsos, Mohammad Emtiyaz Khan, Agustinus Kristiadi, Yingzhen Li, Stephan Mandt, Christopher Nemeth, Michael A. Osborne, Tim G. J. Rudner, David Rügamer, Yee Whye Teh, Max Welling, Andrew Gordon Wilson, Ruqi Zhang:
Position: Bayesian Deep Learning is Needed in the Age of Large-Scale AI. - Jianke Yang, Nima Dehmamy, Robin Walters, Rose Yu:
Latent Space Symmetry Discovery. - Hyungjin Chung, Jong Chul Ye, Peyman Milanfar, Mauricio Delbracio:
Prompt-tuning Latent Diffusion Models for Inverse Problems. - Hao Zhang, Fang Li, Samyak Rawlekar, Narendra Ahuja:
S3O: A Dual-Phase Approach for Reconstructing Dynamic Shape and Skeleton of Articulated Objects from Single Monocular Video. - Kiho Park, Yo Joong Choe, Victor Veitch:
The Linear Representation Hypothesis and the Geometry of Large Language Models. - Di Chang, Yichun Shi, Quankai Gao, Hongyi Xu, Jessica Fu, Guoxian Song, Qing Yan, Yizhe Zhu, Xiao Yang, Mohammad Soleymani:
MagicPose: Realistic Human Poses and Facial Expressions Retargeting with Identity-aware Diffusion. - Kyoungseok Jang, Chicheng Zhang, Kwang-Sung Jun:
Efficient Low-Rank Matrix Estimation, Experimental Design, and Arm-Set-Dependent Low-Rank Bandits. - Kwang-Sung Jun, Jungtaek Kim:
Noise-Adaptive Confidence Sets for Linear Bandits and Application to Bayesian Optimization. - Songhua He, Periklis A. Papakonstantinou:
The Effect of Weight Precision on the Neuron Count in Deep ReLU Networks. - Luofeng Liao, Christian Kroer:
Bootstrapping Fisher Market Equilibrium and First-Price Pacing Equilibrium. - Dan Friedman, Andrew Kyle Lampinen, Lucas Dixon, Danqi Chen, Asma Ghandeharioun:
Interpretability Illusions in the Generalization of Simplified Models. - Bing Cao, Yinan Xia, Yi Ding, Changqing Zhang, Qinghua Hu:
Predictive Dynamic Fusion. - Guoqi Yu, Jing Zou, Xiaowei Hu, Angelica I. Avilés-Rivero, Jing Qin, Shujun Wang:
Revitalizing Multivariate Time Series Forecasting: Learnable Decomposition with Inter-Series Dependencies and Intra-Series Variations Modeling. - Dachuan Shi, Chaofan Tao, Anyi Rao, Zhendong Yang, Chun Yuan, Jiaqi Wang:
CrossGET: Cross-Guided Ensemble of Tokens for Accelerating Vision-Language Transformers. - Weitao Feng, Wenbo Zhou, Jiyan He, Jie Zhang, Tianyi Wei, Guanlin Li, Tianwei Zhang, Weiming Zhang, Nenghai Yu:
AquaLoRA: Toward White-box Protection for Customized Stable Diffusion Models via Watermark LoRA. - Philipp Holl, Nils Thuerey:
ΦFlow: Differentiable Simulations for PyTorch, TensorFlow and Jax. - Ziqi Gao, Qichao Wang, Aochuan Chen, Zijing Liu, Bingzhe Wu, Liang Chen, Jia Li:
Parameter-Efficient Fine-Tuning with Discrete Fourier Transform. - Konstantin Donhauser, Javier Abad Martinez, Neha Hulkund, Fanny Yang:
Privacy-Preserving Data Release Leveraging Optimal Transport and Particle Gradient Descent. - Christian Igel:
Smooth Min-Max Monotonic Networks. - Yunheng Li, Zhongyu Li, Quansheng Zeng, Qibin Hou, Ming-Ming Cheng:
Cascade-CLIP: Cascaded Vision-Language Embeddings Alignment for Zero-Shot Semantic Segmentation. - Hyeyoon Lee, Kanghyun Choi, Dain Kwon, Sunjong Park, Mayoore Selvarasa Jaiswal, Noseong Park, Jonghyun Choi, Jinho Lee:
DataFreeShield: Defending Adversarial Attacks without Training Data. - Elías Abad-Rocamora, Yongtao Wu, Fanghui Liu, Grigorios Chrysos, Volkan Cevher:
Revisiting Character-level Adversarial Attacks for Language Models. - Yi-Xuan Sun, Ya-Lin Zhang, Bin Han, Longfei Li, Jun Zhou:
Self-cognitive Denoising in the Presence of Multiple Noisy Label Sources. - Haihui Pan, Zhifang Liao, Wenrui Xie, Kun Han:
RMIB: Representation Matching Information Bottleneck for Matching Text Representations. - Jiangyong Huang, Silong Yong, Xiaojian Ma, Xiongkun Linghu, Puhao Li, Yan Wang, Qing Li, Song-Chun Zhu, Baoxiong Jia, Siyuan Huang:
An Embodied Generalist Agent in 3D World. - Lorenz K. Müller:
Is Kernel Prediction More Powerful than Gating in Convolutional Neural Networks? - Haoyu Zhang, Meng Liu, Zixin Liu, Xuemeng Song, Yaowei Wang, Liqiang Nie:
Multi-Factor Adaptive Vision Selection for Egocentric Video Question Answering. - Zicheng Liu, Siyuan Li, Li Wang, Zedong Wang, Yunfan Liu, Stan Z. Li:
Short-Long Convolutions Help Hardware-Efficient Linear Attention to Focus on Long Sequences. - Guy Horowitz, Yonatan Sommer, Moran Koren, Nir Rosenfeld:
Classification Under Strategic Self-Selection. - Pengwei Liu, Zhongkai Hao, Xingyu Ren, Hangjie Yuan, Jiayang Ren, Dong Ni:
PAPM: A Physics-aware Proxy Model for Process Systems. - Yuhao Wu, Jiangchao Yao, Xiaobo Xia, Jun Yu, Ruxin Wang, Bo Han, Tongliang Liu:
Mitigating Label Noise on Graphs via Topological Sample Selection. - Yuhao Wu, Jiangchao Yao, Bo Han, Lina Yao, Tongliang Liu:
Unraveling the Impact of Heterophilic Structures on Graph Positive-Unlabeled Learning. - Zhiquan Tan, Kaipeng Zheng, Weiran Huang:
OTMatch: Improving Semi-Supervised Learning with Optimal Transport. - Jiashuo Liu, Jiayun Wu, Tianyu Wang, Hao Zou, Bo Li, Peng Cui:
Geometry-Calibrated DRO: Combating Over-Pessimism with Free Energy Implications. - Zhiquan Tan, Jingqin Yang, Weiran Huang, Yang Yuan, Yifan Zhang:
Information Flow in Self-Supervised Learning. - Leo Feng, Frederick Tung, Hossein Hajimirsadeghi, Yoshua Bengio, Mohamed Osama Ahmed:
Memory Efficient Neural Processes via Constant Memory Attention Block. - Yifan Zhang, Zhiquan Tan, Jingqin Yang, Weiran Huang, Yang Yuan:
Matrix Information Theory for Self-Supervised Learning. - Yichen Wen, Zhiquan Tan, Kaipeng Zheng, Chuanlong Xie, Weiran Huang:
Provable Contrastive Continual Learning. - Jiaming Zhao, Wenbo Qiao, Peng Zhang, Hui Gao:
Quantum Implicit Neural Representations. - Yuhang He, Anoop Cherian, Gordon Wichern, Andrew Markham:
Deep Neural Room Acoustics Primitive. - Kyurae Kim, Joohwan Ko, Yian Ma, Jacob R. Gardner:
Demystifying SGD with Doubly Stochastic Gradients. - Joohwan Ko, Kyurae Kim, Woochang Kim, Jacob R. Gardner:
Provably Scalable Black-Box Variational Inference with Structured Variational Families. - Kun Song, Zhiquan Tan, Bochao Zou, Huimin Ma, Weiran Huang:
Unveiling the Dynamics of Information Interplay in Supervised Learning. - Shengchao Hu, Ziqing Fan, Chaoqin Huang, Li Shen, Ya Zhang, Yanfeng Wang, Dacheng Tao:
Q-value Regularized Transformer for Offline Reinforcement Learning. - Abhinab Acharya, Dayou Yu, Qi Yu, Xumin Liu:
Balancing Feature Similarity and Label Variability for Optimal Size-Aware One-shot Subset Selection. - Hitesh Sapkota, Krishna Prasad Neupane, Qi Yu:
Meta Evidential Transformer for Few-Shot Open-Set Recognition. - Victor Quintas-Martinez, Mohammad Taha Bahadori, Eduardo Santiago, Jeff Mu, David Heckerman:
Multiply-Robust Causal Change Attribution. - S. Chandra Mouli, Danielle C. Maddix, Shima Alizadeh, Gaurav Gupta, Andrew Stuart, Michael W. Mahoney, Bernie Wang:
Using Uncertainty Quantification to Characterize and Improve Out-of-Domain Learning for PDEs. - Tianren Zhang, Chujie Zhao, Guanyu Chen, Yizhou Jiang, Feng Chen:
Feature Contamination: Neural Networks Learn Uncorrelated Features and Fail to Generalize. - Hyeong Kyu Choi, Yixuan Li:
PICLe: Eliciting Diverse Behaviors from Large Language Models with Persona In-Context Learning. - Weihan Li, Chengrui Li, Yule Wang, Anqi Wu:
Multi-Region Markovian Gaussian Process: An Efficient Method to Discover Directional Communications Across Multiple Brain Regions. - Gianluca Detommaso, Martin Bertran Lopez, Riccardo Fogliato, Aaron Roth:
Multicalibration for Confidence Scoring in LLMs. - Yuhui Wang, Weida Li, Francesco Faccio, Qingyuan Wu, Jürgen Schmidhuber:
Highway Value Iteration Networks. - Ziyad Benomar, Vianney Perchet:
Non-clairvoyant Scheduling with Partial Predictions. - Kaiyuan Chen, Xingzhuo Guo, Yu Zhang, Jianmin Wang, Mingsheng Long:
CogDPM: Diffusion Probabilistic Models via Cognitive Predictive Coding. - Yongxian Wei, Zixuan Hu, Li Shen, Zhenyi Wang, Yu Li, Chun Yuan, Dacheng Tao:
Task Groupings Regularization: Data-Free Meta-Learning with Heterogeneous Pre-trained Models. - Marco Mussi, Simone Drago, Marcello Restelli, Alberto Maria Metelli:
Factored-Reward Bandits with Intermediate Observations. - Yunhao Zhang, Minghao Liu, Shengyang Zhou, Junchi Yan:
UP2ME: Univariate Pre-training to Multivariate Fine-tuning as a General-purpose Framework for Multivariate Time Series Analysis. - Luca Viano, Stratis Skoulakis, Volkan Cevher:
Imitation Learning in Discounted Linear MDPs without exploration assumptions. - Zelai Xu, Chao Yu, Fei Fang, Yu Wang, Yi Wu:
Language Agents with Reinforcement Learning for Strategic Play in the Werewolf Game. - Kai Li, Runxuan Yang, Fuchun Sun, Xiaolin Hu:
IIANet: An Intra- and Inter-Modality Attention Network for Audio-Visual Speech Separation. - Qinglin Liu, Xiaoqian Lv, Quanling Meng, Zonglin Li, Xiangyuan Lan, Shuo Yang, Shengping Zhang, Liqiang Nie:
Revisiting Context Aggregation for Image Matting. - Guibin Zhang, Yanwei Yue, Kun Wang, Junfeng Fang, Yongduo Sui, Kai Wang, Yuxuan Liang, Dawei Cheng, Shirui Pan, Tianlong Chen:
Two Heads Are Better Than One: Boosting Graph Sparse Training via Semantic and Topological Awareness. - Danil Provodin, Maurits Clemens Kaptein, Mykola Pechenizkiy:
Efficient Exploration in Average-Reward Constrained Reinforcement Learning: Achieving Near-Optimal Regret With Posterior Sampling. - Yihua Zhang, Pingzhi Li, Junyuan Hong, Jiaxiang Li, Yimeng Zhang, Wenqing Zheng, Pin-Yu Chen, Jason D. Lee, Wotao Yin, Mingyi Hong, Zhangyang Wang, Sijia Liu, Tianlong Chen:
Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark. - Wentao Zhu, Zhining Zhang, Yizhou Wang:
Language Models Represent Beliefs of Self and Others. - Shiwei Li, Wenchao Xu, Haozhao Wang, Xing Tang, Yining Qi, Shijie Xu, Weihong Luo, Yuhua Li, Xiuqiang He, Ruixuan Li:
FedBAT: Communication-Efficient Federated Learning via Learnable Binarization. - Edwige Cyffers, Aurélien Bellet, Jalaj Upadhyay:
Differentially Private Decentralized Learning with Random Walks. - Tianyu Cui, Hongxia Li, Jingya Wang, Ye Shi:
Harmonizing Generalization and Personalization in Federated Prompt Learning. - Fang Wu, Stan Z. Li:
Surface-VQMAE: Vector-quantized Masked Auto-encoders on Molecular Surfaces. - Chia-Cheng Chiang, Li-Cheng Lan, Wei-Fang Sun, Chien Feng, Cho-Jui Hsieh, Chun-Yi Lee:
Expert Proximity as Surrogate Rewards for Single Demonstration Imitation Learning. - Yuka Hashimoto, Masahiro Ikeda, Hachem Kadri:
Position: C∗-Algebraic Machine Learning - Moving in a New Direction. - Yixing Xu, Chao Li, Dong Li, Xiao Sheng, Fan Jiang, Lu Tian, Ashish Sirasao, Emad Barsoum:
Enhancing Vision Transformer: Amplifying Non-Linearity in Feedforward Network Module. - Muzhi Zhu, Chengxiang Fan, Hao Chen, Yang Liu, Weian Mao, Xiaogang Xu, Chunhua Shen:
Generative Active Learning for Long-tailed Instance Segmentation. - Zeyang Zhang, Xin Wang, Yijian Qin, Hong Chen, Ziwei Zhang, Xu Chu, Wenwu Zhu:
Disentangled Continual Graph Neural Architecture Search with Invariant Modular Supernet. - Guangyi Chen, Yifan Shen, Zhenhao Chen, Xiangchen Song, Yuewen Sun, Weiran Yao, Xiao Liu, Kun Zhang:
CaRiNG: Learning Temporal Causal Representation under Non-Invertible Generation Process. - Yaoming Wang, Jin Li, Wenrui Dai, Bowen Shi, Xiaopeng Zhang, Chenglin Li, Hongkai Xiong:
Bootstrap AutoEncoders With Contrastive Paradigm for Self-supervised Gaze Estimation. - Duo Peng, Qiuhong Ke, Jun Liu:
UPAM: Unified Prompt Attack in Text-to-Image Generation Models Against Both Textual Filters and Visual Checkers. - Qinyuan Cheng, Tianxiang Sun, Xiangyang Liu, Wenwei Zhang, Zhangyue Yin, Shimin Li, Linyang Li, Zhengfu He, Kai Chen, Xipeng Qiu:
Can AI Assistants Know What They Don't Know? - Danny Halawi, Alexander Wei, Eric Wallace, Tony Tong Wang, Nika Haghtalab, Jacob Steinhardt:
Covert Malicious Finetuning: Challenges in Safeguarding LLM Adaptation. - Jaehyeong Jo, Sung Ju Hwang:
Generative Modeling on Manifolds Through Mixture of Riemannian Diffusion Processes. - Jaehyeong Jo, Dongki Kim, Sung Ju Hwang:
Graph Generation with Diffusion Mixture. - Manjie Xu, Chenxing Li, Duzhen Zhang, Dan Su, Wei Liang, Dong Yu:
Prompt-guided Precise Audio Editing with Diffusion Models. - Seul Lee, Seanie Lee, Kenji Kawaguchi, Sung Ju Hwang:
Drug Discovery with Dynamic Goal-aware Fragments. - Zhen Qin, Daoyuan Chen, Bingchen Qian, Bolin Ding, Yaliang Li, Shuiguang Deng:
Federated Full-Parameter Tuning of Billion-Sized Language Models with Communication Cost under 18 Kilobytes. - Chenyin Gao, Zhiming Zhang, Shu Yang:
Causal Customer Churn Analysis with Low-rank Tensor Block Hazard Model. - Maya Bechler-Speicher, Ido Amos, Ran Gilad-Bachrach, Amir Globerson:
Graph Neural Networks Use Graphs When They Shouldn't. - Nate Gillman, Michael Freeman, Daksh Aggarwal, Chia-Hong Hsu, Calvin Luo, Yonglong Tian, Chen Sun:
Self-Correcting Self-Consuming Loops for Generative Model Training. - Raghav Singhal, Mark Goldstein, Rajesh Ranganath:
What's the score? Automated Denoising Score Matching for Nonlinear Diffusions. - R. Kenny Jones, Siddhartha Chaudhuri, Daniel Ritchie:
Learning to Infer Generative Template Programs for Visual Concepts. - Jaewoo Lee, Jaehong Yoon, Wonjae Kim, Yunji Kim, Sung Ju Hwang:
STELLA: Continual Audio-Video Pre-training with SpatioTemporal Localized Alignment. - Zirou Qiu, Abhijin Adiga, Madhav V. Marathe, S. S. Ravi, Daniel J. Rosenkrantz, Richard Edwin Stearns, Anil Kumar S. Vullikanti:
Efficient PAC Learnability of Dynamical Systems Over Multilayer Networks. - Mingrui Wu, Jiayi Ji, Oucheng Huang, Jiale Li, Yuhang Wu, Xiaoshuai Sun, Rongrong Ji:
Evaluating and Analyzing Relationship Hallucinations in Large Vision-Language Models. - Sheng Xu, Guiliang Liu:
Robust Inverse Constrained Reinforcement Learning under Model Misspecification. - Ding Jia, Jianyuan Guo, Kai Han, Han Wu, Chao Zhang, Chang Xu, Xinghao Chen:
GeminiFusion: Efficient Pixel-wise Multimodal Fusion for Vision Transformer. - Ganzhao Yuan:
Smoothing Proximal Gradient Methods for Nonsmooth Sparsity Constrained Optimization: Optimality Conditions and Global Convergence. - Ruisi Cai, Yuandong Tian, Zhangyang Wang, Beidi Chen:
LoCoCo: Dropping In Convolutions for Long Context Compression. - Chenwei Xu, Yu-Chao Huang, Jerry Yao-Chieh Hu, Weijian Li, Ammar Gilani, Hsi-Sheng Goan, Han Liu:
BiSHop: Bi-Directional Cellular Learning for Tabular Data with Generalized Sparse Modern Hopfield Model. - Guiyang Chan, Pengcheng Zhang, Hai Dong, Shunhui Ji, Bainian Chen:
Scribble-Supervised Semantic Segmentation with Prototype-based Feature Augmentation. - Binchi Zhang, Yushun Dong, Tianhao Wang, Jundong Li:
Towards Certified Unlearning for Deep Neural Networks. - Junhao Yu, Yan Zhuang, Zhenya Huang, Qi Liu, Xin Li, Rui Li, Enhong Chen:
A Unified Adaptive Testing System Enabled by Hierarchical Structure Search. - Runqi Lin, Chaojian Yu, Bo Han, Hang Su, Tongliang Liu:
Layer-Aware Analysis of Catastrophic Overfitting: Revealing the Pseudo-Robust Shortcut Dependency. - Zhenyi Wang, Yihan Wu, Heng Huang:
Defense against Model Extraction Attack by Bayesian Active Watermarking. - Chen-Yu Yen, Raghav Singhal, Umang Sharma, Rajesh Ranganath, Sumit Chopra, Lerrel Pinto:
Adaptive Sampling of k-Space in Magnetic Resonance for Rapid Pathology Prediction. - Guojun Xiong, Jian Li:
Provably Efficient Reinforcement Learning for Adversarial Restless Multi-Armed Bandits with Unknown Transitions and Bandit Feedback. - Riqiang Gao, Florin-Cristian Ghesu, Simon Arberet, Shahab Basiri, Esa Kuusela, Martin Kraus, Dorin Comaniciu, Ali Kamen:
Multi-Agent Reinforcement Learning Meets Leaf Sequencing in Radiotherapy. - Zechun Liu, Changsheng Zhao, Forrest N. Iandola, Chen Lai, Yuandong Tian, Igor Fedorov, Yunyang Xiong, Ernie Chang, Yangyang Shi, Raghuraman Krishnamoorthi, Liangzhen Lai, Vikas Chandra:
MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases. - Yash Akhauri, Mohamed S. Abdelfattah:
Encodings for Prediction-based Neural Architecture Search. - Philip Amortila, Dylan J. Foster, Akshay Krishnamurthy:
Scalable Online Exploration via Coverability. - Xuefeng Du, Yiyou Sun, Yixuan Li:
When and How Does In-Distribution Label Help Out-of-Distribution Detection? - Yu Huang, Yuan Cheng, Yingbin Liang:
In-context Convergence of Transformers. - Kaizhao Liu, José H. Blanchet, Lexing Ying, Yiping Lu:
Orthogonal Bootstrap: Efficient Simulation of Input Uncertainty. - Chenghao Liu, Minghua Chen:
ReLU Network with Width d+O(1) Can Achieve Optimal Approximation Rate. - Chenghao Liu, Enming Liang, Minghua Chen:
Characterizing ResNet's Universal Approximation Capability. - Taoran Fang, Wei Chow, Yifei Sun, Kaiqiao Han, Lvbin Ma, Yang Yang:
Exploring Correlations of Self-Supervised Tasks for Graphs. - Zhifa Ke, Zaiwen Wen, Junyu Zhang:
An Improved Finite-time Analysis of Temporal Difference Learning with Deep Neural Networks. - Ziyin Liu:
Symmetry Induces Structure and Constraint of Learning. - Huanran Chen, Yinpeng Dong, Zhengyi Wang, Xiao Yang, Chengqi Duan, Hang Su, Jun Zhu:
Robust Classification via a Single Diffusion Model. - Zhibin Gu, Zhendong Li, Songhe Feng:
EDISON: Enhanced Dictionary-Induced Tensorized Incomplete Multi-View Clustering with Gaussian Error Rank Minimization. - Ji Li, Chao Wang:
DiffFPR: Diffusion Prior for Oversampled Fourier Phase Retrieval. - Li Zhang, Youwei Liang, Ruiyi Zhang, Amirhosein Javadi, Pengtao Xie:
BLO-SAM: Bi-level Optimization Based Finetuning of the Segment Anything Model for Overfitting-Preventing Semantic Segmentation. - Minkai Xu, Jiaqi Han, Aaron Lou, Jean Kossaifi, Arvind Ramanathan, Kamyar Azizzadenesheli, Jure Leskovec, Stefano Ermon, Anima Anandkumar:
Equivariant Graph Neural Operator for Modeling 3D Dynamics. - Fabian Raoul Pieroth, Katherine E. Fitch, Lenz Belzner:
Detecting Influence Structures in Multi-Agent Reinforcement Learning. - Yunhao Luo, Chen Sun, Joshua B. Tenenbaum, Yilun Du:
Potential Based Diffusion Motion Planning. - Chengrui Li, Weihan Li, Yule Wang, Anqi Wu:
A Differentiable Partially Observable Generalized Linear Model with Forward-Backward Message Passing. - Raunaq M. Bhirangi, Chenyu Wang, Venkatesh Pattabiraman, Carmel Majidi, Abhinav Gupta, Tess Lee Hellebrekers, Lerrel Pinto:
Hierarchical State Space Models for Continuous Sequence-to-Sequence Modeling. - Ian Colbert, Alessandro Pappalardo, Jakoba Petri-Koenig, Yaman Umuroglu:
A2Q+: Improving Accumulator-Aware Weight Quantization. - Jingwen Fu, Tao Yang, Yuwang Wang, Yan Lu, Nanning Zheng:
Breaking through the learning plateaus of in-context learning in Transformer. - Zhiqiu Lin, Xinyue Chen, Deepak Pathak, Pengchuan Zhang, Deva Ramanan:
Revisiting the Role of Language Priors in Vision-Language Models. - Wenshuai Zhao, Yi Zhao, Zhiyuan Li, Juho Kannala, Joni Pajarinen:
Optimistic Multi-Agent Policy Gradient. - Hila Manor, Tomer Michaeli:
Zero-Shot Unsupervised and Text-Based Audio Editing Using DDPM Inversion. - Limin Liu, Shuai He, Anlong Ming, Rui Xie, Huadong Ma:
ELTA: An Enhancer against Long-Tail for Aesthetics-oriented Models. - Mingqing Xiao, Yixin Zhu, Di He, Zhouchen Lin:
Temporal Spiking Neural Networks with Synaptic Delay for Graph Reasoning. - Tianlang Chen, Shengjie Luo, Di He, Shuxin Zheng, Tie-Yan Liu, Liwei Wang:
GeoMFormer: A General Architecture for Geometric Molecular Representation Learning. - Xin Wang, Shengfei Lyu, Lishan Yang, Yibing Zhan, Huanhuan Chen:
A Dual-module Framework for Counterfactual Estimation over Time. - Chi Zhang, Jingpu Cheng, Yanyu Xu, Qianxiao Li:
Parameter-Efficient Fine-Tuning with Controls. - Elan Rosenfeld, Nir Rosenfeld:
One-Shot Strategic Classification Under Unknown Costs. - Jaehoon Hahm, Junho Lee, Sunghyun Kim, Joonseok Lee:
Isometric Representation Learning for Disentangled Latent Space of Diffusion Models. - Xingzhuo Guo, Junwei Pan, Ximei Wang, Baixu Chen, Jie Jiang, Mingsheng Long:
On the Embedding Collapse when Scaling up Recommendation Models. - Zhiyu Yao, Jian Wang, Haixu Wu, Jingdong Wang, Mingsheng Long:
Mobile Attention: Mobile-Friendly Linear-Attention for Vision Transformers. - Yuyang Wang, Ahmed A. A. Elhag, Navdeep Jaitly, Joshua M. Susskind, Miguel Ángel Bautista:
Swallowing the Bitter Pill: Simplified Scalable Conformer Generation. - Abdellah El Mrini, Edwige Cyffers, Aurélien Bellet:
Privacy Attacks in Decentralized Learning. - Amirhossein Farzam, Allen R. Tannenbaum, Guillermo Sapiro:
From Geometry to Causality- Ricci Curvature and the Reliability of Causal Inference on Networks. - Yue Yang, Yuqi Lin, Hong Liu, Wenqi Shao, Runjian Chen, Hailong Shang, Yu Wang, Yu Qiao, Kaipeng Zhang, Ping Luo:
Position: Towards Implicit Prompt For Text-To-Image Models. - Daqian Shao, Ashkan Soleymani, Francesco Quinzan, Marta Kwiatkowska:
Learning Decision Policies with Instrumental Variables through Double Machine Learning. - Yuming Shao, Zhixuan Fang:
On Multi-Armed Bandit with Impatient Arms. - Étienne Bamas, Sai Ganesh Nagarajan, Ola Svensson:
Analyzing Dα seeding for k-means. - Junlong Lyu, Zhitang Chen, Shoubo Feng:
Sampling is as easy as keeping the consistency: convergence guarantee for Consistency Models. - Weixuan Liang, En Zhu, Shengju Yu, Huiying Xu, Xinzhong Zhu, Xinwang Liu:
Scalable Multiple Kernel Clustering: Learning Clustering Structure from Expectation. - Tom Bewley, Salim I. Amoukou, Saumitra Mishra, Daniele Magazzeni, Manuela Veloso:
Counterfactual Metarules for Local and Global Recourse. - Jiachen Li, Qiaozi Gao, Michael Johnston, Xiaofeng Gao, Xuehai He, Hangjie Shi, Suhaila Shakiah, Reza Ghanadan, William Yang Wang:
Mastering Robot Manipulation with Multimodal Prompts through Pretraining and Multi-task Fine-tuning. - Junrong Lian, Ziyue Dong, Pengxu Wei, Wei Ke, Chang Liu, Qixiang Ye, Xiangyang Ji, Liang Lin:
Kepler codebook. - Taira Tsuchiya, Shinji Ito, Junya Honda:
Exploration by Optimization with Hybrid Regularizers: Logarithmic Regret with Adversarial Robustness in Partial Monitoring. - Kai Cheng, Xiaoxiao Long, Kaizhi Yang, Yao Yao, Wei Yin, Yuexin Ma, Wenping Wang, Xuejin Chen:
GaussianPro: 3D Gaussian Splatting with Progressive Propagation. - Jiaxiang Dong, Haixu Wu, Yuxuan Wang, Yunzhong Qiu, Li Zhang, Jianmin Wang, Mingsheng Long:
TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling. - Kai Yang, Jian Tao, Jiafei Lyu, Xiu Li:
Exploration and Anti-Exploration with Distributional Random Network Distillation. - Kai Liu, Ruohui Wang, Jianfei Gao, Kai Chen:
Differentiable Model Scaling using Differentiable Topk. - Lijia Yu, Shuang Liu, Yibo Miao, Xiao-Shan Gao, Lijun Zhang:
Generalization Bound and New Algorithm for Clean-Label Backdoor Attack. - Jan van den Brand, Zhao Song, Tianyi Zhou:
Algorithm and Hardness for Dynamic Attention Maintenance in Large Language Models. - Sungwon Han, Jinsung Yoon, Sercan Ö. Arik, Tomas Pfister:
Large Language Models Can Automatically Engineer Features for Few-Shot Tabular Learning. - Daniel McDuff, Tim Korjakow, Scott Cambo, Jesse Josua Benjamin, Jenny Lee, Yacine Jernite, Carlos Muñoz Ferrandis, Aaron Gokaslan, Alek Tarkowski, Joseph Lindley, A. Feder Cooper, Danish Contractor:
Position: Standardization of Behavioral Use Clauses is Necessary for the Adoption of Responsible Licensing of AI. - Siyuan Li, Zedong Wang, Zicheng Liu, Di Wu, Cheng Tan, Jiangbin Zheng, Yufei Huang, Stan Z. Li:
VQDNA: Unleashing the Power of Vector Quantization for Multi-Species Genomic Sequence Modeling. - Zhilin Huang, Ling Yang, Xiangxin Zhou, Chujun Qin, Yijie Yu, Xiawu Zheng, Zikun Zhou, Wentao Zhang, Yu Wang, Wenming Yang:
Interaction-based Retrieval-augmented Diffusion Models for Protein-specific 3D Molecule Generation. - Ifigeneia Apostolopoulou, Benjamin Eysenbach, Frank Nielsen, Artur Dubrawski:
A Rate-Distortion View of Uncertainty Quantification. - Ruifeng Chen, Xiong-Hui Chen, Yihao Sun, Siyuan Xiao, Minhui Li, Yang Yu:
Policy-conditioned Environment Models are More Generalizable. - Xiong-Hui Chen, Junyin Ye, Hang Zhao, Yi-Chen Li, XuHui Liu, Haoran Shi, Yu-Yan Xu, Zhihao Ye, Si-Hang Yang, Yang Yu, Anqi Huang, Kai Xu, Zongzhang Zhang:
Deep Demonstration Tracing: Learning Generalizable Imitator Policy for Runtime Imitation from a Single Demonstration. - Hyomin Kim, Yunhui Jang, Jaeho Lee, Sungsoo Ahn:
Hybrid Neural Representations for Spherical Data. - Hagyeong Lee, Minkyu Kim, Jun-Hyuk Kim, Seungeon Kim, Dokwan Oh, Jaeho Lee:
Neural Image Compression with Text-guided Encoding for both Pixel-level and Perceptual Fidelity. - Yingyi Chen, Qinghua Tao, Francesco Tonin, Johan A. K. Suykens:
Self-Attention through Kernel-Eigen Pair Sparse Variational Gaussian Processes. - Wei Li, Hehe Fan, Yongkang Wong, Yi Yang, Mohan S. Kankanhalli:
Improving Context Understanding in Multimodal Large Language Models via Multimodal Composition Learning. - Wanteng Ma, Dong Xia, Jiashuo Jiang:
High-dimensional Linear Bandits with Knapsacks. - Eliahu Horwitz, Jonathan Kahana, Yedid Hoshen:
Recovering the Pre-Fine-Tuning Weights of Generative Models. - Xudong Li, Timin Gao, Runze Hu, Yan Zhang, Shengchuan Zhang, Xiawu Zheng, Jingyuan Zheng, Yunhang Shen, Ke Li, Yutao Liu, Pingyang Dai, Rongrong Ji:
Adaptive Feature Selection for No-Reference Image Quality Assessment by Mitigating Semantic Noise Sensitivity. - Egor Shulgin, Peter Richtárik:
Towards a Better Theoretical Understanding of Independent Subnetwork Training. - Chenjia Bai, Rushuai Yang, Qiaosheng Zhang, Kang Xu, Yi Chen, Ting Xiao, Xuelong Li:
Constrained Ensemble Exploration for Unsupervised Skill Discovery. - Zongxin Yang, Guikun Chen, Xiaodi Li, Wenguan Wang, Yi Yang:
DoraemonGPT: Toward Understanding Dynamic Scenes with Large Language Models (Exemplified as A Video Agent). - Guanghe Li, Yixiang Shan, Zhengbang Zhu, Ting Long, Weinan Zhang:
DiffStitch: Boosting Offline Reinforcement Learning with Diffusion-based Trajectory Stitching. - Zhuo Huang, Chang Liu, Yinpeng Dong, Hang Su, Shibao Zheng, Tongliang Liu:
Machine Vision Therapy: Multimodal Large Language Models Can Enhance Visual Robustness via Denoising In-Context Learning. - Seongsu Kim, Sungsoo Ahn:
Gaussian Plane-Wave Neural Operator for Electron Density Estimation. - Hyunseok Oh, Youngki Lee:
Sign Gradient Descent-based Neuronal Dynamics: ANN-to-SNN Conversion Beyond ReLU Network. - Yi Ma, Jianye Hao, Hebin Liang, Chenjun Xiao:
Rethinking Decision Transformer via Hierarchical Reinforcement Learning. - Haoqi Wu, Wenjing Fang, Yancheng Zheng, Junming Ma, Jin Tan, Lei Wang:
Ditto: Quantization-aware Secure Inference of Transformers upon MPC. - Yehui Tang, Kai Han, Fangcheng Liu, Yunsheng Ni, Yuchuan Tian, Zheyuan Bai, Yi-Qi Hu, Sichao Liu, Shangling Jui, Yunhe Wang:
Rethinking Optimization and Architecture for Tiny Language Models. - Buyun Zhang, Liang Luo, Yuxin Chen, Jade Nie, Xi Liu, Shen Li, Yanli Zhao, Yuchen Hao, Yantao Yao, Ellie Dingqiao Wen, Jongsoo Park, Maxim Naumov, Wenlin Chen:
Wukong: Towards a Scaling Law for Large-Scale Recommendation. - Antoine Dedieu, Wolfgang Lehrach, Guangyao Zhou, Dileep George, Miguel Lázaro-Gredilla:
Learning Cognitive Maps from Transformer Representations for Efficient Planning in Partially Observed Environments. - Yiyou Sun, Junjie Hu, Wei Cheng, Haifeng Chen:
DFA-RAG: Conversational Semantic Router for Large Language Model with Definite Finite Automaton. - Shiv Shankar, Ritwik Sinha, Madalina Fiterau:
On Online Experimentation without Device Identifiers. - Logan Murphy, Kaiyu Yang, Jialiang Sun, Zhaoyu Li, Anima Anandkumar, Xujie Si:
Autoformalizing Euclidean Geometry. - Pavlo Melnyk, Michael Felsberg, Mårten Wadenbäck, Andreas Robinson, Cuong Le:
On Learning Deep O(n)-Equivariant Hyperspheres. - Carl Hvarfner, Erik Orm Hellsten, Luigi Nardi:
Vanilla Bayesian Optimization Performs Great in High Dimensions. - Anthony Chen, Huanrui Yang, Yulu Gan, Denis A. Gudovskiy, Zhen Dong, Haofan Wang, Tomoyuki Okuno, Yohei Nakata, Kurt Keutzer, Shanghang Zhang:
Split-Ensemble: Efficient OOD-aware Ensemble via Task and Model Splitting. - Yongqiang Chen, Yatao Bian, Bo Han, James Cheng:
How Interpretable Are Interpretable Graph Neural Networks? - Xiang Li, Yinpeng Chen, Chung-Ching Lin, Hao Chen, Kai Hu, Rita Singh, Bhiksha Raj, Lijuan Wang, Zicheng Liu:
Completing Visual Objects via Bridging Generation and Segmentation. - Haoyu Ma, Jialong Wu, Ningya Feng, Chenjun Xiao, Dong Li, Jianye Hao, Jianmin Wang, Mingsheng Long:
HarmonyDream: Task Harmonization Inside World Models. - Dapeng Hu, Jian Liang, Xinchao Wang, Chuan-Sheng Foo:
Pseudo-Calibration: Improving Predictive Uncertainty Estimation in Unsupervised Domain Adaptation. - Deliang Wei, Peng Chen, Fang Li:
Learning Pseudo-Contractive Denoisers for Inverse Problems. - Shunlin Lu, Ling-Hao Chen, Ailing Zeng, Jing Lin, Ruimao Zhang, Lei Zhang, Heung-Yeung Shum:
HumanTOMATO: Text-aligned Whole-body Motion Generation. - Jifei Luo, Hantao Yao, Changsheng Xu:
Cluster-Aware Similarity Diffusion for Instance Retrieval. - Xueliang Zhao, Wenda Li, Lingpeng Kong:
Subgoal-based Demonstration Learning for Formal Theorem Proving. - Mingzhou Liu, Xinwei Sun, Yu Qiao, Yizhou Wang:
Causal Discovery via Conditional Independence Testing with Proxy Variables. - Jiawei Huang, Niao He, Andreas Krause:
Model-Based RL for Mean-Field Games is not Statistically Harder than Single-Agent RL. - Chen Zhang, Qiang He, Yuan Zhou, Elvis S. Liu, Hong Wang, Jian Zhao, Yang Wang:
Advancing DRL Agents in Commercial Fighting Games: Training, Integration, and Agent-Human Alignment. - Po-An Wang, Kaito Ariu, Alexandre Proutière:
On Universally Optimal Algorithms for A/B Testing. - Yi Yu, Yufei Wang, Song Xia, Wenhan Yang, Shijian Lu, Yap-Peng Tan, Alex C. Kot:
Purify Unlearnable Examples via Rate-Constrained Variational Autoencoders. - Chenze Shao, Fandong Meng, Yijin Liu, Jie Zhou:
Language Generation with Strictly Proper Scoring Rules. - Yihao Liu, Xiangyu Chen, Xianzheng Ma, Xintao Wang, Jiantao Zhou, Yu Qiao, Chao Dong:
Unifying Image Processing as Visual Prompting Question Answering. - Xiangjian Jiang, Andrei Margeloiu, Nikola Simidjievski, Mateja Jamnik:
ProtoGate: Prototype-based Neural Networks with Global-to-local Feature Selection for Tabular Biomedical Data. - Nikita Doikov, Sebastian U. Stich, Martin Jaggi:
Spectral Preconditioning for Gradient Methods on Graded Non-convex Functions. - Yong Liu, Haoran Zhang, Chenyu Li, Xiangdong Huang, Jianmin Wang, Mingsheng Long:
Timer: Generative Pre-trained Transformers Are Large Time Series Models. - Guohao Shen:
Exploring the Complexity of Deep Neural Networks through Functional Equivalence. - Yuzi Yan, Jialian Li, Yipin Zhang, Dong Yan:
Exploring the LLM Journey from Cognition to Expression with Linear Representations. - Zelin Zang, Hao Luo, Kai Wang, Panpan Zhang, Fan Wang, Stan Z. Li, Yang You:
DiffAug: Enhance Unsupervised Contrastive Learning with Domain-Knowledge-Free Diffusion-based Data Augmentation. - Zhenyu He, Guhao Feng, Shengjie Luo, Kai Yang, Liwei Wang, Jingjing Xu, Zhi Zhang, Hongxia Yang, Di He:
Two Stones Hit One Bird: Bilevel Positional Encoding for Better Length Extrapolation. - Han Fang, Zhihao Song, Paul Weng, Yutong Ban:
INViT: A Generalizable Routing Problem Solver with Invariant Nested View Transformer. - Kai Yan, Alexander G. Schwing, Yu-Xiong Wang:
Offline Imitation from Observation via Primal Wasserstein State Occupancy Matching. - Jinyang Guo, Jianyu Wu, Zining Wang, Jiaheng Liu, Ge Yang, Yifu Ding, Ruihao Gong, Haotong Qin, Xianglong Liu:
Compressing Large Language Models by Joint Sparsification and Quantization. - Hyeongjin Kim, Sangwon Kim, Dasom Ahn, Jong Taek Lee, Byoung Chul Ko:
Scene Graph Generation Strategy with Co-occurrence Knowledge and Learnable Term Frequency. - Linan Yue, Qi Liu, Weibo Gao, Ye Liu, Kai Zhang, Yichao Du, Li Wang, Fangzhou Yao:
Federated Self-Explaining GNNs with Anti-shortcut Augmentations. - Bin Zhang, Hangyu Mao, Lijuan Li, Zhiwei Xu, Dapeng Li, Rui Zhao, Guoliang Fan:
Sequential Asynchronous Action Coordination in Multi-Agent Systems: A Stackelberg Decision Transformer Approach. - Inwoo Hwang, Yunhyeok Kwak, Suhyung Choi, Byoung-Tak Zhang, Sanghack Lee:
Fine-Grained Causal Dynamics Learning with Quantization for Improving Robustness in Reinforcement Learning. - Weijie Tu, Weijian Deng, Dylan Campbell, Stephen Gould, Tom Gedeon:
An Empirical Study Into What Matters for Calibrating Vision-Language Models. - Luyuan Xie, Manqing Lin, Tianyu Luan, Cong Li, Yuejian Fang, Qingni Shen, Zhonghai Wu:
MH-pFLID: Model Heterogeneous personalized Federated Learning via Injection and Distillation for Medical Data Analysis. - Deng-Bao Wang, Min-Ling Zhang:
Calibration Bottleneck: Over-compressed Representations are Less Calibratable. - Xinlei Niu, Christian Walder, Jing Zhang, Charles Patrick Martin:
Latent Optimal Paths by Gumbel Propagation for Variational Bayesian Dynamic Programming. - Shitong Luo, Wenhao Gao, Zuofan Wu, Jian Peng, Connor W. Coley, Jianzhu Ma:
Projecting Molecules into Synthesizable Chemical Spaces. - Marius Lindauer, Florian Karl, Anne Klier, Julia Moosbauer, Alexander Tornede, Andreas Müller, Frank Hutter, Matthias Feurer, Bernd Bischl:
Position: A Call to Action for a Human-Centered AutoML Paradigm. - Arnab Bhattacharyya, Sutanu Gayen, Kuldeep S. Meel, Dimitrios Myrisiotis, A. Pavan, N. V. Vinodchandran:
Total Variation Distance Meets Probabilistic Inference. - Guanchu Wang, Yu-Neng Chuang, Fan Yang, Mengnan Du, Chia-Yuan Chang, Shaochen Zhong, Zirui Liu, Zhaozhuo Xu, Kaixiong Zhou, Xuanting Cai, Xia Hu:
TVE: Learning Meta-attribution for Transferable Vision Explainer. - Le Yu, Bowen Yu, Haiyang Yu, Fei Huang, Yongbin Li:
Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch. - Zhao-Rong Lai, Weiwen Wang:
Invariant Risk Minimization Is A Total Variation Model. - David Rügamer, Chris Kolb, Tobias Weber, Lucas Kook, Thomas Nagler:
Generalizing Orthogonalization for Models with Non-Linearities. - Lujun Li, Yufan Bao, Peijie Dong, Chuanguang Yang, Anggeng Li, Wenhan Luo, Qifeng Liu, Wei Xue, Yike Guo:
DetKDS: Knowledge Distillation Search for Object Detectors. - Dachun Kai, Jiayao Lu, Yueyi Zhang, Xiaoyan Sun:
EvTexture: Event-driven Texture Enhancement for Video Super-Resolution. - Zhenmei Shi, Junyi Wei, Zhuoyan Xu, Yingyu Liang:
Why Larger Language Models Do In-context Learning Differently? - Sanjoy Dasgupta, Eduardo Sany Laber:
New Bounds on the Cohesion of Complete-link and Other Linkage Methods for Agglomerative Clustering. - Xutong Liu, Siwei Wang, Jinhang Zuo, Han Zhong, Xuchuang Wang, Zhiyong Wang, Shuai Li, Mohammad Hajiesmaili, John C. S. Lui, Wei Chen:
Combinatorial Multivariant Multi-Armed Bandits with Applications to Episodic Reinforcement Learning and Beyond. - Haoning Wu, Zicheng Zhang, Weixia Zhang, Chaofeng Chen, Liang Liao, Chunyi Li, Yixuan Gao, Annan Wang, Erli Zhang, Wenxiu Sun, Qiong Yan, Xiongkuo Min, Guangtao Zhai, Weisi Lin:
Q-Align: Teaching LMMs for Visual Scoring via Discrete Text-Defined Levels. - Yuke Li, Guangyi Chen, Ben Abramowitz, Stefano Anzellotti, Donglai Wei:
Learning Causal Domain-Invariant Temporal Dynamics for Few-Shot Action Recognition. - Yuan Gao, Shuguo Jiang, Moran Li, Jin-Gang Yu, Gui-Song Xia:
DMTG: One-Shot Differentiable Multi-Task Grouping. - Bor-Jiun Lin, Chun-Yi Lee:
HGAP: Boosting Permutation Invariant and Permutation Equivariant in Multi-Agent Reinforcement Learning via Graph Attention Network. - Jie Xiao, Kai Zhu, Han Zhang, Zhiheng Liu, Yujun Shen, Zhantao Yang, Ruili Feng, Yu Liu, Xueyang Fu, Zheng-Jun Zha:
CCM: Real-Time Controllable Visual Content Creation Using Text-to-Image Consistency Models. - Zhaoliang Wan, Yonggen Ling, Senlin Yi, Lu Qi, Wang Wei Lee, Minglei Lu, Sicheng Yang, Xiao Teng, Peng Lu, Xu Yang, Ming-Hsuan Yang, Hui Cheng:
VinT-6D: A Large-Scale Object-in-hand Dataset from Vision, Touch and Proprioception. - Yuchen Zhang, Tianle Zhang, Kai Wang, Ziyao Guo, Yuxuan Liang, Xavier Bresson, Wei Jin, Yang You:
Navigating Complexity: Toward Lossless Graph Condensation via Expanding Window Matching. - Trung X. Pham, Kang Zhang, Chang D. Yoo:
Cross-view Masked Diffusion Transformers for Person Image Synthesis. - Tarun Kalluri, Bodhisattwa Prasad Majumder, Manmohan Chandraker:
Tell, Don't Show: Language Guidance Eases Transfer Across Domains in Images and Videos. - Jianan Zhou, Zhiguang Cao, Yaoxin Wu, Wen Song, Yining Ma, Jie Zhang, Chi Xu:
MVMoE: Multi-Task Vehicle Routing Solver with Mixture-of-Experts. - Lei Wang, Xiuyuan Yuan, Tom Gedeon, Liang Zheng:
Taylor Videos for Action Recognition. - Andrew Lowy, Zeman Li, Tianjian Huang, Meisam Razaviyayn:
Optimal Differentially Private Model Training with Public Data. - Zhicheng Zheng, Xin Yan, Zhenfang Chen, Jingzhou Wang, Qin Zhi Eddie Lim, Joshua B. Tenenbaum, Chuang Gan:
ContPhy: Continuum Physical Concept Learning and Reasoning from Videos. - Zhanpeng Zeng, Karthikeyan Sankaralingam, Vikas Singh:
IM-Unpack: Training and Inference with Arbitrarily Low Precision Integers. - Nathaniel Li, Alexander Pan, Anjali Gopal, Summer Yue, Daniel Berrios, Alice Gatti, Justin D. Li, Ann-Kathrin Dombrowski, Shashwat Goel, Gabriel Mukobi, Nathan Helm-Burger, Rassin Lababidi, Lennart Justen, Andrew B. Liu, Michael Chen, Isabelle Barrass, Oliver Zhang, Xiaoyuan Zhu, Rishub Tamirisa, Bhrugu Bharathi, Ariel Herbert-Voss, Cort B. Breuer, Andy Zou, Mantas Mazeika, Zifan Wang, Palash Oswal, Weiran Lin, Adam A. Hunt, Justin Tienken-Harder, Kevin Y. Shih, Kemper Talley, John Guan, Ian Steneker, David Campbell, Brad Jokubaitis, Steven Basart, Stephen Fitz, Ponnurangam Kumaraguru, Kallol Krishna Karmakar, Uday Kiran Tupakula, Vijay Varadharajan, Yan Shoshitaishvili, Jimmy Ba, Kevin M. Esvelt, Alexandr Wang, Dan Hendrycks:
The WMDP Benchmark: Measuring and Reducing Malicious Use with Unlearning. - Ziyang Song, Jinxi Li, Bo Yang:
OSN: Infinite Representations of Dynamic 3D Scenes from Monocular Videos. - Lanxiang Xing, Haixu Wu, Yuezhou Ma, Jianmin Wang, Mingsheng Long:
HelmFluid: Learning Helmholtz Dynamics for Interpretable Fluid Prediction. - Shaokun Zhang, Jieyu Zhang, Jiale Liu, Linxin Song, Chi Wang, Ranjay Krishna, Qingyun Wu:
Offline Training of Language Model Agents with Functions as Learnable Weights. - Liyan Chen, Yan Zheng, Yang Li, Lohit Anirudh Jagarapu, Haoxiang Li, Hao Kang, Gang Hua, Qixing Huang:
Enhancing Implicit Shape Generators Using Topological Regularizations. - Long Zhao, Nitesh Bharadwaj Gundavarapu, Liangzhe Yuan, Hao Zhou, Shen Yan, Jennifer J. Sun, Luke Friedman, Rui Qian, Tobias Weyand, Yue Zhao, Rachel Hornung, Florian Schroff, Ming-Hsuan Yang, David A. Ross, Huisheng Wang, Hartwig Adam, Mikhail Sirotenko, Ting Liu, Boqing Gong:
VideoPrism: A Foundational Visual Encoder for Video Understanding. - Alaaeldin El-Nouby, Michal Klein, Shuangfei Zhai, Miguel Ángel Bautista, Vaishaal Shankar, Alexander T. Toshev, Joshua M. Susskind, Armand Joulin:
Scalable Pre-training of Large Autoregressive Image Models. - Guangyi Liu, Yu Wang, Zeyu Feng, Qiyu Wu, Liping Tang, Yuan Gao, Zhen Li, Shuguang Cui, Julian J. McAuley, Zichao Yang, Eric P. Xing, Zhiting Hu:
Unified Generation, Reconstruction, and Representation: Generalized Diffusion with Adaptive Latent Encoding-Decoding. - Yu Wang, Yifan Gao, Xiusi Chen, Haoming Jiang, Shiyang Li, Jingfeng Yang, Qingyu Yin, Zheng Li, Xian Li, Bing Yin, Jingbo Shang, Julian J. McAuley:
MEMORYLLM: Towards Self-Updatable Large Language Models. - Nathaniel Cohen, Vladimir Kulikov, Matan Kleiner, Inbar Huberman-Spiegelglas, Tomer Michaeli:
Slicedit: Zero-Shot Video Editing With Text-to-Image Diffusion Models Using Spatio-Temporal Slices. - Ulyana Piterbarg, Lerrel Pinto, Rob Fergus:
diff History for Neural Language Agents. - Shashank Agnihotri, Steffen Jung, Margret Keuper:
CosPGD: an efficient white-box adversarial attack for pixel-wise prediction tasks. - Nicolas Alder, Ralf Herbrich:
Energy-Efficient Gaussian Processes Using Low-Precision Arithmetic. - Fabian Schaipp, Ruben Ohana, Michael Eickenberg, Aaron Defazio, Robert M. Gower:
MoMo: Momentum Models for Adaptive Learning Rates. - Alex Ayoub, Kaiwen Wang, Vincent Liu, Samuel Robertson, James McInerney, Dawen Liang, Nathan Kallus, Csaba Szepesvári:
Switching the Loss Reduces the Cost in Batch Reinforcement Learning. - Yifan Xu, Xiaoshan Yang, Yaguang Song, Changsheng Xu:
Libra: Building Decoupled Vision System on Large Language Models. - Lihao Wang, Zhaofei Yu:
Autaptic Synaptic Circuit Enhances Spatio-temporal Predictive Learning of Spiking Neural Networks. - Jikun Kang, Romain Laroche, Xingdi Yuan, Adam Trischler, Xue Liu, Jie Fu:
Think Before You Act: Decision Transformers with Working Memory. - Yujia Liu, Tong Bu, Jianhao Ding, Zecheng Hao, Tiejun Huang, Zhaofei Yu:
Enhancing Adversarial Robustness in SNNs with Sparse Gradients. - Sanae Lotfi, Marc Anton Finzi, Yilun Kuang, Tim G. J. Rudner, Micah Goldblum, Andrew Gordon Wilson:
Non-Vacuous Generalization Bounds for Large Language Models. - Leo Klarner, Tim G. J. Rudner, Garrett M. Morris, Charlotte M. Deane, Yee Whye Teh:
Context-Guided Diffusion for Out-of-Distribution Molecular and Protein Design. - Theodor Westny, Arman Mohammadi, Daniel Jung, Erik Frisk:
Stability-Informed Initialization of Neural Ordinary Differential Equations. - Cheng Han, Yawen Lu, Guohao Sun, James Chenhao Liang, Zhiwen Cao, Qifan Wang, Qiang Guan, Sohail A. Dianat, Raghuveer Rao, Tong Geng, Zhiqiang Tao, Dongfang Liu:
Prototypical Transformer As Unified Motion Learners. - Conghan Yue, Zhengwei Peng, Junlong Ma, Shiyan Du, Pengxu Wei, Dongyu Zhang:
Image Restoration Through Generalized Ornstein-Uhlenbeck Bridge. - Haoyu Deng, Zijing Xu, Yule Duan, Xiao Wu, Wenjie Shu, Liang-Jian Deng:
Exploring the Low-Pass Filtering Behavior in Image Super-Resolution. - Candi Zheng, Yuan Lan:
Characteristic Guidance: Non-linear Correction for Diffusion Model at Large Guidance Scale. - Zixiang Zhao, Lilun Deng, Haowen Bai, Yukun Cui, Zhipeng Zhang, Yulun Zhang, Haotong Qin, Dongdong Chen, Jiangshe Zhang, Peng Wang, Luc Van Gool:
Image Fusion via Vision-Language Model. - Xinhang Wan, Jiyuan Liu, Xinwang Liu, Yi Wen, Hao Yu, Siwei Wang, Shengju Yu, Tianjiao Wan, Jun Wang, En Zhu:
Decouple then Classify: A Dynamic Multi-view Labeling Strategy with Shared and Specific Information. - Youssef Allouah, Anastasia Koloskova, Aymane El Firdoussi, Martin Jaggi, Rachid Guerraoui:
The Privacy Power of Correlated Noise in Decentralized Learning. - Yiwei Ma, Zhekai Lin, Jiayi Ji, Yijun Fan, Xiaoshuai Sun, Rongrong Ji:
X-Oscar: A Progressive Framework for High-quality Text-guided 3D Animatable Avatar Generation. - Yuheng Ma, Ke Jia, Hanfang Yang:
Better Locally Private Sparse Estimation Given Multiple Samples Per User. - Dan Zhang, Jingjing Wang, Feng Luo:
Directly Denoising Diffusion Models. - Pengying Wu, Yao Mu, Bingxian Wu, Yi Hou, Ji Ma, Shanghang Zhang, Chang Liu:
VoroNav: Voronoi-based Zero-shot Object Navigation with Large Language Model. - Xiaole Tang, Xin Hu, Xiang Gu, Jian Sun:
Residual-Conditioned Optimal Transport: Towards Structure-Preserving Unpaired and Paired Image Restoration. - Rui Li, Chaozhuo Li, Yanming Shen, Zeyu Zhang, Xu Chen:
Generalizing Knowledge Graph Embedding with Universal Orthogonal Parameterization. - Long Qian, Juncheng Li, Yu Wu, Yaobo Ye, Hao Fei, Tat-Seng Chua, Yueting Zhuang, Siliang Tang:
Momentor: Advancing Video Large Language Model with Fine-Grained Temporal Reasoning. - Yibo Zhou, Hai-Miao Hu, Yirong Xiang, Xiaokang Zhang, Haotian Wu:
Pedestrian Attribute Recognition as Label-balanced Multi-label Learning. - Ritwik Gupta, Shufan Li, Tyler Zhu, Jitendra Malik, Trevor Darrell, Karttikeya Mangalam:
xT: Nested Tokenization for Larger Context in Large Images. - Yu Zhang, Qi Zhang, Zixuan Gong, Yiwei Shi, Yepeng Liu, Duoqian Miao, Yang Liu, Ke Liu, Kun Yi, Wei Fan, Liang Hu, Changwei Wang:
MLIP: Efficient Multi-Perspective Language-Image Pretraining with Exhaustive Data Utilization.