ICLR 2026
The Fourteenth International Conference on Learning Representations
Latest Announcements
Stay updated with conference news
- The Financial Assistance Application is open until March 1.
- The Socials Application is open until March 17.
- The Volunteer Application is open until March 8.
- Visa Application Information is now available.
- The ICLR 2026 Policy for LLM Usage has been announced.
- ICLR 2026 confirmed: April 23–27, 2026 at the Riocentro Convention and Event Center, Rio de Janeiro, Brazil.
Important Dates
Key deadlines and events
Exhibitors
We thank our exhibitors for all their support!
About the Conference
The International Conference on Learning Representations (ICLR) is the premier gathering of professionals dedicated to the advancement of the branch of artificial intelligence called representation learning, but generally referred to as deep learning.
ICLR is globally renowned for presenting and publishing cutting-edge research on all aspects of deep learning used in the fields of artificial intelligence, statistics and data science, as well as important application areas such as machine vision, computational biology, speech recognition, text understanding, gaming, and robotics.
ICLR is one of the fastest-growing artificial intelligence conferences in the world. Participants at ICLR span a wide range of backgrounds, from academic and industrial researchers to entrepreneurs and engineers to graduate students and postdocs.
The rapidly developing field of deep learning is concerned with questions surrounding how we can best learn meaningful and useful representations of data. ICLR takes a broad view of the field and includes topics such as feature learning, metric learning, compositional modeling, structured prediction, reinforcement learning, and issues regarding large-scale learning and non-convex optimization.
A non-exhaustive list of relevant topics explored at the conference includes:
- Unsupervised, Semi-supervised, and Supervised Representation Learning
- Representation Learning for Planning and Reinforcement Learning
- Metric Learning and Kernel Learning
- Sparse Coding and Dimensionality Expansion
- Hierarchical Models
- Optimization for Representation Learning
- Learning Representations of Outputs or States
- Implementation Issues, Parallelization, Software Platforms, Hardware
- Applications in Vision, Audio, Speech, Natural Language Processing, Robotics, Neuroscience, or Any Other Field