Joshua Ainslie
2024

- [c22] Shanda Li, Chong You, Guru Guruganesh, Joshua Ainslie, Santiago Ontañón, Manzil Zaheer, Sumit Sanghai, Yiming Yang, Sanjiv Kumar, Srinadh Bhojanapalli: Functional Interpolation for Relative Positions improves Long Context Transformers. ICLR 2024
- [c21] Yury Zemlyanskiy, Michiel de Jong, Luke Vilnis, Santiago Ontañón, William W. Cohen, Sumit Sanghai, Joshua Ainslie: MEMORY-VQ: Compression for Tractable Internet-Scale Memory. NAACL (Short Papers) 2024: 737-744

2023

- [c20] Chen-Yu Lee, Chun-Liang Li, Hao Zhang, Timothy Dozat, Vincent Perot, Guolong Su, Xiang Zhang, Kihyuk Sohn, Nikolay Glushnev, Renshen Wang, Joshua Ainslie, Shangbang Long, Siyang Qin, Yasuhisa Fujii, Nan Hua, Tomas Pfister: FormNetV2: Multimodal Graph Contrastive Learning for Form Document Information Extraction. ACL (1) 2023: 9011-9026
- [c19] Michiel de Jong, Yury Zemlyanskiy, Joshua Ainslie, Nicholas FitzGerald, Sumit Sanghai, Fei Sha, William W. Cohen: FiDO: Fusion-in-Decoder optimized for stronger performance and faster inference. ACL (Findings) 2023: 11534-11547
- [c18] Andrea Burns, Krishna Srinivasan, Joshua Ainslie, Geoff Brown, Bryan A. Plummer, Kate Saenko, Jianmo Ni, Mandy Guo: A Suite of Generative Tasks for Multi-Level Multimodal Webpage Understanding. EMNLP 2023: 1917-1947
- [c17] Joshua Ainslie, James Lee-Thorp, Michiel de Jong, Yury Zemlyanskiy, Federico Lebrón, Sumit Sanghai: GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints. EMNLP 2023: 4895-4901
- [c16] Joshua Ainslie, Tao Lei, Michiel de Jong, Santiago Ontañón, Siddhartha Brahma, Yury Zemlyanskiy, David C. Uthus, Mandy Guo, James Lee-Thorp, Yi Tay, Yun-Hsuan Sung, Sumit Sanghai: CoLT5: Faster Long-Range Transformers with Conditional Computation. EMNLP 2023: 5085-5100
- [c15] David C. Uthus, Santiago Ontañón, Joshua Ainslie, Mandy Guo: mLongT5: A Multilingual and Efficient Text-To-Text Transformer for Longer Sequences. EMNLP (Findings) 2023: 9380-9386
- [c14] Aran Komatsuzaki, Joan Puigcerver, James Lee-Thorp, Carlos Riquelme Ruiz, Basil Mustafa, Joshua Ainslie, Yi Tay, Mostafa Dehghani, Neil Houlsby: Sparse Upcycling: Training Mixture-of-Experts from Dense Checkpoints. ICLR 2023
- [c13] Michiel de Jong, Yury Zemlyanskiy, Nicholas FitzGerald, Joshua Ainslie, Sumit Sanghai, Fei Sha, William W. Cohen: Pre-computed memory or on-the-fly encoding? A hybrid approach to retrieval augmentation makes the most of your compute. ICML 2023: 7329-7342
- [c12] Tao Lei, Junwen Bai, Siddhartha Brahma, Joshua Ainslie, Kenton Lee, Yanqi Zhou, Nan Du, Vincent Y. Zhao, Yuexin Wu, Bo Li, Yu Zhang, Ming-Wei Chang: Conditional Adapters: Parameter-efficient Transfer Learning with Fast Inference. NeurIPS 2023
- [i27] Michiel de Jong, Yury Zemlyanskiy, Nicholas FitzGerald, Joshua Ainslie, Sumit Sanghai, Fei Sha, William W. Cohen: Pre-computed memory or on-the-fly encoding? A hybrid approach to retrieval augmentation makes the most of your compute. CoRR abs/2301.10448 (2023)
- [i26] Joshua Ainslie, Tao Lei, Michiel de Jong, Santiago Ontañón, Siddhartha Brahma, Yury Zemlyanskiy, David C. Uthus, Mandy Guo, James Lee-Thorp, Yi Tay, Yun-Hsuan Sung, Sumit Sanghai: CoLT5: Faster Long-Range Transformers with Conditional Computation. CoRR abs/2303.09752 (2023)
- [i25] Tao Lei, Junwen Bai, Siddhartha Brahma, Joshua Ainslie, Kenton Lee, Yanqi Zhou, Nan Du, Vincent Y. Zhao, Yuexin Wu, Bo Li, Yu Zhang, Ming-Wei Chang: Conditional Adapters: Parameter-efficient Transfer Learning with Fast Inference. CoRR abs/2304.04947 (2023)
- [i24] Chen-Yu Lee, Chun-Liang Li, Hao Zhang, Timothy Dozat, Vincent Perot, Guolong Su, Xiang Zhang, Kihyuk Sohn, Nikolai Glushnev, Renshen Wang, Joshua Ainslie, Shangbang Long, Siyang Qin, Yasuhisa Fujii, Nan Hua, Tomas Pfister: FormNetV2: Multimodal Graph Contrastive Learning for Form Document Information Extraction. CoRR abs/2305.02549 (2023)
- [i23] Andrea Burns, Krishna Srinivasan, Joshua Ainslie, Geoff Brown, Bryan A. Plummer, Kate Saenko, Jianmo Ni, Mandy Guo: A Suite of Generative Tasks for Multi-Level Multimodal Webpage Understanding. CoRR abs/2305.03668 (2023)
- [i22] Andrea Burns, Krishna Srinivasan, Joshua Ainslie, Geoff Brown, Bryan A. Plummer, Kate Saenko, Jianmo Ni, Mandy Guo: WikiWeb2M: A Page-Level Multimodal Wikipedia Dataset. CoRR abs/2305.05432 (2023)
- [i21] David C. Uthus, Santiago Ontañón, Joshua Ainslie, Mandy Guo: mLongT5: A Multilingual and Efficient Text-To-Text Transformer for Longer Sequences. CoRR abs/2305.11129 (2023)
- [i20] Joshua Ainslie, James Lee-Thorp, Michiel de Jong, Yury Zemlyanskiy, Federico Lebrón, Sumit Sanghai: GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints. CoRR abs/2305.13245 (2023)
- [i19] Michiel de Jong, Yury Zemlyanskiy, Nicholas FitzGerald, Sumit Sanghai, William W. Cohen, Joshua Ainslie: GLIMMER: generalized late-interaction memory reranker. CoRR abs/2306.10231 (2023)
- [i18] Yury Zemlyanskiy, Michiel de Jong, Luke Vilnis, Santiago Ontañón, William W. Cohen, Sumit Sanghai, Joshua Ainslie: MEMORY-VQ: Compression for Tractable Internet-Scale Memory. CoRR abs/2308.14903 (2023)
- [i17] Shanda Li, Chong You, Guru Guruganesh, Joshua Ainslie, Santiago Ontañón, Manzil Zaheer, Sumit Sanghai, Yiming Yang, Sanjiv Kumar, Srinadh Bhojanapalli: Functional Interpolation for Relative Positions Improves Long Context Transformers. CoRR abs/2310.04418 (2023)

2022

- [c11] Santiago Ontañón, Joshua Ainslie, Zachary Fisher, Vaclav Cvicek: Making Transformers Solve Compositional Tasks. ACL (1) 2022: 3591-3607
- [c10] Chen-Yu Lee, Chun-Liang Li, Timothy Dozat, Vincent Perot, Guolong Su, Nan Hua, Joshua Ainslie, Renshen Wang, Yasuhisa Fujii, Tomas Pfister: FormNet: Structural Encoding beyond Sequential Modeling in Form Document Information Extraction. ACL (1) 2022: 3735-3754
- [c9] Yury Zemlyanskiy, Michiel de Jong, Joshua Ainslie, Panupong Pasupat, Peter Shaw, Linlu Qiu, Sumit Sanghai, Fei Sha: Generate-and-Retrieve: Use Your Predictions to Improve Retrieval for Semantic Parsing. COLING 2022: 4946-4951
- [c8] James Lee-Thorp, Joshua Ainslie: Sparse Mixers: Combining MoE and Mixing to build a more efficient BERT. EMNLP (Findings) 2022: 58-75
- [c7] Mandy Guo, Joshua Ainslie, David C. Uthus, Santiago Ontañón, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang: LongT5: Efficient Text-To-Text Transformer for Long Sequences. NAACL-HLT (Findings) 2022: 724-736
- [c6] James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontañón: FNet: Mixing Tokens with Fourier Transforms. NAACL-HLT 2022: 4296-4313
- [i16] Chen-Yu Lee, Chun-Liang Li, Timothy Dozat, Vincent Perot, Guolong Su, Nan Hua, Joshua Ainslie, Renshen Wang, Yasuhisa Fujii, Tomas Pfister: FormNet: Structural Encoding beyond Sequential Modeling in Form Document Information Extraction. CoRR abs/2203.08411 (2022)
- [i15] Santiago Ontañón, Joshua Ainslie, Vaclav Cvicek, Zachary Fisher: LogicInference: A New Dataset for Teaching Logical Inference to seq2seq Models. CoRR abs/2203.15099 (2022)
- [i14] James Lee-Thorp, Joshua Ainslie: Sparse Mixers: Combining MoE and Mixing to build a more efficient BERT. CoRR abs/2205.12399 (2022)
- [i13] Yury Zemlyanskiy, Michiel de Jong, Joshua Ainslie, Panupong Pasupat, Peter Shaw, Linlu Qiu, Sumit Sanghai, Fei Sha: Generate-and-Retrieve: use your predictions to improve retrieval for semantic parsing. CoRR abs/2209.14899 (2022)
- [i12] Aran Komatsuzaki, Joan Puigcerver, James Lee-Thorp, Carlos Riquelme Ruiz, Basil Mustafa, Joshua Ainslie, Yi Tay, Mostafa Dehghani, Neil Houlsby: Sparse Upcycling: Training Mixture-of-Experts from Dense Checkpoints. CoRR abs/2212.05055 (2022)
- [i11] Michiel de Jong, Yury Zemlyanskiy, Joshua Ainslie, Nicholas FitzGerald, Sumit Sanghai, Fei Sha, William W. Cohen: FiDO: Fusion-in-Decoder optimized for stronger performance and faster inference. CoRR abs/2212.08153 (2022)

2021

- [c5] Juyong Kim, Pradeep Ravikumar, Joshua Ainslie, Santiago Ontañón: Improving Compositional Generalization in Classification Tasks via Structure Annotations. ACL/IJCNLP (2) 2021: 637-645
- [c4] Ruining He, Anirudh Ravula, Bhargav Kanagal, Joshua Ainslie: RealFormer: Transformer Likes Residual Attention. ACL/IJCNLP (Findings) 2021: 929-943
- [c3] Yury Zemlyanskiy, Joshua Ainslie, Michiel de Jong, Philip Pham, Ilya Eckstein, Fei Sha: ReadTwice: Reading Very Large Documents with Memories. NAACL-HLT 2021: 5189-5195
- [i10] James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontañón: FNet: Mixing Tokens with Fourier Transforms. CoRR abs/2105.03824 (2021)
- [i9] Yury Zemlyanskiy, Joshua Ainslie, Michiel de Jong, Philip Pham, Ilya Eckstein, Fei Sha: ReadTwice: Reading Very Large Documents with Memories. CoRR abs/2105.04241 (2021)
- [i8] Juyong Kim, Pradeep Ravikumar, Joshua Ainslie, Santiago Ontañón: Improving Compositional Generalization in Classification Tasks via Structure Annotations. CoRR abs/2106.10434 (2021)
- [i7] Santiago Ontañón, Joshua Ainslie, Vaclav Cvicek, Zachary Fisher: Making Transformers Solve Compositional Tasks. CoRR abs/2108.04378 (2021)
- [i6] Gurmeet Manku, James Lee-Thorp, Bhargav Kanagal, Joshua Ainslie, Jingchen Feng, Zach Pearson, Ebenezer Anjorin, Sudeep Gandhe, Ilya Eckstein, Jim Rosswog, Sumit Sanghai, Michael Pohl, Larry Adams, D. Sivakumar: ShopTalk: A System for Conversational Faceted Search. CoRR abs/2109.00702 (2021)
- [i5] Luana Ruiz, Joshua Ainslie, Santiago Ontañón: Iterative Decoding for Compositional Generalization in Transformers. CoRR abs/2110.04169 (2021)
- [i4] Mandy Guo, Joshua Ainslie, David C. Uthus, Santiago Ontañón, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang: LongT5: Efficient Text-To-Text Transformer for Long Sequences. CoRR abs/2112.07916 (2021)

2020

- [c2] Joshua Ainslie, Santiago Ontañón, Chris Alberti, Vaclav Cvicek, Zachary Fisher, Philip Pham, Anirudh Ravula, Sumit Sanghai, Qifan Wang, Li Yang: ETC: Encoding Long and Structured Inputs in Transformers. EMNLP (1) 2020: 268-284
- [c1] Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontañón, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed: Big Bird: Transformers for Longer Sequences. NeurIPS 2020
- [i3] Joshua Ainslie, Santiago Ontañón, Chris Alberti, Philip Pham, Anirudh Ravula, Sumit Sanghai: ETC: Encoding Long and Structured Data in Transformers. CoRR abs/2004.08483 (2020)
- [i2] Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontañón, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed: Big Bird: Transformers for Longer Sequences. CoRR abs/2007.14062 (2020)
- [i1] Ruining He, Anirudh Ravula, Bhargav Kanagal, Joshua Ainslie: RealFormer: Transformer Likes Residual Attention. CoRR abs/2012.11747 (2020)
last updated on 2024-10-04 20:59 CEST by the dblp team
all metadata released as open data under CC0 1.0 license