Journal of Machine Learning Research, Volume 21 (2020)
- Hao Yu, Michael J. Neely:
A Low Complexity Algorithm with O(√T) Regret and O(1) Constraint Violations for Online Convex Optimization with Long Term Constraints. 1:1-1:24 - Yunlong Feng, Jun Fan, Johan A. K. Suykens:
A Statistical Learning Approach to Modal Regression. 2:1-2:35 - Xiaofan Li, Andrew B. Whinston:
A Model of Fake Data in Data-driven Analysis. 3:1-3:26 - Zhuang Ma, Zongming Ma, Hongsong Yuan:
Universal Latent Space Model Fitting for Large Networks with Edge Covariates. 4:1-4:67 - Jelena Diakonikolas, Cristóbal Guzmán:
Lower Bounds for Parallel and Randomized Convex Optimization. 5:1-5:31 - Anna V. Little, Mauro Maggioni, James M. Murphy:
Path-Based Spectral Clustering: Guarantees, Robustness to Outliers, and Fast Algorithms. 6:1-6:66 - Nikolay Manchev, Michael W. Spratling:
Target Propagation in Recurrent Neural Networks. 7:1-7:33 - Rafael M. O. Cruz, Luiz G. Hafemann, Robert Sabourin, George D. C. Cavalcanti:
DESlib: A Dynamic ensemble selection library in Python. 8:1-8:5 - José R. Berrendero, Beatriz Bueno-Larraz, Antonio Cuevas:
On Mahalanobis Distance in Functional Settings. 9:1-9:33 - Zhanrui Cai, Runze Li, Liping Zhu:
Online Sufficient Dimension Reduction Through Sliced Inverse Regression. 10:1-10:25 - T. Tony Cai, Tengyuan Liang, Alexander Rakhlin:
Weighted Message Passing and Minimum Energy Flow for Heterogeneous Stochastic Block Models with Side Information. 11:1-11:34 - Xin Tong, Lucy Xia, Jiacheng Wang, Yang Feng:
Neyman-Pearson classification: parametrics and sample size requirement. 12:1-12:48 - Mengyang Gu, Weining Shen:
Generalized probabilistic principal component analysis of correlated data. 13:1-13:41 - Víctor Blanco, Justo Puerto, Antonio M. Rodríguez-Chía:
On lp-Support Vector Machines and Multidimensional Kernels. 14:1-14:29 - Ery Arias-Castro, Adel Javanmard, Bruno Pelletier:
Perturbation Bounds for Procrustes, Classical Scaling, and Trilateration, with Applications to Manifold Learning. 15:1-15:37 - Raef Bassily, Kobbi Nissim, Uri Stemmer, Abhradeep Thakurta:
Practical Locally Private Heavy Hitters. 16:1-16:42 - Aki Vehtari, Andrew Gelman, Tuomas Sivula, Pasi Jylänki, Dustin Tran, Swupnil Sahai, Paul Blomstedt, John P. Cunningham, David Schiminovich, Christian P. Robert:
Expectation Propagation as a Way of Life: A Framework for Bayesian Inference on Partitioned Data. 17:1-17:53 - David P. Hofmeyr:
Connecting Spectral Clustering to Maximum Margins and Level Sets. 18:1-18:35 - Cheng Yong Tang, Ethan X. Fang, Yuexiao Dong:
High-Dimensional Interactions Detection with Sparse Principal Hessian Matrix. 19:1-19:25 - Junhong Lin, Volkan Cevher:
Convergences of Regularized Algorithms and Stochastic Gradient Methods with Random Projections. 20:1-20:44 - Dhruv Malik, Ashwin Pananjady, Kush Bhatia, Koulik Khamaru, Peter L. Bartlett, Martin J. Wainwright:
Derivative-Free Methods for Policy Optimization: Guarantees for Linear Quadratic Systems. 21:1-21:51 - Sandeep Kumar, Jiaxi Ying, José Vinícius de Miranda Cardoso, Daniel P. Palomar:
A Unified Framework for Structured Graph Learning via Spectral Constraints. 22:1-22:60 - Jian Guo, He He, Tong He, Leonard Lausen, Mu Li, Haibin Lin, Xingjian Shi, Chenguang Wang, Junyuan Xie, Sheng Zha, Aston Zhang, Hang Zhang, Zhi Zhang, Zhongyue Zhang, Shuai Zheng, Yi Zhu:
GluonCV and GluonNLP: Deep Learning in Computer Vision and Natural Language Processing. 23:1-23:7 - Xingxiang Li, Runze Li, Zhiming Xia, Chen Xu:
Distributed Feature Screening via Componentwise Debiasing. 24:1-24:32 - Ivona Bezáková, Antonio Blanca, Zongchen Chen, Daniel Stefankovic, Eric Vigoda:
Lower Bounds for Testing Graphical Models: Colorings and Antiferromagnetic Ising Models. 25:1-25:62 - Anders Ellern Bilgrau, Carel F. W. Peeters, Poul Svante Eriksen, Martin Bøgsted, Wessel N. van Wieringen:
Targeted Fused Ridge Estimation of Inverse Covariance Matrices from Multiple High-Dimensional Data Classes. 26:1-26:52 - Sinead A. Williamson, Michael Minyi Zhang, Paul Damien:
A New Class of Time Dependent Latent Factor Models with Applications. 27:1-27:24 - Nicolás García Trillos, Zachary Kaplan, Thabo Samakhoana, Daniel Sanz-Alonso:
On the consistency of graph-based Bayesian semi-supervised learning and the scalability of sampling algorithms. 28:1-28:47 - Xin Zhang, Qing Mai, Hui Zou:
The Maximum Separation Subspace in Sufficient Dimension Reduction with Categorical Response. 29:1-29:36 - Alexander Novikov, Pavel Izmailov, Valentin Khrulkov, Michael Figurnov, Ivan V. Oseledets:
Tensor Train Decomposition on TensorFlow (T3F). 30:1-30:7 - Emmanuel Abbe, Sanjeev R. Kulkarni, Eun Jee Lee:
Generalized Nonbacktracking Bounds on the Influence. 31:1-31:36 - Mihai Cucuringu, Hemant Tyagi:
Provably robust estimation of modulo 1 samples of a smooth function with applications to phase unwrapping. 32:1-32:77 - Huan Li, Zhouchen Lin:
On the Complexity Analysis of the Primal Solutions for the Accelerated Randomized Dual Coordinate Ascent. 33:1-33:45 - Dominic Richards, Patrick Rebeschini:
Graph-Dependent Implicit Regularisation for Distributed Stochastic Subgradient Descent. 34:1-34:44 - Mathieu Blondel, André F. T. Martins, Vlad Niculae:
Learning with Fenchel-Young losses. 35:1-35:69 - Miriam R. Elman, Jessica Minnier, Xiaohui Chang, Dongseok Choi:
Noise Accumulation in High Dimensional Classification and Total Signal Index. 36:1-36:23 - Diviyan Kalainathan, Olivier Goudet, Ritik Dutta:
Causal Discovery Toolbox: Uncovering causal relationships in Python. 37:1-37:5 - Leo L. Duan:
Latent Simplex Position Model: High Dimensional Multi-view Clustering with Uncertainty Quantification. 38:1-38:25 - Saber Salehkaleybar, AmirEmad Ghassami, Negar Kiyavash, Kun Zhang:
Learning Linear Non-Gaussian Causal Models in the Presence of Latent Variables. 39:1-39:24 - Zhixin Zhou, Arash A. Amini:
Optimal Bipartite Network Clustering. 40:1-40:68 - Rune Christiansen, Jonas Peters:
Switching Regression Models and Causal Inference in the Presence of Discrete Latent Variables. 41:1-41:46 - Rudy Bunel, Jingyue Lu, Ilker Turkaslan, Philip H. S. Torr, Pushmeet Kohli, M. Pawan Kumar:
Branch and Bound for Piecewise Linear Neural Network Verification. 42:1-42:39 - Puyudi Yang, Jianbo Chen, Cho-Jui Hsieh, Jane-Ling Wang, Michael I. Jordan:
Greedy Attack and Gumbel Attack: Generating Adversarial Examples for Discrete Data. 43:1-43:36 - Peter Tiño:
Dynamical Systems as Temporal Feature Spaces. 44:1-44:42 - Brendon K. Colbert, Matthew M. Peet:
A Convex Parametrization of a New Class of Universal Kernel Functions. 45:1-45:29 - Johann Faouzi, Hicham Janati:
pyts: A Python Package for Time Series Classification. 46:1-46:6 - Wouter Kool, Herke van Hoof, Max Welling:
Ancestral Gumbel-Top-k Sampling for Sampling Without Replacement. 47:1-47:36 - Thomas Ricatte, Rémi Gilleron, Marc Tommasi:
Skill Rating for Multiplayer Games. Introducing Hypernode Graphs and their Spectral Theory. 48:1-48:18 - Hoda Eldardiry, Jennifer Neville, Ryan A. Rossi:
Ensemble Learning for Relational Data. 49:1-49:37 - Emmanuel Bacry, Martin Bompaire, Stéphane Gaïffas, Jean-François Muzy:
Sparse and low-rank multivariate Hawkes processes. 50:1-50:32 - Kuang-Yao Lee, Tianqi Liu, Bing Li, Hongyu Zhao:
Learning Causal Networks via Additive Faithfulness. 51:1-51:38 - Kamil Ciosek, Shimon Whiteson:
Expected Policy Gradients for Reinforcement Learning. 52:1-52:51 - Carson Eisenach, Florentina Bunea, Yang Ning, Claudiu Dinicu:
High-Dimensional Inference for Cluster-Based Graphical Models. 53:1-53:55 - Giannis Siglidis, Giannis Nikolentzos, Stratis Limnios, Christos Giatsidis, Konstantinos Skianis, Michalis Vazirgiannis:
GraKeL: A Graph Kernel Library in Python. 54:1-54:5 - Simon Bartels, Philipp Hennig:
Conjugate Gradients for Kernel Machines. 55:1-55:42 - Peter D. Grünwald, Nishant A. Mehta:
Fast Rates for General Unbounded Loss Functions: From ERM to Generalized Bayes. 56:1-56:80 - Fan Ma, Deyu Meng, Xuanyi Dong, Yi Yang:
Self-paced Multi-view Co-training. 57:1-57:38 - Artin Spiridonoff, Alex Olshevsky, Ioannis Ch. Paschalidis:
Robust Asynchronous Stochastic Gradient-Push: Asymptotically Optimal and Network-Independent Performance for Strongly Convex Functions. 58:1-58:47 - Salar Fattahi, Somayeh Sojoudi:
Exact Guarantees on the Absence of Spurious Local Minima for Non-negative Rank-1 Robust Principal Component Analysis. 59:1-59:51 - Mathieu Andreux, Tomás Angles, Georgios Exarchakis, Roberto Leonarduzzi, Gaspar Rochette, Louis Thiry, John Zarka, Stéphane Mallat, Joakim Andén, Eugene Belilovsky, Joan Bruna, Vincent Lostanlen, Muawiz Chaudhary, Matthew J. Hirn, Edouard Oyallon, Sixin Zhang, Carmine-Emanuele Cella, Michael Eickenberg:
Kymatio: Scattering Transforms in Python. 60:1-60:6 - Oliver Vipond:
Multiparameter Persistence Landscapes. 61:1-61:38 - Nathan Kallus:
Generalized Optimal Matching Methods for Causal Inference. 62:1-62:54 - Yu Wang, Siqi Wu, Bin Yu:
Unique Sharp Local Minimum in L1-minimization Complete Dictionary Learning. 63:1-63:52 - Eugen Pircalabelu, Gerda Claeskens:
Community-Based Group Graphical Lasso. 64:1-64:32 - Yu Liu, Kris De Brabanter:
Smoothed Nonparametric Derivative Estimation using Weighted Difference Quotients. 65:1-65:45 - Edgar Dobriban, Yue Sheng:
WONDER: Weighted One-shot Distributed Ridge Regression in High Dimensions. 66:1-66:52 - Romain Azaïs, Florian Ingels:
The weight function in the subtree kernel is decisive. 67:1-67:36 - Xi Chen, Simon S. Du, Xin T. Tong:
On Stationary-Point Hitting Time and Ergodicity of Stochastic Gradient Langevin Dynamics. 68:1-68:41 - Morteza Ashraphijuo, Xiaodong Wang:
Union of Low-Rank Tensor Spaces: Clustering and Completion. 69:1-69:36 - Seyed Mehran Kazemi, Rishab Goel, Kshitij Jain, Ivan Kobyzev, Akshay Sethi, Peter Forsyth, Pascal Poupart:
Representation Learning for Dynamic Graphs: A Survey. 70:1-70:73 - Ming Yu, Varun Gupta, Mladen Kolar:
Estimation of a Low-rank Topic-Based Model for Information Cascades. 71:1-71:47 - Maxim Borisyak, Artem Ryzhikov, Andrey Ustyuzhanin, Denis Derkach, Fedor Ratnikov, Olga Mineeva:
(1 + epsilon)-class Classification: an Anomaly Detection Method for Highly Imbalanced or Incomplete Data Sets. 72:1-72:22 - James E. Johndrow, Paulo Orenstein, Anirban Bhattacharya:
Scalable Approximate MCMC Algorithms for the Horseshoe Prior. 73:1-73:61 - Tianxi Li, Cheng Qian, Elizaveta Levina, Ji Zhu:
High-dimensional Gaussian graphical models on network-linked data. 74:1-74:45 - Gunwoong Park:
Identifiability of Additive Noise Models Using Conditional Variances. 75:1-75:34 - Anis Elgabli, Jihong Park, Amrit S. Bedi, Mehdi Bennis, Vaneet Aggarwal:
GADMM: Fast and Communication Efficient Framework for Distributed Machine Learning. 76:1-76:39 - Pragnya Alatur, Kfir Y. Levy, Andreas Krause:
Multi-Player Bandits: The Adversarial Case. 77:1-77:23 - Malte Probst, Franz Rothlauf:
Harmless Overfitting: Using Denoising Autoencoders in Estimation of Distribution Algorithms. 78:1-78:31 - Nilabja Guha, Veera Baladandayuthapani, Bani K. Mallick:
Quantile Graphical Models: a Bayesian Approach. 79:1-79:47 - Rafael M. Frongillo, Andrew B. Nobel:
Memoryless Sequences for General Losses. 80:1-80:28 - Kirthevasan Kandasamy, Karun Raju Vysyaraju, Willie Neiswanger, Biswajit Paria, Christopher R. Collins, Jeff Schneider, Barnabás Póczos, Eric P. Xing:
Tuning Hyperparameters without Grad Students: Scalable and Robust Bayesian Optimisation with Dragonfly. 81:1-81:27 - Hossein Keshavarz, George Michailidis, Yves F. Atchadé:
Sequential change-point detection in high-dimensional Gaussian graphical models. 82:1-82:57 - Xiaoming Yuan, Shangzhi Zeng, Jin Zhang:
Discerning the Linear Convergence of ADMM for Structured Convex Optimization through the Lens of Variational Analysis. 83:1-83:75 - Christiane Görgen, Manuele Leonelli:
Model-Preserving Sensitivity Analysis for Families of Gaussian Distributions. 84:1-84:32 - Humza Haider, Bret Hoehn, Sarah Davis, Russell Greiner:
Effective Ways to Build and Evaluate Individual Survival Distributions. 85:1-85:63 - Yating Liu, Gilles Pagès:
Convergence Rate of Optimal Quantization and Application to the Clustering Performance of the Empirical Measure. 86:1-86:36 - Toby Dylan Hocking, Guillem Rigaill, Paul Fearnhead, Guillaume Bourque:
Constrained Dynamic Programming and Supervised Penalty Learning Algorithms for Peak Detection in Genomic Data. 87:1-87:40 - Tom Rainforth, Adam Golinski, Frank Wood, Sheheryar Zaidi:
Target-Aware Bayesian Inference: How to Beat Optimal Conventional Estimators. 88:1-88:54 - Biwei Huang, Kun Zhang, Jiji Zhang, Joseph D. Ramsey, Ruben Sanchez-Romero, Clark Glymour, Bernhard Schölkopf:
Causal Discovery from Heterogeneous/Nonstationary Data. 89:1-89:53 - Benjamin Bloem-Reddy, Yee Whye Teh:
Probabilistic Symmetries and Invariant Neural Networks. 90:1-90:61 - Ming Yu, Varun Gupta, Mladen Kolar:
Simultaneous Inference for Pairwise Graphical Models with Generalized Score Matching. 91:1-91:51 - Yuansi Chen, Raaz Dwivedi, Martin J. Wainwright, Bin Yu:
Fast mixing of Metropolized Hamiltonian Monte Carlo: Benefits of multi-step gradients. 92:1-92:72 - Shao-Bo Lin, Di Wang, Ding-Xuan Zhou:
Distributed Kernel Ridge Regression with Communications. 93:1-93:38 - Xin Xing, Meimei Liu, Ping Ma, Wenxuan Zhong:
Minimax Nonparametric Parallelism Test. 94:1-94:47 - Aghiles Salah, Quoc-Tuan Truong, Hady W. Lauw:
Cornac: A Comparative Framework for Multimodal Recommender Systems. 95:1-95:5 - Juan-Luis Suárez, Salvador García, Francisco Herrera:
pyDML: A Python Library for Distance Metric Learning. 96:1-96:7 - Zhao-Rong Lai, Liming Tan, Xiaotian Wu, Liangda Fang:
Loss Control with Rank-one Covariance Estimate for Short-term Portfolio Optimization. 97:1-97:37 - Carlo Ciliberto, Lorenzo Rosasco, Alessandro Rudi:
A General Framework for Consistent Structured Prediction with Implicit Loss Embeddings. 98:1-98:67 - Joris M. Mooij, Sara Magliacane, Tom Claassen:
Joint Causal Inference from Multiple Contexts. 99:1-99:108 - Isabel Valera, Melanie F. Pradier, Maria Lomeli, Zoubin Ghahramani:
General Latent Feature Models for Heterogeneous Datasets. 100:1-100:49 - Francois Kamper, Sarel Steel, Johan A. du Preez:
Regularized Gaussian Belief Propagation with Nodes of Arbitrary Size. 101:1-101:42 - Eugenio Bargiacchi, Diederik M. Roijers, Ann Nowé:
AI-Toolbox: A C++ library for Reinforcement Learning and Planning (with Python Bindings). 102:1-102:12 - Dongruo Zhou, Pan Xu, Quanquan Gu:
Stochastic Nested Variance Reduction for Nonconvex Optimization. 103:1-103:63 - Tyler M. Tomita, James Browne, Cencheng Shen, Jaewon Chung, Jesse Patsolic, Benjamin Falk, Carey E. Priebe, Jason Yim, Randal C. Burns, Mauro Maggioni, Joshua T. Vogelstein:
Sparse Projection Oblique Randomer Forests. 104:1-104:39 - Aryan Mokhtari, Hamed Hassani, Amin Karbasi:
Stochastic Conditional Gradient Methods: From Convex Minimization to Submodular Maximization. 105:1-105:49 - Pan Li, Niao He, Olgica Milenkovic:
Quadratic Decomposable Submodular Function Minimization: Theory and Practice. 106:1-106:49 - Monika Bhattacharjee, Moulinath Banerjee, George Michailidis:
Change Point Estimation in a Dynamic Stochastic Block Model. 107:1-107:59 - Zeyi Wen, Hanfeng Liu, Jiashuai Shi, Qinbin Li, Bingsheng He, Jian Chen:
ThunderGBM: Fast GBDTs and Random Forests on GPUs. 108:1-108:5 - Youngseok Kim, Chao Gao:
Bayesian Model Selection with Graph Structured Sparsity. 109:1-109:61 - Nhan H. Pham, Lam M. Nguyen, Dzung T. Phan, Quoc Tran-Dinh:
ProxSARAH: An Efficient Algorithmic Framework for Stochastic Composite Nonconvex Optimization. 110:1-110:48 - Edesio Alcobaça, Felipe Siqueira, Adriano Rivolli, Luís Paulo F. Garcia, Jefferson Tales Oliva, André C. P. L. F. de Carvalho:
MFE: Towards reproducible meta-feature extraction. 111:1-111:5 - Houssem Sifaou, Abla Kammoun, Mohamed-Slim Alouini:
High-dimensional Linear Discriminant Analysis Classifier for Spiked Covariance Model. 112:1-112:24 - Emilie Devijver, Émeline Perthame:
Prediction regions through Inverse Regression. 113:1-113:24 - Bidisha Samanta, Abir De, Gourhari Jana, Vicenç Gómez, Pratim Kumar Chattaraj, Niloy Ganguly, Manuel Gomez-Rodriguez:
NEVAE: A Deep Generative Model for Molecular Graphs. 114:1-114:33 - Elisabeth Gassiat, Sylvain Le Corff, Luc Lehéricy:
Identifiability and Consistent Estimation of Nonparametric Translation Hidden Markov Models with General State Space. 115:1-115:40 - Alexander Alexandrov, Konstantinos Benidis, Michael Bohlke-Schneider, Valentin Flunkert, Jan Gasthaus, Tim Januschowski, Danielle C. Maddix, Syama Sundar Rangapuram, David Salinas, Jasper Schulz, Lorenzo Stella, Ali Caner Türkmen, Yuyang Wang:
GluonTS: Probabilistic and Neural Time Series Modeling in Python. 116:1-116:6 - Jiahe Lin, George Michailidis:
Regularized Estimation of High-dimensional Factor-Augmented Vector Autoregressive (FAVAR) Models. 117:1-117:51 - Romain Tavenard, Johann Faouzi, Gilles Vandewiele, Felix Divo, Guillaume Androz, Chester Holtz, Marie Payne, Roman Yurchak, Marc Rußwurm, Kushal Kolar, Eli Woods:
Tslearn, A Machine Learning Toolkit for Time Series Data. 118:1-118:6 - Olivier Binette, Debdeep Pati, David B. Dunson:
Bayesian Closed Surface Fitting Through Tensor Products. 119:1-119:26 - Aryan Mokhtari, Alec Koppel, Martin Takác, Alejandro Ribeiro:
A Class of Parallel Doubly Stochastic Algorithms for Large-Scale Learning. 120:1-120:51 - Matey Neykov, Zhaoran Wang, Han Liu:
Agnostic Estimation for Phase Retrieval. 121:1-121:39 - Israel Almodóvar-Rivera, Ranjan Maitra:
Kernel-estimated Nonparametric Overlap-Based Syncytial Clustering. 122:1-122:54 - Jean Kossaifi, Zachary C. Lipton, Arinbjörn Kolbeinsson, Aran Khanna, Tommaso Furlanello, Anima Anandkumar:
Tensor Regression Networks. 123:1-123:21 - Hang Yu, Songwei Wu, Luyin Xin, Justin Dauwels:
Fast Bayesian Inference of Sparse Networks with Automatic Sparsity Determination. 124:1-124:54 - Rad Niazadeh, Tim Roughgarden, Joshua R. Wang:
Optimal Algorithms for Continuous Non-monotone Submodular and DR-Submodular Maximization. 125:1-125:31 - Xin Guo, Ting Hu, Qiang Wu:
Distributed Minimum Error Entropy Algorithms. 126:1-126:31 - Robin Anil, Gökhan Çapan, Isabel Drost-Fromm, Ted Dunning, Ellen Friedman, Trevor Grant, Shannon Quinn, Paritosh Ranjan, Sebastian Schelter, Özgür Yilmazel:
Apache Mahout: Machine Learning on Distributed Dataflow Systems. 127:1-127:6 - Chong Wu, Gongjun Xu, Xiaotong Shen, Wei Pan:
A Regularization-Based Adaptive Test for High-Dimensional GLMs. 128:1-128:67 - André Belotto da Silva, Maxime Gazeau:
A General System of Differential Equations to Model First-Order Adaptive Algorithms. 129:1-129:42 - Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilovic, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John T. Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang:
AI Explainability 360: An Extensible Toolkit for Understanding Data and Machine Learning Models. 130:1-130:6 - David R. Burt, Carl Edward Rasmussen, Mark van der Wilk:
Convergence of Sparse Variational Inference in Gaussian Processes Regression. 131:1-131:63 - Shakir Mohamed, Mihaela Rosca, Michael Figurnov, Andriy Mnih:
Monte Carlo Gradient Estimation in Machine Learning. 132:1-132:62 - Yao Ma, Alex Olshevsky, Csaba Szepesvári, Venkatesh Saligrama:
Gradient Descent for Sparse Rank-One Matrix Completion for Crowd-Sourced Aggregation of Sparsely Interacting Workers. 133:1-133:36 - Davide Bacciu, Federico Errica, Alessio Micheli:
Probabilistic Learning on Graphs via Contextual Architectures. 134:1-134:39 - Owen Marschall, Kyunghyun Cho, Cristina Savin:
A Unified Framework of Online Learning Algorithms for Training Recurrent Neural Networks. 135:1-135:34 - Benjamin J. Fehrman, Benjamin Gess, Arnulf Jentzen:
Convergence Rates for the Stochastic Gradient Descent Method for Non-Convex Objective Functions. 136:1-136:48 - Akshay Krishnamurthy, John Langford, Aleksandrs Slivkins, Chicheng Zhang:
Contextual Bandits with Continuous Actions: Smoothing, Zooming, and Adapting. 137:1-137:45 - William de Vazelhes, CJ Carey, Yuan Tang, Nathalie Vauquier, Aurélien Bellet:
metric-learn: Metric Learning Algorithms in Python. 138:1-138:6 - Amir-Reza Asadi, Emmanuel Abbe:
Chaining Meets Chain Rule: Multilevel Entropic Regularization and Training of Neural Networks. 139:1-139:32 - Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu:
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. 140:1-140:67 - Alberto Maria Metelli, Matteo Papini, Nico Montali, Marcello Restelli:
Importance Sampling Techniques for Policy Optimization. 141:1-141:75 - Haishan Ye, Luo Luo, Zhihua Zhang:
Nesterov's Acceleration for Approximate Newton. 142:1-142:37 - Qihang Lin, Selvaprabu Nadarajah, Negar Soheili, Tianbao Yang:
A Data Efficient and Feasible Level Set Method for Stochastic Convex Optimization with Expectation Constraints. 143:1-143:45 - Ryan Martin, Yiqi Tang:
Empirical Priors for Prediction in Sparse High-dimensional Linear Regression. 144:1-144:30 - Linda Chamakh, Emmanuel Gobet, Zoltán Szabó:
Orlicz Random Fourier Features. 145:1-145:37 - James Martens:
New Insights and Perspectives on the Natural Gradient Method. 146:1-146:76 - Junhong Lin, Volkan Cevher:
Optimal Convergence for Distributed Learning with Stochastic Gradient Methods and Spectral Algorithms. 147:1-147:63 - Yue Liu, Zhuangyan Fang, Yangbo He, Zhi Geng, Chunchen Liu:
Local Causal Network Learning for Finding Pairs of Total and Direct Effects. 148:1-148:37 - Nikitas Rontsis, Michael A. Osborne, Paul J. Goulart:
Distributionally Ambiguous Optimization for Batch Bayesian Optimization. 149:1-149:26 - Mickaël Binois, Victor Picheny, Patrick Taillandier, Abderrahmane Habbal:
The Kalai-Smorodinsky solution for many-objective Bayesian optimization. 150:1-150:42 - Supratik Paul, Konstantinos I. Chatzilygeroudis, Kamil Ciosek, Jean-Baptiste Mouret, Michael A. Osborne, Shimon Whiteson:
Robust Reinforcement Learning with Bayesian Optimisation and Quadrature. 151:1-151:31 - Xiao-Tong Yuan, Bo Liu, Lezi Wang, Qingshan Liu, Dimitris N. Metaxas:
Dual Iterative Hard Thresholding. 152:1-152:50 - Zhe Wang, Yingbin Liang, Pengsheng Ji:
Spectral Algorithms for Community Detection in Directed Networks. 153:1-153:45 - Miaoyan Wang, Lexin Li:
Learning from Binary Multiway Data: Probabilistic Tensor Decomposition and its Statistical Optimality. 154:1-154:38 - Andrei Kulunchakov, Julien Mairal:
Estimate Sequences for Stochastic Composite Optimization: Variance Reduction, Acceleration, and Robustness to Noise. 155:1-155:52 - Prateek Jaiswal, Vinayak A. Rao, Harsha Honnappa:
Asymptotic Consistency of α-Rényi-Approximate Posteriors. 156:1-156:42 - Tui H. Nolan, Marianne Menictas, Matt P. Wand:
Streamlined Variational Inference with Higher Level Random Effects. 157:1-157:62 - Jiaying Gu, Qing Zhou:
Learning Big Gaussian Bayesian Networks: Partition, Estimation and Fusion. 158:1-158:31 - Yan Ru Pei, Haik Manukian, Massimiliano Di Ventra:
Generating Weighted MAX-2-SAT Instances with Frustrated Loops: an RBM Case Study. 159:1-159:55 - Chao Gao, Yuan Yao, Weizhi Zhu:
Generative Adversarial Nets for Robust Scatter Estimation: A Proper Scoring Rule Perspective. 160:1-160:48 - Jacob M. Schreiber, Jeffrey A. Bilmes, William Stafford Noble:
apricot: Submodular selection for data summarization in Python. 161:1-161:6 - Yichong Xu, Sivaraman Balakrishnan, Aarti Singh, Artur Dubrawski:
Regression with Comparisons: Escaping the Curse of Dimensionality with Ordinal Information. 162:1-162:54 - Oleg Arenz, Mingjun Zhong, Gerhard Neumann:
Trust-Region Variational Inference with Gaussian Mixture Models. 163:1-163:60 - Szymon Knop, Przemyslaw Spurek, Jacek Tabor, Igor T. Podolak, Marcin Mazur, Stanislaw Jastrzebski:
Cramer-Wold Auto-Encoder. 164:1-164:28 - Yuexiang Zhai, Zitong Yang, Zhenyu Liao, John Wright, Yi Ma:
Complete Dictionary Learning via L4-Norm Maximization over the Orthogonal Group. 165:1-165:68 - William B. Nicholson, Ines Wilms, Jacob Bien, David S. Matteson:
High Dimensional Forecasting via Interpretable Vector Autoregression. 166:1-166:52 - Nathan Kallus, Masatoshi Uehara:
Double Reinforcement Learning for Efficient Off-Policy Evaluation in Markov Decision Processes. 167:1-167:63 - Hyebin Song, Ran Dai, Garvesh Raskutti, Rina Foygel Barber:
Convex and Non-Convex Approaches for Statistical Inference with Class-Conditional Noisy Labels. 168:1-168:58 - Dmitry Kobak, Jonathan Lomond, Benoit Sanchez:
The Optimal Ridge Penalty for Real-world High-dimensional Data Can Be Zero or Negative due to the Implicit Ridge Regularization. 169:1-169:16 - William Hoiles, Vikram Krishnamurthy, Kunal Pattanayak:
Rationally Inattentive Inverse Reinforcement Learning Explains YouTube Commenting Behavior. 170:1-170:39 - Lucas Mentch, Siyu Zhou:
Randomization as Regularization: A Degrees of Freedom Explanation for Random Forest Success. 171:1-171:36 - Yuka Hashimoto, Isao Ishikawa, Masahiro Ikeda, Yoichi Matsuo, Yoshinobu Kawahara:
Krylov Subspace Method for Nonlinear Dynamical Systems with Random Noise. 172:1-172:29 - Emily C. Hector, Peter X.-K. Song:
Doubly Distributed Supervised Learning and Inference with High-Dimensional Correlated Outcomes. 173:1-173:35 - Ryumei Nakada, Masaaki Imaizumi:
Adaptive Approximation and Generalization of Deep Neural Network with Intrinsic Dimensionality. 174:1-174:38 - Devanshu Agrawal, Theodore Papamarkou, Jacob D. Hinkle:
Wide Neural Networks with Bottlenecks are Deep Gaussian Processes. 175:1-175:66 - Chengchun Shi, Wenbin Lu, Rui Song:
Breaking the Curse of Nonregularity with Subagging - Inference of the Mean Outcome under Optimal Treatment Regimes. 176:1-176:67 - Xin Bing, Florentina Bunea, Marten H. Wegkamp:
Optimal Estimation of Sparse Topic Models. 177:1-177:45 - Tabish Rashid, Mikayel Samvelyan, Christian Schröder de Witt, Gregory Farquhar, Jakob N. Foerster, Shimon Whiteson:
Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning. 178:1-178:51 - Francesco Tonolini, Jack Radford, Alex Turpin, Daniele Faccio, Roderick Murray-Smith:
Variational Inference for Computational Imaging Inverse Problems. 179:1-179:46 - Boyue Li, Shicong Cen, Yuxin Chen, Yuejie Chi:
Communication-Efficient Distributed Optimization in Networks with Gradient Tracking and Variance Reduction. 180:1-180:51 - Sanmit Narvekar, Bei Peng, Matteo Leonetti, Jivko Sinapov, Matthew E. Taylor, Peter Stone:
Curriculum Learning for Reinforcement Learning Domains: A Framework and Survey. 181:1-181:50 - Xi Chen, Weidong Liu, Xiaojun Mao, Zhuoyi Yang:
Distributed High-dimensional Regression Under a Quantile Loss Function. 182:1-182:43 - Haomiao Meng, Ying-Qi Zhao, Haoda Fu, Xingye Qiao:
Near-optimal Individualized Treatment Recommendations. 183:1-183:28 - Gregory Naitzat, Andrey Zhitnikov, Lek-Heng Lim:
Topology of Deep Neural Networks. 184:1-184:40 - Thomas Bonald, Nathan de Lara, Quentin Lutz, Bertrand Charpentier:
Scikit-network: Graph Analysis in Python. 185:1-185:6 - Franca Hoffmann, Bamdad Hosseini, Zhi Ren, Andrew M. Stuart:
Consistency of Semi-Supervised Learning Algorithms on Graphs: Probit and One-Hot Methods. 186:1-186:55 - Rui Tuo, Wenjia Wang:
Kriging Prediction with Isotropic Matérn Correlations: Robustness and Experimental Designs. 187:1-187:38 - Andrea Rotnitzky, Ezequiel Smucler:
Efficient Adjustment Sets for Population Average Causal Treatment Effect Estimation in Graphical Models. 188:1-188:86 - Yaniv Tenzer, Amit Moscovich, Mary Frances Dorn, Boaz Nadler, Clifford H. Spiegelman:
Beyond Trees: Classification with Sparse Pairwise Dependencies. 189:1-189:33 - Bin Gu, Wenhan Xian, Zhouyuan Huo, Cheng Deng, Heng Huang:
A Unified q-Memorization Framework for Asynchronous Stochastic Optimization. 190:1-190:53 - Dominik Thalmeier, Hilbert J. Kappen, Simone Totaro, Vicenç Gómez:
Adaptive Smoothing for Path Integral Control. 191:1-191:37 - Ganggang Xu, Ming Wang, Jiangze Bian, Hui Huang, Timothy R. Burch, Sandro C. Andrade, Jingfei Zhang, Yongtao Guan:
Semi-parametric Learning of Structured Temporal Point Processes. 192:1-192:39 - Alessandro Tibo, Manfred Jaeger, Paolo Frasconi:
Learning and Interpreting Multi-Multi-Instance Learning Networks. 193:1-193:60 - Maruan Al-Shedivat, Avinava Dubey, Eric P. Xing:
Contextual Explanation Networks. 194:1-194:44 - Igor Molybog, Ramtin Madani, Javad Lavaei:
Conic Optimization for Quadratic Regression Under Sparse Noise. 195:1-195:36 - Lucas Lehnert, Michael L. Littman:
Successor Features Combine Elements of Model-Free and Model-based Reinforcement Learning. 196:1-196:53 - Ayoub Belhadji, Rémi Bardenet, Pierre Chainais:
A determinantal point process for column subset selection. 197:1-197:62 - Haoran Wang, Thaleia Zariphopoulou, Xun Yu Zhou:
Reinforcement Learning in Continuous Time and Space: A Stochastic Control Approach. 198:1-198:34 - Yazhen Wang, Shang Wu:
Asymptotic Analysis via Stochastic Differential Equations of Gradient Descent Algorithms in Statistical and Computational Paradigms. 199:1-199:103 - Di Wang, Marco Gaboardi, Adam D. Smith, Jinhui Xu:
Empirical Risk Minimization in the Non-interactive Local Model of Differential Privacy. 200:1-200:39 - Reza Mohammadi, Matthew T. Pratola, Maurits Kaptein:
Continuous-Time Birth-Death MCMC for Bayesian Regression Tree Models. 201:1-201:26 - Francisco Belchí Guillamón, Jacek Brodzki, Matthew Burfitt, Mahesan Niranjan:
A Numerical Measure of the Instability of Mapper-Type Algorithms. 202:1-202:45 - Dalit Engelhardt:
Dynamic Control of Stochastic Evolution: A Deep Reinforcement Learning Approach to Adaptively Targeting Emergent Drug Resistance. 203:1-203:30 - Martin Slawski, Emanuel Ben-David, Ping Li:
Two-Stage Approach to Multivariate Linear Regression with Sparsely Mismatched Data. 204:1-204:42 - Simon Fischer, Ingo Steinwart:
Sobolev Norm Learning Rates for Regularized Least-Squares Algorithms. 205:1-205:38 - Xiao-Tong Yuan, Ping Li:
On Convergence of Distributed Approximate Newton Methods: Globalization, Sharper Bounds and Beyond. 206:1-206:51 - Baihua He, Yanyan Liu, Yuanshan Wu, Guosheng Yin, Xingqiu Zhao:
Functional Martingale Residual Process for High-Dimensional Cox Regression with Model Averaging. 207:1-207:37 - Fanghui Liu, Xiaolin Huang, Chen Gong, Jie Yang, Li Li:
Learning Data-adaptive Non-parametric Kernels. 208:1-208:39 - Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem:
A Sober Look at the Unsupervised Learning of Disentangled Representations and their Evaluation. 209:1-209:62 - Sercan Ömer Arik, Tomas Pfister:
ProtoAttend: Attention-Based Prototypical Learning. 210:1-210:35 - Avrim Blum, Travis Dick, Naren Manoj, Hongyang Zhang:
Random Smoothing Might be Unable to Certify L∞ Robustness for High-Dimensional Images. 211:1-211:21 - Sebastian Pölsterl:
scikit-survival: A Library for Time-to-Event Analysis Built on Top of scikit-learn. 212:1-212:6 - Alistair Shilton, Sutharshan Rajasegarar, Marimuthu Palaniswami:
Multiclass Anomaly Detector: the CS++ Support Vector Machine. 213:1-213:39 - Eric C. Chi, Brian J. Gaines, Will Wei Sun, Hua Zhou, Jian Yang:
Provable Convex Co-clustering of Tensors. 214:1-214:58 - Robin Vandaele, Yvan Saeys, Tijl De Bie:
Mining Topological Structure in Graphs through Forest Representations. 215:1-215:68 - Xi Chen, Yining Wang, Yuan Zhou:
Dynamic Assortment Optimization with Changing Contextual Information. 216:1-216:44