Journal of Machine Learning Research, Volume 22 (2021)
- Krishnakumar Balasubramanian, Tong Li, Ming Yuan: On the Optimality of Kernel-Embedding Based Goodness-of-Fit Tests. 1:1-1:45
- Gilles Blanchard, Aniket Anand Deshmukh, Ürün Dogan, Gyemin Lee, Clayton Scott: Domain Generalization by Marginal Transfer Learning. 2:1-2:55
- Stefano Tracà, Cynthia Rudin, Weiyu Yan: Regulating Greed Over Time in Multi-Armed Bandits. 3:1-3:99
- Erich Merrill, Alan Fern, Xiaoli Z. Fern, Nima Dolatnia: An Empirical Study of Bayesian Optimization: Acquisition Versus Partition. 4:1-4:25
- Carlos Alberto Gomez-Uribe, Brian Karrer: The Decoupled Extended Kalman Filter for Dynamic Exponential-Family Factorization Models. 5:1-5:25
- Fadhel Ayed, Marco Battiston, Federico Camerlenghi, Stefano Favaro: Consistent estimation of small masses in feature sampling. 6:1-6:28
- Viktor Bengs, Róbert Busa-Fekete, Adil El Mesaoudi-Paul, Eyke Hüllermeier: Preference-based Online Learning with Dueling Bandits: A Survey. 7:1-7:108
- Benjamin Lu, Johanna Hardin: A Unified Framework for Random Forest Prediction Error Estimation. 8:1-8:41
- Defeng Sun, Kim-Chuan Toh, Yancheng Yuan: Convex Clustering: Model, Theoretical Guarantee and Efficient Algorithm. 9:1-9:32
- Bumeng Zhuo, Chao Gao: Mixing Time of Metropolis-Hastings for Bayesian Community Detection. 10:1-10:89
- Yunxiao Chen, Zhiliang Ying, Haoran Zhang: Unfolding-Model-Based Visualization: Theory, Method and Applications. 11:1-11:51
- Shenglong Zhou, Naihua Xiu, Hou-Duo Qi: Global and Quadratic Convergence of Newton Hard-Thresholding Pursuit. 12:1-12:45
- Xiao Di, Yuan Ke, Runze Li: Homogeneity Structure Learning in Large-scale Panel Data with Heavy-tailed Errors. 13:1-13:42
- Maryam Aziz, Emilie Kaufmann, Marie-Karelle Riviere: On Multi-Armed Bandit Designs for Dose-Finding Trials. 14:1-14:38
- Jagdeep Singh Bhatia: Simple and Fast Algorithms for Interactive Machine Learning with Random Counter-examples. 15:1-15:30
- Shih-Yuan Yu, Sujit Rokka Chhetri, Arquimedes Canedo, Palash Goyal, Mohammad Abdullah Al Faruque: Pykg2vec: A Python Library for Knowledge Graph Embedding. 16:1-16:6
- Nikola B. Kovachki, Andrew M. Stuart: Continuous Time Analysis of Momentum Methods. 17:1-17:40
- Gaoxia Jiang, Wenjian Wang, Yuhua Qian, Jiye Liang: A Unified Sample Selection Framework for Output Noise Filtering: An Error-Bound Perspective. 18:1-18:66
- Alexandre d'Aspremont, Mihai Cucuringu, Hemant Tyagi: Ranking and synchronization from pairwise measurements via SVD. 19:1-19:63
- Guillaume Maillard, Sylvain Arlot, Matthieu Lerasle: Aggregated Hold-Out. 20:1-20:55
- Lei Yang, Jia Li, Defeng Sun, Kim-Chuan Toh: A Fast Globally Linearly Convergent Algorithm for the Computation of Wasserstein Barycenters. 21:1-21:37
- Purnamrita Sarkar, Y. X. Rachel Wang, Soumendu Sundar Mukherjee: When random initializations help: a study of variational inference for community detection. 22:1-22:46
- Giulio Galvan, Matteo Lapucci, Chih-Jen Lin, Marco Sciandrone: A Two-Level Decomposition Framework Exploiting First and Second Order Information for SVM Training Problems. 23:1-23:38
- Riikka Huusari, Hachem Kadri: Entangled Kernels - Beyond Separability. 24:1-24:40
- Yunwen Lei, Ting Hu, Ke Tang: Generalization Performance of Multi-pass Stochastic Gradient Descent with Convex Loss Functions. 25:1-25:41
- Tuhin Sarkar, Alexander Rakhlin, Munther A. Dahleh: Finite Time LTI System Identification. 26:1-26:61
- Hamid Eftekhari, Moulinath Banerjee, Yaacov Ritov: Inference In High-dimensional Single-Index Models Under Symmetric Designs. 27:1-27:63
- Julian Zimmert, Yevgeny Seldin: Tsallis-INF: An Optimal Algorithm for Stochastic and Adversarial Bandits. 28:1-28:49
- Wanrong Zhang, Sara Krehbiel, Rui Tuo, Yajun Mei, Rachel Cummings: Single and Multiple Change-Point Detection with Differential Privacy. 29:1-29:36
- Oliver Kroemer, Scott Niekum, George Konidaris: A Review of Robot Learning for Manipulation: Challenges, Representations, and Algorithms. 30:1-30:82
- Tianyu Wang, Marco Morucci, M. Usaid Awan, Yameng Liu, Sudeepa Roy, Cynthia Rudin, Alexander Volfovsky: FLAME: A Fast Large-scale Almost Matching Exactly Approach to Causal Inference. 31:1-31:41
- Fei Lu, Mauro Maggioni, Sui Tang: Learning interaction kernels in heterogeneous systems of agents from multiple trajectories. 32:1-32:67
- Tijana Zrnic, Aaditya Ramdas, Michael I. Jordan: Asynchronous Online Testing of Multiple Hypotheses. 33:1-33:39
- Imtiaz Ahmed, Xia Ben Hu, Mithun P. Acharya, Yu Ding: Neighborhood Structure Assisted Non-negative Matrix Factorization and Its Application in Unsupervised Point-wise Anomaly Detection. 34:1-34:32
- Melkior Ornik, Ufuk Topcu: Learning and Planning for Time-Varying MDPs Using Maximum Likelihood Estimation. 35:1-35:40
- Carlos Villacampa-Calvo, Bryan Zaldivar, Eduardo C. Garrido-Merchán, Daniel Hernández-Lobato: Multi-class Gaussian Process Classification with Noisy Inputs. 36:1-36:52
- Zhao Tang Luo, Huiyan Sang, Bani K. Mallick: A Bayesian Contiguous Partitioning Method for Learning Clustered Latent Variables. 37:1-37:52
- Umit Kose, Andrzej Ruszczynski: Risk-Averse Learning by Temporal Difference Methods with Markov Risk Measures. 38:1-38:34
- Guillaume Tauzin, Umberto Lupo, Lewis Tunstall, Julian Burella Pérez, Matteo Caorsi, Anibal M. Medina-Mardones, Alberto Dassatti, Kathryn Hess: giotto-tda: A Topological Data Analysis Toolkit for Machine Learning and Data Exploration. 39:1-39:6
- Anton Bakhtin, Yuntian Deng, Sam Gross, Myle Ott, Marc'Aurelio Ranzato, Arthur Szlam: Residual Energy-Based Models for Text. 40:1-40:41
- Henning Lange, Steven L. Brunton, J. Nathan Kutz: From Fourier to Koopman: Spectral Methods for Long-term Time Series Prediction. 41:1-41:38
- Wenlong Mou, Yi-An Ma, Martin J. Wainwright, Peter L. Bartlett, Michael I. Jordan: High-Order Langevin Diffusion Yields an Accelerated MCMC Algorithm. 42:1-42:41
- Rahul Parhi, Robert D. Nowak: Banach Space Representer Theorems for Neural Networks and Ridge Splines. 43:1-43:40
- Jason M. Altschuler, Enric Boix-Adserà: Wasserstein barycenters can be computed in polynomial time in fixed dimension. 44:1-44:19
- Ye Tian, Yang Feng: RaSE: Random Subspace Ensemble Classification. 45:1-45:93
- T. Tony Cai, Hongzhe Li, Rong Ma: Optimal Structured Principal Subspace Estimation: Metric Entropy and Minimax Rates. 46:1-46:45
- Soon Hoe Lim: Understanding Recurrent Neural Networks Using Nonequilibrium Response Theory. 47:1-47:48
- Behzad Azmi, Dante Kalise, Karl Kunisch: Optimal Feedback Law Recovery by Gradient-Augmented Sparse Polynomial Regression. 48:1-48:32
- Damek Davis, Dmitriy Drusvyatskiy, Lin Xiao, Junyu Zhang: From Low Probability to High Confidence in Stochastic Convex Optimization. 49:1-49:38
- Nguyen Thi Kim Hue, Monica Chiogna: Structure Learning of Undirected Graphical Models for Count Data. 50:1-50:53
- Junlong Zhu, Qingtao Wu, Mingchuan Zhang, Ruijuan Zheng, Keqin Li: Projection-free Decentralized Online Learning for Submodular Maximization over Time-Varying Networks. 51:1-51:42
- Alper Atamtürk, Andrés Gómez, Shaoning Han: Sparse and Smooth Signal Estimation: Convexification of L0-Formulations. 52:1-52:43
- Weiwei Li, Jan Hannig, Sayan Mukherjee: Subspace Clustering through Sub-Clusters. 53:1-53:37
- Xinming Yang, Lingrui Gan, Naveen N. Narisetty, Feng Liang: GemBag: Group Estimation of Multiple Bayesian Graphical Models. 54:1-54:48
- Minjie Wang, Genevera I. Allen: Integrative Generalized Convex Clustering Optimization and Feature Selection for Mixed Multi-View Data. 55:1-55:73
- Charlie Frogner, Sebastian Claici, Edward Chien, Justin Solomon: Incorporating Unlabeled Data into Distributionally Robust Learning. 56:1-56:46
- George Papamakarios, Eric T. Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, Balaji Lakshminarayanan: Normalizing Flows for Probabilistic Modeling and Inference. 57:1-57:64
- Zhe Fei, Yi Li: Estimation and Inference for High Dimensional Generalized Linear Models: A Splitting and Smoothing Approach. 58:1-58:32
- Konstantinos E. Nikolakakis, Dionysios S. Kalogerias, Anand D. Sarwate: Predictive Learning on Hidden Tree-Structured Ising Models. 59:1-59:82
- Jonathan Tuck, Shane T. Barratt, Stephen P. Boyd: A Distributed Method for Fitting Laplacian Regularized Stratified Models. 60:1-60:37
- Yunwen Lei, Yiming Ying: Stochastic Proximal AUC Maximization. 61:1-61:45
- Mariusz Kubkowski, Jan Mielniczuk, Pawel Teisseyre: How to Gain on Power: Novel Conditional Independence Tests Based on Short Expansion of Conditional Mutual Information. 62:1-62:57
- Nicolás García Trillos, Franca Hoffmann, Bamdad Hosseini: Geometric structure of graph Laplacian embeddings. 63:1-63:55
- Botao Hao, Boxiang Wang, Pengyuan Wang, Jingfei Zhang, Jian Yang, Will Wei Sun: Sparse Tensor Additive Regression. 64:1-64:43
- Yanqing Zhang, Xuan Bi, Niansheng Tang, Annie Qu: Dynamic Tensor Recommender Systems. 65:1-65:35
- Haishan Ye, Luo Luo, Zhihua Zhang: Approximate Newton Methods. 66:1-66:41
- Trambak Banerjee, Qiang Liu, Gourab Mukherjee, Wenguang Sun: A General Framework for Empirical Bayes Estimation in Discrete Linear Exponential Family. 67:1-67:46
- Chirag Gupta, Sivaraman Balakrishnan, Aaditya Ramdas: Path Length Bounds for Gradient Descent and Flow. 68:1-68:63
- Shujie Ma, Liangjun Su, Yichong Zhang: Determining the Number of Communities in Degree-corrected Stochastic Block Models. 69:1-69:63
- Lasse Petersen, Niels Richard Hansen: Testing Conditional Independence via Quantile Regression Based Partial Copulas. 70:1-70:47
- Tao Luo, Zhi-Qin John Xu, Zheng Ma, Yaoyu Zhang: Phase Diagram for Two-layer ReLU Neural Networks at Infinite-width Limit. 71:1-71:47
- Erhan Bayraktar, Ibrahim Ekren, Xin Zhang: Prediction against a limited adversary. 72:1-72:33
- Michael Muehlebach, Michael I. Jordan: Optimization with Momentum: Dynamical, Control-Theoretic, and Symplectic Perspectives. 73:1-73:50
- Benjamin Charlier, Jean Feydy, Joan Alexis Glaunès, François-David Collin, Ghislain Durif: Kernel Operations on the GPU, with Autodiff, without Memory Overflows. 74:1-74:6
- Jorge Pérez, Pablo Barceló, Javier Marinkovic: Attention is Turing-Complete. 75:1-75:35
- Alain Celisse, Martin Wahl: Analyzing the discrepancy principle for kernelized spectral filter learning algorithms. 76:1-76:59
- Yasuhiro Fujita, Prabhat Nagarajan, Toshiki Kataoka, Takahiro Ishikawa: ChainerRL: A Deep Reinforcement Learning Library. 77:1-77:14
- Rémi Flamary, Nicolas Courty, Alexandre Gramfort, Mokhtar Z. Alaya, Aurélie Boisbunon, Stanislas Chambon, Laetitia Chapel, Adrien Corenflos, Kilian Fatras, Nemo Fournier, Léo Gautheron, Nathalie T. H. Gayraud, Hicham Janati, Alain Rakotomamonjy, Ievgen Redko, Antoine Rolet, Antony Schutz, Vivien Seguy, Danica J. Sutherland, Romain Tavenard, Alexander Tong, Titouan Vayer: POT: Python Optimal Transport. 78:1-78:8
- Chris Mingard, Guillermo Valle Pérez, Joar Skalse, Ard A. Louis: Is SGD a Bayesian sampler? Well, almost. 79:1-79:64
- Zengfeng Huang, Xuemin Lin, Wenjie Zhang, Ying Zhang: Communication-Efficient Distributed Covariance Sketch, with Application to Distributed PCA. 80:1-80:38
- Maxime Cauchois, Suyash Gupta, John C. Duchi: Knowing what You Know: valid and validated confidence sets in multiclass and multilabel prediction. 81:1-81:42
- Mehdi Ali, Max Berrendorf, Charles Tapley Hoyt, Laurent Vermue, Sahand Sharifzadeh, Volker Tresp, Jens Lehmann: PyKEEN 1.0: A Python Library for Training and Evaluating Knowledge Graph Embeddings. 82:1-82:6
- Rishabh Dudeja, Daniel Hsu: Statistical Query Lower Bounds for Tensor PCA. 83:1-83:51
- Jiyuan Tu, Weidong Liu, Xiaojun Mao, Xi Chen: Variance Reduced Median-of-Means Estimator for Byzantine-Robust Distributed Inference. 84:1-84:67
- Ohad Shamir: Gradient Methods Never Overfit On Separable Data. 85:1-85:20
- Andreas C. Damianou, Neil D. Lawrence, Carl Henrik Ek: Multi-view Learning as a Nonparametric Nonlinear Inter-Battery Factor Analysis. 86:1-86:51
- Patrick Kreitzberg, Oliver Serang: On Solving Probabilistic Linear Diophantine Equations. 87:1-87:24
- Can M. Le: Edge Sampling Using Local Network Information. 88:1-88:29
- Feifei Wang, Junni L. Zhang, Yichao Li, Ke Deng, Jun S. Liu: Bayesian Text Classification and Summarization via A Class-Specified Topic Model. 89:1-89:48
- Tomer Galanti, Sagie Benaim, Lior Wolf: Risk Bounds for Unsupervised Cross-Domain Mapping with IPMs. 90:1-90:42
- Tingting Zhao, Alexandre Bouchard-Côté: Analysis of high-dimensional Continuous Time Markov Chains using the Local Bouncy Particle Sampler. 91:1-91:41
- Anastasis Kratsios, Cody B. Hyndman: NEU: A Meta-Algorithm for Universal UAP-Invariant Feature Representation. 92:1-92:51
- Zhengrong Xing, Peter Carbonetto, Matthew Stephens: Flexible Signal Denoising via Flexible Empirical Bayes Shrinkage. 93:1-93:28
- Xiaoyi Mai, Romain Couillet: Consistent Semi-Supervised Graph Regularization for High Dimensional Data. 94:1-94:48
- Hanyuan Hang, Zhouchen Lin, Xiaoyu Liu, Hongwei Wen: Histogram Transform Ensembles for Large-scale Regression. 95:1-95:87
- Kai Puolamäki, Emilia Oikarinen, Andreas Henelius: Guided Visual Exploration of Relations in Data Sets. 96:1-96:32
- Alberto Maria Metelli, Matteo Pirotta, Daniele Calandriello, Marcello Restelli: Safe Policy Iteration: A Monotonically Improving Approximate Policy Iteration Approach. 97:1-97:83
- Alekh Agarwal, Sham M. Kakade, Jason D. Lee, Gaurav Mahajan: On the Theory of Policy Gradient Methods: Optimality, Approximation, and Distribution Shift. 98:1-98:76
- Lin Liu, Rajarshi Mukherjee, James M. Robins, Eric Tchetgen Tchetgen: Adaptive estimation of nonparametric functionals. 99:1-99:66
- Matthias Feurer, Jan N. van Rijn, Arlind Kadra, Pieter Gijsbers, Neeratyoy Mallik, Sahithya Ravi, Andreas Müller, Joaquin Vanschoren, Frank Hutter: OpenML-Python: an extensible Python API for OpenML. 100:1-100:5
- Baoxun Wang, Zhen Xu, Huan Zhang, Kexin Qiu, Deyuan Zhang, Chengjie Sun: LocalGAN: Modeling Local Distributions for Adversarial Response Generation. 101:1-101:29
- Gunwoong Park, Sang Jun Moon, Sion Park, Jong-June Jeon: Learning a High-dimensional Linear Structural Equation Model via l1-Regularized Regression. 102:1-102:41
- Guodong Zhang, Xuchan Bao, Laurent Lessard, Roger B. Grosse: A Unified Analysis of First-Order Methods for Smooth Games via Integral Quadratic Constraints. 103:1-103:39
- Joseph D. Janizek, Pascal Sturmfels, Su-In Lee: Explaining Explanations: Axiomatic Feature Interactions for Deep Networks. 104:1-104:54
- James T. Wilson, Viacheslav Borovitskiy, Alexander Terenin, Peter Mostowsky, Marc Peter Deisenroth: Pathwise Conditioning of Gaussian Processes. 105:1-105:47
- Gérard Ben Arous, Reza Gheissari, Aukosh Jagannath: Online stochastic gradient descent on non-convex losses from high-dimensional inference. 106:1-106:51
- Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Michael Auli, Armand Joulin: Beyond English-Centric Multilingual Machine Translation. 107:1-107:48
- Zhu Li, Jean-Francois Ton, Dino Oglic, Dino Sejdinovic: Towards a Unified Analysis of Random Fourier Features. 108:1-108:51
- Ronan Perry, Gavin Mischler, Richard Guo, Theo Lee, Alexander Chang, Arman Koul, Cameron Franz, Hugo Richard, Iain Carmichael, Pierre Ablin, Alexandre Gramfort, Joshua T. Vogelstein: mvlearn: Multiview Machine Learning in Python. 109:1-109:7
- Jacob Montiel, Max Halford, Saulo Martiello Mastelini, Geoffrey Bolmier, Raphaël Sourty, Robin Vaysse, Adil Zouitine, Heitor Murilo Gomes, Jesse Read, Talel Abdessalem, Albert Bifet: River: machine learning for streaming data in Python. 110:1-110:8
- Steven Siwei Ye, Oscar Hernan Madrid Padilla: Non-parametric Quantile Regression via the K-NN Fused Lasso. 111:1-111:38
- Xun Qian, Zheng Qu, Peter Richtárik: L-SVRG and L-Katyusha with Arbitrary Sampling. 112:1-112:47
- Ashia C. Wilson, Ben Recht, Michael I. Jordan: A Lyapunov Analysis of Accelerated Methods in Optimization. 113:1-113:34
- Ali Ramezani-Kebrya, Fartash Faghri, Ilya Markov, Vitalii Aksenov, Dan Alistarh, Daniel M. Roy: NUQSGD: Provably Communication-efficient Data-parallel SGD via Nonuniform Quantization. 114:1-114:43
- Michael R. Metel, Akiko Takeda: Stochastic Proximal Methods for Non-Smooth Non-Convex Constrained Sparse Optimization. 115:1-115:36
- Victor Hamer, Pierre Dupont: An Importance Weighted Feature Selection Stability Measure. 116:1-116:57
- Shaofeng Deng, Shuyang Ling, Thomas Strohmer: Strong Consistency, Graph Laplacians, and the Stochastic Block Model. 117:1-117:44
- Chidubem Arachie, Bert Huang: A General Framework for Adversarial Label Learning. 118:1-118:33
- Gérard Biau, Maxime Sangnier, Ugo Tanielian: Some Theoretical Insights into Wasserstein GANs. 119:1-119:45
- Wei Wang, Matthew Stephens: Empirical Bayes Matrix Factorization. 120:1-120:40
- Vikram Krishnamurthy, George Yin: Langevin Dynamics for Adaptive Inverse Reinforcement Learning of Stochastic Gradient Algorithms. 121:1-121:49
- Kyriakos Axiotis, Maxim Sviridenko: Sparse Convex Optimization via Adaptively Regularized Hard Thresholding. 122:1-122:47
- George Wynne, François-Xavier Briol, Mark Girolami: Convergence Guarantees for Gaussian Process Means With Misspecified Likelihoods and Smoothness. 123:1-123:40
- Jingyi Jessica Li, Yiling Elaine Chen, Xin Tong: A flexible model-free prediction-based framework for feature ranking. 124:1-124:54
- Peng Zhao, Guanghui Wang, Lijun Zhang, Zhi-Hua Zhou: Bandit Convex Optimization in Non-stationary Environments. 125:1-125:45
- Molei Liu, Yin Xia, Kelly Cho, Tianxi Cai: Integrative High Dimensional Multiple Testing with Heterogeneity under Data Sharing Constraints. 126:1-126:26
- Ismael Lemhadri, Feng Ruan, Louis Abraham, Robert Tibshirani: LassoNet: A Neural Network with Feature Sparsity. 127:1-127:29
- Rohit Agrawal, Thibaut Horel: Optimal Bounds between f-Divergences and Integral Probability Metrics. 128:1-128:59
- Niladri S. Chatterji, Philip M. Long: Finite-sample Analysis of Interpolating Linear Classifiers in the Overparameterized Regime. 129:1-129:30