35th COLT 2022: London, UK
- Po-Ling Loh, Maxim Raginsky (eds.): Conference on Learning Theory, 2-5 July 2022, London, UK. Proceedings of Machine Learning Research 178, PMLR 2022
- Sinho Chewi, Murat A. Erdogdu, Mufan (Bill) Li, Ruoqi Shen, Shunshi Zhang: Analysis of Langevin Monte Carlo from Poincare to Log-Sobolev. 1-2
- Itay Safran, Jason D. Lee: Optimization-Based Separations for Neural Networks. 3-64
- Nuri Mert Vural, Lu Yu, Krishnakumar Balasubramanian, Stanislav Volgushev, Murat A. Erdogdu: Mirror Descent Strikes Again: Optimal Stochastic Convex Optimization under Infinite Noise Variance. 65-102
- Tristan Milne, Adrian I. Nachman: Wasserstein GANs with Gradient Penalty Compute Congested Transport. 103-129
- Jayadev Acharya, Ayush Jain, Gautam Kamath, Ananda Theertha Suresh, Huanyu Zhang: Robust Estimation for Random Graphs. 130-166
- Xizhi Liu, Sayan Mukherjee: Tight query complexity bounds for learning graph partitions. 167-181
- Julian Zimmert, Naman Agarwal, Satyen Kale: Pushing the Efficiency-Regret Pareto Frontier for Online Learning of Portfolios and Quantum States. 182-226
- Laura Tinsi, Arnak S. Dalalyan: Risk bounds for aggregated shallow neural networks using Gaussian priors. 227-253
- Gaspard Beugnot, Julien Mairal, Alessandro Rudi: On the Benefits of Large Learning Rates for Kernel Methods. 254-282
- Daniel J. Hsu, Clayton Hendrick Sanford, Rocco A. Servedio, Emmanouil-Vasileios Vlatakis-Gkaragkounis: Near-Optimal Statistical Query Lower Bounds for Agnostically Learning Intersections of Halfspaces with Gaussian Marginals. 283-312
- Matthew Faw, Isidoros Tziotis, Constantine Caramanis, Aryan Mokhtari, Sanjay Shakkottai, Rachel A. Ward: The Power of Adaptivity in SGD: Self-Tuning Step Sizes with Unbounded Gradients and Affine Variance. 313-355
- Yeshwanth Cherapanamjeri, Nilesh Tripuraneni, Peter L. Bartlett, Michael I. Jordan: Optimal Mean Estimation without a Variance. 356-357
- Andrew J. Wagenmaker, Max Simchowitz, Kevin Jamieson: Beyond No Regret: Instance-Dependent PAC Reinforcement Learning. 358-418
- Eric Balkanski, Oussama Hanguir, Shatian Wang: Learning Low Degree Hypergraphs. 419-420
- Carles Domingo-Enrich: Depth and Feature Learning are Provably Beneficial for Neural Network Discriminators. 421-447
- Ohad Shamir: The Implicit Bias of Benign Overfitting. 448-478
- Moïse Blanchard, Romain Cosson: Universal Online Learning with Bounded Loss: Reduction to Binary Classification. 479-495
- Christopher Criscitiello, Nicolas Boumal: Negative curvature obstructs acceleration for strongly geodesically convex optimization, even with exact first-order oracles. 496-542
- Jibang Wu, Haifeng Xu, Fan Yao: Multi-Agent Learning for Iterative Dominance Elimination: Formal Barriers and New Algorithms. 543
- Gautam Kamath, Argyris Mouzakis, Vikrant Singhal, Thomas Steinke, Jonathan R. Ullman: A Private and Computationally-Efficient Estimator for Unbounded Gaussians. 544-572
- Clément L. Canonne, Ayush Jain, Gautam Kamath, Jerry Li: The Price of Tolerance in Distribution Testing. 573-624
- Yuval Dagan, Gil Kur: A bounded-noise mechanism for differential privacy. 625-661
- Dan Tsir Cohen, Aryeh Kontorovich: Learning with metric losses. 662-700
- Adam Klukowski: Rate of Convergence of Polynomial Networks to Gaussian Processes. 701-722
- Pravesh Kothari, Pasin Manurangsi, Ameya Velingker: Private Robust Estimation by Stabilizing Convex Relaxations. 723-777
- Ahmet Alacaoglu, Yura Malitsky: Stochastic Variance Reduction for Variational Inequality Methods. 778-816
- Zebang Shen, Zhenfu Wang, Satyen Kale, Alejandro Ribeiro, Amin Karbasi, Hamed Hassani: Self-Consistency of the Fokker Planck Equation. 817-841
- Olivier Bousquet, Amit Daniely, Haim Kaplan, Yishay Mansour, Shay Moran, Uri Stemmer: Monotone Learning. 842-866
- Nicolas Christianson, Tinashe Handina, Adam Wierman: Chasing Convex Bodies and Functions with Black-Box Advice. 867-908
- Chris Junchi Li, Wenlong Mou, Martin J. Wainwright, Michael I. Jordan: ROOT-SGD: Sharp Nonasymptotics and Asymptotic Efficiency in a Single Algorithm. 909-981
- Liyu Chen, Haipeng Luo, Aviv Rosenberg: Policy Optimization for Stochastic Shortest Path. 982-1046
- Rajai Nasser, Stefan Tiegel: Optimal SQ Lower Bounds for Learning Halfspaces with Massart Noise. 1047-1074
- Hassan Ashtiani, Christopher Liaw: Private and polynomial time algorithms for learning Gaussians and beyond. 1075-1076
- Moïse Blanchard: Universal Online Learning: an Optimistically Universal Learning Rule. 1077-1125
- Prateek Varshney, Abhradeep Thakurta, Prateek Jain: (Nearly) Optimal Private Linear Regression for Sub-Gaussian Data via Adaptive Clipping. 1126-1166
- Xiyang Liu, Weihao Kong, Sewoong Oh: Differential privacy and robust statistics in high dimensions. 1167-1246
- Ilias Zadik, Min Jae Song, Alexander S. Wein, Joan Bruna: Lattice-Based Methods Surpass Sum-of-Squares in Clustering. 1247-1248
- Gal Vardi, Gilad Yehudai, Ohad Shamir: Width is Less Important than Depth in ReLU Neural Networks. 1249-1281
- Daniel Kane, Sihan Liu, Shachar Lovett, Gaurav Mahajan: Computational-Statistical Gap in Reinforcement Learning. 1282-1302
- Etienne Boursier, Mikhail Konobeev, Nicolas Flammarion: Trace norm regularization for multi-task learning with scarce data. 1303-1327
- Jayadev Acharya, Clément L. Canonne, Himanshu Tyagi, Ziteng Sun: The Role of Interactivity in Structured Estimation. 1328-1355
- Boris Muzellec, Kanji Sato, Mathurin Massias, Taiji Suzuki: Dimension-free convergence rates for gradient Langevin dynamics in RKHS. 1356-1420
- Shinji Ito, Taira Tsuchiya, Junya Honda: Adversarially Robust Multi-Armed Bandit Algorithm with Variance-Dependent Regret Bounds. 1421-1422
- Arpit Agarwal, Sanjeev Khanna, Prathamesh Patil: A Sharp Memory-Regret Trade-off for Multi-Pass Streaming Bandits. 1423-1462
- Buddhima Gamlath, Silvio Lattanzi, Ashkan Norouzi-Fard, Ola Svensson: Approximate Cluster Recovery from Noisy Labels. 1463-1509
- Gil Kur, Eli Putterman: An Efficient Minimax Optimal Estimator For Multivariate Convex Regression. 1510-1546
- Tor Lattimore: Minimax Regret for Partial Monitoring: Infinite Outcomes and Rustichini's Regret. 1547-1575
- Haipeng Luo, Mengxiao Zhang, Peng Zhao: Adaptive Bandit Convex Optimization with Heterogeneous Curvature. 1576-1612
- Xiang Li, Jiadong Liang, Xiangyu Chang, Zhihua Zhang: Statistical Estimation and Online Inference via Local SGD. 1613-1661
- Vincent Cohen-Addad, Frederik Mallmann-Trenn, David Saulpic: Community Recovery in the Degree-Heterogeneous Stochastic Block Model. 1662-1692
- Nazar Buzun, Nikolay Shvetsov, Dmitry V. Dylov: Strong Gaussian Approximation for the Sum of Random Vectors. 1693-1715
- Adam Block, Yuval Dagan, Noah Golowich, Alexander Rakhlin: Smoothed Online Learning is as Easy as Statistical Learning. 1716-1786
- Erwin Bolthausen, Shuta Nakajima, Nike Sun, Changji Xu: Gardner formula for Ising perceptron models at small densities. 1787-1911
- Pierre C. Bellec, Yiwei Shen: Derivatives and residual distribution of regularized M-estimators with application to adaptive tuning. 1912-1947
- Sivakanth Gopi, Yin Tat Lee, Daogao Liu: Private Convex Optimization via Exponential Mechanism. 1948-1989
- Wei Huang, Richard Combes, Cindy Trinh: Towards Optimal Algorithms for Multi-Player Bandits without Collision Sensing Information. 1990-2012
- Peter L. Bartlett, Piotr Indyk, Tal Wagner: Generalization Bounds for Data-Driven Numerical Linear Algebra. 2013-2040
- Sinho Chewi, Patrik R. Gerber, Chen Lu, Thibaut Le Gouic, Philippe Rigollet: The query complexity of sampling from strongly log-concave distributions in one dimension. 2041-2059
- Wenlong Mou, Ashwin Pananjady, Martin J. Wainwright, Peter L. Bartlett: Optimal and instance-dependent guarantees for Markovian linear stochastic approximation. 2060-2061
- Aditya Varre, Nicolas Flammarion: Accelerated SGD for Non-Strongly-Convex Least Squares. 2062-2126
- Loucas Pillaud-Vivien, Julien Reygner, Nicolas Flammarion: Label noise (stochastic) gradient descent implicitly solves the Lasso for quadratic parametrisation. 2127-2159
- Joe Suk, Samory Kpotufe: Tracking Most Significant Arm Switches in Bandits. 2160-2182
- Julia Gaudio, Miklós Z. Rácz, Anirudh Sridhar: Exact Community Recovery in Correlated Stochastic Block Models. 2183-2241
- Rentian Yao, Xiaohui Chen, Yun Yang: Mean-field nonparametric estimation of interacting particle systems. 2242-2275
- Meena Jagadeesan, Ilya P. Razenshteyn, Suriya Gunasekar: Inductive Bias of Multi-Channel Linear Convolutional Networks with Bounded Weight Norm. 2276-2325
- Dan Garber, Ben Kretzu: New Projection-free Algorithms for Online Convex Optimization with Adaptive Regret Guarantees. 2326-2359
- Yair Carmon, Oliver Hinder: Making SGD Parameter-Free. 2360-2389
- Annie Marsden, Vatsal Sharan, Aaron Sidford, Gregory Valiant: Efficient Convex Optimization Requires Superlinear Memory. 2390-2430
- Jonathan A. Kelner, Annie Marsden, Vatsal Sharan, Aaron Sidford, Gregory Valiant, Honglin Yuan: Big-Step-Little-Step: Efficient Gradient Methods for Objectives with Multiple Scales. 2431-2540
- Sitan Chen, Jerry Li, Ryan O'Donnell: Toward Instance-Optimal State Certification With Incoherent Measurements. 2541-2596
- Yuval Dagan, Anthimos Vardis Kandiros, Constantinos Daskalakis: EM's Convergence in Gaussian Latent Tree Models. 2597-2667
- Spencer Frei, Niladri S. Chatterji, Peter L. Bartlett: Benign Overfitting without Linearity: Neural Network Classifiers Trained by Gradient Descent for Noisy Linear Data. 2668-2703
- Alekh Agarwal, Tong Zhang: Minimax Regret Optimization for Robust Machine Learning under Distribution Shift. 2704-2729
- Wenhao Zhan, Baihe Huang, Audrey Huang, Nan Jiang, Jason D. Lee: Offline Reinforcement Learning with Realizability and Single-policy Concentrability. 2730-2775
- Alekh Agarwal, Tong Zhang: Non-Linear Reinforcement Learning in Large Action Spaces: Structural Conditions and Sample-efficiency of Posterior Sampling. 2776-2814
- Allen Liu, Ankur Moitra: Learning GMMs with Nearly Optimal Robustness Guarantees. 2815-2895
- Krishna Balasubramanian, Sinho Chewi, Murat A. Erdogdu, Adil Salim, Shunshi Zhang: Towards a Theory of Non-Log-Concave Sampling: First-Order Stationarity Guarantees for Langevin Monte Carlo. 2896-2923
- Jikai Jin, Suvrit Sra: Understanding Riemannian Acceleration via a Proximal Extragradient Framework. 2924-2962
- Jun Liu, Ye Yuan: On Almost Sure Convergence Rates of Stochastic Gradient Methods. 2963-2983
- Yongxin Chen, Sinho Chewi, Adil Salim, Andre Wibisono: Improved analysis for a proximal algorithm for sampling. 2984-3014
- Max Hopkins, Daniel M. Kane, Shachar Lovett, Gaurav Mahajan: Realizable Learning is All You Need. 3015-3069
- Yury Makarychev, Naren Sarayu Manoj, Max Ovsiankin: Streaming Algorithms for Ellipsoidal Approximation of Convex Polytopes. 3070-3093
- Allen Liu, Mark Sellke: The Pareto Frontier of Instance-Dependent Guarantees in Multi-Player Multi-Armed Bandits with no Communication. 3094
- Jennifer Tang: Minimax Regret on Patterns Using Kullback-Leibler Divergence Covering. 3095-3112
- Shivam Gupta, Eric Price: Sharp Constants in Uniformity Testing via the Huber Statistic. 3113-3192
- Parikshit Gopalan, Michael P. Kim, Mihir Singhal, Shengjia Zhao: Low-Degree Multicalibration. 3193-3234
- Taylan Kargin, Sahin Lale, Kamyar Azizzadenesheli, Animashree Anandkumar, Babak Hassibi: Thompson Sampling Achieves $\tilde{O}(\sqrt{T})$ Regret in Linear Quadratic Control. 3235-3284
- Julian Zimmert, Tor Lattimore: Return of the bias: Almost minimax optimal high probability bounds for adversarial linear bandits. 3285-3312
- Amit Attia, Tomer Koren: Uniform Stability for First-Order Empirical Risk Minimization. 3313-3332
- Ingvar M. Ziemann, Henrik Sandberg, Nikolai Matni: Single Trajectory Nonparametric Learning of Nonlinear Dynamics. 3333-3364
- Tom F. Sterkenburg: On characterizations of learnability with computable learners. 3365-3379
- Matan Schliserman, Tomer Koren: Stability vs Implicit Bias of Gradient Methods on Separable Data and Beyond. 3380-3394
- Max Hahn-Klimroth, Noëla Müller: Near optimal efficient decoding from pooled data. 3395-3409
- Simon Buchholz: Kernel interpolation in Sobolev spaces is not consistent in low dimensions. 3410-3440
- Haoyu Wang, Yihong Wu, Jiaming Xu, Israel Yolou: Random Graph Matching in Geometric Models: the Case of Complete Graphs. 3441-3488
- Dylan J. Foster, Akshay Krishnamurthy, David Simchi-Levi, Yunzong Xu: Offline Reinforcement Learning: Fundamental Barriers for Value Function Approximation. 3489
- Yingli Ran, Zhao Zhang, Shaojie Tang: Improved Parallel Algorithm for Minimum Cost Submodular Cover Problem. 3490-3502
- Mohammad Reza Karimi, Ya-Ping Hsieh, Panayotis Mertikopoulos, Andreas Krause: The Dynamics of Riemannian Robbins-Monro Algorithms. 3503
- Renato Paes Leme, Chara Podimata, Jon Schneider: Corruption-Robust Contextual Search through Density Updates. 3504-3505
- Tomer Berg, Or Ordentlich, Ofer Shayevitz: On The Memory Complexity of Uniformity Testing. 3506-3523
- Gábor Lugosi, Gergely Neu: Generalization Bounds via Convex Analysis. 3524-3546
- Oren Mangoubi, Yikai Wu, Satyen Kale, Abhradeep Thakurta, Nisheeth K. Vishnoi: Private Matrix Approximation and Geometry of Unitary Orbits. 3547-3588
- Asaf B. Cassel, Alon Cohen, Tomer Koren: Efficient Online Linear Control with Stochastic Convex Costs and Unknown Dynamics. 3589-3604
- Theophile Thiery, Justin Ward: Two-Sided Weak Submodularity for Matroid Constrained Optimization and Regression. 3605-3634
- Haipeng Luo, Mengxiao Zhang, Peng Zhao, Zhi-Hua Zhou: Corralling a Larger Band of Bandits: A Case Study on Switching Regret for Linear Bandits. 3635-3684
- Max Dabagia, Santosh S. Vempala, Christos H. Papadimitriou: Assemblies of neurons learn to classify well-separated distributions. 3685-3717
- Enrique B. Nueve, Rafael M. Frongillo, Jessica Finocchiaro: The Structured Abstain Problem and the Lovász Hinge. 3718-3740
- Jingqiu Ding, Tommaso d'Orsi, Chih-Hung Liu, David Steurer, Stefan Tiegel: Fast algorithm for overcomplete order-3 tensor decomposition. 3741-3799
- Elena Grigorescu, Brendan Juba, Karl Wimmer, Ning Xie: Hardness of Maximum Likelihood Learning of DPPs. 3800-3819
- Anastasios Tsiamis, Ingvar M. Ziemann, Manfred Morari, Nikolai Matni, George J. Pappas: Learning to Control Linear Systems can be Hard. 3820-3857
- Zihan Zhang, Xiangyang Ji, Simon S. Du: Horizon-Free Reinforcement Learning in Polynomial Time: the Power of Stationary Policies. 3858-3904
- Hongjie Chen, Tommaso d'Orsi: On the well-spread property and its relation to linear regression. 3905-3935
- Ilias Diakonikolas, Daniel M. Kane, Yuxin Sun: Optimal SQ Lower Bounds for Robustly Learning Discrete Product Distributions and Ising Models. 3936-3978
- Shyam Narayanan: Private High-Dimensional Hypothesis Testing. 3979-4027
- Itay Evron, Edward Moroshko, Rachel A. Ward, Nathan Srebro, Daniel Soudry: How catastrophic can catastrophic forgetting be in linear regression? 4028-4079
- Daniel Freund, Thodoris Lykouris, Wentao Weng: Efficient decentralized multi-agent learning in asymmetric queuing systems. 4080-4084
- Wenxuan Guo, YoonHaeng Hur, Tengyuan Liang, Chris Ryan: Online Learning to Transport via the Minimal Selection Principle. 4085-4109
- Elad Romanov, Tamir Bendory, Or Ordentlich: On the Role of Channel Capacity in Learning Gaussian Mixture Models. 4110-4159
- Andrew Jacobsen, Ashok Cutkosky: Parameter-free Mirror Descent. 4160-4211
- Eugenio Clerico, Amitis Shidani, George Deligiannidis, Arnaud Doucet: Chained generalisation bounds. 4212-4257
- Ilias Diakonikolas, Daniel Kane: Near-Optimal Statistical Query Hardness of Learning Halfspaces with Massart Noise. 4258-4282
- Chirag Gupta, Aaditya Ramdas: Faster online calibration without randomization: interval forecasts and the power of two choices. 4283-4309
- Andrea Montanari, Basil Saeed: Universality of empirical risk minimization. 4310-4312
- Ilias Diakonikolas, Vasilis Kontonis, Christos Tzamos, Nikos Zarifis: Learning a Single Neuron with Adversarial Label Noise via Gradient Descent. 4313-4361
- Yujia Jin, Aaron Sidford, Kevin Tian: Sharper Rates for Separable Minimax and Finite Sum Optimization via Primal-Dual Extragradient Methods. 4362-4415
- Milad Sefidgaran, Amin Gohari, Gaël Richard, Umut Simsekli: Rate-Distortion Theoretic Generalization Bounds for Stochastic Learning Algorithms. 4416-4463
- Jack J. Mayo, Hédi Hadiji, Tim van Erven: Scale-free Unconstrained Online Learning for Curved Losses. 4464-4497
- Maria-Florina Balcan, Avrim Blum, Steve Hanneke, Dravyansh Sharma: Robustly-reliable learners under poisoning attacks. 4498-4534
- Ilias Diakonikolas, Daniel Kane: Non-Gaussian Component Analysis via Lattice Basis Reduction. 4535-4547
- Noah Golowich, Ankur Moitra: Can Q-learning be Improved with Advice? 4548-4619
- Blake E. Woodworth, Francis R. Bach, Alessandro Rudi: Non-Convex Optimization with Certificates and Fast Rates Through Kernel Sums of Squares. 4620-4642
- Sepehr Assadi, Vaggos Chatziafratis, Jakub Lacki, Vahab Mirrokni, Chen Wang: Hierarchical Clustering in Graph Streams: Single-Pass Algorithms and Space Lower Bounds. 4643-4702
- Ilias Diakonikolas, Daniel M. Kane, Sushrut Karmalkar, Ankit Pensia, Thanasis Pittas: Robust Sparse Mean Estimation via Sum of Squares. 4703-4763
- Amin Coja-Oghlan, Oliver Gebhard, Max Hahn-Klimroth, Alexander S. Wein, Ilias Zadik: Statistical and Computational Phase Transitions in Group Testing. 4764-4781
- Emmanuel Abbe, Enric Boix Adserà, Theodor Misiakiewicz: The merged-staircase property: a necessary and nearly sufficient condition for SGD learning of sparse functions on two-layer neural networks. 4782-4887
- Lechao Xiao: Eigenspace Restructuring: A Principle of Space and Frequency in Neural Networks. 4888-4944
- Frederic Koehler, Holden Lee, Andrej Risteski: Sampling Approximately Low-Rank Ising Models: MCMC meets Variational Methods. 4945-4988
- Gavin Brown, Mark Bun, Adam D. Smith: Strong Memory Lower Bounds for Learning Natural Models. 4989-5029
- Guy Blanc, Jane Lange, Ali Malik, Li-Yang Tan: On the power of adaptivity in statistical adversaries. 5030-5061
- Yonathan Efroni, Dylan J. Foster, Dipendra Misra, Akshay Krishnamurthy, John Langford: Sample-Efficient Reinforcement Learning in the Presence of Exogenous Information. 5062-5127
- Simina Brânzei, Jiawei Li: The Query Complexity of Local Search and Brouwer in Rounds. 5128-5145
- Dhruv Malik, Yuanzhi Li, Aarti Singh: Complete Policy Regret Bounds for Tallying Bandits. 5146-5174
- Qinghua Liu, Alan Chung, Csaba Szepesvári, Chi Jin: When Is Partially Observable Reinforcement Learning Not Scary? 5175-5220
- Yishay Mansour, Mehryar Mohri, Jon Schneider, Balasubramanian Sivan: Strategizing against Learners in Bayesian Games. 5221-5252
- Lang Liu, Carlos Cinelli, Zaïd Harchaoui: Orthogonal Statistical Learning with Self-Concordant Loss. 5253-5277
- Alberto Del Pia, Mingchen Ma, Christos Tzamos: Clustering with Queries under Semi-Random Noise. 5278-5313
- Zakaria Mhammedi: Efficient Projection-Free Online Convex Optimization with Membership Oracle. 5314-5390
- Daogao Liu: Better Private Algorithms for Correlation Clustering. 5391-5412
- Alexandru Damian, Jason D. Lee, Mahdi Soltanolkotabi: Neural Networks can Learn Representations with Gradient Descent. 5413-5452
- Matus Telgarsky: Stochastic linear optimization never overfits with quadratically-bounded losses on general data. 5453-5488
- Simon Weissmann, Ashia Wilson, Jakob Zech: Multilevel Optimization for Inverse Problems. 5489-5524
- Kangjie Zhou, Andrea Montanari: High-Dimensional Projection Pursuit: Outer Bounds and Applications to Interpolation in Neural Networks. 5525-5527
- Chen Cheng, John C. Duchi, Rohith Kuditipudi: Memorize to generalize: on the necessity of interpolation in high dimensional linear regression. 5528-5560
- Zakaria Mhammedi, Alexander Rakhlin: Damped Online Newton Step for Portfolio Selection. 5561-5595
- Nima Anari, Thuy-Duong Vuong: From Sampling to Optimization on Discrete Domains with Applications to Determinant Maximization. 5596-5618
