Peter Richtárik
Person information
- affiliation: King Abdullah University of Science and Technology (KAUST), Thuwal, Saudi Arabia
- affiliation (former): University of Edinburgh, UK
- affiliation (former): Moscow Institute of Physics and Technology (MIPT), Dolgoprudny, Russia
- affiliation (former, PhD 2007): Cornell University, Ithaca, NY, USA
2020 – today
- 2024
- [j54]Haoyu Zhao, Konstantin Burlachenko, Zhize Li, Peter Richtárik:
Faster Rates for Compressed Federated Learning with Client-Variance Reduction. SIAM J. Math. Data Sci. 6(1): 154-175 (2024) - [j53]Lukang Sun, Adil Salim, Peter Richtárik:
Federated Sampling with Langevin Algorithm under Isoperimetry. Trans. Mach. Learn. Res. 2024 (2024) - [c109]Soumia Boucherouite, Grigory Malinovsky, Peter Richtárik, El Houcine Bergou:
Minibatch Stochastic Three Points Method for Unconstrained Smooth Minimization. AAAI 2024: 20344-20352 - [c108]Ahmad Rammal, Kaja Gruntkowska, Nikita Fedin, Eduard Gorbunov, Peter Richtárik:
Communication Compression for Byzantine Robust Learning: New Efficient Algorithms and Improved Rates. AISTATS 2024: 1207-1215 - [c107]Rafal Szlendak, Elnur Gasanov, Peter Richtárik:
Understanding Progressive Training Through the Framework of Randomized Coordinate Descent. AISTATS 2024: 2161-2169 - [c106]Hanmin Li, Avetik G. Karagulyan, Peter Richtárik:
Det-CGD: Compressed Gradient Descent with Matrix Stepsizes for Non-Convex Optimization. ICLR 2024 - [c105]Peter Richtárik, Elnur Gasanov, Konstantin Burlachenko:
Error Feedback Reloaded: From Quadratic to Arithmetic Mean of Smoothness Constants. ICLR 2024 - [c104]Kai Yi, Nidham Gazagnadou, Peter Richtárik, Lingjuan Lyu:
FedP3: Federated Personalized and Privacy-friendly Network Pruning under Model Heterogeneity. ICLR 2024 - [c103]Eduard Gorbunov, Abdurakhmon Sadiev, Marina Danilova, Samuel Horváth, Gauthier Gidel, Pavel E. Dvurechensky, Alexander V. Gasnikov, Peter Richtárik:
High-Probability Convergence for Composite and Distributed Stochastic Minimization and Variational Inequalities with Heavy-Tailed Noise. ICML 2024 - [c102]Egor Shulgin, Peter Richtárik:
Towards a Better Theoretical Understanding of Independent Subnetwork Training. ICML 2024 - [i193]Andrei Panferov, Yury Demidovich, Ahmad Rammal, Peter Richtárik:
Correlated Quantization for Faster Nonconvex Distributed Optimization. CoRR abs/2401.05518 (2024) - [i192]Alexander Tyurin, Marta Pozzi, Ivan Ilin, Peter Richtárik:
Shadowheart SGD: Distributed Asynchronous SGD with Optimal Time Complexity Under Arbitrary Computation and Communication Heterogeneity. CoRR abs/2402.04785 (2024) - [i191]Kaja Gruntkowska, Alexander Tyurin, Peter Richtárik:
Improving the Worst-Case Bidirectional Communication Complexity for Nonconvex Distributed Optimization under Function Similarity. CoRR abs/2402.06412 (2024) - [i190]Peter Richtárik, Elnur Gasanov, Konstantin Burlachenko:
Error Feedback Reloaded: From Quadratic to Arithmetic Mean of Smoothness Constants. CoRR abs/2402.10774 (2024) - [i189]Laurent Condat, Artavazd Maranjyan, Peter Richtárik:
LoCoDL: Communication-Efficient Distributed Learning with Local Training and Compression. CoRR abs/2403.04348 (2024) - [i188]Yury Demidovich, Grigory Malinovsky, Peter Richtárik:
Streamlining in the Riemannian Realm: Efficient Riemannian Optimization with Loopless Variance Reduction. CoRR abs/2403.06677 (2024) - [i187]Kai Yi, Georg Meinhardt, Laurent Condat, Peter Richtárik:
FedComLoc: Communication-Efficient Distributed Training of Sparse and Quantized Models. CoRR abs/2403.09904 (2024) - [i186]Kai Yi, Nidham Gazagnadou, Peter Richtárik, Lingjuan Lyu:
FedP3: Federated Personalized and Privacy-friendly Network Pruning under Model Heterogeneity. CoRR abs/2404.09816 (2024) - [i185]Vladimir Malinovskii, Denis Mazur, Ivan Ilin, Denis Kuznedelev, Konstantin Burlachenko, Kai Yi, Dan Alistarh, Peter Richtárik:
PV-Tuning: Beyond Straight-Through Estimation for Extreme LLM Compression. CoRR abs/2405.14852 (2024) - [i184]Alexander Tyurin, Kaja Gruntkowska, Peter Richtárik:
Freya PAGE: First Optimal Time Complexity for Large-Scale Nonconvex Finite-Sum Optimization with Heterogeneous Asynchronous Computations. CoRR abs/2405.15545 (2024) - [i183]Ionut-Vlad Modoranu, Mher Safaryan, Grigory Malinovsky, Eldar Kurtic, Thomas Robert, Peter Richtárik, Dan Alistarh:
MicroAdam: Accurate Adaptive Optimization with Low Space Overhead and Provable Convergence. CoRR abs/2405.15593 (2024) - [i182]Peter Richtárik, Abdurakhmon Sadiev, Yury Demidovich:
A Unified Theory of Stochastic Proximal Point Methods without Smoothness. CoRR abs/2405.15941 (2024) - [i181]Avetik G. Karagulyan, Egor Shulgin, Abdurakhmon Sadiev, Peter Richtárik:
SPAM: Stochastic Proximal Point Method with Momentum Variance Reduction for Non-convex Cross-Device Federated Learning. CoRR abs/2405.20127 (2024) - [i180]Georg Meinhardt, Kai Yi, Laurent Condat, Peter Richtárik:
Prune at the Clients, Not the Server: Accelerated Sparse Training in Federated Learning. CoRR abs/2405.20623 (2024) - [i179]Kai Yi, Timur Kharisov, Igor Sokolov, Peter Richtárik:
Cohort Squeeze: Beyond a Single Communication Round per Cohort in Cross-Device Federated Learning. CoRR abs/2406.01115 (2024) - [i178]Eduard Gorbunov, Nazarii Tupitsa, Sayantan Choudhury, Alen Aliev, Peter Richtárik, Samuel Horváth, Martin Takác:
Methods for Convex (L0,L1)-Smooth Optimization: Clipping, Acceleration, and Adaptivity. CoRR abs/2409.14989 (2024)
- 2023
- [j52]Aleksandr Beznosikov, Samuel Horváth, Peter Richtárik, Mher Safaryan:
On Biased Compression for Distributed Learning. J. Mach. Learn. Res. 24: 276:1-276:50 (2023) - [j51]Ahmed Khaled, Othmane Sebbouh, Nicolas Loizou, Robert M. Gower, Peter Richtárik:
Unified Analysis of Stochastic Gradient Methods for Composite Convex and Smooth Optimization. J. Optim. Theory Appl. 199(2): 499-540 (2023) - [j50]Samuel Horváth, Dmitry Kovalev, Konstantin Mishchenko, Peter Richtárik, Sebastian U. Stich:
Stochastic distributed learning with gradient quantization and double-variance reduction. Optim. Methods Softw. 38(1): 91-106 (2023) - [j49]El Houcine Bergou, Konstantin Burlachenko, Aritra Dutta, Peter Richtárik:
Personalized Federated Learning with Communication Compression. Trans. Mach. Learn. Res. 2023 (2023) - [j48]Rustem Islamov, Xun Qian, Slavomír Hanzely, Mher Safaryan, Peter Richtárik:
Distributed Newton-Type Methods with Communication Compression and Bernoulli Aggregation. Trans. Mach. Learn. Res. 2023 (2023) - [j47]Ahmed Khaled, Peter Richtárik:
Better Theory for SGD in the Nonconvex World. Trans. Mach. Learn. Res. 2023 (2023) - [j46]Maksim Makarenko, Elnur Gasanov, Abdurakhmon Sadiev, Rustem Islamov, Peter Richtárik:
Adaptive Compression for Communication-Efficient Distributed Training. Trans. Mach. Learn. Res. 2023 (2023) - [j45]Zheng Shi, Abdurakhmon Sadiev, Nicolas Loizou, Peter Richtárik, Martin Takác:
AI-SARAH: Adaptive and Implicit Stochastic Recursive Gradient Methods. Trans. Mach. Learn. Res. 2023 (2023) - [j44]Alexander Tyurin, Lukang Sun, Konstantin Burlachenko, Peter Richtárik:
Sharper Rates and Flexible Framework for Nonconvex SGD with Client and Data Sampling. Trans. Mach. Learn. Res. 2023 (2023) - [c101]Xun Qian, Hanze Dong, Tong Zhang, Peter Richtárik:
Catalyst Acceleration of Error Compensated Methods Leads to Better Communication Complexity. AISTATS 2023: 615-649 - [c100]Michal Grudzien, Grigory Malinovsky, Peter Richtárik:
Can 5th Generation Local Training Methods Support Client Sampling? Yes! AISTATS 2023: 1055-1092 - [c99]Lukang Sun, Avetik G. Karagulyan, Peter Richtárik:
Convergence of Stein Variational Gradient Descent under a Weaker Smoothness Condition. AISTATS 2023: 3693-3717 - [c98]Jihao Xin, Ivan Ilin, Shunkang Zhang, Marco Canini, Peter Richtárik:
Kimad: Adaptive Gradient Compression with Bandwidth Awareness. DistributedML@CoNEXT 2023: 35-48 - [c97]Konstantin Burlachenko, Abdulmajeed Alrowithi, Fahad Ali Albalawi, Peter Richtárik:
Federated Learning is Better with Non-Homomorphic Encryption. DistributedML@CoNEXT 2023: 49-84 - [c96]Grigory Malinovsky, Konstantin Mishchenko, Peter Richtárik:
Server-Side Stepsizes and Sampling Without Replacement Provably Help in Federated Optimization. DistributedML@CoNEXT 2023: 85-104 - [c95]Laurent Condat, Peter Richtárik:
RandProx: Primal-Dual Optimization Algorithms with Randomized Proximal Updates. ICLR 2023 - [c94]Eduard Gorbunov, Samuel Horváth, Peter Richtárik, Gauthier Gidel:
Variance Reduction is an Antidote to Byzantines: Better Rates, Weaker Assumptions and Communication Compression as a Cherry on the Top. ICLR 2023 - [c93]Alexander Tyurin, Peter Richtárik:
DASHA: Distributed Nonconvex Optimization with Communication Compression and Optimal Oracle Complexity. ICLR 2023 - [c92]Kaja Gruntkowska, Alexander Tyurin, Peter Richtárik:
EF21-P and Friends: Improved Theoretical Communication Complexity for Distributed Optimization with Bidirectional Compression. ICML 2023: 11761-11807 - [c91]Abdurakhmon Sadiev, Marina Danilova, Eduard Gorbunov, Samuel Horváth, Gauthier Gidel, Pavel E. Dvurechensky, Alexander V. Gasnikov, Peter Richtárik:
High-Probability Bounds for Stochastic Optimization and Variational Inequalities: the Case of Unbounded Variance. ICML 2023: 29563-29648 - [c90]Yury Demidovich, Grigory Malinovsky, Igor Sokolov, Peter Richtárik:
A Guide Through the Zoo of Biased SGD. NeurIPS 2023 - [c89]Ilyas Fatkhullin, Alexander Tyurin, Peter Richtárik:
Momentum Provably Improves Error Feedback! NeurIPS 2023 - [c88]Alexander Tyurin, Peter Richtárik:
2Direction: Theoretically Faster Distributed Training with Bidirectional Communication Compression. NeurIPS 2023 - [c87]Alexander Tyurin, Peter Richtárik:
Optimal Time Complexities of Parallel Stochastic Optimization Methods Under a Fixed Computation Model. NeurIPS 2023 - [c86]Alexander Tyurin, Peter Richtárik:
A Computation and Communication Efficient Method for Distributed Nonconvex Problems in the Partial Participation Setting. NeurIPS 2023 - [c85]Grigory Malinovsky, Alibek Sailanbayev, Peter Richtárik:
Random Reshuffling with Variance Reduction: New Analysis and Better Rates. UAI 2023: 1347-1357 - [i177]Konstantin Mishchenko, Slavomír Hanzely, Peter Richtárik:
Convergence of First-Order Algorithms for Meta-Learning with Moreau Envelopes. CoRR abs/2301.06806 (2023) - [i176]Abdurakhmon Sadiev, Marina Danilova, Eduard Gorbunov, Samuel Horváth, Gauthier Gidel, Pavel E. Dvurechensky, Alexander V. Gasnikov, Peter Richtárik:
High-Probability Bounds for Stochastic Optimization and Variational Inequalities: the Case of Unbounded Variance. CoRR abs/2302.00999 (2023) - [i175]Grigory Malinovsky, Samuel Horváth, Konstantin Burlachenko, Peter Richtárik:
Federated Learning with Regularized Client Participation. CoRR abs/2302.03662 (2023) - [i174]Laurent Condat, Grigory Malinovsky, Peter Richtárik:
TAMUNA: Accelerated Federated Learning with Local Training and Partial Participation. CoRR abs/2302.09832 (2023) - [i173]Avetik G. Karagulyan, Peter Richtárik:
ELF: Federated Langevin Algorithms with Primal, Dual and Bidirectional Compression. CoRR abs/2303.04622 (2023) - [i172]Kai Yi, Laurent Condat, Peter Richtárik:
Explicit Personalization and Local Training: Double Communication Acceleration in Federated Learning. CoRR abs/2305.13170 (2023) - [i171]Ilyas Fatkhullin, Alexander Tyurin, Peter Richtárik:
Momentum Provably Improves Error Feedback! CoRR abs/2305.15155 (2023) - [i170]Peter Richtárik, Elnur Gasanov, Konstantin Burlachenko:
Error Feedback Shines when Features are Rare. CoRR abs/2305.15264 (2023) - [i169]Yury Demidovich, Grigory Malinovsky, Igor Sokolov, Peter Richtárik:
A Guide Through the Zoo of Biased SGD. CoRR abs/2305.16296 (2023) - [i168]Jihao Xin, Marco Canini, Peter Richtárik, Samuel Horváth:
Global-QSGD: Practical Floatless Quantization for Distributed Learning with Theoretical Guarantees. CoRR abs/2305.18627 (2023) - [i167]Sarit Khirirat, Eduard Gorbunov, Samuel Horváth, Rustem Islamov, Fakhri Karray, Peter Richtárik:
Clip21: Error Feedback for Gradient Clipping. CoRR abs/2305.18929 (2023) - [i166]Michal Grudzien, Grigory Malinovsky, Peter Richtárik:
Improving Accelerated Federated Learning with Compression and Importance Sampling. CoRR abs/2306.03240 (2023) - [i165]Rafal Szlendak, Elnur Gasanov, Peter Richtárik:
Understanding Progressive Training Through the Framework of Randomized Coordinate Descent. CoRR abs/2306.03626 (2023) - [i164]Egor Shulgin, Peter Richtárik:
Towards a Better Theoretical Understanding of Independent Subnetwork Training. CoRR abs/2306.16484 (2023) - [i163]Eduard Gorbunov, Abdurakhmon Sadiev, Marina Danilova, Samuel Horváth, Gauthier Gidel, Pavel E. Dvurechensky, Alexander V. Gasnikov, Peter Richtárik:
High-Probability Convergence for Composite and Distributed Stochastic Minimization and Variational Inequalities with Heavy-Tailed Noise. CoRR abs/2310.01860 (2023) - [i162]Ahmad Rammal, Kaja Gruntkowska, Nikita Fedin, Eduard Gorbunov, Peter Richtárik:
Communication Compression for Byzantine Robust Learning: New Efficient Algorithms and Improved Rates. CoRR abs/2310.09804 (2023) - [i161]Grigory Malinovsky, Peter Richtárik, Samuel Horváth, Eduard Gorbunov:
Byzantine Robustness and Partial Participation Can Be Achieved Simultaneously: Just Clip Gradient Differences. CoRR abs/2311.14127 (2023) - [i160]Yury Demidovich, Grigory Malinovsky, Egor Shulgin, Peter Richtárik:
MAST: Model-Agnostic Sparsified Training. CoRR abs/2311.16086 (2023) - [i159]Konstantin Burlachenko, Abdulmajeed Alrowithi, Fahad Ali Albalawi, Peter Richtárik:
Federated Learning is Better with Non-Homomorphic Encryption. CoRR abs/2312.02074 (2023) - [i158]Jihao Xin, Ivan Ilin, Shunkang Zhang, Marco Canini, Peter Richtárik:
Kimad: Adaptive Gradient Compression with Bandwidth Awareness. CoRR abs/2312.08053 (2023)
- 2022
- [j43]Aritra Dutta, El Houcine Bergou, Yunming Xiao, Marco Canini, Peter Richtárik:
Direct nonlinear acceleration. EURO J. Comput. Optim. 10: 100047 (2022) - [j42]Adil Salim, Laurent Condat, Konstantin Mishchenko, Peter Richtárik:
Dualize, Split, Randomize: Toward Fast Nonsmooth Optimization Algorithms. J. Optim. Theory Appl. 195(1): 102-130 (2022) - [j41]Albert S. Berahas, Majid Jahani, Peter Richtárik, Martin Takác:
Quasi-Newton methods for machine learning: forget the past, just sample. Optim. Methods Softw. 37(5): 1668-1704 (2022) - [j40]Samuel Horváth, Lihua Lei, Peter Richtárik, Michael I. Jordan:
Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization. SIAM J. Math. Data Sci. 4(2): 634-648 (2022) - [j39]Wenlin Chen, Samuel Horváth, Peter Richtárik:
Optimal Client Sampling for Federated Learning. Trans. Mach. Learn. Res. 2022 (2022) - [j38]Samuel Horváth, Maziar Sanjabi, Lin Xiao, Peter Richtárik, Michael G. Rabbat:
FedShuffle: Recipes for Better Use of Local Work in Federated Learning. Trans. Mach. Learn. Res. 2022 (2022) - [c84]Xun Qian, Rustem Islamov, Mher Safaryan, Peter Richtárik:
Basis Matters: Better Communication-Efficient Second Order Methods for Federated Learning. AISTATS 2022: 680-720 - [c83]Adil Salim, Laurent Condat, Dmitry Kovalev, Peter Richtárik:
An Optimal Algorithm for Strongly Convex Minimization under Affine Constraints. AISTATS 2022: 4482-4498 - [c82]Elnur Gasanov, Ahmed Khaled, Samuel Horváth, Peter Richtárik:
FLIX: A Simple and Communication-Efficient Alternative to Local Methods in Federated Learning. AISTATS 2022: 11374-11421 - [c81]Majid Jahani, Sergey Rusakov, Zheng Shi, Peter Richtárik, Michael W. Mahoney, Martin Takác:
Doubly Adaptive Scaled Algorithm for Machine Learning Using Second-Order Information. ICLR 2022 - [c80]Konstantin Mishchenko, Bokun Wang, Dmitry Kovalev, Peter Richtárik:
IntSGD: Adaptive Floatless Compression of Stochastic Gradients. ICLR 2022 - [c79]Rafal Szlendak, Alexander Tyurin, Peter Richtárik:
Permutation Compressors for Provably Faster Distributed Nonconvex Optimization. ICLR 2022 - [c78]Konstantin Mishchenko, Ahmed Khaled, Peter Richtárik:
Proximal and Federated Random Reshuffling. ICML 2022: 15718-15749 - [c77]Konstantin Mishchenko, Grigory Malinovsky, Sebastian U. Stich, Peter Richtárik:
ProxSkip: Yes! Local Gradient Steps Provably Lead to Communication Acceleration! Finally! ICML 2022: 15750-15769 - [c76]Peter Richtárik, Igor Sokolov, Elnur Gasanov, Ilyas Fatkhullin, Zhize Li, Eduard Gorbunov:
3PC: Three Point Compressors for Communication-Efficient Distributed Training and a Better Theory for Lazy Aggregation. ICML 2022: 18596-18648 - [c75]Mher Safaryan, Rustem Islamov, Xun Qian, Peter Richtárik:
FedNL: Making Newton-Type Methods Applicable to Federated Learning. ICML 2022: 18959-19010 - [c74]Adil Salim, Lukang Sun, Peter Richtárik:
A Convergence Theory for SVGD in the Population Limit under Talagrand's Inequality T1. ICML 2022: 19139-19152 - [c73]Laurent Condat, Peter Richtárik:
MURANA: A Generic Framework for Stochastic Variance-Reduced Optimization. MSML 2022: 81-96 - [c72]Samuel Horváth, Chen-Yu Ho, Ludovit Horvath, Atal Narayan Sahu, Marco Canini, Peter Richtárik:
Natural Compression for Distributed Deep Learning. MSML 2022: 129-141 - [c71]Aleksandr Beznosikov, Peter Richtárik, Michael Diskin, Max Ryabinin, Alexander V. Gasnikov:
Distributed Methods with Compressed Communication for Solving Variational Inequalities, with Theoretical Guarantees. NeurIPS 2022 - [c70]Laurent Condat, Kai Yi, Peter Richtárik:
EF-BV: A Unified Theory of Error Feedback and Variance Reduction Mechanisms for Biased and Unbiased Compression in Distributed Optimization. NeurIPS 2022 - [c69]Slavomír Hanzely, Dmitry Kamzolov, Dmitry Pasechnyuk, Alexander V. Gasnikov, Peter Richtárik, Martin Takác:
A Damped Newton Method Achieves Global $\mathcal O \left(\frac{1}{k^2}\right)$ and Local Quadratic Convergence Rate. NeurIPS 2022 - [c68]Dmitry Kovalev, Aleksandr Beznosikov, Abdurakhmon Sadiev, Michael Persiianov, Peter Richtárik, Alexander V. Gasnikov:
Optimal Algorithms for Decentralized Stochastic Variational Inequalities. NeurIPS 2022 - [c67]Dmitry Kovalev, Alexander V. Gasnikov, Peter Richtárik:
Accelerated Primal-Dual Gradient Method for Smooth and Convex-Concave Saddle-Point Problems with Bilinear Coupling. NeurIPS 2022 - [c66]Grigory Malinovsky, Kai Yi, Peter Richtárik:
Variance Reduced ProxSkip: Algorithm, Theory and Application to Federated Learning. NeurIPS 2022 - [c65]Abdurakhmon Sadiev, Dmitry Kovalev, Peter Richtárik:
Communication Acceleration of Local Gradient Methods via an Accelerated Primal-Dual Algorithm with an Inexact Prox. NeurIPS 2022 - [c64]Bokun Wang, Mher Safaryan, Peter Richtárik:
Theoretically Better and Numerically Faster Distributed Optimization with Smoothness-Aware Quantization Techniques. NeurIPS 2022 - [c63]Haoyu Zhao, Boyue Li, Zhize Li, Peter Richtárik, Yuejie Chi:
BEER: Fast $O(1/T)$ Rate for Decentralized Nonconvex Optimization with Communication Compression. NeurIPS 2022 - [c62]Egor Shulgin, Peter Richtárik:
Shifted compression framework: generalizations and improvements. UAI 2022: 1813-1823 - [i157]Grigory Malinovsky, Konstantin Mishchenko, Peter Richtárik:
Server-Side Stepsizes and Sampling Without Replacement Provably Help in Federated Optimization. CoRR abs/2201.11066 (2022) - [i156]Haoyu Zhao, Boyue Li, Zhize Li, Peter Richtárik, Yuejie Chi:
BEER: Fast O(1/T) Rate for Decentralized Nonconvex Optimization with Communication Compression. CoRR abs/2201.13320 (2022) - [i155]Peter Richtárik, Igor Sokolov, Ilyas Fatkhullin, Elnur Gasanov, Zhize Li, Eduard Gorbunov:
3PC: Three Point Compressors for Communication-Efficient Distributed Training and a Better Theory for Lazy Aggregation. CoRR abs/2202.00998 (2022) - [i154]Alexander Tyurin, Peter Richtárik:
DASHA: Distributed Nonconvex Optimization with Communication Compression, Optimal Oracle Complexity, and No Client Synchronization. CoRR abs/2202.01268 (2022) - [i153]Dmitry Kovalev, Aleksandr Beznosikov, Abdurakhmon Sadiev, Michael Persiianov, Peter Richtárik, Alexander V. Gasnikov:
Optimal Algorithms for Decentralized Stochastic Variational Inequalities. CoRR abs/2202.02771 (2022) - [i152]Konstantin Burlachenko, Samuel Horváth, Peter Richtárik:
FL_PyTorch: optimization research simulator for federated learning. CoRR abs/2202.03099 (2022) - [i151]Konstantin Mishchenko, Grigory Malinovsky, Sebastian U. Stich, Peter Richtárik:
ProxSkip: Yes! Local Gradient Steps Provably Lead to Communication Acceleration! Finally! CoRR abs/2202.09357 (2022) - [i150]Samuel Horváth, Maziar Sanjabi, Lin Xiao, Peter Richtárik, Michael G. Rabbat:
FedShuffle: Recipes for Better Use of Local Work in Federated Learning. CoRR abs/2204.13169 (2022) - [i149]Grigory Malinovsky, Peter Richtárik:
Federated Random Reshuffling with Compression and Variance Reduction. CoRR abs/2205.03914 (2022) - [i148]Laurent Condat, Kai Yi, Peter Richtárik:
EF-BV: A Unified Theory of Error Feedback and Variance Reduction Mechanisms for Biased and Unbiased Compression in Distributed Optimization. CoRR abs/2205.04180 (2022) - [i147]Alexander Tyurin, Peter Richtárik:
A Computation and Communication Efficient Method for Distributed Nonconvex Problems in the Partial Participation Setting. CoRR abs/2205.15580 (2022) - [i146]Lukang Sun, Avetik G. Karagulyan, Peter Richtárik:
Convergence of Stein Variational Gradient Descent under a Weaker Smoothness Condition. CoRR abs/2206.00508 (2022) - [i145]Eduard Gorbunov, Samuel Horváth, Peter Richtárik, Gauthier Gidel:
Variance Reduction is an Antidote to Byzantines: Better Rates, Weaker Assumptions and Communication Compression as a Cherry on the Top. CoRR abs/2206.00529 (2022) - [i144]Lukang Sun, Adil Salim, Peter Richtárik:
Federated Learning with a Sampling Algorithm under Isoperimetry. CoRR abs/2206.00920 (2022) - [i143]Alexander Tyurin, Lukang Sun, Konstantin Burlachenko, Peter Richtárik:
Sharper Rates and Flexible Framework for Nonconvex SGD with Client and Data Sampling. CoRR abs/2206.02275 (2022) - [i142]Motasem Alfarra, Juan C. Pérez, Egor Shulgin, Peter Richtárik, Bernard Ghanem:
Certified Robustness in Federated Learning. CoRR abs/2206.02535 (2022) - [i141]Rustem Islamov, Xun Qian, Slavomír Hanzely, Mher Safaryan, Peter Richtárik:
Distributed Newton-Type Methods with Communication Compression and Bernoulli Aggregation. CoRR abs/2206.03588 (2022) - [i140]Abdurakhmon Sadiev, Grigory Malinovsky, Eduard Gorbunov, Igor Sokolov, Ahmed Khaled, Konstantin Burlachenko, Peter Richtárik:
Federated Optimization Algorithms with Random Reshuffling and Gradient Compression. CoRR abs/2206.07021 (2022) - [i139]Lukang Sun, Peter Richtárik:
A Note on the Convergence of Mirrored Stein Variational Gradient Descent under (L0, L1)-Smoothness Condition. CoRR abs/2206.09709 (2022) - [i138]Egor Shulgin, Peter Richtárik:
Shifted Compression Framework: Generalizations and Improvements. CoRR abs/2206.10452 (2022) - [i137]Abdurakhmon Sadiev, Dmitry Kovalev, Peter Richtárik:
Communication Acceleration of Local Gradient Methods via an Accelerated Primal-Dual Algorithm with Inexact Prox. CoRR abs/2207.03957 (2022) - [i136]Grigory Malinovsky, Kai Yi, Peter Richtárik:
Variance Reduced ProxSkip: Algorithm, Theory and Application to Federated Learning. CoRR abs/2207.04338 (2022) - [i135]Samuel Horváth, Konstantin Mishchenko, Peter Richtárik:
Adaptive Learning Rates for Faster Stochastic Gradient Methods. CoRR abs/2208.05287 (2022) - [i134]El Houcine Bergou, Konstantin Burlachenko, Aritra Dutta, Peter Richtárik:
Personalized Federated Learning with Communication Compression. CoRR abs/2209.05148 (2022) - [i133]Soumia Boucherouite, Grigory Malinovsky, Peter Richtárik, El Houcine Bergou:
Minibatch Stochastic Three Points Method for Unconstrained Smooth Minimization. CoRR abs/2209.07883 (2022) - [i132]Kaja Gruntkowska, Alexander Tyurin, Peter Richtárik:
EF21-P and Friends: Improved Theoretical Communication Complexity for Distributed Optimization with Bidirectional Compression. CoRR abs/2209.15218 (2022) - [i131]Lukang Sun, Peter Richtárik:
Improved Stein Variational Gradient Descent with Importance Weights. CoRR abs/2210.00462 (2022) - [i130]Laurent Condat, Ivan Agarský, Peter Richtárik:
Provably Doubly Accelerated Federated Learning: The First Theoretically Successful Combination of Local Training and Compressed Communication. CoRR abs/2210.13277 (2022) - [i129]Artavazd Maranjyan, Mher Safaryan, Peter Richtárik:
GradSkip: Communication-Accelerated Local Gradient Methods with Better Computational Complexity. CoRR abs/2210.16402 (2022) - [i128]Maksim Makarenko, Elnur Gasanov, Rustem Islamov, Abdurakhmon Sadiev, Peter Richtárik:
Adaptive Compression for Communication-Efficient Distributed Training. CoRR abs/2211.00188 (2022) - [i127]Michal Grudzien, Grigory Malinovsky, Peter Richtárik:
Can 5th Generation Local Training Methods Support Client Sampling? Yes! CoRR abs/2212.14370 (2022)
- 2021
- [j37]Filip Hanzely, Peter Richtárik, Lin Xiao:
Accelerated Bregman proximal gradient methods for relatively smooth convex optimization. Comput. Optim. Appl. 79(2): 405-440 (2021) - [j36]Filip Hanzely, Peter Richtárik:
Fastest rates for stochastic mirror descent methods. Comput. Optim. Appl. 79(3): 717-766 (2021) - [j35]Xun Qian, Zheng Qu, Peter Richtárik:
L-SVRG and L-Katyusha with Arbitrary Sampling. J. Mach. Learn. Res. 22: 112:1-112:47 (2021) - [j34]Robert M. Gower, Peter Richtárik, Francis R. Bach:
Stochastic quasi-gradient methods: variance reduction via Jacobian sketching. Math. Program. 188(1): 135-192 (2021) - [j33]Nicolas Loizou, Peter Richtárik:
Revisiting Randomized Gossip Algorithms: General Framework, Convergence Rates and Novel Block and Accelerated Protocols. IEEE Trans. Inf. Theory 67(12): 8300-8324 (2021) - [c61]Samuel Horváth, Aaron Klein, Peter Richtárik, Cédric Archambeau:
Hyperparameter Transfer Learning with Adaptive Complexity. AISTATS 2021: 1378-1386 - [c60]Eduard Gorbunov, Filip Hanzely, Peter Richtárik:
Local SGD: Unified Theory and New Efficient Methods. AISTATS 2021: 3556-3564 - [c59]Dmitry Kovalev, Anastasia Koloskova, Martin Jaggi, Peter Richtárik, Sebastian U. Stich:
A Linearly Convergent Algorithm for Decentralized Optimization: Sending Less Bits for Free! AISTATS 2021: 4087-4095 - [c58]Konstantin Burlachenko, Samuel Horváth, Peter Richtárik:
FL_PyTorch: optimization research simulator for federated learning. DistributedML@CoNEXT 2021: 1-7 - [c57]Samuel Horváth, Peter Richtárik:
A Better Alternative to Error Feedback for Communication-Efficient Distributed Learning. ICLR 2021 - [c56]Eduard Gorbunov, Konstantin Burlachenko, Zhize Li, Peter Richtárik:
MARINA: Faster Non-Convex Distributed Learning with Compression. ICML 2021: 3788-3798 - [c55]Rustem Islamov, Xun Qian, Peter Richtárik:
Distributed Second Order Methods with Fast Rates and Compressed Communication. ICML 2021: 4617-4628 - [c54]Dmitry Kovalev, Egor Shulgin, Peter Richtárik, Alexander Rogozin, Alexander V. Gasnikov:
ADOM: Accelerated Decentralized Optimization Method for Time-Varying Networks. ICML 2021: 5784-5793 - [c53]Zhize Li, Hongyan Bao, Xiangliang Zhang, Peter Richtárik:
PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization. ICML 2021: 6286-6295 - [c52]Mher Safaryan, Peter Richtárik:
Stochastic Sign Descent Methods: New Algorithms and Better Theory. ICML 2021: 9224-9234 - [c51]Peter Richtárik, Igor Sokolov, Ilyas Fatkhullin:
EF21: A New, Simpler, Theoretically Better, and Practically Faster Error Feedback. NeurIPS 2021: 4384-4396 - [c50]Zhize Li, Peter Richtárik:
CANITA: Faster Rates for Distributed Convex Optimization with Communication Compression. NeurIPS 2021: 13770-13781 - [c49]Dmitry Kovalev, Elnur Gasanov, Alexander V. Gasnikov, Peter Richtárik:
Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks. NeurIPS 2021: 22325-22335 - [c48]Mher Safaryan, Filip Hanzely, Peter Richtárik:
Smoothness Matrices Beat Smoothness Constants: Better Communication Compression Techniques for Distributed Optimization. NeurIPS 2021: 25688-25702 - [c47]Xun Qian, Peter Richtárik, Tong Zhang:
Error Compensated Distributed SGD Can Be Accelerated. NeurIPS 2021: 30401-30413 - [c46]Amedeo Sapio, Marco Canini, Chen-Yu Ho, Jacob Nelson, Panos Kalnis, Changhoon Kim, Arvind Krishnamurthy, Masoud Moshref, Dan R. K. Ports, Peter Richtárik:
Scaling Distributed Machine Learning with In-Network Aggregation. NSDI 2021: 785-808 - [i126]Konstantin Mishchenko, Ahmed Khaled, Peter Richtárik:
Proximal and Federated Random Reshuffling. CoRR abs/2102.06704 (2021) - [i125]Rustem Islamov, Xun Qian, Peter Richtárik:
Distributed Second Order Methods with Fast Rates and Compressed Communication. CoRR abs/2102.07158 (2021) - [i124]Mher Safaryan, Filip Hanzely, Peter Richtárik:
Smoothness Matrices Beat Smoothness Constants: Better Communication Compression Techniques for Distributed Optimization. CoRR abs/2102.07245 (2021) - [i123]Eduard Gorbunov, Konstantin Burlachenko, Zhize Li, Peter Richtárik:
MARINA: Faster Non-Convex Distributed Learning with Compression. CoRR abs/2102.07845 (2021) - [i122]Konstantin Mishchenko, Bokun Wang, Dmitry Kovalev, Peter Richtárik:
IntSGD: Floatless Compression of Stochastic Gradients. CoRR abs/2102.08374 (2021) - [i121]Dmitry Kovalev, Egor Shulgin, Peter Richtárik, Alexander Rogozin, Alexander V. Gasnikov:
ADOM: Accelerated Decentralized Optimization Method for Time-Varying Networks. CoRR abs/2102.09234 (2021) - [i120]Zheng Shi, Nicolas Loizou, Peter Richtárik, Martin Takác:
AI-SARAH: Adaptive and Implicit Stochastic Recursive Gradient Methods. CoRR abs/2102.09700 (2021) - [i119]Samuel Horváth, Aaron Klein, Peter Richtárik, Cédric Archambeau:
Hyperparameter Transfer Learning with Adaptive Complexity. CoRR abs/2102.12810 (2021) - [i118]Zhize Li, Peter Richtárik:
ZeroSARAH: Efficient Nonconvex Finite-Sum Optimization with Zero Full Gradient Computation. CoRR abs/2103.01447 (2021) - [i117]Grigory Malinovsky, Alibek Sailanbayev, Peter Richtárik:
Random Reshuffling with Variance Reduction: New Analysis and Better Rates. CoRR abs/2104.09342 (2021) - [i116]Mher Safaryan, Rustem Islamov, Xun Qian, Peter Richtárik:
FedNL: Making Newton-Type Methods Applicable to Federated Learning. CoRR abs/2106.02969 (2021) - [i115]Laurent Condat, Peter Richtárik:
MURANA: A Generic Framework for Stochastic Variance-Reduced Optimization. CoRR abs/2106.03056 (2021) - [i114]Adil Salim, Lukang Sun, Peter Richtárik:
Complexity Analysis of Stein Variational Gradient Descent Under Talagrand's Inequality T1. CoRR abs/2106.03076 (2021) - [i113]Bokun Wang, Mher Safaryan, Peter Richtárik:
Smoothness-Aware Quantization Techniques. CoRR abs/2106.03524 (2021) - [i112]Dmitry Kovalev, Elnur Gasanov, Peter Richtárik, Alexander V. Gasnikov:
Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks. CoRR abs/2106.04469 (2021) - [i111]Peter Richtárik, Igor Sokolov, Ilyas Fatkhullin:
EF21: A New, Simpler, Theoretically Better, and Practically Faster Error Feedback. CoRR abs/2106.05203 (2021) - [i110]Jianyu Wang, Zachary Charles, Zheng Xu, Gauri Joshi, H. Brendan McMahan, Blaise Agüera y Arcas, Maruan Al-Shedivat, Galen Andrew, Salman Avestimehr, Katharine Daly, Deepesh Data, Suhas N. Diggavi, Hubert Eichner, Advait Gadhikar, Zachary Garrett, Antonious M. Girgis, Filip Hanzely, Andrew Hard, Chaoyang He, Samuel Horváth, Zhouyuan Huo, Alex Ingerman, Martin Jaggi, Tara Javidi, Peter Kairouz, Satyen Kale, Sai Praneeth Karimireddy, Jakub Konecný, Sanmi Koyejo, Tian Li, Luyang Liu, Mehryar Mohri, Hang Qi, Sashank J. Reddi, Peter Richtárik, Karan Singhal, Virginia Smith, Mahdi Soltanolkotabi, Weikang Song, Ananda Theertha Suresh, Sebastian U. Stich, Ameet Talwalkar, Hongyi Wang, Blake E. Woodworth, Shanshan Wu, Felix X. Yu, Honglin Yuan, Manzil Zaheer, Mi Zhang, Tong Zhang, Chunxiang Zheng, Chen Zhu, Wennan Zhu:
A Field Guide to Federated Optimization. CoRR abs/2107.06917 (2021) - [i109]Zhize Li, Peter Richtárik:
CANITA: Faster Rates for Distributed Convex Optimization with Communication Compression. CoRR abs/2107.09461 (2021) - [i108]Haoyu Zhao, Zhize Li, Peter Richtárik:
FedPAGE: A Fast Local Stochastic Gradient Method for Communication-Efficient Federated Learning. CoRR abs/2108.04755 (2021) - [i107]Majid Jahani, Sergey Rusakov, Zheng Shi, Peter Richtárik, Michael W. Mahoney, Martin Takác:
Doubly Adaptive Scaled Algorithm for Machine Learning Using Second-Order Information. CoRR abs/2109.05198 (2021) - [i106]Ilyas Fatkhullin, Igor Sokolov, Eduard Gorbunov, Zhize Li, Peter Richtárik:
EF21 with Bells & Whistles: Practical Algorithmic Extensions of Modern Error Feedback. CoRR abs/2110.03294 (2021) - [i105]Rafal Szlendak, Alexander Tyurin, Peter Richtárik:
Permutation Compressors for Provably Faster Distributed Nonconvex Optimization. CoRR abs/2110.03300 (2021) - [i104]Aleksandr Beznosikov, Peter Richtárik, Michael Diskin, Max Ryabinin, Alexander V. Gasnikov:
Distributed Methods with Compressed Communication for Solving Variational Inequalities, with Theoretical Guarantees. CoRR abs/2110.03313 (2021) - [i103]Xun Qian, Rustem Islamov, Mher Safaryan, Peter Richtárik:
Basis Matters: Better Communication-Efficient Second Order Methods for Federated Learning. CoRR abs/2111.01847 (2021) - [i102]Elnur Gasanov, Ahmed Khaled, Samuel Horváth, Peter Richtárik:
FLIX: A Simple and Communication-Efficient Alternative to Local Methods in Federated Learning. CoRR abs/2111.11556 (2021) - [i101]Haoyu Zhao, Konstantin Burlachenko, Zhize Li, Peter Richtárik:
Faster Rates for Compressed Federated Learning with Client-Variance Reduction. CoRR abs/2112.13097 (2021) - [i100]Dmitry Kovalev, Alexander V. Gasnikov, Peter Richtárik:
Accelerated Primal-Dual Gradient Method for Smooth and Convex-Concave Saddle-Point Problems with Bilinear Coupling. CoRR abs/2112.15199 (2021)
- 2020
- [j32]Nicolas Loizou, Peter Richtárik:
Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods. Comput. Optim. Appl. 77(3): 653-710 (2020) - [j31]Robert M. Gower, Mark Schmidt, Francis R. Bach, Peter Richtárik:
Variance-Reduced Methods for Machine Learning. Proc. IEEE 108(11): 1968-1983 (2020) - [j30]El Houcine Bergou, Eduard Gorbunov, Peter Richtárik:
Stochastic Three Points Method for Unconstrained Smooth Minimization. SIAM J. Optim. 30(4): 2726-2749 (2020) - [j29]Peter Richtárik, Martin Takác:
Stochastic Reformulations of Linear Systems: Algorithms and Convergence Theory. SIAM J. Matrix Anal. Appl. 41(2): 487-524 (2020) - [j28]Nicolas Loizou, Peter Richtárik:
Convergence Analysis of Inexact Randomized Iterative Methods. SIAM J. Sci. Comput. 42(6): A3979-A4016 (2020) - [j27]Aritra Dutta, Filip Hanzely, Jingwei Liang, Peter Richtárik:
Best Pair Formulation & Accelerated Scheme for Non-Convex Principal Component Pursuit. IEEE Trans. Signal Process. 68: 6128-6141 (2020) - [c45]Adel Bibi, El Houcine Bergou, Ozan Sener, Bernard Ghanem, Peter Richtárik:
A Stochastic Derivative-Free Optimization Method with Importance Sampling: Theory and Learning to Control. AAAI 2020: 3275-3282 - [c44]Eduard Gorbunov, Filip Hanzely, Peter Richtárik:
A Unified Theory of SGD: Variance Reduction, Sampling, Quantization and Coordinate Descent. AISTATS 2020: 680-690 - [c43]Ahmed Khaled, Konstantin Mishchenko, Peter Richtárik:
Tighter Theory for Local SGD on Identical and Heterogeneous Data. AISTATS 2020: 4519-4529 - [c42]Konstantin Mishchenko, Dmitry Kovalev, Egor Shulgin, Peter Richtárik, Yura Malitsky:
Revisiting Stochastic Extragradient. AISTATS 2020: 4573-4582 - [c41]Dmitry Kovalev, Samuel Horváth, Peter Richtárik:
Don't Jump Through Hoops and Remove Those Loops: SVRG and Katyusha are Better Without the Outer Loop. ALT 2020: 451-467 - [c40]Eduard Gorbunov, Adel Bibi, Ozan Sener, El Houcine Bergou, Peter Richtárik:
A Stochastic Derivative Free Optimization Method with Momentum. ICLR 2020 - [c39]Filip Hanzely, Nikita Doikov, Yurii E. Nesterov, Peter Richtárik:
Stochastic Subspace Cubic Newton Method. ICML 2020: 4027-4038 - [c38]Filip Hanzely, Dmitry Kovalev, Peter Richtárik:
Variance Reduced Coordinate Descent with Acceleration: New Method With a Surprising Application to Finite-Sum Problems. ICML 2020: 4039-4048 - [c37]Zhize Li, Dmitry Kovalev, Xun Qian, Peter Richtárik:
Acceleration for Compressed Gradient Descent in Distributed and Federated Optimization. ICML 2020: 5895-5904 - [c36]Grigory Malinovskiy, Dmitry Kovalev, Elnur Gasanov, Laurent Condat, Peter Richtárik:
From Local SGD to Local Fixed-Point Methods for Federated Learning. ICML 2020: 6692-6701 - [c35]Eduard Gorbunov, Dmitry Kovalev, Dmitry Makarenko, Peter Richtárik:
Linearly Converging Error Compensated SGD. NeurIPS 2020 - [c34]Filip Hanzely, Slavomír Hanzely, Samuel Horváth, Peter Richtárik:
Lower Bounds and Optimal Algorithms for Personalized Federated Learning. NeurIPS 2020 - [c33]Dmitry Kovalev, Adil Salim, Peter Richtárik:
Optimal and Practical Algorithms for Smooth and Strongly Convex Decentralized Optimization. NeurIPS 2020 - [c32]Konstantin Mishchenko, Ahmed Khaled, Peter Richtárik:
Random Reshuffling: Simple Analysis with Vast Improvements. NeurIPS 2020 - [c31]Adil Salim, Peter Richtárik:
Primal Dual Interpretation of the Proximal Stochastic Gradient Langevin Algorithm. NeurIPS 2020 - [c30]Konstantin Mishchenko, Filip Hanzely, Peter Richtárik:
99% of Worker-Master Communication in Distributed Optimization Is Not Needed. UAI 2020: 979-988 - [i99]Ahmed Khaled, Peter Richtárik:
Better Theory for SGD in the Nonconvex World. CoRR abs/2002.03329 (2020) - [i98]Filip Hanzely, Dmitry Kovalev, Peter Richtárik:
Variance Reduced Coordinate Descent with Acceleration: New Method With a Surprising Application to Finite-Sum Problems. CoRR abs/2002.04670 (2020) - [i97]Samuel Horváth, Lihua Lei, Peter Richtárik, Michael I. Jordan:
Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization. CoRR abs/2002.05359 (2020) - [i96]Filip Hanzely, Peter Richtárik:
Federated Learning of a Mixture of Global and Local Models. CoRR abs/2002.05516 (2020) - [i95]Mher Safaryan, Egor Shulgin, Peter Richtárik:
Uncertainty Principle for Communication Compression in Distributed and Federated Learning and the Search for an Optimal Compressor. CoRR abs/2002.08958 (2020) - [i94]Filip Hanzely, Nikita Doikov, Peter Richtárik, Yurii E. Nesterov:
Stochastic Subspace Cubic Newton Method. CoRR abs/2002.09526 (2020) - [i93]Dmitry Kovalev, Robert M. Gower, Peter Richtárik, Alexander Rogozin:
Fast Linear Convergence of Randomized BFGS. CoRR abs/2002.11337 (2020) - [i92]Zhize Li, Dmitry Kovalev, Xun Qian, Peter Richtárik:
Acceleration for Compressed Gradient Descent in Distributed and Federated Optimization. CoRR abs/2002.11364 (2020) - [i91]Aleksandr Beznosikov, Samuel Horváth, Peter Richtárik, Mher Safaryan:
On Biased Compression for Distributed Learning. CoRR abs/2002.12410 (2020) - [i90]Grigory Malinovsky, Dmitry Kovalev, Elnur Gasanov, Laurent Condat, Peter Richtárik:
From Local SGD to Local Fixed Point Methods for Federated Learning. CoRR abs/2004.01442 (2020) - [i89]Atal Narayan Sahu, Aritra Dutta, Aashutosh Tiwari, Peter Richtárik:
On the Convergence Analysis of Asynchronous SGD for Solving Consistent Linear Systems. CoRR abs/2004.02163 (2020) - [i88]Adil Salim, Laurent Condat, Konstantin Mishchenko, Peter Richtárik:
Dualize, Split, Randomize: Fast Nonsmooth Optimization Algorithms. CoRR abs/2004.02635 (2020) - [i87]Motasem Alfarra, Slavomír Hanzely, Alyazeed Albasyoni, Bernard Ghanem, Peter Richtárik:
Adaptive Learning of the Optimal Mini-Batch Size of SGD. CoRR abs/2005.01097 (2020) - [i86]Konstantin Mishchenko, Ahmed Khaled, Peter Richtárik:
Random Reshuffling: Simple Analysis with Vast Improvements. CoRR abs/2006.05988 (2020) - [i85]Zhize Li, Peter Richtárik:
A Unified Analysis of Stochastic Gradient Methods for Nonconvex Federated Optimization. CoRR abs/2006.07013 (2020) - [i84]Adil Salim, Peter Richtárik:
Primal Dual Interpretation of the Proximal Stochastic Gradient Langevin Algorithm. CoRR abs/2006.09270 (2020) - [i83]Samuel Horváth, Peter Richtárik:
A Better Alternative to Error Feedback for Communication-Efficient Distributed Learning. CoRR abs/2006.11077 (2020) - [i82]Ahmed Khaled, Othmane Sebbouh, Nicolas Loizou, Robert M. Gower, Peter Richtárik:
Unified Analysis of Stochastic Gradient Methods for Composite Convex and Smooth Optimization. CoRR abs/2006.11573 (2020) - [i81]Zhize Li, Hongyan Bao, Xiangliang Zhang, Peter Richtárik:
PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization. CoRR abs/2008.10898 (2020) - [i80]Robert M. Gower, Mark Schmidt, Francis R. Bach, Peter Richtárik:
Variance-Reduced Methods for Machine Learning. CoRR abs/2010.00892 (2020) - [i79]Laurent Condat, Grigory Malinovsky, Peter Richtárik:
Distributed Proximal Splitting Algorithms with Rates and Acceleration. CoRR abs/2010.00952 (2020) - [i78]Filip Hanzely, Slavomír Hanzely, Samuel Horváth, Peter Richtárik:
Lower Bounds and Optimal Algorithms for Personalized Federated Learning. CoRR abs/2010.02372 (2020) - [i77]Alyazeed Albasyoni, Mher Safaryan, Laurent Condat, Peter Richtárik:
Optimal Gradient Compression for Distributed and Federated Learning. CoRR abs/2010.03246 (2020) - [i76]Eduard Gorbunov, Dmitry Kovalev, Dmitry Makarenko, Peter Richtárik:
Linearly Converging Error Compensated SGD. CoRR abs/2010.12292 (2020) - [i75]Wenlin Chen, Samuel Horváth, Peter Richtárik:
Optimal Client Sampling for Federated Learning. CoRR abs/2010.13723 (2020) - [i74]Dmitry Kovalev, Anastasia Koloskova, Martin Jaggi, Peter Richtárik, Sebastian U. Stich:
A Linearly Convergent Algorithm for Decentralized Optimization: Sending Less Bits for Free! CoRR abs/2011.01697 (2020) - [i73]Eduard Gorbunov, Filip Hanzely, Peter Richtárik:
Local SGD: Unified Theory and New Efficient Methods. CoRR abs/2011.02828 (2020)
2010 – 2019
- 2019
- [j26]Lam M. Nguyen, Phuong Ha Nguyen, Peter Richtárik, Katya Scheinberg, Martin Takác, Marten van Dijk:
New Convergence Aspects of Stochastic Gradient Algorithms. J. Mach. Learn. Res. 20: 176:1-176:49 (2019) - [j25]Ion Necoara, Peter Richtárik, Andrei Patrascu:
Randomized Projection Methods for Convex Feasibility: Conditioning and Convergence Rates. SIAM J. Optim. 29(4): 2814-2852 (2019) - [c29]Aritra Dutta, Filip Hanzely, Peter Richtárik:
A Nonconvex Projection Method for Robust PCA. AAAI 2019: 1468-1476 - [c28]Filip Hanzely, Peter Richtárik:
Accelerated Coordinate Descent with Arbitrary Sampling and Best Rates for Minibatches. AISTATS 2019: 304-312 - [c27]Nicolas Loizou, Michael G. Rabbat, Peter Richtárik:
Provably Accelerated Randomized Gossip Algorithms. ICASSP 2019: 7505-7509 - [c26]Samuel Horváth, Peter Richtárik:
Nonconvex Variance Reduced Optimization with Arbitrary Sampling. ICML 2019: 2781-2789 - [c25]Xun Qian, Zheng Qu, Peter Richtárik:
SAGA with Arbitrary Sampling. ICML 2019: 5190-5199 - [c24]Xun Qian, Peter Richtárik, Robert M. Gower, Alibek Sailanbayev, Nicolas Loizou, Egor Shulgin:
SGD with Arbitrary Sampling: General Analysis and Improved Rates. ICML 2019: 5200-5209 - [c23]Robert M. Gower, Dmitry Kovalev, Felix Lieder, Peter Richtárik:
RSN: Randomized Subspace Newton. NeurIPS 2019: 614-623 - [c22]Adil Salim, Dmitry Kovalev, Peter Richtárik:
Stochastic Proximal Langevin Algorithm: Potential Splitting and Nonasymptotic Rates. NeurIPS 2019: 6649-6661 - [c21]Jinhui Xiong, Peter Richtárik, Wolfgang Heidrich:
Stochastic Convolutional Sparse Coding. VMV 2019: 47-54 - [c20]Aritra Dutta, Peter Richtárik:
Online and Batch Supervised Background Estimation Via L1 Regression. WACV 2019: 541-550 - [i72]Xun Qian, Zheng Qu, Peter Richtárik:
SAGA with Arbitrary Sampling. CoRR abs/1901.08669 (2019) - [i71]Dmitry Kovalev, Samuel Horváth, Peter Richtárik:
Don't Jump Through Hoops and Remove Those Loops: SVRG and Katyusha are Better Without the Outer Loop. CoRR abs/1901.08689 (2019) - [i70]Konstantin Mishchenko, Eduard Gorbunov, Martin Takác, Peter Richtárik:
Distributed Learning with Compressed Gradient Differences. CoRR abs/1901.09269 (2019) - [i69]Filip Hanzely, Jakub Konecný, Nicolas Loizou, Peter Richtárik, Dmitry Grishchenko:
A Privacy Preserving Randomized Gossip Algorithm via Controlled Noise Insertion. CoRR abs/1901.09367 (2019) - [i68]Robert Mansel Gower, Nicolas Loizou, Xun Qian, Alibek Sailanbayev, Egor Shulgin, Peter Richtárik:
SGD: General Analysis and Improved Rates. CoRR abs/1901.09401 (2019) - [i67]Konstantin Mishchenko, Filip Hanzely, Peter Richtárik:
99% of Parallel Optimization is Inevitably a Waste of Time. CoRR abs/1901.09437 (2019) - [i66]Amedeo Sapio, Marco Canini, Chen-Yu Ho, Jacob Nelson, Panos Kalnis, Changhoon Kim, Arvind Krishnamurthy, Masoud Moshref, Dan R. K. Ports, Peter Richtárik:
Scaling Distributed Machine Learning with In-Network Aggregation. CoRR abs/1903.06701 (2019) - [i65]Nicolas Loizou, Peter Richtárik:
Convergence Analysis of Inexact Randomized Iterative Methods. CoRR abs/1903.07971 (2019) - [i64]Nicolas Loizou, Peter Richtárik:
Revisiting Randomized Gossip Algorithms: General Framework, Convergence Rates and Novel Block and Accelerated Protocols. CoRR abs/1905.08645 (2019) - [i63]Aritra Dutta, Filip Hanzely, Jingwei Liang, Peter Richtárik:
Best Pair Formulation & Accelerated Scheme for Non-convex Principal Component Pursuit. CoRR abs/1905.10598 (2019) - [i62]Samuel Horváth, Chen-Yu Ho, Ludovit Horvath, Atal Narayan Sahu, Marco Canini, Peter Richtárik:
Natural Compression for Distributed Deep Learning. CoRR abs/1905.10988 (2019) - [i61]Eduard Gorbunov, Filip Hanzely, Peter Richtárik:
A Unified Theory of SGD: Variance Reduction, Sampling, Quantization and Coordinate Descent. CoRR abs/1905.11261 (2019) - [i60]Filip Hanzely, Peter Richtárik:
One Method to Rule Them All: Variance Reduction for Data, Parameters and Many New Methods. CoRR abs/1905.11266 (2019) - [i59]Konstantin Mishchenko, Dmitry Kovalev, Egor Shulgin, Peter Richtárik, Yura Malitsky:
Revisiting Stochastic Extragradient. CoRR abs/1905.11373 (2019) - [i58]Aritra Dutta, El Houcine Bergou, Yunming Xiao, Marco Canini, Peter Richtárik:
Direct Nonlinear Acceleration. CoRR abs/1905.11692 (2019) - [i57]Adil Salim, Dmitry Kovalev, Peter Richtárik:
Stochastic Proximal Langevin Algorithm: Potential Splitting and Nonasymptotic Rates. CoRR abs/1905.11768 (2019) - [i56]Jinhui Xiong, Peter Richtárik, Wolfgang Heidrich:
Stochastic Convolutional Sparse Coding. CoRR abs/1909.00145 (2019) - [i55]Ahmed Khaled, Konstantin Mishchenko, Peter Richtárik:
First Analysis of Local GD on Heterogeneous Data. CoRR abs/1909.04715 (2019) - [i54]Ahmed Khaled, Peter Richtárik:
Gradient Descent with Compressed Iterates. CoRR abs/1909.04716 (2019) - [i53]Ahmed Khaled, Konstantin Mishchenko, Peter Richtárik:
Better Communication Complexity for Local SGD. CoRR abs/1909.04746 (2019) - [i52]Dmitry Kovalev, Konstantin Mishchenko, Peter Richtárik:
Stochastic Newton and Cubic Newton Methods with Simple Local Linear-Quadratic Rates. CoRR abs/1912.01597 (2019) - [i51]Sélim Chraibi, Ahmed Khaled, Dmitry Kovalev, Peter Richtárik, Adil Salim, Martin Takác:
Distributed Fixed Point Methods with Compressed Iterates. CoRR abs/1912.09925 (2019)
- 2018
- [j24]Jakub Konecný, Peter Richtárik:
Randomized Distributed Mean Estimation: Accuracy vs. Communication. Frontiers Appl. Math. Stat. 4: 62 (2018) - [j23]Dominik Csiba, Peter Richtárik:
Importance Sampling for Minibatches. J. Mach. Learn. Res. 19: 27:1-27:21 (2018) - [j22]Rachael Tappenden, Martin Takác, Peter Richtárik:
On the complexity of parallel coordinate descent. Optim. Methods Softw. 33(2): 372-395 (2018) - [j21]Antonin Chambolle, Matthias J. Ehrhardt, Peter Richtárik, Carola-Bibiane Schönlieb:
Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Applications. SIAM J. Optim. 28(4): 2783-2808 (2018) - [c19]Nicolas Loizou, Peter Richtárik:
Accelerated Gossip via Stochastic Heavy Ball Method. Allerton 2018: 927-934 - [c18]Dominik Csiba, Peter Richtárik:
Coordinate Descent Faceoff: Primal or Dual? ALT 2018: 246-267 - [c17]Nikita Doikov, Peter Richtárik:
Randomized Block Cubic Newton Method. ICML 2018: 1289-1297 - [c16]Lam M. Nguyen, Phuong Ha Nguyen, Marten van Dijk, Peter Richtárik, Katya Scheinberg, Martin Takác:
SGD and Hogwild! Convergence Without the Bounded Gradients Assumption. ICML 2018: 3747-3755 - [c15]Robert M. Gower, Filip Hanzely, Peter Richtárik, Sebastian U. Stich:
Accelerated Stochastic Matrix Inversion: General Theory and Speeding up BFGS Rules for Faster Second-Order Optimization. NeurIPS 2018: 1626-1636 - [c14]Filip Hanzely, Konstantin Mishchenko, Peter Richtárik:
SEGA: Variance Reduction via Gradient Sketching. NeurIPS 2018: 2086-2097 - [c13]Dmitry Kovalev, Peter Richtárik, Eduard Gorbunov, Elnur Gasanov:
Stochastic Spectral and Conjugate Descent Methods. NeurIPS 2018: 3362-3371 - [c12]Jakub Marecek, Peter Richtárik, Martin Takác:
Matrix Completion Under Interval Uncertainty: Highlights. ECML/PKDD (3) 2018: 621-625 - [i50]Lam M. Nguyen, Phuong Ha Nguyen, Marten van Dijk, Peter Richtárik, Katya Scheinberg, Martin Takác:
SGD and Hogwild! Convergence Without the Bounded Gradients Assumption. CoRR abs/1802.03801 (2018) - [i49]Robert M. Gower, Filip Hanzely, Peter Richtárik, Sebastian U. Stich:
Accelerated Stochastic Matrix Inversion: General Theory and Speeding up BFGS Rules for Faster Second-Order Optimization. CoRR abs/1802.04079 (2018) - [i48]Filip Hanzely, Peter Richtárik:
Fastest Rates for Stochastic Mirror Descent Methods. CoRR abs/1803.07374 (2018) - [i47]Aritra Dutta, Xin Li, Peter Richtárik:
Weighted Low-Rank Approximation of Matrices and Background Modeling. CoRR abs/1804.06252 (2018) - [i46]Aritra Dutta, Filip Hanzely, Peter Richtárik:
A Nonconvex Projection Method for Robust PCA. CoRR abs/1805.07962 (2018) - [i45]Filip Hanzely, Konstantin Mishchenko, Peter Richtárik:
SEGA: Variance Reduction via Gradient Sketching. CoRR abs/1809.03054 (2018) - [i44]Nicolas Loizou, Peter Richtárik:
Accelerated Gossip via Stochastic Heavy Ball Method. CoRR abs/1809.08657 (2018) - [i43]Nicolas Loizou, Michael G. Rabbat, Peter Richtárik:
Provably Accelerated Randomized Gossip Algorithms. CoRR abs/1810.13084 (2018) - [i42]Lam M. Nguyen, Phuong Ha Nguyen, Peter Richtárik, Katya Scheinberg, Martin Takác, Marten van Dijk:
New Convergence Aspects of Stochastic Gradient Algorithms. CoRR abs/1811.12403 (2018)
- 2017
- [j20]Jakub Marecek, Peter Richtárik, Martin Takác:
Matrix completion under interval uncertainty. Eur. J. Oper. Res. 256(1): 35-43 (2017) - [j19]Jakub Konecný, Peter Richtárik:
Semi-Stochastic Gradient Descent Methods. Frontiers Appl. Math. Stat. 3: 9 (2017) - [j18]Chenxin Ma, Jakub Konecný, Martin Jaggi, Virginia Smith, Michael I. Jordan, Peter Richtárik, Martin Takác:
Distributed optimization with arbitrary local solvers. Optim. Methods Softw. 32(4): 813-848 (2017) - [j17]Jakub Konecný, Zheng Qu, Peter Richtárik:
Semi-stochastic coordinate descent. Optim. Methods Softw. 32(5): 993-1005 (2017) - [j16]Robert M. Gower, Peter Richtárik:
Randomized Quasi-Newton Updates Are Linearly Convergent Matrix Inversion Algorithms. SIAM J. Matrix Anal. Appl. 38(4): 1380-1409 (2017) - [c11]Xin Li, Aritra Dutta, Peter Richtárik:
A Batch-Incremental Video Background Estimation Model Using Weighted Low-Rank Approximation of Matrices. ICCV Workshops 2017: 1835-1843 - [i41]Peter Richtárik, Martin Takác:
Stochastic Reformulations of Linear Systems: Algorithms and Convergence Theory. CoRR abs/1706.01108 (2017) - [i40]Antonin Chambolle, Matthias J. Ehrhardt, Peter Richtárik, Carola-Bibiane Schönlieb:
Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Applications. CoRR abs/1706.04957 (2017) - [i39]Aritra Dutta, Xin Li, Peter Richtárik:
A Batch-Incremental Video Background Estimation Model using Weighted Low-Rank Approximation of Matrices. CoRR abs/1707.00281 (2017) - [i38]Nicolas Loizou, Peter Richtárik:
Linearly convergent stochastic heavy ball method for minimizing generalization error. CoRR abs/1710.10737 (2017) - [i37]Aritra Dutta, Peter Richtárik:
Online and Batch Supervised Background Estimation via L1 Regression. CoRR abs/1712.02249 (2017) - [i36]Nicolas Loizou, Peter Richtárik:
Momentum and Stochastic Momentum for Stochastic Gradient, Newton, Proximal Point and Subspace Descent Methods. CoRR abs/1712.09677 (2017)
- 2016
- [j15]Peter Richtárik, Martin Takác:
Distributed Coordinate Descent Method for Learning with Big Data. J. Mach. Learn. Res. 17: 75:1-75:25 (2016) - [j14]Rachael Tappenden, Peter Richtárik, Jacek Gondzio:
Inexact Coordinate Descent: Complexity and Preconditioning. J. Optim. Theory Appl. 170(1): 144-176 (2016) - [j13]Jakub Konecný, Jie Liu, Peter Richtárik, Martin Takác:
Mini-Batch Semi-Stochastic Gradient Descent in the Proximal Setting. IEEE J. Sel. Top. Signal Process. 10(2): 242-255 (2016) - [j12]Peter Richtárik, Martin Takác:
Parallel coordinate descent methods for big data optimization. Math. Program. 156(1-2): 433-484 (2016) - [j11]Peter Richtárik, Martin Takác:
On optimal probabilities in stochastic coordinate descent methods. Optim. Lett. 10(6): 1233-1243 (2016) - [j10]Zheng Qu, Peter Richtárik:
Coordinate descent with arbitrary sampling I: algorithms and complexity. Optim. Methods Softw. 31(5): 829-857 (2016) - [j9]Zheng Qu, Peter Richtárik:
Coordinate descent with arbitrary sampling II: expected separable overapproximation. Optim. Methods Softw. 31(5): 858-884 (2016) - [j8]Olivier Fercoq, Peter Richtárik:
Optimization in High Dimensions via Accelerated, Parallel, and Proximal Coordinate Descent. SIAM Rev. 58(4): 739-771 (2016) - [c10]Nicolas Loizou, Peter Richtárik:
A new perspective on randomized gossip algorithms. GlobalSIP 2016: 440-444 - [c9]Zeyuan Allen Zhu, Zheng Qu, Peter Richtárik, Yang Yuan:
Even Faster Accelerated Coordinate Descent Using Non-Uniform Sampling. ICML 2016: 1110-1119 - [c8]Zheng Qu, Peter Richtárik, Martin Takác, Olivier Fercoq:
SDNA: Stochastic Dual Newton Ascent for Empirical Risk Minimization. ICML 2016: 1823-1832 - [c7]Robert M. Gower, Donald Goldfarb, Peter Richtárik:
Stochastic Block BFGS: Squeezing More Curvature out of Data. ICML 2016: 1869-1878 - [i35]Robert M. Gower, Peter Richtárik:
Randomized Quasi-Newton Updates are Linearly Convergent Matrix Inversion Algorithms. CoRR abs/1602.01768 (2016) - [i34]Dominik Csiba, Peter Richtárik:
Importance Sampling for Minibatches. CoRR abs/1602.02283 (2016) - [i33]Sashank J. Reddi, Jakub Konecný, Peter Richtárik, Barnabás Póczos, Alexander J. Smola:
AIDE: Fast and Communication Efficient Distributed Optimization. CoRR abs/1608.06879 (2016) - [i32]Jakub Konecný, H. Brendan McMahan, Daniel Ramage, Peter Richtárik:
Federated Optimization: Distributed Machine Learning for On-Device Intelligence. CoRR abs/1610.02527 (2016) - [i31]Nicolas Loizou, Peter Richtárik:
A New Perspective on Randomized Gossip Algorithms. CoRR abs/1610.04714 (2016) - [i30]Jakub Konecný, H. Brendan McMahan, Felix X. Yu, Peter Richtárik, Ananda Theertha Suresh, Dave Bacon:
Federated Learning: Strategies for Improving Communication Efficiency. CoRR abs/1610.05492 (2016) - [i29]Jakub Konecný, Peter Richtárik:
Randomized Distributed Mean Estimation: Accuracy vs Communication. CoRR abs/1611.07555 (2016)
- 2015
- [j7]Rachael Tappenden, Peter Richtárik, Burak Büke:
Separable approximations and decomposition methods for the augmented Lagrangian. Optim. Methods Softw. 30(3): 643-668 (2015) - [j6]Olivier Fercoq, Peter Richtárik:
Accelerated, Parallel, and Proximal Coordinate Descent. SIAM J. Optim. 25(4): 1997-2023 (2015) - [j5]Robert Mansel Gower, Peter Richtárik:
Randomized Iterative Methods for Linear Systems. SIAM J. Matrix Anal. Appl. 36(4): 1660-1690 (2015) - [c6]Dominik Csiba, Zheng Qu, Peter Richtárik:
Stochastic Dual Coordinate Ascent with Adaptive Probabilities. ICML 2015: 674-683 - [c5]Chenxin Ma, Virginia Smith, Martin Jaggi, Michael I. Jordan, Peter Richtárik, Martin Takác:
Adding vs. Averaging in Distributed Primal-Dual Optimization. ICML 2015: 1973-1982 - [c4]Zheng Qu, Peter Richtárik, Tong Zhang:
Quartz: Randomized Dual Coordinate Ascent with Arbitrary Sampling. NIPS 2015: 865-873 - [i28]Zheng Qu, Peter Richtárik, Martin Takác, Olivier Fercoq:
SDNA: Stochastic Dual Newton Ascent for Empirical Risk Minimization. CoRR abs/1502.02268 (2015) - [i27]Chenxin Ma, Virginia Smith, Martin Jaggi, Michael I. Jordan, Peter Richtárik, Martin Takác:
Adding vs. Averaging in Distributed Primal-Dual Optimization. CoRR abs/1502.03508 (2015) - [i26]Dominik Csiba, Zheng Qu, Peter Richtárik:
Stochastic Dual Coordinate Ascent with Adaptive Probabilities. CoRR abs/1502.08053 (2015) - [i25]Jakub Konecný, Jie Liu, Peter Richtárik, Martin Takác:
Mini-Batch Semi-Stochastic Gradient Descent in the Proximal Setting. CoRR abs/1504.04407 (2015) - [i24]Dominik Csiba, Peter Richtárik:
Primal Method for ERM with Flexible Mini-batching Schemes and Non-convex Losses. CoRR abs/1506.02227 (2015) - [i23]Martin Takác, Peter Richtárik, Nathan Srebro:
Distributed Mini-Batch SDCA. CoRR abs/1507.08322 (2015) - [i22]Chenxin Ma, Jakub Konecný, Martin Jaggi, Virginia Smith, Michael I. Jordan, Peter Richtárik, Martin Takác:
Distributed Optimization with Arbitrary Local Solvers. CoRR abs/1512.04039 (2015) - [i21]Robert Mansel Gower, Peter Richtárik:
Stochastic Dual Ascent for Solving Linear Systems. CoRR abs/1512.06890 (2015)
- 2014
- [j4]Peter Richtárik, Martin Takác:
Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function. Math. Program. 144(1-2): 1-38 (2014) - [c3]Olivier Fercoq, Zheng Qu, Peter Richtárik, Martin Takác:
Fast distributed coordinate descent for non-strongly convex losses. MLSP 2014: 1-6 - [i20]Olivier Fercoq, Zheng Qu, Peter Richtárik, Martin Takác:
Fast Distributed Coordinate Descent for Non-Strongly Convex Losses. CoRR abs/1405.5300 (2014) - [i19]Martin Takác, Jakub Marecek, Peter Richtárik:
Inequality-Constrained Matrix Completion: Adding the Obvious Helps! CoRR abs/1408.2467 (2014) - [i18]Jakub Konecný, Peter Richtárik:
Simple Complexity Analysis of Direct Search. CoRR abs/1410.0390 (2014) - [i17]Jakub Konecný, Jie Liu, Peter Richtárik, Martin Takác:
mS2GD: Mini-Batch Semi-Stochastic Gradient Descent in the Proximal Setting. CoRR abs/1410.4744 (2014) - [i16]Zheng Qu, Peter Richtárik, Tong Zhang:
Randomized Dual Coordinate Ascent with Arbitrary Sampling. CoRR abs/1411.5873 (2014) - [i15]Jakub Konecný, Zheng Qu, Peter Richtárik:
Semi-Stochastic Coordinate Descent. CoRR abs/1412.6293 (2014) - [i14]Zheng Qu, Peter Richtárik:
Coordinate Descent with Arbitrary Sampling I: Algorithms and Complexity. CoRR abs/1412.8060 (2014) - [i13]Zheng Qu, Peter Richtárik:
Coordinate Descent with Arbitrary Sampling II: Expected Separable Overapproximation. CoRR abs/1412.8063 (2014)
- 2013
- [c2]Martin Takác, Avleen Singh Bijral, Peter Richtárik, Nati Srebro:
Mini-Batch Primal and Dual Methods for SVMs. ICML (3) 2013: 1022-1030 - [i12]Martin Takác, Avleen Singh Bijral, Peter Richtárik, Nathan Srebro:
Mini-Batch Primal and Dual Methods for SVMs. CoRR abs/1303.2314 (2013) - [i11]Rachael Tappenden, Peter Richtárik, Jacek Gondzio:
Inexact Coordinate Descent: Complexity and Preconditioning. CoRR abs/1304.5530 (2013) - [i10]Rachael Tappenden, Peter Richtárik, Burak Büke:
Separable Approximations and Decomposition Methods for the Augmented Lagrangian. CoRR abs/1308.6774 (2013) - [i9]Olivier Fercoq, Peter Richtárik:
Smooth minimization of nonsmooth functions with parallel coordinate descent methods. CoRR abs/1309.5885 (2013) - [i8]Peter Richtárik, Martin Takác:
Distributed Coordinate Descent Method for Learning with Big Data. CoRR abs/1310.2059 (2013) - [i7]Peter Richtárik, Martin Takác:
On Optimal Probabilities in Stochastic Coordinate Descent Methods. CoRR abs/1310.3438 (2013) - [i6]Martin Takác, Selin Damla Ahipasaoglu, Ngai-Man Cheung, Peter Richtárik:
TOP-SPIN: TOPic discovery via Sparse Principal component INterference. CoRR abs/1311.1406 (2013) - [i5]Jakub Konecný, Peter Richtárik:
Semi-Stochastic Gradient Descent Methods. CoRR abs/1312.1666 (2013) - [i4]Olivier Fercoq, Peter Richtárik:
Accelerated, Parallel and Proximal Coordinate Descent. CoRR abs/1312.5799 (2013)
- 2012
- [j3]Peter Richtárik:
Approximate Level Method for Nonsmooth Convex Minimization. J. Optim. Theory Appl. 152(2): 334-350 (2012) - [i3]Peter Richtárik, Martin Takác:
Parallel Coordinate Descent Methods for Big Data Optimization. CoRR abs/1212.0873 (2012) - [i2]William Hulme, Peter Richtárik, Lynne McGuire, Alison Green:
Optimal diagnostic tests for sporadic Creutzfeldt-Jakob disease based on support vector machine classification of RT-QuIC data. CoRR abs/1212.2617 (2012) - [i1]Peter Richtárik, Martin Takác, Selin Damla Ahipasaoglu:
Alternating Maximization: Unifying Framework for 8 Sparse PCA Formulations and Efficient Parallel Codes. CoRR abs/1212.4137 (2012)
- 2011
- [j2]Peter Richtárik:
Improved Algorithms for Convex Minimization in Relative Scale. SIAM J. Optim. 21(3): 1141-1167 (2011) - [c1]Peter Richtárik, Martin Takác:
Efficient Serial and Parallel Coordinate Descent Methods for Huge-Scale Truss Topology Design. OR 2011: 27-32
- 2010
- [j1]Michel Journée, Yurii E. Nesterov, Peter Richtárik, Rodolphe Sepulchre:
Generalized Power Method for Sparse Principal Component Analysis. J. Mach. Learn. Res. 11: 517-553 (2010)