SIAM Journal on Mathematics of Data Science, Volume 7
Volume 7, Number 1, 2025
- Gage DeZoort, Boris Hanin: Principles for Initialization and Architecture Selection in Graph Neural Networks with ReLU Activations. 1-27
- Damir Filipovic, Michael D. Multerer, Paul Schneider: Adaptive Joint Distribution Learning. 28-54
- Chaoyu Liu, Zhonghua Qiao, Chao Li, Carola-Bibiane Schönlieb: Inverse Evolution Layers: Physics-Informed Regularizers for Image Segmentation. 55-85
- Giovanni Conforti, Alain Durmus, Marta Gentiloni Silveri: KL Convergence Guarantees for Score Diffusion Models under Minimal Data Assumptions. 86-109
- Rahul Parhi, Michael Unser: Function-Space Optimality of Neural Architectures with Multivariate Nonlinearities. 110-135
- Arvind K. Saibaba, Agnieszka Miedlar: Randomized Low-Rank Approximations beyond Gaussian Random Matrices. 136-162
- Haoyue Wang, Shibal Ibrahim, Rahul Mazumder: Nonparametric Finite Mixture Models with Possible Shape Constraints: A Cubic Newton Approach. 163-188
- Aimee Maurais, Terrence Alsup, Benjamin Peherstorfer, Youssef M. Marzouk: Multifidelity Covariance Estimation via Regression on the Manifold of Symmetric Positive Definite Matrices. 189-223
- Julia Lindberg, Carlos Améndola, Jose Israel Rodriguez: Estimating Gaussian Mixtures Using Sparse Polynomial Moment Systems. 224-252
- Jiamin Liu, Junzhuo Gao, Heng Lian: Kernel-Based Regularized Learning with Random Projections: Beyond Least Squares. 253-273
- Nicolas Lanzetti, Saverio Bolognani, Florian Dörfler: First-Order Conditions for Optimization in the Wasserstein Space. 274-300
- Zixuan Cang, Yaqi Wu, Yanxiang Zhao: Supervised Gromov-Wasserstein Optimal Transport with Metric-Preserving Constraints. 301-328
- Csaba Tóth, Harald Oberhauser, Zoltán Szabó: Random Fourier Signature Features. 329-354
- Anna C. Gilbert, Kevin O'Neill: CA-PCA: Manifold Dimension Estimation, Adapted for Curvature. 355-383
Volume 7, Number 2, 2025
- Gianluca Fabiani: Random Projection Neural Networks of Best Approximation: Convergence Theory and Practical Applications. 385-409
- Andrew Lee, Harlin Lee, Jose A. Perea, Nikolas Schonsheck, Madeleine Weinstein: O(k)-Equivariant Dimensionality Reduction on Stiefel Manifolds. 410-437
- Mingqi Wu, Qiang Sun: Ensemble Linear Interpolators: The Role of Ensembling. 438-467
- Jeremy E. Cohen, Valentin Leplat: Efficient Algorithms for Regularized Nonnegative Scale-Invariant Low-Rank Approximation Models. 468-494
Volume 7, Number 3, 2025
- Geyu Liang, Gavin Zhang, Salar Fattahi, Richard Y. Zhang: Simple Alternating Minimization Provably Solves Complete Dictionary Learning. 855-883
- Belen Martin-Urcelay, Christopher J. Rozell, Matthieu R. Bloch: Online Machine Teaching under Learner Uncertainty: Gradient Descent Learners of a Quadratic Loss. 884-905
- Mohammad Sadegh Salehi, Subhadip Mukherjee, Lindon Roberts, Matthias J. Ehrhardt: An Adaptively Inexact First-Order Method for Bilevel Optimization with Application to Hyperparameter Learning. 906-936
- Biraj Pandey, Bamdad Hosseini, Pau Batlle, Houman Owhadi: Diffeomorphic Measure Matching with Kernels for Generative Modeling. 937-964
- Konstantinos E. Nikolakakis, Amin Karbasi, Dionysis Kalogerias: Select without Fear: Almost All Minibatch Schedules Generalize Optimally. 965-992
- Jason M. Altschuler, Kunal Talwar: Resolving the Mixing Time of the Langevin Algorithm to Its Stationary Distribution for Log-Concave Sampling. 993-1020
- Suzanna Parkinson, Greg Ongie, Rebecca Willett: ReLU Neural Networks with Linear Layers Are Biased towards Single- and Multi-index Models. 1021-1052
- Yulei Liao, Pingbing Ming: Spectral Barron Space for Deep Neural Network Approximation. 1053-1076
- Ery Arias-Castro, Siddharth Vishwanath: Stability of Sequential Lateration and of Stress Minimization in the Presence of Noise. 1077-1097
- Stanislav Budzinskiy: When Big Data Actually Are Low-Rank, or Entrywise Approximation of Certain Function-Generated Matrices. 1098-1122
Volume 7, Number 4, 2025
- Andraz Jelincic, Jiajie Tao, William F. Turner, Thomas Cass, James M. Foster, Hao Ni: Generative Modeling of Lévy Area for High Order SDE Simulation. 1541-1567
- Rajdeep Haldar, Qifan Song: On Neural Network Approximation of Ideal Adversarial Attack and Convergence of Adversarial Training. 1568-1593
- Dimitri Meunier, Zhu Li, Arthur Gretton, Samory Kpotufe: Nonlinear Meta-learning Can Guarantee Faster Rates. 1594-1615
- Sebastian Hofmann, Alfio Borzì: The Pontryagin Maximum Principle for Training Convolutional Neural Networks. 1616-1642
- Vitaliy A. Kurlin: Complete and Continuous Invariants of 1-Periodic Sequences in Polynomial Time. 1643-1663
- Bernard Bercu, Jérémie Bigot, Gauthier Thurin: Stochastic Optimal Transport in Banach Spaces for Regularized Estimation of Multivariate Quantiles. 1664-1689
- Erdong Guo, David Draper: Representation Theorems for Matrix Product States. 1690-1704
- Tamir Bendory, Nadav Dym, Dan Edidin, Arun Suresh: Phase Retrieval with Semialgebraic and ReLU Neural Network Priors. 1705-1728
- Chaoyan Huang, Zhongming Wu, Yanqi Cheng, Tieyong Zeng, Carola-Bibiane Schönlieb, Angelica I. Avilés-Rivero: Deep Block Proximal Linearized Minimization Algorithm for Nonconvex Inverse Problems. 1729-1754
- Yunfei Yang, Han Feng, Ding-Xuan Zhou: On the Rates of Convergence for Learning with Convolutional Neural Networks. 1755-1772
- Rishi Sonthalia, Anna Seigal, Guido Montúfar: Supermodular Rank: Set Function Decomposition and Optimization. 1773-1800