


7th L4DC 2025: Ann Arbor, MI, USA
- Necmiye Ozay, Laura Balzano, Dimitra Panagou, Alessandro Abate: 7th Annual Learning for Dynamics & Control Conference, Ann Arbor, MI, USA, 4-6 June 2025. Proceedings of Machine Learning Research 283, PMLR 2025
- Ingvar M. Ziemann, Nikolai Matni, George J. Pappas: State space models, emergence, and ergodicity: How many parameters are needed for stable predictions? 1-11
- Fatemeh Ghaffari, Xuchuang Wang, Jinhang Zuo, Mohammad Hajiesmaili: Multi-agent Stochastic Bandits Robust to Adversarial Corruptions. 12-25
- Ingvar M. Ziemann: A Short Information-Theoretic Analysis of Linear Auto-Regressive Learning. 26-30
- Batuhan Yardim, Niao He: Exploiting Approximate Symmetry for Efficient Multi-Agent Reinforcement Learning. 31-44
- Anjian Li, Zihan Ding, Adji Bousso Dieng, Ryne Beeson: DiffuSolve: Diffusion-based Solver for Non-convex Trajectory Optimization. 45-58
- Jingyun Ning, Madhur Behl: DKMGP: A Gaussian Process Approach to Multi-Task and Multi-Step Vehicle Dynamics Modeling in Autonomous Racing. 59-71
- Behrad Moniri, Hamed Hassani: Asymptotics of Linear Regression with Linearly Dependent Data. 72-85
- Emi Soroka, Rohan Sinha, Sanjay Lall: Learning Temporal Logic Predicates from Data with Statistical Guarantees. 86-98
- Anant A. Joshi, Heng-Sheng Chang, Amirhossein Taghvaei, Prashant G. Mehta, Sean P. Meyn: Interacting Particle Systems for Fast Linear Quadratic RL. 99-111
- Hansung Kim, Edward L. Zhu, Chang Seok Lim, Francesco Borrelli: Learning Two-agent Motion Planning Strategies from Generalized Nash Equilibrium for Model Predictive Control. 112-123
- Vinod Raman, Unique Subedi, Ambuj Tewari: The Complexity of Sequential Prediction in Dynamical Systems. 124-138
- Feihan Li, Abulikemu Abuduweili, Yifan Sun, Rui Chen, Weiye Zhao, Changliu Liu: Continual Learning and Lifting of Koopman Dynamics for Linear Control of Legged Robots. 136-148
- Mostafa M. Shibl, Wesley Suttle, Vijay Gupta: Scalable Natural Policy Gradient for General-Sum Linear Quadratic Games with Known Parameters. 139-152
- William D. Compton, Max H. Cohen, Aaron D. Ames: Learning for Layered Safety-Critical Control with Predictive Control Barrier Functions. 153-165
- Bendegúz M. Györök, Jan H. Hoekstra, Johan Kon, Tamas Peni, Maarten Schoukens, Roland Tóth: Orthogonal projection-based regularization for efficient model augmentation. 166-178
- Michael Cummins, Guner Dilsad Er, Michael Muehlebach: Controlling Participation in Federated Learning with Feedback. 174-186
- Luke Bhan, Peijia Qin, Miroslav Krstic, Yuanyuan Shi: Neural Operators for Predictor Feedback Control of Nonlinear Delay Systems. 179-193
- Jose Leopoldo Contreras, Ola Shorinwa, Mac Schwager: Safe, Out-of-Distribution-Adaptive MPC with Conformalized Neural Network Ensembles. 194-207
- Haruto Nakashima, Siddhartha Ganguly, Kohei Morimoto, Kenji Kashima: Formation Shape Control using the Gromov-Wasserstein Metric. 208-220
- Pol Mestres, Arnau Marzabal, Jorge Cortés: Anytime Safe Reinforcement Learning. 221-232
- Onno Eberhard, Claire Vernade, Michael Muehlebach: A Pontryagin Perspective on Reinforcement Learning. 233-244
- Ruhan Wang, Dongruo Zhou: Safe Decision Transformer with Learning-based Constraints. 245-258
- Sunil Madhow, Dan Qiao, Ming Yin, Yuxiang Wang: Rates for Offline Reinforcement Learning with Adaptively Collected Data. 259-271
- Matteo Cercola, Nicola Gatti, Pedro Huertas-Leyva, Benedetto Carambia, Simone Formentin: Automating the loop in traffic incident management on highway. 272-284
- Yifan Sun, Feihan Li, Weiye Zhao, Rui Chen, Tianhao Wei, Changliu Liu: Learn With Imagination: Safe Set Guided State-wise Constrained Policy Optimization. 298-309
- Arne Troch, Kevin Mets, Siegfried Mercelis: Action-Conditioned Hamiltonian Generative Networks (AC-HGN) for Supervised and Reinforcement Learning. 310-322
- Joshua Ott, Mykel J. Kochenderfer, Stephen Boyd: Informative Input Design for Dynamic Mode Decomposition. 336-349
- Dima Tretiak, Anastasia Bizyaeva, J. Nathan Kutz, Steven L. Brunton: Physics-Enforced Reservoir Computing for Forecasting Spatiotemporal Systems. 350-364
- Will Sharpless, Zeyuan Feng, Somil Bansal, Sylvia L. Herbert: Linear Supervision for Nonlinear, High-Dimensional Neural Control and Differential Games. 365-377
- Yeoneung Kim, Gihun Kim, Jiwhan Park, Insoon Yang: Approximate Thompson Sampling for Learning Linear Quadratic Regulators with $O(\sqrt{T})$ Regret. 378-391
- Yang Zheng, Chih-Fan Pai, Yujie Tang: Extended Convex Lifting for Policy Optimization of Optimal and Robust Control. 392-404
- Saray Bakker, Rodrigo Pérez-Dattari, Cosimo Della Santina, Wendelin Böhmer, Javier Alonso-Mora: TamedPUMA: safe and stable imitation learning with geometric fabrics. 405-418
- Vasanth Reddy Baddam, Hoda Eldardiry, Almuatazbellah Boker: Data-Driven Near-Optimal Control of Nonlinear Systems Over Finite Horizon. 419-430
- Rikhat Akizhanov, Victor Dhédin, Majid Khadiv, Ivan Laptev: Learning Feasible Transitions for Efficient Contact Planning. 431-442
- Leon Khalyavin, Alessio Moreschini, Thomas Parisini: Learning Kolmogorov-Arnold Neural Activation Functions by Infinite-Dimensional Optimization. 443-455
- Daniel Arnström, André M. H. Teixeira: Data-Driven and Stealthy Deactivation of Safety Filters. 456-468
- Rishabh Agrawal, Nathan Dahlin, Rahul Jain, Ashutosh Nayyar: Conditional Kernel Imitation Learning for Continuous State Environments. 469-483
- Yuhang Mei, Mohammad Al-Jarrah, Amirhossein Taghvaei, Yongxin Chen: Flow matching for stochastic linear control systems. 484-496
- Christian Lagemann, Ludger Paehler, Jared Callaham, Sajeda Mokbel, Samuel Ahnert, Kai Lagemann, Esther Lagemann, Nikolaus A. Adams, Steven L. Brunton: HydroGym: A Reinforcement Learning Platform for Fluid Dynamics. 497-512
- Hanjiang Hu, Changliu Liu: Safe PDE Boundary Control with Neural Operators. 513-526
- Nicolas Christianson, Wenqi Cui, Steven H. Low, Weiwei Yang, Baosen Zhang: Fast and Reliable N - k Contingency Screening with Input-Convex Neural Networks. 527-539
- Vinay Kanakeri, Aritra Mitra: Outlier-Robust Linear System Identification Under Heavy-Tailed Noise. 540-551
- James Wang, Bruce D. Lee, Ingvar M. Ziemann, Nikolai Matni: Logarithmic Regret for Nonlinear Control. 552-565
- Ahmad Ahmad, Mehdi Kermanshah, Kevin Leahy, Zachary Serlin, Ho Chit Siu, Makai Mann, Cristian-Ioan Vasile, Roberto Tron, Calin Belta: Accelerating Proximal Policy Optimization Learning Using Task Prediction for Solving Environments with Delayed Rewards. 566-578
- Kohei Morimoto, Kenji Kashima: Linear System Identification from Snapshot Data by Schrodinger bridge. 579-590
- Kinjal Bhar, He Bai, Jemin George, Carl E. Busart: Scalability Enhancement and Data-Heterogeneity Awareness in Gradient Tracking based Decentralized Bayesian Learning. 591-605
- Jayanth Bhargav, Shreyas Sundaram, Mahsa Ghasemi: Sensor Scheduling in Intrusion Detection Games with Uncertain Payoffs. 606-618
- Kishan Panaganti, Zaiyan Xu, Dileep Kalathil, Mohammad Ghavamzadeh: Bridging Distributionally Robust Learning and Offline RL: An Approach to Mitigate Distribution Shift and Partial Data Coverage. 619-634
- Michael Tang, Miroslav Krstic, Jorge Poveda: Stochastic Real-Time Deception in Nash Equilibrium Seeking for Games with Quadratic Payoffs. 635-646
- Vijeth Hebbar, Cedric Langbort: Responding to Promises: No-regret learning against followers with memory. 647-659
- Fethi Bencherki, Anders Rantzer: Adaptive Control of Positive Systems with Application to Learning SSP. 660-672
- Michael Cummins, Alberto Padoan, Keith Moffat, Florian Dörfler, John Lygeros: DeePC-Hunt: Data-enabled Predictive Control Hyperparameter Tuning via Differentiable Optimization. 673-685
- Murad Dawood, Ahmed Shokry, Maren Bennewitz: A Dynamic Safety Shield for Safe and Efficient Reinforcement Learning of Navigation Tasks. 686-697
- Chenggang Wang, Xinyi Wang, Yutong Dong, Lei Song, Xinping Guan: Multi-Constraint Safe Reinforcement Learning via Closed-form Solution for Log-Sum-Exp Approximation of Control Barrier Functions. 698-710
- Sébastien Labbé, Andrea Del Prete: Analytical Integral Global Optimization. 711-722
- Yang Hu, Haitong Ma, Na Li, Bo Dai: Efficient Duple Perturbation Robustness in Low-rank MDPs. 723-737
- Xiaoshan Lin, Sadik Bera Yüksel, Yasin Yazicioglu, Derya Aksaray: Probabilistic Satisfaction of Temporal Logic Constraints in Reinforcement Learning via Adaptive Policy-Switching. 738-749
- Bosen Lian, Wenqian Xue, Nhan Nguyen: Robust Inverse Reinforcement Learning Control with Unknown States. 750-762
- Jun Wang, Hosein Hasanbeig, Kaiyuan Tan, Zihe Sun, Yiannis Kantaros: Mission-driven Exploration for Accelerated Deep Reinforcement Learning with Temporal Logic Task Specifications. 763-776
- Tzu-Yuan Huang, Armin Lederer, Nicolas Hoischen, Jan Brüdigam, Xuehua Xiao, Stefan Sosnowski, Sandra Hirche: Toward Near-Globally Optimal Nonlinear Model Predictive Control via Diffusion Models. 777-790
- Ralf Römer, Alexander von Rohr, Angela P. Schoellig: Diffusion Predictive Control with Constraints. 791-803
- Thomas A. Henzinger, Fabian Kresse, Kaushik Mallik, Emily Yu, Djordje Zikelic: Predictive Monitoring of Black-Box Dynamical Systems. 804-816
- Yuxi Wang, Peng Wu, Mahdi Imani: Federated Posterior Sharing for Multi-Agent Systems in Uncertain Environments. 817-829
- Amirhossein Ravari, Seyede Fatemeh Ghoreishi, Tian Lan, Nathaniel D. Bastian, Mahdi Imani: Hybrid Modeling of Heterogeneous Human Teams for Collaborative Decision Processes. 830-843
- Xiyuan Zhang, Daniel Ochoa, Regina Talonia, Jorge Poveda: Deep Source-Seekers with Obstacle Avoidance: Adaptive Hybrid Control with Transformers In-The-Loop. 844-855
- Sean Anderson, João Pedro Hespanha: Learning with contextual information in non-stationary environments. 856-868
- Leonard Jung, Alexander Estornell, Michael Everett: Contingency Constrained Planning with MPPI within MPPI. 869-880
- Zhuoyu Xiao, Uday V. Shanbhag: Computing Quasi-Nash Equilibria via Gradient-Response Schemes. 881-893
- Cevahir Köprülü, Po-han Li, Tianyu Qiu, Ruihan Zhao, Tyler Westenbroek, David Fridovich-Keil, Sandeep Chinchali, Ufuk Topcu: Dense Dynamics-Aware Reward Synthesis: Integrating Prior Experience with Demonstrations. 894-906
- Tesshu Fujinami, Bruce D. Lee, Nikolai Matni, George J. Pappas: Domain Randomization is Sample Efficient for Linear Quadratic Control. 907-919
- Ali Baheri, Zahra Shahrooei, Chirayu Salgarkar: WAVE: Wasserstein Adaptive Value Estimation for Actor-Critic Reinforcement Learning. 920-931
- Kyungmin Kim, Davide Corsi, Andoni Rodríguez, JB Lanier, Benjami Parellada, Pierre Baldi, César Sánchez, Roy Fox: Realizable Continuous-Space Shields for Safe Reinforcement Learning. 932-945
- Srikar Gouru, Siddharth Lakkoju, Rohan Chandra: LiveNet: Robust, Minimally Invasive Multi-Robot Control for Safe and Live Navigation in Constrained Environments. 946-958
- Dvij Kalaria, Chinmay Maheshwari, Shankar Sastry: α-RACER: Real-Time Algorithm for Game-Theoretic Motion Planning and Control in Autonomous Racing using Near-Potential Function. 959-972
- Damola Ajeyemi, Saber Jafarpour, Emiliano Dall'Anese: Neural Network-assisted Interval Reachability for Systems with Control Barrier Function-Based Safe Controllers. 973-986
- Marcin Paluch, Florian Bolli, Pehuen Moure, Xiang Deng, Tobi Delbruck: A-NC: Adaptive Neural Control with implicit online inference of privileged parameters. 987-998
- Nikolaos Bousias, Stefanos Pertigkiozoglou, Kostas Daniilidis, George J. Pappas: Symmetries-enhanced Multi-Agent Reinforcement Learning. 999-1011
- Ahmad Al-Tawaha, Javad Lavaei, Ming Jin: A Dynamic Penalization Framework for Online Rank-1 Semidefinite Programming Relaxations. 1012-1024
- Sunbochen Tang, Haoyuan Sun, Navid Azizan: Meta-Learning for Adaptive Control with Automated Mirror Descent. 1025-1037
- Petar Bevanda, Nicolas Hoischen, Tobias Wittmann, Jan Brüdigam, Sandra Hirche, Boris Houska: Kernel-Based Optimal Control: An Infinitesimal Generator Approach. 1038-1052
- Yangge Li, Chenxi Ji, Jai Anchalia, Yixuan Jia, Benjamin C. Yang, Daniel Zhuang, Sayan Mitra: Lyapunov Perception Contracts for Operating Design Domains. 1053-1065
- Ruiyang Wang, Bowen He, Miroslav Pajic: Neuro-Symbolic Deadlock Resolution in Multi-Robot Systems. 1066-1077
- Saptarshi Mandal, Xiaojun Lin, Rayadurgam Srikant: A Theoretical Analysis of Soft-Label vs Hard-Label Training in Neural Networks. 1078-1089
- Alan Williams, Christopher Leon, Alexander Scheinker: QP Based Constrained Optimization for Reliable PINN Training. 1090-1101
- Shuo Yang, Hongrui Zheng, Cristian-Ioan Vasile, George J. Pappas, Rahul Mangharam: STLGame: Signal Temporal Logic Games in Adversarial Multi-Agent Systems. 1102-1114
- Taehyeun Kim, Anouck Girard, Ilya V. Kolmanovsky: CIKAN: Constraint Informed Kolmogorov-Arnold Networks for Autonomous Spacecraft Rendezvous using Time Shift Governor. 1115-1126
- Zhexuan Zeng, Ruikun Zhou, Yiming Meng, Jun Liu: Data-driven optimal control of unknown nonlinear dynamical systems using the Koopman operator. 1127-1139
- Haonan He, Yuheng Qiu, Junyi Geng: Imperative MPC: An End-to-End Self-Supervised Learning with Differentiable MPC for UAV Attitude Control. 1140-1153
- Azra Begzadic, Nikhil Shinde, Sander Tonkens, Dylan Hirsch, Kaleb Ugalde, Michael C. Yip, Jorge Cortés, Sylvia L. Herbert: Back to Base: Towards Hands-Off Learning via Safe Resets with Reach-Avoid Safety Filters. 1154-1166
- Behrad Samari, Mahdieh Zaker, Abolfazl Lavaei: Abstraction-Based Control of Unknown Continuous-Space Models with Just Two Trajectories. 1167-1179
- Muhammad Qasim Elahi, Somtochukwu Oguchienti, Maheed H. Ahmed, Mahsa Ghasemi: Reinforcement Learning from Multi-level and Episodic Human Feedback. 1180-1193
- Noel Brindise, Vijeth Hebbar, Riya Shah, Cedric Langbort: "What are my options?": Explaining RL Agents with Diverse Near-Optimal Alternatives. 1194-1205
- Mishal Assif P. K, Yuliy Baryshnikov: Topological State Space Inference for Dynamical Systems. 1206-1216
- Keyan Miao, Liqun Zhao, Han Wang, Konstantinos Gatsis, Antonis Papachristodoulou: Opt-ODENet: Neural ODE Controller Design with Differentiable Optimization Layers for Safety and Stability. 1217-1229
- Peilun Li, Kaiyuan Tan, Thomas Beckers: NAPI-MPC: Neural Accelerated Physics-Informed MPC for Nonlinear PDE Systems. 1230-1242
- Shubh Maheshwari, Anwesh Mohanty, Yadi Cao, Swithin Razu, Andrew McCulloch, Rose Yu: BIGE: Biomechanics-informed GenAI for Exercise Science. 1243-1256
- Michael Lu, Jashanraj Gosain, Luna Sang, Mo Chen: Safe Learning in the Real World via Adaptive Shielding with Hamilton-Jacobi Reachability. 1257-1270
- Yahya Sattar, Yassir Jedra, Maryam Fazel, Sarah Dean: Finite Sample Identification of Partially Observed Bilinear Dynamical Systems. 1271-1285
- Uday Kiran Reddy Tadipatri, Benjamin D. Haeffele, Joshua Agterberg, Ingvar M. Ziemann, René Vidal: Nonconvex Linear System Identification with Minimal State Representation. 1286-1299
- Hsin-Jung Yang, Mahsa Khosravi, Benjamin Walt, Girish Krishnan, Soumik Sarkar: Zero-shot Sim-to-Real Transfer for Reinforcement Learning-based Visual Servoing of Soft Continuum Arms. 1300-1312
- Maryann Rui, Munther A. Dahleh: Finite Sample Analysis of Tensor Decomposition for Learning Mixtures of Linear Systems. 1313-1325
- Pedram Rabiee, Amirsaeid Safari: Safe Exploration in Reinforcement Learning: Training Backup Control Barrier Functions with Zero Training-Time Safety Violations. 1326-1337
- Evangelos Chatzipantazis, Nishanth Rao, Kostas Daniilidis: STRiDE: STate-space Riemannian Diffusion for Equivariant Planning. 1338-1352
- Hao-Lun Hsu, Miroslav Pajic: Safe Cooperative Multi-Agent Reinforcement Learning with Function Approximation. 1353-1364
- Hanna Krasowski, Eric Palanques-Tost, Calin Belta, Murat Arcak: Learning Biomolecular Models using Signal Temporal Logic. 1365-1377
- Shahbaz P. Qadri Syed, He Bai: Exploiting inter-agent coupling information for efficient reinforcement learning of cooperative LQR. 1378-1391
- Fengze Xie, Sizhe Wei, Yue Song, Yisong Yue, Lu Gan: Morphological-Symmetry-Equivariant Heterogeneous Graph Neural Network for Robotic Dynamics Learning. 1392-1405
- Minah Lee, Uday Kamal, Saibal Mukhopadhyay: Learning Collective Dynamics of Multi-Agent Systems using Event-based Vision. 1406-1418
- Seyed Yousef Soltanian, Wenlong Zhang: PACE: A Framework for Learning and Control in Linear Incomplete-Information Differential Games. 1419-1433
- Jörn Tebbe, Andreas Besginow, Markus Lange-Hegermann: Physics-informed Gaussian Processes as Linear Model Predictive Controller. 1434-1446
- Haoyu Li, Xiangru Zhong, Bin Hu, Huan Zhang: Neural Contraction Metrics with Formal Guarantees for Discrete-Time Nonlinear Dynamical Systems. 1447-1459
- Negar Monir, Mahdieh Sadat Sadabadi, Sadegh Soudjani: Robust Control of Uncertain Switched Affine Systems via Scenario Optimization. 1460-1471
- Zhiyu An, Zhibo Hou, Wan Du: Disentangling Uncertainties by Learning Compressed Data Representation. 1472-1483
- Gokul Puthumanaillam, Jae Hyuk Song, Nurzhan Yesmagambet, Shinkyu Park, Melkior Ornik: TAB-Fields: A Maximum Entropy Framework for Mission-Aware Adversarial Planning. 1484-1497
- Yichao Zhong, Chong Zhang, Tairan He, Guanya Shi: Bridging Adaptivity and Safety: Learning Agile Collision-Free Locomotion Across Varied Physics. 1498-1511
- Ilayda Canyakmaz, Iosif Sakos, Wayne Lin, Antonios Varvitsiotis, Georgios Piliouras: Learning and steering game dynamics towards desirable outcomes. 1512-1524
- Haokun Yu, Jingyuan Zhou, Kaidi Yang: Interaction-Aware Parameter Privacy-Preserving Data Sharing in Coupled Systems via Particle Filter Reinforcement Learning. 1525-1536
- Ibon Gracia, Luca Laurenti, Manuel Mazo Jr., Alessandro Abate, Morteza Lahijanian: Temporal Logic Control for Nonlinear Stochastic Systems Under Unknown Disturbances. 1537-1549
- Mahdi Nazeri, Thom Badings, Sadegh Soudjani, Alessandro Abate: Data-Driven Yet Formal Policy Synthesis for Stochastic Nonlinear Dynamical Systems. 1550-1564
- Mohamed Abou-Taleb, Maximilian Raff, Kathrin Flaßkamp, C. David Remy: Koopman Based Trajectory Optimization with Mixed Boundaries. 1565-1577
